Executive Summary
Most investment professionals do not spend time thinking about who owns the research they read. It arrives via terminal, email, or portal. It is read, acted on, and filed. The licensing framework that governs its use is, for most analysts, invisible.
AI has made this invisibility dangerous. When an analyst pastes a broker note into a general-purpose AI tool, or when a research platform retrieves content from an uncontrolled document store to answer a question, the licensing question does not disappear. It becomes acute.
This paper is not legal advice. It is a practical guide for investment professionals who want to understand the rights landscape they are operating in — and who want to avoid creating personal or institutional exposure through well-intentioned AI use.
The conclusion is straightforward: licensed content requires a licensed AI environment. An AI tool that does not check entitlements before retrieving and reasoning over research content is not a safe research tool, regardless of how capable it is.
1. The Licensing Landscape Most Analysts Do Not Think About
Investment research is, overwhelmingly, licensed content. Broker research distributed to buy-side institutions is typically governed by bilateral agreements between the broker and the asset manager. These agreements define who may use the content, for what purpose, and under what conditions.
The specific terms vary considerably across brokers and institutions. But the common structure includes:
- Named user or seat-level restrictions — research is licensed for use by identified investment professionals, not for firm-wide redistribution
- Purpose limitations — the research is licensed for investment decision support, not for training AI models, building derivative products, or commercial redistribution
- Secondary use restrictions — summarising, extracting from, or feeding broker research into downstream systems may require explicit permission
- Confidentiality obligations — some research is distributed under obligations that restrict sharing or disclosure outside the investment team
Most professionals are aware of these terms in a general sense. Few have read the specific agreements that govern the research they use every day.
The agreement governs the content, not the platform. Using a broker note in an unlicensed AI environment does not change the terms under which you hold that content.
2. What AI Changes — and Why It Matters
Before AI-assisted research tools, the compliance question around licensed content was relatively contained. A human analyst read a broker note, formed a view, and wrote a memo. The chain of use was short and largely implicit.
AI changes this in several ways that have material implications for rights and compliance:
Retrieval Is Now Systematic
AI research tools retrieve content programmatically, at scale, against queries. A single analyst interaction may cause a system to retrieve passages from dozens of broker notes, synthesise across them, and return a consolidated output. This is qualitatively different from a human reading a note. Whether it falls within the permitted use scope of bilateral broker agreements is a question most teams have not formally addressed.
Provenance Is Often Lost
General-purpose AI tools typically do not preserve the licensing status of the content they retrieve. An output that synthesises across licensed and unlicensed content without distinguishing between them creates a provenance problem — and potentially a rights problem — that the analyst cannot see or manage.
Model Training Risk
Most institutional-grade AI vendors are clear that they do not train on client data. But the risk is real enough that broker agreements increasingly include explicit provisions around AI use. Several major sell-side institutions have issued guidance — and in some cases formal restrictions — on how buy-side clients may use distributed research in AI contexts.
The Compliance Function Is Not Always Across It
In many asset managers, compliance oversight of AI tool adoption is running behind the pace of actual adoption. Analysts are making pragmatic decisions about which tools to use based on capability rather than rights alignment. This creates firm-level exposure that may not become visible until an incident occurs.
3. What Your Broker Agreements Actually Permit
The honest answer is: it depends on the specific agreement, and most professionals do not know what theirs say about AI use.
However, the common industry pattern — which is evolving rapidly — is moving in a consistent direction:
- Use of research in a governed, walled-garden AI environment with no model training and no secondary use is increasingly treated as falling within the permitted use framework of the firm's research management system (RMS)
- Use of research in general-purpose consumer or enterprise AI tools is more likely to fall outside permitted use — particularly where the tool uses client data for model improvement or where content retrieval is not entitlement-aware
- Firms that can demonstrate a structured, auditable AI environment are in a much stronger position when engaging brokers on AI use clarification
A structured, governed AI research platform is not just a compliance answer: it also makes conversations with your broker counterparties significantly easier.
4. What Compliance-Safe AI Research Looks Like
An AI research environment that manages licensed content appropriately has several characteristics that distinguish it from general-purpose tools:
Entitlement-Aware Retrieval
The system checks what a user is licensed and permitted to see before retrieving content, not after. Research from a broker the firm does not hold a current agreement with is not surfaced. Content restricted to named users is not available to the broader team.
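To make the ordering concrete, here is a minimal sketch of entitlement-aware retrieval. Everything in it is hypothetical (the types, field names, and keyword matching stand in for a real platform's entitlement model and search engine); the point is only that the entitlement check runs before retrieval, not after.

```python
from dataclasses import dataclass

# Hypothetical sketch: these types and checks illustrate entitlement-aware
# retrieval; they are not any vendor's actual API.

@dataclass(frozen=True)
class Document:
    doc_id: str
    broker: str
    text: str
    named_users: frozenset = frozenset()  # empty = licensed for the whole team

@dataclass(frozen=True)
class Entitlements:
    user: str
    licensed_brokers: frozenset  # brokers the firm holds a current agreement with

def entitled(doc: Document, ent: Entitlements) -> bool:
    """The entitlement check runs BEFORE retrieval, not after."""
    if doc.broker not in ent.licensed_brokers:
        return False  # no current agreement with this broker: never surfaced
    if doc.named_users and ent.user not in doc.named_users:
        return False  # named-user restriction: not available to the wider team
    return True

def retrieve(query: str, corpus: list, ent: Entitlements) -> list:
    # Filter by entitlement first; only then search. Keyword matching here
    # stands in for a real retrieval engine.
    return [d for d in corpus if entitled(d, ent) and query.lower() in d.text.lower()]
```

The design point is the order of operations: content the user is not entitled to never enters the candidate set, so it cannot leak into a synthesised answer.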
No Model Training on Licensed Content
The platform operates as a governed decision-support environment. Content retrieved to answer a query is not used to train or fine-tune models. This is typically the first question that broker agreements and internal compliance teams raise — and the answer should be unambiguous.
Claim-Level Provenance
Every output includes a traceable link to the source document and passage. This makes it possible to establish the rights status of any claim in an AI-generated research output — not just the general source, but the specific passage and the agreement under which it is held.
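A claim-level provenance record can be sketched as a small data structure. The field and agreement names below are illustrative assumptions, not a real schema; the point is that each claim carries enough metadata to resolve its rights status.

```python
from dataclasses import dataclass

# Hypothetical sketch of claim-level provenance: all names are illustrative.

@dataclass(frozen=True)
class SourcedClaim:
    claim: str         # the statement made in the AI output
    doc_id: str        # the source document
    passage: str       # the specific supporting passage
    agreement_id: str  # the broker agreement the document is held under

def rights_status(c: SourcedClaim, agreements: dict) -> str:
    """Resolve any claim in an output back to the agreement it is held under."""
    return agreements.get(c.agreement_id, "unknown agreement: escalate to compliance")
```

Because the agreement identifier travels with the claim rather than with the output as a whole, the rights status of each individual statement can be checked, even in an answer synthesised across many sources.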
Audit Logging
A compliant environment maintains durable logs of what content was retrieved, by whom, in response to what query, and what output was produced. This supports internal compliance review, broker audits, and regulatory scrutiny.
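The shape of such a log entry can be sketched as follows. This is an assumption about structure, not any platform's actual log format; note that it records a hash of the output rather than the output itself.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: the shape of one durable audit record per interaction.

def audit_record(user: str, query: str, retrieved_doc_ids: list, output_text: str) -> str:
    """One log entry per interaction: who, what query, what content, what output."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "retrieved": sorted(retrieved_doc_ids),
        # Hash rather than copy the output, so the log can later verify it
        # without becoming a second, ungoverned store of licensed content.
        "output_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)
```

Logs of this shape answer the questions a broker audit or regulatory review actually asks: who retrieved what, when, in response to which query, and what left the system.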
5. The Practical Answer for Investment Teams
For most investment teams, the practical answer is not to become experts in licensing law. It is to draw a bright line: licensed content goes into a licensed AI environment with known, documented, entitlement-aware controls. Everything else is a different category of tool for a different category of task.
Contours is built on this principle. Every source in the Contours Marketplace is licensed for AI use in investment research workflows. Entitlement checking is structural, not optional. Provenance is preserved at the claim level. No content is used for model training.
Conclusion
The research you use every day is licensed content. The terms under which you hold that content do not change when you introduce AI into your workflow. What changes is the scale and visibility of potential rights exposure.
The answer is not to avoid AI in research. It is to ensure that the AI environment you use for licensed content is built around the same entitlement and permission structures as the rest of your research infrastructure.
Contours is the product of KiteEdge Ltd. All sources in the Contours Marketplace are licensed for AI-assisted investment research. Visit kiteedge.co.uk to learn more.