The Research Process Is Broken – And AI Didn’t Fix It

Why faster answers are not the same as better decisions

Executive Summary

AI has made it cheaper and faster to produce a plausible answer to almost any research question. Most investment teams now have access to one or more AI-assisted tools for search, summarisation, and market monitoring. By many measures, the baseline of what an analyst can produce in an hour has improved significantly.

And yet investment performance has not moved in proportion. Investment Committee presentations still require substantial rework. Conviction behind positions is often shallow. Research workflows remain fragmented across licensed content, internal models, and AI-generated synthesis that no one is quite sure how to cite.

This paper argues that the research process was not broken because analysts were slow. It was broken because the tools available to them — including most AI tools — optimise for retrieval speed rather than decision quality. Making retrieval faster does not fix the underlying problem. It scales it.

The firms that close the gap between research speed and research quality will not do so by adding more AI. They will do so by introducing the layer currently missing from almost every investment workflow: structured evidence governance — a disciplined, auditable link between the question being asked, the sources authorised to answer it, and the conclusion that can defensibly be drawn.

1. What the Research Process Is Actually For

The purpose of investment research is not to produce documents. It is to reduce uncertainty sufficiently to commit capital with calibrated conviction.

That is a different standard from ‘generating a useful summary.’ A summary may be accurate, well-written, and even insightful — and still be useless for decision-making if the PM cannot establish what it was based on, whether the sources are authorised, or whether a different set of evidence would change the conclusion.

Decision-grade research has three properties that most tools do not provide:

  • Traceability – every claim can be linked to a specific licensed source and passage
  • Scope control – the system reasons only within an explicitly defined evidence universe
  • Sufficiency signalling – the system distinguishes between a supported conclusion and a guess

Most AI tools are optimised for the appearance of knowledge. Decision-grade research requires the documentation of knowledge – including its limits.

2. What Changed — and What Did Not

The AI tools now standard across investment teams — co-pilots, summarisation engines, semantic search — have materially reduced the cost of baseline knowledge work. Finding the relevant passage in a 40-page broker note, summarising overnight macro commentary, stitching together a cross-asset context brief: all of this is faster.

But these tools did not change the nature of the investment decision. A portfolio manager still needs to answer: What do I believe? Why? What would change my mind? And, increasingly, how do I explain this to my Investment Committee, my compliance team, or my client?

The gap between faster retrieval and better decisions is not a model problem. The models are capable. It is a product philosophy problem. Most AI tools are question-first: you ask, they answer. But a question-first system cannot enforce the constraints that investment decision-making requires – what sources are licensed for this use, what the evidence actually supports, where the answer should be ‘insufficient evidence to conclude.’

3. The Failure Modes That Have Emerged

Confident Answers Without Sufficient Evidence

Language models are rewarded for producing fluent, plausible responses. In an investment context, a fluent response without a traceable evidence base is worse than no answer — it creates the illusion of informed conviction where none exists. Analysts are then in the position of either trusting an output they cannot verify or doing the source work manually, which eliminates the efficiency gain.

Unlicensed Content in Research Workflows

The majority of decision-grade investment research is licensed, time-bounded, and usage-restricted under broker agreements, publisher terms, or internal compliance policies. Tools that retrieve from the open web or from broad internal document stores without entitlement checking create rights risk. In many cases, analysts using AI-generated summaries of broker research are in breach of their firm’s data agreements — without knowing it.

No Reproducibility

A research workflow that cannot be reproduced is not a research workflow. It is a conversation. If a PM cannot re-run the same query against the same evidence set six weeks later and get a comparable output, there is no audit trail and no ability to review decision quality. This matters for internal governance, for compliance, and increasingly for regulatory scrutiny.
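One way to make a research run reproducible is to pin the exact evidence set and record a content hash alongside the query, so the same run can be re-executed and compared later. The sketch below is illustrative only — the names and structure are assumptions, not a description of any particular product:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class ResearchRun:
    """An auditable record: the question, the pinned evidence, and a run ID."""
    query: str
    evidence_ids: tuple  # stable identifiers of the exact source passages used
    run_id: str

def record_run(query: str, evidence_ids: list) -> ResearchRun:
    # Sort IDs so the hash does not depend on retrieval order.
    pinned = tuple(sorted(evidence_ids))
    payload = json.dumps({"query": query, "evidence": pinned})
    run_id = hashlib.sha256(payload.encode()).hexdigest()[:16]
    return ResearchRun(query=query, evidence_ids=pinned, run_id=run_id)

# Re-running the same query against the same evidence set six weeks
# later yields the same run_id -- a simple basis for an audit trail.
a = record_run("EUR rates outlook", ["broker-note-123#p4", "macro-daily-456#p1"])
b = record_run("EUR rates outlook", ["macro-daily-456#p1", "broker-note-123#p4"])
assert a.run_id == b.run_id
```

The point is not the hashing itself but the discipline it enforces: a run is only reproducible if the evidence set is an explicit, recorded input rather than whatever the retrieval layer happened to surface that day.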

Fragmentation, Not Synthesis

Most teams are using multiple tools — a Bloomberg terminal, a chatbot, a proprietary summarisation layer, email-distributed broker research — with no governed connection between them. The result is that the analyst is the integration layer, synthesising outputs manually with no documented evidence chain.

4. What Decision-Grade Research Looks Like

The investment firms that will close this gap share a common design principle: they define what is allowed to be true before they reason. This means:

  • An explicit evidence universe — a defined set of licensed, entitlement-checked sources the system is allowed to draw on for a given query or workflow
  • Claim-level provenance — every output carries a traceable link to the source passage, not just a general attribution to a research provider
  • Explicit insufficiency — when the evidence does not support a conclusion, the system says so, rather than generating a plausible-sounding answer
  • Reproducible workflows — research runs can be re-executed against the same evidence set, producing comparable outputs that support retrospective review
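The four properties above can be made concrete in a few lines. The sketch below is a deliberately minimal illustration under assumed names (`Passage`, `min_support`) — a real system would enforce entitlements and sufficiency far more rigorously:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str
    text: str
    licensed: bool  # has this source cleared entitlement checks?

def answer_from_evidence(question: str, passages: list, min_support: int = 2) -> dict:
    """Reason only within the entitled evidence universe; refuse otherwise."""
    # Scope control: unlicensed content never enters the reasoning step.
    universe = [p for p in passages if p.licensed]
    if len(universe) < min_support:
        # Explicit insufficiency instead of a plausible-sounding guess.
        return {"conclusion": None,
                "status": "insufficient evidence to conclude",
                "provenance": []}
    # Claim-level provenance: the output carries its source passages.
    return {"conclusion": f"Supported view on: {question}",
            "status": "supported",
            "provenance": [p.source_id for p in universe]}
```

Note the asymmetry with a question-first chatbot: here the evidence universe and the sufficiency threshold are inputs to the reasoning step, not afterthoughts, so "insufficient evidence to conclude" is a first-class output rather than a failure.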

The question is not whether your team uses AI. It is whether the AI you use produces outputs you can defend — to an Investment Committee, a regulator, or yourself.

5. Why This Matters Now

The research landscape is changing in ways that make evidence governance more important, not less, over the next two to three years. The EU AI Act’s high-risk system provisions enter application in August 2026. The FCA’s principles-based approach to AI governance means that explainability and audit trails are becoming de facto requirements even in the absence of prescriptive rules. And as AI-generated research proliferates, the value of credentialed, licensed, attribution-clear content will rise relative to commoditised synthesis.

Firms that establish structured evidence governance now will be ahead of both the regulatory curve and the market curve. Firms that continue to treat AI as an advanced search engine will find that they have scaled their research process without improving it.

Conclusion

The research process is broken because it lacks a governed link between question, evidence, and conclusion. AI tools that optimise for retrieval speed have made this problem faster, not better.

Contours is built around the operating model that investment research requires: licensed sources, entitlement-aware retrieval, claim-level provenance, explicit sufficiency thresholds, and Investment Committee-ready outputs. It is not a faster way to get an answer. It is a disciplined way to earn conviction.

