I wrote about artificial intelligence and the legal profession back in August 2025. At the time, the question felt somewhat theoretical. Could AI ever truly replace lawyers? My answer was no, and I stand by that. But what felt like a philosophical musing six months ago became rather more concrete on 3 February 2026, when Anthropic, the company behind the AI chatbot Claude, unveiled a tool designed to automate a range of legal tasks.
The market reaction was immediate and dramatic. Shares in Thomson Reuters fell 18%. Relx dropped 14%. The London Stock Exchange Group lost 13%. Wolters Kluwer, Sage, Pearson and Experian all took significant hits. The FTSE 100 had reached a record high that very morning; by the afternoon, the sell-off had dragged it into the red.
So what exactly did Anthropic announce, and should lawyers, and their clients, be concerned?
What the tool actually does
Anthropic’s new tool is designed to handle tasks such as contract review, non-disclosure agreement triage, compliance workflows, legal briefings, and templated responses. These are genuine legal functions, and I won’t pretend otherwise. For large corporate legal departments processing hundreds of routine documents in contract reviews or due diligence exercises, a tool that can accelerate that work is genuinely useful.
But here is the detail that matters most, and it came from Anthropic itself: the tool does not provide legal advice. Their own guidance states that “AI-generated analysis should be reviewed by licensed attorneys (or qualified Jersey Advocates or solicitors in our part of the world) before being relied upon for legal decisions.”
That caveat is not a throwaway disclaimer. It is the entire point.
Why the market panicked, and why it overreacted
The share price falls were striking, but they tell us more about investor anxiety than about the reality of legal practice. The companies hit hardest (Thomson Reuters, Relx and Wolters Kluwer) are data and legal publishing businesses. The fear is that AI tools will either squeeze their margins or cut them out of the chain entirely. As Dan Coatsworth of AJ Bell put it, the concern is that tools like Anthropic’s will “reduce the margins these data-driven companies can achieve” or, in a worst-case scenario, remove them as providers altogether.
That is a legitimate concern for those businesses. But it is a concern about the legal information industry, not about the practice of law itself. There is a meaningful difference between the two.
What AI does well, and where it stops
I have no interest in pretending AI is not capable. It is, and increasingly so. The tasks Anthropic is targeting (contract review, compliance workflows, and document triage) are precisely the kinds of repetitive, high-volume work where AI can add real value. If you are a general counsel reviewing 200 contracts a quarter, a tool that flags the unusual clauses and lets you focus your attention on what matters is not a threat. It is a welcome development.
Where AI stops is where legal practice truly begins. The moment a client sits across from me and explains their situation, the commercial pressures they are under, the relationships at stake, the outcomes they need, that is not a data processing exercise. It requires judgement, experience, and an understanding of context that no algorithm can replicate.
Consider something as routine as advising on a contract dispute. The legal position might be perfectly clear. But the right advice depends on the client’s commercial relationship with the other party, their appetite for risk, the reputational consequences of different courses of action, and half a dozen other factors that exist nowhere in the contract itself. AI can tell you what the contract says. It cannot tell you what to do about it.

The jobs question
The Anthropic announcement has, inevitably, reignited concerns about job losses. Clifford Chance, one of the largest international law firms, announced late last year that it was reducing business services staff at its London office by 10%, citing AI as a factor. A recent survey found that 27% of UK workers worry their jobs could disappear within five years as a result of AI. London’s mayor, Sadiq Khan, has warned that the capital sits “at the sharpest edge of change” because it relies on white-collar professional services.
These concerns are not unfounded. Certain roles, particularly those focused on document processing, data entry, and routine administrative tasks, will change significantly. Some will disappear. That is the honest position, and I think the profession does itself no favours by pretending otherwise.
But the roles most at risk are not the ones that involve exercising legal judgement on behalf of clients. They are the roles that involve processing information at volume: precisely the tasks AI is designed for. The human lawyer who assesses, advises, negotiates, and advocates is not being replaced. If anything, they are being freed up to do more of the work that requires a lawyer.
What this means for clients
For clients of firms like ours, the practical impact is this: AI tools will, over time, make certain aspects of legal work faster and more cost-effective. That is a good thing. Nobody benefits from paying a solicitor to spend three hours reviewing a standard lease when a tool can flag the relevant clauses in minutes, with a solicitor then reviewing the output and advising accordingly.
But the advice itself, the judgement call about what to do, the strategy behind a negotiation, the understanding of what a particular outcome means for your business or your family, that remains a human function. Anthropic themselves acknowledge this. Their tool does not provide legal advice. It provides analysis that requires a lawyer to interpret.
Where we go from here
I wrote six months ago that the future of law would likely involve a collaboration between AI and human lawyers. The Anthropic announcement has not changed that view. If anything, it has confirmed it. The tools are getting better, the capabilities more impressive, and the efficiency gains more tangible. But the need for lawyers who can exercise judgement, understand their clients, and navigate the complexities of real-world legal problems has not diminished.
If anything, it has become more important. As AI handles the routine, the value of genuine legal expertise, the kind that comes from years of practice rather than from an algorithm, only increases.
The machines are at the table. But the advice still comes from the person sitting across from you.