‘Frankly we’re a bit lucky’. Thomson Reuters CTO on making money from GenAI

Image:
Thomson Reuters HQ in Toronto. Source: Can Pac Swire, CC BY-SA 2.0, Wikimedia

Kirsty Roth is CTO of Thomson Reuters, the content-driven technology powerhouse that provides information and tools for the professions, a position that puts her in the driving seat of the company’s AI rollout. She’s also COO, “managing everything from real estate to procurement - all those things nobody really wants,” which she says gives her a unique perspective.

At the end of 2025, Thomson Reuters’ annual report noted organic revenues up 9% year-on-year for the “Big 3” - meaning the legal professionals, corporates and tax and accounting professionals that provide 82% of its income. The company credits much of this growth to the impact of its AI solutions Westlaw, CoCounsel and UltraTax. Roth says it’s about being in the right place at the right time.

“I think we are one of the few companies that can show real revenue with our generative AI products,” she said, adding: “Frankly, we're a bit lucky, because AI works really well for legal and tax type problems.”

Image:
Kirsty Roth

Legal, accounting and tax are wordy, rules-based disciplines that require the processing of large volumes of contracts, records, legislation, court rulings and filings. Researching, translating, checking and drafting documents can be repetitive and time-consuming, making these segments ideal candidates for automation with large language models (LLMs).

But there are caveats. An LLM is not the same thing as a database, and these are complex, subtle, nuanced, contextual - not to mention highly regulated - verticals, which means that an off-the-shelf AI chatbot should never be trusted with the final draft - as hundreds of lawyers have discovered to their peril (see below).

Keeping AI grounded

Following its acquisition of UK startup Safe Sign, Thomson Reuters plans to launch its own large language model, building on proprietary data and domain expertise to deliver AI designed specifically for professional workflows. But the company also uses several general-purpose models, which allows it to hedge against any single model’s weaknesses or outages - and to pick the best model for a particular task.

LLMs are notorious for hallucinating, so they are not used “out of the box”. Accuracy can be improved by grounding the models using retrieval-augmented generation (RAG) and Thomson Reuters’ content - which it has in abundance - but there’s also the matter of compliance. GDPR and other data protection legislation restricts the processing of personal data to approved jurisdictions. How to guarantee that when you’re using ChatGPT or Claude over a web API? Resolving these issues takes a lot of effort.

“[Applying the guardrails] is as much work as building the product,” Roth told Computing. “You can get many wrong answers out of ChatGPT. Do you want your lawyers relying on it? Ours will come back and say, ‘The data is not good enough to give you an answer,’ versus just making something up.”

Accuracy is table stakes for the user base of lawyers and accountants, she went on. “These guys want it to be 99.99% accurate. Security is a huge piece; privacy is a huge piece.”

New competitors

All that said, Thomson Reuters and other specialist data providers face a significant challenge from the likes of Anthropic. The company’s shares dived this week on the news that Anthropic has launched plug-ins for Claude Cowork that could automate tasks across legal, sales, marketing and data analysis. Roth sees this as little more than market jitters.

“Anthropic’s new legal plug‑in validates the scale and attractiveness of this market,” she commented via email after the interview.

“Thomson Reuters is building from a position of deep strength: authoritative content, decades of legal expertise, and embedded workflows trusted by the world’s top professionals. Our AI is grounded in assets that competitors simply don’t have. That differentiation compounds over time, and it’s why we remain uniquely positioned to lead the future of professional AI.”

Where AI can add value

Identifying appropriate use cases for AI is key if it is to gain acceptance and really add value. In the professions these include streamlining administrative tasks, simplifying language, discovery and translation.

Thomson Reuters may have lucked out in that the company’s core markets align with AI’s strengths, making it an easy sell, but the company also uses AI internally, with a platform called Open Arena designed to cater for the needs of staff while mitigating the risk of shadow AI.

“We didn't want everyone in Thomson Reuters using ChatGPT and Gemini and not knowing where our data might end up,” Roth commented.

The AI rollout is ongoing, but there’s been a strong pull factor from certain areas of the business, she went on.

“The use cases that have been very successful for us so far are software engineering, customer support, marketing - so quite typical - but teams like finance have actually done really well, which is not necessarily so typical.”

How does she identify possible use cases?

“We look at the type of work that people do, how much time is spent on it. We've said, ‘these are the big teams where we think it will work, and we'll focus there first’. It's not that we won't get to some of the smaller teams, but we've gone after the big chunks first.”

Human in the loop

Safe Sign, the legal AI startup, was just one of five acquisitions made by Thomson Reuters in 2024, and M&A is another internal area where AI has proved useful.

“We've built something for M&A ourselves that really helps with due diligence. If you've got to do a lot of research, or strategy work, or get through a lot of documentation, then generative AI solutions are incredibly helpful.”

However, it is never used without human intervention, Roth added.

“Whether we give it to our customers or use it ourselves it’s still very much ‘human in the loop’; the last mile sits with you. We expect the person leading that diligence to read it, to check it, to make sure there aren't any mistakes, same as what we put out to a lawyer.”

Which will come as a relief to anyone who might fear AI taking their job. So far this hasn’t happened at Thomson Reuters and both morale and curiosity about the tech remain high, insists Roth, but that day could come.

“We get people to understand this opportunity to make their day more efficient, to make it more effective, to get to work on more interesting topics, and so far that's gone quite well. But as the industry shifts and people start to see roles being eliminated, I imagine at some point we'll have to think about how we ask some of those tougher questions.”

A lawyer’s perspective

We asked Jonathan Armstrong, a partner at Punter Southall Law who also serves on the New York State Bar Association’s AI Task Force and sits on the Law Society AI Group, for his take on the use of AI tools by lawyers.

The risk of careless use of AI in the legal context is clear, he said. “There are 878 cases currently where lawyers or litigants in person have produced cases to the courts which simply don’t exist.”

Many of these cases are likely to be the result of lawyers using general purpose chatbots without properly verifying the output.

See also: Debunking the AI legal revolution: What’s really happening?

“There’s a real difference between AI which has been tailored for legal purposes and just out-of-the-box GenAI,” he remarked. “One of the essential differences is training data.

“Companies like Thomson Reuters and LexisNexis have hundreds of years of legal data which they can train their models on and can use to draw down answers. By contrast, some free-to-air GenAI is trained on CommonCrawl data, so that could be people in Reddit chatrooms saying what they want the law to be or what they think it is rather than what it actually is.”

However, specialist tools are more expensive (a few hundred dollars per head per month), making workarounds a temptation, and even with good tools lawyers must still be diligent about data security and possible bias.

“Be transparent, check AI’s work as you would check a non-lawyer on your staff, and be careful with privilege and confidentiality,” Armstrong advised.