Meta pauses work with AI data firm after security incident

Attack thought to be part of a broader supply chain campaign that could have affected thousands of organisations

Meta has suspended its collaboration with data contractor Mercor following a significant security breach that may have exposed sensitive information about how leading AI systems are trained.

Sources familiar with the matter told Wired that the pause is indefinite, with other major AI companies also reassessing their relationships with the firm as they investigate the scale of the incident.

Mercor is one of several specialist companies that supply curated training data to top AI developers, including OpenAI and Anthropic.

These datasets, often produced by large networks of human contractors, are considered highly confidential, as they can reveal critical insights into how advanced AI models are built and refined.

The breach has raised concerns that proprietary methods used by AI firms could have been compromised.

While it remains unclear whether any exposed data would offer a meaningful advantage to competitors, experts say such information is among the most closely guarded assets in the industry.

OpenAI confirmed it is reviewing the incident but said there is no indication that user data has been affected.

Mercor acknowledged the breach in an internal email to staff on 31 March, stating that it was part of a wider security incident affecting "thousands of organisations worldwide".

Contractors left in limbo

The fallout has also affected workers linked to Mercor's projects.

Contractors assigned to Meta-related work have reportedly been unable to log hours since the pause, leaving some without income.

Internal communications suggest the company is attempting to reassign affected workers, though details remain unclear.

Contractors were not initially informed of the reasons behind the suspension. One paused initiative, known internally as "Chordus", involved training AI systems to verify information by drawing on multiple online sources.

Supply chain attack suspected

Cybersecurity researchers believe the breach may be linked to compromised updates to LiteLLM, an open-source AI tool widely used by developers.

The attack is thought to form part of a broader supply chain campaign affecting thousands of organisations.

A hacking group known as "TeamPCP" is suspected of being behind the incident. The group has recently gained attention for a series of high-profile cyberattacks, some involving ransomware and data extortion.

Although another group using the name "Lapsus$" has claimed responsibility, analysts say there is little evidence connecting the breach to Lapsus$.

The incident underscores the growing vulnerability of the AI supply chain, in which multiple firms collaborate to build complex systems.

Companies like Mercor - and competitors such as Scale AI, Labelbox and Turing - play a crucial but often opaque role in this ecosystem. Their work is typically shrouded in secrecy, reflecting the competitive value of the data they produce.

"TeamPCP is definitely financially motivated," said Allan Liska, an analyst for the security firm Recorded Future.

"There might be some geopolitical stuff as well, but it's hard to determine what's real and what's bluster, especially with a group this new."

For now, the full extent of the breach – and its implications for the global AI race – remains under investigation.