
Meta’s $14B Bet on Scale AI Hits Turbulence

Meta’s $14.3 billion investment in Scale AI and its hiring of founder Alexandr Wang promised to accelerate the company’s push toward AI superintelligence. Early fractures have emerged: executive Ruben Mayer left after two months, TBD Labs is using competing vendors Surge and Mercor, and researchers question Scale’s data quality. The strain raises urgent questions about vendor risk and talent retention.

Published August 29, 2025 at 10:09 PM EDT in Artificial Intelligence (AI)

Meta’s $14.3B bet shows early cracks

In June, Meta announced a headline-making $14.3 billion investment in Scale AI and brought founder Alexandr Wang and several top executives into Meta Superintelligence Labs (MSL). The move was meant to accelerate Meta’s push to catch up with OpenAI and Google, but within months the relationship has shown signs of strain.

At least one senior hire, Ruben Mayer — Scale’s former SVP of GenAI Product and Operations — left Meta after roughly two months. Mayer was not placed on Meta’s core TBD Labs team, which is building the company’s most advanced models, and his departure is one of several early exits that raise questions about fit and integration.

More broadly, TBD Labs is contracting with multiple third-party data vendors, including Mercor and Surge, two of Scale’s largest competitors. Sources say some researchers at TBD Labs view Scale’s data as lower quality than what those rivals provide.

That criticism traces to a mismatch in business models: Scale built much of its volume on a crowdsourced, low-cost annotation workforce, while modern large models increasingly demand high-skill domain expertise (doctors, lawyers, scientists) to create and curate the nuanced training data that drives performance.

Scale has tried to pivot, notably with its Outlier platform aimed at higher-skill tasks, but competitors built on higher-paid specialists have grown quickly. The problem is compounded by the loss of major customers: OpenAI and Google reportedly stopped working with Scale shortly after Meta’s investment, and Scale later cut scores of labeling jobs.

Inside Meta, the deal has coincided with broader turbulence. The underwhelming Llama 4 launch led to an aggressive recruitment push that brought talent from OpenAI, DeepMind and elsewhere. Several new hires have since left, and longtime GenAI staffers report shrinking scope and growing bureaucracy.

For Meta, the practical effect is telling: even after a historic investment, it is not relying exclusively on Scale for data labeling and is asking multiple vendors to supply training material. That suggests Meta is hedging, and that Scale faces greater exposure as its roster of large customers narrows.

Why this matters

The episode illustrates three practical risks for organizations racing to build advanced AI:

  • Vendor concentration risk: a single large investment doesn’t remove the need to validate data quality against alternatives.
  • Talent integration friction: rapid hiring from startups and competitors can clash with enterprise processes and cause attrition.
  • Quality mismatch for advanced models: volume-focused annotation pipelines may not meet the needs of specialized, high-stakes model training.

For AI teams and procurement leaders the takeaway is straightforward: treat training data and annotator expertise as strategic assets. Run blind quality comparisons, instrument model training with provenance and error analysis, and build multi-vendor guardrails so a single commercial move doesn’t derail progress.
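
To make the blind quality comparison concrete, here is a minimal Python sketch. All of it is illustrative: the vendor names, labels, and the blind_compare helper are hypothetical, and a real pipeline would use a much larger, stratified gold set. The idea is simply to hide vendor identities behind anonymous batch codes until scoring is complete, so reviewers cannot favor a supplier.

```python
import random
from collections import defaultdict

# Hypothetical gold-label set: item IDs mapped to trusted labels,
# e.g. produced by in-house domain experts.
GOLD = {
    "item_1": "positive",
    "item_2": "negative",
    "item_3": "neutral",
}

# Annotations delivered by each vendor for the same items
# (vendor names and labels are illustrative).
vendor_batches = {
    "acme_data": {"item_1": "positive", "item_2": "negative", "item_3": "negative"},
    "labelworks": {"item_1": "positive", "item_2": "positive", "item_3": "neutral"},
}


def blind_compare(batches, gold, seed=0):
    """Score each vendor against the gold set, hiding vendor identities
    behind anonymous batch codes until scoring is complete."""
    rng = random.Random(seed)
    shuffled = rng.sample(sorted(batches), k=len(batches))
    codes = {vendor: f"batch_{i}" for i, vendor in enumerate(shuffled)}

    tallies = defaultdict(lambda: {"correct": 0, "total": 0})
    for vendor, labels in batches.items():
        code = codes[vendor]
        for item_id, label in labels.items():
            if item_id in gold:  # only score items with a trusted answer
                tallies[code]["total"] += 1
                tallies[code]["correct"] += int(label == gold[item_id])

    accuracy = {code: t["correct"] / t["total"] for code, t in tallies.items()}
    unblind_key = {code: vendor for vendor, code in codes.items()}
    return accuracy, unblind_key


scores, key = blind_compare(vendor_batches, GOLD)
print(scores)  # anonymized accuracy per batch code
print(key)     # reveal vendor identities only after scores are final
```

The same blinding discipline extends beyond label accuracy: free-form annotations can be routed to human reviewers under the same anonymous codes, and the unblinding key held back until all quality scores are recorded.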

Meta’s experiment will be closely watched across the industry. If the company stabilizes its labs and demonstrates measurable gains from its investment, this could become a template for deep vendor partnerships. If not, it will underline how costly mismatches between business-model assumptions and model requirements can be.

Either way, the episode is a reminder that building frontier AI is as much about managing people and suppliers as it is about compute. Organizations should plan for iterative proofs, diverse vendor pipelines, and clear metrics tying data sources to downstream model performance.


QuarkyByte helps organizations stress-test training-data strategies, quantify annotator expertise impacts on model performance, and design vendor-diversification roadmaps. Talk with our analysts to model vendor risk, measure labeling quality, and build procurement guardrails that prevent single-supplier exposure.