Tesla Pauses Dojo as Cortex and AI6 Redraw AI Roadmap

Tesla’s long‑teased Dojo supercomputer, meant to be the backbone for Full Self‑Driving training, has been wound down as Cortex and a new AI6 chip strategy take center stage. Years of teasers, custom D1 tiles, and partial deployments gave way to a pivot toward Nvidia GPUs and a unified in‑house chip plan (AI6) spanning in‑car inference and data‑center training, reshaping Tesla’s AI compute roadmap.

Published September 2, 2025 at 01:09 PM EDT in Artificial Intelligence (AI)

Tesla’s plan to turn the carmaker into an AI company hinged on Dojo, a custom training supercomputer meant to consume the flood of video from its fleet and push Full Self‑Driving (FSD) past the ‘almost’ stage. After years of teases, technical papers, and limited deployments, Dojo has been shut down as Tesla accelerates Cortex and a new AI6 chip direction.

Dojo was pitched as a bet: custom D1 tiles and a bespoke floating‑point approach could cut training cost and improve scale. Tesla demoed early cabinets, promised Exapod clusters, and suggested Dojo could even become a sellable service like AWS for AI training.

But reality shifted. Nvidia GPUs remained dominant, supply dynamics changed priorities, and Tesla found it easier and faster to scale using vast H100 fleets in a new Austin cluster called Cortex. Elon Musk signaled a convergence toward a single chip family — the AI5/AI6 line — intended to handle both inference in cars and large‑scale training.

A short timeline shows how Dojo rose and then faded:

  • 2019 – Musk teases Dojo at Autonomy Day as a supercomputer to train FSD using fleet video.
  • 2021 – Dojo becomes official at AI Day; Tesla reveals the D1 chip and a Dojo architecture.
  • 2022 – Tesla installs test cabinets, demos image generation, and sets Exapod targets.
  • 2023 – Tesla says Dojo production has started while also buying massive Nvidia capacity; Musk calls Dojo a long‑shot bet.
  • 2024 – Plans to expand Dojo continue, a Buffalo investment is announced, and D2 silicon news appears, but GPU shortages and shifting priorities complicate matters.
  • Early 2025 – Tesla deploys Cortex in Austin with roughly 50,000 H100 GPUs and credits it for FSD V13 improvements.
  • Mid‑2025 – Dojo team is disbanded, lead engineers depart, and Musk explains the move as convergence toward AI6 and a single chip strategy.

Why did Dojo stall? There are three practical pressures: raw economics (Nvidia’s ecosystem and volume advantages), timeline urgency for FSD training, and a desire to avoid fragmenting Tesla’s engineering resources across multiple chip families. When a company needs tens of thousands of GPUs quickly, buying existing hardware can trump building a custom stack.
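
To make the economics pressure concrete, here is a minimal buy‑versus‑build sketch in Python. All figures are hypothetical placeholders (Tesla has not published Dojo’s costs); it only illustrates how non‑recurring engineering (NRE) spend and per‑unit savings set a break‑even fleet size.

```python
# Minimal buy-vs-build sketch. All numbers are illustrative placeholders,
# not Tesla's actual costs.

def breakeven_units(nre: float, custom_unit: float, commodity_unit: float) -> float:
    """Accelerators needed before custom silicon's per-unit saving repays its NRE."""
    saving = commodity_unit - custom_unit
    if saving <= 0:
        return float("inf")  # custom hardware never pays back
    return nre / saving

# Hypothetical: $500M NRE for a custom chip program, $10k per custom
# accelerator vs. $30k per commodity GPU of comparable training throughput.
units = breakeven_units(nre=500e6, custom_unit=10_000, commodity_unit=30_000)
print(f"Break-even at ~{units:,.0f} accelerators")  # ~25,000
```

At that scale the payback case can look plausible on paper, which is why the deciding factors were timeline and ecosystem rather than unit cost alone.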

There are consequences beyond hardware. Closing the Dojo program cost Tesla key personnel and spawned startups founded by departing engineers. It also raises a question for automotive and robotics teams worldwide: should you invest in bespoke silicon that promises efficiency gains years out, or scale fast with commercial GPUs to train critical models today?

For FSD, the short answer is hybrid: Tesla used Nvidia GPUs in Cortex to accelerate deployments and improve V13, while keeping the door open for a converged in‑vehicle/in‑data‑center chip (AI6). That reflects a pragmatic tradeoff: speed to scale now, coupled with longer‑term bets on efficiency and integration.

What should other organizations take from this? Consider these steps:

  • Assess whether custom silicon yields measurable ROI within your product timeline.
  • Model mixed stacks: commodity GPUs for immediate scale, custom accelerators for inference or niche gains (a minimal cost sketch follows this list).
  • Design exit and reuse plans so hardware investments aren’t stranded if strategy pivots.
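
As a starting point for that second step, a hedged sketch of a mixed‑stack cost model, again with made‑up numbers standing in for real vendor quotes:

```python
# Hypothetical mixed-stack cost model: commodity GPUs carry training now,
# custom accelerators handle inference. All figures are placeholders.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    units: int
    capex_per_unit: float       # purchase cost, USD
    opex_per_unit_year: float   # power + hosting, USD per year

def total_cost(tiers: list[Tier], years: int) -> float:
    """Capex plus cumulative opex over the planning horizon."""
    return sum(t.units * (t.capex_per_unit + t.opex_per_unit_year * years)
               for t in tiers)

stack = [
    Tier("commodity training GPUs", units=50_000,
         capex_per_unit=30_000, opex_per_unit_year=6_000),
    Tier("custom inference accelerators", units=100_000,
         capex_per_unit=2_000, opex_per_unit_year=300),
]
print(f"3-year cost: ${total_cost(stack, years=3) / 1e9:.2f}B")  # ~$2.69B
```

Varying the tier mix, utilization assumptions, and a delay penalty for custom silicon against your product timeline is exactly the scenario planning the next section argues for.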

Dojo’s story is a reminder that AI infrastructure is both technological and organizational. Building chips is as much about supply chains, developer ecosystems, and timing as pure performance. Tesla’s pivot to Cortex and AI6 shows how companies balance near‑term delivery with long‑term aspirations.

For leaders deciding their compute path, scenario planning and rigorous cost‑benefit modeling matter more than ever. QuarkyByte approaches these choices by mapping technical tradeoffs to business outcomes, helping teams choose a path that delivers model performance without risking program velocity.

QuarkyByte can map the tradeoffs between custom silicon and commodity GPU stacks for automotive AI, quantify cost and time-to-model improvements, and design phased migration plans that protect FSD progress while minimizing stranded investment. Talk to our team to stress-test your compute strategy and plan a resilient AI roadmap.