Tesla Disbands Dojo Team and Shifts to External AI Chips
Tesla is disbanding its Dojo supercomputer group and ending in-house chip development for full self-driving, reassigning staff to other compute projects. The decision follows the departure of a group of Dojo engineers to found DensityAI, a startup focused on AI chips for data centers. The shift underscores Tesla’s move to lean on Nvidia, AMD, and Samsung—signaled by a $16.5 billion Samsung deal—and Musk’s pivot to the new Cortex supercluster.
Tesla Cuts Dojo Team
Tesla is disbanding its Dojo supercomputer team as part of a major strategic pivot away from in-house chip development for its full self-driving ambitions. According to Bloomberg, Dojo’s lead, Peter Bannon, has left the company, and the remaining members are being reassigned to other compute and data center projects.
The move follows the departure of around 20 engineers who left to form DensityAI, a stealth AI startup building end-to-end chips, hardware, and software for robotics, AI agents, and automotive data centers. DensityAI was founded by former Dojo head Ganesh Venkataramanan and colleagues Bill Chang and Ben Floering.
Pivot to External Partners
With Dojo shelved, Tesla plans to deepen partnerships with Nvidia for training GPUs, AMD for compute, and Samsung for manufacturing. Just last month, the automaker signed a $16.5 billion deal with Samsung to produce its AI6 inference chips, designed to scale from Tesla’s Optimus robots and FSD systems to high-performance data center workloads.
Elon Musk has occasionally touted Dojo on earnings calls, but he now emphasizes Cortex, Tesla’s new AI training supercluster in Austin. The change reflects a shift from designing custom silicon to deploying best-in-class clusters to handle Tesla’s rapidly growing volume of video data for autonomy.
Industry Implications
Tesla’s Dojo breakup highlights a fundamental question for AI-driven industries: build custom hardware in-house or leverage external vendors? Much like choosing between crafting your own engine or buying a proven powertrain, teams must weigh performance gains against development costs and time to market.
Key takeaways include:
- Compute economics – balancing R&D spend against the scalability of cloud and GPU-based solutions.
- Vendor partnerships – leveraging Nvidia, AMD, and Samsung’s scale to accelerate AI projects.
- Talent and innovation – retaining top AI engineers versus startup spin-offs like DensityAI.
- Strategic focus – aligning compute strategy with core business goals and timelines.
As organizations recalibrate their AI compute roadmaps, QuarkyByte provides data-driven analysis to benchmark in-house versus outsourced architectures, optimize vendor portfolios, and forecast total cost of ownership. Our insights help automotive and tech leaders navigate strategic inflection points, accelerating decision-making while controlling risk.