

Taha Abbasi has spent considerable time testing Tesla’s Full Self-Driving system across different versions and conditions, witnessing firsthand how rapidly the software improves between updates. Behind that rapid improvement is a piece of infrastructure most Tesla owners never think about: Dojo, Tesla’s purpose-built AI training supercomputer.
While the automotive press focuses on mile counts and intervention rates, the real story of FSD’s acceleration is happening in data centers. Dojo represents Tesla’s bet that vertically integrated AI training — from custom silicon to custom software — will give it an insurmountable advantage in autonomous driving.
Every Tesla on the road is a data collection device. Cameras capture billions of frames, and the fleet generates petabytes of real-world driving data daily. But raw data is useless without the compute power to process it into training datasets and run neural network training cycles.
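To make the scale concrete, here is a back-of-envelope sketch of the raw video firehose a camera-equipped fleet could generate. Every figure below is an illustrative assumption (fleet size, frame rate, drive time, frame size), not a Tesla-published number; in practice only triggered clips are uploaded, but the raw volume shows why dedicated training compute matters.

```python
# Back-of-envelope: raw video generated by a large camera-equipped fleet.
# All constants are illustrative assumptions, not Tesla's actual figures.

FLEET_SIZE = 5_000_000        # assumed vehicles contributing data
CAMERAS_PER_CAR = 8           # cameras per vehicle (assumed)
FPS = 36                      # assumed capture rate per camera
DRIVE_HOURS_PER_DAY = 1.0     # assumed average daily driving time
BYTES_PER_FRAME = 100_000     # assumed compressed frame size (~100 KB)

frames_per_day = FLEET_SIZE * CAMERAS_PER_CAR * FPS * DRIVE_HOURS_PER_DAY * 3600
petabytes_per_day = frames_per_day * BYTES_PER_FRAME / 1e15

print(f"frames/day: {frames_per_day:.2e}")          # billions of frames
print(f"raw video:  {petabytes_per_day:.0f} PB/day (before filtering)")
```

Even with conservative inputs, the raw stream lands in the billions of frames and hundreds of petabytes per day, which is why curation and purpose-built training hardware are the bottleneck, not data collection.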
That’s Dojo’s job. Tesla designed custom chips (D1) specifically optimized for video training workloads. Unlike general-purpose GPUs from NVIDIA, Dojo’s architecture is tailored for the exact type of processing FSD neural networks require — spatial reasoning from camera feeds.
As Taha Abbasi has analyzed, this vertical integration mirrors Tesla’s approach across its business: control the full stack, optimize for your specific use case, and iterate faster than competitors who rely on off-the-shelf solutions.
The FSD improvement cycle works like this:

1. The fleet collects real-world driving data, including disengagements and edge cases.
2. Dojo processes that data into training datasets and trains updated neural networks.
3. Improved FSD versions ship to the fleet over the air.
4. The better software encounters and captures new scenarios, and the cycle repeats.
This flywheel effect means each improvement generates data that enables the next improvement. With over 8 billion FSD miles in the dataset, the flywheel is spinning faster than ever. Taha Abbasi has observed the tangible results — version 14’s improvements over version 13 were noticeable within minutes of driving.
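The flywheel described above can be expressed as a toy loop: each release grows the dataset, dataset size feeds model quality, and quality raises the value of the data the fleet collects next. The starting dataset, scaling exponent, and collection rate below are all invented for illustration, not measured values.

```python
# Toy model of the data flywheel. All numbers are illustrative assumptions.

dataset_miles = 1.0e9   # assumed starting dataset (miles)
quality = 1.0           # abstract model-quality score

for release in range(1, 6):
    quality = dataset_miles ** 0.1        # assumed diminishing-returns scaling
    new_miles = 0.5e9 * quality           # better models keep more useful miles
    dataset_miles += new_miles
    print(f"release {release}: {dataset_miles/1e9:.2f}B miles, quality {quality:.2f}")
```

The point of the sketch is the feedback shape, not the numbers: growth compounds because the output of one cycle is an input to the next.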
The obvious question: why not just buy more NVIDIA GPUs? Tesla does use NVIDIA hardware for some training workloads, but Dojo offers several strategic advantages:

- Workload-specific performance: the architecture is tuned for video-based spatial training rather than general-purpose compute.
- Lower marginal training cost as the system scales.
- Supply chain independence from GPU availability and pricing.
- Faster iteration, since the hardware and training software are co-designed.
Training speed directly translates to development velocity. If Tesla can train a new FSD model in days instead of weeks, it can iterate faster, test more hypotheses, and ship improvements more frequently. This is why FSD updates have accelerated — not just because the software team is larger, but because the training infrastructure is fundamentally more capable.
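The days-versus-weeks claim can be made concrete with simple arithmetic: the number of full train-and-evaluate cycles you can fit in a year. The run durations below are hypothetical examples, not measured Dojo or GPU-cluster figures.

```python
# How training-run duration compounds into iteration velocity.
# Run durations are hypothetical, chosen only to illustrate the ratio.

DAYS_PER_YEAR = 365

def runs_per_year(days_per_run: float) -> float:
    """Full train/evaluate cycles possible in one year."""
    return DAYS_PER_YEAR / days_per_run

slow = runs_per_year(21)   # assume ~3 weeks per run on a generic cluster
fast = runs_per_year(5)    # assume ~5 days per run on workload-tuned hardware

print(f"3-week runs: {slow:.0f} cycles/year")
print(f"5-day runs:  {fast:.0f} cycles/year ({fast / slow:.1f}x more hypotheses tested)")
```

A 4x difference in cycle count is not just 4x more models; it is 4x more chances to test a hypothesis, catch a regression, and ship, which is where development velocity actually comes from.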
For technology executives like Taha Abbasi, Dojo represents a masterclass in infrastructure investment. The upfront cost is enormous, but the compounding returns — faster iteration, lower marginal training costs, supply chain independence — create sustainable competitive advantage.
Tesla has signaled plans to significantly expand Dojo capacity through 2026 and beyond. As FSD moves toward unsupervised autonomy and the Cybercab enters production, training demands will only increase. The company is also exploring offering Dojo as a service to external customers — potentially creating an entirely new revenue stream.
Dojo isn’t just a supercomputer. It’s the engine that powers Tesla’s transformation from a car company to an AI company. And it’s running faster every day.
About the Author: Taha Abbasi is a technology executive, CTO, and applied frontier tech builder. Read more on Grokpedia | YouTube: The Brown Cowboy | tahaabbasi.com
Related videos from The Brown Cowboy

I Tested FSD V14 with Bike Racks... Here is the Truth

Tesla Robotaxi is Finally Here. (No Safety Driver)