Bittensor's TAO ripped higher on Thursday and topped out in early European trading on Friday after Nvidia CEO Jensen Huang highlighted the project on the All-In podcast, pushing the token from $243.5 to $310.6 before it cooled to $298.1 by press time.
The move put one of crypto's most closely watched AI-linked assets back in focus, not because Huang endorsed the token directly, but because he treated the underlying technical milestone as meaningful in a much bigger debate over open AI infrastructure.
The moment came when Chamath Palihapitiya pointed Huang to what he called a "pretty crazy technical accomplishment" inside "this crypto project called Bittensor." He described a recent training run on Subnet 3 in which participants used distributed excess compute to train a Llama model "totally distributed" while still managing the process statefully.
Nvidia CEO Responds To Bittensor's Accomplishment
Huang's immediate reaction was brief but memorable: "Our modern version of Folding@home."
That line mattered because it effectively reframed Bittensor's latest milestone in language traditional tech audiences already understand. Folding@home was one of the most recognizable examples of decentralized volunteer computing; Huang's comparison suggested he viewed Bittensor's experiment less as crypto theater and more as a legitimate expression of distributed coordination.
In the context of TAOβs price action, traders appeared to read that as external validation from one of the most influential executives in AI hardware.
Huang then widened the discussion beyond Bittensor itself and into the structure of the AI market. "I believe we fundamentally need models as first-class products, proprietary products, as well as models as open source. These two things are not A or B, it's A and B. There's no question about it," he said. He followed that with an even sharper distinction: "Models are a technology, not a product. Models are technology, not a service."
He spent the next stretch explaining why that dual-track model matters. For general-purpose consumer use, Huang said most people will continue to prefer turnkey services rather than fine-tuning their own systems. "I would really, really love not to go fine-tune my own. I would really love to keep using ChatGPT. I love to use Claude. I love to use Gemini. I love to use X," he said, arguing that this horizontal layer of AI products "is thriving" and "is going to be great."
On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment."
One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors… pic.twitter.com/BN0tWG66e8
— templar (@tplr_ai) March 19, 2026
But he drew a hard line when it came to industry-specific deployment, saying domain expertise "has to be captured in a way that they can control," and that "it can only come from open models."
That distinction goes to the heart of why TAO reacted so sharply. While Huang didn't make a token call or present Bittensor as the winner of open AI, he did endorse the coexistence of proprietary and open model ecosystems, while acknowledging that specialized industries will need more controllable, open foundations.
At press time, TAO traded at $297.0.
