"ASIC Alliance attempts to phase out Nvidia dominance - Report suggests rapid advancements by companies aiming to lessen reliance on industry titan"
Get ready for the great AI chip breakup! Top hyperscalers like Microsoft, Google, and Amazon's AWS are increasingly moving towards in-house ASIC development, opening a new front against Nvidia's dominance. ASICs, which promise better power efficiency and performance-per-dollar for targeted workloads, are projected to grow at a blistering 50% compound annual growth rate.
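To put that projection in perspective, a 50% CAGR compounds quickly. A minimal sketch of the arithmetic (the base value of 1.0 is an illustrative placeholder, not a figure from the report):

```python
def project_cagr(base, rate, years):
    """Return the projected value after compounding `rate` annually for `years`."""
    return base * (1 + rate) ** years

# At a 50% CAGR the market multiplies by 1.5 each year,
# so it grows roughly 7.6x over five years: 1.5^5 = 7.59375.
multiple = project_cagr(1.0, 0.50, 5)
```

In other words, if the projection holds, the ASIC market would more than septuple in five years.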
Nvidia's high-priced server hardware is forcing enterprise-scale clients to seek cheaper alternatives: each Blackwell GPU, such as the B200, reportedly costs around $70,000 to $80,000, and full-server GB200 configurations can run up to $3 million!
Despite the shift, top Nvidia clients continue to order hardware from the tech giant while ramping up orders of ASIC hardware and reserving production space at TSMC, the world's largest contract chip fabricator. This move towards hardware independence is an ambitious long-term goal, considering Nvidia's hardware and software dominance, particularly its insular CUDA workflow.
Nvidia's strategic move with NVLink Fusion, a technology that allows customers and ASIC partners to use the company's NVLink technology in their own products, has resulted in numerous partnerships with major ASIC players like MediaTek, Marvell, Alchip, Astera Labs, Synopsys, Cadence, Fujitsu, and Qualcomm. These partnerships allow Nvidia hardware and third-party ASIC-based servers to interoperate within the same systems.
TSMC, as the fabricator for both Nvidia's hardware and new ASIC designs from the top hyperscalers, stands to continue winning big amidst the protracted divorce. TSMC's chairman C.C. Wei poetically summarized the situation, stating, "It doesn't matter who wins - both Nvidia and the ASIC players are our customers. They're all manufactured at TSMC."
Major hyperscalers like Amazon and Google have already been moving towards custom, in-house silicon, with 50% of Amazon's new servers already running on its custom AWS Graviton Arm-based processor family. The protracted divorce from Nvidia is expected to further fracture the top end of the compute market as hyperscalers continue to diversify their AI infrastructure.