Intel, Google, Microsoft, Meta and other tech heavyweights are establishing a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of the components that link together AI accelerator chips in data centers.
Announced Thursday, the UALink Promoter Group — which also counts AMD (but not Arm), Hewlett Packard Enterprise, Broadcom and Cisco among its members — is proposing a new industry standard to connect the AI accelerator chips found inside a growing number of servers. Broadly defined, AI accelerators are chips ranging from GPUs to custom-designed solutions that speed up the training, fine-tuning and running of AI models.
“The industry needs an open standard that can be moved forward very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, AMD’s GM of data center solutions, told reporters in a briefing Wednesday. “The industry needs a standard that allows innovation to proceed at a rapid clip unfettered by any single company.”
Version one of the proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators — GPUs only — across a single computing “pod.” (The group defines a pod as one or several racks in a server.) UALink 1.0, based on “open standards” including AMD’s Infinity Fabric, will allow for direct loads and stores between the memory attached to AI accelerators, and generally boost speed while lowering data transfer latency compared to existing interconnect specs, according to the UALink Promoter Group.
The group says it will create a consortium, the UALink Consortium, in Q3 to oversee development of the UALink spec going forward. UALink 1.0 will be made available around the same time to companies that join the consortium, with a higher-bandwidth updated spec, UALink 1.1, set to arrive in Q4 2024.
The first UALink products will launch “in the next couple of years,” Norrod said.
Conspicuously absent from the list of the group’s members is Nvidia, which is by far the largest producer of AI accelerators, with an estimated 80% to 95% of the market. Nvidia declined to comment for this story. But it’s not tough to see why the chipmaker isn’t enthusiastically throwing its weight behind UALink.
For one, Nvidia offers its own proprietary interconnect tech for linking GPUs within a data center server. The company is probably none too keen to support a spec based on rival technologies.
Then there’s the fact that Nvidia is operating from a position of enormous strength and influence.
In Nvidia’s most recent fiscal quarter (Q1 2025), the company’s data center sales, which include sales of its AI chips, rose more than 400% from the year-ago quarter. If Nvidia continues on its current trajectory, it’s set to surpass Apple as the world’s second-most valuable firm sometime this year.
So, simply put, Nvidia doesn’t have to play ball if it doesn’t want to.
As for Amazon Web Services (AWS), the lone public cloud giant not contributing to UALink, it might be in a “wait and see” mode as it chips (no pun intended) away at its various in-house accelerator hardware efforts. It could also be that AWS, with a stranglehold on the cloud services market, doesn’t see much of a strategic point in opposing Nvidia, which supplies much of the GPUs it serves to customers.
AWS didn’t respond to TechCrunch’s request for comment.
Indeed, the biggest beneficiaries of UALink — besides AMD and Intel — appear to be Microsoft, Meta and Google, which combined have spent billions of dollars on Nvidia GPUs to power their clouds and train their ever-growing AI models. All are looking to wean themselves off a vendor they see as worrisomely dominant in the AI hardware ecosystem.
Google has custom chips for training and running AI models, TPUs and Axion. Amazon has several AI chip families under its belt. Microsoft last year jumped into the fray with Maia and Cobalt. And Meta is refining its own lineup of accelerators.
Meanwhile, Microsoft and its close collaborator, OpenAI, reportedly plan to spend at least $100 billion on a supercomputer for training AI models that will be outfitted with future versions of Cobalt and Maia chips. Those chips will need something to link them — and perhaps it’ll be UALink.