Nvidia is, at this point, so far ahead in the AI hardware game that competing companies are doing the most unlikely of things: working together to keep up with, and perhaps even beat, the jolly green giant at its own game.
Google, Intel, Microsoft, Meta, AMD, Hewlett Packard Enterprise, Cisco and Broadcom have announced the formation of the catchily titled "Ultra Accelerator Link (UALink) Promoter Group", with the goal of creating a new interconnect standard for AI accelerator chips.
Nvidia's proprietary NVLink interconnect is used to connect GPUs across multiple chips for demanding AI tasks, and it's mighty fast, particularly when stacked together on the latest AI hardware. Nvidia Blackwell GPUs support up to 18 NVLink connections at 100 GB/s each, for a total bandwidth of 1.8 TB/s per GPU.
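If you want to sanity-check how those per-link speeds add up to the headline figure, here's a back-of-the-envelope sketch in Python. The numbers are the ones quoted above for Blackwell; the function and variable names are just illustrative, not anything from Nvidia's documentation.

```python
# Rough illustration of how per-link NVLink speeds add up to the quoted
# per-GPU total. Figures are the ones cited in this article for Blackwell.

def aggregate_bandwidth_tb_s(links: int, gb_s_per_link: float) -> float:
    """Total bandwidth in TB/s for a GPU with `links` connections,
    each running at `gb_s_per_link` GB/s."""
    return links * gb_s_per_link / 1000  # 1 TB/s = 1000 GB/s

# Blackwell, per the figures above: 18 NVLink connections at 100 GB/s each.
print(aggregate_bandwidth_tb_s(18, 100))  # 1.8 TB/s per GPU
```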
However, because it's proprietary tech, it creates a closed ecosystem: whichever interconnect is used dictates the rest of the hardware around it, and that lock-in is what this new group aims to address.
The UALink Promoter Group's goal is to create a new open standard that allows multiple companies to develop AI hardware using the shared connection, much like Compute Express Link (CXL), an open standard high-speed connection developed by Intel for linking CPUs and devices in data centers.
The first version of the new standard, UALink 1.0, is said to be based on technologies such as AMD's Infinity Fabric, and is expected to improve speed and reduce latency compared to existing methods.
And in the meantime, Nvidia's overall AI hardware dominance shows no signs of waning. With its previous-generation H100 GPUs selling in huge numbers and demand for Blackwell reportedly piling up before the AI chips were even released, any company attempting to disrupt Nvidia's position in the market on any level is going to have to work its little socks off.
Still, those are some seriously big names. While Nvidia is undoubtedly the AI hardware darling of the moment, Google, Microsoft, Intel and AMD are not exactly technology lightweights, and a joint effort to break at least some of Nvidia's grip on the market is worth paying attention to.