The latest chip architecture for AI training will be called Rubin and will be released in 2026.
Only a few months after announcing its top-of-the-line Blackwell architecture chips, Nvidia introduced its latest chip for AI training on Sunday at the computer and technology trade show Computex in Taipei.
According to the company's CEO, Jensen Huang, the latest platform for AI chips will be called Rubin. It will feature new graphics processing units (GPUs) and a new Arm-based central processing unit called Vera.
Nvidia is planning to release the chips in 2026.
“Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and push everything to technology’s limits,” said Huang in a statement.
Blackwell architecture chips, announced this March, have not yet been released but will be delivered to customers later this year.
Currently, companies are equipping their data centers with Nvidia’s Hopper-architecture AI accelerators, such as the H100, which is estimated to cost between $30,000 and $40,000 per unit.
All the major tech companies, including Google, OpenAI, and Meta, are using Nvidia’s chips to train their AI models. According to Germany-based IoT Analytics, Nvidia currently holds a 92% market share in data center GPUs.
However, competition in the AI training market is heating up, with many companies offering alternatives.
On Monday, semiconductor company AMD also announced its latest accelerators for AI training and a roadmap, as well as the latest CPUs for laptops and desktop computers.
AMD’s latest Instinct MI325X accelerator will be available in Q4 2024. In 2025, the company is planning to release the Instinct MI350 series, which is said to bring up to a 35x increase in AI inference performance compared to the current AMD Instinct MI300 Series.
And in 2026, AMD is planning to release the Instinct MI400 series based on the AMD CDNA “Next” architecture.