Intel has unveiled its latest chips for training large language models at its Vision 2024 conference. According to the company, the new hardware, the Gaudi 3 accelerator, offers superior speed and efficiency compared to that of market leader Nvidia.
More companies across critical sectors, such as finance, manufacturing, and healthcare, are seeking to capture the benefits of AI and adopt generative AI. However, according to Intel, only 10 percent of all businesses have successfully deployed AI so far.
The company aims to accelerate this transition with the Gaudi 3, which Intel says is four times faster than its predecessor, Gaudi 2, and delivers a significant leap in AI training performance for businesses.
“Innovation is advancing at an unprecedented pace, all enabled by silicon – and every company is quickly becoming an AI company. Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge,” said Intel CEO Pat Gelsinger in a statement.
The dominant player in hardware for training large language models is Nvidia, which currently holds around 80 percent of the market for AI graphics processing units.
Most enterprises training large language models use Nvidia’s H100, which costs around $40,000. Compared to the H100, Intel projects that Gaudi 3 will deliver 50% faster time-to-train on average across Llama 2 models while being more power efficient.
Gaudi 3 will be available to OEMs – including Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro – in the second quarter of 2024. Other businesses will be able to purchase it in the third quarter.
The company did not specify the price of the Gaudi 3 accelerator.