AMD unveiled chips for accelerated artificial intelligence training

By: Bohdan Kaminskyi | 08.12.2023, 19:18


AMD has announced new accelerators and processors focused on large language models (LLMs).

Here's What We Know

The chipmaker has unveiled the Instinct MI300X accelerator and the Instinct MI300A processor for training and running LLMs. The company claims both new products surpass their predecessors in memory capacity and power efficiency.

According to AMD CEO Lisa Su, the MI300X is "the world's highest performing accelerator." It is comparable to Nvidia's H100 chip in LLM training, but outperforms it by 1.4x in inference on Meta's Llama 2 (70 billion parameters).

AMD also announced a partnership with Microsoft to deploy the MI300X in Azure cloud computing, while Meta said it plans to deploy MI300 processors in its data centres.

In addition, Su announced the MI300A APUs for data centres, which she said would help grow the market to $45 billion. The APUs combine CPUs and GPUs for faster processing. AMD claims the MI300A delivers high performance, fast model training and 30 times better power efficiency. It has 1.6 times the memory capacity of the H100 and implements unified memory.

The MI300A will be used in the El Capitan supercomputer built by Hewlett Packard Enterprise for Lawrence Livermore National Laboratory. It is one of the most powerful installations in the world, with a performance of more than 2 exaflops.

The company did not disclose pricing for the new products.

In addition, AMD announced the Ryzen 8040 series - chips designed to bring more AI features to mobile devices. According to the announcement, the 8040 series delivers 1.6 times more AI processing performance than the previous generation and features embedded neural processing units (NPUs).

The company expects Ryzen 8040-based products to be available in Q1 2024.

Source: The Verge