
New Records for the Biggest and Smallest AI Computers

Updated: Nov 29, 2022

Nvidia H100 and Intel Sapphire Rapids Xeon debut on MLPerf training benchmarks.



The machine-learning consortium MLCommons released its latest set of benchmark results last week, offering a glimpse at the capabilities of chips new and old as they tackled both ends of the field: executing lightweight AI on the tiniest systems and training neural networks at server and supercomputer scales. The tests saw the debut of new chips from Intel and Nvidia, along with speed boosts from software improvements and predictions that software will continue to accelerate the new chips in the years after their debut.



Training Servers

Training AI has been a problem that’s driven billions of dollars in investment, and it seems to be paying off. “A few years ago we were talking about training these networks in days or weeks, now we’re talking about minutes,” says Dave Salvator, director of product marketing at Nvidia.


There are eight benchmarks in the MLPerf training suite, but here I'm showing results from just two of them, image classification and natural-language processing, because although they don't give a complete picture, they're illustrative of what's happening. Not every company puts up benchmark results every time; systems from Baidu, Google, Graphcore, and Qualcomm have set marks in the past, but none of these appeared on the most recent list. And some companies whose goal is to train the very biggest neural networks, such as Cerebras and SambaNova, have never participated.

[Charts: MLPerf training results for image classification and natural-language processing]
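MLPerf training scores are, in essence, time-to-train measurements: the clock runs until the model first reaches a target quality on held-out data, which is why software gains show up directly in the results. The sketch below illustrates that stopping rule in PyTorch. It is an illustration only, not the MLPerf harness; the tiny multilayer-perceptron model, the MNIST dataset, and the 95 percent accuracy target are stand-ins of mine, chosen so the example runs in minutes on an ordinary machine, whereas the real image-classification benchmark trains a ResNet on ImageNet to a specified accuracy.

```python
# Minimal time-to-train sketch in the spirit of MLPerf's stopping rule.
# NOT the MLPerf harness: the small MLP, MNIST, and the 95% target are
# illustrative stand-ins (the real benchmark trains a ResNet on ImageNet).
import time

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def time_to_train(target_accuracy: float = 0.95, max_epochs: int = 5):
    transform = transforms.ToTensor()
    train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
    test_set = datasets.MNIST("data", train=False, download=True, transform=transform)
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=1000)

    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    start = time.perf_counter()
    accuracy = 0.0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

        # The MLPerf-style part: evaluate after each epoch and stop the clock
        # the first time held-out accuracy reaches the target, rather than
        # training for a fixed number of steps.
        model.eval()
        correct = 0
        with torch.no_grad():
            for x, y in test_loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
        accuracy = correct / len(test_set)
        if accuracy >= target_accuracy:
            break

    return time.perf_counter() - start, epoch + 1, accuracy


if __name__ == "__main__":
    seconds, epochs, acc = time_to_train()
    print(f"Reached {acc:.3f} accuracy after {epochs} epoch(s) in {seconds:.1f} s")
```

Because the clock stops at a quality target rather than at a fixed step count, a faster kernel or a smarter learning-rate schedule improves the score just as surely as faster silicon does, which is why software improvements keep showing up in these results.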
