In the latest AI benchmark test, it's mostly Nvidia competing with Nvidia

Slide from Nvidia’s press deck: Lacking rich competition, some of Nvidia’s most significant results in the latest MLPerf were against itself, comparing its new GPU, the H100 “Hopper,” to its existing product, the A100. (Image: Nvidia)

Although chip giant Nvidia already casts a long shadow over the AI world, its ability to simply drive the competition out of the market may be growing, if the latest benchmark results are any indication.

MLCommons, the industry consortium that oversees a popular machine learning performance test, MLPerf, on Wednesday released the latest figures for the “training” of artificial neural networks. The bake-off showed the smallest number of competitors Nvidia has faced in three years, just one: processor giant Intel.

In previous rounds, including the most recent, in June, Nvidia faced at least two competitors, including Intel; Google, with its “Tensor Processing Unit,” or TPU, chip; chips from British startup Graphcore; and, in the past, Chinese telecommunications giant Huawei.

Also: Google and Nvidia share top marks in MLPerf AI training benchmark

Lacking competition, Nvidia swept all the top scores this time around, whereas in June the company shared the top spot with Google. Nvidia submitted systems using its A100 GPU, which has been on the market for several years, as well as its brand-new H100, known as the “Hopper” GPU in honor of computing pioneer Grace Hopper. The H100 scored highest in one of the eight benchmark tests, the one for so-called recommender systems, which are commonly used to suggest products to internet users.

Intel offered two systems using its Habana Gaudi2 chips, along with systems labeled “preview” that showed off its upcoming Xeon server processor, named “Sapphire Rapids.”

The Intel systems’ reported times were much slower than those of Nvidia’s parts.

Nvidia said in a press release, “H100 (aka Hopper) GPUs set records in all eight MLPerf enterprise workloads. They delivered up to 6.7x more performance than previous-generation GPUs when they were first submitted on MLPerf training. By the same comparison, today’s A100 GPUs are 2.5 times more powerful, thanks to software advances.”

During an official press conference, Nvidia’s Dave Salvator, Senior Product Manager for AI and Cloud, focused on Hopper’s performance improvements and on software tweaks to the A100. Salvator showed how Hopper speeds up performance over the A100, a test of Nvidia against Nvidia, in other words, and also how Hopper was able to outpace both Intel’s Gaudi2 and Sapphire Rapids chips.

Also: Graphcore Brings New Competition to Nvidia in Latest MLPerf AI Benchmarks

The absence of other vendors does not in itself signal a trend, given that in previous rounds of MLPerf individual vendors have decided to skip the competition and then return in a subsequent round.

Google did not respond to ZDNET’s request for comment on why it did not participate this time.

In an email, Graphcore told ZDNET it has decided that, for now, it has better places to spend its engineers’ time than the weeks or months it takes to prepare MLPerf submissions.

“The issue of diminishing returns has been raised,” Graphcore communications manager Iain McKenzie told ZDNET via email, “in the sense that there will be an inevitable jump to infinity, extra seconds shaved off, ever larger system configurations on offer.”

Graphcore “may participate in future MLPerf rounds, but at this time this does not reflect the areas of AI where we are seeing the most exciting progress,” McKenzie told ZDNET. The MLPerf tasks, he suggested, are simply “table stakes.”

Instead, he said, “we really want to focus our energies” on “unlocking new capabilities for AI practitioners.” To that end, “you can expect to see some exciting progress soon” from Graphcore, McKenzie said, “for example in model sparsification, as well as with GNNs,” or Graph Neural Networks.

Also: Nvidia CEO Jensen Huang announces availability of “Hopper” GPU and a cloud service for large AI language models

In addition to Nvidia’s chips dominating the competition, all of the top-scoring computer systems were built by Nvidia itself rather than by its partners. That is also a change from previous rounds of the benchmark, in which some vendors, such as Dell, took top marks for systems they built using Nvidia chips. This time around, no system vendor was able to beat Nvidia at using Nvidia’s own chips.

The MLPerf training benchmark reports the number of minutes it takes to adjust the “weights,” or neural parameters, until the computer program achieves a required minimum accuracy on a given task, a process referred to as “training” a neural network, where a shorter time is better.
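To make that metric concrete, here is a minimal, hypothetical sketch of the time-to-train idea in Python. It is not MLPerf’s actual harness: the train_one_epoch() and evaluate_accuracy() helpers are stand-ins for a real training loop and validation pass, and the accuracy target is just an illustrative threshold.

```python
import random
import time

# Illustrative quality threshold; real MLPerf tasks each define their own target.
TARGET_ACCURACY = 0.759


def train_one_epoch(state):
    # Stand-in for one pass of adjusting the "weights"; pretend accuracy improves.
    state["accuracy"] += random.uniform(0.05, 0.15)


def evaluate_accuracy(state):
    # Stand-in for a validation pass that measures current accuracy.
    return state["accuracy"]


def time_to_train(target=TARGET_ACCURACY, max_epochs=100):
    """Run training until the required accuracy is reached; report elapsed minutes."""
    state = {"accuracy": 0.0}
    start = time.perf_counter()
    for _ in range(max_epochs):
        train_one_epoch(state)
        if evaluate_accuracy(state) >= target:  # stop at the quality target
            break
    return (time.perf_counter() - start) / 60.0  # shorter is better


print(f"time to train: {time_to_train():.6f} minutes")
```

Because the clock runs only until the quality target is met, a faster system is simply one that reaches the same accuracy in fewer minutes, which is how the scores discussed here are ranked.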

Although top scores often make headlines – and are highlighted in the press by vendors – in reality, MLPerf results include a wide variety of systems and a wide range of scores, not just a single best score.

In a phone conversation, MLCommons executive director David Kanter told ZDNET not to just focus on high scores. According to Kanter, the value of the benchmark suite for companies evaluating the purchase of AI hardware is having a wide range of systems of different sizes with different types of performance.

The submissions, which number in the hundreds, range from machines with just a few mainstream microprocessors to machines that have thousands of host processors from AMD and thousands of Nvidia GPUs, the kind of systems that get the highest scores.

“When it comes to ML training and inference, there is a wide variety of needs for all different levels of performance,” Kanter told ZDNET, “and part of the goal is to provide performance metrics that can be used at all these different scales.”

“There’s as much value in information about some of the smaller systems as there is about larger-scale systems,” Kanter said. “All of these systems are equally relevant and important, but perhaps for different people.”

Also: AI performance benchmark test MLPerf continues to gain adherents

Regarding the lack of participation from Graphcore and Google this time around, Kanter said, “I would like to see more submissions,” adding, “I understand that for many companies they may have to choose how they invest their engineering resources.”

“I think you’ll see these things go up and down over time in different rounds” of the benchmark, Kanter said.

An interesting side effect of the scarcity of competition was that some top scores for certain training tasks showed not an improvement over the previous round but rather a regression.

For example, in the venerable ImageNet task, where a neural network is trained to assign a classifying label to each of millions of images, the best result this time around was the same one that placed third in June, a system built by Nvidia that took 19 seconds to train. That June result lagged behind Google’s TPU chip results, which came in at only 11.5 seconds and 14 seconds.

Asked about repeating a previous submission, Nvidia told ZDNET via email that its focus this time around was the H100 chip, not the A100. Nvidia also noted that there has been progress since the A100’s very first results in 2020. In that round of training tests, an 8-way Nvidia system took almost 40 minutes to train ResNet-50. In this week’s results, that time had been cut to less than thirty minutes.

Slide from Nvidia’s press deck: Nvidia also talked about its speed advantage over Intel’s Gaudi2 AI chips and the upcoming Sapphire Rapids Xeon processor. (Image: Nvidia)

Asked about the lack of competitive submissions and the viability of MLPerf, Nvidia’s Salvator told reporters, “That’s a good question,” adding, “We’re doing everything we can to encourage participation; industries thrive on participation.”

“We hope,” Salvator said, “that as some of the new solutions continue to be brought to market by others, they will want to show the benefits and quality of those solutions in an industry-standard benchmark, as opposed to offering their own one-off performance claims, which are very difficult to verify.”

A key part of MLPerf, Salvator said, is its rigorous publication of test configurations and code, which keeps results clear and consistent across hundreds of submissions from dozens of companies.

Along with the MLPerf training benchmark scores, Wednesday’s release by MLCommons also offered test results for HPC, meaning scientific computing and supercomputers. Those submissions included a mix of systems from Nvidia and its partners, as well as Fujitsu’s Fugaku supercomputer, which runs Fujitsu’s own chips.

Also: Neural Magic’s sparsity, Nvidia’s Hopper, and Alibaba’s network among firsts in latest MLPerf AI benchmarks

A third competition, called TinyML, measures the performance of low-power, embedded chips during inference, the part of machine learning where a trained neural network makes predictions.
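As a rough illustration of how such an inference benchmark is scored, here is a minimal, hypothetical Python sketch that times single-sample predictions and reports a median latency. It is not the actual MLPerf Tiny harness, and the predict() function is a stand-in for a trained network running on an embedded device.

```python
import statistics
import time


def predict(sample):
    # Stand-in for one forward pass of a trained neural network.
    return sum(sample) > 0


def median_latency_ms(samples, repeats=100):
    """Time individual predictions and report the median latency in milliseconds."""
    timings = []
    for _ in range(repeats):
        for sample in samples:
            start = time.perf_counter()
            predict(sample)  # one inference on a single input
            timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)


print(f"median latency: {median_latency_ms([[0.1, -0.2, 0.3]]):.6f} ms")
```

Lower latency, and lower energy per inference, is what the TinyML submissions compete on, rather than time to train.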

This competition, which Nvidia has not entered so far, features an interesting diversity of chips and submissions from vendors such as chipmakers Silicon Labs and Qualcomm, European tech giant STMicroelectronics, and startups OctoML, Syntiant and GreenWaves Technologies.

In one TinyML test, an image-recognition task using the CIFAR dataset and a ResNet neural network, GreenWaves, headquartered in Grenoble, France, scored highest, with the lowest latency to process the data and come up with a prediction. The company fielded its Gap9 AI accelerator in combination with a RISC-V processor.

In prepared remarks, GreenWaves said that Gap9 “delivers extraordinarily low power consumption on neural networks of medium complexity such as the MobileNet series in classification and detection tasks, but also on complex, mixed-precision recurrent neural networks such as our LSTM-based audio denoiser.”
