Nvidia Leads Again in MLPerf Benchmarks: Generative AI at the Top


13:35, 04.04.2025

Nvidia's graphics processors once again delivered the best results in MLPerf, one of the most prestigious benchmarks for measuring AI chip performance. This round focused on generative AI applications, such as large language models (LLMs), particularly Meta's Llama family. Systems built by companies such as SuperMicro, Hewlett Packard Enterprise, and Lenovo dominated the results almost entirely, taking first place in most categories.

New Test Scenarios for Generative AI

MLPerf updated its suite with several new scenarios, particularly for working with large language models. One new test measures token-generation speed on Llama 3.1, while another evaluates the response time of chatbots based on Llama 2. A test was also added to measure the performance of graph neural networks, which are increasingly used in generative AI, for example in protein-folding prediction.
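The headline metric in MLPerf's LLM scenarios is throughput in tokens per second. As a rough illustration of how such a figure is computed (this is not the actual MLPerf harness; `dummy_generate` is a hypothetical stand-in for a real model), the measurement can be sketched as:

```python
import time

def measure_throughput(generate_fn, prompt, max_tokens):
    """Time a generation call and report tokens per second,
    the headline metric in LLM inference benchmarks."""
    start = time.perf_counter()
    tokens = generate_fn(prompt, max_tokens)  # returns a list of tokens
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

# Stand-in "model": emits one dummy token per millisecond.
def dummy_generate(prompt, max_tokens):
    out = []
    for _ in range(max_tokens):
        time.sleep(0.001)  # simulate per-token decode latency
        out.append("tok")
    return out

tps = measure_throughput(dummy_generate, "Hello", 50)
print(f"{tps:.0f} tokens/s")
```

Real MLPerf submissions also report latency-bound metrics such as time to first token, which is why the chatbot response-time scenario is measured separately from raw throughput.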

Although competition was limited, AMD also posted strong results in two Llama 2 tests. Overall, however, Nvidia demonstrated the greatest performance, especially in the closed division, where configuration requirements are the strictest.
