Nvidia’s Blackwell GPUs outperform the previous generation by more than a factor of two, according to recent benchmark results published by MLCommons. The findings lend credibility to Nvidia’s initial claims about Blackwell’s capabilities and confirm the architecture’s advantage in training large AI language models.
The results show that each Blackwell GPU is more than twice as fast as its Hopper predecessor in demanding training workloads. The benchmark trained Llama 3.1 405B, a language model with 405 billion parameters, reflecting the scale of current AI models.
Although the test data was supplied by Nvidia, MLCommons stands behind its reliability. In one configuration, a system with 2,496 Blackwell chips completed the training task in just 27 minutes, while a Hopper-based system needed more than three times as long for the same task, underscoring Blackwell’s efficiency.
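As a rough illustration of how cluster-level timings translate into a per-chip comparison, the sketch below computes an implied per-chip speedup from training times and chip counts, assuming ideal (linear) scaling. The Blackwell figures are the ones cited above; the Hopper chip count and run time are hypothetical placeholders, since the article does not give them.

```python
# Back-of-the-envelope per-chip comparison from cluster-level training times.
# Assumes ideal linear scaling, which real MLPerf runs only approximate.

def per_chip_speedup(new_chips, new_minutes, old_chips, old_minutes):
    """Ratio of per-chip throughput: (old chip-minutes) / (new chip-minutes)."""
    return (old_chips * old_minutes) / (new_chips * new_minutes)

# Blackwell figures from the article: 2,496 chips, 27 minutes.
blackwell_chips, blackwell_minutes = 2496, 27.0

# Hypothetical Hopper figures for illustration only (not from the article):
# a smaller cluster taking a bit more than three times as long.
hopper_chips, hopper_minutes = 1728, 85.0

speedup = per_chip_speedup(blackwell_chips, blackwell_minutes,
                           hopper_chips, hopper_minutes)
print(f"Implied per-chip speedup: {speedup:.1f}x")
```

With these placeholder Hopper numbers the script prints a per-chip speedup of roughly 2.2x, in line with the "more than twice as fast per GPU" claim; actual MLPerf submissions vary in cluster size, so the real ratio depends on the specific systems compared.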
CoreWeave representatives noted a growing industry trend toward splitting large compute clusters into smaller subsystems, each using fewer accelerators dedicated to specific tasks, which makes language model training more efficient when hardware is limited.
Blackwell’s gains thus mark not only a step up in raw computational capability but also fit the industry’s shift toward more efficient, resource-conscious AI model development.