Good morning everyone!
I want to offer a huge congratulations to the entire MLCommons® community. Today we released the 4Q23 MLPerf™ Training and HPC results alongside a new website and new interactive browser for the MLPerf results (kudos to the fantastic MLC team on getting this launched!!).
Today marks the 5th anniversary of MLPerf Training, which is a great opportunity to reflect on what we have done together:
[Chart: Relative performance of best results, Closed Available On-Premise]
I want to highlight a few things. First, if we look at our most venerable benchmarks over the last five years, the community has achieved astounding performance gains of 32-49X - a tremendous testament to the ingenuity of everyone working on systems for machine learning. A big kudos to everyone in the community for developing these solutions and investing in benchmarking to share them with the whole world.
On a shorter timescale, we saw performance gains of up to 2.8X compared to just five months ago on our GPT3 benchmark, which is especially encouraging since it is one of our most challenging workloads in one of the most exciting areas of AI.
The press release is here: https://mlcommons.org/2023/11/mlperf-training-v3-1-hpc-v3-0-results
You can help promote us via LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7128069366897668099
or Twitter: https://twitter.com/MLCommons/status/1722299456203461097
I'd especially like to congratulate several teams on their first submissions: Ailiverse, Clemson University, cTuning, and Red Hat in MLPerf Training, and Clemson University and HPE in MLPerf HPC.
Additionally, I want to celebrate a number of organizations that submitted to our new benchmarks - Stable Diffusion and OpenFold. As you know, running MLPerf isn't easy, and doing so is a testament to a robust stack that encompasses ML frameworks, software, and hardware - sometimes spanning whole datacenters. Submitting to a benchmark in its debut round is the most challenging exercise of all, so I want to call out these organizations.
In MLPerf Training, Dell, Intel-Habana Labs, and NVIDIA all submitted results for Stable Diffusion. In MLPerf HPC, Clemson University, HPE + Lawrence Berkeley National Lab, NVIDIA, and the Texas Advanced Computing Center all submitted results for OpenFold.
A few individuals I'd like to call out for their contributions to these rounds:
Eric Han, Hiwot Kassa, and Ritika Borkar for leading the Training WG
Murali Emani and Andreas Prodromou for leading the HPC WG
Ahmad Kiswani for leading the development of our Stable Diffusion benchmark, and many participants in the taskforce
Arkadiusz Nowaczynski and Michal Marcinkiewicz for developing the OpenFold benchmark
Kelly Berschauer, Nathan Wasson, and David Tafur for driving the entire marketing effort and launching a brand-new website yesterday. It's been a fabulous journey and we appreciate your efforts!
Again, congratulations to all the submitters and everyone else who has worked on the submissions. Please take a bit of the day to relax and celebrate doing your part to make ML better for everyone, and as always, let me know if you have any questions!
Thanks,