Congratulations to MLPerf Training on new benchmarks, results, and power measurement


David Kanter

Jun 12, 2024, 11:37:05 AM
to training, public, power

Good morning everyone,


One of my favorite things is celebrating our accomplishments. Today I am thrilled to congratulate the MLPerf Training and Power working groups on two new benchmarks, over 200 results, and the first-ever MLPerf Training power and energy measurement results.


Looking back at our performance over the last few years:


Since the last round, we saw performance gains of up to 1.8X on our Stable Diffusion benchmark, one of the staples of the generative AI landscape, and 1.13X on GPT3 pre-training.


The press release is here: https://mlcommons.org/2024/06/mlperf-training-v4-benchmark-results/

You can help promote us via LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7206674474660765696

or Twitter: https://x.com/MLCommons/status/1800906882858848663

I’d especially like to congratulate several teams on their first MLPerf Training submissions, including Juniper Networks, Oracle, and Tiny Corp, and to doubly acknowledge Sustainable Metal Cloud (SMC) for the impressive feat of both submitting to MLPerf Training for the first time and measuring power with MLPerf Power. They are joined by returning submitters AsusTek, Dell, Fujitsu, GigaComputing, Google, HPE, Intel (Habana Labs), Lenovo, NVIDIA, NVIDIA+CoreWeave, Quanta Cloud Technology, Red Hat + Supermicro, and Supermicro.


A few individuals I'd like to call out for their contributions to this round:

  • Hiwot Kassa and Ritika Borkar for leading the Training WG

  • Tejus Raghunathrajan, Sachin Idgunji, and Anirban Ghosh for leading the MLPerf Power WG and finishing what we began many years ago

  • Deepak Canchi for leading the development of our GNN benchmark, with many contributions from Baole Ai, Shuxian Hu, Yong Li, Wenting Shen, Li Su, Wenyuan Yu, Keith Achorn, Sasikanth Avancha, Radha Giduthuri, Kaixuan Liu, Hesham Mostafa, Kyle Kranen, Yunzhou Liu, and Shriya Palsamudram; you can read more details about this benchmark in a blog here: https://mlcommons.org/2024/06/gnn-for-mlperf-training-v4/

  • Itay Hubara for leading the development of the Llama 2 fine-tuning with LoRA benchmark, with major contributions from Regis Pierrad, Michal Futrega, Shriya Palsamudram, Hiwot Kassa, and Ritika Borkar; you can read more details about this benchmark in a blog here: https://mlcommons.org/2024/06/lora-fine-tuning-mlperf-training-v4-0/

  • David Tafur, Kelly Berschauer, Nathan Wasson, and Scott Wasson for driving the submission and marketing process


Again, congratulations to all the submitters and to everyone else who has worked on the benchmarks and submissions. Please take a bit of the day to relax and celebrate doing your part to make ML better for everyone, and, as always, let me know if you have any questions!


Thanks,

David Kanter
Executive Director