Congratulations to Inference and Storage on new benchmarks and results!


David Kanter

Sep 15, 2023, 2:40:22 PM
to public, community, Voting Representatives, Representative Contacts, board

Good morning everyone!


I want to offer a huge congratulations to the entire MLCommons community. On Monday we released the 3Q23 MLPerf™ Inference and Storage results. We have a lot to be excited about.


The excellent MLPerf Inference team added two new benchmarks this round. First, we have a new recommender benchmark using the brand-new Criteo 4TB multi-hot dataset with the DLRM-dcnv2 reference model, a much more modern recommendation network. Second, we added our first LLM inference benchmark: a text summarization task using the GPT-J 6B model, fine-tuned and tested on the CNN/DailyMail dataset.
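
For anyone curious what the summarization task looks like in practice, here is a minimal sketch using Hugging Face Transformers. To be clear, this is not the MLPerf reference implementation: the prompt format, generation length, and use of the base (non-fine-tuned) GPT-J checkpoint are all illustrative assumptions.

    # Illustrative sketch only -- not the MLPerf reference implementation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-j-6B"  # base model; MLPerf uses a fine-tuned checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    article = "..."  # a CNN/DailyMail article would go here
    prompt = f"Summarize the following article:\n{article}\nSummary:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens (the summary).
    summary = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(summary)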


I also want to extend a special congratulations to the MLPerf Storage team on their first submission round. This is a brand-new benchmark suite that focuses on storage workloads while simulating the compute side of ML training, enabling it to run on a wide variety of systems. This first round includes two workloads: 3DUNet for medical imaging and BERT-large for NLP.
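
If the "simulating the compute" idea is new to you, here is a conceptual sketch of the principle (my own illustration, not the actual DLIO or MLPerf Storage code): the benchmark performs real reads against the storage system under test, but replaces model execution with a timed sleep for an assumed per-batch compute time, so throughput measures whether storage can keep an emulated accelerator fed without needing any GPUs.

    # Conceptual sketch only -- not DLIO/MLPerf Storage code.
    import glob
    import time

    BATCH_COMPUTE_TIME_S = 0.05  # assumed per-batch compute time of the emulated accelerator

    def run_epoch(file_pattern: str) -> float:
        """Read every sample file, emulating compute with a sleep; return elapsed seconds."""
        start = time.perf_counter()
        for path in glob.glob(file_pattern):
            with open(path, "rb") as f:
                _ = f.read()                  # real I/O against the system under test
            time.sleep(BATCH_COMPUTE_TIME_S)  # emulated compute: no accelerator required
        return time.perf_counter() - start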


The press release is here: https://mlcommons.org/en/news/mlperf-inference-storage-q323/


You can help promote us via LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7107032001597112320

or Twitter: https://x.com/MLCommons/status/1701264483958620652?s=20

I’d especially like to congratulate several teams on their first submissions: Connect Tech, Nutanix, Oracle, and TTA in MLPerf Inference, and Argonne National Lab, DDN, Micron, Nutanix, and Weka for the first-ever submissions in MLPerf Storage.


I'd also like to call out a few individuals for their contributions to these rounds:

  • Oana Balmau, Curtis Anderson, Johnu George, and Huihuo Zheng for leading the Storage WG

  • Argonne National Lab and McGill for working on the BERT and 3DUNet Storage benchmarks

  • Johnu George and Wes Vaske for working on the benchmark wrapper and testing the DLIO integration

  • Curtis and Oana for documentation

  • Xinyuan Huang and Kongtao Chen for helping integrate logging


  • Mitchelle Rasquinha, Ramesh Chukka, and Miro Hodak for leading the Inference WG

  • Mitchelle Rasquinha and Pablo Gonzalez Mesa for leading the development of our new recommender benchmark (DLRM-dcnv2)

  • Itay Hubara, Ramesh Chukka, Thomas Atta-Fosu, Badhri Suresh Narayanan, Ashwin Nanjappa, Zhihan Jiang, Akhil Arunkumar, Yavuz Yetim, and Mitchelle Rasquinha for leading the development of our LLM benchmark (GPT-J 6B), along with the many other participants who contributed

  • Kelly Berschauer, Nathan Wasson, and David Tafur for driving the entire marketing and results process on their own!


Congratulations again to all the submitters and everyone else who worked on the submissions. Please take a bit of the day to relax and celebrate doing your part to make ML better for everyone, and as always, let me know if you have any questions!


Thanks,



David Kanter
Executive Director