Don’t miss the last chance to add your benchmarking and optimization challenges to the first MLPerf inference v3.1 community submission!


Grigori Fursin

Jun 21, 2023, 3:02:00 PM
to public

Dear all,

We are very excited to announce that the MLCommons Task Force on Automation and Reproducibility is organizing several benchmarking, optimization and reproducibility challenges for the upcoming MLPerf inference v3.1 submission round that should open around July 4th, 2023.


These challenges are based on community feedback following the successful validation of our MLCommons CM/CK workflow automation language, which runs MLPerf inference benchmarks out of the box, inside or outside Docker containers, through a unified interface and a user-friendly GUI. We are very glad that it has already helped several organizations and the community submit more than 50% of the performance and power results across diverse software and hardware in the previous round.


The goal of the new competition is to connect MLCommons organizations with external researchers, students and companies to test, benchmark and optimize diverse software/hardware stacks using the MLPerf inference benchmark and the MLCommons CM workflow automation language.


The most interesting implementations, optimizations and submissions will be presented at our upcoming HiPEAC’24 workshop on bridging the gap between academia and industry, at our CM-MLPerf out-of-the-box tutorial at IEEE IISWC’23, and at the Student Cluster Competition at ACM/IEEE SuperComputing’23.


Please fill in this questionnaire if you want to add your own challenges to MLPerf inference v3.1, highlight your interests, provide your own hardware and/or prizes for the community submissions, or even join our steering/organizational committee.


Feel free to join our public Discord server and our weekly conference calls on Thursdays at 10am PT (Google Meet link and notes).


Finally, please join us (virtually or in person) at the first ACM conference on reproducibility and replicability, which will be held in Santa Cruz next week. We are honored to give a keynote about bridging the growing gap between ML Systems research and production with the help of our MLCommons CM automation language and MLPerf benchmarks.


Looking forward to collaborating with you all,

Grigori Fursin and Arjun Suresh


  • MLCommons Task Force on Automation and Reproducibility

  • cTuning foundation

  • ACM Special Interest Group on Reproducibility



