Dear all,
The MLCommons Task Force on Automation and Reproducibility is organizing several benchmarking, optimization and reproducibility challenges for the upcoming MLPerf inference v3.1 submission round. These challenges are based on community feedback gathered after we successfully validated a new version of the MLCommons CM/CK workflow automation language, which can run MLPerf inference benchmarks out of the box, during the previous submission round.
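For those who have not tried CM yet, here is a minimal sketch of what "out of the box" can look like through the CM Python API. The cmind package and its access() entry point are real, but the repository alias and script tags below are illustrative assumptions; please check the CM-MLPerf documentation for the officially supported commands.

  # Minimal sketch (assumptions: the "mlcommons@ck" repository alias and
  # the "run,mlperf,inference" script tags; flags may differ between CM versions).
  import cmind

  # Pull the MLCommons repository with reusable CM automation scripts.
  r = cmind.access({'action': 'pull',
                    'automation': 'repo',
                    'artifact': 'mlcommons@ck'})
  if r['return'] > 0:
      raise RuntimeError(r.get('error', 'CM pull failed'))

  # Ask CM to assemble and run an MLPerf inference benchmark;
  # dependencies such as models and datasets are resolved by CM scripts.
  r = cmind.access({'action': 'run',
                    'automation': 'script',
                    'tags': 'run,mlperf,inference',
                    'quiet': True})
  if r['return'] > 0:
      raise RuntimeError(r.get('error', 'CM run failed'))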
The goal is to connect MLCommons organizations with external researchers, students and companies to automatically test, benchmark and optimize diverse software/hardware stacks using the MLPerf inference benchmark and the MLCommons CM workflow automation language. The most interesting implementations, optimizations and submissions from the community will be presented at our upcoming HiPEAC’24 workshop on bridging the gap between academia and industry, at the CM-MLPerf tutorial at IEEE IISWC’23, and at the Student Cluster Competition at ACM/IEEE SuperComputing’23.
Please fill in this questionnaire if you want to add your own challenges to MLPerf inference v3.1, highlight your interests, provide your own hardware and/or prizes for the community submissions, or join our steering/organizational committee.
Feel free to join our public Discord server and our weekly conference calls on Thursdays at 10am PT (Google Meet link and notes).
Finally, please join us (virtually or in person) at the 1st ACM Conference on Reproducibility and Replicability, which will be held in Santa Cruz next week. We are honored to give a keynote there about bridging the growing gap between ML systems research and production with the help of the MLCommons CM automation language.
Looking forward to collaborating with you all,
Grigori Fursin and Arjun Suresh
MLCommons Task Force on Automation and Reproducibility
cTuning foundation
ACM Special Interest Group on Reproducibility