Dear All,
(TL;DR: join the Discord channel to get free help with your MLPerf inference v3.0 submission, and check out our new GUIs for running different implementations of the benchmark.)
We are just a month away from the MLPerf inference benchmark v3.0 submission deadline, and it can be quite stressful to run the MLPerf inference benchmarks and make sure that your submission is valid.
That’s why the MLCommons Collective Knowledge Taskforce has developed two simple GUIs that generate a command line to detect or install all the components needed to run various implementations of the MLPerf inference benchmarks (datasets, models, frameworks, libraries, compilers, tools), obtain power measurements, validate your submission, and prepare the tar file that can be uploaded to the submission website and, optionally, to W&B dashboards (see the example command after the list below):
MLPerf inference submission preparation GUI - prepare, run and validate your MLPerf submission and show results on public or private W&B dashboards (optional)
MLPerf inference run GUI - run and optimize MLPerf inference benchmarks in different modes and scenarios
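For example, once you select your options in the submission preparation GUI, it emits a single CM command of roughly the following shape (an illustrative sketch; the exact tags and flags are generated by the GUI and may differ across CM versions):

  cm run script --tags=run,mlperf,inference,generate-run-cmds,_submission \
      --implementation=reference --model=retinanet --backend=onnxruntime \
      --device=cpu --scenario=Offline --quiet

Running that one command should detect or install the required dataset, model and framework, run the benchmark, and prepare the submission files.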
We’ve also prepared several tutorials to automate MLPerf submissions (ongoing work):
MLPerf inference submission (currently supports all reference models and implementations with ONNX, PyTorch, TF and TVM targeting CPU and CUDA, the Nvidia implementation, and the TFLite C++ and ONNX C++ implementations)
Running a short MLPerf inference benchmark at the Student Cluster Competition’22 at SuperComputing (11 out of 13 teams managed to run the reference implementation of the MLPerf inference benchmark with RetinaNet, Open Images and ONNX Runtime from scratch within 30 minutes using a short dataset).
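As an illustration, the SCC’22 exercise boils down to a short performance-only run along these lines (again a sketch: the exact tags, flags and query count come from the tutorial and may vary across CM versions):

  cm run script --tags=run,mlperf,inference,generate-run-cmds,_performance-only \
      --implementation=reference --model=retinanet --backend=onnxruntime \
      --device=cpu --scenario=Offline --test_query_count=10 --quiet

The small test_query_count is what keeps the run short enough to fit in the 30-minute window.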
Based on your feedback, our taskforce is resuming its weekly conf-calls to help you add your own implementation to the above GUIs while reusing all the other automations to prepare, run, optimize and validate your submission:
Each Thursday at 10:30am PST starting from Feb 2 [Google Meet link]
Each Tuesday at 9:00am London time (GMT) starting from Feb 7 [Google Meet link]
If you would like free help to add your implementations to the MLPerf workflow and prepare your submission, or if you have any questions, suggestions or comments, please feel free to reach out to or join our public taskforce via this Discord server, via the CK mailing list, or by contacting Arjun Suresh & Grigori Fursin directly. We can easily organize one- to two-hour hackathons in February to help you understand CM automations and add your implementation to our MLPerf automation workflow.
Our further community developments include automating the design space exploration of diverse ML models, frameworks, libraries, SDKs and platforms, together with the collection, visualization, comparison and reproducibility of all experiments in public and private dashboards. We look forward to working with all of you to enable modular ML systems and to automate their benchmarking, design space exploration and optimization!