Congratulations - MLPerf Training v1.0 results are live!


David Kanter

Jun 30, 2021, 1:05:47 PM
to public, community, training
Hi Everyone,

Our latest training results went live this morning - you can find the results here: https://www.mlcommons.org/en/training-normal-10/


Training 1.0 retires two familiar benchmarks - the translation tasks using NMT and Transformer. These are replaced by a speech-to-text task using the RNN-T reference model, which operates on the LibriSpeech dataset, and a 3D tumor segmentation task using the 3D U-Net model applied to the KiTS19 dataset.
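
For anyone who wants to get hands-on with the new speech task's data, here is a quick illustrative snippet (not the reference implementation, which lives in the MLCommons training repo) showing how LibriSpeech can be pulled down via torchaudio:

import torchaudio

# Fetch the smallest LibriSpeech training split; the reference model
# trains on a larger subset, but this is enough to inspect the format.
dataset = torchaudio.datasets.LIBRISPEECH(
    root="./data", url="train-clean-100", download=True)

# Each item is (waveform, sample_rate, transcript, speaker_id,
# chapter_id, utterance_id).
waveform, sample_rate, transcript, *_ = dataset[0]
print(waveform.shape, sample_rate, transcript)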

Additionally, this is the first time we have used our Reference Convergence Points (RCP) methodology, which significantly simplified submission and reduced potential problems. We believe it will be even more valuable in future rounds.
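
For those curious how RCPs work in practice, here is a minimal sketch of an RCP-style check, assuming a simplified form of the methodology: the reference implementation is run at a few batch sizes to record how many epochs it needs to reach the target quality, and a submission that converges much faster than the (interpolated) reference gets flagged for review. The names, tolerance, and numbers below are illustrative, not the actual MLPerf logging package API:

from bisect import bisect_left

# Hypothetical reference points: (batch_size, mean epochs to converge),
# as measured from reference-model runs. Values made up for illustration.
REFERENCE_POINTS = [(256, 30.0), (1024, 34.0), (4096, 42.0)]

def interpolate_rcp(batch_size):
    """Linearly interpolate the reference epochs-to-converge at a batch size."""
    sizes = [bs for bs, _ in REFERENCE_POINTS]
    if batch_size <= sizes[0]:
        return REFERENCE_POINTS[0][1]
    if batch_size >= sizes[-1]:
        return REFERENCE_POINTS[-1][1]
    i = bisect_left(sizes, batch_size)
    (bs_lo, ep_lo), (bs_hi, ep_hi) = REFERENCE_POINTS[i - 1], REFERENCE_POINTS[i]
    frac = (batch_size - bs_lo) / (bs_hi - bs_lo)
    return ep_lo + frac * (ep_hi - ep_lo)

def rcp_check(batch_size, epochs_to_converge, tolerance=0.95):
    """Pass if the submission does not converge suspiciously faster than the reference."""
    return epochs_to_converge >= tolerance * interpolate_rcp(batch_size)

assert rcp_check(1024, 35.0)      # plausible convergence
assert not rcp_check(1024, 20.0)  # too fast relative to the reference

The real check is richer (multiple reference runs per batch size, statistically derived tolerances), but the shape is the same: a submission cannot win by converging implausibly faster than the reference model, which keeps results comparable and catches non-compliance before publication.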

This round we hit 650 results from 12 different organizations, about a 5X increase from the previous round. Compared to the same benchmarks in v0.7, the fastest submissions in v1.0 were about 1.2-2X faster - a testament to the rapid pace of evolution in machine learning.

We could not have done this without our fantastic submitters. A big congratulations to everyone, especially the first-time submitters: Gigabyte, Graphcore, Habana Labs, Lenovo, Nettrix, PCL & PKU, and Supermicro.

I want to recognize a number of folks for their outsized contributions to this round of training:

Victor Bittorf and John Tran, who helped lead the working group.

Marek Wawrzos, Daniel Galvez, Sam Davis, and George Mathew, who helped with the speech-to-text task and reference model.

Michal Marcinkiewicz, Pablo Ribalto, Fabian Isensee, and Tony Reina, who helped with the 3D medical imaging task and reference model.

Elias Mizan and Marek Wawrzos, who helped with logging, infrastructure, and RCPs.

I also want to emphasize that this was a huge team effort, and many other folks contributed.

Congratulations again - I'm excited to dig into all the results! I will also be sharing some of the press coverage a little later today.


Thanks,


David Kanter
Executive Director

David Kanter

Jun 30, 2021, 7:25:06 PM
to Janapa Reddi, Vijay, Debojyoti Dutta, public, community, training
Passing along the EOD press roundup from our colleagues at Strangebrew:


Sharing an EOD recap of coverage – so far we've secured 9 original articles, and the messaging around greater diversity in submitters is coming through strongly. Today's news was also featured on Hacker News. It's great to see this pickup, in addition to all the chatter on socials! We'll continue to monitor for and flag any follow-on coverage.


One note - Nicole Hemsoth tweeted that she'll be covering the results for The Next Platform soon. We followed up with her this morning to see if she'd like to set up a 1:1 and will share any feedback.


Thanks!
Brittany 


Press Coverage (9): 


Notable Mentions on Twitter: 


AI News @AI_TechNews

MLCommons releases latest MLPerf Training benchmark results artificialintelligence-news.com/2021/06/30/mlc… #mlperf #ml #ai #news #tech


All Things Tech @TechNewsGen

Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks - ZDNet ift.tt/3jtP5yZ


Dr. Joseph Frusci @JFrusci

Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks  zdnet.com/article/graphc… ZDNet


FierceElectronics @FierceElectron

Nvidia, Google celebrate MLPerf performance fierceelectronics.com/electronics/nv


Google Cloud Tech @GoogleCloudTech

For the second year in a row, the latest MLPerf benchmark shows that Google has the world’s fastest machine learning supercomputers. Read all about our record-breaking MLPerf-winning submission ↓  cloud.google.com/blog/products/…


HPCwire @HPCwire

MLPerf Results: Nvidia Shines but Intel, Graphcore, Google Increase Their Presence #ISC21 #HPC @NVIDIAAI ow.ly/FHh750Fm6U2


John Russell @JRussonHPC

Latest MLPerf Results: Nvidia Shines but Intel, Graphcore, Google Increase Their Presence - hpcwire.com/2021/06/30/lat…


Karl Freund @karlfreund

My thoughts on the latest #AI #MLPerf Training Benchmarks, with NVIDIA Google Cloud Habana Labs and Graphcore all showing excellent results.  Congrats to all submitters!  forbes.com/sites/karlfreu…


Neurons.AI - The Global AI Ecosystem #intoAI #AI @into_AI

MLCommons™ Releases MLPerf™ Training v1.0 Results - MLPerf Training measures the t neurons.ai/blog/news-stor… #machinelearning #intoAInews


Nicole Hemsoth @NicoleHemsoth

MLPerf results are live. One standout new entrant in both open and closed is Graphcore. Deeper dive on that coming shortly at Next Platform mlcommons.org/en/


NVIDIA AI  @NVIDIAAI

@Dell, @Fujitsu_Global, @GIGABYTEUSA, @InspurSystems, @Lenovo, Nettrix, and @Supermicro_SMCI joined #NVIDIA to deliver best-in-class #MLPerf benchmark results in training #AI models, powered by NVIDIA A100 Tensor Core GPUs. Learn more: nvda.ws/3w2D4De


NVIDIA AI Developer @NVIDIAAIDev

On the latest round of #MLPerf v1.0 training submissions NVIDIA improved up to 2.1x on a chip-to-chip basis and up to 3.5x at scale, setting 16 performance records. Learn how: nvda.ws/3hjAoM9


Ryan Daws 🤓 @Gadget_Ry

MLCommons releases latest MLPerf Training benchmark results artificialintelligence-news.com/2021/06/30/mlc… #mlperf #ml #ai #news #tech


TensorFlow @TensorFlow

🏆⚡️ The latest MLPerf Benchmark results are out and Google's #TPU v4 has set new performance records! Now you can train some of the most common ML models in seconds.

Learn more → goo.gle/3jI0ATN


Tiernan Ray @TiernanRayTech

What you will also see in these MLPerf AI training results are a lot of AMD chips showing up in a lot of systems… // $AMD $INTC $NVDA


TIRIAS Research @tiriasresearch

@commons_ml released the latest #MLPerf Training results that include some startups and new submitters as the benchmark gains traction. You can access all the results at MLCommons and read Jim McGregor's (@TekStrategist) review on @Forbes at bit.ly/3h6Vkae #AI


ZDNet @ZDNet

Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks  zdnet.com/article/graphc…



David Kanter

Jul 1, 2021, 11:27:33 AM
to public, community, training
For those of you with social media clout, we have the original posts on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:6816048188231946240/ and Twitter: https://twitter.com/commons_ml/status/1410283768812904450?s=20

Thanks!

David

Debojyoti Dutta

Jul 2, 2021, 8:51:13 PM
to David Kanter, public, community, training

Congrats to the team - this is a big milestone! This agile benchmark will help accelerate these representative workloads.

 


Janapa Reddi, Vijay

Jul 2, 2021, 8:51:15 PM
to Debojyoti Dutta, David Kanter, public, community, training
Awesome! Congratulations to the training submitters and WG. 

David, thank you for summarizing the highlights - it makes it easy to spot the major achievements and changes at a glance.
