Sharing an EOD recap of coverage – so far, we’ve secured 9 original articles, and our agreed messaging around greater diversity in submitters is coming through strongly. Today’s news was also featured on Hacker News. It's great to see this pickup, in addition to all the chatter on socials! We’ll continue to monitor for and flag any follow-on coverage.
One note - Nicole Hemsoth tweeted that she’ll be covering the results for The Next Platform soon. We followed up with her this morning to see if she’d like to set up a 1:1 and will share any further feedback.
Thanks!
Brittany
Press Coverage (9):
NEW AI News: MLCommons releases latest MLPerf Training benchmark results
EETimes: Google and Nvidia Tie in MLPerf; Graphcore and Habana Debut
NEW Fierce Electronics: Nvidia, Google celebrate MLPerf performance
NEW Forbes: MLPerf Gains Traction Amongst Vendors And Customers
Forbes: News Flash: NVIDIA Remains The Fastest Available AI Accelerator
Hardwareluxx: MLPerf Training 1.0: Graphcore and Google attack NVIDIA's supremacy
NEW HPCWire: Latest MLPerf Results: Nvidia Shines but Intel, Graphcore, Google Increase Their Presence
ServeTheHome: MLPerf Training v1.0 Results Still NVIDIA Led but with 500W NVIDIA A100
ZDNet: Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks
Syndicated by: Tech Investor News
Notable Mentions on Twitter:
MLCommons releases latest MLPerf Training benchmark results artificialintelligence-news.com/2021/06/30/mlc… #mlperf #ml #ai #news #tech
Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks - ZDNet ift.tt/3jtP5yZ
Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks zdnet.com/article/graphc… ZDNet
FierceElectronics @FierceElectron
Nvidia, Google celebrate MLPerf performance fierceelectronics.com/electronics/nv
Google Cloud Tech @GoogleCloudTech
For the second year in a row, the latest MLPerf benchmark shows that Google has the world’s fastest machine learning supercomputers. Read all about our record-breaking MLPerf-winning submission ↓ cloud.google.com/blog/products/…
MLPerf Results: Nvidia Shines but Intel, Graphcore, Google Increase Their Presence #ISC21 #HPC @NVIDIAAI ow.ly/FHh750Fm6U2
Latest MLPerf Results: Nvidia Shines but Intel, Graphcore, Google Increase Their Presence - hpcwire.com/2021/06/30/lat…
My thoughts on the latest #AI #MLPerf Training Benchmarks, with NVIDIA Google Cloud Habana Labs and Graphcore all showing excellent results. Congrats to all submitters! forbes.com/sites/karlfreu…
Neurons.AI - The Global AI Ecosystem #intoAI #AI @into_AI
MLCommons™ Releases MLPerf™ Training v1.0 Results - MLPerf Training measures the t neurons.ai/blog/news-stor… #machinelearning #intoAInews
MLPerf results are live. One standout new entrant in both open and closed is Graphcore. Deeper dive on that coming shortly at Next Platform mlcommons.org/en/
@Dell, @Fujitsu_Global , @GIGABYTEUSA, @InspurSystems, @Lenovo, Nettrix, and @Supermicro_SMCI joined #NVIDIA to deliver best-in-class #MLPerf benchmark results in training #AI models, powered by NVIDIA A100 Tensor Core GPUs. Learn more: nvda.ws/3w2D4De
NVIDIA AI Developer @NVIDIAAIDev
On the latest round of #MLPerf v1.0 training submissions NVIDIA improved up to 2.1x on a chip-to-chip basis and up to 3.5x at scale, setting 16 performance records. Learn how: nvda.ws/3hjAoM9
MLCommons releases latest MLPerf Training benchmark results artificialintelligence-news.com/2021/06/30/mlc… #mlperf #ml #ai #news #tech
🏆⚡️ The latest MLPerf Benchmark results are out and Google's #TPU v4 has set new performance records! Now you can train some of the most common ML models in seconds.
Learn more → goo.gle/3jI0ATN
What you will also see in these MLPerf AI training results are a lot of AMD chips showing up in a lot of systems… // $AMD $INTC $NVDA
TIRIAS Research @tiriasresearch
@commons_ml released the latest #MLPerf Training results that include some startups and new submitters as the benchmark gains traction. You can access all the results at MLCommons and read Jim McGregor's (@TekStrategist) review on @Forbes at bit.ly/3h6Vkae #AI
Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks zdnet.com/article/graphc…
Awesome! Congratulations to the training submitters and WG. David, thank you for summarizing the highlights. It is very helpful to quickly notice the major achievements and changes.

On Wed, Jun 30, 2021 at 1:28 PM Debojyoti Dutta <debojyo...@nutanix.com> wrote:
Congrats to the team, this is a big milestone! This agile benchmark will lead to the acceleration of these representative workloads.
From: David Kanter <da...@mlcommons.org>
Date: Wednesday, June 30, 2021 at 10:05 AM
To: public <pub...@mlcommons.org>, community <comm...@mlcommons.org>, training <trai...@mlcommons.org>
Subject: Congratulations - MLPerf Training v1.0 results are live!
Hi Everyone,
Our latest training results went live this morning - you can find the results here: https://www.mlcommons.org/en/training-normal-10/
We have the press release here: https://www.mlcommons.org/en/news/mlperf-training-v10/
Training 1.0 retires two familiar benchmarks - the translation tasks using the NMT and Transformer models. These are replaced by a speech-to-text task using the RNN-T reference model that operates on the LibriSpeech dataset, and a 3D tumor segmentation task using the 3D U-Net model applied to the KiTS2019 dataset.
Additionally, this is the first time we have used our Reference Convergence Points (RCP) methodology, which significantly simplified submission and reduced potential problems. We believe it will be even more valuable in future rounds.
This round drew 650 results from 12 different organizations, about a 5X increase from the previous round. Compared to the same benchmarks in v0.7, the fastest submissions in v1.0 were about 1.2-2X faster - a testament to the rapid pace of evolution in machine learning.
We could not have done this without our fantastic submitters. A big congratulations to everyone, especially the first-time submitters: Gigabyte, Graphcore, Habana Labs, Lenovo, Nettrix, PCL & PKU, & Supermicro.
I want to recognize a number of folks for their outsized contributions to this round of training:
Victor Bittorf and John Tran who helped lead the working group.
Marek Wawrzos, Daniel Galvez, Sam Davis, and George Mathew who helped with the speech-to-text task and reference model.
Michal Marcinkiewicz, Pablo Ribalto, Fabian Isensee, and Tony Reina who helped with the 3D medical imaging task and reference model.
Elias Mizan and Marek Wawrzos who helped with the logging and infrastructure and RCPs.
I also want to emphasize that this was a huge team effort, and many other folks contributed.
Congratulations again - I'm excited to dig into all the results! I will also be sharing some of the press coverage a little later today.
Thanks,
----
You received this message because you are subscribed to the Google Groups "community" group.
To unsubscribe from this group and stop receiving emails from it, send an email to community+...@mlcommons.org.
To view this discussion on the web visit https://groups.google.com/a/mlcommons.org/d/msgid/community/CA%2B700FFPiUsFGHZO7gKvM1KY51zcmUO7QeH7DEE0m1cy_aO5Wg%40mail.gmail.com [groups.google.com].
For more options, visit https://groups.google.com/a/mlcommons.org/d/optout [groups.google.com].