Forming an AI Safety Working Group at MLC


David Kanter

Oct 26, 2023, 9:24:47 AM10/26/23
to public, community, Voting Representatives, Representative Contacts
Hi Everyone,

We are excited to share some significant news from MLCommons and key industry and academic collaborators. Today we are announcing the formation of a new MLCommons AI Safety working group focusing on building AI safety benchmarks.

As you are aware, there is a lot of concern and discussion about AI safety risks, but much of that discussion offers little in the way of technical approaches to addressing the problem. The explosion of generative AI large language models (LLMs) has exacerbated potential safety risks, including toxicity, misinformation, and bias, among many others.

MLCommons and the multi-disciplinary group of AI experts participating in the AI Safety working group, spanning industry, academia, and civil society, believe that as AI testing matures, open, collaborative AI safety benchmarks are a vital step toward establishing standards for better AI safety.

Initial participation in the AI Safety working group includes: Anthropic, Coactive AI, Google, Inflection, Intel, Meta, Microsoft, NVIDIA, OpenAI, Qualcomm Technologies, Inc., and academics Joaquin Vanschoren from Eindhoven University of Technology, Percy Liang from Stanford University, and Bo Li from the University of Chicago.

The working group has already been hard at work formulating the ideas and underlying infrastructure needed to build and support AI safety benchmarks. It is open to additional academic and industry researchers and engineers, as well as domain experts from civil society and the public sector.

I also want to extend a huge thanks to Peter Mattson, who has spent the last nine months helping to assemble this fantastic team. It has been a huge amount of work, and I'm excited to see what we can achieve.

Please help us spread the word on social media:

https://x.com/MLCommons/status/1717526698949013660?s=20

You can learn more about the new AI Safety working group in our blog (https://mlcommons.org/en/news/formation-ai-safety-working-group/), and if you have additional questions I’d be happy to answer them.


Thanks,


David Kanter
Executive Director