
Grigori Fursin

Jan 28, 2020, 11:23:13 AM
to Collective Knowledge, ctuning-discussions, Artifact Evaluation for systems and ML conferences
Dear colleagues,

I wish you a very happy New Year and would like to share some exciting news.

We have released a prototype of our open CodeReef portal to support reproducible R&D, with several important enhancements to the CK framework based on your feedback over the last few years:

It is now possible to share and publish stable research components (R&D automation actions, software detection plugins, meta-packages, portable workflows from reproduced papers, etc.) in a PyPI-like way, instead of always relying on potentially unstable versions from GitHub repositories.

Aggregating and versioning stable components in one place makes it possible to assemble reliable workflows and perform continuous testing and benchmarking of research techniques from published papers. One practical example is our stable workflow that automates the MLPerf inference benchmark.
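For those who have not tried the CK Python API before, here is a minimal sketch of how such a workflow can be pulled and inspected programmatically. It assumes the ck package from PyPI and the public ck-mlperf GitHub repository; the exact names of the stable components published on CodeReef may differ:

  # Minimal sketch using the CK Python API (pip install ck).
  # The repository name 'ck-mlperf' refers to the public GitHub
  # repository; stable component names on CodeReef may differ.
  import ck.kernel as ck

  def run(request):
      # CK actions return a dict with a numeric 'return' code
      # instead of raising exceptions, so check it explicitly.
      r = ck.access(request)
      if r['return'] > 0:
          raise RuntimeError(r.get('error', 'CK action failed'))
      return r

  # Pull the shared repository with the MLPerf automation components.
  run({'action': 'pull', 'module_uoa': 'repo', 'data_uoa': 'ck-mlperf'})

  # List the portable programs (workflows) that it exposes.
  r = run({'action': 'list', 'module_uoa': 'program',
           'repo_uoa': 'ck-mlperf'})
  for item in r['lst']:
      print(item['data_uoa'])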

We will present our platform at the MLSys'20 workshop on MLOps systems in Austin, TX; if you plan to be there, just give us a shout! You can also catch our team at the AI Hardware Summit in Munich and at ASPLOS'20 in Lausanne to discuss this project face-to-face.

There is still a lot to be done, and we continue to add new features to the CodeReef platform and the CK framework based on user needs, so don't hesitate to send us your feedback via our Slack or GitHub.

We also collaborate with several ML and systems conferences to define a common format for sharing and reusing artifacts and workflows from published papers, to automate the validation of experimental results, and to support reproducible benchmarking. Please tell us if you are interested in these activities, and we will keep you in the loop!

Looking forward to working with all of you this year,

Grigori Fursin, PhD
President of the cTuning foundation
Co-founder and CTO of CodeReef
