[CFP] Workshop on Graph Learning Benchmarks @ WWW 2021

Jiaqi Ma

Jan 6, 2021, 5:44:15 PM
to Mining and Learning with Graphs Mailing List
*** Apologies for Cross-Posting ***

Dear All,

We are organizing a workshop on Graph Learning Benchmarks (GLB) at the Web Conference 2021. Please find the CFP below. The latest information can be found on our website: https://graph-learning-benchmarks.github.io/

Important Dates
  • Submission deadline: Feb 15, 2021
  • Acceptance notification: Mar 8, 2021
  • Camera-ready version due: Mar 22, 2021

Inspired by the conference tracks in the computer vision and natural language processing communities that are dedicated to establishing new benchmark datasets and tasks, we call for contributions that introduce novel ML tasks or novel graph-structured data which have the potential to (i) help understand the performance and limitations of graph representation models on diverse sets of problems and (ii) support benchmark evaluations for various models.

We especially (but not exclusively) call for submissions which will contribute to at least one of the following:

  • New Graph Datasets: Novel graph-structured datasets—especially large-scale, application-oriented, and publicly accessible datasets. We also welcome methods and software packages that enable streamlined benchmarking of large-scale graph data, crawling or crowdsourcing for labeled graph data, and generation of realistic synthetic graphs.
  • New ML Tasks: New ML tasks and applications on different types of graphs, at different levels (e.g., node, edge, and (sub)graph), with a special focus on real-world problems of industrial value.
  • New Metrics: New evaluation procedures and metrics for graph learning, associated with the various tasks and datasets.
  • Benchmarking Studies: Studies that benchmark multiple graph ML methods (especially graph neural networks) on non-trivial tasks and datasets. We explicitly encourage works that reveal limitations of existing models, optimize matches between model design and problems, and other novel findings about the behaviors of existing models on various tasks or datasets.
Acceptance of contributed papers will be based on the meaningfulness of the established graph learning tasks/datasets and their potential to be formalized into new benchmarks, rather than on the performance of ML models (old or new) on these tasks. We particularly welcome negative results of popular, state-of-the-art models on a new task/dataset, as these provide novel insights into the community’s meta-knowledge of graph ML.

Abstracts and papers can be submitted through CMT:

  • A paper no longer than 4 pages (excluding references and the appendix) using the ACM “sigconf” LaTeX template (see the instructions from the Web Conference 2021).
  • This workshop is non-archival. Relevant findings that have been recently published are also welcome.
  • The review process is single-blind to ease data/code sharing: reviewers are anonymous, but authors need not anonymize their submission.
  • Authors are strongly encouraged to include the corresponding datasets and code as supplementary materials in their submission. For large datasets or repositories, authors may provide an external link through GitHub, Google Drive, Dropbox, OneDrive, or Box. We limit the choice of storage platforms for security reasons; please email the organizers if none of the listed platforms works for you.
  • If the data cannot be made publicly available, include an extra section explaining how the results of the established benchmark may generalize to other graph data.

Jiaqi Ma, University of Michigan
Jiong Zhu, University of Michigan
Yuxiao Dong, Facebook AI
Danai Koutra, University of Michigan
Qiaozhu Mei, University of Michigan