Dear Colleagues,
We are organizing the 3rd Workshop on Graph Learning Benchmarks (GLB 2023) in conjunction with KDD 2023. In the past two years, we successfully organized the first and second editions of the workshop, which attracted significant attention from the graph machine learning community. Detailed information about the previous editions is available at this website: https://graph-learning-benchmarks.github.io/all-editions
Please find the CFP for this year below. The latest information can be found on our website: https://graph-learning-benchmarks.github.io/. If you have any questions, please contact us through this email address: glb23-or...@umich.edu
=======================
Important Dates
Submission deadline: May 26, 2023
Acceptance notification: June 13, 2023
Camera-ready version due: June 27, 2023
=======================
Overview
Inspired by the conference tracks in the computer vision and natural language processing communities that are dedicated to establishing new benchmark datasets and tasks, we call for contributions that introduce novel ML tasks or novel graph-structured data with the potential to (i) help understand the performance and limitations of graph representation models on diverse sets of problems and (ii) support benchmark evaluations for various models.
We especially (but not exclusively) call for submissions that contribute to at least one of the following:
Real-World Graph Datasets: Novel real-world graph-structured datasets, especially those that are large-scale, application-oriented, and publicly accessible.
Synthetic Graph Datasets: Synthetic graph-structured datasets that are well-supported by graph theory, network science, or empirical studies, and can be used to reveal limitations of existing graph learning methods.
Graph Benchmarking Software Packages: Software packages that enable streamlined benchmarking of large-scale online graphs, crawling or crowdsourcing of graph data, and generation of realistic synthetic graphs.
Graph Learning Tasks: New learning tasks and applications on different types of graphs, at different levels (e.g., node, edge, and (sub)graph), with a special focus on real-world and industry-oriented problems.
Evaluation Metrics: New evaluation procedures and metrics for graph learning, associated with the various tasks and datasets.
Benchmarking Studies: Studies that benchmark multiple graph ML methods (especially graph neural networks) on non-trivial tasks and datasets. We explicitly encourage work that reveals limitations of existing models, identifies better matches between model designs and problems, or presents other novel findings about the behaviors of existing models on various tasks or datasets.
Graph Learning Task Taxonomy: Discussions towards a more comprehensive and fine-grained taxonomy of graph learning tasks.
Acceptance of contributed papers will be decided based on the meaningfulness of the established graph learning tasks/datasets and their potential to be formalized into new benchmarks, rather than on the performance of ML models (old or new) on these tasks. We particularly welcome contributions reporting negative results of popular, state-of-the-art models on a new task/dataset, as these offer novel insights that advance the community’s meta-knowledge of graph ML.
=======================
Submission
Abstracts and papers can be submitted through CMT:
https://cmt3.research.microsoft.com/GLB2023
Format
Papers should be no longer than 4 pages (excluding references and the appendix) and use the ACM “sigconf” LaTeX template (see the instructions from KDD 2023); a minimal template sketch is provided at the end of this section.
This workshop is non-archival. Relevant findings that have been recently published are also welcome.
The review process is single-blind for the ease of data/code sharing: reviewers are anonymous, but authors do not need to anonymize their submissions.
Authors are strongly encouraged to include the corresponding datasets and code as supplementary materials in their submission. For large datasets or repositories, authors can provide an external link through GitHub, Google Drive, Dropbox, OneDrive, or Box. We limit the choice of storage platforms for security reasons; please email the organizers if none of the listed platforms works for you.
If the data cannot be made publicly available, an extra section is required that explains how the results on the established benchmark may generalize to other graph data.
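For reference, below is a minimal LaTeX skeleton using the standard ACM “sigconf” document class. This is a plain illustrative sketch, not the official requirement: the title, author, and email fields are placeholders, and the exact class options you need (e.g., whether a review or nonacm option is required) should be taken from the KDD 2023 formatting instructions.

    % Minimal sketch assuming the standard acmart class;
    % verify class options against the KDD 2023 instructions.
    \documentclass[sigconf]{acmart}
    \begin{document}
    \title{Your GLB 2023 Submission Title} % placeholder title
    \author{Author Name}                   % placeholder author
    \affiliation{%
      \institution{Your Institution}
      \city{Your City}
      \country{Your Country}}
    \email{author@example.edu}             % placeholder email
    \begin{abstract}
    A short abstract of the submission.
    \end{abstract}
    \maketitle
    % Body text goes here: at most 4 pages,
    % excluding references and the appendix.
    \end{document}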
=======================
Organizers
Jiaqi Ma, Harvard University / University of Illinois Urbana-Champaign
Jiong Zhu, University of Michigan
Yuxiao Dong, Tsinghua University
Danai Koutra, Amazon / University of Michigan
Jingrui He, University of Illinois Urbana-Champaign
Qiaozhu Mei, University of Michigan
Anton Tsitsulin, Google Research
Xingjian Zhang, University of Michigan
Marinka Zitnik, Harvard University