Special Session on "Learning Representations for
Structured Data"
2020 IEEE International Joint Conference on Neural Networks - World Congress on Computational Intelligence (IJCNN-WCCI)
July 19-24 2020, Glasgow, UK
Important Dates:
Paper submission: 15 January 2020
Notification of acceptance: 15 March 2020
Aims and Scope:
Structured data, e.g. sequences, trees and graphs, are a natural representation for compound information made of atomic pieces of information (i.e. the nodes and their labels) and their relationships, represented by the edges (and their labels). Graphs are one of the most general and complex forms of structured data, allowing the representation of networks of interacting elements, e.g. in social networks or metabolomics, as well as of data where topological variations influence the feature of interest, e.g. molecular compounds. Being able to process data in these rich structured forms provides a fundamental advantage when it comes to identifying data patterns suitable for predictive and/or explorative analyses. This has motivated a growing interest of the machine learning community in developing learning models for structured information.
Thanks to the growing availability of computational resources and data, modern machine learning methods promote flexible representations that can be learned end-to-end from data. For instance, recent deep learning approaches to representation learning on structured data complement the flexibility of data-driven methods with structural biases derived from prior knowledge about the problem at hand. Representation learning is also becoming of great importance in other areas, such as kernel-based and probabilistic models. It has also been shown that, when the data available for the task at hand is limited, it is beneficial to resort to representations learned in an unsupervised fashion, or on different, but related, tasks.
This session focuses on learning representations for structured data such as sequences, trees, graphs, and relational data. Topics of interest to this session include, but are not limited to:
- Deep learning and representation learning for graphs
- Learning with network data
- Graph generation (probabilistic models, variational autoencoders, adversarial training, …)
- Graph reduction and pooling in Graph Neural Networks
- Adaptive processing of structured data (neural, probabilistic, kernel)
- Recurrent, recursive and contextual models
- Tensor methods for structured data
- Reservoir computing and randomized neural networks for structures
- Relational deep learning
- Learning implicit representations
- Applications of adaptive structured data processing, e.g. Natural Language Processing, machine vision (e.g. point clouds as graphs), materials science, chemoinformatics, computational biology, social networks
Submission:
Organisers:
- Davide Bacciu, University of Pisa
- Filippo Maria Bianchi, Norwegian Research Centre
- Thomas Gärtner, University of Nottingham
- Nicolò Navarin, University of Padova
- Alessandro Sperduti, University of Padova