[CFP] Special Issue on Information Storage, Compression and Prediction in Deep Neural Networks

Nicola Catenacci

May 14, 2024, 11:17:24 PM
[Apologies for multiple posts; please distribute to interested people!]

Dear colleagues,

We invite you to submit to the special issue on "Mathematical Understanding of Information Storage, Compression and Prediction in Neural Networks" (more details can be found here: https://www.frontiersin.org/research-topics/59300).


- The journal(s) -

The special issue is hosted by either of the following journals:
- Frontiers in Computational Neuroscience (Impact Factor: 3.2, CiteScore: 4.8), or
- Frontiers in Neuroinformatics (Impact Factor: 3.5, CiteScore: 5.3).


- Research Topic -

Despite their success, deep neural networks (NNs) are still largely used as "black boxes": the inner workings by which they arrive at their outputs are not revealed. We still lack the mathematical tools to fully understand the formal properties of these networks and their limitations. Improving our theoretical understanding of NNs is particularly important today, as they are being deployed in a growing number of safety-critical scenarios that demand constant scrutiny of their behavior.

The goal of the special issue is to investigate mathematical frameworks that enable a better understanding of how information is learned and represented within a neural network, including studies of existing approaches in this direction.
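As an illustrative example of such a framework, the Information Bottleneck (one of the topics listed below) casts learning as an explicit trade-off between compression and prediction: it seeks a representation T of the input X that is as compressed as possible while remaining predictive of the target Y,

    \min_{p(t|x)} \; I(X;T) - \beta \, I(T;Y),

where I(\cdot\,;\cdot) denotes mutual information and the multiplier \beta > 0 sets how much predictive power is retained per bit of compression.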


- Call for Papers -

We invite researchers to submit manuscripts focusing on the mathematical analysis of deep neural networks, including their information-theoretic interpretation and their statistical limits. Areas relevant to this special issue include, but are not limited to:

- Theory of Deep Feed-forward and Recurrent NNs
- Information-Theoretic Principles and Interpretation of NNs
- The Information Bottleneck and Deep Learning
- Compression in Deep Neural Networks
- Analysis of Pattern and Memory Storage in NNs
- Deep NNs for Brain-Inspired Machine Learning and Biological Modeling
- Statistical Physics of Deep Neural Networks
- Dynamical Systems Modeling of NNs
- Neural Network Dynamics and Stability
- Generalization and Regularization in NNs
- Learning Theory and Neural Networks
- Mathematical Models of Learning and Plasticity
- Neural Network Interpretability and Explainability
- Energy-Based Models in Deep Learning
- Neural Network Compression, Pruning, Sparsity and Efficient Computing
- Mathematics of Self-Supervised Deep Learning
- Optimization Landscapes and Loss Surface Analysis
- Neural Network Generalization and Overparameterization
- Mathematical Theories of Transformers and Attention Mechanisms
- Theoretical Foundations of Transfer Learning and Domain Adaptation

Manuscript Submission Deadline: 27 October 2024.


- Topic Editors -

Giorgio Gosti, Italian National Research Council (CNR), Italy (giorgi...@cnr.it)
Nicola Catenacci Volpi, University of Hertfordshire, United Kingdom (n.catena...@herts.ac.uk)
Nilesh Goel, Birla Institute of Technology and Science, United Arab Emirates



-----------------------------------


Dr. Nicola Catenacci Volpi
Research Fellow in Information Theory for AI & Robotics
Adaptive Systems Research Group
Department of Computer Science
The University of Hertfordshire
College Lane
Hatfield, Hertfordshire AL10 9AB
United Kingdom
E-mail: n.catena...@herts.ac.uk
