Weekly TMLR digest for Aug 14, 2022

Aug 13, 2022, 8:00:10 PM
to tmlr-annou...@googlegroups.com

Accepted papers

Title: TITRATED: Learned Human Driving Behavior without Infractions via Amortized Inference

Authors: Vasileios Lioutas, Adam Scibior, Frank Wood

Abstract: Models of human driving behavior have long been used for prediction in autonomous vehicles, but recently have also started being used to create non-playable characters for driving simulations. While such models are in many respects realistic, they tend to suffer from unacceptably high rates of driving infractions, such as collisions or off-road driving, particularly when deployed in map locations with road geometries dissimilar to the training dataset. In this paper we present a novel method for fine-tuning a foundation model of human driving behavior to novel locations where human demonstrations are not available which reduces the incidence of such infractions. The method relies on inference in the foundation model to generate infraction-free trajectories as well as additional penalties applied when fine-tuning the amortized inference behavioral model. We demonstrate this "titration" technique using the ITRA foundation behavior model trained on the INTERACTION dataset when transferring to CARLA map locations. We demonstrate a 76-86% reduction in infraction rate and provide evidence that further gains are possible with more computation or better inference algorithms.

URL: https://openreview.net/forum?id=M8D5iZsnrO


Title: No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL

Authors: Han Wang, Archit Sakhadeo, Adam M White, James M Bell, Vincent Liu, Xutong Zhao, Puer Liu, Tadashi Kozuno, Alona Fyshe, Martha White

Abstract: The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time consuming. We focus on hyperparameter tuning from offline logs of data, to fully specify the hyperparameters for an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model to identify promising hyperparameters. Though such a natural idea is (likely) being used in industry, it has yet to be systematically investigated. We identify several criteria to make this strategy effective, and develop an approach that satisfies these criteria. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails.
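The two-step recipe above (fit a calibration model from offline logs, then simulate learning in it to rank hyperparameters) can be illustrated with a toy sketch; the one-dimensional "environment", the hill-climbing "agent", and all constants below are invented for illustration and are not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline logs from a pretend environment whose reward is -(s - 3)^2 plus noise.
states = rng.uniform(-5.0, 5.0, size=200)
rewards = -(states - 3.0) ** 2 + rng.normal(0.0, 0.1, size=200)

# Step 1: learn a calibration model of the reward from the offline logs.
calib = np.poly1d(np.polyfit(states, rewards, deg=2))

# Step 2: simulate an online hill-climbing agent inside the calibration model
# for each candidate step size, and keep the one that ends up best.
def simulate(step_size, start=-4.0, steps=50):
    s = start
    for _ in range(steps):
        s = s + step_size * calib.deriv()(s)  # ascend the modeled reward
    return calib(s)

candidates = [0.001, 0.1, 2.0]
best = max(candidates, key=simulate)  # too small is slow, too large diverges
```

No real-environment interaction is needed to discard the diverging step size here, which is the point of tuning inside the calibration model.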

URL: https://openreview.net/forum?id=AiOUi3440V


Title: Mean-Field Langevin Dynamics : Exponential Convergence and Annealing

Authors: Lénaïc Chizat

Abstract: Noisy particle gradient descent (NPGD) is an algorithm to minimize convex functions over the space of measures that include an entropy term. In the many-particle limit, this algorithm is described by a Mean-Field Langevin dynamics---a generalization of the Langevin dynamics with a non-linear drift---which is our main object of study. Previous work has shown its convergence to the unique minimizer via non-quantitative arguments. We prove that this dynamics converges at an exponential rate, under the assumption that a certain family of Log-Sobolev inequalities holds. This assumption holds for instance for the minimization of the risk of certain two-layer neural networks, where NPGD is equivalent to standard noisy gradient descent. We also study the annealed dynamics, and show that for a noise decaying at a logarithmic rate, the dynamics converges in value to the global minimizer of the unregularized objective function.
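The NPGD update behind the abstract can be sketched for a simple quadratic potential V(x) = x^2/2: each particle takes a gradient step plus injected Gaussian noise, and the entropy-regularized minimizer is proportional to exp(-V/lambda), a centered Gaussian with variance lambda. All constants are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Entropy-regularized objective F(mu) = E_mu[V] + lam * Ent(mu), V(x) = x^2 / 2.
# Its minimizer is proportional to exp(-V / lam): a Gaussian with variance lam.
lam, eta = 0.5, 0.01                  # noise level and step size (illustrative)
x = rng.normal(0.0, 2.0, size=5000)   # initial particle cloud

for _ in range(2000):
    grad_v = x                        # V'(x) = x
    xi = rng.normal(size=x.shape)
    x = x - eta * grad_v + np.sqrt(2.0 * eta * lam) * xi
```

After many steps the empirical particle distribution approximates the unique minimizer, consistent with the convergence the paper quantifies.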

URL: https://openreview.net/forum?id=BDqzLH1gEm


Title: Variational Disentanglement for Domain Generalization

Authors: Yufei Wang, Haoliang Li, Hao Cheng, Bihan Wen, Lap-Pui Chau, Alex Kot

Abstract: Domain generalization aims to learn a domain-invariant model that can generalize well to the unseen target domain. In this paper, based on the assumption that there exists an invariant feature mapping, we propose an evidence upper bound of the divergence between the category-specific feature and its invariant ground-truth using variational inference. To optimize this upper bound, we further propose an efficient Variational Disentanglement Network (VDN) that is capable of disentangling the domain-specific features and category-specific features (which generalize well to the unseen samples). Besides, the generated novel images from VDN are used to further improve the generalization ability. We conduct extensive experiments to verify our method on three benchmarks, and both quantitative and qualitative results illustrate the effectiveness of our method.

URL: https://openreview.net/forum?id=fudOtITMIZ


Title: On Robustness to Missing Video for Audiovisual Speech Recognition

Authors: Oscar Chang, Otavio Braga, Hank Liao, Dmitriy Serdyuk, Olivier Siohan

Abstract: It has been shown that learning audiovisual features can lead to improved speech recognition performance over audio-only features, especially for noisy speech. However, in many common applications, the visual features are partially or entirely missing, e.g. the speaker might move off screen. Multi-modal models need to be robust: missing video frames should not degrade the performance of an audiovisual model to be worse than that of a single-modality audio-only model. While there have been many attempts at building robust models, there is little consensus on how robustness should be evaluated. To address this, we introduce a framework that allows claims about robustness to be evaluated in a precise and testable way. We also conduct a systematic empirical study of the robustness of common audiovisual speech recognition architectures on a range of acoustic noise conditions and test suites. Finally, we show that an architecture-agnostic solution based on cascades can consistently achieve robustness to missing video, even in settings where existing techniques for robustness like dropout fall short.

URL: https://openreview.net/forum?id=fXorxxbDvO


Title: Identifying Causal Structure in Dynamical Systems

Authors: Dominik Baumann, Friedrich Solowjow, Karl Henrik Johansson, Sebastian Trimpe

Abstract: Mathematical models are fundamental building blocks in the design of dynamical control systems. As control systems are becoming increasingly complex and networked, approaches for obtaining such models based on first principles reach their limits. Data-driven methods provide an alternative. However, without structural knowledge, these methods are prone to finding spurious correlations in the training data, which can hamper generalization capabilities of the obtained models. This can significantly lower control and prediction performance when the system is exposed to unknown situations. A preceding causal identification can prevent this pitfall. In this paper, we propose a method that identifies the causal structure of control systems. We design experiments based on the concept of controllability, which provides a systematic way to compute input trajectories that steer the system to specific regions in its state space. We then analyze the resulting data leveraging powerful techniques from causal inference and extend them to control systems. Further, we derive conditions that guarantee the discovery of the true causal structure of the system. Experiments on a robot arm demonstrate reliable causal identification from real-world data and enhanced generalization capabilities.

URL: https://openreview.net/forum?id=X2BodlyLvT


Title: Understanding AdamW through Proximal Methods and Scale-Freeness

Authors: Zhenxun Zhuang, Mingrui Liu, Ashok Cutkosky, Francesco Orabona

Abstract: Adam has been widely adopted for training deep neural networks because it requires less hyperparameter tuning and delivers remarkable performance. To improve generalization, Adam is typically used in tandem with a squared $\ell_2$ regularizer (referred to as Adam-$\ell_2$). However, even better performance can be obtained with AdamW, which decouples the gradient of the regularizer from the update rule of Adam-$\ell_2$. Yet, we are still lacking a complete explanation of the advantages of AdamW. In this paper, we tackle this question from both an optimization and an empirical point of view. First, we show how to re-interpret AdamW as an approximation of a proximal gradient method, which takes advantage of the closed-form proximal mapping of the regularizer instead of only utilizing its gradient information as in Adam-$\ell_2$. Next, we consider the property of "scale-freeness" enjoyed by AdamW and by its proximal counterpart: their updates are invariant to component-wise rescaling of the gradients. We provide empirical evidence across a wide range of deep learning experiments showing a correlation between the problems in which AdamW exhibits an advantage over Adam-$\ell_2$ and the degree to which we expect the gradients of the network to exhibit multiple scales, thus motivating the hypothesis that the advantage of AdamW could be due to the scale-free updates.
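The decoupling the abstract describes can be made concrete with a minimal scalar sketch (not the authors' code; the scalar setting and hyperparameters are invented for illustration):

```python
import numpy as np

def adam_l2(w, grad_fn, lr=0.1, wd=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=100):
    """Adam with the squared-l2 penalty folded into the gradient (Adam-l2):
    the regularizer's gradient wd * w enters the moment estimates."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(w) + wd * w
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        mhat, vhat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        w = w - lr * mhat / (np.sqrt(vhat) + eps)
    return w

def adamw(w, grad_fn, lr=0.1, wd=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=100):
    """AdamW: the decay term wd * w is applied directly to the weights,
    decoupled from the moment estimates."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        mhat, vhat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        w = w - lr * (mhat / (np.sqrt(vhat) + eps) + wd * w)
    return w
```

Rescaling the loss gradient by a constant leaves AdamW's trajectory essentially unchanged, since the decay term never enters the normalized moments; in Adam-$\ell_2$ the unrescaled regularizer gradient is mixed into the rescaled loss gradient, which breaks that invariance.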

URL: https://openreview.net/forum?id=IKhEPWGdwK


Title: Diagnosing and Fixing Manifold Overfitting in Deep Generative Models

Authors: Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C Cresswell, Anthony L. Caterini

Abstract: Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space. In this paper we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove that degenerate optima are achieved wherein the manifold itself is learned but not the distribution on it, a phenomenon we call manifold overfitting. We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by implicit models, such as generative adversarial networks, hence addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.

URL: https://openreview.net/forum?id=0nEZCVshxS


New submissions

Title: Gaussian process surrogate models for neural networks

Abstract: The lack of insight into deep learning systems hinders their systematic design. In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque. Modeling replaces a complex system with a simpler surrogate that is more amenable to interpretation. Drawing inspiration from this, we construct a class of surrogate models for neural networks using Gaussian processes. Rather than deriving the kernels for certain limiting cases of neural networks, we learn the kernels of the Gaussian process empirically from the naturalistic behavior of neural networks. We first evaluate our approach with two case studies inspired by previous theoretical studies of neural network behavior in which we capture neural network preferences for learning low frequencies and identify pathological behavior in deep neural networks. In two further practical case studies, we use the learned kernel to predict the generalization properties of neural networks.

URL: https://openreview.net/forum?id=p3pH2EKRQz


Title: Scaling Autoregressive Models for Content-Rich Text-to-Image Generation

Abstract: We present the Pathways Autoregressive Text-to-Image (Parti) model, which generates high-fidelity photorealistic images and supports content-rich synthesis involving complex compositions and world knowledge. Parti treats text-to-image generation as a sequence-to-sequence modeling problem, akin to machine translation, with sequences of image tokens as the target outputs rather than text tokens in another language. This strategy can naturally tap into the rich body of prior work on large language models, which have seen continued advances in capabilities and performance through scaling data and model sizes. Our approach is simple: First, Parti uses a Transformer-based image tokenizer, ViT-VQGAN, to encode images as sequences of discrete tokens. Second, we achieve consistent quality improvements by scaling the encoder-decoder Transformer model up to 20B parameters, with a new state-of-the-art zero-shot FID score of 7.23 and finetuned FID score of 3.22 on MS-COCO. Our detailed analysis on Localized Narratives as well as PartiPrompts (P2), a new holistic benchmark of over 1600 English prompts, demonstrate the effectiveness of Parti across a wide variety of categories and difficulty aspects. We also explore and highlight limitations of our models in order to define and exemplify key areas of focus for further improvements.

URL: https://openreview.net/forum?id=AFDcYJKhND


Title: Fail-Safe Adversarial Generative Imitation Learning

Abstract: For flexible yet safe imitation learning (IL), we propose theory and a modular method, with a safety layer that enables a closed-form density/gradient of the overall safe generative policy, end-to-end training using generative adversarial IL (GAIL), and worst-case fail-safety guarantees.
The safety layer is a "gluing together" of piecewise diffeomorphisms, with a sum over change-of-variables formulas as density. The safe action set (into which the safety layer maps) is inferred by sample-based adversarial reachability analysis of fallback maneuvers plus Lipschitz continuity or convexity arguments. We also provide theoretical analysis showing the robustness advantage of using the safety layer already during training (imitation error linear in the horizon) compared to only using it at test time (quadratic error). In an experiment on real-world driver interaction data, we empirically demonstrate tractability, safety and imitation performance of our approach.

URL: https://openreview.net/forum?id=e4Bb0b3QgJ


Title: Margin based Self-Supervised Neural Architecture Search

Abstract: Neural Architecture Search (NAS) has recently been used to achieve improved performance in various tasks, most prominently image classification. Yet most search strategies rely on large labeled datasets, which limits their usage when only a smaller fraction of the data is annotated. Self-supervised learning has shown great promise in training neural networks using unlabeled data. In this work, we propose a self-supervised neural architecture search (SSNAS) that finds novel network models without the need for labeled data. We show that such a search leads to results comparable to supervised training with a "fully labeled" NAS. While similar results have been shown in concurrent works, the uniqueness of this work is that we also show that such a search can improve the performance of self-supervised learning itself: using the learned architectures for self-supervised representation learning leads to improved performance. Thus, SSL can both improve NAS and be improved by it. Specifically, due to the common case of resource constraints, we exhibit the advantage of our approach when the number of labels in the search is relatively small.

URL: https://openreview.net/forum?id=qSnHmZg63d


Title: On a continuous time model of gradient descent dynamics and instability in deep learning

Abstract: The recipe behind the success of deep learning has been the combination of neural networks and gradient-based optimization. Understanding the behavior of gradient descent however, and particularly its instability, has lagged behind its empirical success. To add to the theoretical tools available to study gradient descent we propose the principal flow (PF), a continuous time flow that approximates gradient descent dynamics. To our knowledge, the PF is the only continuous flow that captures the divergent and oscillatory behaviors of gradient descent, including escaping local minima and saddle points. Through its dependence on the eigendecomposition of the Hessian the PF sheds light on the recently observed edge of stability phenomena in deep learning. Using our new understanding of instability we propose a learning rate adaptation method which enables us to control the trade-off between training stability and test set evaluation performance.

URL: https://openreview.net/forum?id=EYrRzKPinA


Title: Instance-Conditioned GAN Data Augmentation for Representation Learning

Abstract: Data augmentation has become a crucial component to train state-of-the-art visual representation models. However, handcrafting combinations of transformations that lead to improved performances is a laborious task, which can result in visually unrealistic samples. To overcome these limitations, recent works have explored the use of generative models as learnable data augmentation tools, showing promising results in narrow application domains, e.g., few-shot learning and low-data medical imaging. In this paper, we introduce a data augmentation module, called DA_IC-GAN, which leverages instance-conditioned GAN generations and can be used off-the-shelf in conjunction with most state-of-the-art training recipes. We showcase the benefits of DA_IC-GAN by plugging it out-of-the-box into the supervised training of ResNets and DeiT models on the ImageNet dataset, achieving accuracy boosts of between 1% and 2% with the highest capacity models. Moreover, the learnt representations are shown to be more robust than the baselines when transferred to a handful of out-of-distribution datasets, and exhibit increased invariance to variations of instance and viewpoints. We additionally couple DA_IC-GAN with a self-supervised training recipe and show that we can also achieve an improvement of 1% in accuracy in some settings. We open-source the code at anonymous.url to encourage reproducibility and further explorations. With this work, we strengthen the evidence on the potential of learnable data augmentations to improve visual representation learning, paving the road towards non-handcrafted augmentations in model training.

URL: https://openreview.net/forum?id=1n7q9mxG3T


Title: DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents

Abstract: Diffusion probabilistic models have been shown to generate state-of-the-art results on several competitive image synthesis benchmarks but lack a low-dimensional, interpretable latent space, and are slow at generation. On the other hand, standard Variational Autoencoders (VAEs) typically have access to a low-dimensional latent space but exhibit poor sample quality. We present DiffuseVAE, a novel generative framework that integrates VAE within a diffusion model framework, and leverage this to design novel conditional parameterizations for diffusion models. We show that the resulting model equips diffusion models with a low-dimensional VAE inferred latent code which can be used for downstream tasks like controllable synthesis. The proposed method also improves upon the speed vs quality tradeoff exhibited in standard unconditional DDPM/DDIM models (for instance, FID of 16.47 vs 34.36 using a standard DDIM on the CelebA-HQ-128 benchmark using T=10 reverse process steps) without having explicitly trained for such an objective. Furthermore, the proposed model exhibits synthesis quality comparable to state-of-the-art models on standard image synthesis benchmarks like CIFAR-10 and CelebA-64 while outperforming most existing VAE-based methods. Lastly, we show that the proposed method exhibits inherent generalization to different types of noise in the conditioning signal. Our code and model checkpoints will be made publicly available.

URL: https://openreview.net/forum?id=ygoNPRiLxw


Title: Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images

Abstract: Due to the intractability of characterizing everything that looks unlike the normal data, anomaly detection (AD) is traditionally treated as an unsupervised problem utilizing only normal samples. However, it has recently been found that unsupervised image AD can be drastically improved through the utilization of huge corpora of random images to represent anomalousness; a technique which is known as Outlier Exposure. In this paper we show that specialized AD learning methods are unnecessary for state-of-the-art performance, and furthermore one can achieve strong performance with just a small collection of Outlier Exposure data, contradicting common assumptions in the field of AD. We find that standard classifiers and semi-supervised one-class methods trained to discern between normal samples and relatively few random natural images are able to outperform the current state of the art on an established AD benchmark with ImageNet. Further experiments reveal that even one well-chosen outlier sample is sufficient to achieve decent performance on this benchmark (79.3% AUC). We investigate this phenomenon and find that one-class methods are more robust to the choice of training outliers, indicating that there are scenarios where these are still more useful than standard classifiers. Lastly, no training samples are necessary when one uses the representations learned by CLIP, a recent foundation model, which achieves state-of-the-art AD results on CIFAR-10 and ImageNet in a zero-shot setting.

URL: https://openreview.net/forum?id=3v78awEzyB


Title: Cheap and Deterministic Inference for Deep State-Space Models of Interacting Dynamical Systems

Abstract: Graph neural networks are often used to model interacting dynamical systems since they gracefully scale to systems with a varying and high number of agents. While there has been much progress made for deterministic interacting systems, modeling is much more challenging for stochastic systems in which one is interested in obtaining a predictive distribution over future trajectories. Existing methods are either computationally slow since they rely on Monte Carlo sampling or make simplifying assumptions such that the predictive distribution is unimodal. In this work, we present a deep state-space model which employs graph neural networks in order to model the underlying interacting dynamical system. The predictive distribution is multimodal and has the form of a Gaussian mixture model, where the moments of the Gaussian components can be computed via deterministic moment matching rules. Our moment matching scheme can be exploited for sample-free inference, leading to more efficient and stable training compared to Monte Carlo alternatives. Furthermore, we propose structured approximations to the covariance matrices of the Gaussian components in order to scale up to systems with many agents. We benchmark our novel framework on two challenging autonomous driving datasets. Both confirm the benefits of our method compared to state-of-the-art methods. We further demonstrate the usefulness of our individual contributions in a carefully designed ablation study and provide a detailed empirical runtime analysis of our proposed covariance approximations.

URL: https://openreview.net/forum?id=dqgdBy4Uv5


Title: Learning Two-Step Hybrid Policy for Graph-Based Interpretable Reinforcement Learning

Abstract: We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies on the RL problem with graph-based input. Unlike prior deep reinforcement learning policies parameterized by an end-to-end black-box graph neural network, our approach disentangles the decision-making process into two steps. The first step is a simplified classification problem that maps the graph input to an action group where all actions share a similar semantic meaning. The second step implements a sophisticated rule-miner that conducts explicit one-hop reasoning over the graph and identifies decisive edges in the graph input without the necessity of heavy domain knowledge. This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness. Extensive experimental studies on four levels of complex text-based games have demonstrated the superiority of the proposed method compared to the state-of-the-art.

URL: https://openreview.net/forum?id=Ox5tmhFBrc


Title: Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions

Abstract: There is a recent and growing literature on large-width asymptotic properties of deep Gaussian neural networks (NNs), i.e. deep NNs with Gaussian-distributed parameters or weights, and Gaussian stochastic processes. Motivated by some empirical analyses that show the potential of replacing Gaussian distributions with the more general Stable distributions for the NN’s weights, in this paper we investigate large-width asymptotic properties of deep Stable NNs, i.e. deep NNs with Stable-distributed parameters. For sub-linear activation functions, a recent work has characterized the infinitely wide limit of a suitable rescaled deep Stable NN in terms of a Stable stochastic process, which generalizes the Gaussian process. Here, we extend such a characterization to a general class of activation functions, which includes sub-linear, linear and super-linear functions. Our results show that in the Stable setting the scaling of the NN may depend on the choice of the activation function, thus bringing out a critical difference with respect to the Gaussian setting. In particular, while in the Gaussian setting the choice of the activation function does not affect the scaling required to achieve the infinitely wide Gaussian process, in the Stable setting the use of a linear activation function in place of a sub-linear or a super-linear activation function results in a change of the scaling, through an additional logarithmic term, in order to achieve the infinitely wide Stable process.

URL: https://openreview.net/forum?id=A5tIluhDW6


Title: A Note on "Assessing Generalization of SGD via Disagreement"

Abstract: Jiang et al. (2022) find empirically that the average test error of deep neural networks can be estimated via the prediction disagreement of two separately trained networks, which does not require labels. They show that this 'Generalization Disagreement Equality' follows from the well-calibrated nature of deep ensembles under the notion of a proposed 'class-aggregated calibration.' In this reproduction, we show on two datasets that the suggested theory might be impractical because a deep ensemble’s calibration can deteriorate as prediction disagreement increases, which is precisely when the coupling of test error and disagreement is of interest, and labels are needed to estimate the calibration on new datasets. Further, we simplify the theoretical statements and proofs, showing them to be straightforward within a probabilistic context unlike the original hypothesis space view employed by Jiang et al. (2022).

URL: https://openreview.net/forum?id=oRP8urZ8Fx


Title: Centroids Matching: an efficient Continual Learning approach operating in the embedding space

Abstract: Catastrophic forgetting (CF) occurs when a neural network loses the information previously learned while training on a set of samples from a different distribution, i.e., a new task. Existing approaches have achieved remarkable results in mitigating CF, especially in a scenario called task incremental learning. However, this scenario is not realistic, and limited work has been done to achieve good results on more realistic scenarios. In this paper, we propose a novel regularization method called Centroids Matching, that, inspired by meta-learning approaches, fights CF by operating in the feature space produced by the neural network, achieving good results while requiring a small memory footprint. Specifically, the approach classifies the samples directly using the feature vectors produced by the neural network, by matching those vectors with the centroids representing the classes from the current task, or all the tasks up to that point. Centroids Matching is faster than competing baselines, and it can be exploited to efficiently mitigate CF, by preserving the distances between the embedding space produced by the model when past tasks were over, and the one currently produced, leading to a method that achieves high accuracy on all the tasks, without using an external memory when operating on easy scenarios, or using a small one for more realistic ones. Extensive experiments demonstrate that Centroids Matching achieves accuracy gains on multiple datasets and scenarios.
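The classification rule described above (assign each sample to the class of the nearest centroid in the embedding space) can be sketched as follows; the embeddings below are toy stand-ins for a network's feature vectors, not the paper's pipeline:

```python
import numpy as np

def fit_centroids(embeddings, labels):
    """One centroid per class: the mean embedding of that class's samples."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(embeddings, classes, centroids):
    """Assign every sample to the class of its nearest centroid."""
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

The regularization the paper proposes then operates on these embeddings, keeping distances stable as new tasks arrive so that old centroids remain valid.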

URL: https://openreview.net/forum?id=7gzQltQSwr


Title: No imputation without representation

Abstract: By filling in missing values in datasets, imputation allows these datasets to be used with algorithms that cannot handle missing values by themselves. However, missing values may in principle contribute useful information that is lost through imputation. The missing-indicator approach can be used in combination with imputation to instead represent this information as a part of the dataset. There are several theoretical considerations why missing-indicators may or may not be beneficial, but there has not been any large-scale practical experiment on real-life datasets to test this question for machine learning predictions. We perform this experiment for three imputation strategies and a range of different classification algorithms, on the basis of twenty real-life datasets. We find that on these datasets, missing-indicators generally increase classification performance. In addition, we find no evidence for most algorithms that nearest neighbour and iterative imputation lead to better performance than simple mean/mode imputation. Therefore, we recommend the use of missing-indicators with mean/mode imputation as a safe default, with the caveat that for decision trees, pruning is necessary to prevent overfitting. In a follow-up experiment, we determine attribute-specific missingness thresholds for each classifier above which missing-indicators are more likely than not to increase classification performance, and observe that these thresholds are much lower for categorical than for numerical attributes. Finally, we argue that mean imputation of numerical attributes may preserve some of the information from missing values, and we show that in the absence of missing-indicators, it can similarly be useful to apply mean imputation to one-hot encoded categorical attributes instead of mode imputation.
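The recommended default above (mean imputation plus missing-indicators) can be sketched in a few lines; this is a generic illustration, not the paper's experimental code:

```python
import numpy as np

def mean_impute_with_indicators(X):
    """Mean-impute NaNs column-wise, then append one 0/1 missing-indicator
    column for every feature that has at least one missing value."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    X_imputed = np.where(missing, np.nanmean(X, axis=0), X)
    indicators = missing[:, missing.any(axis=0)].astype(float)
    return np.hstack([X_imputed, indicators])
```

The imputed matrix can be fed to any classifier that cannot handle missing values, while the indicator columns preserve the information that a value was missing.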

URL: https://openreview.net/forum?id=QmOESArhpp


Title: Incorporating Sum Constraints into Multitask Gaussian Processes

Abstract: Machine learning models can be improved by adapting them to respect existing background knowledge. In this paper we consider multitask Gaussian processes, with background knowledge in the form of constraints that require a specific sum of the outputs to be constant. This is achieved by conditioning the prior distribution on the constraint fulfillment. The approach allows for both linear and nonlinear constraints. We demonstrate that the constraints are fulfilled with high precision and that the construction can improve the overall prediction accuracy as compared to the standard Gaussian process.
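Conditioning a Gaussian prior on a linear sum constraint is standard Gaussian conditioning; the sketch below shows it for a fixed finite set of outputs, with a covariance and constraint value invented for illustration (the paper works with full multitask GP priors):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4
m = np.zeros(n)                           # prior mean over the n outputs
K = np.eye(n) + 0.5 * np.ones((n, n))     # an illustrative prior covariance
A = np.ones((1, n))                       # constraint matrix: sum of outputs...
c = np.array([3.0])                       # ...must equal 3

# Standard Gaussian conditioning of the prior f ~ N(m, K) on A f = c.
gain = K @ A.T @ np.linalg.inv(A @ K @ A.T)
m_post = m + (gain @ (c - A @ m)).ravel()
K_post = K - gain @ A @ K

sample = rng.multivariate_normal(m_post, K_post)  # draws obey the constraint
```

Every sample from the conditioned distribution satisfies the sum constraint (up to numerical precision), which is the sense in which the constraint is "fulfilled with high precision".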

URL: https://openreview.net/forum?id=gzu4ZbBY7S


Title: Unifying Approaches in Data Subset Selection via Fisher Information and Information-Theoretic Quantities

Abstract: The mutual information between predictions and model parameters---also referred to as expected information gain or BALD in machine learning---measures informativeness. It is a popular acquisition function in Bayesian active learning. In data subset selection, that is, active learning and active sampling, several recent works use Fisher information, Hessians, similarity matrices based on the gradients, or simply the gradient lengths to compute the acquisition scores that guide sample selection. Are these different approaches connected, and if so, how? In this paper, we revisit the Fisher information and use it to show how several otherwise disparate methods are connected as approximations of information-theoretic quantities known from earlier works in Bayesian optimal experiment design.
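The BALD quantity the abstract starts from is straightforward to compute from posterior predictive samples. A numpy sketch, with softmaxed random logits standing in for, e.g., MC-dropout forward passes (the simulated posterior is an assumption for illustration):

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; a small epsilon guards log(0).
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

# Simulated posterior predictive: S stochastic forward passes,
# N candidate points, C classes.
rng = np.random.default_rng(1)
S, N, C = 50, 10, 3
logits = rng.standard_normal((S, N, C))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# BALD = H[E_theta p(y|x,theta)] - E_theta H[p(y|x,theta)]:
# the mutual information between the prediction and the parameters.
mean_probs = probs.mean(axis=0)
bald = entropy(mean_probs) - entropy(probs).mean(axis=0)

# Acquire the points whose labels are most informative about theta.
acquired = np.argsort(bald)[::-1][:3]
```

Since BALD is a mutual information, the scores are non-negative up to numerical error; points where the ensemble disagrees despite confident individual predictions score highest.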

URL: https://openreview.net/forum?id=UVDAKQANOW


Title: Subgraph Permutation Equivariant Networks

Abstract: In this work we develop a new method, named Sub-graph Permutation Equivariant Networks (SPEN), which provides a framework for building graph neural networks that operate on sub-graphs with a permutation-equivariant base update function, making them equivariant to a novel choice of automorphism group. Message passing neural networks have been shown to be limited in their expressive power, and recent approaches to overcome this either lack scalability or require structural information to be encoded into the feature space. The general framework presented here overcomes the scalability issues associated with global permutation equivariance by operating more locally on sub-graphs. In addition, operating on sub-graphs improves the expressive power of higher-dimensional global permutation equivariant networks; this is due to the fact that two non-distinguishable graphs often contain distinguishable sub-graphs. Furthermore, the proposed framework only requires a choice of $k$-hops for creating ego-network sub-graphs and a choice of representation space to be used for each layer, which makes the method easily applicable across a range of graph-based domains. We experimentally validate the method on a range of graph benchmark classification tasks, demonstrating statistically indistinguishable results from the state-of-the-art on six out of seven benchmarks. Further, we demonstrate that the use of local update functions offers a significant improvement in GPU memory over global methods.
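The $k$-hop ego-network sub-graphs the framework requires are plain breadth-first search. A self-contained sketch, with the adjacency-dict graph representation as an illustrative assumption:

```python
from collections import deque

def ego_network(adj, center, k):
    """Return the set of nodes within k hops of `center` in an
    undirected graph given as an adjacency dict: node -> neighbours."""
    seen = {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen

# A small path graph 0-1-2-3-4: the 1-hop ego network of node 2
# is {1, 2, 3}; the 2-hop one covers all five nodes.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
one_hop = ego_network(adj, 2, 1)
two_hop = ego_network(adj, 2, 2)
```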

URL: https://openreview.net/forum?id=5erasT6Tal


Title: On Connecting Deep Trigonometric Networks with Deep Gaussian Processes: Covariance, Expressivity, and Neural Tangent Kernel

Abstract: Deep Gaussian Process (DGP) as a model prior in Bayesian learning intuitively exploits the expressive power of function composition. DGPs also offer diverse modeling capabilities, but inference is challenging because marginalization in latent function space is not tractable. With Bochner's theorem, DGP with squared exponential kernel can be viewed as a deep trigonometric network consisting of the random feature layers, sine and cosine activation units, and random weight layers. In the wide limit with a bottleneck, we show that the weight space view yields the same effective covariance functions which were obtained previously in function space. Also, varying the prior distributions over network parameters is equivalent to employing different kernels. As such, DGPs can be translated into the deep bottlenecked trigonometric networks, with which the exact maximum a posteriori estimate can be obtained. Interestingly, the network representation enables the study of DGP's neural tangent kernel, which may also reveal the mean of the intractable predictive distribution. Statistically, unlike the shallow networks, deep networks of finite width have covariance deviating from the limiting kernel, and the inner and outer widths may play different roles in feature learning. Numerical simulations are presented to support our findings.
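The random-feature view rests on Bochner's theorem: for the squared exponential kernel, Gaussian random frequencies with sine and cosine activations give an unbiased kernel approximation. A numpy sketch of this single building block (not the full deep bottlenecked network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate k(x, y) = exp(-||x - y||^2 / 2): draw spectral frequencies
# from a standard Gaussian and use sine/cosine feature maps, so that
# phi(x)^T phi(y) ~= k(x, y) with error shrinking in 1/sqrt(m).
d, m = 3, 20000                      # input dim, number of features
W = rng.standard_normal((d, m))      # spectral frequencies

def phi(x):
    proj = x @ W
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1) / np.sqrt(m)

x = rng.standard_normal(d)
y = rng.standard_normal(d)
approx = phi(x) @ phi(y)
exact = np.exp(-0.5 * np.sum((x - y) ** 2))
```

Stacking such layers with intermediate random weights is what yields the trigonometric-network reading of a DGP.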

URL: https://openreview.net/forum?id=DmjBJtCIKu


Title: Generative Adversarial Neural Operators

Abstract: We propose the generative adversarial neural operator (GANO), a generative model paradigm for learning probabilities on infinite-dimensional function spaces. The natural sciences and engineering are known to have many types of data that are sampled from infinite-dimensional function spaces, where classical finite-dimensional deep generative adversarial networks (GANs) may not be directly applicable. GANO generalizes the GAN framework and allows for the sampling of functions by learning push-forward operator maps in infinite-dimensional spaces. GANO consists of two main components, a generator neural operator and a discriminator neural functional. The inputs to the generator are samples of functions from a user-specified probability measure, e.g., Gaussian random field (GRF), and the generator outputs are synthetic data functions. The input to the discriminator is either a real or synthetic data function. In this work, we instantiate GANO using the Wasserstein criterion and show how the Wasserstein loss can be computed in infinite-dimensional spaces. We empirically study GANO in controlled cases where both input and output functions are samples from GRFs and compare its performance to the finite-dimensional counterpart GAN. We empirically study the efficacy of GANO on real-world function data of volcanic activities and show its superior performance over GAN. Furthermore, we find that for the function-based data considered, GANOs are more stable to train than GANs and require less hyperparameter optimization.

URL: https://openreview.net/forum?id=X1VzbBU6xZ


Title: If your data distribution shifts, use self-learning

Abstract: We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving the performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique, or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge of or access to the original training data or scheme, is robust to hyperparameter choices, is straightforward to implement, and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods, and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
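As a toy illustration of why pseudo-labeling can adapt a model under a systematic shift, here is a self-contained numpy sketch with a nearest-centroid classifier standing in for the deep network (the data, the shift, and the classifier are all assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Source data: two Gaussian classes; target data: the same classes
# under a systematic shift (a constant offset of the inputs).
n = 200
X_src = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y_src = np.repeat([0, 1], n)
X_tgt = X_src + np.array([1.0, 1.0])  # unlabeled, shifted data
y_tgt = y_src                          # held out, for evaluation only

# Source model: nearest-centroid classifier fit on labeled source data.
centroids = np.vstack([X_src[y_src == c].mean(axis=0) for c in (0, 1)])

def predict(X, cents):
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Pseudo-labeling adaptation: label the target data with the current
# model, then refit the model on those pseudo-labels, and repeat.
for _ in range(3):
    pseudo = predict(X_tgt, centroids)
    centroids = np.vstack([X_tgt[pseudo == c].mean(axis=0) for c in (0, 1)])

acc = (predict(X_tgt, centroids) == y_tgt).mean()
```

Note that no source labels are touched during adaptation, mirroring the paper's point that self-learning needs no access to the original training data.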

URL: https://openreview.net/forum?id=vqRzLv6POg


Title: Reinventing Policy Iteration under Time Inconsistency

Abstract: Policy iteration (PI) is a fundamental policy search algorithm in the standard reinforcement learning (RL) setting, which can be shown to converge to an optimal policy by policy improvement theorems. However, under time-inconsistent (TIC) objectives, the use of standard PI has been marked with questions regarding the convergence of its policy improvement scheme and the optimality of its termination policy, often leading to its avoidance. In this paper, we consider the infinite-horizon TIC RL setting and formally present a type of dynamic optimality: subgame perfect equilibrium (SPE), which corresponds to the sophisticated behaviour of an economic agent in the face of TIC. We first analyze standard PI under this type of dynamic optimality, revealing its merits and insufficiencies. Drawing on these observations, we propose backward Q-learning (bwdQ), a new algorithm in the approximate PI family that targets SPE policies under general (non-exponential) discounting criteria. Finally, with two TIC gridworld environments, we demonstrate the implications of our theoretical findings on the behavior of bwdQ and other approximate PI variants.
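A one-screen illustration of the time inconsistency that motivates the paper: under hyperbolic discounting d(t) = 1 / (1 + k t), a standard non-exponential example (the rewards and k below are illustrative choices), the preference between two rewards can reverse as both recede into the future:

```python
# Hyperbolic discounting is time-inconsistent: unlike exponential
# discounting, the ratio d(t + s) / d(t) depends on t, so rankings
# made today need not hold tomorrow -- which is why standard PI's
# improvement step becomes questionable under such criteria.
def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

small_soon = 10.0      # reward of 10, available at time t
large_late = 25.0      # reward of 25, available at time t + 4

# Viewed from time 0: the small immediate reward wins...
now = (small_soon * hyperbolic(0), large_late * hyperbolic(4))
# ...but with both options delayed by 10 steps, the ranking reverses.
later = (small_soon * hyperbolic(10), large_late * hyperbolic(14))
```

This preference reversal is exactly the "sophisticated agent" problem that SPE-style dynamic optimality formalizes.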

URL: https://openreview.net/forum?id=bN2vWLTh0P


Title: An Attract-Repel Decomposition of Undirected Networks

Abstract: Dot product latent space models are a standard method in many areas ranging from social network analysis to computational biology. Such models have trouble representing graphs with unclosed triangles, such as social networks with latent heterophily (i.e., cases where opposites attract) or co-occurrence graphs with substitutes (items which occur in similar contexts but not together). We show a minimal expansion of the dot product model which includes both homophily (attract) and heterophily (repel) latent forces. Beyond simply fitting the data, we discuss how to use the resulting attract-repel (AR) spaces to more deeply understand real networks, allowing analysts to measure the latent heterophily in social network formation, detect substitutes in co-occurrence networks, or perform exploratory analysis for candidate inhibition/activation relationships in systems biology.
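The decomposition scores an edge by an attract dot product minus a repel dot product. A numpy sketch showing why this escapes the positive semi-definiteness that limits plain dot product models (the dimensions and random embeddings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Attract-repel (AR) embedding: each node i gets an attract vector a_i
# and a repel vector r_i, and the affinity of (i, j) is
# a_i . a_j - r_i . r_j, so shared repel coordinates *reduce* affinity.
n, d_a, d_r = 5, 3, 2
A = rng.standard_normal((n, d_a))
R = rng.standard_normal((n, d_r))

scores = A @ A.T - R @ R.T           # pairwise AR affinities

# A plain dot product model is the special case R = 0, which forces the
# score matrix to be positive semi-definite; the AR score matrix can
# have negative eigenvalues, letting it fit unclosed triangles and
# heterophily.
eigvals = np.linalg.eigvalsh(scores)
```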

URL: https://openreview.net/forum?id=P18uCQqEcb


Title: Robustness through Data Augmentation Loss Consistency

Abstract: While deep learning through empirical risk minimization (ERM) has succeeded in achieving human-level performance at a variety of complex tasks, ERM is not robust to distribution shifts or adversarial attacks. Synthetic data augmentation followed by empirical risk minimization (DA-ERM) is a simple and widely used solution to improve robustness in ERM. In addition, consistency regularization can be applied to further improve the robustness of the model by forcing the representation of the original sample and the augmented one to be similar. However, existing consistency regularization methods are not applicable to *covariant data augmentation*, where the label in the augmented sample is dependent on the augmentation function, e.g., dialog state covaries with named entity when we augment data with a new named entity. In this paper, we propose data augmented invariant regularization (DAIR), a simple form of consistency regularization that is applied directly at the loss level rather than intermediate features, making it widely applicable to both invariant and covariant data augmentation regardless of network architecture, problem setup, and task. We apply DAIR to real-world learning problems involving covariant data augmentation: robust neural task-oriented dialog state tracking and robust visual question answering. We also apply DAIR to tasks involving invariant data augmentation: robust regression, robust classification against adversarial attacks, and robust ImageNet classification under distribution shift. Our experiments show that DAIR consistently outperforms ERM and DA-ERM with little marginal computational cost and sets new state-of-the-art results in several benchmarks.
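A sketch of what consistency regularization applied at the loss level looks like; the specific regularizer below, a squared difference of square-rooted per-sample losses, is our reading of DAIR and should be treated as an assumption rather than the paper's exact objective:

```python
import numpy as np

def dair_loss(loss_orig, loss_aug, lam=1.0):
    """DAIR-style objective (sketch): average the per-sample losses on
    original and augmented examples, and penalize the discrepancy
    between the two *losses* rather than between intermediate
    features, so the augmented label is free to covary with the
    augmentation."""
    erm = 0.5 * (loss_orig + loss_aug)
    reg = (np.sqrt(loss_orig) - np.sqrt(loss_aug)) ** 2
    return np.mean(erm + lam * reg)
```

Because only scalar losses enter the regularizer, the same code applies unchanged to dialog state tracking, VQA, or classification, which is the portability the abstract emphasizes.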

URL: https://openreview.net/forum?id=a1meaRy1bN


Title: Reasonable Effectiveness of Random Weighting: A Litmus Test for Multi-Task Learning

Abstract: Multi-Task Learning (MTL) has achieved success in various fields. However, how to balance different tasks to achieve good performance is a key problem. To achieve task balancing, many works carefully design dynamic loss/gradient weighting strategies, but the basic random experiments that would examine their effectiveness have been ignored. In this paper, we propose the Random Weighting (RW) methods, including Random Loss Weighting (RLW) and Random Gradient Weighting (RGW), where an MTL model is trained with random loss/gradient weights sampled from a distribution. To show the effectiveness and necessity of RW methods, we theoretically analyze the convergence of RW and reveal that RW has a higher probability of escaping local minima, resulting in better generalization ability. Empirically, we extensively compare the proposed RW methods with twelve state-of-the-art methods on five image datasets and two multilingual problems from the XTREME benchmark, showing that RW methods can achieve comparable performance with state-of-the-art baselines. Therefore, we believe that the RW methods are important baselines for MTL and should attract more attention.
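RLW itself is nearly a one-liner. A numpy sketch using softmax-of-Gaussian weights, one of several sampling distributions one could assume:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_loss_weights(num_tasks, rng):
    # Random Loss Weighting (RLW): at each training step, draw fresh
    # weights from a distribution (here, the softmax of standard
    # normals) and combine the task losses with them.
    z = rng.standard_normal(num_tasks)
    w = np.exp(z) / np.exp(z).sum()
    return w

task_losses = np.array([0.7, 1.2, 0.4])
w = random_loss_weights(3, rng)
total = w @ task_losses              # scalar loss used for the update
```

Resampling the weights at every step is the point: in expectation each task is weighted equally, while the per-step noise is what the paper argues helps escape local minima.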

URL: https://openreview.net/forum?id=jjtFD8A1Wx


Title: Dual Regularized Optimal Transport

Abstract: In this paper, we present a new formulation of unbalanced optimal transport called Dual Regularized Optimal Transport (DROT). We argue that regularizing the dual formulation of optimal transport results in a version of unbalanced optimal transport that leads to sparse solutions and that gives us control over mass creation and destruction. We build intuition behind such control and present theoretical properties of the solutions to DROT. We demonstrate that due to recent advances in optimization techniques, we can feasibly solve such a formulation at large scales and present extensive experimental evidence for this formulation and its solution.

URL: https://openreview.net/forum?id=TdzzL32OUd


Title: Active Learning of Ordinal Embeddings: A User Study on Football Data

Abstract: Humans innately measure the distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for similarity in information retrieval of similar instances. Learning a good similarity function from human annotations improves the quality of retrievals. This work uses deep metric learning to learn these user-defined similarity functions from few annotations for a large football trajectory dataset. We adapt an entropy-based active learning method with recent work from triplet mining to collect easy-to-answer but still informative annotations from human participants and use them to train a deep convolutional network that generalizes to unseen samples. Our user study shows that our approach improves the quality of the information retrieval compared to a previous deep metric learning approach that relies on a Siamese network. Specifically, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners alike by analyzing the participants' response efficacy. To this end, we collect accuracy, algorithmic time complexity, the participants' fatigue and time-to-response, qualitative self-assessment and statements, as well as the effects of mixed-expertise annotators and their consistency on model performance and transfer-learning.
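The triplet annotations collected from participants are typically consumed through a triplet objective. A generic numpy sketch (the exact loss and mining strategy in the paper may differ):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet objective from deep metric learning: push the
    # anchor-positive distance below the anchor-negative distance by
    # at least `margin`; triplets already satisfying this contribute 0.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
easy = triplet_loss(a, np.array([1.0, 0.0]), np.array([3.0, 0.0]))
hard = triplet_loss(a, np.array([3.0, 0.0]), np.array([1.0, 0.0]))
```

Triplet mining, as used in the abstract, is about choosing which (anchor, positive, negative) triples to ask annotators about so that most collected triplets are informative rather than trivially satisfied.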

URL: https://openreview.net/forum?id=oq3tx5kinu


Title: Multi-Resolution Continuous Normalizing Flows

Abstract: Recent work has shown that Neural Ordinary Differential Equations (ODEs) can serve as generative models of images using the perspective of Continuous Normalizing Flows (CNFs). Such models offer exact likelihood calculation, and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF), by characterizing the conditional distribution over the additional information required to generate a fine image that is consistent with the coarse image. We introduce a transformation between resolutions that leaves the log likelihood unchanged. We show that this approach yields comparable likelihood values for various image datasets, with improved performance at higher resolutions, with fewer parameters, using only one GPU. Further, we examine the out-of-distribution properties of MRCNFs, and find that they are similar to those of other likelihood-based generative models.

URL: https://openreview.net/forum?id=Ymu8ISwZzl


Title: Robust uncertainty estimates with out-of-distribution pseudo-inputs training

Abstract: Probabilistic models often use neural networks to control their predictive uncertainty. However, when making out-of-distribution (OOD) predictions, the often-uncontrollable extrapolation properties of neural networks yield poor uncertainty predictions. Such models then don't know what they don't know, which directly limits their robustness w.r.t. unexpected inputs. To counter this, we propose to explicitly train the uncertainty predictor where we are not given data, to make it reliable there. As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space, and show how to leverage these in a practical Bayesian framework that casts a prior distribution over the model uncertainty. With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks such as regression and generative modelling.

URL: https://openreview.net/forum?id=WiuqtomnY6

