Weekly TMLR digest for May 29, 2022


TMLR

May 28, 2022, 8:00:06 PM
to tmlr-annou...@googlegroups.com

New submissions
===============


Title: Convergence Analysis of Schrödinger-Föllmer Sampler without Convexity

Abstract: Schr\"{o}dinger-F\"{o}llmer sampler (SFS) (Huang et al., 2021) is a novel and efficient approach for sampling from possibly unnormalized distributions without ergodicity. SFS is based on the Euler-Maruyama discretization of Schr\"{o}dinger-F\"{o}llmer diffusion process
$$\mathrm{d} X_{t}=-\nabla U\left(X_t, t\right) \mathrm{d} t+\mathrm{d} B_{t}, \quad t \in[0,1],\quad X_0=0$$ on the unit interval, which transports the degenerate distribution at time zero to the target distribution at time one. In Huang et al. (2021), the consistency of SFS is established under a restricted assumption that the potential $U(x,t)$ is uniformly (on $t$) strongly convex (on $x$). In this paper we provide a non-asymptotic error bound of SFS in Wasserstein distance under some smooth and bounded conditions on the density ratio of the target distribution over the standard normal distribution, but without requiring the strongly convexity of the potential.
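For intuition, the following minimal NumPy sketch runs the Euler-Maruyama discretization of a diffusion of the above form on $[0,1]$. The drift passed in is a generic placeholder, not the Schrödinger-Föllmer drift $-\nabla U$ for any particular target.

    import numpy as np

    def euler_maruyama(drift, x0, n_steps=200, n_paths=2000, seed=0):
        """Simulate dX_t = drift(X_t, t) dt + dB_t on [0, 1] from X_0 = x0."""
        rng = np.random.default_rng(seed)
        dt = 1.0 / n_steps
        x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
        for k in range(n_steps):
            t = k * dt
            x = x + drift(x, t) * dt + rng.normal(scale=np.sqrt(dt), size=x.shape)
        return x  # empirical approximation of the time-1 law

    # Placeholder drift (simple mean reversion); the SFS drift -grad U(x, t)
    # depends on the target density and is not constructed here.
    samples = euler_maruyama(lambda x, t: -x, x0=np.zeros(2))
    print(samples.mean(axis=0), samples.std(axis=0))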

URL: https://openreview.net/forum?id=V3gHTlZIoP

---

Title: Weight Expansion: A New Perspective on Dropout and Generalization

Abstract: While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking. We introduce the concept of weight expansion, an increase in the signed volume of a parallelotope spanned by the column or row vectors of the weight covariance matrix, and show that weight expansion is an effective means of improving generalization in a PAC-Bayesian setting. We provide a theoretical argument that dropout leads to weight expansion and extensive empirical support for the correlation between dropout and weight expansion. To support our hypothesis that weight expansion can be regarded as an indicator of the enhanced generalization capability endowed by dropout, and not just as a mere by-product, we have studied other methods that achieve weight expansion (resp.\ contraction), and found that they generally lead to an increased (resp.\ decreased) generalization ability. This suggests that dropout is an attractive regularizer, because it is a computationally cheap method for obtaining weight expansion. This insight justifies the role of dropout as a regularizer, while paving the way for identifying regularizers that promise improved generalization through weight expansion.
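As a rough illustration of the quantity involved (our reading of the definition, not the authors' code): the signed volume of the parallelotope spanned by the rows of the weight covariance matrix is its determinant, which can be measured stably via the log-determinant.

    import numpy as np

    def weight_volume(W):
        """Signed volume spanned by the rows of the covariance of W's rows.

        For a square covariance matrix C that volume is det(C); we return the
        sign and log|det| for numerical stability.
        """
        C = np.cov(W, rowvar=True)              # covariance across rows of W
        sign, logabsdet = np.linalg.slogdet(C)
        return sign, logabsdet

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 256))              # a hypothetical weight matrix
    print(weight_volume(W))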

URL: https://openreview.net/forum?id=w3z3sN1b04

---

Title: A Comprehensive Study of Real-Time Object Detection Networks Across Multiple Domains: A Survey

Abstract: Deep neural network based object detectors are continuously evolving and are used in a multitude of applications, each having its own set of requirements. While safety-critical applications need high accuracy and reliability, low-latency tasks need resource- and energy-efficient networks. Real-time detection networks, which are a necessity in high-impact real-world applications, are continuously proposed, but they overemphasize improvements in accuracy and speed while omitting other capabilities such as versatility, robustness, and resource and energy efficiency. Neither a reference benchmark for existing networks nor a standard evaluation guideline for designing new networks exists, which results in ambiguous and inconsistent comparisons. We therefore conduct a comprehensive study on multiple real-time detection networks (anchor-based, keypoint-based, and transformer-based) on a wide range of datasets and report results on an extensive set of metrics. We also study the impact of variables such as image size, anchor dimensions, confidence thresholds, and architecture layers on the overall performance. We analyze the robustness of detection networks against distribution shift and natural corruptions, and also provide the calibration metric to gauge the reliability of the predictions. Finally, to highlight the real-world impact, we conduct two case studies, on autonomous driving and a healthcare application. To further gauge the capability of networks in critical real-time applications, we report the performance after deploying the detection networks on edge devices. Our extensive empirical study can act as a guideline for the industrial community to make an informed choice on the existing networks. We also hope to inspire the research community towards a new direction in the design and evaluation of networks that focuses on a broader, holistic overview for far-reaching impact.

URL: https://openreview.net/forum?id=ywr5sWqQt4

---

Title: Conformal Prediction Intervals with Temporal Dependence

Abstract: Cross-sectional prediction is common in many domains such as healthcare, including forecasting tasks using electronic health records, where different patients form a cross-section. We focus on the task of constructing valid prediction intervals (PIs) in time-series regression with a cross-section. A prediction interval is considered valid if it covers the true response with (a pre-specified) high probability. We first distinguish between two notions of validity in such a setting: cross-sectional and longitudinal. Cross-sectional validity is concerned with validity across the cross-section of the time series data, while longitudinal validity accounts for the temporal dimension. Coverage guarantees along both these dimensions are ideally desirable; however, we show that distribution-free longitudinal validity is theoretically impossible. Despite this limitation, we propose Conformal Prediction with Temporal Dependence (CPTD), a procedure which is able to maintain strict cross-sectional validity while improving longitudinal coverage. CPTD is post-hoc and lightweight, and can easily be used in conjunction with any prediction model as long as a calibration set is available. We focus on neural networks due to their ability to model complicated data such as diagnosis codes for time-series regression, and perform extensive experimental validation to verify the efficacy of our approach. We find that CPTD outperforms baselines on a variety of datasets by improving longitudinal coverage and often providing more efficient (narrower) PIs.
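For readers unfamiliar with conformal prediction, here is the generic split conformal step that post-hoc methods of this kind build on. CPTD's temporal adjustment of the scores is not shown, so treat this as background only, not as the paper's procedure.

    import numpy as np

    def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
        """Standard split conformal PI from held-out calibration residuals."""
        n = len(residuals_cal)
        # Finite-sample-corrected quantile of absolute residuals.
        q = np.quantile(np.abs(residuals_cal),
                        min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
        return y_pred_test - q, y_pred_test + q

    rng = np.random.default_rng(0)
    resid = rng.normal(size=200)                # calibration residuals
    lo, hi = split_conformal_interval(resid, y_pred_test=3.0)
    print(lo, hi)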

URL: https://openreview.net/forum?id=8QoxXTDcsH

---

Title: Diagnosing and Fixing Manifold Overfitting in Deep Generative Models

Abstract: Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space. In this paper we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove, in a measure-theoretic way, that degenerate optima are achieved wherein the manifold itself is learned but not the distribution on it, a phenomenon we call manifold overfitting. We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by implicit models, such as generative adversarial networks, hence addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.
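The simplest instance of such a two-step procedure, sketched below under our own choices (PCA for the reduction step, a Gaussian for the maximum-likelihood step), illustrates the structure; the class of procedures covered by the paper is far more general.

    import numpy as np

    def two_step_density(X, latent_dim=2):
        """Generic two-step procedure: linear dimension reduction (PCA), then
        maximum-likelihood density estimation (a Gaussian) in latent space.
        """
        mu = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
        V = Vt[:latent_dim].T                    # principal directions
        Z = (X - mu) @ V                         # step 1: reduce dimension
        z_mean = Z.mean(axis=0)                  # step 2: MLE Gaussian fit
        z_cov = np.cov(Z, rowvar=False)
        return mu, V, z_mean, z_cov

    rng = np.random.default_rng(0)
    Z_true = rng.normal(size=(500, 2))
    A = rng.normal(size=(2, 10))
    X = Z_true @ A                               # data on a 2-D manifold in R^10
    print(two_step_density(X)[3])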

URL: https://openreview.net/forum?id=0nEZCVshxS

---

Title: Deep Classifiers with Label Noise Modeling and Distance Awareness

Abstract: Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications. While there have been many proposed methods that either focus on distance-aware model uncertainties for out-of-distribution detection or on input-dependent label uncertainties for in-distribution calibration, both of these types of uncertainty are often necessary. In this work, we propose the HetSNGP method for jointly modeling the model and data uncertainty. We show that our proposed model affords a favorable combination of these two types of uncertainty and thus outperforms the baseline methods on some challenging out-of-distribution datasets, including CIFAR-100C, ImageNet-C, and ImageNet-A. Moreover, we propose HetSNGP Ensemble, an ensembled version of our method which additionally models uncertainty over the network parameters and outperforms other ensemble baselines.

URL: https://openreview.net/forum?id=Id7hTt78FV

---

Title: Ranking Recovery under Privacy Considerations

Abstract: We consider the {\em private ranking recovery problem}, where a data collector seeks to estimate the permutation/ranking of a data vector given a randomized (privatized) version of it. We aim to establish fundamental trade-offs between the performance of the estimation task, measured in terms of {\em probability of error}, and the {\em level of privacy} that can be guaranteed when the noise mechanism consists of adding artificial noise. Towards this end, we show the optimality of a low-complexity decision rule (referred to as the linear decoder) for the estimation task, under several noise distributions widely used in the privacy literature (e.g., Gaussian, Laplace, and generalized normal models). We derive the Taylor series of the probability of error, which yields its first- and second-order approximations when such a linear decoder is employed. We quantify the guaranteed level of privacy using the $(\alpha,\epsilon)$-Rényi differential privacy metric. Finally, we put these results together to characterize the trade-off between privacy and probability of error.
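To make the setup concrete, a small Monte Carlo sketch. We assume (our guess, not the paper's exact decoder) that a natural low-complexity rule recovers the ranking by sorting the noisy entries, and we estimate the probability of error under Laplace noise of varying scale.

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_of_error(x, noise_scale, n_trials=10000):
        """Monte Carlo estimate of P(recovered ranking != true ranking) when
        the privatized vector is x + Laplace noise and the decoder ranks by
        sorting the noisy entries.
        """
        true_rank = np.argsort(x)
        errors = 0
        for _ in range(n_trials):
            y = x + rng.laplace(scale=noise_scale, size=x.shape)
            errors += not np.array_equal(np.argsort(y), true_rank)
        return errors / n_trials

    x = np.array([0.0, 0.5, 1.0, 2.0])
    for b in (0.1, 0.5, 1.0):                   # larger scale = more privacy
        print(b, prob_of_error(x, b))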

URL: https://openreview.net/forum?id=2EOVIvRXlv

---

Title: Recurrent networks, hidden states and beliefs in partially observable environments

Abstract: Reinforcement learning aims to learn optimal policies from interaction with environments whose dynamics are unknown. Many methods rely on the approximation of a value function to derive near-optimal policies. In partially observable environments, these functions depend on the complete sequence of observations and past actions, called the history. In this work, we show empirically that recurrent neural networks trained to approximate such value functions internally filter the posterior probability distribution of the current state given the history, called the belief. More precisely, we show that, as a recurrent neural network learns the Q-function, its hidden states become more and more correlated with the beliefs of state variables that are relevant to optimal control. This correlation is measured through their mutual information. In addition, we show that the expected return of an agent increases with the ability of its recurrent architecture to reach a high mutual information between its hidden states and the beliefs. Finally, we show that the mutual information between the hidden states and the beliefs of variables that are irrelevant for optimal control decreases through the learning process. In summary, this work shows that in its hidden states, an RNN approximating the Q-function of a partially observable environment reproduces a sufficient statistic of the history that is correlated with the part of the belief relevant for taking optimal actions.

URL: https://openreview.net/forum?id=dkHfV3wB2l

---

Title: Evaluation Pitfalls in Data Augmentation for Adversarial Robustness

Abstract: Recent work has proposed novel data augmentation methods to improve the adversarial robustness of deep neural networks. In this paper, we re-evaluate such methods under a common framework and through the lens of different metrics that characterize the augmented manifold, finding contradictory evidence. In particular, our extensive empirical analysis involving 5 data augmentation methods, tested with 10 augmentation probabilities, shows that: (i) novel data augmentation methods proposed to improve adversarial robustness only improve it when combined with classical augmentations, like image flipping and rotation; (ii) novel data augmentation methods even worsen adversarial robustness if not combined with classical augmentations; and (iii) adversarial robustness is significantly affected by the augmentation probability, contrary to what is claimed in recent work. We conclude by discussing how to rethink the development and evaluation of novel data augmentation methods for adversarial robustness.

URL: https://openreview.net/forum?id=gHpTlBGRQg

---

Title: A Simple Yet Effective SVD-GCN for Directed Graphs

Abstract: In this paper, we present a simple yet effective approach to convolutional neural networks for directed graphs (digraphs), based on the classic Singular Value Decomposition (SVD), which we name SVD-GCN. Through empirical experiments on node classification datasets, we find that SVD-GCN achieves remarkable improvements on a number of graph node learning tasks and outperforms GCN and many other state-of-the-art graph neural networks.
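Since the abstract gives no architectural details, the following is only a speculative sketch of one way an SVD of the (asymmetric) digraph adjacency could replace the usual symmetric normalization in a GCN-style layer; every design choice here is our assumption, not the paper's layer.

    import numpy as np

    def svd_gcn_layer(A, X, W, k=16):
        """GCN-style propagation for a digraph using a rank-k SVD of the
        asymmetric adjacency in place of symmetric normalization.
        """
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        A_hat = (U[:, :k] * S[:k]) @ Vt[:k]     # low-rank propagation operator
        H = A_hat @ X @ W
        return np.maximum(H, 0.0)               # ReLU

    rng = np.random.default_rng(0)
    A = (rng.random((50, 50)) < 0.1).astype(float)   # random digraph
    X = rng.normal(size=(50, 8))
    W = rng.normal(size=(8, 4))
    print(svd_gcn_layer(A, X, W).shape)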

URL: https://openreview.net/forum?id=icVzOFd2BN

---

Title: Offline Policy Comparison with Confidence: Benchmarks and Baselines

Abstract: Decision makers often wish to use offline historical data to compare sequential-action policies at various world states. Importantly, computational tools should produce confidence values for such offline policy comparison (OPC) to account for statistical variance and limited data coverage. Nevertheless, there is little work that directly evaluates the quality of confidence values for OPC. In this work, we address this issue by creating benchmarks for OPC with Confidence (OPCC), derived by adding sets of policy comparison queries to datasets from offline reinforcement learning. In addition, we present an empirical evaluation of the risk versus coverage trade-off for a class of model-based baselines. In particular, the baselines learn ensembles of dynamics models, which are used in various ways to produce simulations for answering queries with confidence values. While our results suggest advantages for certain baseline variations, there appears to be significant room for improvement in future work.

URL: https://openreview.net/forum?id=9blaLpc5iz

---

Title: How Expressive are Transformers in Spectral Domain for Graphs?

Abstract: Recent works proposing transformer-based models for graphs have demonstrated the inadequacy of the vanilla Transformer for graph representation learning. To understand this inadequacy, we need to investigate whether spectral analysis of the transformer reveals insights into its expressive power. Similar studies have already established that spectral analysis of graph neural networks (GNNs) provides extra perspectives on their expressiveness.
In this work, we systematically study and establish the link between the spatial and spectral domains in the realm of the transformer. We further provide a theoretical analysis showing that the spatial attention mechanism in the transformer cannot effectively capture the desired frequency response, thus inherently limiting its expressiveness in spectral space. Therefore, we propose FeTA, a framework that aims to perform attention over the entire graph spectrum, analogous to attention in spatial space.
Empirical results suggest that FeTA provides consistent performance gains over the vanilla transformer across all tasks on standard benchmarks and can easily be extended to GNN-based models with low-pass characteristics (e.g., GAT). Furthermore, replacing the vanilla transformer model with FeTA in recently proposed position encoding schemes results in comparable or better performance than transformer and GNN baselines.
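For background, the "frequency response" referred to above is a gain applied to graph Laplacian eigenvalues. The sketch below shows that basic spectral filtering machinery only; FeTA's actual attention mechanism is richer and is not reproduced here.

    import numpy as np

    def spectral_filter(A, X, response):
        """Filter node features in the graph spectral domain, where `response`
        maps Laplacian eigenvalues to gains (the frequency response).
        """
        d = A.sum(axis=1)
        L = np.diag(d) - A                       # combinatorial Laplacian
        lam, U = np.linalg.eigh(L)               # graph Fourier basis
        return U @ (response(lam)[:, None] * (U.T @ X))

    rng = np.random.default_rng(0)
    A = (rng.random((30, 30)) < 0.2).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                  # undirected graph
    X = rng.normal(size=(30, 4))
    low_pass = lambda lam: 1.0 / (1.0 + lam)     # a low-pass gain, GAT-like character
    print(spectral_filter(A, X, low_pass).shape)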

URL: https://openreview.net/forum?id=aRsLetumx1

---

Title: Mean-Field Langevin Dynamics: Exponential Convergence and Annealing

Abstract: Noisy particle gradient descent (NPGD) is an algorithm to minimize, over the space of measures, convex functionals that include an entropy term. In the many-particle limit, this algorithm is described by a Mean-Field Langevin dynamics, a generalization of Langevin dynamics with a non-linear drift, which is our main object of study. Previous work has shown its convergence to the unique minimizer via non-quantitative arguments. We prove that this dynamics converges at an exponential rate, under the assumption that a certain family of log-Sobolev inequalities holds. This assumption holds, for instance, for the minimization of the risk of certain two-layer neural networks, where NPGD is equivalent to standard noisy gradient descent. We also study the annealed dynamics, and show that for a noise decaying at a logarithmic rate, the dynamics converges in value to the global minimizer of the unregularized objective function.
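A minimal sketch of noisy particle gradient descent on a toy objective, with the entropy weight tau controlling the injected noise; the step size and the potential are illustrative choices only.

    import numpy as np

    def npgd(grad_V, n_particles=512, n_steps=2000, lr=0.05, tau=0.1, seed=0):
        """Noisy particle gradient descent: each particle follows the gradient
        plus Gaussian noise whose magnitude is set by the entropy weight tau.
        """
        rng = np.random.default_rng(seed)
        x = rng.normal(size=(n_particles, 1))
        for _ in range(n_steps):
            noise = rng.normal(size=x.shape) * np.sqrt(2 * tau * lr)
            x = x - lr * grad_V(x) + noise
        return x

    # Double-well potential V(x) = (x^2 - 1)^2 as a toy objective.
    grad_V = lambda x: 4 * x * (x**2 - 1)
    particles = npgd(grad_V)
    print(particles.mean(), particles.std())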

URL: https://openreview.net/forum?id=BDqzLH1gEm

---

Title: Application of Referenced Thermodynamic Integration to Bayesian Model Selection

Abstract: Evaluating normalising constants is important across a range of topics in statistical learning, notably Bayesian model selection. However, in many realistic problems this involves the integration of analytically intractable, high-dimensional distributions, and therefore requires the use of stochastic methods such as thermodynamic integration (TI). In this paper we apply a simple but under-appreciated variation of the TI method, here referred to as \emph{referenced TI}, which computes a single model's normalising constant efficiently by using a judiciously chosen reference density. The advantages of the approach and theoretical considerations are set out, along with pedagogical one- and two-dimensional examples. The approach is shown to be useful in practice when applied to a real problem: performing model selection for a semi-mechanistic hierarchical Bayesian model of COVID-19 transmission in South Korea, involving the integration of a 200-dimensional density.
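A one-dimensional toy version of the referenced TI identity with a Gaussian reference, using grid quadrature in place of MCMC (quadrature is only viable in low dimension; the function names and the geometric path are our construction, not the paper's code).

    import numpy as np

    def trapezoid(y, x):
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def referenced_ti_logZ(log_p_tilde, log_q, grid, betas):
        """log Z = log Z_q + int_0^1 E_{p_beta}[log p_tilde - log q] dbeta,
        where p_beta ~ q^(1-beta) p_tilde^beta and q is the (unnormalized)
        reference with known normalizer Z_q.
        """
        lp, lq = log_p_tilde(grid), log_q(grid)
        means = []
        for b in betas:
            w = np.exp((1 - b) * lq + b * lp)
            w = w / trapezoid(w, grid)           # normalized path density
            means.append(trapezoid(w * (lp - lq), grid))
        return trapezoid(np.array(means), betas)

    grid = np.linspace(-6.0, 6.0, 4001)
    betas = np.linspace(0.0, 1.0, 51)
    log_p = lambda x: -x**4                      # unnormalized target
    log_q = lambda x: -0.5 * x**2                # Gaussian reference, Z_q = sqrt(2 pi)
    est = 0.5 * np.log(2 * np.pi) + referenced_ti_logZ(log_p, log_q, grid, betas)
    exact = np.log(trapezoid(np.exp(-grid**4), grid))
    print(est, exact)                            # the two should roughly agree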

URL: https://openreview.net/forum?id=brjAu7frHb

---

Title: Domain-invariant Feature Exploration for Domain Generalization

Abstract: Deep learning has achieved great success in the past few years. However, its performance is likely to degrade in the face of non-IID situations. Domain generalization (DG) has attracted increasing interest in recent years; it enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from both internal and mutual sides: the internally-invariant features capture the intrinsic semantics of the data, while the mutually-invariant features learn the cross-domain transferable knowledge. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment as the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
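The Fourier phase is a standard quantity; the sketch below extracts it for an image. DIFEX distills it through a network rather than using it raw, so this shows only the signal being distilled.

    import numpy as np

    def fourier_phase_features(x):
        """Fourier phase of an image: amplitude is commonly taken to carry
        style, phase to carry semantics, which motivates using phase as an
        internally-invariant feature.
        """
        F = np.fft.fft2(x, axes=(-2, -1))
        return np.angle(F)                      # phase spectrum

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    print(fourier_phase_features(img).shape)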

URL: https://openreview.net/forum?id=0xENE7HiYm

---

Title: Zero-shot Object Detection with a Text and Image Contrastive Model

Abstract: We introduce DUCE, a generalizable zero-shot object detector, and BCC, a novel method of bounding-box consolidation for models where traditional non-maximum suppression is insufficient. DUCE leverages the zero-shot performance of CLIP (Radford et al. (2021)) in combination with a region proposal network (Ren et al. (2015)) to achieve state-of-the-art results in generalized zero-shot object detection with minimal training. This approach introduces a new challenge: DUCE labels portions of an image with very high confidence, leading to numerous high-confidence bounding boxes around an object of interest. In these scenarios, traditional forms of non-maximum suppression fail to reduce the number of bounding boxes. We introduce BCC as a new approach to bounding-box suppression that allows us to successfully navigate this challenge. DUCE and BCC achieve results competitive with other state-of-the-art models for all classes, agnostic of whether the RPN was trained on those classes. Our proposed model and new bounding-box consolidation method represent a novel approach to the zero-shot object detection task.
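The abstract does not spell out BCC, so the following is merely a generic illustration of consolidation (merging mutually overlapping boxes into one averaged box) as opposed to suppression; the greedy IoU rule is our placeholder, not the paper's method.

    import numpy as np

    def iou(a, b):
        """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def consolidate(boxes, thr=0.5):
        """Greedy consolidation: every box whose IoU with a cluster seed
        exceeds thr is merged into one averaged box.
        """
        boxes, merged = list(map(np.asarray, boxes)), []
        while boxes:
            seed = boxes.pop(0)
            group = [seed] + [b for b in boxes if iou(seed, b) > thr]
            boxes = [b for b in boxes if iou(seed, b) <= thr]
            merged.append(np.mean(group, axis=0))
        return merged

    print(consolidate([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]))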

URL: https://openreview.net/forum?id=k8lB1Pkswa

---

Title: Geometric Random Walk Graph Neural Networks via Implicit Layers

Abstract: Graph neural networks have recently attracted a lot of attention and have been applied with great success to several important graph problems. The Random Walk Graph Neural Network model was recently proposed as a more intuitive alternative to the well-studied family of message passing neural networks. This model compares each input graph against a set of latent ``hidden graphs'' using a kernel that counts common random walks up to some length. In this paper, we propose a new architecture, called Geometric Random Walk Graph Neural Network (GRWNN), that generalizes the above model such that it can count common walks of infinite length in two graphs. The proposed model retains the transparency of Random Walk Graph Neural Networks since its first layer also consists of a number of trainable ``hidden graphs'' which are compared against the input graphs using the geometric random walk kernel. To compute the kernel, we employ a fixed-point iteration approach involving implicitly defined operations. Then, we capitalize on implicit differentiation to derive an efficient training scheme which requires only constant memory, regardless of the number of fixed-point iterations. The employed random walk kernel is differentiable, and therefore, the proposed model is end-to-end trainable. Experiments on standard graph classification datasets demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art methods.
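The geometric random walk kernel the model relies on has a standard fixed-point form; below is a minimal sketch on the Kronecker product graph (dense and unbatched here, unlike the implicit-layer implementation described above).

    import numpy as np

    def geometric_rw_kernel(A1, A2, lam=0.05, n_iter=100):
        """Geometric random walk kernel, i.e. sum_k lam^k (number of common
        walks of length k), via fixed-point iteration x <- 1 + lam * W x on
        the Kronecker product graph; requires lam below 1/spectral-radius.
        """
        W = np.kron(A1, A2)                     # product-graph adjacency
        x = np.ones(W.shape[0])
        for _ in range(n_iter):
            x = 1.0 + lam * (W @ x)             # converges to (I - lam W)^-1 1
        return x.sum()

    rng = np.random.default_rng(0)
    A1 = (rng.random((6, 6)) < 0.4).astype(float)
    A1 = np.triu(A1, 1); A1 = A1 + A1.T
    A2 = (rng.random((5, 5)) < 0.4).astype(float)
    A2 = np.triu(A2, 1); A2 = A2 + A2.T
    print(geometric_rw_kernel(A1, A2))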

URL: https://openreview.net/forum?id=VeWJZAGPcZ

---

Title: Leveraging Causal Graphs for Blocking in Randomized Experiments

Abstract: \textit{Randomized experiments} are often performed to study the causal effects of interest. \textit{Blocking} is a technique to precisely estimate the causal effects when the experimental material is not homogeneous. It involves stratifying the available experimental material based on the covariates causing non-homogeneity and then randomizing the treatment within those strata (known as \textit{blocks}). This eliminates the unwanted effect of the covariates on the causal effects of interest. We investigate the problem of finding a \textit{stable} set of covariates to be used to form blocks, that minimizes the variance of the causal effect estimates. Using the underlying causal graph, we provide an efficient algorithm to obtain such a set for a general \textit{semi-Markovian} causal model.
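For concreteness, here is the basic blocking scheme the abstract describes, assuming the covariate set has already been chosen (choosing that set is exactly what the paper's algorithm does).

    import numpy as np

    def block_randomize(covariates, seed=0):
        """Randomize a binary treatment within blocks defined by the chosen
        covariates: units sharing covariate values form a block, and half of
        each block (rounded down) is treated, at random.
        """
        rng = np.random.default_rng(seed)
        treatment = np.empty(len(covariates), dtype=int)
        keys = [tuple(row) for row in covariates]
        for key in set(keys):
            idx = [i for i, k in enumerate(keys) if k == key]   # one block
            t = np.zeros(len(idx), dtype=int)
            t[: len(idx) // 2] = 1
            treatment[idx] = rng.permutation(t)     # randomize within block
        return treatment

    cov = np.array([[0, 1], [0, 1], [1, 0], [1, 0], [0, 1], [1, 0]])
    print(block_randomize(cov))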

URL: https://openreview.net/forum?id=dpSskDgDfE

---

Title: Decoder Denoising Pretraining for Semantic Segmentation

Abstract: Semantic segmentation labels are expensive and time-consuming to acquire. Hence, pretraining is commonly used to improve the label-efficiency of segmentation models. Typically, the encoder of a segmentation model is pretrained as a classifier and the decoder is randomly initialized. Here, we argue that random initialization of the decoder can be suboptimal, especially when few labeled examples are available. We propose a decoder pretraining approach based on denoising, which can be combined with supervised pretraining of the encoder. We find that decoder denoising pretraining on the ImageNet dataset strongly outperforms encoder-only supervised pretraining. Despite its simplicity, decoder denoising pretraining achieves state-of-the-art results on label-efficient semantic segmentation and offers considerable gains on the Cityscapes, Pascal Context, and ADE20K datasets.
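A bare-bones version of the idea, with a frozen random linear map standing in for a pretrained encoder and a trainable linear "decoder"; all sizes, the noise level, and the learning rate are illustrative assumptions.

    import numpy as np

    def denoising_pretrain_step(decoder_W, encode, x, sigma=0.5, lr=5e-3, rng=None):
        """One decoder-denoising step: corrupt the input, encode it with the
        frozen encoder, and train the decoder to reconstruct the clean input.
        """
        rng = rng or np.random.default_rng(0)
        x_noisy = x + sigma * rng.normal(size=x.shape)
        z = encode(x_noisy)                     # frozen encoder features
        x_hat = z @ decoder_W                   # linear "decoder"
        grad = 2 * z.T @ (x_hat - x) / len(x)   # squared-error gradient (up to scale)
        return decoder_W - lr * grad, np.mean((x_hat - x) ** 2)

    rng = np.random.default_rng(0)
    E = rng.normal(size=(16, 8))                # fixed random "encoder"
    encode = lambda v: v @ E
    W = np.zeros((8, 16))
    x = rng.normal(size=(64, 16))
    for _ in range(500):
        W, loss = denoising_pretrain_step(W, encode, x, rng=rng)
    print(loss)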

URL: https://openreview.net/forum?id=D3WI0QG7dC

---
