Daily TMLR digest for Mar 05, 2023


TMLR

Mar 4, 2023, 7:00:08 PM
to tmlr-anno...@googlegroups.com


New submissions
===============


Title: Summary Statistic Privacy in Data Sharing

Abstract: Data sharing between different parties has become increasingly common across industry and academia. An important class of privacy concerns that arises in data sharing scenarios regards the underlying distribution of data. For example, the total traffic volume of data from a networking company can reveal the scale of its business, which may be considered a trade secret. Unfortunately, existing privacy frameworks (e.g., differential privacy, anonymization) do not adequately address such concerns. In this paper, we propose summary statistic privacy, a framework for analyzing and protecting such distributional privacy concerns. We propose a class of quantization mechanisms that can be tailored to various data distributions and statistical secrets, and analyze their privacy-distortion trade-offs under our framework. We prove corresponding lower bounds on the privacy-utility trade-off, which match the trade-offs of the quantization mechanism under certain regimes, up to small constant factors. Finally, we demonstrate that the proposed quantization mechanisms achieve better privacy-distortion trade-offs than alternative privacy mechanisms on real-world datasets.

URL: https://openreview.net/forum?id=oFZyeFe85Z
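The core idea of a quantization mechanism — snapping a secret summary statistic onto a coarse grid so that all secrets within a bin are released identically — can be sketched as follows. This is a minimal illustration of the general principle, not the paper's specific mechanism; the function name, bin width, and use of the mean as the secret statistic are all assumptions for the example.

```python
import numpy as np

def quantize_mean_release(data, bin_width):
    """Release a shifted copy of `data` whose mean is snapped to the
    center of a quantization bin, hiding the true mean within that bin.

    Hypothetical sketch: any true mean inside a given bin yields the
    same released mean (privacy), while the per-sample shift is at most
    bin_width / 2 (distortion)."""
    true_mean = data.mean()
    snapped = (np.floor(true_mean / bin_width) + 0.5) * bin_width
    return data + (snapped - true_mean)

rng = np.random.default_rng(0)
data = rng.normal(loc=3.7, scale=1.0, size=1000)
released = quantize_mean_release(data, bin_width=2.0)
# The released mean sits exactly on a bin center; an observer learns only
# which bin the true mean fell in, not its exact value.
```

Larger bins hide the secret more coarsely but distort the released data more, which is the privacy-distortion trade-off the paper analyzes.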

---

Title: You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at the Inference Time

Abstract: Deep neural networks are prone to various bias issues, jeopardizing their applications for high-stakes decision-making. Existing fairness methods typically offer a fixed accuracy-fairness trade-off at the inference time, since the weights of the well-trained model are a fixed point (fairness-optimum) in the weight space. Nevertheless, more flexible accuracy-fairness trade-offs at the inference time are practically desired since: 1) stakes of the same downstream task can vary for different individuals, and 2) different regions have diverse laws or regulations on fairness. With previous fairness methods, we would have to train multiple models, each offering a specific level of accuracy-fairness trade-off. This is often computationally expensive, time-consuming, and difficult to deploy, making it less practical for real-world applications. To address this problem, we propose You Only Debias Once (YODO) to achieve in-situ flexible accuracy-fairness trade-offs at the inference time, using a single model that is trained only once. Instead of pursuing one individual fixed point (fairness-optimum) in the weight space, we aim to find a "line" in the weight space that connects the accuracy-optimum and fairness-optimum points using a single model. Points (models) on this line implement varying levels of accuracy-fairness trade-offs. At the inference time, by manually selecting the specific position on the learned "line", our proposed method can achieve arbitrary accuracy-fairness trade-offs for different end-users and scenarios. Experimental results on tabular and image datasets show that YODO achieves flexible trade-offs between model accuracy and fairness, at ultra-low overheads. Our code is anonymously available at https://anonymous.4open.science/r/yodo-BB81 .

URL: https://openreview.net/forum?id=dSbbZwCTQI
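The "line in weight space" idea amounts to linearly interpolating between two sets of model weights and selecting the interpolation coefficient at inference time. A minimal sketch, assuming the two endpoints `theta_acc` (accuracy-optimum) and `theta_fair` (fairness-optimum) come out of YODO's single training run; the function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def weights_on_line(theta_acc, theta_fair, lam):
    """Return interpolated weights for trade-off level lam in [0, 1]:
    lam = 0 recovers the accuracy-optimum endpoint, lam = 1 the
    fairness-optimum endpoint, and intermediate values trade between them."""
    return {name: (1.0 - lam) * theta_acc[name] + lam * theta_fair[name]
            for name in theta_acc}

# Toy endpoints for a one-layer model.
theta_acc = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
theta_fair = {"w": np.array([0.0, 0.0]), "b": np.array([1.0])}

# A midpoint model: one choice of accuracy-fairness trade-off.
mid = weights_on_line(theta_acc, theta_fair, 0.5)
```

Because only a scalar `lam` changes at inference time, serving many trade-off levels costs one interpolation pass over the weights rather than one training run per level.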

---

Title: Posterior Annealing: Fast Calibrated Uncertainty for Regression

Abstract: Bayesian deep learning approaches that allow uncertainty estimation for regression problems often converge slowly and yield poorly calibrated uncertainty estimates that cannot be effectively used for quantification. Recently proposed post hoc calibration techniques are seldom applicable to regression problems and often add overhead to an already slow model training phase. This work presents a fast calibrated uncertainty estimation method for regression tasks, called posterior annealing, that consistently improves the convergence of deep regression models and yields calibrated uncertainty without any post hoc calibration phase. Unlike previous methods for calibrated uncertainty in regression that focus only on low-dimensional regression problems, our method works well on a wide spectrum of regression problems. Our empirical analysis shows that our approach is generalizable to various network architectures, including multilayer perceptrons, 1D/2D convolutional networks, and graph neural networks, on five vastly diverse tasks, including chaotic particle trajectory denoising, physical property prediction of molecules using 3D atomistic representation, natural image super-resolution, and medical image translation using MRI images.

URL: https://openreview.net/forum?id=RTeIGQtIfq
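The calibration claim here can be made concrete with a standard coverage check, which is how calibrated regression uncertainty is typically assessed: a model emitting predictive means and standard deviations is well calibrated if, say, its 90% predictive intervals actually contain about 90% of the true targets. This is a generic diagnostic, not the paper's posterior-annealing procedure; all names and the Gaussian-interval assumption are illustrative.

```python
import numpy as np

def coverage_90(y_true, mu, sigma):
    """Fraction of targets inside the two-sided 90% Gaussian predictive
    interval mu +/- 1.645 * sigma; ~0.90 indicates good calibration."""
    z = 1.645  # two-sided 90% quantile of the standard normal
    inside = np.abs(y_true - mu) <= z * sigma
    return inside.mean()

rng = np.random.default_rng(1)
mu = rng.normal(size=10_000)
y_true = mu + rng.normal(scale=1.0, size=10_000)  # true noise std = 1

well_calibrated = coverage_90(y_true, mu, sigma=np.ones(10_000))
overconfident = coverage_90(y_true, mu, sigma=0.5 * np.ones(10_000))
# well_calibrated lands near 0.90; halving sigma (overconfidence)
# drops the empirical coverage well below the nominal level.
```

Post hoc calibration methods rescale `sigma` after training to close exactly this gap; the abstract's claim is that posterior annealing reaches nominal coverage without that extra phase.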

---