Daily TMLR digest for Jul 13, 2022


TMLR

Jul 12, 2022, 8:00:06 PM
to tmlr-anno...@googlegroups.com


New submissions
===============


Title: Degradation Attacks on Certifiably Robust Neural Networks

Abstract: Certifiably robust neural networks protect against adversarial examples by employing provable run-time defenses that check whether the model is locally robust at the input under evaluation. We show through examples and experiments that any defense (whether complete or incomplete) based on checking local robustness is inherently over-cautious. Specifically, such defenses flag inputs for which local robustness checks fail, yet which are not adversarial; i.e., they are classified consistently with all valid inputs within a distance of $\epsilon$. As a result, while a norm-bounded adversary cannot change the classification of an input, it can use norm-bounded changes to degrade the utility of certifiably robust networks by forcing them to reject otherwise correctly classifiable inputs. We empirically demonstrate the efficacy of such attacks against state-of-the-art certifiable defenses.


URL: https://openreview.net/forum?id=P0XO5ZE98j
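
The attack the abstract describes reduces to a simple search problem: find a norm-bounded perturbation that preserves the (correct) label but lands where the certifier fails, so the defended model abstains on a non-adversarial input. Below is a minimal sketch of that idea, not the paper's method; "model" and the "certify" oracle are hypothetical stand-ins for any classifier and any complete or incomplete local-robustness check, and inputs are assumed scaled to [0, 1].

import torch

def degradation_attack(model, certify, x, y, eps, trials=200):
    """Randomly search the L-infinity eps-ball around x for a point that
    the model still classifies as y but that the run-time defense cannot
    certify, i.e. a non-adversarial input the defense would reject."""
    for _ in range(trials):
        delta = torch.empty_like(x).uniform_(-eps, eps)  # norm-bounded change
        x_adv = (x + delta).clamp(0.0, 1.0)              # keep input valid
        still_correct = model(x_adv.unsqueeze(0)).argmax(dim=1).item() == y
        if still_correct and not certify(x_adv):
            return x_adv  # utility degraded: correctly classified, yet flagged
    return None  # no degrading perturbation found within this budget

Random search here is purely illustrative; the paper demonstrates such attacks empirically against state-of-the-art certifiable defenses.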

---

Title: ZerO Initialization: Initializing Neural Networks with only Zeros and Ones

Abstract: Deep neural networks are usually initialized with random weights, with the initial variance chosen to ensure stable signal propagation during training. However, selecting the appropriate variance becomes increasingly challenging as the number of layers grows. In this work, we replace random weight initialization with a fully deterministic initialization scheme, ZerO, which initializes the weights of networks with only zeros and ones, based on (deterministic) identity and Hadamard transforms. Through both theoretical and empirical studies, we demonstrate that ZerO can train networks without damaging their expressivity. Applying ZerO to ResNet achieves state-of-the-art performance on various datasets, including ImageNet, which suggests that random weights may be unnecessary for network initialization. In addition, ZerO has many benefits, such as training ultra-deep networks (without batch normalization), exhibiting low-rank learning trajectories that result in low-rank and sparse solutions, and improving training reproducibility.

URL: https://openreview.net/forum?id=1AxQpKmiTc
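
For intuition about the initialization itself, here is a short PyTorch sketch. The identity case and the Sylvester Hadamard construction follow the abstract's description; the rule for non-square layers (the slicing below) is a simplification assumed for illustration, not the paper's exact per-layer scheme.

import math
import torch
import torch.nn as nn

def hadamard(n: int) -> torch.Tensor:
    """Sylvester construction: an n x n Hadamard matrix for n a power of 2."""
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H

def zero_init_linear(layer: nn.Linear) -> None:
    """Deterministic ZerO-style init: identity when the layer is square,
    otherwise a slice of a scaled Hadamard matrix (a simplification)."""
    out_f, in_f = layer.weight.shape
    with torch.no_grad():
        if out_f == in_f:
            layer.weight.copy_(torch.eye(out_f))
        else:
            n = 2 ** math.ceil(math.log2(max(out_f, in_f)))
            H = hadamard(n) / math.sqrt(n)  # scaling makes the full matrix orthogonal
            layer.weight.copy_(H[:out_f, :in_f])
        if layer.bias is not None:
            layer.bias.zero_()

# Usage sketch: apply to every linear layer of a network.
# net.apply(lambda m: zero_init_linear(m) if isinstance(m, nn.Linear) else None)

Note that the whole procedure is deterministic, which is what enables the reproducibility benefit the abstract mentions.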

---