Accepted papers
===============
Title: ULTra: Unveiling Latent Token Interpretability in Transformer-Based Understanding and Segmentation
Authors: Hesam Hosseini, Ghazal Hosseini Mighan, Amirabbas Afzali, Sajjad Amini, Amir Houmansadr
Abstract: Transformers have revolutionized Computer Vision (CV) through self-attention mechanisms. However, their complexity makes latent token representations difficult to interpret. We introduce ULTra, a framework for interpreting Transformer embeddings and uncovering meaningful semantic patterns within them. ULTra enables unsupervised semantic segmentation using pre-trained models without requiring fine-tuning. Additionally, we propose a self-supervised training approach that refines segmentation performance by learning an external transformation matrix without modifying the underlying model. Our method achieves state-of-the-art performance in unsupervised semantic segmentation, outperforming existing segmentation methods. Furthermore, we validate ULTra for model interpretation in both synthetic and real-world scenarios, including Object Selection and interpretable text summarization using LLMs, demonstrating its broad applicability in explaining the semantic structure of latent token representations.
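A minimal sketch of the central idea as we read it: a frozen transformer's patch tokens are mapped through a single external transformation matrix, the only trainable component, and then clustered into semantic segments. All names and the clustering choice below are our assumptions for illustration, not the authors' implementation.

    # Sketch: unsupervised segmentation from frozen transformer tokens via an
    # external transformation matrix (hypothetical names throughout).
    import torch
    from sklearn.cluster import KMeans

    def segment_from_tokens(tokens: torch.Tensor, W: torch.Tensor, n_segments: int = 5):
        """tokens: (num_patches, d) latent tokens from a frozen ViT encoder.
        W: (d, d) external transformation matrix, the only trainable part;
        the underlying model is never modified."""
        z = torch.nn.functional.normalize(tokens @ W, dim=-1)
        return KMeans(n_clusters=n_segments, n_init=10).fit_predict(
            z.detach().cpu().numpy())  # one semantic label per patch

    # Training-free use: set W to the identity and cluster the raw embeddings.
    # Self-supervised refinement: optimize W alone, e.g. for label consistency
    # across augmented views, while the backbone stays frozen.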
URL: https://openreview.net/forum?id=vL3pmJjGDQ
---
Title: Overcoming Open-Set Approaches to Adversarial Defense
Authors: Edgar Wilfred Jatho, Armon Barton, Matthew Wright, Patrick McClure
Abstract: Machine learning (ML) models are increasingly proposed to replace or augment safety-critical information processing systems, yet their fragility to evasion attacks remains a well-documented, open problem. This work analyzes a class of deep neural network (DNN) defenses that add a none-of-the-above (NOTA) class as an open-set-inspired, closed-set adversarial defense. We analyze seven prominent adversarial evasion attacks developed for computer vision classification and one developed for natural language processing, identifying how these attacks fail in the presence of a NOTA defense. We use this knowledge to adapt these attacks and provide empirical evidence that adding a NOTA class alone does not solve the core challenge of defending DNNs against evasion attacks. We release our adapted attack suite to enable more rigorous future evaluations of open-set-inspired defenses.
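To make the failure mode concrete: an untargeted attack that simply maximizes the true-class loss tends to push inputs into the NOTA class, which is precisely what the defense flags. An adapted attack must instead steer toward a wrong real class while suppressing the NOTA logit. A hedged PGD-style sketch (our illustration; names and hyperparameters are assumptions, not the paper's released attack suite):

    # Sketch: adapting targeted PGD to a classifier with an extra NOTA class.
    # The attack succeeds only if the model picks a wrong *real* class.
    import torch

    def adapted_pgd(model, x, y_true, nota_idx, eps=8/255, alpha=2/255, steps=40):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            logits = model(x_adv)
            masked = logits.detach().clone()
            masked.scatter_(1, y_true[:, None], float("-inf"))  # exclude true class
            masked[:, nota_idx] = float("-inf")                 # exclude NOTA
            y_target = masked.argmax(dim=1)                     # best wrong real class
            # Raise the target logit while explicitly pushing the NOTA logit down.
            loss = logits.gather(1, y_target[:, None]).sum() - logits[:, nota_idx].sum()
            grad, = torch.autograd.grad(loss, x_adv)
            x_adv = x_adv.detach() + alpha * grad.sign()        # gradient ascent step
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
        return x_adv.detach()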
URL: https://openreview.net/forum?id=iuQ9r8VSIX
---
Title: Communication-Efficient Federated AUC Maximization with Cyclic Client Participation
Authors: Umesh Vangapally, Wenhan Wu, Chen Chen, Zhishuai Guo
Abstract: Federated AUC maximization is a powerful approach for learning from imbalanced data in federated learning (FL). However, existing methods typically assume full client availability, which is rarely practical. In real-world FL systems, clients often participate in a cyclic manner: joining training according to a fixed, repeating schedule. This setting poses unique optimization challenges for the non-decomposable AUC objective.
This paper addresses these challenges by developing and analyzing communication-efficient algorithms for federated AUC maximization under cyclic client participation. We investigate two key settings:
First, we study AUC maximization with a squared surrogate loss, which reformulates the problem as a nonconvex-strongly-concave minimax optimization. By leveraging the Polyak-Łojasiewicz (PL) condition, we establish a state-of-the-art communication complexity of $\widetilde{O}(1/\epsilon^{1/2})$ and iteration complexity of $\widetilde{O}(1/\epsilon)$.
Second, we consider general pairwise AUC losses. We establish a communication complexity of $O(1/\epsilon^3)$ and an iteration complexity of $O(1/\epsilon^4)$. Further, under the PL condition, these bounds improve to communication complexity of $\widetilde{O}(1/\epsilon^{1/2})$ and iteration complexity of $\widetilde{O}(1/\epsilon)$.
Extensive experiments on benchmark tasks in image classification, medical imaging, and fraud detection demonstrate the superior efficiency and effectiveness of our proposed methods.
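For context, the squared-surrogate reformulation in the first setting is the standard one: with p the positive-class fraction, AUC maximization with a square loss becomes a minimax problem over the model, two auxiliary scalars a and b, and a dual variable alpha. A per-sample sketch of that objective (variable names are ours; the paper's contribution lies in how these primal-dual updates are scheduled and communicated under cyclic client participation):

    # Per-sample minimax objective for AUC maximization with a squared surrogate.
    # Primal variables: model score h(x), scalars a, b; dual variable: alpha.
    # p is the fraction of positive examples; labels y are in {+1, -1}.
    def auc_square_objective(h, y, a, b, alpha, p):
        pos, neg = float(y == 1), float(y == -1)
        return ((1 - p) * (h - a) ** 2 * pos
                + p * (h - b) ** 2 * neg
                + 2 * (1 + alpha) * (p * h * neg - (1 - p) * h * pos)
                - p * (1 - p) * alpha ** 2)

Under this reformulation each client descends in the model parameters and (a, b) while ascending in alpha; the nonconvexity enters through the model, and the objective is strongly concave in alpha, matching the minimax structure described above.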
URL: https://openreview.net/forum?id=18yPFLbVRy
---
Title: On Calibration of Multilingual Question Answering LLMs
Authors: Yahan Yang, Soham Dan, Dan Roth, Insup Lee
Abstract: Multilingual pre-trained Large Language Models (LLMs) are incredibly effective at Question Answering (QA), a core task in Natural Language Understanding, achieving high accuracies on several multilingual benchmarks. However, little is known about how well their confidences are calibrated. In this paper, we comprehensively benchmark the calibration of several multilingual LLMs (MLLMs) on a variety of QA tasks. We perform extensive experiments, spanning encoder-only, encoder-decoder, and decoder-only QA models (sizes ranging from 110M to 7B parameters) and diverse languages, including both high- and low-resource ones. We study different dimensions of calibration in in-distribution, out-of-distribution, and cross-lingual transfer settings, and investigate strategies to improve it, including post-hoc methods and regularized fine-tuning. For decoder-only LLMs such as Llama 2, we additionally find that in-context learning improves confidence calibration on multilingual data.
We also conduct several ablation experiments to study the effect of language distances, language corpus size, and model size on calibration, and how multilingual models compare with their monolingual counterparts for diverse tasks and languages. Our experiments suggest that multilingual QA models are poorly calibrated for languages other than English, and that incorporating a small set of cheaply translated multilingual samples during fine-tuning/calibration effectively enhances calibration performance.
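Among the post-hoc methods such a benchmark typically covers, temperature scaling is the standard baseline: a single scalar T is fitted on held-out validation data (here, possibly the cheaply translated samples) to rescale logits without changing predictions. A minimal sketch, our own illustration rather than the paper's code:

    # Temperature scaling: fit one scalar T on a held-out set, then divide
    # logits by T at inference; argmax predictions are unchanged.
    import torch

    def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
        """logits: (n, num_classes) validation logits; labels: (n,) gold labels."""
        log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
        opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)

        def closure():
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
            loss.backward()
            return loss

        opt.step(closure)
        return log_t.exp().item()

    # Usage: T = fit_temperature(val_logits, val_labels)
    #        calibrated_probs = (test_logits / T).softmax(dim=-1)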
URL: https://openreview.net/forum?id=4klghu2PTj
---
New submissions
===============
Title: Node Perturbation Can Effectively Train Multi-Layer Neural Networks
Abstract: Backpropagation (BP) remains the dominant and most successful method for training parameters of deep neural network models.
However, BP relies on two computationally distinct phases, does not provide a satisfactory explanation of biological learning, and can be challenging to apply to networks with discontinuities or noisy node dynamics.
By comparison, node perturbation (NP), also known as activity-perturbed forward gradients, learns by injecting noise into network activations and measuring the induced loss change.
NP relies on two forward (inference) passes, does not make use of network derivatives, and has been proposed as a model for learning in biological systems.
However, standard NP is highly data-inefficient and can be unstable due to its unguided, noise-based search process.
In this work, we develop a modern perspective on NP by relating it to the directional derivative and incorporating input decorrelation.
We find that closer alignment with the directional derivative, together with input decorrelation at every layer, enhances NP learning both theoretically and practically, yielding large improvements in parameter convergence and much higher performance on test data, approaching that of BP.
Furthermore, our novel formulation allows for application to noisy systems in which the noise process itself is inaccessible, which is of particular interest for on-chip learning in neuromorphic systems.
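To see why two forward passes suffice: the loss change induced by the injected noise, scaled by the noise and its variance, estimates the gradient of the loss with respect to the activations, which is the directional-derivative view invoked above. A minimal single-layer sketch (our illustration; the paper's variant additionally decorrelates the inputs at every layer):

    # Node perturbation for one linear layer: two forward passes, no backprop.
    import numpy as np

    rng = np.random.default_rng(0)

    def np_update(W, x, loss_fn, sigma=1e-3, lr=1e-2):
        """W: (out, in) weights; x: (in,) input; loss_fn maps activations to a scalar."""
        a_clean = W @ x                                   # pass 1: clean activations
        xi = sigma * rng.standard_normal(a_clean.shape)
        a_noisy = a_clean + xi                            # pass 2: noise at the nodes
        delta = loss_fn(a_noisy) - loss_fn(a_clean)       # induced loss change
        grad_a_est = (delta / sigma**2) * xi              # estimate of dL/da along xi
        return W - lr * np.outer(grad_a_est, x)           # outer product with the input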
URL: https://openreview.net/forum?id=LxUw44pnpu
---