Daily TMLR digest for Jun 15, 2023


TMLR

Jun 14, 2023, 8:00:07 PM
to tmlr-anno...@googlegroups.com


New certifications
==================

Survey Certification: On Averaging ROC Curves

Jack Hogan, Niall M. Adams

https://openreview.net/forum?id=FByH3qL87G

---


Accepted papers
===============


Title: On Averaging ROC Curves

Authors: Jack Hogan, Niall M. Adams

Abstract: Receiver operating characteristic (ROC) curves are a popular method of summarising the performance of classifiers. The ROC curve describes the separability of the distributions of predictions from a two-class classifier. There are a variety of situations in which an analyst seeks to aggregate multiple ROC curves into a single representative example. A number of methods of doing so are available; however, there is a degree of subtlety that is often overlooked when selecting the appropriate one. An important component of this relates to the interpretation of the decision process for which the classifier will be used. This paper summarises a number of methods of aggregation and carefully delineates the interpretations of each in order to inform their correct usage. A toy example is provided that highlights how an injudicious choice of aggregation method can lead to erroneous conclusions.

URL: https://openreview.net/forum?id=FByH3qL87G
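One of the aggregation methods the abstract alludes to is vertical averaging: interpolating each ROC curve onto a common false-positive-rate grid and averaging the true positive rates pointwise. The sketch below is a generic illustration of that idea, not code from the paper; the function name and the toy curves are invented for the example.

```python
import numpy as np

def vertical_average_rocs(rocs, grid_size=101):
    """Vertically average ROC curves: interpolate each curve's TPR onto
    a shared FPR grid, then average the TPRs pointwise across curves."""
    fpr_grid = np.linspace(0.0, 1.0, grid_size)
    tprs = []
    for fpr, tpr in rocs:
        # np.interp expects increasing x; ROC FPRs are non-decreasing
        tprs.append(np.interp(fpr_grid, fpr, tpr))
    return fpr_grid, np.mean(tprs, axis=0)

# Two toy ROC curves, each given as (fpr, tpr) arrays
roc_a = (np.array([0.0, 0.2, 1.0]), np.array([0.0, 0.8, 1.0]))
roc_b = (np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.6, 1.0]))
fpr, mean_tpr = vertical_average_rocs([roc_a, roc_b])
```

Note that vertical averaging answers a different question than threshold averaging (averaging operating points reached at a common decision threshold), which is exactly the kind of interpretational subtlety the paper delineates.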

---


New submissions
===============


Title: ContraSim – Analyzing Neural Representations Based on Contrastive Learning

Abstract: Recent work has compared neural network representations via similarity-based analyses to improve model interpretation. The quality of a similarity measure is typically evaluated by its success in assigning a high score to representations that are expected to be matched. However, existing similarity measures perform mediocrely on standard benchmarks. In this work, we develop a new similarity measure, dubbed ContraSim, based on contrastive learning. In contrast to common closed-form similarity measures, ContraSim learns a parameterized measure by using both similar and dissimilar examples. We perform an extensive experimental evaluation of our method, with both language and vision models, on the standard layer prediction benchmark and two new benchmarks that we introduce: the multilingual benchmark and the image–caption benchmark. In all cases, ContraSim achieves much higher accuracy than previous similarity measures, even when presented with challenging examples. Finally, ContraSim is more suitable for the analysis of neural networks, revealing new insights not captured by previous measures.

URL: https://openreview.net/forum?id=ILzCLWKLbk
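The abstract describes learning a similarity measure contrastively from similar and dissimilar example pairs. A common way to train such a measure is an InfoNCE-style loss, where matched representation pairs are pulled together and all other pairings in the batch serve as negatives. The snippet below is a minimal generic sketch of that loss in NumPy, under the assumption of cosine similarity with a temperature; it is not the authors' ContraSim implementation, and all names here are illustrative.

```python
import numpy as np

def infonce_similarity_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss: row i of z1 and row i of
    z2 form a positive pair; every other pairing acts as a negative."""
    # L2-normalise rows so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_probs))

# Matched pairs (identical representations) should give a low loss
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_matched = infonce_similarity_loss(z, z)
```

A learned measure like ContraSim would additionally pass the representations through a trainable projection before computing similarities; the loss above is the contrastive objective such a projection would be trained against.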

---
