Daily TMLR digest for Jul 07, 2024

TMLR

Jul 7, 2024, 12:00:06 AM
to tmlr-anno...@googlegroups.com

Accepted papers
===============


Title: D3: Data Diversity Design for Systematic Generalization in Visual Question Answering

Authors: Amir Rahimi, Vanessa D'Amario, Moyuru Yamada, Kentaro Takemoto, Tomotake Sasaki, Xavier Boix

Abstract: Systematic generalization, a crucial aspect of intelligence, refers to the ability to generalize to novel tasks by combining known subtasks and concepts. One critical factor that has been shown to influence systematic generalization is the diversity of the training data. However, diversity can be defined in various ways, as data have many factors of variation, and a more granular understanding of how different aspects of data diversity affect systematic generalization is lacking. We present new evidence, in the problem of Visual Question Answering (VQA), that the diversity of simple tasks (i.e., tasks formed by a few subtasks and concepts) plays a key role in achieving systematic generalization. This implies that it may not be essential to gather a large and varied collection of complex tasks, which can be costly to obtain. We demonstrate that this result is independent of the similarity between the training and testing data and applies to well-known families of neural network architectures for VQA (i.e., monolithic architectures and neural module networks). Additionally, we observe that neural module networks leverage all forms of data diversity we evaluated, while monolithic architectures require more extensive amounts of data to do so. These findings provide a first step towards understanding the interactions between data diversity design, neural network architectures, and systematic generalization capabilities.

URL: https://openreview.net/forum?id=ZAin13msOp
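
As an illustration of the idea (a minimal hypothetical sketch, not the authors' setup): tasks can be composed from a small pool of subtasks and concepts, and the training mix skewed towards many distinct simple tasks rather than fewer complex ones. The names SUBTASKS, make_task, build_training_set, and simple_ratio are all invented for this sketch.

import random

SUBTASKS = ["filter_color", "filter_shape", "count", "exist", "compare_size"]
CONCEPTS = ["red", "blue", "cube", "sphere", "large", "small"]

def make_task(n_subtasks, rng):
    # Compose a task from n_subtasks primitives, each bound to one concept.
    ops = rng.sample(SUBTASKS, k=n_subtasks)
    return tuple((op, rng.choice(CONCEPTS)) for op in ops)

def build_training_set(n_tasks, simple_ratio, rng):
    # Mix "simple" tasks (2 subtasks) and "complex" tasks (4 subtasks).
    # Raising simple_ratio spends more of the diversity budget on many
    # distinct simple tasks, the regime the abstract highlights.
    tasks = set()
    while len(tasks) < n_tasks:
        n = 2 if rng.random() < simple_ratio else 4
        tasks.add(make_task(n, rng))
    return tasks

rng = random.Random(0)
train = build_training_set(n_tasks=200, simple_ratio=0.8, rng=rng)
print(len(train), "distinct tasks; example:", next(iter(train)))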

---


New submissions
===============


Title: Human–AI Safety: A Descendant of Generative AI and Control Systems Safety

Abstract: Artificial intelligence (AI) is interacting with people at an unprecedented scale, offering new avenues for immense positive impact, but also raising widespread concerns around the potential for individual and societal harm. Today, the predominant paradigm for human–AI safety focuses on fine-tuning the generative model’s outputs to better agree with human-provided examples or feedback. In reality, however, the consequences of an AI model’s outputs cannot be determined in isolation: they are tightly entangled with the responses and behavior of human users over time. In this paper, we distill key complementary lessons from AI safety and control systems safety, highlighting open challenges as well as key synergies between both fields. We then argue that meaningful safety assurances for advanced AI technologies require reasoning about how the feedback loop formed by AI outputs and human behavior may drive the interaction towards different outcomes. To this end, we introduce a unifying formalism to capture dynamic, safety-critical human–AI interactions and propose a concrete technical roadmap towards next-generation human-centered AI safety.

URL: https://openreview.net/forum?id=YuKBJ7iHf8
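
As an illustration of the closed-loop view (a minimal hypothetical sketch, not the authors' formalism): treat the human and the AI as a coupled dynamical system and check a trajectory against a safe set, in the spirit of control-systems reachability analysis. The dynamics, policy, and safe bound below are all invented for this sketch.

def ai_policy(h):
    # Stand-in for a generative model's output: a corrective nudge.
    return -0.3 * h

def human_response(h, a, drift=0.05):
    # Stand-in human dynamics: the next state depends on both the
    # current state and the AI's output (the feedback loop at issue).
    return h + a + drift

def rollout(h0, horizon, safe_bound=1.0):
    # Simulate the coupled loop and flag the first step that leaves the
    # safe set {|h| <= safe_bound}. A real analysis would bound the set
    # of reachable states rather than check a single trajectory.
    h = h0
    for t in range(horizon):
        a = ai_policy(h)
        h = human_response(h, a)
        if abs(h) > safe_bound:
            return t, h
    return None, h

step, final = rollout(h0=0.2, horizon=50)
print("first unsafe step:", step, "| final state:", round(final, 3))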

---
