SIGSEC talk: 31 Jan - Robust and Context-Faithful Language Understanding with (Large) Language Models


Leon Derczynski

30 Jan 2024, 09:33:29
to acl-s...@googlegroups.com
Robust and Context-Faithful Language Understanding with (Large) Language Models
January 31st, 2024, 13:00 ET / 19:00 CET


Large language models (LLMs) have achieved remarkable success on a variety of language understanding tasks. However, their deployment in real-world scenarios raises significant accountability concerns. In this presentation, I will introduce our recent work on enhancing the contextual faithfulness and robustness of LLMs. First, LLMs often make unfaithful predictions by relying on entity mentions or parametric knowledge while ignoring the context. I will present causality-driven approaches, including training-time and in-context causal intervention, to mitigate entity bias for both black-box and white-box LLMs. Second, LLMs may capture various unreliable prediction shortcuts, some of which may be unknown in advance. I will demonstrate how to address this issue by proactively mitigating biases in the attention module without needing to identify the specific cause of the bias. Finally, I will outline future directions for advancing accountable and responsible LLMs.
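
For a flavor of the entity-bias idea, here is a toy Python sketch (my own illustration, not the speaker's code). It assumes that an in-context intervention for a black-box LLM can be approximated by substituting entity mentions with neutral placeholders before querying the model, so the answer must come from the supplied context rather than from parametric knowledge tied to the entity names. The placeholder scheme, entity list, and query_llm stub are all illustrative assumptions.

import re

def substitute_entities(text: str, entities: list[str]) -> str:
    """Replace each known entity mention with a neutral placeholder, so the
    model cannot fall back on knowledge attached to the entity name."""
    for i, entity in enumerate(entities):
        text = re.sub(re.escape(entity), f"[ENTITY_{i}]", text)
    return text

def restore_entities(text: str, entities: list[str]) -> str:
    """Map placeholders in the model's answer back to the original names."""
    for i, entity in enumerate(entities):
        text = text.replace(f"[ENTITY_{i}]", entity)
    return text

entities = ["Lionel Messi", "Inter Miami"]
prompt = (
    "Context: Lionel Messi joined Inter Miami in 2023.\n"
    "Question: Which club did Lionel Messi join in 2023?\n"
    "Answer using only the context."
)
masked_prompt = substitute_entities(prompt, entities)
print(masked_prompt)
# answer = query_llm(masked_prompt)          # hypothetical LLM call
# print(restore_entities(answer, entities))  # map placeholders back
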



See you there :)
