LREC 2022 Tutorial - Psychological, Cognitive and Linguistic BERTology: An Idiomatic Multiword Expression Perspective


Carlos Ramisch

May 28, 2022, 9:44:33 AM
to siglex-mw...@googlegroups.com, verbalmwe, dims...@googlegroups.com

Dear all, 


We are delighted to announce that we will be presenting a tutorial titled “Psychological, Cognitive and Linguistic BERTology: An Idiomatic Multiword Expression Perspective” at LREC 2022.


Please see the tutorial website for more information including the tutorial outline: https://sites.google.com/view/psych-bertology-lrec-2022/home


Instructors

  • Harish Tayyar Madabushi

  • Carlos Ramisch

  • Marco Idiart

  • Aline Villavicencio


Motivation

The success of BERT and similar pre-trained language models (PLMs) has led to what might be described as an existential crisis for certain aspects of Natural Language Processing: PLMs now outperform other models on numerous tasks across multiple evaluation scenarios and are argued to exceed human performance on some benchmarks (Wang et al., 2018; Sun et al., 2020; Hassan et al., 2018). PLMs also appear to have access to linguistic information as diverse as parse trees (Hewitt and Manning, 2019), entity types, relations, semantic roles (Tenney et al., 2019a), and constructional information (Tayyar Madabushi et al., 2020).


Does this mean that there is no longer a need to tap into the decades of progress made in traditional NLP and related fields, including corpus and cognitive linguistics? In short, can deep(er) models replace linguistically motivated (layered) models and systematic engineering as we work towards high-level symbolic artificial intelligence systems?


This tutorial will explore these questions through the lens of a linguistically and cognitively important phenomenon that PLMs do not (yet) handle very well: Idiomatic Multiword Expressions (MWEs) (Yu and Ettinger, 2020; Garcia et al., 2021; Tayyar Madabushi et al., 2021).