Dear all,
We are delighted to announce that we will be presenting a tutorial titled “Psychological, Cognitive and Linguistic BERTology: An Idiomatic Multiword Expression Perspective” at LREC 2022.
Please see the tutorial website for more information including the tutorial outline: https://sites.google.com/view/psych-bertology-lrec-2022/home
Instructors
Harish Tayyar Madabushi
Carlos Ramisch
Marco Idiart
Aline Villavicencio
Motivation
The success of BERT and similar pre-trained language models (PLMs) has led to what might be described as an existential crisis for certain aspects of Natural Language Processing: PLMs now outperform other models on numerous tasks across multiple evaluation scenarios, and are argued to surpass human performance on some benchmarks (Wang et al., 2018; Sun et al., 2020; Hassan et al., 2018). In addition, PLMs seem to have access to a variety of linguistic information as diverse as parse trees (Hewitt and Manning, 2019), entity types, relations, semantic roles (Tenney et al., 2019a), and constructional information (Tayyar Madabushi et al., 2020).
Does this mean that there is no longer a need to tap into the decades of progress made in traditional NLP and related fields, including corpus and cognitive linguistics? In short, can deep(er) models replace linguistically motivated (layered) models and systematic engineering as we work towards high-level symbolic artificial intelligence systems?