The Ins and Outs of Preposition Semantics: Challenges in Comprehensive Corpus Annotation and Automatic Disambiguation

In most linguistic meaning representations that are used in NLP, prepositions fly under the radar. I will argue that they should instead be put front and center given their crucial status as linkers of meaning, whether for spatial and temporal relations, for predicate-driven roles, or in special constructions. To that end, we have sought to characterize and disambiguate semantic functions expressed by prepositions and possessives in English (Schneider et al., ACL 2018; https://github.com/nert-gu/streusle/), and similar markers in other languages (ongoing work on Korean, Hebrew, German, and Mandarin Chinese). This is joint work with Vivek Srikumar, Jena Hwang, Archna Bhatia, Na-Rae Han, Meredith Green, Abhijit Suresh, Kathryn Conger, Tim O’Gorman, Austin Blodgett, Jakob Prange, Omri Abend, Sarah Moeller, Aviram Stern, Adi Bitan, Yilun Zhu, Yang Liu, Siyao Peng, Yushi Zhao, and Martha Palmer.

Nathan Schneider is an annotation schemer and computational modeler for natural language. As Assistant Professor of Linguistics and Computer Science at Georgetown University, he looks for synergies between practical language technologies and the scientific study of language. He specializes in broad-coverage semantic analysis: designing linguistic meaning representations, annotating them in corpora, and automating them with statistical natural language processing techniques. A central focus in this research is the nexus between grammar and lexicon as manifested in multiword expressions and adpositions/case markers. He has inhabited UC Berkeley (BA in Computer Science and Linguistics), Carnegie Mellon University (Ph.D. in Language Technologies), and the University of Edinburgh (postdoc). Now a Hoya and leader of NERT, he continues to play with data and algorithms for linguistic meaning.
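
For readers who want to explore the annotated data themselves, below is a minimal sketch of tallying preposition supersense labels from the STREUSLE corpus linked above. It assumes the JSON release format described in the repository (a list of sentence objects with a "swes" map whose entries carry an "ss" supersense field such as "p.Locus"); the file path and field names should be checked against the repo's README before use.

    # Sketch: count adposition/possessive supersense (SNACS) labels in STREUSLE.
    # Assumes the streusle.json release format; verify field names in the repo.
    import json
    from collections import Counter

    with open("streusle.json", encoding="utf-8") as f:  # path is illustrative
        sentences = json.load(f)

    label_counts = Counter()
    for sent in sentences:
        # Single-word expressions; adposition/possessive ones carry a
        # supersense label prefixed with "p." in the "ss" field.
        for swe in sent.get("swes", {}).values():
            ss = swe.get("ss")
            if ss and ss.startswith("p."):
                label_counts[ss] += 1

    # Print the ten most frequent preposition supersenses.
    for label, n in label_counts.most_common(10):
        print(f"{label}\t{n}")

This kind of frequency pass is a quick sanity check on the label inventory before attempting automatic disambiguation.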