Title: Men Also Like Shopping: Reducing Societal Bias in Natural Language Processing Models
Abstract: Machine learning techniques play important roles in our daily life. Despite these methods being successful in various applications, they run the risk of exploiting and reinforcing the societal biases (e.g., gender bias) that are present in the underlying data. For instance, an automatic resume filtering system may inadvertently select candidates based on their gender and race due to implicit associations between applicant names and job titles, potentially causing the system to perpetuate unfairness. In this talk, I will describe a collection of results that quantify and reduce biases in vision and language models, including word embeddings, coreference resolution, and visual semantic role labeling.
Bio: Kai-Wei Chang is an assistant professor in the Department of Computer Science at the University of California, Los Angeles. His research interests include designing robust machine learning methods for large and complex data and building language processing models for social good applications. Kai-Wei has published broadly in machine learning, natural language processing, and artificial intelligence and has been involved in developing machine learning libraries (e.g., LIBLINEAR, Vowpal Wabbit, and Illinois-SL) that are widely used by the research community. His awards include the EMNLP Best Long Paper Award (2017), the KDD Best Paper Award (2010), the Yahoo! Key Scientific Challenges Award (2011), and the Okawa Research Grant Award (2018). Additional information is available at http://kwchang.net.