Towards Content-Based Essay Scoring
State-of-the-art automated essay scoring engines such as e-rater do not 
grade essay content, focusing instead on providing diagnostic trait 
feedback on categories such as grammar, usage, mechanics, style, and 
organization. Content-based essay scoring is very challenging: it requires 
an understanding of essay content and is beyond the reach of today's 
automated essay scoring technologies. As a result, content-dependent dimensions
of essay quality are largely ignored in existing automated essay scoring 
research. In this talk, we describe our recent and ongoing work on 
content-based essay scoring, sharing the lessons we have learned from 
automatically scoring argument persuasiveness, arguably one of the most 
important content-dependent dimensions of persuasive essay quality.

Vincent Ng is a Professor in the Computer Science Department at the 
University of Texas at Dallas. He is also the director of the Machine 
Learning and Language Processing Laboratory in the Human Language 
Technology Research Institute at UT Dallas. He obtained his B.S. from 
Carnegie Mellon University and his Ph.D. from Cornell University. His 
research is in the area of Natural Language Processing, focusing on the
development of computational methods for addressing key tasks in 
information extraction and discourse processing.