Anyone have any specific papers they are interested in reviewing?
-Scott Frye
Yeah, it's been a bit dead since October really; I do hope we're not
losing steam.
As for papers, some of the ones that have lost out in the past still
interest me. From a goal point of view, I'm still not 100% on entity
recognition and chunking; from a method point of view, I'm still shaky
on the application of HMMs, but I'm also interested in SVM and NN
applications.
These are the ones I had on my backlog to get to:
1. Introduction to the CoNLL-2003 Shared Task: Language-Independent
Named Entity Recognition - Erik F. Tjong Kim Sang and Fien De Meulder
http://acl.ldc.upenn.edu/W/W03/W03-0419.pdf
Cites: 310
Year: 2003
Desc: A shared task on Named Entity Recognition (NER)
2. Chunking with Support Vector Machines - Taku Kudo and Yuji Matsumoto
http://acl.ldc.upenn.edu/N/N01/N01-1025.pdf
Cites: 343
Year: 2001
Desc: An approach to chunking using SVMs (rough sketch after this list)
3. A Unified Architecture for Natural Language Processing: Deep Neural
Networks with Multitask Learning - Ronan Collobert and Jason Weston
http://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf
Cites: 27
Year: 2008
Desc: A single deep neural network trained jointly across several NLP tasks
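
Since paper 2 keeps coming up, here's a rough sketch of what
token-level SVM chunking looks like. The toy sentences, feature set,
and scikit-learn pipeline are all my own for illustration; Kudo and
Matsumoto use pairwise SVM combination and a much richer feature set:

from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: (word, POS, chunk-tag) triples for two short sentences.
sentences = [
    [("He", "PRP", "B-NP"), ("reckons", "VBZ", "B-VP"),
     ("the", "DT", "B-NP"), ("deficit", "NN", "I-NP")],
    [("Confidence", "NN", "B-NP"), ("in", "IN", "B-PP"),
     ("the", "DT", "B-NP"), ("pound", "NN", "I-NP")],
]

def token_features(sent, i):
    # Features from a +/-1 window around token i.
    word, pos, _ = sent[i]
    return {
        "word": word.lower(),
        "pos": pos,
        "prev_pos": sent[i - 1][1] if i > 0 else "BOS",
        "next_pos": sent[i + 1][1] if i < len(sent) - 1 else "EOS",
    }

X = [token_features(s, i) for s in sentences for i in range(len(s))]
y = [tag for s in sentences for _, _, tag in s]

# One-vs-rest linear SVM over sparse one-hot feature vectors.
model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(X, y)
print(model.predict([{"word": "the", "pos": "DT",
                      "prev_pos": "IN", "next_pos": "NN"}]))

The point is just that chunking reduces to per-token classification
into B/I/O tags once you fix a feature window.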
Regards,
Alex.
Finally got round to reading through the CoNLL paper. In the spirit of
Grant's missive last month, rather than waiting about, I'll just put my
thoughts up.
Firstly, the paper is not a research paper: it doesn't go into detail
about techniques or methods. It's a recap of the performance of the
participants in the CoNLL shared task. I guess in retrospect this is
obvious from the title, but I was a little disappointed that it didn't
dive into the similarities and differences between the approaches so
much as simply report their results.
It contrasts a few of the methods, points out at a general level which
features and attributes were used for training the various systems,
and draws a summary across all of them. A few of the tables are
interesting for showing which features and error-correction mechanisms
the researchers clustered around: for example, 13 of the 16 systems
used prefix features, but only 2 used quote indicators.
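
For concreteness, a prefix feature is just the first few characters of
a token, which lets a model generalise over word shape. A toy sketch
(the function and feature names are my own, not from any of the
systems):

def prefix_features(token, max_len=4):
    # First 1..max_len characters of the token, each as its own feature.
    return {f"prefix{n}": token[:n]
            for n in range(1, min(max_len, len(token)) + 1)}

print(prefix_features("Washington"))
# {'prefix1': 'W', 'prefix2': 'Wa', 'prefix3': 'Was', 'prefix4': 'Wash'}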
It is interesting to note, however, that they also ran a committee
vote across the various systems, and the combination turned out to be
better than any individual system for either language.
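
To make the committee idea concrete, the simplest version is a
per-token majority vote over the systems' tag sequences. A minimal
sketch (the toy tags are mine, and the paper combines only its top
systems rather than doing this over everything):

from collections import Counter

def majority_vote(predictions):
    # predictions: one tag sequence per system, all the same length.
    # Returns the per-token majority tag; ties fall to the first seen.
    return [Counter(tags).most_common(1)[0][0]
            for tags in zip(*predictions)]

system_a = ["B-PER", "I-PER", "O", "B-LOC"]
system_b = ["B-PER", "O",     "O", "B-LOC"]
system_c = ["B-ORG", "I-PER", "O", "B-LOC"]
print(majority_vote([system_a, system_b, system_c]))
# ['B-PER', 'I-PER', 'O', 'B-LOC']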
What it does do well is serve as a starting point for reading through
the CoNLL 2003 papers, which I found here: http://www.cnts.ua.ac.be/conll2003/proceedings.html
Following on from that, the next few papers I will most likely look at
for NER are:
Named Entity Recognition through Classifier Combination
Radu Florian, Abe Ittycheriah, Hongyan Jing and Tong Zhang
http://www.cnts.ua.ac.be/conll2003/pdf/16871flo.pdf
which topped the rankings for both languages, but used an externally
trained classifier to obtain a significant error reduction
Named Entity Recognition with a Maximum Entropy Approach
Hai Leong Chieu and Hwee Tou Ng
http://www.cnts.ua.ac.be/conll2003/pdf/16063chi.pdf
which came second in the English ranking but didn't do too well in the
German ranking
Named Entity Recognition with Character-Level Models
Dan Klein, Joseph Smarr, Huy Nguyen and Christopher D. Manning
http://www.cnts.ua.ac.be/conll2003/pdf/18083kle.pdf
which came third in English and second in the German rankings