Hi all-
Newbie here.
I have an NLP project in which we are using cTAKES to annotate clinical notes using SNOMED terms.
I want to evaluate the accuracy of the automated NLP by having clinicians manually annotate the notes for comparison.
BRAT seems like the right tool, and I see that as of v1.3 there is support for normalization (i.e., entity linking) against an existing terminology. I found the instructions for how to do that here:
http://brat.nlplab.org/normalization.html#norm-config
but I'm not sure what those configuration values would be for SNOMED CT. Has anyone done this? Any hints/suggestions/ideas?
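For concreteness, here is my best guess at what the linked page is asking for, adapted to SNOMED CT. The database name, the browser URL, and the exact field layout below are all my assumptions based on the documentation's examples, not something I've verified; confirming or correcting these is exactly what I'm hoping someone can help with:

```
# tools.conf -- guessed [normalization] entry for SNOMED CT
# (DB name and URL template are placeholders):
[normalization]
SNOMED_CT	DB:snomed_ct, <URL>:http://browser.ihtsdotools.org/, <URLBASE>:http://browser.ihtsdotools.org/?perspective=full&conceptId1=%s

# Guessed TSV input for tools/norm_db_init.py, one concept per line:
# tab-separated ID followed by type:label:value fields
# (22298006 is the SNOMED CT concept ID for Myocardial infarction)
22298006	name:Name:Myocardial infarction	attr:Semantic type:Disorder
```

If anyone has an actual working tools.conf entry and TSV extract for SNOMED CT, I'd be grateful to see it.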
One additional question: it's not clear to me whether, or how well, the UI scales when the type list contains not a handful of types but thousands. Has anyone had experience with this, perhaps with GO?
Many thanks for any insights.
Best regards,
Jessie Tenenbaum, PhD
Duke University