WNUT17 Shared task eval command

Leon Derczynski

Jul 6, 2017, 2:59:13 PM
to Workshop on Noisy User-generated Text (WNUT)
Hi,

If you're having trouble evaluating your data or checking the scores, the command we used to generate the official scores was of the form:

cat ../submissions/drexel_flytxt | awk '{print $NF}' | paste emerging.test.annotated - | ../eval/wnuteval.py

You might need to amend the paths to wnuteval.py and to the submission input.
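For example, assuming your system output is in a file named my_submission.tsv (a hypothetical name) in the same directory as wnuteval.py and the gold file, the same pipeline would be:

awk '{print $NF}' my_submission.tsv | paste emerging.test.annotated - | ./wnuteval.py

Here awk '{print $NF}' keeps only the last field of each line (the predicted tag), and paste joins those predictions column-wise onto the gold-annotated test file, so wnuteval.py reads token, gold tag, and predicted tag columns from stdin.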

All the best,


Leon