Dear Chairs,
The main concern regarding the taxonomy is that it is not clear whether the results participants submit must be filtered using that taxonomy before submission, or whether you will handle this filtering yourselves before running the evaluation scripts.
In practice, (almost) none of the annotators that participants are using rely on that taxonomy, so they will most likely annotate tweets with DBpedia concepts that are not contained in it. This doesn't mean the annotator is not working properly; it simply means this challenge is focused on a reduced set of DBpedia concepts.
And this is perfectly fine, but the issue is that you will evaluate both precision and recall, so it is important to remove all DBpedia URIs that are not contained in the taxonomy. Otherwise, even when an annotator has correctly found a relevant DBpedia URI, if that URI is not part of the taxonomy it will be counted as a false positive, hurting the annotator's precision and, in turn, the overall F1 score.
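To make the impact concrete, here is a toy calculation (all numbers are hypothetical, not taken from the challenge data) showing how correct but out-of-taxonomy URIs drag down precision and F1 when they are scored as false positives:

```python
# Toy illustration of the precision/F1 effect described above.
# Counts are hypothetical; the gold standard is assumed to contain
# only in-taxonomy URIs, so recall is the same in both scenarios.
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

gold = 10           # relevant in-taxonomy URIs in the gold standard
found_in_tax = 5    # correct URIs found, inside the taxonomy
found_out_tax = 3   # correct URIs found, but outside the taxonomy

recall = found_in_tax / gold                                   # 0.5
p_unfiltered = found_in_tax / (found_in_tax + found_out_tax)   # 0.625
p_filtered = found_in_tax / found_in_tax                       # 1.0

print(f1(p_unfiltered, recall))  # ~0.556: out-of-taxonomy hits count as FPs
print(f1(p_filtered, recall))    # ~0.667: same annotations, filtered first
```

The annotations are identical in both cases; only the filtering step changes the score.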
Could you please clarify which URIs should be included in the TSV files to be submitted?
If you do not apply such a filter yourselves, i.e. participants have to filter out URIs that are not part of the taxonomy, I think you should clarify how to match a DBpedia URI against the taxonomy, because it is still not clear.
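For reference, the filtering step participants would need could be as simple as the sketch below. It assumes a taxonomy file with one allowed DBpedia URI per line and a submission TSV whose third column holds the URI; both the file layouts and the exact-string matching are my assumptions, which is precisely why a specification of how URIs are matched against the taxonomy would help:

```python
# Hypothetical sketch of the pre-submission filtering step.
# Assumed formats: taxonomy = one DBpedia URI per line;
# submission TSV = URI in the third column. Matching is exact
# string equality, which may or may not be what the evaluation does.
import csv

def load_taxonomy(path):
    """Read the allowed DBpedia URIs into a set for fast lookup."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def filter_submission(in_path, out_path, taxonomy):
    """Keep only the TSV rows whose URI belongs to the taxonomy."""
    with open(in_path, encoding="utf-8", newline="") as fin, \
         open(out_path, "w", encoding="utf-8", newline="") as fout:
        reader = csv.reader(fin, delimiter="\t")
        writer = csv.writer(fout, delimiter="\t")
        for row in reader:
            if len(row) > 2 and row[2] in taxonomy:
                writer.writerow(row)
```

Exact equality breaks down as soon as the taxonomy lists ontology classes rather than resource URIs, which is the ambiguity I am asking about.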
Regards,
-- Ugo Scaiella