Hi Sebastian,
DKPro TC currently does not plot ROC curves. We have had some limited support for PR curves in multi-label scenarios, but that is rather legacy.
To create a confusion matrix (and more), you can always use the id2outcome.txt files, which give a detailed classification report over all instances of your dataset. They are created once for each executed TestTask and, in the case of cross-validation, additionally as an aggregated file in the context of the cross-validation task. Have a look at the TwentyNewsgroupsUsingTCEvaluationDemo to see how this works with document-level, single-label classification and Weka.
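As a rough, untested sketch, something like the following could tally a confusion matrix from such a file. It assumes data lines of the form "id=prediction;gold;threshold" with comment lines starting with "#"; depending on your TC version the predictions may be label indices that you still need to map via the "#labels" header line, so adapt the parsing as needed (the class name Id2OutcomeConfusionMatrix is just for illustration):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Id2OutcomeConfusionMatrix {

        public static void main(String[] args) throws IOException {
            // Path to an id2outcome.txt written by a TestTask (pass it as the first argument)
            List<String> lines = Files.readAllLines(Paths.get(args[0]), StandardCharsets.UTF_8);

            // gold label -> (predicted label -> count)
            Map<String, Map<String, Integer>> matrix = new HashMap<>();

            for (String line : lines) {
                // Skip header/comment lines and empty lines
                if (line.startsWith("#") || line.trim().isEmpty()) {
                    continue;
                }
                // Assumed format: instanceId=prediction;gold;threshold
                String[] idAndRest = line.split("=", 2);
                String[] fields = idAndRest[1].split(";");
                String predicted = fields[0];
                String gold = fields[1];

                matrix.computeIfAbsent(gold, k -> new HashMap<>())
                      .merge(predicted, 1, Integer::sum);
            }

            // Print the matrix as "gold -> predicted: count"
            matrix.forEach((gold, row) ->
                row.forEach((pred, count) ->
                    System.out.printf("%s -> %s: %d%n", gold, pred, count)));
        }
    }

From those counts you can then derive whatever measures you like (accuracy, per-class precision/recall, etc.).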
I am not sure whether I understand the second part of your email.
Do you want to test several algorithms independently of each other, or do you want to use the output of the first algorithm as input for the second one?
In the first case, you simply need to specify all algorithms you want to test in your experiment setup (using the "classificationArguments" dimension), and the different configurations will be run one after the other; see e.g. ComplexConfigurationSingleDemo in the examples module. In the second case, you have to modify the experiment setup and add another task (something like a SubsequentTestTask). This task could then access the output of the previous TestTask (in whichever way you need) and use it for its own purposes.
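For the first case, the dimension could look roughly like this (a sketch along the lines of the Weka demos; package and constant names follow the 2015-era de.tudarmstadt.ukp.dkpro.tc modules and may differ in your version, and MultiClassifierSetup is just an illustrative class name):

    import static java.util.Arrays.asList;

    import java.util.List;

    import de.tudarmstadt.ukp.dkpro.lab.task.Dimension;
    import de.tudarmstadt.ukp.dkpro.tc.core.Constants;

    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.SMO;

    public class MultiClassifierSetup implements Constants {

        @SuppressWarnings("unchecked")
        public static Dimension<List<String>> classificationArgs() {
            // Each list is one classifier configuration; the experiment
            // is executed once for every entry in the dimension.
            return Dimension.create(DIM_CLASSIFICATION_ARGS,
                    asList(SMO.class.getName()),
                    asList(NaiveBayes.class.getName(), "-K")); // NaiveBayes with kernel estimation
        }
    }

You then pass this dimension into the parameter space of your experiment, as the demos do.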
Hope this helps,
Johannes
-----Original Message-----
From: dkpro-t...@googlegroups.com [mailto:dkpro-t...@googlegroups.com] On behalf of sliz...@googlemail.com
Sent: Friday, 6 November 2015 13:36
To: dkpro-tc-users
Subject: [dkpro-tc-users] ROC Evaluation