Hi MOA team,
I love your tool and have used it for a couple of publications. I came across an issue this week; Java is not my primary coding language and I was unable to identify the cause in the code so far. Where/how can I file a bug report? I noticed that F1 score, precision, and recall are not calculated by the ADWIN evaluator (in contrast to the Basic evaluator) — this feature would be really handy.
Summary: the ADWIN evaluator does not generate F1, precision, or recall, while the Basic evaluator does generate all three.
Minimal example to reproduce the issue (the first command shows the problem, the second works as expected):
EvaluatePrequentialCV -l (drift.DriftDetectionMethodClassifier -l lazy.kNN -d ADWINChangeDetector) -s generators.RandomRBFGenerator -e (AdwinClassificationPerformanceEvaluator -o -p -r -f) -f 2
EvaluatePrequentialCV -l (drift.DriftDetectionMethodClassifier -l lazy.kNN -d ADWINChangeDetector) -s generators.RandomRBFGenerator -e (BasicClassificationPerformanceEvaluator -o -p -r -f) -f 2
I am using the latest official release, MOA 2019.04, and I also tried previous releases with the same result.
1. Could you please help me solve this issue?
2. What exactly does the ADWIN evaluator do? When ADWIN detects concept drift, does the evaluator shrink the evaluation window accordingly, i.e. does it calculate accuracy only on the most recent N samples since the last detected change?
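For context, here is how I currently imagine the windowed evaluation behaving. This is a plain-Java sketch with no MOA dependency; the class and method names (`WindowedAccuracy`, `onDriftDetected`) are my own invention for illustration and are not MOA's actual API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of my understanding of an ADWIN-style evaluator:
// keep per-instance results in a window, and when a drift signal arrives,
// drop the older portion so accuracy reflects only the recent concept.
public class WindowedAccuracy {
    private final Deque<Boolean> window = new ArrayDeque<>();

    // Record whether the classifier's prediction was correct.
    public void addResult(boolean correct) {
        window.addLast(correct);
    }

    // Simulate the window shrinking after a detected change:
    // keep only the most recent keepN results.
    public void onDriftDetected(int keepN) {
        while (window.size() > keepN) {
            window.removeFirst();
        }
    }

    // Accuracy over the current (possibly shrunk) window.
    public double accuracy() {
        if (window.isEmpty()) return 0.0;
        long correct = window.stream().filter(b -> b).count();
        return (double) correct / window.size();
    }
}
```

Is this roughly what `AdwinClassificationPerformanceEvaluator` does internally, and if so, is there a reason F1/precision/recall cannot be computed over the same window?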