Hi Moura,
A few comments that might help you figure out what is happening (and some that are just related to ARF usage):
(1) If you want to disable drift detection, use the flag option -u (note that this also disables the background learners; if you only want to disable the background learners, use -q instead). Once -u is set, it doesn't matter what is configured for drift detection or warning detection:
-u disableDriftDetection
Should use drift detection? If disabled then bkg learner is also disabled
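As an illustration, the flags would be passed inside the learner's parentheses on the MOA command line. This is just a sketch: the jar path, task, and stream file are placeholders for whatever you are actually running.

```shell
# Drift detection (and therefore background learners) disabled:
java -cp moa.jar moa.DoTask \
  "EvaluatePrequential -l (meta.AdaptiveRandomForest -u) -s (ArffFileStream -f data.arff)"

# Only background learners disabled, drift detection kept:
java -cp moa.jar moa.DoTask \
  "EvaluatePrequential -l (meta.AdaptiveRandomForest -q) -s (ArffFileStream -f data.arff)"
```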
(2) Was the train and test split stratified?
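In case it helps, a stratified split can be sketched in plain Python (this is a minimal standalone sketch, not MOA-specific; the labels and the 70/30 ratio below are just placeholders for your data):

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.7, seed=42):
    """Split indices so each class keeps roughly the same proportion in both parts."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train_idx, test_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                    # shuffle within each class
        cut = round(len(idxs) * train_frac)  # proportional cut per class
        train_idx.extend(idxs[:cut])
        test_idx.extend(idxs[cut:])
    return train_idx, test_idx

# Example: 1000 instances, two classes with a 60/40 prior.
labels = ["a"] * 600 + ["b"] * 400
train, test = stratified_split(labels)
print(len(train), len(test))  # 700 300
```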
(3) You may want to set the train size to 700 (even though it shouldn't influence the overall execution, since your file has only 1000 instances):
-i trainSize (default: 0)
Number of training examples, <1 = unlimited.
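For example, assuming that help text comes from the EvaluatePeriodicHeldOutTest task (the jar path and stream file below are placeholders, so adjust to your setup):

```shell
java -cp moa.jar moa.DoTask \
  "EvaluatePeriodicHeldOutTest -l meta.AdaptiveRandomForest \
   -s (ArffFileStream -f data.arff) -i 700"
```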
(4) Given a small data sample, there is a chance that the trees never split.
If you can, you might want to scale up the experiment to 7000 training instances and 3000 testing instances.
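One way to see why few instances can prevent splits: the Hoeffding trees used as ARF base learners only split once the Hoeffding bound epsilon = sqrt(R^2 * ln(1/delta) / (2n)) becomes small enough to separate the two best attributes. A rough sketch (R = 1 for information gain with 2 classes, and delta = 1e-7, are the common defaults, not values taken from your run):

```python
import math

def hoeffding_bound(r, delta, n):
    """Epsilon after n observations; a split needs the observed merit gap to exceed this."""
    return math.sqrt((r * r) * math.log(1.0 / delta) / (2.0 * n))

# With more instances the bound shrinks, so splits become possible sooner.
for n in (100, 700, 7000):
    print(n, round(hoeffding_bound(1.0, 1e-7, n), 3))  # 0.284, 0.107, 0.034
```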
Regards,
Heitor