> I want to know which files triggered crashes.
> But with s/remove/move elsewhere/, sounds good to me.
This would make afl-cmin slower. I need to rewrite it in C, but for
now, I just want it to be as fast as it can be while remaining a shell
script. Ignoring crashes requires no extra code in afl-cmin (just a
couple lines in afl-showmap in -C mode). Moving them around (and
preferably also minimizing that corpus!) is more involved and will
slow down one of the performance-critical loops.
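For context on why the check is cheap: a POSIX shell already exposes
signal deaths through the exit status (128 + signal number), so
classifying a crash is a single comparison. A minimal self-contained
illustration (the crashed() helper is made up for this sketch):

```shell
#!/bin/sh
# A process killed by a signal reports exit status 128 + signum to the
# shell; SIGSEGV is 11, so a segfaulting target shows up as status 139.
crashed() {
  "$@"
  test "$?" -gt 128   # true if the command died on a signal
}

crashed true && echo "true crashed" || echo "true ok"
crashed sh -c 'kill -SEGV $$' && echo "target crashed" || echo "target ok"
```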
Now, to elaborate on why I wanted to go with afl-cmin instead of
adding a check in afl-fuzz:
1) It's problematic to allow afl-cmin to do minimization while treating
crashing inputs as first-class citizens, only to have them rejected
by the fuzzer. This makes it possible for afl-cmin to decide that a
crashing input is the best candidate for a bunch of tuples that are
also covered by several non-crashing inputs, and reject the latter
bunch. Then, afl-fuzz will reject the crashing candidate selected by
afl-cmin, and your loaded corpus will have glaring gaps.
2) The current design of afl-fuzz isn't very conducive to moving test
cases out of the queue. The "dry run" takes place only after the files
are sorted, loaded into memory, renamed, given filename-tied IDs that
make them traceable to other inputs, etc. This design can be changed,
but it has many other benefits; for example, it means that you don't
leave the output directory in some half-baked state if you hit Ctrl-C
before the startup is complete (especially important for in-place
resume).
At this point, my gut feeling is that supporting AFL_SKIP_CRASHES=1 is
worthwhile; but putting the crashes in a separate bucket will have to
wait until the rewrite of afl-cmin. I can add exit codes to
afl-showmap.c to make it easier to write a simple for loop to detect
crashes, though. Something like:
for i in corpus/*; do ./afl-showmap -o /dev/null -t 123 -m 456 \
  /path/to/foo "$i"; test "$?" = "2" && mv "$i" my_crashes/; done
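To make the loop above testable without a real target, here's the same
pattern with a stand-in classifier (fake_showmap is made up; in real use,
substitute the afl-showmap invocation, which would exit with code 2 on a
crash under this proposal):

```shell
#!/bin/sh
# Self-contained demo of the crash-sorting loop; fake_showmap stands in
# for "afl-showmap -o /dev/null -t 123 -m 456 /path/to/foo $i" and exits
# with the hypothetical crash code 2 for inputs containing "crashy".
mkdir -p corpus my_crashes
printf 'benign\n' > corpus/a.txt
printf 'crashy\n' > corpus/b.txt

fake_showmap() { grep -q crashy "$1" && return 2 || return 0; }

for i in corpus/*; do
  fake_showmap "$i"
  test "$?" = "2" && mv "$i" my_crashes/
done
```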
/mz