> I'm currently fuzzing a complex target using a single 60-byte initial
> test file. The deterministic stages clock in at around 300 execs per
> second; the havoc stage, however, drops to 35-60 execs per second. Is
> there an inherent reason for this drop in performance within afl-fuzz,
> or should I check whether the (patched) target somehow screws up?
The only difference is that havoc can make more striking changes to
the file, including making it shorter or (somewhat) longer. Perhaps
the program doesn't take shorter / longer inputs kindly?
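Just to illustrate the kind of change I mean, here's a toy,
length-altering mutation in the spirit of havoc. This is not the
actual afl-fuzz code; the function name and the "up to half the file"
choice are mine:

  /* Toy havoc-style mutation: delete a random block from the buffer,
     shrinking the input. Not the real afl-fuzz implementation. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static size_t delete_random_block(unsigned char *buf, size_t len) {
    if (len < 2) return len;                          /* nothing to delete */
    size_t del_len = 1 + (size_t)rand() % (len / 2);  /* up to half the file */
    size_t del_pos = (size_t)rand() % (len - del_len + 1);
    memmove(buf + del_pos, buf + del_pos + del_len,
            len - del_pos - del_len);
    return len - del_len;                             /* new, shorter length */
  }

  int main(void) {
    unsigned char buf[60];
    memset(buf, 'A', sizeof buf);                     /* toy 60-byte input */
    printf("new length: %zu\n", delete_random_block(buf, sizeof buf));
    return 0;
  }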
However, for a program that indeed starts out at 300 execs/sec, I'm
not sure how you'd end up at 35 execs/sec. For fast programs, -t
should be auto-calibrated to around 20 ms, which should keep even the
worst-case speed closer to 50 execs/sec.
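Back-of-the-envelope: with a 20 ms timeout, even a run consisting of
nothing but timeouts works out to 1000 / 20 = 50 execs/sec. In code,
with assumed numbers and ignoring fork() and bookkeeping overhead:

  /* Worst-case throughput implied by the timeout alone. */
  #include <stdio.h>

  int main(void) {
    double timeout_ms = 20.0;                         /* assumed -t value */
    printf("worst case: %.0f execs/sec\n", 1000.0 / timeout_ms);  /* 50 */
    return 0;
  }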
> Only somewhat related to that: the havoc stage doubles the total number
> of executions whenever a new execution path is found during the current
> round. I'm not sure this is CPU time spent wisely: if a random change in
> a test file causes the execution path to differ, then we can only assume
> that the chance of any given bit in the input altering the execution
> path is constant.
Some paths are dead ends and we want to be done with them quickly;
hence the initial cap of 5k execs per havoc stage, adjusted slightly
based on speed and a couple of other factors. But when we have
evidence that fuzzing a particular input is productive, we want to
give it a bit more air time.
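To make the budget idea concrete, here's a simplified sketch; the
names and constants are illustrative, not the actual afl-fuzz source:

  /* Illustrative per-input havoc budget: a base number of execs scaled
     by a per-input score reflecting speed, size, depth and so on. */
  #include <stdio.h>

  #define HAVOC_BASE_EXECS 5000

  static unsigned int havoc_budget(unsigned int perf_score) {
    /* perf_score is ~100 for an average input, higher for fast or
       interesting ones, lower for slow dead ends */
    return HAVOC_BASE_EXECS * perf_score / 100;
  }

  int main(void) {
    printf("budget for an average input: %u execs\n", havoc_budget(100));
    return 0;
  }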
Every additional find reinforces the likelihood that the first one
wasn't just a random fluke. There is an upper cap on the number of
execs to make sure we move on fairly quickly anyway. To be honest, I
suspect that the exact growth schedule is non-critical (+constant,
x1.5, x2, x4, jumping straight to the max, etc.), since it will
ultimately account for only a small percentage of all execs carried
out. But you're welcome to experiment with various approaches on
different benchmark binaries and see if anything works markedly better
than x2.
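Sketched in code, the doubling-with-a-cap schedule looks roughly like
this; run_one_case() and the constants are placeholders rather than
the real afl-fuzz internals:

  /* Illustrative havoc loop: whenever the current input yields a new
     path, double the remaining exec budget, up to a hard cap. The
     run_one_case() helper stands in for "apply one random havoc
     mutation and execute the target"; here it just pretends to find a
     new path now and then so the sketch is self-contained. */
  #include <stdlib.h>

  #define HAVOC_MAX_EXECS (16 * 5000)

  static unsigned long total_paths_found;

  static void run_one_case(void) {
    if (rand() % 1000 == 0) total_paths_found++;      /* fake "new path" */
  }

  static void havoc_stage(unsigned int initial_budget) {
    unsigned int  stage_max  = initial_budget;
    unsigned long prev_paths = total_paths_found;

    for (unsigned int i = 0; i < stage_max; i++) {
      run_one_case();

      if (total_paths_found > prev_paths) {           /* input is productive */
        if (stage_max * 2 <= HAVOC_MAX_EXECS)
          stage_max *= 2;                             /* x2, capped */
        prev_paths = total_paths_found;
      }
    }
  }

  int main(void) {
    havoc_stage(5000);                                /* e.g. 5k initial budget */
    return 0;
  }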
Cheers,
/mz