minimalistic quick start guide


Michal Zalewski

Aug 26, 2015, 4:45:51 PM
to afl-users
Just FYI - probably not very useful to people on the mailing list, but
maybe worthwhile if you want to convince a developer or somebody else
to give AFL a try:

http://lcamtuf.coredump.cx/afl/QuickStartGuide.txt

Jacek Wielemborek

Aug 26, 2015, 5:19:07 PM
to afl-...@googlegroups.com
On 26.08.2015 at 22:45, Michal Zalewski wrote:
That's something I was looking for when I was testing afl-fuzz for the
first time! If I could suggest something though, I would personally have
loved a "hello world" test, something like:

> echo "int main() { if (getchar() == '1') abort(); }" > test.c
> afl-gcc test.c -o test
(afl-gcc output follows)
> mkdir i
> echo 2 > i/sample-input
> AFL_EXIT_WHEN_DONE=1 afl-fuzz -i i -o o ./test
> ls -l o/crashes

This way I can see for myself how the program works and how to use all
of its parts together. What do you think about it?


Magnus Stubman

Aug 26, 2015, 5:45:15 PM
to afl-...@googlegroups.com
+1 !

Consider adding instructions for how a developer can add AFL to automated build pipelines / build servers. I believe that's one of the few ways to get devs to use fuzzers.
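
A sketch of what such a pipeline step could look like; the target name, seed directory, and five-minute budget are illustrative assumptions, not taken from any existing project:

```shell
# Hypothetical CI step: build with AFL instrumentation, fuzz for a
# bounded time, and fail the build if any crash was found.
make clean
CC=afl-gcc make
mkdir -p findings
timeout 300 afl-fuzz -i testcases -o findings -- ./target @@ || true
if ls findings/crashes/id* >/dev/null 2>&1; then
    echo "fuzzing found crashes; see findings/crashes/" >&2
    exit 1
fi
```

The `|| true` matters because `timeout` kills afl-fuzz with a nonzero status; the build verdict comes from inspecting the crashes directory instead.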

Magnus.

Jacek Wielemborek

Aug 26, 2015, 6:54:55 PM
to afl-...@googlegroups.com
On 26.08.2015 at 23:45, Magnus Stubman wrote:
> +1 !
>
> Consider adding instructions for how a developer can add AFL to
> automated build pipelines / build servers. I believe that's one of the
> few ways to get devs to use fuzzers.
>
> Magnus.

I agree with that. I'd love to see a "make fuzz" target in most FLOSS
projects.
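
A minimal sketch of such a target, assuming GNU make; the `TARGET` variable and `testcases/` seed directory are hypothetical names, not from any real project:

```make
# Hypothetical "make fuzz" target: rebuild with AFL instrumentation,
# then hand the binary to afl-fuzz with a seed corpus in testcases/.
fuzz: CC = afl-gcc
fuzz: clean all
	mkdir -p findings
	afl-fuzz -i testcases -o findings -- ./$(TARGET) @@
```

The target-specific `CC = afl-gcc` assignment keeps the normal build untouched while forcing instrumentation for the fuzzing build.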


Michael Rash

Aug 26, 2015, 7:19:44 PM
to afl-users
Agreed. Further, it seems like fuzzing should be built into any serious test suite. What I don't understand is why the code coverage achieved even by the native OpenSSL and OpenSSH test suites (counted by lines) barely cracks 50%. This is not to say that fuzzing is necessarily a strategy for maximizing code coverage, but it certainly helps, and the rigor behind AFL makes it all the more important.

--Mike

Michal Zalewski

Aug 26, 2015, 7:54:03 PM
to afl-users
> Agreed. Further, it seems like fuzzing should be built into any serious test
> suite. What I don't understand is why the code coverage achieved even by the
> native OpenSSL and OpenSSH test suites (counted by lines) barely cracks
> 50%. This is not to say that fuzzing is necessarily a strategy for
> maximizing code coverage, but it certainly helps, and the rigor behind AFL
> makes it all the more important.

I think it would be nice to have some articles promoting any sort of
plug-and-play fuzzing as a part of release testing; I'm not sure if
they are best done as yet another docs/ entry within AFL, or just as
something standalone. In any case... anyone up for that? :-)

/mz

Peter Gutmann

Aug 27, 2015, 1:14:41 AM
to Jacek Wielemborek, afl-...@googlegroups.com
Jacek Wielemborek <d33...@gmail.com> writes:

>I'd love to see a "make fuzz" target in most of FLOSS projects.

The problem isn't the 'make fuzz' part (which is exactly what I use in my
code), it's the 'run fuzz' part. My standard regression tests run in
a couple of minutes; the fuzzing runs for months.

Peter.

Peter Gutmann

Aug 27, 2015, 1:22:17 AM
to afl-...@googlegroups.com
Michael Rash <michae...@gmail.com> writes:

>What I don't understand is why the code coverage achieved even by the
>native OpenSSL and OpenSSH test suites (counted by lines) is barely
>cracking 50%.

That's pretty simple: (a) you can't easily generate all the test cases needed
to exercise all the code (you could probably spend close to a lifetime doing
test cases just for basic TLS + certificates), and (b) a lot of those paths
will be error handling, which is equally hard to test because you can't easily
simulate all those errors (a subset of this is the Zeno's-paradox issue of
testing the code that tests the error handling, which in turn needs to be
tested...).

So AFL's great contribution is that you can generate at least some of the
zillions of test cases that are required to exercise all the code paths.

In terms of error handling, a friend of mine once tested a widely-used OSS
code base by instrumenting the malloc function so that it failed on a
different invocation each run. So the first time the code was run, it failed
on the first alloc, the second time it failed on the second alloc, and so on.
All this was doing was testing the out-of-memory error handling. Luckily,
he'd had the foresight to cap the coredump count at 1,000 before he ran the
test code...

Peter.

toastedc...@gmail.com

Aug 27, 2015, 4:15:06 AM
to afl-users, d33...@gmail.com, pgu...@cs.auckland.ac.nz
I think it's still useful to have a `make fuzz` that runs AFL for a few minutes,
or until the user cancels it. In some open source projects AFL finds multiple
bugs in < 5 minutes! And if the devs let it run overnight once in a while,
they can only benefit from the deeper coverage. Lastly, integration with CI
servers could make a full run feasible at least once.

Peter Gutmann

Aug 27, 2015, 7:33:47 AM
to toastedc...@gmail.com, afl-users, d33...@gmail.com
toastedc...@gmail.com <toastedc...@gmail.com> writes:

>I think it's still useful to have a `make fuzz` that runs AFL for a few
>minutes, or until the user cancels it. In some open source projects AFL finds
>multiple bugs in < 5 minutes! And if the devs let it run overnight once in a
>while, they can only benefit from the deeper coverage. Lastly, integration with
>CI servers could make a full run feasible at least once.

That one's really more of a user-education problem, I think: for each OSS
project someone needs to sit down, create the build environment/scripts, and
generate the test cases. Having said that, AFL is the first fuzzer I've found
that doesn't require insane amounts of effort to produce useful results, which
makes it much easier to demonstrate its utility to a sceptical developer, and
thereby encourage them to use it further.

Peter.

Tim Wilson-Brown - teor

Aug 27, 2015, 10:03:53 AM
to afl-...@googlegroups.com, toastedc...@gmail.com, d33...@gmail.com
Tor has an open task to add fuzzing instructions and setup files.
Unfortunately, as a network-based program involving encryption, the test harnesses are non-trivial.
And fuzzing is lower on the list of priorities than some other features.

See https://trac.torproject.org/projects/tor/ticket/16301 for progress - help would be welcome!

Tim (teor)

Michael Rash

Aug 27, 2015, 8:55:20 PM
to afl-users
On Thu, Aug 27, 2015 at 1:22 AM, Peter Gutmann <pgu...@cs.auckland.ac.nz> wrote:
> Michael Rash <michae...@gmail.com> writes:
>
> >What I don't understand is why the code coverage achieved even by the
> >native OpenSSL and OpenSSH test suites (counted by lines) is barely
> >cracking 50%.
>
> That's pretty simple: (a) you can't easily generate all the test cases needed
> to exercise all the code (you could probably spend close to a lifetime doing
> test cases just for basic TLS + certificates), and (b) a lot of those paths
> will be error handling, which is equally hard to test because you can't easily
> simulate all those errors (a subset of this is the Zeno's-paradox issue of
> testing the code that tests the error handling, which in turn needs to be
> tested...).
>
> So AFL's great contribution is that you can generate at least some of the
> zillions of test cases that are required to exercise all the code paths.

Ok, couldn't the AFL test cases be incorporated into the OpenSSL test suite directly as a means to expand code coverage?

> In terms of error handling, a friend of mine once tested a widely-used OSS
> code base by instrumenting the malloc function so that it failed on a
> different invocation each run. So the first time the code was run, it failed
> on the first alloc, the second time it failed on the second alloc, and so on.
> All this was doing was testing the out-of-memory error handling. Luckily,
> he'd had the foresight to cap the coredump count at 1,000 before he ran the
> test code...

Sounds like your friend discovered a lot of buggy code with that method. The libfiu project implements this technique and more, and I think it could help quite a bit for testing error handling code in general for OpenSSL and beyond. I'm sure the recursive testing problem you mentioned would still be an issue, but on the other hand, if no fault injection techniques are being used, then most likely there is a lot of "level zero" error handling code isn't being tested. That seems like a significant gap.

Thanks,

--Mike

Peter Gutmann

Aug 28, 2015, 12:17:09 AM
to afl-...@googlegroups.com
Michael Rash <michae...@gmail.com> writes:

>Ok, couldn't the AFL test cases be incorporated into the OpenSSL test suite
>directly as a means to expand code coverage?

Uhh, all 20 million of them?

Peter :-).

teor

Aug 28, 2015, 2:52:43 PM
to afl-...@googlegroups.com
OpenSSL has a pretty large library of test cases as-is, and surely many of the AFL test cases would be quite short.

A representative 1,000 cases of up to a kilobyte each wouldn't be too bad, right?

Tim

Peter Gutmann

Aug 31, 2015, 4:23:37 AM
to afl-...@googlegroups.com
teor <teor...@gmail.com> writes:

>OpenSSL has a pretty large library of test cases as-is, and surely many of
>the AFL test cases would be quite short.
>
>A representative 1000 cases of up to a kilobyte wouldn't be too bad, right?

I'm not sure what this would achieve though because it's static data rather
than the dynamic mutations that AFL does. Let's say you take an input set of
20M test cases and, using some sort of optimisation algorithm (toplogical
sorting or simulated annealing or choose your pet method) pick out the 1,000
that provide the best coverage of code paths. Presumably you're going to
correct any problems that these find before you ship, so what you've got is a
regression test with 1,000 test cases (alongside the existing test suite). If
you change any of the code then you lose coverage, because the (static) set of
1,000 cases can't adapt to new code paths. So as an extension of the existing
test suite it's fine, but due to its static nature I can't see how it can
exercise newly-added code, for that you actually need to run AFL on it.
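
For the "pick out the best-covering subset" step, AFL ships tools that do roughly this; a sketch, with the queue path and target name being illustrative:

```shell
# afl-cmin keeps only inputs that reach distinct instrumented paths;
# afl-tmin then shrinks each survivor. Both tools are bundled with AFL.
afl-cmin -i findings/queue -o corpus.min -- ./target @@
for f in corpus.min/*; do
    afl-tmin -i "$f" -o "$f.min" -- ./target @@
done
```

The distilled corpus still goes stale as code changes, as Peter notes; it only postpones, rather than replaces, a fresh AFL run.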

(What we really need is someone with a spare supercomputer to offer cycles for
OSS projects).

Peter.

Michael Rash

Sep 5, 2015, 9:25:56 PM
to afl-users
The static set of test cases would likely still be useful even underneath changing code paths. While the test cases would of course not be explicitly designed to exercise new code paths, I suspect some of those paths would be hit anyway, depending on what changes are made. Consider, say, a refactoring that splits a function with relatively good coverage under the static set into smaller functions. Further, if compiled under ASAN (or any of the other sanitizers), static test cases in effect become much more rigorous. Such validation elevates the importance of _any_ additional test coverage.

Also, another aspect of this discussion I'm puzzled about: when developers add new code to OpenSSL or any other code base, if they are doing any validation at all (which they obviously should), that validation itself can be extracted into new test cases. If the argument is that the OpenSSL test suite has low code coverage because of the sheer number of tests that would need to be written, can you provide an example where 1) new code is added, but 2) it is so laborious to add tests that the work is insurmountable? Is that the essence of the argument for low test coverage?

> (What we really need is someone with a spare supercomputer to offer cycles for
> OSS projects).

That would certainly be nice.

--Mike

Magnus Stubman

Nov 22, 2016, 11:51:47 AM
to afl-...@googlegroups.com
Michael,

I made an asciinema recording of a ‘hello world’ setup of AFL fuzzing UPX: https://asciinema.org/a/e7bpjng8jj33o53qmctkihka8

It might be handy for newcomers if the link were added to the quick start guide as an example.

Magnus.


