new in 1.19b: afl-tmin


Michal Zalewski

Jan 22, 2015, 12:43:18 AM
to afl-users
Version 1.19b now ships with a small standalone tool called afl-tmin.
The tool works in two (auto-selected) modes:

1) If you give it a test case that crashes the target binary, it tries
to trim as many bytes as it can (and then normalize the remaining
ones) while still keeping the binary in the crashing state. This works
with instrumented and non-instrumented targets alike.

2) If you give it a non-crashing test case, it attempts to trim /
normalize its contents while watching the instrumentation output to
make sure that the execution path stays the same.

The first mode is essentially an extension of my old tmin tool
(https://code.google.com/p/tmin/wiki/TminManual).

The tool accepts afl-fuzz's -f, -m, and -t options and the @@ syntax. Enjoy.

/mz

Sami Liedes

Jan 22, 2015, 10:42:32 AM
to afl-...@googlegroups.com
On Wed, Jan 21, 2015 at 09:42:57PM -0800, Michal Zalewski wrote:
> Version 1.19b now ships with a small standalone tool called afl-tmin.
> The tool works in two (auto-selected) modes:

I suppose this too uses the fork server? I'm thinking about ripping
the fork server into its own library, or perhaps even into a command
line tool; I think it would be useful on its own.

> 1) If you give it a test case that crashes the target binary, it tries
> to trim as many bytes as it can (and then normalize the remaining
> ones) while still keeping the binary in the crashing state. This works
> with instrumented and non-instrumented targets alike.
>
> 2) If you give it a non-crashing test case, it attempts to trim /
> normalize its contents while watching the instrumentation output to
> make sure that the execution path stays the same.

Not sure how useful this would be in general, but in corpus
minimization you might also want to do (if you prefer a large corpus
of small cases):

3) If you give it a test case and an edge, minimize test case subject
to that edge still being executed.

Sami

Michal Zalewski

Jan 22, 2015, 10:59:25 AM
to afl-users
> I suppose this too uses the fork server? I'm thinking about ripping
> the fork server into its own library, or perhaps even into a command
> line tool; I think it would be useful on its own.

The helper tools (afl-tmin and afl-showmap) don't actually use the
fork server. They simply don't create the control channel file
descriptor, and when it's absent, the injected instrumentation just
lets the program execute directly and then exit.
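
The fallback boils down to roughly this (an illustration only - the
real check lives in the injected assembly, not in a C function;
FORKSRV_FD = 198 is the convention from config.h):

#include <unistd.h>

#define FORKSRV_FD 198

static void maybe_start_forkserver(void) {

  static unsigned char tmp[4];

  /* afl-tmin and afl-showmap never set up this descriptor, so the
     write fails and the target simply runs once and exits. */
  if (write(FORKSRV_FD + 1, tmp, 4) != 4) return;

  /* ...otherwise, enter the usual fork() / waitpid() service loop. */

}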

> Not sure how useful this would be in general, but in corpus
> minimization you might also want to do (if you prefer a large corpus
> of small cases):
>
> 3) If you give it a test case and an edge, minimize test case subject
> to that edge still being executed.

Nice idea. I'll test how well that works.

/mz

Endeavor

Jan 22, 2015, 11:20:12 AM
to afl-...@googlegroups.com
I haven't looked at the fork server so I don't know what's involved or what's possible, but if it could be broken out to a simple C library that would be awesome. I would absolutely use that.

If it was architecture-agnostic, that would also be super duper.

Sent from my iPhone

Michal Zalewski

Jan 22, 2015, 11:48:46 AM
to afl-users
I think there are four basic ways to do it:

- Compile-time, which is OK for afl, but probably not what you want in
the more general case,

- LD_PRELOAD, which would be pretty robust and simple, but would work
only for dynamically linked binaries (rough sketch after this list),

- ptrace(), which is quirky but gives you a fair amount of
flexibility, though it won't be architecture-agnostic,

- Custom ELF loader, which is probably the most complicated way; also
needs arch-specific code.
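
For illustration, the LD_PRELOAD variant could be as simple as a
constructor in the preloaded library (a sketch only, not anything
that ships with AFL; descriptor numbers follow the FORKSRV_FD = 198
convention from config.h):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define FORKSRV_FD 198

__attribute__((constructor))
static void forkserver_shim(void) {

  static unsigned char tmp[4];

  /* Not running under afl-fuzz? Let main() execute normally. */
  if (write(FORKSRV_FD + 1, tmp, 4) != 4) return;

  while (1) {

    int status;
    pid_t child;

    /* Wait for a "go" command from the fuzzer. */
    if (read(FORKSRV_FD, tmp, 4) != 4) _exit(1);

    child = fork();
    if (child < 0) _exit(1);

    /* The child returns from the constructor and proceeds to run
       main() against the current test case. */
    if (!child) return;

    /* The parent reports the PID, waits, and reports the status. */
    if (write(FORKSRV_FD + 1, &child, 4) != 4) _exit(1);
    if (waitpid(child, &status, 0) < 0) _exit(1);
    if (write(FORKSRV_FD + 1, &status, 4) != 4) _exit(1);

  }

}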

I don't think that a standalone, general purpose fork server is gonna
happen in AFL, but I wouldn't be surprised if there are third-party
implementations of one of these approaches out there already.

/mz

Jakub Wilk

Jan 22, 2015, 12:23:07 PM
to afl-...@googlegroups.com
* Michal Zalewski <lca...@gmail.com>, 2015-01-22, 08:48:
>I don't think that a standalone, general purpose fork server is gonna
>happen in AFL, but I wouldn't be surprised if there are third-party
>implementations of one of these approaches out there already.

Speaking of independent implementations of fork server, I have an
unusual feature request.

Could you add an option for dumb fuzzing with fork server? :-)

I am thinking of fuzzing software written in scripting languages, where
implementing fork server is easy and it provides great speed benefits;
but OTOH implementing efficient instrumentation is a much more
challenging task.

--
Jakub Wilk

jann...@googlemail.com

Jan 22, 2015, 1:08:05 PM
to afl-...@googlegroups.com
On Thursday, January 22, 2015 at 5:20:12 PM UTC+1, Alex Eubanks wrote:
> I haven't looked at the fork server so I don't know what's involved or what's possible, but if it could be broken out to a simple C library that would be awesome. I would absolutely use that.

Do you mean a library that lets you turn your program into a forkserver or a library that lets you inject the forkserver into other non-instrumented programs?
If you mean the latter, you might want to have a look at my ugly old patch for forkserver mode. I think this is the last version of that: <https://gist.github.com/thejh/6339804cfd1cdcd90bc7>
It requires the target binary to be dynamically linked (for LD_PRELOAD) and it requires it to be x86 code, but you don't have to compile the binary with instrumentation. It traces the target binary until it sees a filesystem function that accesses a specific path, then forces the process to enter the forkserver code.
Getting rid of the requirement for dynamic linking is hard because at least glibc will do really weird stuff if you do direct fork syscalls instead of using its fork wrapper (like killing the wrong process on abort()) - you would have to locate the fork function in the binary somehow, or tell the user to link with a saner libc, or directly overwrite the cached PID somehow, or use seccomp mode 2 to kill the binary if it tries to use the kill() syscall (and hope that no other important stuff relies on the cached PID being correct).
Getting rid of architecture-specific code is hard because you need some reliable way to enter and leave the forkserver code. Also, afaik you can't inject syscalls or method calls into another process without architecture-specific code, which means you can't easily insert new executable code. You might be able to do things in a mostly architecture-independent way if you let the library make a copy of the target binary, append code to the text section, add an entry to INIT_ARRAY and then run the copied binary, but that seems a bit complicated to me.

Michal Zalewski

Jan 22, 2015, 3:14:15 PM
to afl-users
> Could you add an option for dumb fuzzing with fork server? :-)
>
> I am thinking of fuzzing software written in scripting languages, where
> implementing fork server is easy and it provides great speed benefits; but
> OTOH implementing efficient instrumentation is a much more challenging task.

Does this need a special mode in afl-fuzz? You could just do the fork
server handshake, then set one bit in the bitmap every time you get a
"fork" command. The fuzzer will complain a bit in the UI about not
being able to discover anything, but that's about it.
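
To make that concrete, a bare-bones wrapper speaking the protocol
could look roughly like this (a sketch only; ./dumb_target is a
made-up placeholder, while FORKSRV_FD = 198 and __AFL_SHM_ID are the
existing conventions from config.h):

#include <stdlib.h>
#include <unistd.h>
#include <sys/shm.h>
#include <sys/types.h>
#include <sys/wait.h>

#define FORKSRV_FD 198

int main(void) {

  static unsigned char tmp[4];
  unsigned char *trace_bits = NULL;
  char *shm_str = getenv("__AFL_SHM_ID");

  /* Attach to the SHM region afl-fuzz uses as its coverage bitmap. */
  if (shm_str) trace_bits = shmat(atoi(shm_str), NULL, 0);
  if (trace_bits == (void *) -1) trace_bits = NULL;

  /* Handshake: tell afl-fuzz the fork server is up. */
  if (write(FORKSRV_FD + 1, tmp, 4) != 4) return 1;

  /* One iteration per "fork" command from afl-fuzz. */
  while (read(FORKSRV_FD, tmp, 4) == 4) {

    int status;
    pid_t child = fork();

    if (!child) {

      /* Run the real, uninstrumented target; the test case arrives
         via stdin or the -f file, as usual. */
      execl("./dumb_target", "dumb_target", (char *) 0);
      _exit(127);

    }

    /* No instrumentation, so just mark a single "path" as seen. */
    if (trace_bits) trace_bits[0] = 1;

    if (write(FORKSRV_FD + 1, &child, 4) != 4) return 1;
    if (waitpid(child, &status, 0) < 0) return 1;
    if (write(FORKSRV_FD + 1, &status, 4) != 4) return 1;

  }

  return 0;
}

(In the scripting-language case you would obviously do the fork from
inside the interpreter, after it has loaded everything, rather than
exec it for every run.)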

/mz

Jakub Wilk

Jan 22, 2015, 3:42:28 PM
to afl-...@googlegroups.com
* Michal Zalewski <lca...@gmail.com>, 2015-01-22, 12:13:
I tried it a while ago, but trimming collapsed the initial testcase to 4
bytes. :-(

--
Jakub Wilk

Michal Zalewski

Jan 22, 2015, 4:24:52 PM
to afl-users
> I tried it a while ago, but trimming collapsed the initial testcase to 4
> bytes. :-(

Ah... that. OK, let me add some magical flag.

/mz

Michal Zalewski

Jan 22, 2015, 5:58:11 PM
to afl-users
> Ah... that. OK, let me add some magical flag.

Done. I also renamed AFL_SKIP_CHECKS, because that was a pretty dumb
name for a variable; I think this may affect your Python stuff, so
just FYI.

Cheers,
/mz

Michal Zalewski

Jan 22, 2015, 11:41:24 PM
to afl-users
>> Not sure how useful this would be in general, but in corpus
>> minimization you might also want to do (if you prefer a large corpus
>> of small cases):
>>
>> 3) If you give it a test case and an edge, minimize test case subject
>> to that edge still being executed.
>
> Nice idea. I'll test how well that works.

OK, experiment results with giflib (gif2rgb):

1) Completed GIF corpus straight from afl-fuzz: 2,692 kB, 645 files

2) After running minimize_corpus.sh: 1,004 kB (-63%), 242 files
(-62%), attainable compression ratio 24x (as a proxy for simplicity of
the files)

3) After running afl-tmin on the output of minimize_corpus.sh: 1,000
kB (-63%), 242 files (-62%), attainable compression ratio 28x

4) Alternate minimization where minimize_corpus.sh selects best
candidate for each tuple, and then asks modified afl-tmin to trim as
much as possible to still hit that tuple: 1,164 kB (-57%), 282 files
(-56%), attainable compression ratio 23x

So, we're actually worse off with the proposed approach (!). The
reason for this is that whenever minimization reduces the number of
tuples triggered by a particular file, we need more files to cover
everything - and this growth is not offset by how much we can trim
from individual test cases.

/mz

PS. Note that the modest gains between #2 and #3 are to be expected;
the built-in afl-fuzz trimmer is pretty good when allowed to fully do
its deed.

Jakub Wilk

Mar 1, 2015, 12:45:08 PM
to afl-...@googlegroups.com
* Jakub Wilk <jw...@jwilk.net>, 2015-01-22, 18:23:
>Speaking of independent implementations of fork server, I have an
>unusual feature request.
>
>Could you add an option for dumb fuzzing with fork server? :-)

This was added in 1.20b, but it has bit-rotted since then. :-(
Now afl-fuzz hangs when trying to dry run against the first testcase.

--
Jakub Wilk

Michal Zalewski

Mar 1, 2015, 12:57:52 PM
to afl-users
> This was added in 1.20b, but it has bit-rotted since then. :-(

Fixed.

/mz

Ben Nagy

Mar 1, 2015, 8:25:32 PM
to afl-...@googlegroups.com
On Mon, Mar 2, 2015 at 2:57 AM, Michal Zalewski <lca...@gmail.com> wrote:
>> This was added in 1.20b, but it has bit-rotted since then. :-(
>
> Fixed.

Spooky. I was _just_ about to try hacking a bad version of this today.

I have to collect my thoughts a little, but I'll note a couple of
other quick things:

I have used CPU monitoring on other platforms for dumb fuzzing to save
time ( over using a fixed timeout ) and also to try to differentiate
'hang/spin' from 'nothing'. I'm sure you've thought about it - is it
hard to do cross platform, cumbersome or just low on the list?

I'm still thinking about my fixup wrappers. Last time you suggested a
C wrapper that fixes and then execve()s the target. I would like to
have an approach that minimises the amount of C that needs to be
written (it's more accessible for more users, and also I suck at C). I
wrote the wrapper in Go, but forgot that you need the forkserver
handshake compiled into the wrapper binary :)

I was considering trying my hand at a C wrapper that spits the test
out to a unix socket, reads back a fixed test, rewrites the file and
then execve() but I still think this would be a useful feature in afl
itself ( saves hitting the 'disk' twice ). I assert that it would
allow users of any language to write fixers and then just pass -fix
checksum_fixer.sock as an afl option. Such agile. Much agnostic! Using
a long running app and a socket saves launching the fixer binary every
time, which would get expensive.

Cheers!

ben

Ben Nagy

Mar 1, 2015, 9:07:01 PM
to afl-...@googlegroups.com, b...@iagu.net
On Monday, March 2, 2015 at 10:25:32 AM UTC+9, Ben Nagy wrote:
> On Mon, Mar 2, 2015 at 2:57 AM, Michal Zalewski <lca...@gmail.com> wrote:
>>> This was added in 1.20b, but it has bit-rotted since then. :-(
>>
>> Fixed.
>
> Spooky. I was _just_ about to try hacking a bad version of this today.

Oh.. no I completely misunderstood. What I _wanted_ was to allow afl to be used for targets that don't exit by themselves. Right now that doesn't seem to be possible. If I kill the target it's detected as "initial case causes a crash" and if I don't it's "initial case causes a hang". I just want ( for now ) a simplistic wrapper that will run the target with a fixed timeout, throw bad inputs at it and monitor for crashes. I can do this with a number of different ( non-afl ) approaches, but it seems a shame :)

Cheers,

ben 

Michal Zalewski

Mar 2, 2015, 12:06:00 AM
to afl-users
> [...fuzzing programs that don't exit on their own...]
>
> I have used CPU monitoring on other platforms for dumb fuzzing to save
> time ( over using a fixed timeout ) and also to try to differentiate
> 'hang/spin' from 'nothing'. I'm sure you've thought about it - is it
> hard to do cross platform, cumbersome or just low on the list?

It's sorta possible to distinguish between a process that is idle and
waiting for user interaction and one that is still starting up, but
every OS would require a different approach. I'm assuming that you
want to fuzz an interactive X11 app, something like a PDF viewer; if
so, then why not either test the underlying library, or modify the
client to parse the document and then exit (or even show the window
and then exit)?

My worry is that anything we could do here would work only for a very
narrow subset of interactive apps. It will likely fall apart for
things like browsers, editors, media players, etc (multiple threads,
tons of persistent state, lock files, process consolidation with
already-running instances, etc).

> I'm still thinking about my fixup wrappers. Last time you suggested a
> C wrapper that fixes and then execve()s the target. I would like to
> have an approach that minimises the amount of C that needs to be
> written (it's more accessible for more users, and also I suck at C). I
> wrote the wrapper in Go, but forgot that you need the forkserver
> handshake compiled into the wrapper binary :)

For slow programs, the forkserver doesn't matter much, and you can
always just set AFL_NO_FORKSRV=1.

> I was considering trying my hand at a C wrapper that spits the test
> out to a unix socket, reads back a fixed test, rewrites the file and
> then execve() but I still think this would be a useful feature in afl
> itself ( saves hitting the 'disk' twice ). I assert that it would
> allow users of any language to write fixers and then just pass -fix
> checksum_fixer.sock as an afl option. Such agile. Much agnostic!

Well, you still need to write the fixup code that parses the input
format, plus do all the socket handling bits; wouldn't it be simpler
to comment out the cksum check in the targeted program?

I'm really not trying to be stubborn, but I'm hesitant about adding
features that do not necessarily make users' lives easier or that have
oddball failure modes. Peach is an example of a fuzzer that tries to
do everything at once, and I'm sort of using it as the anti-role-model
for AFL. The "output filter" thing was on my list for a very long
time, and I was actually sorta enthusiastic about it early on; but
couldn't think of any "killer use cases" where it's vastly superior
than alternatives. At the same time, I could think of many cases where
it would hurt you if you tried to use it (especially if it's an
out-of-process script).

/mz

Ben Nagy

Mar 2, 2015, 1:07:22 AM
to afl-...@googlegroups.com
On Mon, Mar 2, 2015 at 2:05 PM, Michal Zalewski <lca...@gmail.com> wrote:
>> [...fuzzing programs that don't exit on their own...]
>>
>> I have used CPU monitoring on other platforms for dumb fuzzing to save
>> time ( over using a fixed timeout ) and also to try to differentiate
>> 'hang/spin' from 'nothing'. I'm sure you've thought about it - is it
>> hard to do cross platform, cumbersome or just low on the list?
>
> It's sorta possible to distinguish between a process that is idle and
> waiting for user interaction and one that is still starting up, but
> every OS would require a different approach.

This is difficult with closed source apps. It can be done, even then,
but the time and effort required for the RE means retargeting is
slower. CPU monitoring is bad but "easy", although I know from
experience that it's not quite as easy as it looks and portability is
ugly.

> My worry is that anything we could do here would work only for a very
> narrow subset of interactive apps.

Yep, definitely valid.

>> I'm still thinking about my fixup wrappers. Last time you suggested a
>> C wrapper that fixes and then execve()s the target. I would like to
>> have an approach that minimises the amount of C that needs to be
>> written (it's more accessible for more users, and also I suck at C). I
>> wrote the wrapper in Go, but forgot that you need the forkserver
>> handshake compiled into the wrapper binary :)
>
> For slow programs, the forkserver doesn't matter much, and you can
> always just set AFL_NO_FORKSRV=1.

And it should still pick up the instrumentation? That sounds
interesting, I'll try it out.

>> I was considering trying my hand at a C wrapper that spits the test
>> out to a unix socket, reads back a fixed test, rewrites the file and
>> then execve()
>
> Well, you still need to write the fixup code that parses the input
> format, plus do all the socket handling bits; wouldn't it be simpler
> to comment out the cksum check in the targeted program?

Again, closed source (and there might be more than one check). The
socket handling is mostly boilerplate and once it's done it's done.
Any reasonable language would let users add fixups as fairly simple
plugins. For example, for PDF all I was trying to do initially is fix
the startxref entry, which is trivial to write but more difficult to
reverse and patch.

> I'm really not trying to be stubborn, but I'm hesitant about adding
> features that do not necessarily make users' lives easier or that have
> oddball failure modes. Peach is an example of [doing it wrong...]

Yep. There's definitely a feature line between useful and baroque. I'm
just throwing out ideas for things that I've found useful on more than
one target, and I'm not trying to suggest that my ideas are either
empirically good or, more importantly, in line with the philosophy of
the tool.

BTW I did some awful hacks to make afl skip the dry-run for dumb mode,
so I now have my "every test hangs" dumb instrumentation layer. I
think. I won't know for sure until I see a crash :)

Cheers!

ben

Sami Liedes

Mar 2, 2015, 5:11:01 AM
to afl-users
On Sun, Mar 01, 2015 at 09:05:38PM -0800, Michal Zalewski wrote:
> > I was considering trying my hand at a C wrapper that spits the test
> > out to a unix socket, reads back a fixed test, rewrites the file and
> > then execve() but I still think this would be a useful feature in afl
> > itself ( saves hitting the 'disk' twice ). I assert that it would
> > allow users of any language to write fixers and then just pass -fix
> > checksum_fixer.sock as an afl option. Such agile. Much agnostic!
>
> Well, you still need to write the fixup code that parses the input
> format, plus do all the socket handling bits; wouldn't it be simpler
> to comment out the cksum check in the targeted program?
>
> I'm really not trying to be stubborn, but I'm hesitant about adding
> features that do not necessarily make users' lives easier or that have
> oddball failure modes. Peach is an example of a fuzzer that tries to

It seems to me some way to do this would be really quite useful. A
really large portion of meaningful fuzzing requires some kind of
fixups, and having them hit the disk (or even only the fs cache, which
still can, in my experience, be a large bottleneck compared to tmpfs)
seems non-ideal. Also I think it effectively negates the benefits of
the fork server so that it only forks the trivial wrapper. It doesn't
matter for slow targets, but not all targets are slow (that's why the
fork server exists, after all?).

The exec wrapper can also be a hassle. For example, I recently fuzzed
a disassembler which for some weird reason wants its input in text
form (whitespace separated numbers), either from stdin or from a file.
If there was a generic way to inject fixups between afl and the
target, I wouldn't need to take care of giving every
wrapper-inside-afl instance a different filename to write the fixup to
and pass to the target. Well, it's still not too much of a hassle, but
it could be easier.

I also think the idea of having to write exec wrappers is much more
likely to scare some people (especially those less familiar with C and
Unix) away than alternatives...

I've also asked myself this: Why would I alter the target to handle
what AFL gives it, or write an exec wrapper to do that, when it's
easier for me to alter AFL to give my target what it wants, and it's
also philosophically more satisfying to not modify the program you are
testing? So perhaps one way to approach this would be to make it easy
(design & document) to modify AFL to do the fixup. I think that can be
much less scary to many than exec wrappers, and it's also likely to be
efficient, right?

Sami

Michal Zalewski

Mar 2, 2015, 11:16:23 AM
to afl-users
> It seems to me some way to do this would be really quite useful. A
> really large portion of meaningful fuzzing requires some kind of
> fixups, and having them hit the disk (or even only the fs cache, which
> still can, in my experience, be a large bottleneck compared to tmpfs)

But that's orthogonal, right? You can point -f to tmpfs.

(Btw, I really wasn't ever able to reproduce significant perf
differences between using the filesystem and using ramdisks, so even
stdin fuzzing goes through a real file to improve compatibility; it
may become an issue for very large files or very heavy-duty targets,
of course. If anyone has some hard data, I'd love to have a look.)

> seems non-ideal. Also I think it effectively negates the benefits of
> the fork server so that it only forks the trivial wrapper. It doesn't
> matter for slow targets, but not all targets are slow (that's why the
> fork server exists, after all?).

In my mind, the cons are:

1) Writing standalone filters will usually be considerably more
complex than achieving the same goal through code changes in the
target binary. For example, the code to compute and fix checksums in
PNG files, and communicate with AFL over unix sockets, will probably
be 50+ lines of code, versus a single-line change in libpng. So, it
doesn't seem like something that lowers the bar for dealing with a
particular class of fuzzing challenges.

2) Especially less experienced users will likely be tempted to write
them in scripting languages. Doing so will probably result in a fairly
significant bottleneck for fast binaries. Arguably, not our problem,
but then, relatively few users will honestly read all the docs, etc.

3) The code will be an unintentional "attack surface" on its own,
since in cases such as PNG, you will need to parse the file to some
extent. So, it will probably need some babysitting to deal with
crashes, hangs, etc. Doable, but adds more complexity.

4) In general, when you add an "advanced" feature like this, there is
a temptation for users to try it out, even if it doesn't help or
actively hurts their fuzzing jobs =) For this reason, I don't want to
tack on very tricky features that only have niche use cases. So far,
the checksum use case seems to be the strongest one, but that's
actually pretty rare (PNG, tar, TCP, what else?).

Speaking of that last part... you mentioned that "a really large
portion of meaningful fuzzing requires some kind of fixups" - can you
elaborate?

> For example, I recently fuzzed
> a disassembler which for some weird reason wants its input in text
> form (whitespace separated numbers), either from stdin or from a file.

What was the fixup code doing? Why not just fuzz it as-is, perhaps
with a dictionary?

/mz

Ben Nagy

Mar 2, 2015, 7:30:47 PM
to afl-...@googlegroups.com
On Tue, Mar 3, 2015 at 1:16 AM, Michal Zalewski <lca...@gmail.com> wrote:
[discussion on ram]

I always use ramdisks or /dev/shm when available, if only to preserve
hardware life. Obviously it's faster, but whether or not the delta can
be considered a bottleneck for a given target is an open ( but moot
imho ) question.

> In my mind, the cons are:
>
> 1) Writing standalone filters will be usually considerably more
> complex than achieving the same goal through code changes in the
> target binary. [...]

Even if this were the case ( and I am dubious ) it only applies to
open source targets.

> 2) Especially less experienced users will be likely tempted to write
> them in scripting languages. Doing so will probably result in a fairly
> significant bottleneck for fast binaries. Arguably, not our problem,
> but then, relatively few users will honestly read all the docs, etc.

It's a valid concern, I suppose, but surely this hypothetical user
would first fuzz without fixups, and thus notice that they've killed
their performance? As an aside, I think that even lowly "scripting
languages" could probably muster the awesome power required to parse a
couple of hundred streams a second. ;)

> 3) The code will be an unintentional "attack surface" on its own,
> since in cases such as PNG, you will need to parse the file to some
> extent. So, it will probably need some babysitting to deal with
> crashes, hangs, etc. Doable, but adds more complexity.

A timeout on the call out to the fixup, and an EOF handler?

> 4) In general, when you add an "advanced" feature like this, there is
> a temptation for users to try it out,

This is the crux of the argument. Say, for example, I fix up a length
field because I know ( or guess ) it's an early parser check - great,
except I've now spared the code path that deals with incorrect
lengths. When we have flying cars etc I guess afl will be able to tell
us that we're bottlenecking at a branch that has a lot of code behind
it, or we'll all be using SAGE at a thousand tests a second or
something, but for now it's always going to be guesswork, and may do
more harm than good.

> So far,
> the checksum use case seems to be the strongest one, but that's
> actually pretty rare (PNG, tar, TCP, what else?).

Lengths and magics are very common. Just by watching the stdout of
some pdf parsers I can see that the xrefs are parsed early. In some
cases a missing xref keyword is an immediate abort, so when the
startxrefs index is broken then there's a lot less chance something
good will happen. In .doc there are countless small things that I
won't bore you with, but they're easier to fix in the file than in the
reader. It can also become obvious from just watching your fuzzer
output. For example, at the moment I collate my crashes and I see a
huge number of aborts following a certain stack sequence. I can go and
look at that code and add a fix.

Before you jump on me, yes I know that having an xref token in a
dictionary would evolve past the missing keyword, but it will 'never'
evolve past the startxref [idx] check.

So, IMHO, the argument boils down to "fixups are often premature
optimisation, and some users will do it wrong". The counterargument is
"sometimes they _are_ needed, and altering the code under test is
sometimes hard/impossible, and philosophically distasteful".
Unfortunately both sides are assessing "how often will it help"
anecdotally right now.

I can't really see any merit in the performance or implementation
wrangling on either side of the debate. Staying off disk is nice but
hardly a killer feature. If fixups are slower but 100x more likely to
hit new code then who cares? And if you can write a tool that babysits
a fuzz target then babysitting a socket is hardly going to be
daunting.

Anyway, just my 0.02.

PS: [apropos nothing] Is it just me or are the available flags not
covered (succinctly) anywhere except in the usage output from -help?
Every time I wanted to check the flags I end up back in the source
file...

Cheers,

ben

Michal Zalewski

Mar 2, 2015, 10:21:52 PM
to afl-users
>> So far,
>> the checksum use case seems to be the strongest one, but that's
>> actually pretty rare (PNG, tar, TCP, what else?).
>
> Lengths and magics are very common.

Yup, but they don't pose a huge problem for AFL, right? I mean, only
some tiny percentage of execs will be wasted.

Anyway, seems like you guys care strongly; I seriously, seriously,
*seriously* doubt that this feature will be used properly and will
offer benefits to the extent you're claiming (especially if the goal
is to do really inconsequential fixups like length fields, etc). But
I'll code something up for you.

> PS: [apropos nothing] Is it just me or are the available flags not
> covered (succinctly) anywhere except in the usage output from -help?

They are explained in a couple of sentences in README (in the context
of their intended use) and then summarized in the usage hints. What
other options are you hoping for?

/mz

Ben Nagy

Mar 2, 2015, 10:37:02 PM
to afl-...@googlegroups.com
On Tue, Mar 3, 2015 at 12:21 PM, Michal Zalewski <lca...@gmail.com> wrote:
>>> So far,
>>> the checksum use case seems to be the strongest one, but that's
>>> actually pretty rare (PNG, tar, TCP, what else?).
>>
>> Lengths and magics are very common.
>
> Yup, but they don't pose a huge problem for AFL, right? I mean, only
> some tiny percentage of execs will be wasted.

With some formats, any time afl changes the file length, the exec
will be wasted. I don't know what percentage of execs that is. If it's
tiny, then yeah, you're right :) Personally I worry about user dict
insert/over ops.

> Anyway, seems like you guys care strongly;

For my own part, I certainly wouldn't go that far. If I had to pick
between fixups and graceful handling of targets that don't exit by
themselves I'd take the latter :) For the record, my 'patch' disables
perform_dry_run() and then fixes two divzeros that result. I feel
pretty bad about it.

> I'll code something up for you.

\(^v^)/

Stick it in the undocumented section, then nobody can blame you.

>> PS: [apropos nothing] Is it just me or are the available flags not
>> covered (succinctly) anywhere except in the usage output from -help?
>
> They are explained in a couple of sentences in README (in the context
> of their intended use) and then summarized in the usage hints. What
> other options are you hoping for?

I keep expecting to see a terse list of all the flags in one place,
basically like the -help output. It's not a big deal.

Cheers!

ben

Michal Zalewski

Mar 2, 2015, 10:55:52 PM
to afl-users
> For my own part, I certainly wouldn't go that far. If I had to pick
> between fixups and graceful handling of targets that don't exit by
> themselves I'd take the latter :)

Maybe we can do it in a portable way with getrusage(RUSAGE_CHILDREN, ...).
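
Something along these lines, maybe (just a sketch of the idea):

#include <sys/resource.h>

/* CPU time (user + system) consumed so far by children that we have
   already waited for. */
static double child_cpu_seconds(void) {

  struct rusage ru;

  if (getrusage(RUSAGE_CHILDREN, &ru)) return -1;

  return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6 +
         ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

}

Sample it before spawning the child and again after the timed-out
child has been killed and reaped; a large delta means the target was
spinning, a near-zero one means it was just sitting there waiting for
input.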

/mz

Sami Liedes

Mar 3, 2015, 5:05:52 AM
to afl-users
On Mon, Mar 02, 2015 at 08:16:02AM -0800, Michal Zalewski wrote:
> > It seems to me some way to do this would be really quite useful. A
> > really large portion of meaningful fuzzing requires some kind of
> > fixups, and having them hit the disk (or even only the fs cache, which
> > still can, in my experience, be a large bottleneck compared to tmpfs)
>
> But that's orthogonal, right? You can point -f to tmpfs.

True, at least when not using stdin fuzzing. The fixup wrapper will
have to write the file somewhere, but of course it can use tmpfs too.

I'll have a look at the new fixup plugin thing. Thanks :)

> (Btw, I really wasn't ever able to reproduce significant perf
> differences between using the filesystem and using ramdisks, so even
> stdin fuzzing goes through a real file to improve compatibility; it
> may become an issue for very large files or very heavy-duty targets,
> of course. If anyone has some hard data, I'd love to have a look.)

You are right, it's probably unlikely that it matters for the
fixup/afl part that the files hit (cached) filesystems. One case where
I saw execs/second grow IIRC ~tenfold when I moved to tmpfs was
e2fsck, which probably seeks/reads/writes around the file a lot.
Actually, now that I think about it, that's probably caused by the
file's timestamps/other metadata being updated on disk for writes by
e2fsck... That wasn't stdin fuzzing, though.

> 4) In general, when you add an "advanced" feature like this, there is
> a temptation for users to try it out, even if it doesn't help or
> actively hurts their fuzzing jobs =) For this reason, I don't want to
> tack on very tricky features that only have niche use cases. So far,
> the checksum use case seems to be the strongest one, but that's
> actually pretty rare (PNG, tar, TCP, what else?).
[...]
> Speaking of that last part... you mentioned that "a really large
> portion of meaningful fuzzing requires some kind of fixups" - can you
> elaborate?

You are probably right, it may be somewhat of a niche. It's probably a
larger niche if you are not fuzzing for security but for robustness of
code (just plain old handling of invalid inputs even in non-security
critical contexts).

For example, with text-based formats I think it is very nice to be
able to produce text-only crashing inputs even if the program is
somewhat tolerant of binary. Such bug reports are just much nicer and
cleaner and it's easier to see what is going on. Essentially, I
believe that if you can reproduce the same crash with text input or
binary input, you would almost always prefer a textual test case.

> > For example, I recently fuzzed
> > a disassembler which for some weird reason wants its input in text
> > form (whitespace separated numbers), either from stdin or from a file.
>
> What was the fixup code doing? Why not just fuzz it as-is, perhaps
> with a dictionary?

Even if the target rejected anything that is not decimal numbers 0..255
separated by a single space, I think a huge portion of execs would be
wasted with binary inputs, often only caught after disassembling a
number of instructions. Also, the code taking the textual input is
somewhat throwaway code, but still I'd rather fuzz it than my own code
accepting binary input so I don't need to reverse those steps to write
a clean bug report. I'm really more interested in the cases where the
underlying disassembler crashes on some malformed instruction than on
fuzzing the throwaway-ish parser.

Sami