From 95b794744b4edd6e32eefc97998ec6aa041c5275 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Thu, 12 Aug 2021 23:06:34 +0200
Subject: Edit README.md
Changes:
- Move advanced content to docs/.
- Add links.
- Fix links.
- Restructure content.
---
docs/afl-fuzz_approach.md | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
create mode 100644 docs/afl-fuzz_approach.md
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
new file mode 100644
index 00000000..5652816b
--- /dev/null
+++ b/docs/afl-fuzz_approach.md
@@ -0,0 +1,37 @@
+# The afl-fuzz approach
+
+American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple
+but rock-solid instrumentation-guided genetic algorithm. It uses a modified
+form of edge coverage to effortlessly pick up subtle, local-scale changes to
+program control flow.
+
+Simplifying a bit, the overall algorithm can be summed up as:
+
+ 1) Load user-supplied initial test cases into the queue,
+
+ 2) Take the next input file from the queue,
+
+ 3) Attempt to trim the test case to the smallest size that doesn't alter
+ the measured behavior of the program,
+
+ 4) Repeatedly mutate the file using a balanced and well-researched variety
+ of traditional fuzzing strategies,
+
+ 5) If any of the generated mutations resulted in a new state transition
+ recorded by the instrumentation, add mutated output as a new entry in the
+ queue.
+
+ 6) Go to 2.
+
+The discovered test cases are also periodically culled to eliminate ones that
+have been obsoleted by newer, higher-coverage finds; and undergo several other
+instrumentation-driven effort minimization steps.
+
+As a side result of the fuzzing process, the tool creates a small,
+self-contained corpus of interesting test cases. These are extremely useful
+for seeding other, labor- or resource-intensive testing regimes - for example,
+for stress-testing browsers, office applications, graphics suites, or
+closed-source tools.
+
+The fuzzer is thoroughly tested to deliver out-of-the-box performance far
+superior to blind fuzzing or coverage-only tools.
\ No newline at end of file
--
cgit 1.4.1
From c31f4646cbd00f591dad3258c08ff8e56aa94420 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sun, 21 Nov 2021 21:11:52 +0100
Subject: Clean up docs folder
---
README.md | 29 +-
docs/afl-fuzz_approach.md | 540 ++++++++++++++++++++++++++-
docs/custom_mutators.md | 6 +-
docs/env_variables.md | 9 +-
docs/features.md | 2 +-
docs/fuzzing_in_depth.md | 572 +++++++++++++++--------------
docs/important_changes.md | 4 +-
docs/interpreting_output.md | 71 ----
docs/status_screen.md | 444 ----------------------
docs/third_party_tools.md | 6 +-
qemu_mode/libqasan/README.md | 2 +-
unicorn_mode/samples/persistent/COMPILE.md | 12 +-
utils/aflpp_driver/README.md | 12 +-
13 files changed, 864 insertions(+), 845 deletions(-)
delete mode 100644 docs/interpreting_output.md
delete mode 100644 docs/status_screen.md
diff --git a/README.md b/README.md
index fcb6b3c9..e0cb4558 100644
--- a/README.md
+++ b/README.md
@@ -80,8 +80,10 @@ Step-by-step quick start:
1. Compile the program or library to be fuzzed using `afl-cc`. A common way to
do this would be:
- CC=/path/to/afl-cc CXX=/path/to/afl-c++ ./configure --disable-shared
- make clean all
+ ```
+ CC=/path/to/afl-cc CXX=/path/to/afl-c++ ./configure --disable-shared
+ make clean all
+ ```
2. Get a small but valid input file that makes sense to the program. When
fuzzing verbose syntax (SQL, HTTP, etc), create a dictionary as described in
@@ -89,10 +91,10 @@ Step-by-step quick start:
3. If the program reads from stdin, run `afl-fuzz` like so:
-```
+ ```
./afl-fuzz -i seeds_dir -o output_dir -- \
- /path/to/tested/program [...program's cmdline...]
-```
+ /path/to/tested/program [...program's cmdline...]
+ ```
To add a dictionary, add `-x /path/to/dictionary.txt` to afl-fuzz.
@@ -100,13 +102,20 @@ Step-by-step quick start:
command line; AFL will put an auto-generated file name in there for you.
4. Investigate anything shown in red in the fuzzer UI by promptly consulting
- [docs/status_screen.md](docs/status_screen.md).
+ [docs/afl-fuzz_approach.md#understanding-the-status-screen](docs/afl-fuzz_approach.md#understanding-the-status-screen).
+
+5. To interpret the output, see
+ [docs/afl-fuzz_approach.md#interpreting-output](docs/afl-fuzz_approach.md#interpreting-output).
-5. You will find found crashes and hangs in the subdirectories `crashes/` and
+6. You will find any crashes and hangs in the subdirectories `crashes/` and
`hangs/` in the `-o output_dir` directory. You can replay the crashes by
- feeding them to the target, e.g.: `cat output_dir/crashes/id:000000,* |
- /path/to/tested/program [...program's cmdline...]` You can generate cores or
- use gdb directly to follow up the crashes.
+ feeding them to the target, e.g.:
+
+ ```
+ cat output_dir/crashes/id:000000,* | /path/to/tested/program [...program's cmdline...]
+ ```
+
+ You can generate cores or use gdb directly to follow up the crashes.
## Contact
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 5652816b..57a275d9 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -1,37 +1,539 @@
# The afl-fuzz approach
-American Fuzzy Lop is a brute-force fuzzer coupled with an exceedingly simple
-but rock-solid instrumentation-guided genetic algorithm. It uses a modified
-form of edge coverage to effortlessly pick up subtle, local-scale changes to
-program control flow.
+AFL++ is a brute-force fuzzer coupled with an exceedingly simple but rock-solid
+instrumentation-guided genetic algorithm. It uses a modified form of edge
+coverage to effortlessly pick up subtle, local-scale changes to program control
+flow.
Simplifying a bit, the overall algorithm can be summed up as:
- 1) Load user-supplied initial test cases into the queue,
+1) Load user-supplied initial test cases into the queue.
- 2) Take the next input file from the queue,
+2) Take the next input file from the queue.
- 3) Attempt to trim the test case to the smallest size that doesn't alter
- the measured behavior of the program,
+3) Attempt to trim the test case to the smallest size that doesn't alter the
+ measured behavior of the program.
- 4) Repeatedly mutate the file using a balanced and well-researched variety
- of traditional fuzzing strategies,
+4) Repeatedly mutate the file using a balanced and well-researched variety of
+ traditional fuzzing strategies.
- 5) If any of the generated mutations resulted in a new state transition
- recorded by the instrumentation, add mutated output as a new entry in the
- queue.
+5) If any of the generated mutations resulted in a new state transition recorded
+ by the instrumentation, add mutated output as a new entry in the queue.
- 6) Go to 2.
+6) Go to 2.
The discovered test cases are also periodically culled to eliminate ones that
have been obsoleted by newer, higher-coverage finds; and undergo several other
instrumentation-driven effort minimization steps.
As a side result of the fuzzing process, the tool creates a small,
-self-contained corpus of interesting test cases. These are extremely useful
-for seeding other, labor- or resource-intensive testing regimes - for example,
-for stress-testing browsers, office applications, graphics suites, or
-closed-source tools.
+self-contained corpus of interesting test cases. These are extremely useful for
+seeding other, labor- or resource-intensive testing regimes - for example, for
+stress-testing browsers, office applications, graphics suites, or closed-source
+tools.
The fuzzer is thoroughly tested to deliver out-of-the-box performance far
-superior to blind fuzzing or coverage-only tools.
\ No newline at end of file
+superior to blind fuzzing or coverage-only tools.
+
+## Understanding the status screen
+
+This section provides an overview of the status screen - plus tips for
+troubleshooting any warnings and red text shown in the UI. See
+[README.md](../README.md) for the general instruction manual.
+
+### A note about colors
+
+The status screen and error messages use colors to keep things readable and
+attract your attention to the most important details. For example, red almost
+always means "consult this doc" :-)
+
+Unfortunately, the UI will render correctly only if your terminal is using a
+traditional un*x palette (white text on black background) or something close to
+that.
+
+If you are using inverse video, you may want to change your settings, say:
+
+- For GNOME Terminal, go to `Edit > Profile` preferences, select the "colors"
+ tab, and from the list of built-in schemes, choose "white on black".
+- For the MacOS X Terminal app, open a new window using the "Pro" scheme via the
+ `Shell > New Window` menu (or make "Pro" your default).
+
+Alternatively, if you really like your current colors, you can edit config.h to
+comment out USE_COLORS, then do `make clean all`.
+
+I'm not aware of any other simple way to make this work without causing other
+side effects - sorry about that.
+
+With that out of the way, let's talk about what's actually on the screen...
+
+### The status bar
+
+```
+american fuzzy lop ++3.01a (default) [fast] {0}
+```
+
+The top line shows you which mode afl-fuzz is running in (normal: "american
+fuzzy lop", crash exploration mode: "peruvian rabbit mode") and the version of
+AFL++. Next to the version is the banner, which, if not set with -T by hand,
+will either show the binary name being fuzzed, or the -M/-S main/secondary name
+for parallel fuzzing. Second to last is the power schedule mode being run
+(default: fast). Finally, the last item is the CPU id.
+
+### Process timing
+
+```
+ +----------------------------------------------------+
+ | run time : 0 days, 8 hrs, 32 min, 43 sec |
+ | last new path : 0 days, 0 hrs, 6 min, 40 sec |
+ | last uniq crash : none seen yet |
+ | last uniq hang : 0 days, 1 hrs, 24 min, 32 sec |
+ +----------------------------------------------------+
+```
+
+This section is fairly self-explanatory: it tells you how long the fuzzer has
+been running and how much time has elapsed since its most recent finds. This is
+broken down into "paths" (a shorthand for test cases that trigger new execution
+patterns), crashes, and hangs.
+
+When it comes to timing: there is no hard rule, but most fuzzing jobs should be
+expected to run for days or weeks; in fact, for a moderately complex project,
+the first pass will probably take a day or so. Every now and then, some jobs
+will be allowed to run for months.
+
+There's one important thing to watch out for: if the tool is not finding new
+paths within several minutes of starting, you're probably not invoking the
+target binary correctly and it never gets to parse the input files we're
+throwing at it; other possible explanations are that the default memory limit
+(`-m`) is too restrictive and the program exits after failing to allocate a
+buffer very early on, or that the input files are patently invalid and always
+fail a basic header check.
+
+If there are no new paths showing up for a while, you will eventually see a big
+red warning in this section, too :-)
+
+### Overall results
+
+```
+ +-----------------------+
+ | cycles done : 0 |
+ | total paths : 2095 |
+ | uniq crashes : 0 |
+ | uniq hangs : 19 |
+ +-----------------------+
+```
+
+The first field in this section gives you the count of queue passes done so
+far - that is, the number of times the fuzzer went over all the interesting
+test cases discovered so far, fuzzed them, and looped back to the very
+beginning. Every fuzzing session should be allowed to complete at least one
+cycle; and ideally, should run much longer than that.
+
+As noted earlier, the first pass can take a day or longer, so sit back and
+relax.
+
+To help make the call on when to hit `Ctrl-C`, the cycle counter is color-coded.
+It is shown in magenta during the first pass, progresses to yellow if new finds
+are still being made in subsequent rounds, then blue when that ends - and
+finally, turns green after the fuzzer hasn't been seeing any action for a longer
+while.
+
+The remaining fields in this part of the screen should be pretty obvious:
+there's the number of test cases ("paths") discovered so far, and the number of
+unique faults. The test cases, crashes, and hangs can be explored in real-time
+by browsing the output directory, as discussed in [README.md](../README.md).
+
+### Cycle progress
+
+```
+ +-------------------------------------+
+ | now processing : 1296 (61.86%) |
+ | paths timed out : 0 (0.00%) |
+ +-------------------------------------+
+```
+
+This box tells you how far along the fuzzer is with the current queue cycle: it
+shows the ID of the test case it is currently working on, plus the number of
+inputs it decided to ditch because they were persistently timing out.
+
+The "*" suffix sometimes shown in the first line means that the currently
+processed path is not "favored" (a property discussed later on).
+
+### Map coverage
+
+```
+ +--------------------------------------+
+ | map density : 10.15% / 29.07% |
+ | count coverage : 4.03 bits/tuple |
+ +--------------------------------------+
+```
+
+This section provides some trivia about the coverage observed by the
+instrumentation embedded in the target binary.
+
+The first line in the box tells you how many branch tuples we have already hit,
+in proportion to how much the bitmap can hold. The number on the left describes
+the current input; the one on the right is the value for the entire input
+corpus.
+
+Be wary of extremes:
+
+- Absolute numbers below 200 or so suggest one of three things: that the program
+ is extremely simple; that it is not instrumented properly (e.g., due to being
+ linked against a non-instrumented copy of the target library); or that it is
+ bailing out prematurely on your input test cases. The fuzzer will try to mark
+ this in pink, just to make you aware.
+- Percentages over 70% may very rarely happen with very complex programs that
+ make heavy use of template-generated code. Because high bitmap density makes
+ it harder for the fuzzer to reliably discern new program states, we recommend
+ recompiling the binary with `AFL_INST_RATIO=10` or so and trying again (see
+ [env_variables.md](env_variables.md)). The fuzzer will flag high percentages
+ in red. Chances are, you will never see that unless you're fuzzing extremely
+ hairy software (say, v8, perl, ffmpeg).
+
+The other line deals with the variability in tuple hit counts seen in the
+binary. In essence, if every taken branch is always taken a fixed number of
+times for all the inputs we have tried, this will read `1.00`. As we manage to
+trigger other hit counts for every branch, the needle will start to move toward
+`8.00` (every bit in the 8-bit map hit), but will probably never reach that
+extreme.
+
+Together, the values can be useful for comparing the coverage of several
+different fuzzing jobs that rely on the same instrumented binary.
+
+### Stage progress
+
+```
+ +-------------------------------------+
+ | now trying : interest 32/8 |
+ | stage execs : 3996/34.4k (11.62%) |
+ | total execs : 27.4M |
+ | exec speed : 891.7/sec |
+ +-------------------------------------+
+```
+
+This part gives you an in-depth peek at what the fuzzer is actually doing right
+now. It tells you about the current stage, which can be any of:
+
+- calibration - a pre-fuzzing stage where the execution path is examined to
+ detect anomalies, establish baseline execution speed, and so on. Executed very
+ briefly whenever a new find is being made.
+- trim L/S - another pre-fuzzing stage where the test case is trimmed to the
+ shortest form that still produces the same execution path. The length (L) and
+ stepover (S) are chosen in general relationship to file size.
+- bitflip L/S - deterministic bit flips. There are L bits toggled at any given
+ time, walking the input file with S-bit increments. The current L/S variants
+ are: `1/1`, `2/1`, `4/1`, `8/8`, `16/8`, `32/8`.
+- arith L/8 - deterministic arithmetics. The fuzzer tries to subtract or add
+ small integers to 8-, 16-, and 32-bit values. The stepover is always 8 bits.
+- interest L/8 - deterministic value overwrite. The fuzzer has a list of known
+ "interesting" 8-, 16-, and 32-bit values to try. The stepover is 8 bits.
+- extras - deterministic injection of dictionary terms. This can be shown as
+ "user" or "auto", depending on whether the fuzzer is using a user-supplied
+ dictionary (`-x`) or an auto-created one. You will also see "over" or
+ "insert", depending on whether the dictionary words overwrite existing data or
+ are inserted by offsetting the remaining data to accommodate their length.
+- havoc - a sort-of-fixed-length cycle with stacked random tweaks. The
+ operations attempted during this stage include bit flips, overwrites with
+ random and "interesting" integers, block deletion, block duplication, plus
+ assorted dictionary-related operations (if a dictionary is supplied in the
+ first place).
+- splice - a last-resort strategy that kicks in after the first full queue cycle
+ with no new paths. It is equivalent to 'havoc', except that it first splices
+ together two random inputs from the queue at some arbitrarily selected
+ midpoint.
+- sync - a stage used only when `-M` or `-S` is set (see
+ [parallel_fuzzing.md](parallel_fuzzing.md)). No real fuzzing is involved, but
+ the tool scans the output from other fuzzers and imports test cases as
+ necessary. The first time this is done, it may take several minutes or so.
+
+The remaining fields should be fairly self-evident: there's the exec count
+progress indicator for the current stage, a global exec counter, and a benchmark
+for the current program execution speed. This may fluctuate from one test case
+to another, but ideally the benchmark should be over 500 execs/sec most of the
+time - and if it stays below 100, the job will probably take a very long time.
+
+The fuzzer will explicitly warn you about slow targets, too. If this happens,
+see the [perf_tips.md](perf_tips.md) file included with the fuzzer for ideas on
+how to speed things up.
+
+### Findings in depth
+
+```
+ +--------------------------------------+
+ | favored paths : 879 (41.96%) |
+ | new edges on : 423 (20.19%) |
+ | total crashes : 0 (0 unique) |
+ | total tmouts : 24 (19 unique) |
+ +--------------------------------------+
+```
+
+This gives you several metrics that are of interest mostly to complete nerds.
+The section includes the number of paths that the fuzzer likes the most based on
+a minimization algorithm baked into the code (these will get considerably more
+air time), and the number of test cases that actually resulted in better edge
+coverage (versus just pushing the branch hit counters up). There are also
+additional, more detailed counters for crashes and timeouts.
+
+Note that the timeout counter is somewhat different from the hang counter; this
+one includes all test cases that exceeded the timeout, even if they did not
+exceed it by a margin sufficient to be classified as hangs.
+
+### Fuzzing strategy yields
+
+```
+ +-----------------------------------------------------+
+ | bit flips : 57/289k, 18/289k, 18/288k |
+ | byte flips : 0/36.2k, 4/35.7k, 7/34.6k |
+ | arithmetics : 53/2.54M, 0/537k, 0/55.2k |
+ | known ints : 8/322k, 12/1.32M, 10/1.70M |
+ | dictionary : 9/52k, 1/53k, 1/24k |
+ |havoc/splice : 1903/20.0M, 0/0 |
+ |py/custom/rq : unused, 53/2.54M, unused |
+ | trim/eff : 20.31%/9201, 17.05% |
+ +-----------------------------------------------------+
+```
+
+This is just another nerd-targeted section keeping track of how many paths we
+have netted, in proportion to the number of execs attempted, for each of the
+fuzzing strategies discussed earlier on. This serves to convincingly validate
+assumptions about the usefulness of the various approaches taken by afl-fuzz.
+
+The trim strategy stats in this section are a bit different from the rest. The
+first number in this line shows the ratio of bytes removed from the input files;
+the second one corresponds to the number of execs needed to achieve this goal.
+Finally, the third number shows the proportion of bytes that, although not
+possible to remove, were deemed to have no effect and were excluded from some of
+the more expensive deterministic fuzzing steps.
+
+Note that when deterministic mutation mode is off (which is the default because
+it is not very efficient), the first five lines display "disabled (default,
+enable with -D)".
+
+Only the strategies that are actually enabled will show counters.
+
+### Path geometry
+
+```
+ +---------------------+
+ | levels : 5 |
+ | pending : 1570 |
+ | pend fav : 583 |
+ | own finds : 0 |
+ | imported : 0 |
+ | stability : 100.00% |
+ +---------------------+
+```
+
+The first field in this section tracks the path depth reached through the guided
+fuzzing process. In essence: the initial test cases supplied by the user are
+considered "level 1". The test cases that can be derived from that through
+traditional fuzzing are considered "level 2"; the ones derived by using these as
+inputs to subsequent fuzzing rounds are "level 3"; and so forth. The maximum
+depth is therefore a rough proxy for how much value you're getting out of the
+instrumentation-guided approach taken by afl-fuzz.
+
+The next field shows you the number of inputs that have not gone through any
+fuzzing yet. The same stat is also given for "favored" entries that the fuzzer
+really wants to get to in this queue cycle (the non-favored entries may have to
+wait a couple of cycles to get their chance).
+
+Next, we have the number of new paths found during this fuzzing section and
+imported from other fuzzer instances when doing parallelized fuzzing; and the
+extent to which identical inputs appear to sometimes produce variable behavior
+in the tested binary.
+
+That last bit is actually fairly interesting: it measures the consistency of
+observed traces. If a program always behaves the same for the same input data,
+it will earn a score of 100%. When the value is lower but still shown in purple,
+the fuzzing process is unlikely to be negatively affected. If it goes into red,
+you may be in trouble, since AFL will have difficulty discerning between
+meaningful and "phantom" effects of tweaking the input file.
+
+Now, most targets will just get a 100% score, but when you see lower figures,
+there are several things to look at:
+
+- The use of uninitialized memory in conjunction with some intrinsic sources of
+ entropy in the tested binary. Harmless to AFL, but could be indicative of a
+ security bug.
+- Attempts to manipulate persistent resources, such as left over temporary files
+ or shared memory objects. This is usually harmless, but you may want to
+ double-check to make sure the program isn't bailing out prematurely. Running
+ out of disk space, SHM handles, or other global resources can trigger this,
+ too.
+- Hitting some functionality that is actually designed to behave randomly.
+ Generally harmless. For example, when fuzzing sqlite, an input like `select
+ random();` will trigger a variable execution path.
+- Multiple threads executing at once in semi-random order. This is harmless when
+ the 'stability' metric stays over 90% or so, but can become an issue if not.
+ Here's what to try:
+ * Use afl-clang-fast from [instrumentation](../instrumentation/) - it uses a
+ thread-local tracking model that is less prone to concurrency issues,
+ * See if the target can be compiled or run without threads. Common
+ `./configure` options include `--without-threads`, `--disable-pthreads`, or
+ `--disable-openmp`.
+ * Replace pthreads with GNU Pth (https://www.gnu.org/software/pth/), which
+ allows you to use a deterministic scheduler.
+- In persistent mode, minor drops in the "stability" metric can be normal,
+ because not all the code behaves identically when re-entered; but major dips
+ may signify that the code within `__AFL_LOOP()` is not behaving correctly on
+ subsequent iterations (e.g., due to incomplete clean-up or reinitialization of
+ the state) and that most of the fuzzing effort goes to waste.
+
+The paths where variable behavior is detected are marked with a matching entry
+in the `<out_dir>/queue/.state/variable_behavior/` directory, so you can look
+them up easily.
+
+### CPU load
+
+```
+ [cpu: 25%]
+```
+
+This tiny widget shows the apparent CPU utilization on the local system. It is
+calculated by taking the number of processes in the "runnable" state, and then
+comparing it to the number of logical cores on the system.
+
+If the value is shown in green, you are using fewer CPU cores than available on
+your system and can probably parallelize to improve performance; for tips on how
+to do that, see [parallel_fuzzing.md](parallel_fuzzing.md).
+
+If the value is shown in red, your CPU is *possibly* oversubscribed, and running
+additional fuzzers may not give you any benefits.
+
+Of course, this benchmark is very simplistic; it tells you how many processes
+are ready to run, but not how resource-hungry they may be. It also doesn't
+distinguish between physical cores, logical cores, and virtualized CPUs; the
+performance characteristics of each of these will differ quite a bit.
+
+If you want a more accurate measurement, you can run the `afl-gotcpu` utility
+from the command line.
+
+## Interpreting output
+
+See [#understanding-the-status-screen](#understanding-the-status-screen) for
+information on how to interpret the displayed stats and monitor the health of
+the process. Be sure to consult that section especially if any UI elements are
+highlighted in red.
+
+The fuzzing process will continue until you press Ctrl-C. At a minimum, you want
+to allow the fuzzer to complete one queue cycle, which may take anywhere from a
+couple of hours to a week or so.
+
+There are three subdirectories created within the output directory and updated
+in real-time:
+
+- queue/ - test cases for every distinctive execution path, plus all the
+ starting files given by the user. This is the synthesized corpus
+ mentioned in section 2.
+
+ Before using this corpus for any other purposes, you can shrink
+ it to a smaller size using the afl-cmin tool. The tool will find
+ a smaller subset of files offering equivalent edge coverage.
+
+- crashes/ - unique test cases that cause the tested program to receive a fatal
+ signal (e.g., SIGSEGV, SIGILL, SIGABRT). The entries are grouped by
+ the received signal.
+
+- hangs/ - unique test cases that cause the tested program to time out. The
+ default time limit before something is classified as a hang is the
+ larger of 1 second and the value of the -t parameter. The value can
+ be fine-tuned by setting AFL_HANG_TMOUT, but this is rarely
+ necessary.
+
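+The corpus minimization with afl-cmin mentioned above could look roughly like
+this (the directory and target names are assumptions):
+
+```shell
+# reduce the queue of one instance to a minimal set with the same edge coverage
+afl-cmin -i output_dir/default/queue -o minimized_corpus -- \
+    /path/to/tested/program [...program's cmdline...]
+# add @@ where the input file name belongs if the target does not read stdin
+```
+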
+Crashes and hangs are considered "unique" if the associated execution paths
+involve any state transitions not seen in previously-recorded faults. If a
+single bug can be reached in multiple ways, there will be some count inflation
+early in the process, but this should quickly taper off.
+
+The file names for crashes and hangs are correlated with the parent, non-faulting
+queue entries. This should help with debugging.
+
+## Visualizing
+
+If you have gnuplot installed, you can also generate some pretty graphs for any
+active fuzzing task using afl-plot. For an example of what this looks like, see
+[https://lcamtuf.coredump.cx/afl/plot/](https://lcamtuf.coredump.cx/afl/plot/).
+
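+A minimal invocation could look like this (the directory names are assumptions;
+the first argument is the fuzzer instance directory containing `plot_data`):
+
+```shell
+afl-plot output_dir/default /tmp/afl_graphs
+# the graphs (and an index.html) are written to /tmp/afl_graphs
+```
+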
+You can also manually build and install afl-plot-ui, which is a helper utility
+for showing the graphs generated by afl-plot in a graphical window using GTK.
+You can build and install it as follows:
+
+```shell
+sudo apt install libgtk-3-0 libgtk-3-dev pkg-config
+cd utils/plot_ui
+make
+cd ../../
+sudo make install
+```
+
+
+### Addendum: status and plot files
+
+For unattended operation, some of the key status screen information can also be
+found in a machine-readable format in the `fuzzer_stats` file in the output
+directory. This includes:
+
+- `start_time` - unix time indicating the start time of afl-fuzz
+- `last_update` - unix time corresponding to the last update of this file
+- `run_time` - run time in seconds to the last update of this file
+- `fuzzer_pid` - PID of the fuzzer process
+- `cycles_done` - queue cycles completed so far
+- `cycles_wo_finds` - number of cycles without any new paths found
+- `execs_done` - number of execve() calls attempted
+- `execs_per_sec` - overall number of execs per second
+- `paths_total` - total number of entries in the queue
+- `paths_favored` - number of queue entries that are favored
+- `paths_found` - number of entries discovered through local fuzzing
+- `paths_imported` - number of entries imported from other instances
+- `max_depth` - number of levels in the generated data set
+- `cur_path` - currently processed entry number
+- `pending_favs` - number of favored entries still waiting to be fuzzed
+- `pending_total` - number of all entries waiting to be fuzzed
+- `variable_paths` - number of test cases showing variable behavior
+- `stability` - percentage of bitmap bytes that behave consistently
+- `bitmap_cvg` - percentage of edge coverage found in the map so far
+- `unique_crashes` - number of unique crashes recorded
+- `unique_hangs` - number of unique hangs encountered
+- `last_path` - seconds since the last path was found
+- `last_crash` - seconds since the last crash was found
+- `last_hang` - seconds since the last hang was found
+- `execs_since_crash` - execs since the last crash was found
+- `exec_timeout` - the -t command line value
+- `slowest_exec_ms` - real time of the slowest execution in ms
+- `peak_rss_mb` - max rss usage reached during fuzzing in MB
+- `edges_found` - how many edges have been found
+- `var_byte_count` - how many edges are non-deterministic
+- `afl_banner` - banner text (e.g. the target name)
+- `afl_version` - the version of AFL used
+- `target_mode` - default, persistent, qemu, unicorn, non-instrumented
+- `command_line` - full command line used for the fuzzing session
+
+Most of these map directly to the UI elements discussed earlier on.
+
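+For quick unattended checks, the file can simply be grepped; a rough sketch,
+assuming an output directory named `output_dir`:
+
+```shell
+# print a few key metrics for every instance under output_dir/
+grep -H -e execs_per_sec -e unique_crashes output_dir/*/fuzzer_stats
+# afl-whatsup output_dir   # summarizes the stats of all instances
+```
+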
+On top of that, you can also find an entry called `plot_data`, containing a
+plottable history for most of these fields. If you have gnuplot installed, you
+can turn this into a nice progress report with the included `afl-plot` tool.
+
+### Addendum: automatically sending metrics with StatsD
+
+In a CI environment or when running multiple fuzzers, it can be tedious to log
+into each of them or deploy scripts to read the fuzzer statistics. Using
+`AFL_STATSD` (and the other related environment variables `AFL_STATSD_HOST`,
+`AFL_STATSD_PORT`, `AFL_STATSD_TAGS_FLAVOR`) you can automatically send metrics
+to your favorite StatsD server. Depending on your StatsD server, you will be
+able to monitor, trigger alerts, or perform actions based on these metrics (e.g.,
+alert on slow exec/s for a new build, threshold of crashes, time since last
+crash > X, etc).
+
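+A rough sketch of the setup (host, port, flavor, and target are assumptions to
+adapt to your environment):
+
+```shell
+export AFL_STATSD=1
+export AFL_STATSD_HOST=127.0.0.1
+export AFL_STATSD_PORT=8125
+export AFL_STATSD_TAGS_FLAVOR=dogstatsd
+afl-fuzz -i input -o output -- /path/to/tested/program @@
+```
+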
+The selected metrics are a subset of all the metrics found in the status and in
+the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
+`execs_done`, `execs_per_sec`, `paths_total`, `paths_favored`, `paths_found`,
+`paths_imported`, `max_depth`, `cur_path`, `pending_favs`, `pending_total`,
+`variable_paths`, `unique_crashes`, `unique_hangs`, `total_crashes`,
+`slowest_exec_ms`, `edges_found`, `var_byte_count`, `havoc_expansion`. Their
+definitions can be found in the addendum above.
+
+When using multiple fuzzer instances with StatsD, it is *strongly* recommended
+to set up the flavor (`AFL_STATSD_TAGS_FLAVOR`) to match your StatsD server.
+This will allow you to see individual fuzzer performance, detect bad ones, and
+see the progress of each strategy.
\ No newline at end of file
diff --git a/docs/custom_mutators.md b/docs/custom_mutators.md
index 8b5a4068..b1dfd309 100644
--- a/docs/custom_mutators.md
+++ b/docs/custom_mutators.md
@@ -127,9 +127,9 @@ def deinit(): # optional for Python
- `describe` (optional):
- When this function is called, it shall describe the current testcase,
+ When this function is called, it shall describe the current test case,
generated by the last mutation. This will be called, for example,
- to name the written testcase file after a crash occurred.
+ to name the written test case file after a crash occurred.
Using it can help to reproduce crashing mutations.
- `havoc_mutation` and `havoc_mutation_probability` (optional):
@@ -224,7 +224,7 @@ Optionally, the following environment variables are supported:
- `AFL_CUSTOM_MUTATOR_ONLY`
- Disable all other mutation stages. This can prevent broken testcases
+ Disable all other mutation stages. This can prevent broken test cases
(those that your Python module can't work with anymore) to fill up your
queue. Best combined with a custom trimming routine (see below) because
trimming can cause the same test breakage like havoc and splice.
diff --git a/docs/env_variables.md b/docs/env_variables.md
index 65cca0dc..34318cd4 100644
--- a/docs/env_variables.md
+++ b/docs/env_variables.md
@@ -306,8 +306,9 @@ checks or alter some of the more exotic semantics of the tool:
exit soon after the first crash is found.
- `AFL_CMPLOG_ONLY_NEW` will only perform the expensive cmplog feature for
- newly found testcases and not for testcases that are loaded on startup (`-i
- in`). This is an important feature to set when resuming a fuzzing session.
+ newly found test cases and not for test cases that are loaded on startup
+ (`-i in`). This is an important feature to set when resuming a fuzzing
+ session.
- Setting `AFL_CRASH_EXITCODE` sets the exit code AFL treats as crash. For
example, if `AFL_CRASH_EXITCODE='-1'` is set, each input resulting in a `-1`
@@ -447,8 +448,8 @@ checks or alter some of the more exotic semantics of the tool:
- If you are using persistent mode (you should, see
[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)),
- some targets keep inherent state due which a detected crash testcase does
- not crash the target again when the testcase is given. To be able to still
+ some targets keep inherent state due which a detected crash test case does
+ not crash the target again when the test case is given. To be able to still
re-trigger these crashes, you can use the `AFL_PERSISTENT_RECORD` variable
with a value of how many previous fuzz cases to keep prio a crash. If set to
e.g. 10, then the 9 previous inputs are written to out/default/crashes as
diff --git a/docs/features.md b/docs/features.md
index f44e32ff..05670e6f 100644
--- a/docs/features.md
+++ b/docs/features.md
@@ -17,7 +17,7 @@
| Context Coverage | | x(6) | | | | | |
| Auto Dictionary | | x(7) | | | | | |
| Snapshot LKM Support | | (x)(8) | (x)(8) | | (x)(5) | | |
- | Shared Memory Testcases | | x | x | x86[_64]/arm64 | x | x | |
+ | Shared Memory Test cases | | x | x | x86[_64]/arm64 | x | x | |
1. default for LLVM >= 9.0, env var for older version due an efficiency bug in previous llvm versions
2. GCC creates non-performant code, hence it is disabled in gcc_plugin
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index 5306cbef..5b4a9df7 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -1,24 +1,25 @@
# Fuzzing with AFL++
The following describes how to fuzz with a target if source code is available.
-If you have a binary-only target please skip to [#Instrumenting binary-only apps](#Instrumenting binary-only apps)
+If you have a binary-only target, please go to
+[fuzzing_binary-only_targets.md](fuzzing_binary-only_targets.md).
-Fuzzing source code is a three-step process.
+Fuzzing source code is a three-step process:
1. Compile the target with a special compiler that prepares the target to be
fuzzed efficiently. This step is called "instrumenting a target".
2. Prepare the fuzzing by selecting and optimizing the input corpus for the
target.
-3. Perform the fuzzing of the target by randomly mutating input and assessing
- if a generated input was processed in a new path in the target binary.
+3. Perform the fuzzing of the target by randomly mutating input and assessing if
+ a generated input was processed in a new path in the target binary.
### 1. Instrumenting that target
#### a) Selecting the best AFL++ compiler for instrumenting the target
AFL++ comes with a central compiler `afl-cc` that incorporates various different
-kinds of compiler targets and and instrumentation options.
-The following evaluation flow will help you to select the best possible.
+kinds of compiler targets and instrumentation options. The following
+evaluation flow will help you to select the best possible option.
It is highly recommended to have the newest llvm version possible installed,
anything below 9 is not recommended.
@@ -51,132 +52,131 @@ anything below 9 is not recommended.
Clickable README links for the chosen compiler:
- * [LTO mode - afl-clang-lto](../instrumentation/README.lto.md)
- * [LLVM mode - afl-clang-fast](../instrumentation/README.llvm.md)
- * [GCC_PLUGIN mode - afl-gcc-fast](../instrumentation/README.gcc_plugin.md)
- * GCC/CLANG modes (afl-gcc/afl-clang) have no README as they have no own features
+* [LTO mode - afl-clang-lto](../instrumentation/README.lto.md)
+* [LLVM mode - afl-clang-fast](../instrumentation/README.llvm.md)
+* [GCC_PLUGIN mode - afl-gcc-fast](../instrumentation/README.gcc_plugin.md)
+* GCC/CLANG modes (afl-gcc/afl-clang) have no README as they have no features
+  of their own
You can select the mode for the afl-cc compiler by:
- 1. use a symlink to afl-cc: afl-gcc, afl-g++, afl-clang, afl-clang++,
- afl-clang-fast, afl-clang-fast++, afl-clang-lto, afl-clang-lto++,
- afl-gcc-fast, afl-g++-fast (recommended!)
- 2. using the environment variable AFL_CC_COMPILER with MODE
- 3. passing --afl-MODE command line options to the compiler via CFLAGS/CXXFLAGS/CPPFLAGS
+1. use a symlink to afl-cc: afl-gcc, afl-g++, afl-clang, afl-clang++,
+ afl-clang-fast, afl-clang-fast++, afl-clang-lto, afl-clang-lto++,
+ afl-gcc-fast, afl-g++-fast (recommended!)
+2. using the environment variable AFL_CC_COMPILER with MODE
+3. passing --afl-MODE command line options to the compiler via
+ CFLAGS/CXXFLAGS/CPPFLAGS
MODE can be one of: LTO (afl-clang-lto*), LLVM (afl-clang-fast*), GCC_PLUGIN
(afl-g*-fast) or GCC (afl-gcc/afl-g++) or CLANG(afl-clang/afl-clang++).
-Because no AFL specific command-line options are accepted (beside the
---afl-MODE command), the compile-time tools make fairly broad use of environment
-variables, which can be listed with `afl-cc -hh` or by reading [env_variables.md](env_variables.md).
+Because no AFL-specific command-line options are accepted (besides the --afl-MODE
+command), the compile-time tools make fairly broad use of environment variables,
+which can be listed with `afl-cc -hh` or by reading
+[env_variables.md](env_variables.md).
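+
+As a rough illustration (the target's build system and the exact flag spellings
+are assumptions - check `afl-cc -hh` for your version), the three methods could
+look like this:
+
+```
+# 1) via the symlink name
+CC=afl-clang-lto CXX=afl-clang-lto++ ./configure --disable-shared
+# 2) via the environment variable
+AFL_CC_COMPILER=LLVM CC=afl-cc CXX=afl-c++ ./configure --disable-shared
+# 3) via a --afl-MODE flag passed through the compiler flags
+CC=afl-cc CFLAGS="--afl-lto" ./configure --disable-shared
+```
+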
#### b) Selecting instrumentation options
-The following options are available when you instrument with LTO mode (afl-clang-fast/afl-clang-lto):
-
- * Splitting integer, string, float and switch comparisons so AFL++ can easier
- solve these. This is an important option if you do not have a very good
- and large input corpus. This technique is called laf-intel or COMPCOV.
- To use this set the following environment variable before compiling the
- target: `export AFL_LLVM_LAF_ALL=1`
- You can read more about this in [instrumentation/README.laf-intel.md](../instrumentation/README.laf-intel.md)
- * A different technique (and usually a better one than laf-intel) is to
- instrument the target so that any compare values in the target are sent to
- AFL++ which then tries to put these values into the fuzzing data at different
- locations. This technique is very fast and good - if the target does not
- transform input data before comparison. Therefore this technique is called
- `input to state` or `redqueen`.
- If you want to use this technique, then you have to compile the target
- twice, once specifically with/for this mode by setting `AFL_LLVM_CMPLOG=1`,
- and pass this binary to afl-fuzz via the `-c` parameter.
- Note that you can compile also just a cmplog binary and use that for both
- however there will be a performance penality.
- You can read more about this in [instrumentation/README.cmplog.md](../instrumentation/README.cmplog.md)
-
-If you use LTO, LLVM or GCC_PLUGIN mode (afl-clang-fast/afl-clang-lto/afl-gcc-fast)
-you have the option to selectively only instrument parts of the target that you
-are interested in:
-
- * To instrument only those parts of the target that you are interested in
- create a file with all the filenames of the source code that should be
- instrumented.
- For afl-clang-lto and afl-gcc-fast - or afl-clang-fast if a mode other than
- DEFAULT/PCGUARD is used or you have llvm > 10.0.0 - just put one
- filename or function per line (no directory information necessary for
- filenames9, and either set `export AFL_LLVM_ALLOWLIST=allowlist.txt` **or**
- `export AFL_LLVM_DENYLIST=denylist.txt` - depending on if you want per
- default to instrument unless noted (DENYLIST) or not perform instrumentation
- unless requested (ALLOWLIST).
- **NOTE:** During optimization functions might be inlined and then would not match!
- See [instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md)
+The following options are available when you instrument with LTO mode
+(afl-clang-fast/afl-clang-lto):
+
+* Splitting integer, string, float and switch comparisons so AFL++ can solve
+  these more easily. This is an important option if you do not have a very
+  good and large input corpus. This technique is called laf-intel or COMPCOV.
+  To use this, set the following environment variable before compiling the
+  target: `export AFL_LLVM_LAF_ALL=1`. You can read more about this in
+ [instrumentation/README.laf-intel.md](../instrumentation/README.laf-intel.md).
+* A different technique (and usually a better one than laf-intel) is to
+ instrument the target so that any compare values in the target are sent to
+ AFL++ which then tries to put these values into the fuzzing data at different
+ locations. This technique is very fast and good - if the target does not
+ transform input data before comparison. Therefore this technique is called
+ `input to state` or `redqueen`. If you want to use this technique, then you
+ have to compile the target twice, once specifically with/for this mode by
+  setting `AFL_LLVM_CMPLOG=1`, and pass this binary to afl-fuzz via the `-c`
+  parameter (see the sketch after this list). Note that you can also compile
+  just a cmplog binary and use that for both, however there will be a
+  performance penalty. You can read more about this in
+ [instrumentation/README.cmplog.md](../instrumentation/README.cmplog.md).
+
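+A hedged sketch of the two-build cmplog workflow referenced above (binary names
+and the build commands are assumptions about the target):
+
+```
+# normal instrumented build
+CC=afl-clang-fast ./configure && make
+cp ./target ./target.fuzz
+# second build with cmplog instrumentation
+make clean
+AFL_LLVM_CMPLOG=1 CC=afl-clang-fast ./configure && make
+cp ./target ./target.cmplog
+# fuzz the normal binary and pass the cmplog binary via -c
+afl-fuzz -i input -o output -c ./target.cmplog -- ./target.fuzz @@
+```
+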
+If you use LTO, LLVM or GCC_PLUGIN mode
+(afl-clang-fast/afl-clang-lto/afl-gcc-fast) you have the option to selectively
+only instrument parts of the target that you are interested in:
+
+* To instrument only those parts of the target that you are interested in,
+  create a file with all the filenames of the source code that should be
+  instrumented. For afl-clang-lto and afl-gcc-fast - or afl-clang-fast if a
+  mode other than DEFAULT/PCGUARD is used or you have llvm > 10.0.0 - just put
+  one filename or function per line (no directory information necessary for
+  filenames), and either set `export AFL_LLVM_ALLOWLIST=allowlist.txt` **or**
+  `export AFL_LLVM_DENYLIST=denylist.txt` - depending on whether you want to
+  instrument by default unless noted (DENYLIST) or not perform instrumentation
+  unless requested (ALLOWLIST). **NOTE:** During optimization, functions might
+  be inlined and then would not match! See
+ [instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md)
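+
+A small sketch of the allowlist variant, with hypothetical file names:
+
+```
+# only instrument code from these source files (names are placeholders)
+printf 'parser.c\ndecoder.c\n' > allowlist.txt
+export AFL_LLVM_ALLOWLIST=allowlist.txt
+CC=afl-clang-lto CXX=afl-clang-lto++ ./configure --disable-shared
+make
+```
+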
There are many more options and modes available however these are most of the
time less effective. See:
- * [instrumentation/README.ctx.md](../instrumentation/README.ctx.md)
- * [instrumentation/README.ngram.md](../instrumentation/README.ngram.md)
+* [instrumentation/README.ctx.md](../instrumentation/README.ctx.md)
+* [instrumentation/README.ngram.md](../instrumentation/README.ngram.md)
AFL++ performs "never zero" counting in its bitmap. You can read more about this
here:
- * [instrumentation/README.neverzero.md](../instrumentation/README.neverzero.md)
+* [instrumentation/README.neverzero.md](../instrumentation/README.neverzero.md)
#### c) Sanitizers
-It is possible to use sanitizers when instrumenting targets for fuzzing,
-which allows you to find bugs that would not necessarily result in a crash.
+It is possible to use sanitizers when instrumenting targets for fuzzing, which
+allows you to find bugs that would not necessarily result in a crash.
Note that sanitizers have a huge impact on CPU (= less executions per second)
-and RAM usage. Also you should only run one afl-fuzz instance per sanitizer type.
-This is enough because a use-after-free bug will be picked up, e.g. by
-ASAN (address sanitizer) anyway when syncing to other fuzzing instances,
-so not all fuzzing instances need to be instrumented with ASAN.
+and RAM usage. Also you should only run one afl-fuzz instance per sanitizer
+type. This is enough because a use-after-free bug will be picked up, e.g. by
+ASAN (address sanitizer) anyway when syncing to other fuzzing instances, so not
+all fuzzing instances need to be instrumented with ASAN.
The following sanitizers have built-in support in AFL++:
- * ASAN = Address SANitizer, finds memory corruption vulnerabilities like
- use-after-free, NULL pointer dereference, buffer overruns, etc.
- Enabled with `export AFL_USE_ASAN=1` before compiling.
- * MSAN = Memory SANitizer, finds read access to uninitialized memory, eg.
- a local variable that is defined and read before it is even set.
- Enabled with `export AFL_USE_MSAN=1` before compiling.
- * UBSAN = Undefined Behaviour SANitizer, finds instances where - by the
- C and C++ standards - undefined behaviour happens, e.g. adding two
- signed integers together where the result is larger than a signed integer
- can hold.
- Enabled with `export AFL_USE_UBSAN=1` before compiling.
- * CFISAN = Control Flow Integrity SANitizer, finds instances where the
- control flow is found to be illegal. Originally this was rather to
- prevent return oriented programming exploit chains from functioning,
- in fuzzing this is mostly reduced to detecting type confusion
- vulnerabilities - which is however one of the most important and dangerous
- C++ memory corruption classes!
- Enabled with `export AFL_USE_CFISAN=1` before compiling.
- * TSAN = Thread SANitizer, finds thread race conditions.
- Enabled with `export AFL_USE_TSAN=1` before compiling.
- * LSAN = Leak SANitizer, finds memory leaks in a program. This is not really
- a security issue, but for developers this can be very valuable.
- Note that unlike the other sanitizers above this needs
- `__AFL_LEAK_CHECK();` added to all areas of the target source code where you
- find a leak check necessary!
- Enabled with `export AFL_USE_LSAN=1` before compiling.
-
-It is possible to further modify the behaviour of the sanitizers at run-time
-by setting `ASAN_OPTIONS=...`, `LSAN_OPTIONS` etc. - the available parameters
-can be looked up in the sanitizer documentation of llvm/clang.
-afl-fuzz however requires some specific parameters important for fuzzing to be
-set. If you want to set your own, it might bail and report what it is missing.
+* ASAN = Address SANitizer, finds memory corruption vulnerabilities like
+ use-after-free, NULL pointer dereference, buffer overruns, etc. Enabled with
+ `export AFL_USE_ASAN=1` before compiling.
+* MSAN = Memory SANitizer, finds read access to uninitialized memory, e.g. a
+ local variable that is defined and read before it is even set. Enabled with
+ `export AFL_USE_MSAN=1` before compiling.
+* UBSAN = Undefined Behaviour SANitizer, finds instances where - by the C and
+ C++ standards - undefined behaviour happens, e.g. adding two signed integers
+ together where the result is larger than a signed integer can hold. Enabled
+ with `export AFL_USE_UBSAN=1` before compiling.
+* CFISAN = Control Flow Integrity SANitizer, finds instances where the control
+ flow is found to be illegal. Originally this was rather to prevent return
+  oriented programming exploit chains from functioning; in fuzzing, this is
+ mostly reduced to detecting type confusion vulnerabilities - which is,
+ however, one of the most important and dangerous C++ memory corruption
+ classes! Enabled with `export AFL_USE_CFISAN=1` before compiling.
+* TSAN = Thread SANitizer, finds thread race conditions. Enabled with `export
+ AFL_USE_TSAN=1` before compiling.
+* LSAN = Leak SANitizer, finds memory leaks in a program. This is not really a
+  security issue, but for developers this can be very valuable. Note that,
+  unlike the other sanitizers above, this needs `__AFL_LEAK_CHECK();` added to
+  all areas of the target source code where you find a leak check necessary!
+  Enabled with `export AFL_USE_LSAN=1` before compiling.
+
+It is possible to further modify the behaviour of the sanitizers at run-time by
+setting `ASAN_OPTIONS=...`, `LSAN_OPTIONS` etc. - the available parameters can
+be looked up in the sanitizer documentation of llvm/clang. afl-fuzz, however,
+requires some specific parameters important for fuzzing to be set. If you want
+to set your own, it might bail and report what it is missing.
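+
+As an illustration (the binary name, the build system, and the `-S` instance
+name are assumptions):
+
+```
+# build one extra, ASAN-instrumented copy of the target
+make clean
+AFL_USE_ASAN=1 CC=afl-clang-fast CXX=afl-clang-fast++ ./configure --disable-shared
+make
+cp ./target ./target.asan
+# run a dedicated secondary instance against it; afl-fuzz supplies suitable
+# ASAN_OPTIONS unless you set your own
+afl-fuzz -i input -o output -S asan -- ./target.asan @@
+```
+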
Note that some sanitizers cannot be used together, e.g. ASAN and MSAN, and
others often cannot work together because of target weirdness, e.g. ASAN and
CFISAN. You might need to experiment which sanitizers you can combine in a
-target (which means more instances can be run without a sanitized target,
-which is more effective).
+target (which means more instances can be run without a sanitized target, which
+is more effective).
#### d) Modify the target
-If the target has features that make fuzzing more difficult, e.g.
-checksums, HMAC, etc. then modify the source code so that checks for these
-values are removed.
-This can even be done safely for source code used in operational products
-by eliminating these checks within these AFL specific blocks:
+If the target has features that make fuzzing more difficult, e.g. checksums,
+HMAC, etc. then modify the source code so that checks for these values are
+removed. This can even be done safely for source code used in operational
+products by eliminating these checks within these AFL specific blocks:
```
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
@@ -193,25 +193,24 @@ All AFL++ compilers will set this preprocessor definition automatically.
In this step the target source code is compiled so that it can be fuzzed.
Basically you have to tell the target build system that the selected AFL++
-compiler is used. Also - if possible - you should always configure the
-build system such that the target is compiled statically and not dynamically.
-How to do this is described below.
+compiler is used. Also - if possible - you should always configure the build
+system such that the target is compiled statically and not dynamically. How to
+do this is described below.
-The #1 rule when instrumenting a target is: avoid instrumenting shared
-libraries at all cost. You would need to set LD_LIBRARY_PATH to point to
-these, you could accidently type "make install" and install them system wide -
-so don't. Really don't.
-**Always compile libraries you want to have instrumented as static and link
-these to the target program!**
+The #1 rule when instrumenting a target is: avoid instrumenting shared libraries
+at all cost. You would need to set LD_LIBRARY_PATH to point to these, you could
+accidentally type "make install" and install them system wide - so don't. Really
+don't. **Always compile libraries you want to have instrumented as static and
+link these to the target program!**
Then build the target. (Usually with `make`)
**NOTES**
-1. sometimes configure and build systems are fickle and do not like
- stderr output (and think this means a test failure) - which is something
- AFL++ likes to do to show statistics. It is recommended to disable AFL++
- instrumentation reporting via `export AFL_QUIET=1`.
+1. sometimes configure and build systems are fickle and do not like stderr
+ output (and think this means a test failure) - which is something AFL++ likes
+ to do to show statistics. It is recommended to disable AFL++ instrumentation
+ reporting via `export AFL_QUIET=1`.
2. sometimes configure and build systems error on warnings - these should be
disabled (e.g. `--disable-werror` for some configure scripts).
@@ -249,41 +248,46 @@ Sometimes cmake and configure do not pick up the AFL++ compiler, or the
ranlib/ar that is needed - because this was just not foreseen by the developer
of the target. Or they have non-standard options. Figure out if there is a
non-standard way to set this, otherwise set up the build normally and edit the
-generated build environment afterwards manually to point it to the right compiler
-(and/or ranlib and ar).
+generated build environment afterwards manually to point it to the right
+compiler (and/or ranlib and ar).
#### f) Better instrumentation
If you just fuzz a target program as-is you are wasting a great opportunity for
much more fuzzing speed.
-This variant requires the usage of afl-clang-lto, afl-clang-fast or afl-gcc-fast.
+This variant requires the usage of afl-clang-lto, afl-clang-fast or
+afl-gcc-fast.
-It is the so-called `persistent mode`, which is much, much faster but
-requires that you code a source file that is specifically calling the target
-functions that you want to fuzz, plus a few specific AFL++ functions around
-it. See [instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md) for details.
+It is the so-called `persistent mode`, which is much, much faster but requires
+that you code a source file that is specifically calling the target functions
+that you want to fuzz, plus a few specific AFL++ functions around it. See
+[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
+for details.
-Basically if you do not fuzz a target in persistent mode then you are just
-doing it for a hobby and not professionally :-).
+Basically if you do not fuzz a target in persistent mode then you are just doing
+it for a hobby and not professionally :-).
#### g) libfuzzer fuzzer harnesses with LLVMFuzzerTestOneInput()
libfuzzer `LLVMFuzzerTestOneInput()` harnesses are the defacto standard
for fuzzing, and they can be used with AFL++ (and honggfuzz) as well!
Compiling them is as simple as:
+
```
afl-clang-fast++ -fsanitize=fuzzer -o harness harness.cpp targetlib.a
```
+
You can even use advanced libfuzzer features like `FuzzedDataProvider`,
`LLVMFuzzerMutate()` etc. and they will work!
The generated binary is fuzzed with afl-fuzz like any other fuzz target.
Bonus: the target is already optimized for fuzzing due to persistent mode and
-shared-memory testcases and hence gives you the fastest speed possible.
+shared-memory test cases and hence gives you the fastest speed possible.
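+
+Running it could then be as simple as (the seed directory is an assumption):
+
+```
+afl-fuzz -i seeds_dir -o output_dir -- ./harness
+```
+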
-For more information see [utils/aflpp_driver/README.md](../utils/aflpp_driver/README.md)
+For more information, see
+[utils/aflpp_driver/README.md](../utils/aflpp_driver/README.md).
### 2. Preparing the fuzzing campaign
@@ -294,8 +298,8 @@ target as possible improves the efficiency a lot.
Try to gather valid inputs for the target from wherever you can. E.g. if it is
the PNG picture format try to find as many png files as possible, e.g. from
-reported bugs, test suites, random downloads from the internet, unit test
-case data - from all kind of PNG software.
+reported bugs, test suites, random downloads from the internet, unit test case
+data - from all kinds of PNG software.
If the input format is not known, you can also modify a target program to write
normal data it receives and processes to a file and use these.
@@ -319,10 +323,9 @@ This step is highly recommended!
#### c) Minimizing all corpus files
-The shorter the input files that still traverse the same path
-within the target, the better the fuzzing will be. This minimization
-is done with `afl-tmin` however it is a long process as this has to
-be done for every file:
+The shorter the input files that still traverse the same path within the target,
+the better the fuzzing will be. This minimization is done with `afl-tmin`;
+however, it is a long process as this has to be done for every file:
```
mkdir input
@@ -332,8 +335,8 @@ for i in *; do
done
```
-This step can also be parallelized, e.g. with `parallel`.
-Note that this step is rather optional though.
+This step can also be parallelized, e.g. with `parallel`. Note that this step is
+rather optional though.
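
As a sketch, the loop above could be parallelized with GNU `parallel` roughly
like this (run from inside the corpus directory; the output path and the
`bin/target @@` invocation are placeholders for whatever you used above):

```
# one afl-tmin job per CPU core; {} is replaced by each file name
ls | parallel -j "$(nproc)" afl-tmin -i "{}" -o "../input/{}" -- bin/target @@
```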
#### Done!
@@ -343,10 +346,9 @@ to be used in fuzzing! :-)
### 3. Fuzzing the target
-In this final step we fuzz the target.
-There are not that many important options to run the target - unless you want
-to use many CPU cores/threads for the fuzzing, which will make the fuzzing much
-more useful.
+In this final step we fuzz the target. There are not that many important options
+to run the target - unless you want to use many CPU cores/threads for the
+fuzzing, which will make the fuzzing much more useful.
If you just use one CPU for fuzzing, then you are fuzzing just for fun and not
seriously :-)
@@ -355,19 +357,19 @@ seriously :-)
Before you do even a test run of afl-fuzz execute `sudo afl-system-config` (on
the host if you execute afl-fuzz in a docker container). This reconfigures the
-system for optimal speed - which afl-fuzz checks and bails otherwise.
-Set `export AFL_SKIP_CPUFREQ=1` for afl-fuzz to skip this check if you cannot
-run afl-system-config with root privileges on the host for whatever reason.
+system for optimal speed - afl-fuzz checks for this and bails out otherwise. Set
+`export AFL_SKIP_CPUFREQ=1` for afl-fuzz to skip this check if you cannot run
+afl-system-config with root privileges on the host for whatever reason.
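
In short, run one of the following (both taken from the paragraph above):

```
sudo afl-system-config      # on the host, reconfigures the system for speed
# or, if you cannot run it with root privileges on the host:
export AFL_SKIP_CPUFREQ=1
```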
Note there is also `sudo afl-persistent-config` which sets additional permanent
boot options for a much better fuzzing performance.
Note that both scripts improve your fuzzing performance but also decrease your
-system protection against attacks! So set strong firewall rules and only
-expose SSH as a network service if you use these (which is highly recommended).
+system protection against attacks! So set strong firewall rules and only expose
+SSH as a network service if you use these (which is highly recommended).
-If you have an input corpus from step 2 then specify this directory with the `-i`
-option. Otherwise create a new directory and create a file with any content
+If you have an input corpus from step 2 then specify this directory with the
+`-i` option. Otherwise create a new directory and create a file with any content
as test data in there.
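
If you really have no input corpus at all, a minimal starting point could be
created like this (file name and content are arbitrary):

```
mkdir -p input
echo "hello" > input/seed
```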
If you do not want anything special, the defaults are already usually best,
@@ -387,36 +389,37 @@ same as the afl-fuzz -M/-S naming :-)
For more information on screen or tmux please check their documentation.
If you need to stop and re-start the fuzzing, use the same command line options
-(or even change them by selecting a different power schedule or another
-mutation mode!) and switch the input directory with a dash (`-`):
+(or even change them by selecting a different power schedule or another mutation
+mode!) and switch the input directory with a dash (`-`):
`afl-fuzz -i - -o output -- bin/target -d @@`
-Memory limits are not enforced by afl-fuzz by default and the system may run
-out of memory. You can decrease the memory with the `-m` option, the value is
-in MB. If this is too small for the target, you can usually see this by
-afl-fuzz bailing with the message that it could not connect to the forkserver.
+Memory limits are not enforced by afl-fuzz by default and the system may run out
+of memory. You can decrease the memory with the `-m` option; the value is in MB.
+If this is too small for the target, you can usually see this by afl-fuzz
+bailing with the message that it could not connect to the forkserver.
-Adding a dictionary is helpful. See the directory [dictionaries/](../dictionaries/) if
-something is already included for your data format, and tell afl-fuzz to load
-that dictionary by adding `-x dictionaries/FORMAT.dict`. With afl-clang-lto
-you have an autodictionary generation for which you need to do nothing except
-to use afl-clang-lto as the compiler. You also have the option to generate
-a dictionary yourself, see [utils/libtokencap/README.md](../utils/libtokencap/README.md).
+Adding a dictionary is helpful. See the directory
+[dictionaries/](../dictionaries/) if something is already included for your data
+format, and tell afl-fuzz to load that dictionary by adding `-x
+dictionaries/FORMAT.dict`. With afl-clang-lto, you get autodictionary
+generation for free - you need to do nothing except use afl-clang-lto as the
+compiler. You also have the option to generate a dictionary yourself, see
+[utils/libtokencap/README.md](../utils/libtokencap/README.md).
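
An example invocation could look like this (the dictionary file name is only an
assumption - check [dictionaries/](../dictionaries/) for what is actually
shipped for your format, and replace `bin/target -d @@` with your real target
invocation):

```
afl-fuzz -i input -o output -x dictionaries/png.dict -- bin/target -d @@
```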
afl-fuzz has a variety of options that help to workaround target quirks like
-specific locations for the input file (`-f`), performing deterministic
-fuzzing (`-D`) and many more. Check out `afl-fuzz -h`.
+specific locations for the input file (`-f`), performing deterministic fuzzing
+(`-D`) and many more. Check out `afl-fuzz -h`.
We highly recommend that you set a memory limit for running the target with `-m`
-which defines the maximum memory in MB. This prevents a potential
-out-of-memory problem for your system plus helps you detect missing `malloc()`
-failure handling in the target.
-Play around with various -m values until you find one that safely works for all
-your input seeds (if you have good ones and then double or quadrouple that.
+which defines the maximum memory in MB. This prevents a potential out-of-memory
+problem for your system plus helps you detect missing `malloc()` failure
+handling in the target. Play around with various -m values until you find one
+that safely works for all your input seeds (if you have good ones), and then
+double or quadruple that value.
-By default afl-fuzz never stops fuzzing. To terminate AFL++ simply press Control-C
-or send a signal SIGINT. You can limit the number of executions or approximate runtime
-in seconds with options also.
+By default, afl-fuzz never stops fuzzing. To terminate AFL++, simply press
+Control-C or send a SIGINT signal. You can also limit the number of executions
+or the approximate runtime in seconds via command-line options.
When you start afl-fuzz you will see a user interface that shows what the status
is:
@@ -426,67 +429,67 @@ All labels are explained in [status_screen.md](status_screen.md).
#### b) Using multiple cores
-If you want to seriously fuzz then use as many cores/threads as possible to
-fuzz your target.
+If you want to seriously fuzz then use as many cores/threads as possible to fuzz
+your target.
On the same machine - due to the design of how AFL++ works - there is a maximum
-number of CPU cores/threads that are useful, use more and the overall performance
-degrades instead. This value depends on the target, and the limit is between 32
-and 64 cores per machine.
+number of CPU cores/threads that are useful - use more and the overall
+performance degrades instead. This value depends on the target, and the limit is
+between 32 and 64 cores per machine.
If you have the RAM, it is highly recommended to run the instances with a caching
-of the testcases. Depending on the average testcase size (and those found
-during fuzzing) and their number, a value between 50-500MB is recommended.
-You can set the cache size (in MB) by setting the environment variable `AFL_TESTCACHE_SIZE`.
+of the test cases. Depending on the average test case size (and those found
+during fuzzing) and their number, a value between 50-500MB is recommended. You
+can set the cache size (in MB) by setting the environment variable
+`AFL_TESTCACHE_SIZE`.
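
For example, to give each instance a 250 MB test case cache (any value in the
recommended range works):

```
export AFL_TESTCACHE_SIZE=250
```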
-There should be one main fuzzer (`-M main-$HOSTNAME` option) and as many secondary
-fuzzers (eg `-S variant1`) as you have cores that you use.
-Every -M/-S entry needs a unique name (that can be whatever), however the same
--o output directory location has to be used for all instances.
+There should be one main fuzzer (`-M main-$HOSTNAME` option) and as many
+secondary fuzzers (e.g. `-S variant1`) as you have cores to use. Every
+-M/-S entry needs a unique name (which can be anything); however, the same -o
+output directory location has to be used for all instances.
For every secondary fuzzer there should be a variation, e.g.:
- * one should fuzz the target that was compiled differently: with sanitizers
- activated (`export AFL_USE_ASAN=1 ; export AFL_USE_UBSAN=1 ;
- export AFL_USE_CFISAN=1`)
- * one or two should fuzz the target with CMPLOG/redqueen (see above), at
- least one cmplog instance should follow transformations (`-l AT`)
- * one to three fuzzers should fuzz a target compiled with laf-intel/COMPCOV
- (see above). Important note: If you run more than one laf-intel/COMPCOV
- fuzzer and you want them to share their intermediate results, the main
- fuzzer (`-M`) must be one of the them! (Although this is not really
- recommended.)
+* one should fuzz the target that was compiled differently: with sanitizers
+ activated (`export AFL_USE_ASAN=1 ; export AFL_USE_UBSAN=1 ; export
+ AFL_USE_CFISAN=1`)
+* one or two should fuzz the target with CMPLOG/redqueen (see above), at least
+ one cmplog instance should follow transformations (`-l AT`)
+* one to three fuzzers should fuzz a target compiled with laf-intel/COMPCOV (see
+ above). Important note: If you run more than one laf-intel/COMPCOV fuzzer and
+ you want them to share their intermediate results, the main fuzzer (`-M`) must
+  be one of them! (Although this is not really recommended.)
All other secondaries should be used like this:
- * A quarter to a third with the MOpt mutator enabled: `-L 0`
- * run with a different power schedule, recommended are:
- `fast (default), explore, coe, lin, quad, exploit and rare`
- which you can set with e.g. `-p explore`
- * a few instances should use the old queue cycling with `-Z`
+* a quarter to a third with the MOpt mutator enabled: `-L 0`
+* run with a different power schedule, recommended are:
+ `fast (default), explore, coe, lin, quad, exploit and rare` which you can set
+ with e.g. `-p explore`
+* a few instances should use the old queue cycling with `-Z`
-Also it is recommended to set `export AFL_IMPORT_FIRST=1` to load testcases
+Also, it is recommended to set `export AFL_IMPORT_FIRST=1` to load test cases
from other fuzzers in the campaign first.
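
A hedged sketch of such a campaign on one machine could look like this - each
command runs in its own terminal or screen/tmux window, `bin/target -d @@`
stands in for your real target invocation, and the differently compiled
binaries (sanitizers, CMPLOG, COMPCOV) are not shown:

```
afl-fuzz -i input -o output -M main-$HOSTNAME -- bin/target -d @@
AFL_IMPORT_FIRST=1 afl-fuzz -i input -o output -S variant1 -p explore -- bin/target -d @@
AFL_IMPORT_FIRST=1 afl-fuzz -i input -o output -S variant2 -L 0 -- bin/target -d @@
AFL_IMPORT_FIRST=1 afl-fuzz -i input -o output -S variant3 -Z -- bin/target -d @@
```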
If you have a large corpus, a corpus from a previous run or are fuzzing in
a CI, then also set `export AFL_CMPLOG_ONLY_NEW=1` and `export AFL_FAST_CAL=1`.
-You can also use different fuzzers.
-If you are using AFL spinoffs or AFL conforming fuzzers, then just use the
-same -o directory and give it a unique `-S` name.
-Examples are:
- * [Fuzzolic](https://github.com/season-lab/fuzzolic)
- * [symcc](https://github.com/eurecom-s3/symcc/)
- * [Eclipser](https://github.com/SoftSec-KAIST/Eclipser/)
- * [AFLsmart](https://github.com/aflsmart/aflsmart)
- * [FairFuzz](https://github.com/carolemieux/afl-rb)
- * [Neuzz](https://github.com/Dongdongshe/neuzz)
- * [Angora](https://github.com/AngoraFuzzer/Angora)
-
-A long list can be found at [https://github.com/Microsvuln/Awesome-AFL](https://github.com/Microsvuln/Awesome-AFL)
-
-However you can also sync AFL++ with honggfuzz, libfuzzer with `-entropic=1`, etc.
-Just show the main fuzzer (-M) with the `-F` option where the queue/work
-directory of a different fuzzer is, e.g. `-F /src/target/honggfuzz`.
-Using honggfuzz (with `-n 1` or `-n 2`) and libfuzzer in parallel is highly
+You can also use different fuzzers. If you are using AFL spinoffs or AFL
+conforming fuzzers, then just use the same -o directory and give it a unique
+`-S` name. Examples are:
+* [Fuzzolic](https://github.com/season-lab/fuzzolic)
+* [symcc](https://github.com/eurecom-s3/symcc/)
+* [Eclipser](https://github.com/SoftSec-KAIST/Eclipser/)
+* [AFLsmart](https://github.com/aflsmart/aflsmart)
+* [FairFuzz](https://github.com/carolemieux/afl-rb)
+* [Neuzz](https://github.com/Dongdongshe/neuzz)
+* [Angora](https://github.com/AngoraFuzzer/Angora)
+
+A long list can be found at
+[https://github.com/Microsvuln/Awesome-AFL](https://github.com/Microsvuln/Awesome-AFL).
+
+However, you can also sync AFL++ with honggfuzz, libfuzzer with `-entropic=1`,
+etc. Just point the main fuzzer (-M) to the queue/work directory of the other
+fuzzer with the `-F` option, e.g. `-F /src/target/honggfuzz`. Using
+honggfuzz (with `-n 1` or `-n 2`) and libfuzzer in parallel is highly
recommended!
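
For example, on the main instance (the honggfuzz work directory path is just
the example from the sentence above, and `bin/target -d @@` is a placeholder):

```
afl-fuzz -i input -o output -M main-$HOSTNAME -F /src/target/honggfuzz -- bin/target -d @@
```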
#### c) Using multiple machines for fuzzing
@@ -498,26 +501,24 @@ instance per server, and that its name is unique, hence the recommendation
for `-M main-$HOSTNAME`.
Now there are three strategies on how you can sync between the servers:
- * never: sounds weird, but this makes every server an island and has the
- chance the each follow different paths into the target. You can make
- this even more interesting by even giving different seeds to each server.
- * regularly (~4h): this ensures that all fuzzing campaigns on the servers
- "see" the same thing. It is like fuzzing on a huge server.
- * in intervals of 1/10th of the overall expected runtime of the fuzzing you
- sync. This tries a bit to combine both. have some individuality of the
- paths each campaign on a server explores, on the other hand if one
- gets stuck where another found progress this is handed over making it
- unstuck.
-
-The syncing process itself is very simple.
-As the `-M main-$HOSTNAME` instance syncs to all `-S` secondaries as well
-as to other fuzzers, you have to copy only this directory to the other
-machines.
-
-Lets say all servers have the `-o out` directory in /target/foo/out, and
-you created a file `servers.txt` which contains the hostnames of all
-participating servers, plus you have an ssh key deployed to all of them,
-then run:
+* never: sounds weird, but this makes every server an island and has the chance
+  that each follows different paths into the target. You can make this even
+  more interesting by giving different seeds to each server.
+* regularly (~4h): this ensures that all fuzzing campaigns on the servers "see"
+ the same thing. It is like fuzzing on a huge server.
+* in intervals of 1/10th of the overall expected runtime of the fuzzing you
+  sync. This tries to combine a bit of both: each campaign on a server keeps
+  some individuality in the paths it explores, but if one gets stuck where
+  another has found progress, this is handed over, making it unstuck.
+
+The syncing process itself is very simple. As the `-M main-$HOSTNAME` instance
+syncs to all `-S` secondaries as well as to other fuzzers, you have to copy only
+this directory to the other machines.
+
+Let's say all servers have the `-o out` directory in /target/foo/out, and you
+created a file `servers.txt` which contains the hostnames of all participating
+servers, plus you have an ssh key deployed to all of them, then run:
+
```bash
for FROM in `cat servers.txt`; do
for TO in `cat servers.txt`; do
@@ -525,49 +526,52 @@ for FROM in `cat servers.txt`; do
done
done
```
-You can run this manually, per cron job - as you need it.
-There is a more complex and configurable script in `utils/distributed_fuzzing`.
+
+You can run this manually or per cron job - as you need it. There is a more
+complex and configurable script in `utils/distributed_fuzzing`.
#### d) The status of the fuzz campaign
AFL++ comes with the `afl-whatsup` script to show the status of the fuzzing
campaign.
-Just supply the directory that afl-fuzz is given with the -o option and
-you will see a detailed status of every fuzzer in that campaign plus
-a summary.
+Just supply the directory that afl-fuzz is given with the -o option and you will
+see a detailed status of every fuzzer in that campaign plus a summary.
-To have only the summary use the `-s` switch e.g.: `afl-whatsup -s out/`
+To have only the summary, use the `-s` switch, e.g. `afl-whatsup -s out/`.
-If you have multiple servers then use the command after a sync, or you have
-to execute this script per server.
+If you have multiple servers, then use the command after a sync, or you have to
+execute this script on every server.
-Another tool to inspect the current state and history of a specific instance
-is afl-plot, which generates an index.html file and a graphs that show how
-the fuzzing instance is performing.
-The syntax is `afl-plot instance_dir web_dir`, e.g. `afl-plot out/default /srv/www/htdocs/plot`
+Another tool to inspect the current state and history of a specific instance is
+afl-plot, which generates an index.html file and graphs that show how the
+fuzzing instance is performing. The syntax is `afl-plot instance_dir web_dir`,
+e.g. `afl-plot out/default /srv/www/htdocs/plot`.
#### e) Stopping fuzzing, restarting fuzzing, adding new seeds
To stop an afl-fuzz run, simply press Control-C.
-To restart an afl-fuzz run, just reuse the same command line but replace the
-`-i directory` with `-i -` or set `AFL_AUTORESUME=1`.
+To restart an afl-fuzz run, just reuse the same command line but replace the `-i
+directory` with `-i -` or set `AFL_AUTORESUME=1`.
If you want to add new seeds to a fuzzing campaign you can run a temporary
-fuzzing instance, e.g. when your main fuzzer is using `-o out` and the new
-seeds are in `newseeds/` directory:
+fuzzing instance, e.g. when your main fuzzer is using `-o out` and the new seeds
+are in the `newseeds/` directory:
+
```
AFL_BENCH_JUST_ONE=1 AFL_FAST_CAL=1 afl-fuzz -i newseeds -o out -S newseeds -- ./target
```
#### f) Checking the coverage of the fuzzing
-The `paths found` value is a bad indicator for checking how good the coverage is.
+The `paths found` value is a bad indicator for checking how good the coverage
+is.
A better indicator - if you use default llvm instrumentation with at least
-version 9 - is to use `afl-showmap` with the collect coverage option `-C` on
-the output directory:
+version 9 - is to use `afl-showmap` with the collect coverage option `-C` on the
+output directory:
+
```
$ afl-showmap -C -i out -o /dev/null -- ./target -params @@
...
@@ -578,53 +582,67 @@ $ afl-showmap -C -i out -o /dev/null -- ./target -params @@
l'.
[+] A coverage of 4331 edges were achieved out of 9960 existing (43.48%) with 7849 input files.
```
+
It is even better to check out the exact lines of code that have been reached -
and which have not been found so far.
-An "easy" helper script for this is [https://github.com/vanhauser-thc/afl-cov](https://github.com/vanhauser-thc/afl-cov),
+An "easy" helper script for this is
+[https://github.com/vanhauser-thc/afl-cov](https://github.com/vanhauser-thc/afl-cov),
just follow the README of that separate project.
If you see that an important area or a feature has not been covered so far then
try to find an input that is able to reach that and start a new secondary in
that fuzzing campaign with that seed as input, let it run for a few minutes,
then terminate it. The main node will pick it up and make it available to the
-other secondary nodes over time. Set `export AFL_NO_AFFINITY=1` or
-`export AFL_TRY_AFFINITY=1` if you have no free core.
+other secondary nodes over time. Set `export AFL_NO_AFFINITY=1` or `export
+AFL_TRY_AFFINITY=1` if you have no free core.
Note that in nearly all cases you can never reach full coverage. A lot of
-functionality is usually dependent on exclusive options that would need individual
-fuzzing campaigns each with one of these options set. E.g. if you fuzz a library to
-convert image formats and your target is the png to tiff API then you will not
-touch any of the other library APIs and features.
+functionality is usually dependent on exclusive options that would need
+individual fuzzing campaigns, each with one of these options set. E.g., if you
+fuzz a library to convert image formats and your target is the png to tiff API,
+then you will not touch any of the other library APIs and features.
#### g) How long to fuzz a target?
-This is a difficult question.
-Basically if no new path is found for a long time (e.g. for a day or a week)
-then you can expect that your fuzzing won't be fruitful anymore.
-However often this just means that you should switch out secondaries for
-others, e.g. custom mutator modules, sync to very different fuzzers, etc.
+This is a difficult question. Basically if no new path is found for a long time
+(e.g. for a day or a week) then you can expect that your fuzzing won't be
+fruitful anymore. However, often this just means that you should switch out
+secondaries for others, e.g. custom mutator modules, sync to very different
+fuzzers, etc.
Keep the queue/ directory (for future fuzzings of the same or similar targets)
-and use them to seed other good fuzzers like libfuzzer with the -entropic
-switch or honggfuzz.
+and use them to seed other good fuzzers like libfuzzer with the -entropic switch
+or honggfuzz.
#### h) Improve the speed!
- * Use [persistent mode](../instrumentation/README.persistent_mode.md) (x2-x20 speed increase)
- * If you do not use shmem persistent mode, use `AFL_TMPDIR` to point the input file on a tempfs location, see [env_variables.md](env_variables.md)
- * Linux: Improve kernel performance: modify `/etc/default/grub`, set `GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then `update-grub` and `reboot` (warning: makes the system more insecure) - you can also just run `sudo afl-persistent-config`
- * Linux: Running on an `ext2` filesystem with `noatime` mount option will be a bit faster than on any other journaling filesystem
- * Use your cores! [b) Using multiple cores](#b-using-multiple-cores)
- * Run `sudo afl-system-config` before starting the first afl-fuzz instance after a reboot
+* Use [persistent mode](../instrumentation/README.persistent_mode.md) (x2-x20
+ speed increase)
+* If you do not use shmem persistent mode, use `AFL_TMPDIR` to put the input
+  file on a tmpfs location (see [env_variables.md](env_variables.md) and the
+  sketch after this list)
+* Linux: Improve kernel performance: modify `/etc/default/grub`, set
+ `GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off
+ mitigations=off no_stf_barrier noibpb noibrs nopcid nopti
+ nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off
+ spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then
+ `update-grub` and `reboot` (warning: makes the system more insecure) - you can
+ also just run `sudo afl-persistent-config`
+* Linux: Running on an `ext2` filesystem with the `noatime` mount option will
+  be a bit faster than on any journaling filesystem
+* Use your cores! [b) Using multiple cores](#b-using-multiple-cores)
+* Run `sudo afl-system-config` before starting the first afl-fuzz instance after
+ a reboot
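
As a sketch for the `AFL_TMPDIR` item above (assuming `/dev/shm` is mounted as
tmpfs, as it is on most Linux systems; the subdirectory name is arbitrary):

```
mkdir -p /dev/shm/afl-tmpdir
export AFL_TMPDIR=/dev/shm/afl-tmpdir
```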
### The End
-Check out the [FAQ](FAQ.md) if it maybe answers your question (that
-you might not even have known you had ;-) ).
+Check out the [FAQ](FAQ.md) - it might answer a question that you did not even
+know you had ;-).
This is basically all you need to know to professionally run fuzzing campaigns.
-If you want to know more, the tons of texts in [docs/](./) will have you covered.
+If you want to know more, the many documents in [docs/](./) will have you
+covered.
Note that there are also a lot of tools out there that help fuzzing with AFL++
-(some might be deprecated or unsupported), see [third_party_tools.md](third_party_tools.md).
\ No newline at end of file
+(some might be deprecated or unsupported), see
+[third_party_tools.md](third_party_tools.md).
\ No newline at end of file
diff --git a/docs/important_changes.md b/docs/important_changes.md
index 0c5c2243..877dfab2 100644
--- a/docs/important_changes.md
+++ b/docs/important_changes.md
@@ -36,7 +36,7 @@ behaviours and defaults:
shared libraries, etc. Additionally QEMU 5.1 supports more CPU targets so
this is really worth it.
* When instrumenting targets, afl-cc will not supersede optimizations anymore
- if any were given. This allows to fuzz targets build regularly like those
+   if any were given. This allows fuzzing targets built regularly, like those
for debug or release versions.
* afl-fuzz:
* if neither -M or -S is specified, `-S default` is assumed, so more
@@ -47,7 +47,7 @@ behaviours and defaults:
* -m none is now default, set memory limits (in MB) with e.g. -m 250
* deterministic fuzzing is now disabled by default (unless using -M) and
can be enabled with -D
- * a caching of testcases can now be performed and can be modified by
+ * a caching of test cases can now be performed and can be modified by
editing config.h for TESTCASE_CACHE or by specifying the env variable
`AFL_TESTCACHE_SIZE` (in MB). Good values are between 50-500 (default: 50).
* -M mains do not perform trimming
diff --git a/docs/interpreting_output.md b/docs/interpreting_output.md
deleted file mode 100644
index 4bd705f2..00000000
--- a/docs/interpreting_output.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Interpreting output
-
-See the [status_screen.md](status_screen.md) file for information on
-how to interpret the displayed stats and monitor the health of the process. Be
-sure to consult this file especially if any UI elements are highlighted in red.
-
-The fuzzing process will continue until you press Ctrl-C. At a minimum, you want
-to allow the fuzzer to complete one queue cycle, which may take anywhere from a
-couple of hours to a week or so.
-
-There are three subdirectories created within the output directory and updated
-in real-time:
-
- - queue/ - test cases for every distinctive execution path, plus all the
- starting files given by the user. This is the synthesized corpus
- mentioned in section 2.
-
- Before using this corpus for any other purposes, you can shrink
- it to a smaller size using the afl-cmin tool. The tool will find
- a smaller subset of files offering equivalent edge coverage.
-
- - crashes/ - unique test cases that cause the tested program to receive a
- fatal signal (e.g., SIGSEGV, SIGILL, SIGABRT). The entries are
- grouped by the received signal.
-
- - hangs/ - unique test cases that cause the tested program to time out. The
- default time limit before something is classified as a hang is
- the larger of 1 second and the value of the -t parameter.
- The value can be fine-tuned by setting AFL_HANG_TMOUT, but this
- is rarely necessary.
-
-Crashes and hangs are considered "unique" if the associated execution paths
-involve any state transitions not seen in previously-recorded faults. If a
-single bug can be reached in multiple ways, there will be some count inflation
-early in the process, but this should quickly taper off.
-
-The file names for crashes and hangs are correlated with the parent, non-faulting
-queue entries. This should help with debugging.
-
-When you can't reproduce a crash found by afl-fuzz, the most likely cause is
-that you are not setting the same memory limit as used by the tool. Try:
-
-```shell
-LIMIT_MB=50
-( ulimit -Sv $[LIMIT_MB << 10]; /path/to/tested_binary ... )
-```
-
-Change LIMIT_MB to match the -m parameter passed to afl-fuzz. On OpenBSD,
-also change -Sv to -Sd.
-
-Any existing output directory can be also used to resume aborted jobs; try:
-
-```shell
-./afl-fuzz -i- -o existing_output_dir [...etc...]
-```
-
-If you have gnuplot installed, you can also generate some pretty graphs for any
-active fuzzing task using afl-plot. For an example of how this looks like,
-see [https://lcamtuf.coredump.cx/afl/plot/](https://lcamtuf.coredump.cx/afl/plot/).
-
-You can also manually build and install afl-plot-ui, which is a helper utility
-for showing the graphs generated by afl-plot in a graphical window using GTK.
-You can build and install it as follows
-
-```shell
-sudo apt install libgtk-3-0 libgtk-3-dev pkg-config
-cd utils/plot_ui
-make
-cd ../../
-sudo make install
-```
diff --git a/docs/status_screen.md b/docs/status_screen.md
deleted file mode 100644
index b1cb9696..00000000
--- a/docs/status_screen.md
+++ /dev/null
@@ -1,444 +0,0 @@
-# Understanding the status screen
-
-This document provides an overview of the status screen - plus tips for
-troubleshooting any warnings and red text shown in the UI. See README.md for
-the general instruction manual.
-
-## A note about colors
-
-The status screen and error messages use colors to keep things readable and
-attract your attention to the most important details. For example, red almost
-always means "consult this doc" :-)
-
-Unfortunately, the UI will render correctly only if your terminal is using
-traditional un*x palette (white text on black background) or something close
-to that.
-
-If you are using inverse video, you may want to change your settings, say:
-
-- For GNOME Terminal, go to `Edit > Profile` preferences, select the "colors" tab, and from the list of built-in schemes, choose "white on black".
-- For the MacOS X Terminal app, open a new window using the "Pro" scheme via the `Shell > New Window` menu (or make "Pro" your default).
-
-Alternatively, if you really like your current colors, you can edit config.h
-to comment out USE_COLORS, then do `make clean all`.
-
-I'm not aware of any other simple way to make this work without causing
-other side effects - sorry about that.
-
-With that out of the way, let's talk about what's actually on the screen...
-
-### The status bar
-
-```
-american fuzzy lop ++3.01a (default) [fast] {0}
-```
-
-The top line shows you which mode afl-fuzz is running in
-(normal: "american fuzy lop", crash exploration mode: "peruvian rabbit mode")
-and the version of AFL++.
-Next to the version is the banner, which, if not set with -T by hand, will
-either show the binary name being fuzzed, or the -M/-S main/secondary name for
-parallel fuzzing.
-Second to last is the power schedule mode being run (default: fast).
-Finally, the last item is the CPU id.
-
-### Process timing
-
-```
- +----------------------------------------------------+
- | run time : 0 days, 8 hrs, 32 min, 43 sec |
- | last new path : 0 days, 0 hrs, 6 min, 40 sec |
- | last uniq crash : none seen yet |
- | last uniq hang : 0 days, 1 hrs, 24 min, 32 sec |
- +----------------------------------------------------+
-```
-
-This section is fairly self-explanatory: it tells you how long the fuzzer has
-been running and how much time has elapsed since its most recent finds. This is
-broken down into "paths" (a shorthand for test cases that trigger new execution
-patterns), crashes, and hangs.
-
-When it comes to timing: there is no hard rule, but most fuzzing jobs should be
-expected to run for days or weeks; in fact, for a moderately complex project, the
-first pass will probably take a day or so. Every now and then, some jobs
-will be allowed to run for months.
-
-There's one important thing to watch out for: if the tool is not finding new
-paths within several minutes of starting, you're probably not invoking the
-target binary correctly and it never gets to parse the input files we're
-throwing at it; another possible explanations are that the default memory limit
-(`-m`) is too restrictive, and the program exits after failing to allocate a
-buffer very early on; or that the input files are patently invalid and always
-fail a basic header check.
-
-If there are no new paths showing up for a while, you will eventually see a big
-red warning in this section, too :-)
-
-### Overall results
-
-```
- +-----------------------+
- | cycles done : 0 |
- | total paths : 2095 |
- | uniq crashes : 0 |
- | uniq hangs : 19 |
- +-----------------------+
-```
-
-The first field in this section gives you the count of queue passes done so far - that is, the number of times the fuzzer went over all the interesting test
-cases discovered so far, fuzzed them, and looped back to the very beginning.
-Every fuzzing session should be allowed to complete at least one cycle; and
-ideally, should run much longer than that.
-
-As noted earlier, the first pass can take a day or longer, so sit back and
-relax.
-
-To help make the call on when to hit `Ctrl-C`, the cycle counter is color-coded.
-It is shown in magenta during the first pass, progresses to yellow if new finds
-are still being made in subsequent rounds, then blue when that ends - and
-finally, turns green after the fuzzer hasn't been seeing any action for a
-longer while.
-
-The remaining fields in this part of the screen should be pretty obvious:
-there's the number of test cases ("paths") discovered so far, and the number of
-unique faults. The test cases, crashes, and hangs can be explored in real-time
-by browsing the output directory, as discussed in README.md.
-
-### Cycle progress
-
-```
- +-------------------------------------+
- | now processing : 1296 (61.86%) |
- | paths timed out : 0 (0.00%) |
- +-------------------------------------+
-```
-
-This box tells you how far along the fuzzer is with the current queue cycle: it
-shows the ID of the test case it is currently working on, plus the number of
-inputs it decided to ditch because they were persistently timing out.
-
-The "*" suffix sometimes shown in the first line means that the currently
-processed path is not "favored" (a property discussed later on).
-
-### Map coverage
-
-```
- +--------------------------------------+
- | map density : 10.15% / 29.07% |
- | count coverage : 4.03 bits/tuple |
- +--------------------------------------+
-```
-
-The section provides some trivia about the coverage observed by the
-instrumentation embedded in the target binary.
-
-The first line in the box tells you how many branch tuples we have already
-hit, in proportion to how much the bitmap can hold. The number on the left
-describes the current input; the one on the right is the value for the entire
-input corpus.
-
-Be wary of extremes:
-
- - Absolute numbers below 200 or so suggest one of three things: that the
- program is extremely simple; that it is not instrumented properly (e.g.,
- due to being linked against a non-instrumented copy of the target
- library); or that it is bailing out prematurely on your input test cases.
- The fuzzer will try to mark this in pink, just to make you aware.
- - Percentages over 70% may very rarely happen with very complex programs
- that make heavy use of template-generated code.
- Because high bitmap density makes it harder for the fuzzer to reliably
- discern new program states, I recommend recompiling the binary with
- `AFL_INST_RATIO=10` or so and trying again (see env_variables.md).
- The fuzzer will flag high percentages in red. Chances are, you will never
- see that unless you're fuzzing extremely hairy software (say, v8, perl,
- ffmpeg).
-
-The other line deals with the variability in tuple hit counts seen in the
-binary. In essence, if every taken branch is always taken a fixed number of
-times for all the inputs we have tried, this will read `1.00`. As we manage
-to trigger other hit counts for every branch, the needle will start to move
-toward `8.00` (every bit in the 8-bit map hit), but will probably never
-reach that extreme.
-
-Together, the values can be useful for comparing the coverage of several
-different fuzzing jobs that rely on the same instrumented binary.
-
-### Stage progress
-
-```
- +-------------------------------------+
- | now trying : interest 32/8 |
- | stage execs : 3996/34.4k (11.62%) |
- | total execs : 27.4M |
- | exec speed : 891.7/sec |
- +-------------------------------------+
-```
-
-This part gives you an in-depth peek at what the fuzzer is actually doing right
-now. It tells you about the current stage, which can be any of:
-
- - calibration - a pre-fuzzing stage where the execution path is examined
- to detect anomalies, establish baseline execution speed, and so on. Executed
- very briefly whenever a new find is being made.
- - trim L/S - another pre-fuzzing stage where the test case is trimmed to the
- shortest form that still produces the same execution path. The length (L)
- and stepover (S) are chosen in general relationship to file size.
- - bitflip L/S - deterministic bit flips. There are L bits toggled at any given
- time, walking the input file with S-bit increments. The current L/S variants
- are: `1/1`, `2/1`, `4/1`, `8/8`, `16/8`, `32/8`.
- - arith L/8 - deterministic arithmetics. The fuzzer tries to subtract or add
- small integers to 8-, 16-, and 32-bit values. The stepover is always 8 bits.
- - interest L/8 - deterministic value overwrite. The fuzzer has a list of known
- "interesting" 8-, 16-, and 32-bit values to try. The stepover is 8 bits.
- - extras - deterministic injection of dictionary terms. This can be shown as
- "user" or "auto", depending on whether the fuzzer is using a user-supplied
- dictionary (`-x`) or an auto-created one. You will also see "over" or "insert",
- depending on whether the dictionary words overwrite existing data or are
- inserted by offsetting the remaining data to accommodate their length.
- - havoc - a sort-of-fixed-length cycle with stacked random tweaks. The
- operations attempted during this stage include bit flips, overwrites with
- random and "interesting" integers, block deletion, block duplication, plus
- assorted dictionary-related operations (if a dictionary is supplied in the
- first place).
- - splice - a last-resort strategy that kicks in after the first full queue
- cycle with no new paths. It is equivalent to 'havoc', except that it first
- splices together two random inputs from the queue at some arbitrarily
- selected midpoint.
- - sync - a stage used only when `-M` or `-S` is set (see parallel_fuzzing.md).
- No real fuzzing is involved, but the tool scans the output from other
- fuzzers and imports test cases as necessary. The first time this is done,
- it may take several minutes or so.
-
-The remaining fields should be fairly self-evident: there's the exec count
-progress indicator for the current stage, a global exec counter, and a
-benchmark for the current program execution speed. This may fluctuate from
-one test case to another, but the benchmark should be ideally over 500 execs/sec
-most of the time - and if it stays below 100, the job will probably take very
-long.
-
-The fuzzer will explicitly warn you about slow targets, too. If this happens,
-see the [perf_tips.md](perf_tips.md) file included with the fuzzer for ideas on how to speed
-things up.
-
-### Findings in depth
-
-```
- +--------------------------------------+
- | favored paths : 879 (41.96%) |
- | new edges on : 423 (20.19%) |
- | total crashes : 0 (0 unique) |
- | total tmouts : 24 (19 unique) |
- +--------------------------------------+
-```
-
-This gives you several metrics that are of interest mostly to complete nerds.
-The section includes the number of paths that the fuzzer likes the most based
-on a minimization algorithm baked into the code (these will get considerably
-more air time), and the number of test cases that actually resulted in better
-edge coverage (versus just pushing the branch hit counters up). There are also
-additional, more detailed counters for crashes and timeouts.
-
-Note that the timeout counter is somewhat different from the hang counter; this
-one includes all test cases that exceeded the timeout, even if they did not
-exceed it by a margin sufficient to be classified as hangs.
-
-### Fuzzing strategy yields
-
-```
- +-----------------------------------------------------+
- | bit flips : 57/289k, 18/289k, 18/288k |
- | byte flips : 0/36.2k, 4/35.7k, 7/34.6k |
- | arithmetics : 53/2.54M, 0/537k, 0/55.2k |
- | known ints : 8/322k, 12/1.32M, 10/1.70M |
- | dictionary : 9/52k, 1/53k, 1/24k |
- |havoc/splice : 1903/20.0M, 0/0 |
- |py/custom/rq : unused, 53/2.54M, unused |
- | trim/eff : 20.31%/9201, 17.05% |
- +-----------------------------------------------------+
-```
-
-This is just another nerd-targeted section keeping track of how many paths we
-have netted, in proportion to the number of execs attempted, for each of the
-fuzzing strategies discussed earlier on. This serves to convincingly validate
-assumptions about the usefulness of the various approaches taken by afl-fuzz.
-
-The trim strategy stats in this section are a bit different than the rest.
-The first number in this line shows the ratio of bytes removed from the input
-files; the second one corresponds to the number of execs needed to achieve this
-goal. Finally, the third number shows the proportion of bytes that, although
-not possible to remove, were deemed to have no effect and were excluded from
-some of the more expensive deterministic fuzzing steps.
-
-Note that when deterministic mutation mode is off (which is the default
-because it is not very efficient) the first five lines display
-"disabled (default, enable with -D)".
-
-Only what is activated will have counter shown.
-
-### Path geometry
-
-```
- +---------------------+
- | levels : 5 |
- | pending : 1570 |
- | pend fav : 583 |
- | own finds : 0 |
- | imported : 0 |
- | stability : 100.00% |
- +---------------------+
-```
-
-The first field in this section tracks the path depth reached through the
-guided fuzzing process. In essence: the initial test cases supplied by the
-user are considered "level 1". The test cases that can be derived from that
-through traditional fuzzing are considered "level 2"; the ones derived by
-using these as inputs to subsequent fuzzing rounds are "level 3"; and so forth.
-The maximum depth is therefore a rough proxy for how much value you're getting
-out of the instrumentation-guided approach taken by afl-fuzz.
-
-The next field shows you the number of inputs that have not gone through any
-fuzzing yet. The same stat is also given for "favored" entries that the fuzzer
-really wants to get to in this queue cycle (the non-favored entries may have to
-wait a couple of cycles to get their chance).
-
-Next, we have the number of new paths found during this fuzzing section and
-imported from other fuzzer instances when doing parallelized fuzzing; and the
-extent to which identical inputs appear to sometimes produce variable behavior
-in the tested binary.
-
-That last bit is actually fairly interesting: it measures the consistency of
-observed traces. If a program always behaves the same for the same input data,
-it will earn a score of 100%. When the value is lower but still shown in purple,
-the fuzzing process is unlikely to be negatively affected. If it goes into red,
-you may be in trouble, since AFL will have difficulty discerning between
-meaningful and "phantom" effects of tweaking the input file.
-
-Now, most targets will just get a 100% score, but when you see lower figures,
-there are several things to look at:
-
- - The use of uninitialized memory in conjunction with some intrinsic sources
- of entropy in the tested binary. Harmless to AFL, but could be indicative
- of a security bug.
- - Attempts to manipulate persistent resources, such as left over temporary
- files or shared memory objects. This is usually harmless, but you may want
- to double-check to make sure the program isn't bailing out prematurely.
- Running out of disk space, SHM handles, or other global resources can
- trigger this, too.
- - Hitting some functionality that is actually designed to behave randomly.
- Generally harmless. For example, when fuzzing sqlite, an input like
- `select random();` will trigger a variable execution path.
- - Multiple threads executing at once in semi-random order. This is harmless
- when the 'stability' metric stays over 90% or so, but can become an issue
- if not. Here's what to try:
- * Use afl-clang-fast from [instrumentation](../instrumentation/) - it uses a thread-local tracking
- model that is less prone to concurrency issues,
- * See if the target can be compiled or run without threads. Common
- `./configure` options include `--without-threads`, `--disable-pthreads`, or
- `--disable-openmp`.
- * Replace pthreads with GNU Pth (https://www.gnu.org/software/pth/), which
- allows you to use a deterministic scheduler.
- - In persistent mode, minor drops in the "stability" metric can be normal,
- because not all the code behaves identically when re-entered; but major
- dips may signify that the code within `__AFL_LOOP()` is not behaving
- correctly on subsequent iterations (e.g., due to incomplete clean-up or
- reinitialization of the state) and that most of the fuzzing effort goes
- to waste.
-
-The paths where variable behavior is detected are marked with a matching entry
-in the `/queue/.state/variable_behavior/` directory, so you can look
-them up easily.
-
-### CPU load
-
-```
- [cpu: 25%]
-```
-
-This tiny widget shows the apparent CPU utilization on the local system. It is
-calculated by taking the number of processes in the "runnable" state, and then
-comparing it to the number of logical cores on the system.
-
-If the value is shown in green, you are using fewer CPU cores than available on
-your system and can probably parallelize to improve performance; for tips on
-how to do that, see parallel_fuzzing.md.
-
-If the value is shown in red, your CPU is *possibly* oversubscribed, and
-running additional fuzzers may not give you any benefits.
-
-Of course, this benchmark is very simplistic; it tells you how many processes
-are ready to run, but not how resource-hungry they may be. It also doesn't
-distinguish between physical cores, logical cores, and virtualized CPUs; the
-performance characteristics of each of these will differ quite a bit.
-
-If you want a more accurate measurement, you can run the `afl-gotcpu` utility from the command line.
-
-### Addendum: status and plot files
-
-For unattended operation, some of the key status screen information can be also
-found in a machine-readable format in the fuzzer_stats file in the output
-directory. This includes:
-
- - `start_time` - unix time indicating the start time of afl-fuzz
- - `last_update` - unix time corresponding to the last update of this file
- - `run_time` - run time in seconds to the last update of this file
- - `fuzzer_pid` - PID of the fuzzer process
- - `cycles_done` - queue cycles completed so far
- - `cycles_wo_finds` - number of cycles without any new paths found
- - `execs_done` - number of execve() calls attempted
- - `execs_per_sec` - overall number of execs per second
- - `paths_total` - total number of entries in the queue
- - `paths_favored` - number of queue entries that are favored
- - `paths_found` - number of entries discovered through local fuzzing
- - `paths_imported` - number of entries imported from other instances
- - `max_depth` - number of levels in the generated data set
- - `cur_path` - currently processed entry number
- - `pending_favs` - number of favored entries still waiting to be fuzzed
- - `pending_total` - number of all entries waiting to be fuzzed
- - `variable_paths` - number of test cases showing variable behavior
- - `stability` - percentage of bitmap bytes that behave consistently
- - `bitmap_cvg` - percentage of edge coverage found in the map so far
- - `unique_crashes` - number of unique crashes recorded
- - `unique_hangs` - number of unique hangs encountered
- - `last_path` - seconds since the last path was found
- - `last_crash` - seconds since the last crash was found
- - `last_hang` - seconds since the last hang was found
- - `execs_since_crash` - execs since the last crash was found
- - `exec_timeout` - the -t command line value
- - `slowest_exec_ms` - real time of the slowest execution in ms
- - `peak_rss_mb` - max rss usage reached during fuzzing in MB
- - `edges_found` - how many edges have been found
- - `var_byte_count` - how many edges are non-deterministic
- - `afl_banner` - banner text (e.g. the target name)
- - `afl_version` - the version of AFL used
- - `target_mode` - default, persistent, qemu, unicorn, non-instrumented
- - `command_line` - full command line used for the fuzzing session
-
-Most of these map directly to the UI elements discussed earlier on.
-
-On top of that, you can also find an entry called `plot_data`, containing a
-plottable history for most of these fields. If you have gnuplot installed, you
-can turn this into a nice progress report with the included `afl-plot` tool.
-
-
-### Addendum: Automatically send metrics with StatsD
-
-In a CI environment or when running multiple fuzzers, it can be tedious to
-log into each of them or deploy scripts to read the fuzzer statistics.
-Using `AFL_STATSD` (and the other related environment variables `AFL_STATSD_HOST`,
-`AFL_STATSD_PORT`, `AFL_STATSD_TAGS_FLAVOR`) you can automatically send metrics
-to your favorite StatsD server. Depending on your StatsD server you will be able
-to monitor, trigger alerts or perform actions based on these metrics (e.g: alert on
-slow exec/s for a new build, threshold of crashes, time since last crash > X, etc).
-
-The selected metrics are a subset of all the metrics found in the status and in
-the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
-`execs_done`,`execs_per_sec`, `paths_total`, `paths_favored`, `paths_found`,
-`paths_imported`, `max_depth`, `cur_path`, `pending_favs`, `pending_total`,
-`variable_paths`, `unique_crashes`, `unique_hangs`, `total_crashes`,
-`slowest_exec_ms`, `edges_found`, `var_byte_count`, `havoc_expansion`.
-Their definitions can be found in the addendum above.
-
-When using multiple fuzzer instances with StatsD it is *strongly* recommended to setup
-the flavor (AFL_STATSD_TAGS_FLAVOR) to match your StatsD server. This will allow you
-to see individual fuzzer performance, detect bad ones, see the progress of each
-strategy...
diff --git a/docs/third_party_tools.md b/docs/third_party_tools.md
index ba96d0ce..446d373c 100644
--- a/docs/third_party_tools.md
+++ b/docs/third_party_tools.md
@@ -1,12 +1,12 @@
# Tools that help fuzzing with AFL++
Speeding up fuzzing:
- * [libfiowrapper](https://github.com/marekzmyslowski/libfiowrapper) - if the function you want to fuzz requires loading a file, this allows using the shared memory testcase feature :-) - recommended.
+ * [libfiowrapper](https://github.com/marekzmyslowski/libfiowrapper) - if the function you want to fuzz requires loading a file, this allows using the shared memory test case feature :-) - recommended.
Minimization of test cases:
* [afl-pytmin](https://github.com/ilsani/afl-pytmin) - a wrapper for afl-tmin that tries to speed up the process of minimization of a single test case by using many CPU cores.
- * [afl-ddmin-mod](https://github.com/MarkusTeufelberger/afl-ddmin-mod) - a variation of afl-tmin based on the ddmin algorithm.
- * [halfempty](https://github.com/googleprojectzero/halfempty) - is a fast utility for minimizing test cases by Tavis Ormandy based on parallelization.
+ * [afl-ddmin-mod](https://github.com/MarkusTeufelberger/afl-ddmin-mod) - a variation of afl-tmin based on the ddmin algorithm.
+ * [halfempty](https://github.com/googleprojectzero/halfempty) - a fast utility for minimizing test cases by Tavis Ormandy, based on parallelization.
Distributed execution:
* [disfuzz-afl](https://github.com/MartijnB/disfuzz-afl) - distributed fuzzing for AFL.
diff --git a/qemu_mode/libqasan/README.md b/qemu_mode/libqasan/README.md
index 4a241233..6a65c12b 100644
--- a/qemu_mode/libqasan/README.md
+++ b/qemu_mode/libqasan/README.md
@@ -19,7 +19,7 @@ finding capabilities during fuzzing) is WIP.
### When should I use QASan?
If your target binary is PIC x86_64, you should also give a try to
-[retrowrite](https://github.com/HexHive/retrowrite) for static rewriting.
+[RetroWrite](https://github.com/HexHive/retrowrite) for static rewriting.
If it fails, or if your binary is for another architecture, or you want to use
persistent and snapshot mode, AFL++ QASan mode is what you want/have to use.
diff --git a/unicorn_mode/samples/persistent/COMPILE.md b/unicorn_mode/samples/persistent/COMPILE.md
index 111dfc54..9f2ae718 100644
--- a/unicorn_mode/samples/persistent/COMPILE.md
+++ b/unicorn_mode/samples/persistent/COMPILE.md
@@ -1,13 +1,16 @@
# C Sample
This shows a simple persistent harness for unicornafl in C.
-In contrast to the normal c harness, this harness manually resets the unicorn state on each new input.
-Thanks to this, we can rerun the testcase in unicorn multiple times, without the need to fork again.
+In contrast to the normal C harness, this harness manually resets the unicorn
+state on each new input.
+Thanks to this, we can rerun the test case in unicorn multiple times, without
+the need to fork again.
## Compiling sample.c
The target can be built using the `make` command.
Just make sure you have built unicorn support first:
+
```bash
cd /path/to/afl/unicorn_mode
./build_unicorn_support.sh
@@ -19,6 +22,7 @@ You don't need to compile persistent_target.c since a X86_64 binary version is
pre-built and shipped in this sample folder. This file documents how the binary
was built in case you want to rebuild it or recompile it for any reason.
-The pre-built binary (persistent_target_x86_64.bin) was built using -g -O0 in gcc.
+The pre-built binary (persistent_target_x86_64.bin) was built using -g -O0 in
+gcc.
-We then load the binary and we execute the main function directly.
+We then load the binary and execute the main function directly.
\ No newline at end of file
diff --git a/utils/aflpp_driver/README.md b/utils/aflpp_driver/README.md
index 30e2412f..4560be2b 100644
--- a/utils/aflpp_driver/README.md
+++ b/utils/aflpp_driver/README.md
@@ -7,15 +7,15 @@ targets.
Just do `afl-clang-fast++ -o fuzz fuzzer_harness.cc libAFLDriver.a [plus required linking]`.
-You can also sneakily do this little trick:
+You can also sneakily do this little trick:
If this is the clang compile command to build for libfuzzer:
`clang++ -o fuzz -fsanitize=fuzzer fuzzer_harness.cc -lfoo`
then just switch `clang++` with `afl-clang-fast++` and our compiler will
magically insert libAFLDriver.a :)
-To use shared-memory testcases, you need nothing to do.
-To use stdin testcases give `-` as the only command line parameter.
-To use file input testcases give `@@` as the only command line parameter.
+To use shared-memory test cases, there is nothing you need to do.
+To use stdin test cases, give `-` as the only command line parameter.
+To use file input test cases, give `@@` as the only command line parameter.
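
For illustration, the three styles for a harness binary called `fuzz` (as in
the compile example above) would be invoked roughly like this (`in`/`out` are
placeholder directories):

```
afl-fuzz -i in -o out -- ./fuzz       # shared-memory test cases (default)
afl-fuzz -i in -o out -- ./fuzz -     # test cases via stdin
afl-fuzz -i in -o out -- ./fuzz @@    # test cases via an input file
```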
IMPORTANT: if you use `afl-cmin` or `afl-cmin.bash` then either pass `-`
or `@@` as command line parameters.
@@ -30,8 +30,8 @@ are to be fuzzed in qemu_mode. So we compile them with clang/clang++, without
`clang++ -o fuzz fuzzer_harness.cc libAFLQemuDriver.a [plus required linking]`.
-
Then just do (where the name of the binary is `fuzz`):
+
```
AFL_QEMU_PERSISTENT_ADDR=0x$(nm fuzz | grep "T LLVMFuzzerTestOneInput" | awk '{print $1}')
AFL_QEMU_PERSISTENT_HOOK=/path/to/aflpp_qemu_driver_hook.so afl-fuzz -Q ... -- ./fuzz`
@@ -40,4 +40,4 @@ AFL_QEMU_PERSISTENT_HOOK=/path/to/aflpp_qemu_driver_hook.so afl-fuzz -Q ... -- .
if you use afl-cmin or `afl-showmap -C` with the aflpp_qemu_driver you need to
set the same AFL_QEMU_... (or AFL_FRIDA_...) environment variables.
If you want to use afl-showmap (without -C) or afl-cmin.bash then you may not
-set these environment variables and rather set `AFL_QEMU_DRIVER_NO_HOOK=1`.
+set these environment variables and rather set `AFL_QEMU_DRIVER_NO_HOOK=1`.
\ No newline at end of file
--
cgit 1.4.1
From 8b5eafe7c504e68e710244ae7e58b1809e6584d9 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Mon, 22 Nov 2021 19:56:39 +0100
Subject: Clean up docs folder
---
docs/afl-fuzz_approach.md | 24 +-
docs/features.md | 96 ++++---
docs/fuzzing_binary-only_targets.md | 99 ++++---
docs/limitations.md | 53 ++--
docs/parallel_fuzzing.md | 256 -----------------
docs/technical_details.md | 550 ------------------------------------
docs/third_party_tools.md | 68 +++--
docs/tutorials.md | 14 +-
8 files changed, 204 insertions(+), 956 deletions(-)
delete mode 100644 docs/parallel_fuzzing.md
delete mode 100644 docs/technical_details.md
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 57a275d9..e0d5a1c9 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -37,9 +37,10 @@ superior to blind fuzzing or coverage-only tools.
## Understanding the status screen
-This document provides an overview of the status screen - plus tips for
-troubleshooting any warnings and red text shown in the UI. See
-[README.md](../README.md) for the general instruction manual.
+This chapter provides an overview of the status screen - plus tips for
+troubleshooting any warnings and red text shown in the UI.
+
+For the general instruction manual, see [README.md](../README.md).
### A note about colors
@@ -47,7 +48,7 @@ The status screen and error messages use colors to keep things readable and
attract your attention to the most important details. For example, red almost
always means "consult this doc" :-)
-Unfortunately, the UI will render correctly only if your terminal is using
+Unfortunately, the UI will only render correctly if your terminal is using
traditional un*x palette (white text on black background) or something close to
that.
@@ -61,7 +62,7 @@ If you are using inverse video, you may want to change your settings, say:
Alternatively, if you really like your current colors, you can edit config.h to
comment out USE_COLORS, then do `make clean all`.
-I'm not aware of any other simple way to make this work without causing other
+We are not aware of any other simple way to make this work without causing other
side effects - sorry about that.
With that out of the way, let's talk about what's actually on the screen...
@@ -103,8 +104,8 @@ will be allowed to run for months.
There's one important thing to watch out for: if the tool is not finding new
paths within several minutes of starting, you're probably not invoking the
target binary correctly and it never gets to parse the input files we're
-throwing at it; another possible explanations are that the default memory limit
-(`-m`) is too restrictive, and the program exits after failing to allocate a
+throwing at it; other possible explanations are that the default memory limit
+(`-m`) is too restrictive and the program exits after failing to allocate a
buffer very early on; or that the input files are patently invalid and always
fail a basic header check.
@@ -124,9 +125,9 @@ red warning in this section, too :-)
The first field in this section gives you the count of queue passes done so far
- that is, the number of times the fuzzer went over all the interesting test
-cases discovered so far, fuzzed them, and looped back to the very beginning.
-Every fuzzing session should be allowed to complete at least one cycle; and
-ideally, should run much longer than that.
+cases discovered so far, fuzzed them, and looped back to the very beginning.
+Every fuzzing session should be allowed to complete at least one cycle; and
+ideally, should run much longer than that.
As noted earlier, the first pass can take a day or longer, so sit back and
relax.
@@ -140,7 +141,8 @@ while.
The remaining fields in this part of the screen should be pretty obvious:
there's the number of test cases ("paths") discovered so far, and the number of
unique faults. The test cases, crashes, and hangs can be explored in real-time
-by browsing the output directory, as discussed in [README.md](../README.md).
+by browsing the output directory; see
+[#interpreting-output](#interpreting-output).
### Cycle progress
diff --git a/docs/features.md b/docs/features.md
index 05670e6f..35a869a9 100644
--- a/docs/features.md
+++ b/docs/features.md
@@ -1,49 +1,61 @@
# Important features of AFL++
- AFL++ supports llvm from 3.8 up to version 12, very fast binary fuzzing with QEMU 5.1
- with laf-intel and redqueen, frida mode, unicorn mode, gcc plugin, full *BSD,
- Mac OS, Solaris and Android support and much, much, much more.
+AFL++ supports llvm from 3.8 up to version 12, very fast binary fuzzing with
+QEMU 5.1 with laf-intel and redqueen, frida mode, unicorn mode, gcc plugin, full
+*BSD, Mac OS, Solaris and Android support and much, much, much more.
- | Feature/Instrumentation | afl-gcc | llvm | gcc_plugin | frida_mode(9) | qemu_mode(10) |unicorn_mode(10) |coresight_mode(11)|
- | -------------------------|:-------:|:---------:|:----------:|:----------------:|:----------------:|:----------------:|:----------------:|
- | Threadsafe counters | | x(3) | | | | | |
- | NeverZero | x86[_64]| x(1) | x | x | x | x | |
- | Persistent Mode | | x | x | x86[_64]/arm64 | x86[_64]/arm[64] | x | |
- | LAF-Intel / CompCov | | x | | | x86[_64]/arm[64] | x86[_64]/arm[64] | |
- | CmpLog | | x | | x86[_64]/arm64 | x86[_64]/arm[64] | | |
- | Selective Instrumentation| | x | x | x | x | | |
- | Non-Colliding Coverage | | x(4) | | | (x)(5) | | |
- | Ngram prev_loc Coverage | | x(6) | | | | | |
- | Context Coverage | | x(6) | | | | | |
- | Auto Dictionary | | x(7) | | | | | |
- | Snapshot LKM Support | | (x)(8) | (x)(8) | | (x)(5) | | |
- | Shared Memory Test cases | | x | x | x86[_64]/arm64 | x | x | |
+| Feature/Instrumentation | afl-gcc | llvm | gcc_plugin | frida_mode(9) | qemu_mode(10) |unicorn_mode(10) |coresight_mode(11)|
+| -------------------------|:-------:|:---------:|:----------:|:----------------:|:----------------:|:----------------:|:----------------:|
+| Threadsafe counters | | x(3) | | | | | |
+| NeverZero | x86[_64]| x(1) | x | x | x | x | |
+| Persistent Mode | | x | x | x86[_64]/arm64 | x86[_64]/arm[64] | x | |
+| LAF-Intel / CompCov | | x | | | x86[_64]/arm[64] | x86[_64]/arm[64] | |
+| CmpLog | | x | | x86[_64]/arm64 | x86[_64]/arm[64] | | |
+| Selective Instrumentation| | x | x | x | x | | |
+| Non-Colliding Coverage | | x(4) | | | (x)(5) | | |
+| Ngram prev_loc Coverage | | x(6) | | | | | |
+| Context Coverage | | x(6) | | | | | |
+| Auto Dictionary | | x(7) | | | | | |
+| Snapshot LKM Support | | (x)(8) | (x)(8) | | (x)(5) | | |
+| Shared Memory Test cases | | x | x | x86[_64]/arm64 | x | x | |
- 1. default for LLVM >= 9.0, env var for older version due an efficiency bug in previous llvm versions
- 2. GCC creates non-performant code, hence it is disabled in gcc_plugin
- 3. with `AFL_LLVM_THREADSAFE_INST`, disables NeverZero
- 4. with pcguard mode and LTO mode for LLVM 11 and newer
- 5. upcoming, development in the branch
- 6. not compatible with LTO instrumentation and needs at least LLVM v4.1
- 7. automatic in LTO mode with LLVM 11 and newer, an extra pass for all LLVM versions that write to a file to use with afl-fuzz' `-x`
- 8. the snapshot LKM is currently unmaintained due to too many kernel changes coming too fast :-(
- 9. frida mode is supported on Linux and MacOS for Intel and ARM
- 10. QEMU/Unicorn is only supported on Linux
- 11. Coresight mode is only available on AARCH64 Linux with a CPU with Coresight extension
+1. default for LLVM >= 9.0, env var for older versions due to an efficiency
+   bug in previous llvm versions
+2. GCC creates non-performant code, hence it is disabled in gcc_plugin
+3. with `AFL_LLVM_THREADSAFE_INST`, disables NeverZero
+4. with pcguard mode and LTO mode for LLVM 11 and newer
+5. upcoming, development in the branch
+6. not compatible with LTO instrumentation and needs at least LLVM v4.1
+7. automatic in LTO mode with LLVM 11 and newer, an extra pass for all LLVM
+   versions that writes to a file for use with afl-fuzz' `-x`
+8. the snapshot LKM is currently unmaintained due to too many kernel changes
+ coming too fast :-(
+9. frida mode is supported on Linux and MacOS for Intel and ARM
+10. QEMU/Unicorn is only supported on Linux
+11. Coresight mode is only available on AARCH64 Linux with a CPU with Coresight
+ extension
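+
+As a rough, hedged illustration of how two of these switches look in practice
+(target names and corpus directories are placeholders; see the instrumentation
+READMEs and docs/env_variables.md for the authoritative options):
+
+```
+# threadsafe counters instead of NeverZero (footnote 3)
+AFL_LLVM_THREADSAFE_INST=1 afl-clang-fast -o target target.c
+# feed an auto-generated dictionary to afl-fuzz (footnote 7)
+afl-fuzz -i in -o out -x target.dict -- ./target @@
+```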
- Among others, the following features and patches have been integrated:
+Among others, the following features and patches have been integrated:
- * NeverZero patch for afl-gcc, instrumentation, qemu_mode and unicorn_mode which prevents a wrapping map value to zero, increases coverage
- * Persistent mode, deferred forkserver and in-memory fuzzing for qemu_mode
- * Unicorn mode which allows fuzzing of binaries from completely different platforms (integration provided by domenukk)
- * The new CmpLog instrumentation for LLVM and QEMU inspired by [Redqueen](https://www.syssec.ruhr-uni-bochum.de/media/emma/veroeffentlichungen/2018/12/17/NDSS19-Redqueen.pdf)
- * Win32 PE binary-only fuzzing with QEMU and Wine
- * AFLfast's power schedules by Marcel Böhme: [https://github.com/mboehme/aflfast](https://github.com/mboehme/aflfast)
- * The MOpt mutator: [https://github.com/puppet-meteor/MOpt-AFL](https://github.com/puppet-meteor/MOpt-AFL)
- * LLVM mode Ngram coverage by Adrian Herrera [https://github.com/adrianherrera/afl-ngram-pass](https://github.com/adrianherrera/afl-ngram-pass)
- * LAF-Intel/CompCov support for instrumentation, qemu_mode and unicorn_mode (with enhanced capabilities)
- * Radamsa and honggfuzz mutators (as custom mutators).
- * QBDI mode to fuzz android native libraries via Quarkslab's [QBDI](https://github.com/QBDI/QBDI) framework
- * Frida and ptrace mode to fuzz binary-only libraries, etc.
+* NeverZero patch for afl-gcc, instrumentation, qemu_mode and unicorn_mode
+  which prevents map values from wrapping to zero, increasing coverage
+* Persistent mode, deferred forkserver and in-memory fuzzing for qemu_mode
+* Unicorn mode which allows fuzzing of binaries from completely different
+ platforms (integration provided by domenukk)
+* The new CmpLog instrumentation for LLVM and QEMU inspired by
+ [Redqueen](https://www.syssec.ruhr-uni-bochum.de/media/emma/veroeffentlichungen/2018/12/17/NDSS19-Redqueen.pdf)
+* Win32 PE binary-only fuzzing with QEMU and Wine
+* AFLfast's power schedules by Marcel Böhme:
+ [https://github.com/mboehme/aflfast](https://github.com/mboehme/aflfast)
+* The MOpt mutator:
+ [https://github.com/puppet-meteor/MOpt-AFL](https://github.com/puppet-meteor/MOpt-AFL)
+* LLVM mode Ngram coverage by Adrian Herrera
+ [https://github.com/adrianherrera/afl-ngram-pass](https://github.com/adrianherrera/afl-ngram-pass)
+* LAF-Intel/CompCov support for instrumentation, qemu_mode and unicorn_mode
+ (with enhanced capabilities)
+* Radamsa and honggfuzz mutators (as custom mutators).
+* QBDI mode to fuzz android native libraries via Quarkslab's
+ [QBDI](https://github.com/QBDI/QBDI) framework
+* Frida and ptrace mode to fuzz binary-only libraries, etc.
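+
+For example, a hedged sketch of enabling two of these at run time - the AFLfast
+`fast` power schedule and the MOpt mutator (target and directories are
+placeholders; the exact flags may differ between versions, check `afl-fuzz -h`):
+
+```
+afl-fuzz -i in -o out -p fast -L 0 -- ./target @@
+```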
- So all in all this is the best-of AFL that is out there :-)
\ No newline at end of file
+So all in all this is the best-of AFL that is out there :-)
\ No newline at end of file
diff --git a/docs/fuzzing_binary-only_targets.md b/docs/fuzzing_binary-only_targets.md
index 0b39042f..4490660d 100644
--- a/docs/fuzzing_binary-only_targets.md
+++ b/docs/fuzzing_binary-only_targets.md
@@ -84,6 +84,8 @@ Wine, python3, and the pefile python package installed.
It is included in AFL++.
+For more information, see
+[qemu_mode/README.wine.md](../qemu_mode/README.wine.md).
+
### Frida_mode
In frida_mode, you can fuzz binary-only targets as easily as with QEMU.
@@ -99,11 +101,13 @@ make
```
For additional instructions and caveats, see
-[frida_mode/README.md](../frida_mode/README.md). If possible, you should use the
-persistent mode, see [qemu_frida/README.md](../qemu_frida/README.md). The mode
-is approximately 2-5x slower than compile-time instrumentation, and is less
-conducive to parallelization. But for binary-only fuzzing, it gives a huge speed
-improvement if it is possible to use.
+[frida_mode/README.md](../frida_mode/README.md).
+
+If possible, you should use the persistent mode; see
+[qemu_frida/README.md](../qemu_frida/README.md). The mode is approximately 2-5x
+slower than compile-time instrumentation, and is less conducive to
+parallelization. But for binary-only fuzzing, it gives a huge speed improvement
+if it is possible to use.
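+
+As a rough sketch (assuming a harness binary `./fuzz` exporting
+`LLVMFuzzerTestOneInput`; see [frida_mode/README.md](../frida_mode/README.md)
+for the authoritative variable names), persistent mode in frida_mode is driven
+by environment variables much like in qemu_mode:
+
+```
+AFL_FRIDA_PERSISTENT_ADDR=0x$(nm fuzz | grep "T LLVMFuzzerTestOneInput" | awk '{print $1}') \
+  afl-fuzz -O -i in -o out -- ./fuzz
+```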
If you want to fuzz a binary-only library, then you can fuzz it with frida-gum
via frida_mode/. You will have to write a harness to call the target function in
@@ -154,8 +158,6 @@ and use afl-untracer.c as a template. It is slower than frida_mode.
For more information, see
[utils/afl_untracer/README.md](../utils/afl_untracer/README.md).
-## Binary rewriters
-
### Coresight
Coresight is ARM's answer to Intel's PT. With AFL++ v3.15, there is a coresight
@@ -163,6 +165,35 @@ tracer implementation available in `coresight_mode/` which is faster than QEMU,
however, cannot run in parallel. Currently, only one process can be traced, it
is WIP.
+For more information, see
+[coresight_mode/README.md](../coresight_mode/README.md).
+
+## Binary rewriters
+
+An alternative solution is binary rewriters. They are faster than the
+solutions native to AFL++ but don't always work.
+
+### ZAFL
+
+ZAFL is a static rewriting platform supporting x86-64 C/C++,
+stripped/unstripped, and PIE/non-PIE binaries. Beyond conventional
+instrumentation, ZAFL's API enables transformation passes (e.g., laf-Intel,
+context sensitivity, InsTrim, etc.).
+
+Its baseline instrumentation speed typically averages 90-95% of
+afl-clang-fast's.
+
+[https://git.zephyr-software.com/opensrc/zafl](https://git.zephyr-software.com/opensrc/zafl)
+
+### RetroWrite
+
+If you have an x86/x86_64 binary that still has its symbols, is compiled with
+position independent code (PIC/PIE), and does not use most of the C++ features,
+then the RetroWrite solution might be for you. It decompiles to ASM files which
+can then be instrumented with afl-gcc.
+
+It is at about 80-85% performance.
+
+[https://github.com/HexHive/retrowrite](https://github.com/HexHive/retrowrite)
+
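+As a purely hypothetical sketch of that flow (the exact RetroWrite command line
+is an assumption here - consult its README before relying on it):
+
+```
+retrowrite ./target ./target.s
+afl-gcc ./target.s -o ./target_instr
+afl-fuzz -i in -o out -- ./target_instr @@
+```
+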
### Dyninst
Dyninst is a binary instrumentation framework similar to Pintool and DynamoRIO.
@@ -183,27 +214,6 @@ with afl-dyninst.
[https://github.com/vanhauser-thc/afl-dyninst](https://github.com/vanhauser-thc/afl-dyninst)
-### Intel PT
-
-If you have a newer Intel CPU, you can make use of Intel's processor trace. The
-big issue with Intel's PT is the small buffer size and the complex encoding of
-the debug information collected through PT. This makes the decoding very CPU
-intensive and hence slow. As a result, the overall speed decrease is about
-70-90% (depending on the implementation and other factors).
-
-There are two AFL intel-pt implementations:
-
-1. [https://github.com/junxzm1990/afl-pt](https://github.com/junxzm1990/afl-pt)
- => This needs Ubuntu 14.04.05 without any updates and the 4.4 kernel.
-
-2. [https://github.com/hunter-ht-2018/ptfuzzer](https://github.com/hunter-ht-2018/ptfuzzer)
- => This needs a 4.14 or 4.15 kernel. The "nopti" kernel boot option must be
- used. This one is faster than the other.
-
-Note that there is also honggfuzz:
-[https://github.com/google/honggfuzz](https://github.com/google/honggfuzz). But
-its IPT performance is just 6%!
-
### Mcsema
Theoretically, you can also decompile to llvm IR with mcsema, and then use
@@ -211,6 +221,8 @@ llvm_mode to instrument the binary. Good luck with that.
[https://github.com/lifting-bits/mcsema](https://github.com/lifting-bits/mcsema)
+## Binary tracers
+
### Pintool & DynamoRIO
Pintool and DynamoRIO are dynamic instrumentation engines. They can be used for
@@ -236,27 +248,26 @@ Pintool solutions:
* [https://github.com/spinpx/afl_pin_mode](https://github.com/spinpx/afl_pin_mode)
<= only old Pintool version supported
-### RetroWrite
-
-If you have an x86/x86_64 binary that still has its symbols, is compiled with
-position independent code (PIC/PIE), and does not use most of the C++ features,
-then the RetroWrite solution might be for you. It decompiles to ASM files which
-can then be instrumented with afl-gcc.
+### Intel PT
-It is at about 80-85% performance.
+If you have a newer Intel CPU, you can make use of Intel's processor trace. The
+big issue with Intel's PT is the small buffer size and the complex encoding of
+the debug information collected through PT. This makes the decoding very CPU
+intensive and hence slow. As a result, the overall speed decrease is about
+70-90% (depending on the implementation and other factors).
-[https://github.com/HexHive/retrowrite](https://github.com/HexHive/retrowrite)
+There are two AFL intel-pt implementations:
-### ZAFL
-ZAFL is a static rewriting platform supporting x86-64 C/C++,
-stripped/unstripped, and PIE/non-PIE binaries. Beyond conventional
-instrumentation, ZAFL's API enables transformation passes (e.g., laf-Intel,
-context sensitivity, InsTrim, etc.).
+1. [https://github.com/junxzm1990/afl-pt](https://github.com/junxzm1990/afl-pt)
+ => This needs Ubuntu 14.04.05 without any updates and the 4.4 kernel.
-Its baseline instrumentation speed typically averages 90-95% of
-afl-clang-fast's.
+2. [https://github.com/hunter-ht-2018/ptfuzzer](https://github.com/hunter-ht-2018/ptfuzzer)
+ => This needs a 4.14 or 4.15 kernel. The "nopti" kernel boot option must be
+ used. This one is faster than the other.
-[https://git.zephyr-software.com/opensrc/zafl](https://git.zephyr-software.com/opensrc/zafl)
+Note that there is also honggfuzz:
+[https://github.com/google/honggfuzz](https://github.com/google/honggfuzz). But
+its IPT performance is just 6%!
## Non-AFL++ solutions
diff --git a/docs/limitations.md b/docs/limitations.md
index a68c0a85..8172a902 100644
--- a/docs/limitations.md
+++ b/docs/limitations.md
@@ -1,36 +1,37 @@
# Known limitations & areas for improvement
-Here are some of the most important caveats for AFL:
+Here are some of the most important caveats for AFL++:
- - AFL++ detects faults by checking for the first spawned process dying due to
- a signal (SIGSEGV, SIGABRT, etc). Programs that install custom handlers for
- these signals may need to have the relevant code commented out. In the same
- vein, faults in child processes spawned by the fuzzed target may evade
- detection unless you manually add some code to catch that.
+- AFL++ detects faults by checking for the first spawned process dying due to a
+ signal (SIGSEGV, SIGABRT, etc). Programs that install custom handlers for
+ these signals may need to have the relevant code commented out. In the same
+ vein, faults in child processes spawned by the fuzzed target may evade
+ detection unless you manually add some code to catch that.
- - As with any other brute-force tool, the fuzzer offers limited coverage if
- encryption, checksums, cryptographic signatures, or compression are used to
- wholly wrap the actual data format to be tested.
+- As with any other brute-force tool, the fuzzer offers limited coverage if
+ encryption, checksums, cryptographic signatures, or compression are used to
+ wholly wrap the actual data format to be tested.
- To work around this, you can comment out the relevant checks (see
- utils/libpng_no_checksum/ for inspiration); if this is not possible,
- you can also write a postprocessor, one of the hooks of custom mutators.
- See [custom_mutators.md](custom_mutators.md) on how to use
- `AFL_CUSTOM_MUTATOR_LIBRARY`
+  To work around this, you can comment out the relevant checks (see
+  utils/libpng_no_checksum/ for inspiration); if this is not possible, you
+  can also write a postprocessor, one of the hooks of custom mutators. See
+  [custom_mutators.md](custom_mutators.md) on how to use
+  `AFL_CUSTOM_MUTATOR_LIBRARY`; a build-and-run sketch follows after this
+  list.
- - There are some unfortunate trade-offs with ASAN and 64-bit binaries. This
- isn't due to any specific fault of afl-fuzz.
+- There are some unfortunate trade-offs with ASAN and 64-bit binaries. This
+ isn't due to any specific fault of afl-fuzz.
- - There is no direct support for fuzzing network services, background
- daemons, or interactive apps that require UI interaction to work. You may
- need to make simple code changes to make them behave in a more traditional
- way. Preeny may offer a relatively simple option, too - see:
- [https://github.com/zardus/preeny](https://github.com/zardus/preeny)
+- There is no direct support for fuzzing network services, background daemons,
+ or interactive apps that require UI interaction to work. You may need to make
+ simple code changes to make them behave in a more traditional way. Preeny may
+ offer a relatively simple option, too - see:
+ [https://github.com/zardus/preeny](https://github.com/zardus/preeny)
- Some useful tips for modifying network-based services can be also found at:
- [https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop](https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop)
+  Some useful tips for modifying network-based services can also be found at:
+  [https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop](https://www.fastly.com/blog/how-to-fuzz-server-american-fuzzy-lop)
- - Occasionally, sentient machines rise against their creators. If this
- happens to you, please consult [https://lcamtuf.coredump.cx/prep/](https://lcamtuf.coredump.cx/prep/).
+- Occasionally, sentient machines rise against their creators. If this happens
+ to you, please consult
+ [https://lcamtuf.coredump.cx/prep/](https://lcamtuf.coredump.cx/prep/).
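+
+As referenced in the checksum workaround above, a minimal, hedged sketch of
+wiring up such a postprocessor (the file `fix_checksum.c` implementing the
+custom mutator hooks is hypothetical):
+
+```
+cc -shared -fPIC -o fix_checksum.so fix_checksum.c
+AFL_CUSTOM_MUTATOR_LIBRARY=./fix_checksum.so afl-fuzz -i in -o out -- ./target @@
+```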
-Beyond this, see [INSTALL.md](INSTALL.md) for platform-specific tips.
+Beyond this, see [INSTALL.md](INSTALL.md) for platform-specific tips.
\ No newline at end of file
diff --git a/docs/parallel_fuzzing.md b/docs/parallel_fuzzing.md
deleted file mode 100644
index 130cb3ce..00000000
--- a/docs/parallel_fuzzing.md
+++ /dev/null
@@ -1,256 +0,0 @@
-# Tips for parallel fuzzing
-
-This document talks about synchronizing afl-fuzz jobs on a single machine or
-across a fleet of systems. See README.md for the general instruction manual.
-
-Note that this document is rather outdated. please refer to the main document
-section on multiple core usage
-[fuzzing_in_depth.md:b) Using multiple cores](fuzzing_in_depth.md#b-using-multiple-cores)
-for up to date strategies!
-
-## 1) Introduction
-
-Every copy of afl-fuzz will take up one CPU core. This means that on an n-core
-system, you can almost always run around n concurrent fuzzing jobs with
-virtually no performance hit (you can use the afl-gotcpu tool to make sure).
-
-In fact, if you rely on just a single job on a multi-core system, you will be
-underutilizing the hardware. So, parallelization is always the right way to go.
-
-When targeting multiple unrelated binaries or using the tool in
-"non-instrumented" (-n) mode, it is perfectly fine to just start up several
-fully separate instances of afl-fuzz. The picture gets more complicated when you
-want to have multiple fuzzers hammering a common target: if a hard-to-hit but
-interesting test case is synthesized by one fuzzer, the remaining instances will
-not be able to use that input to guide their work.
-
-To help with this problem, afl-fuzz offers a simple way to synchronize test
-cases on the fly.
-
-It is a good idea to use different power schedules if you run several instances
-in parallel (`-p` option).
-
-Alternatively running other AFL spinoffs in parallel can be of value, e.g.
-Angora (https://github.com/AngoraFuzzer/Angora/)
-
-## 2) Single-system parallelization
-
-If you wish to parallelize a single job across multiple cores on a local system,
-simply create a new, empty output directory ("sync dir") that will be shared by
-all the instances of afl-fuzz; and then come up with a naming scheme for every
-instance - say, "fuzzer01", "fuzzer02", etc.
-
-Run the first one ("main node", -M) like this:
-
-```
-./afl-fuzz -i testcase_dir -o sync_dir -M fuzzer01 [...other stuff...]
-```
-
-...and then, start up secondary (-S) instances like this:
-
-```
-./afl-fuzz -i testcase_dir -o sync_dir -S fuzzer02 [...other stuff...]
-./afl-fuzz -i testcase_dir -o sync_dir -S fuzzer03 [...other stuff...]
-```
-
-Each fuzzer will keep its state in a separate subdirectory, like so:
-
- /path/to/sync_dir/fuzzer01/
-
-Each instance will also periodically rescan the top-level sync directory for any
-test cases found by other fuzzers - and will incorporate them into its own
-fuzzing when they are deemed interesting enough. For performance reasons only -M
-main node syncs the queue with everyone, the -S secondary nodes will only sync
-from the main node.
-
-The difference between the -M and -S modes is that the main instance will still
-perform deterministic checks; while the secondary instances will proceed
-straight to random tweaks.
-
-Note that you must always have one -M main instance! Running multiple -M
-instances is wasteful!
-
-You can also monitor the progress of your jobs from the command line with the
-provided afl-whatsup tool. When the instances are no longer finding new paths,
-it's probably time to stop.
-
-WARNING: Exercise caution when explicitly specifying the -f option. Each fuzzer
-must use a separate temporary file; otherwise, things will go south. One safe
-example may be:
-
-```
-./afl-fuzz [...] -S fuzzer10 -f file10.txt ./fuzzed/binary @@
-./afl-fuzz [...] -S fuzzer11 -f file11.txt ./fuzzed/binary @@
-./afl-fuzz [...] -S fuzzer12 -f file12.txt ./fuzzed/binary @@
-```
-
-This is not a concern if you use @@ without -f and let afl-fuzz come up with the
-file name.
-
-## 3) Multiple -M mains
-
-
-There is support for parallelizing the deterministic checks. This is only needed
-where
-
- 1. many new paths are found fast over a long time and it looks unlikely that
- main node will ever catch up, and
- 2. deterministic fuzzing is actively helping path discovery (you can see this
- in the main node for the first for lines in the "fuzzing strategy yields"
- section. If the ration `found/attempts` is high, then it is effective. It
- most commonly isn't.)
-
-Only if both are true it is beneficial to have more than one main. You can
-leverage this by creating -M instances like so:
-
-```
-./afl-fuzz -i testcase_dir -o sync_dir -M mainA:1/3 [...]
-./afl-fuzz -i testcase_dir -o sync_dir -M mainB:2/3 [...]
-./afl-fuzz -i testcase_dir -o sync_dir -M mainC:3/3 [...]
-```
-
-... where the first value after ':' is the sequential ID of a particular main
-instance (starting at 1), and the second value is the total number of fuzzers to
-distribute the deterministic fuzzing across. Note that if you boot up fewer
-fuzzers than indicated by the second number passed to -M, you may end up with
-poor coverage.
-
-## 4) Syncing with non-AFL fuzzers or independent instances
-
-A -M main node can be told with the `-F other_fuzzer_queue_directory` option to
-sync results from other fuzzers, e.g. libfuzzer or honggfuzz.
-
-Only the specified directory will by synced into afl, not subdirectories. The
-specified directory does not need to exist yet at the start of afl.
-
-The `-F` option can be passed to the main node several times.
-
-## 5) Multi-system parallelization
-
-The basic operating principle for multi-system parallelization is similar to the
-mechanism explained in section 2. The key difference is that you need to write a
-simple script that performs two actions:
-
- - Uses SSH with authorized_keys to connect to every machine and retrieve a tar
- archive of the /path/to/sync_dir/ directory local to the
- machine. It is best to use a naming scheme that includes host name and it's
- being a main node (e.g. main1, main2) in the fuzzer ID, so that you can do
- something like:
-
- ```sh
- for host in `cat HOSTLIST`; do
- ssh user@$host "tar -czf - sync/$host_main*/" > $host.tgz
- done
- ```
-
- - Distributes and unpacks these files on all the remaining machines, e.g.:
-
- ```sh
- for srchost in `cat HOSTLIST`; do
- for dsthost in `cat HOSTLIST`; do
- test "$srchost" = "$dsthost" && continue
- ssh user@$srchost 'tar -kxzf -' < $dsthost.tgz
- done
- done
- ```
-
-There is an example of such a script in utils/distributed_fuzzing/.
-
-There are other (older) more featured, experimental tools:
- * https://github.com/richo/roving
- * https://github.com/MartijnB/disfuzz-afl
-
-However these do not support syncing just main nodes (yet).
-
-When developing custom test case sync code, there are several optimizations to
-keep in mind:
-
- - The synchronization does not have to happen very often; running the task
- every 60 minutes or even less often at later fuzzing stages is fine
-
- - There is no need to synchronize crashes/ or hangs/; you only need to copy
- over queue/* (and ideally, also fuzzer_stats).
-
- - It is not necessary (and not advisable!) to overwrite existing files; the -k
- option in tar is a good way to avoid that.
-
- - There is no need to fetch directories for fuzzers that are not running
- locally on a particular machine, and were simply copied over onto that
- system during earlier runs.
-
- - For large fleets, you will want to consolidate tarballs for each host, as
- this will let you use n SSH connections for sync, rather than n*(n-1).
-
- You may also want to implement staged synchronization. For example, you
- could have 10 groups of systems, with group 1 pushing test cases only to
- group 2; group 2 pushing them only to group 3; and so on, with group
- eventually 10 feeding back to group 1.
-
- This arrangement would allow test interesting cases to propagate across the
- fleet without having to copy every fuzzer queue to every single host.
-
- - You do not want a "main" instance of afl-fuzz on every system; you should
- run them all with -S, and just designate a single process somewhere within
- the fleet to run with -M.
-
- - Syncing is only necessary for the main nodes on a system. It is possible to
- run main-less with only secondaries. However then you need to find out which
- secondary took over the temporary role to be the main node. Look for the
- `is_main_node` file in the fuzzer directories, eg.
- `sync-dir/hostname-*/is_main_node`
-
-It is *not* advisable to skip the synchronization script and run the fuzzers
-directly on a network filesystem; unexpected latency and unkillable processes in
-I/O wait state can mess things up.
-
-## 6) Remote monitoring and data collection
-
-You can use screen, nohup, tmux, or something equivalent to run remote instances
-of afl-fuzz. If you redirect the program's output to a file, it will
-automatically switch from a fancy UI to more limited status reports. There is
-also basic machine-readable information which is always written to the
-fuzzer_stats file in the output directory. Locally, that information can be
-interpreted with afl-whatsup.
-
-In principle, you can use the status screen of the main (-M) instance to monitor
-the overall fuzzing progress and decide when to stop. In this mode, the most
-important signal is just that no new paths are being found for a longer while.
-If you do not have a main instance, just pick any single secondary instance to
-watch and go by that.
-
-You can also rely on that instance's output directory to collect the synthesized
-corpus that covers all the noteworthy paths discovered anywhere within the
-fleet. Secondary (-S) instances do not require any special monitoring, other
-than just making sure that they are up.
-
-Keep in mind that crashing inputs are *not* automatically propagated to the main
-instance, so you may still want to monitor for crashes fleet-wide from within
-your synchronization or health checking scripts (see afl-whatsup).
-
-## 7) Asymmetric setups
-
-It is perhaps worth noting that all of the following is permitted:
-
- - Running afl-fuzz with conjunction with other guided tools that can extend
- coverage (e.g., via concolic execution). Third-party tools simply need to
- follow the protocol described above for pulling new test cases from
- out_dir//queue/* and writing their own finds to sequentially
- numbered id:nnnnnn files in out_dir//queue/*.
-
- - Running some of the synchronized fuzzers with different (but related) target
- binaries. For example, simultaneously stress-testing several different JPEG
- parsers (say, IJG jpeg and libjpeg-turbo) while sharing the discovered test
- cases can have synergistic effects and improve the overall coverage.
-
- (In this case, running one -M instance per target is necessary.)
-
- - Having some of the fuzzers invoke the binary in different ways. For example,
- 'djpeg' supports several DCT modes, configurable with a command-line flag,
- while 'dwebp' supports incremental and one-shot decoding. In some scenarios,
- going after multiple distinct modes and then pooling test cases will improve
- coverage.
-
- - Much less convincingly, running the synchronized fuzzers with different
- starting test cases (e.g., progressive and standard JPEG) or dictionaries.
- The synchronization mechanism ensures that the test sets will get fairly
- homogeneous over time, but it introduces some initial variability.
\ No newline at end of file
diff --git a/docs/technical_details.md b/docs/technical_details.md
deleted file mode 100644
index 994ffe9f..00000000
--- a/docs/technical_details.md
+++ /dev/null
@@ -1,550 +0,0 @@
-# Technical "whitepaper" for afl-fuzz
-
-
-NOTE: this document is mostly outdated!
-
-
-This document provides a quick overview of the guts of American Fuzzy Lop.
-See README.md for the general instruction manual; and for a discussion of
-motivations and design goals behind AFL, see historical_notes.md.
-
-## 0. Design statement
-
-American Fuzzy Lop does its best not to focus on any singular principle of
-operation and not be a proof-of-concept for any specific theory. The tool can
-be thought of as a collection of hacks that have been tested in practice,
-found to be surprisingly effective, and have been implemented in the simplest,
-most robust way I could think of at the time.
-
-Many of the resulting features are made possible thanks to the availability of
-lightweight instrumentation that served as a foundation for the tool, but this
-mechanism should be thought of merely as a means to an end. The only true
-governing principles are speed, reliability, and ease of use.
-
-## 1. Coverage measurements
-
-The instrumentation injected into compiled programs captures branch (edge)
-coverage, along with coarse branch-taken hit counts. The code injected at
-branch points is essentially equivalent to:
-
-```c
- cur_location = ;
- shared_mem[cur_location ^ prev_location]++;
- prev_location = cur_location >> 1;
-```
-
-The `cur_location` value is generated randomly to simplify the process of
-linking complex projects and keep the XOR output distributed uniformly.
-
-The `shared_mem[]` array is a 64 kB SHM region passed to the instrumented binary
-by the caller. Every byte set in the output map can be thought of as a hit for
-a particular (`branch_src`, `branch_dst`) tuple in the instrumented code.
-
-The size of the map is chosen so that collisions are sporadic with almost all
-of the intended targets, which usually sport between 2k and 10k discoverable
-branch points:
-
-```
- Branch cnt | Colliding tuples | Example targets
- ------------+------------------+-----------------
- 1,000 | 0.75% | giflib, lzo
- 2,000 | 1.5% | zlib, tar, xz
- 5,000 | 3.5% | libpng, libwebp
- 10,000 | 7% | libxml
- 20,000 | 14% | sqlite
- 50,000 | 30% | -
-```
-
-At the same time, its size is small enough to allow the map to be analyzed
-in a matter of microseconds on the receiving end, and to effortlessly fit
-within L2 cache.
-
-This form of coverage provides considerably more insight into the execution
-path of the program than simple block coverage. In particular, it trivially
-distinguishes between the following execution traces:
-
-```
- A -> B -> C -> D -> E (tuples: AB, BC, CD, DE)
- A -> B -> D -> C -> E (tuples: AB, BD, DC, CE)
-```
-
-This aids the discovery of subtle fault conditions in the underlying code,
-because security vulnerabilities are more often associated with unexpected
-or incorrect state transitions than with merely reaching a new basic block.
-
-The reason for the shift operation in the last line of the pseudocode shown
-earlier in this section is to preserve the directionality of tuples (without
-this, A ^ B would be indistinguishable from B ^ A) and to retain the identity
-of tight loops (otherwise, A ^ A would be obviously equal to B ^ B).
-
-The absence of simple saturating arithmetic opcodes on Intel CPUs means that
-the hit counters can sometimes wrap around to zero. Since this is a fairly
-unlikely and localized event, it's seen as an acceptable performance trade-off.
-
-### 2. Detecting new behaviors
-
-The fuzzer maintains a global map of tuples seen in previous executions; this
-data can be rapidly compared with individual traces and updated in just a couple
-of dword- or qword-wide instructions and a simple loop.
-
-When a mutated input produces an execution trace containing new tuples, the
-corresponding input file is preserved and routed for additional processing
-later on (see section #3). Inputs that do not trigger new local-scale state
-transitions in the execution trace (i.e., produce no new tuples) are discarded,
-even if their overall control flow sequence is unique.
-
-This approach allows for a very fine-grained and long-term exploration of
-program state while not having to perform any computationally intensive and
-fragile global comparisons of complex execution traces, and while avoiding the
-scourge of path explosion.
-
-To illustrate the properties of the algorithm, consider that the second trace
-shown below would be considered substantially new because of the presence of
-new tuples (CA, AE):
-
-```
- #1: A -> B -> C -> D -> E
- #2: A -> B -> C -> A -> E
-```
-
-At the same time, with #2 processed, the following pattern will not be seen
-as unique, despite having a markedly different overall execution path:
-
-```
- #3: A -> B -> C -> A -> B -> C -> A -> B -> C -> D -> E
-```
-
-In addition to detecting new tuples, the fuzzer also considers coarse tuple
-hit counts. These are divided into several buckets:
-
-```
- 1, 2, 3, 4-7, 8-15, 16-31, 32-127, 128+
-```
-
-To some extent, the number of buckets is an implementation artifact: it allows
-an in-place mapping of an 8-bit counter generated by the instrumentation to
-an 8-position bitmap relied on by the fuzzer executable to keep track of the
-already-seen execution counts for each tuple.
-
-Changes within the range of a single bucket are ignored; transition from one
-bucket to another is flagged as an interesting change in program control flow,
-and is routed to the evolutionary process outlined in the section below.
-
-The hit count behavior provides a way to distinguish between potentially
-interesting control flow changes, such as a block of code being executed
-twice when it was normally hit only once. At the same time, it is fairly
-insensitive to empirically less notable changes, such as a loop going from
-47 cycles to 48. The counters also provide some degree of "accidental"
-immunity against tuple collisions in dense trace maps.
-
-The execution is policed fairly heavily through memory and execution time
-limits; by default, the timeout is set at 5x the initially-calibrated
-execution speed, rounded up to 20 ms. The aggressive timeouts are meant to
-prevent dramatic fuzzer performance degradation by descending into tarpits
-that, say, improve coverage by 1% while being 100x slower; we pragmatically
-reject them and hope that the fuzzer will find a less expensive way to reach
-the same code. Empirical testing strongly suggests that more generous time
-limits are not worth the cost.
-
-## 3. Evolving the input queue
-
-Mutated test cases that produced new state transitions within the program are
-added to the input queue and used as a starting point for future rounds of
-fuzzing. They supplement, but do not automatically replace, existing finds.
-
-In contrast to more greedy genetic algorithms, this approach allows the tool
-to progressively explore various disjoint and possibly mutually incompatible
-features of the underlying data format, as shown in this image:
-
- 
-
-Several practical examples of the results of this algorithm are discussed
-here:
-
- https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html
- https://lcamtuf.blogspot.com/2014/11/afl-fuzz-nobody-expects-cdata-sections.html
-
-The synthetic corpus produced by this process is essentially a compact
-collection of "hmm, this does something new!" input files, and can be used to
-seed any other testing processes down the line (for example, to manually
-stress-test resource-intensive desktop apps).
-
-With this approach, the queue for most targets grows to somewhere between 1k
-and 10k entries; approximately 10-30% of this is attributable to the discovery
-of new tuples, and the remainder is associated with changes in hit counts.
-
-The following table compares the relative ability to discover file syntax and
-explore program states when using several different approaches to guided
-fuzzing. The instrumented target was GNU patch 2.7k.3 compiled with `-O3` and
-seeded with a dummy text file; the session consisted of a single pass over the
-input queue with afl-fuzz:
-
-```
- Fuzzer guidance | Blocks | Edges | Edge hit | Highest-coverage
- strategy used | reached | reached | cnt var | test case generated
- ------------------+---------+---------+----------+---------------------------
- (Initial file) | 156 | 163 | 1.00 | (none)
- | | | |
- Blind fuzzing S | 182 | 205 | 2.23 | First 2 B of RCS diff
- Blind fuzzing L | 228 | 265 | 2.23 | First 4 B of -c mode diff
- Block coverage | 855 | 1,130 | 1.57 | Almost-valid RCS diff
- Edge coverage | 1,452 | 2,070 | 2.18 | One-chunk -c mode diff
- AFL model | 1,765 | 2,597 | 4.99 | Four-chunk -c mode diff
-```
-
-The first entry for blind fuzzing ("S") corresponds to executing just a single
-round of testing; the second set of figures ("L") shows the fuzzer running in a
-loop for a number of execution cycles comparable with that of the instrumented
-runs, which required more time to fully process the growing queue.
-
-Roughly similar results have been obtained in a separate experiment where the
-fuzzer was modified to compile out all the random fuzzing stages and leave just
-a series of rudimentary, sequential operations such as walking bit flips.
-Because this mode would be incapable of altering the size of the input file,
-the sessions were seeded with a valid unified diff:
-
-```
- Queue extension | Blocks | Edges | Edge hit | Number of unique
- strategy used | reached | reached | cnt var | crashes found
- ------------------+---------+---------+----------+------------------
- (Initial file) | 624 | 717 | 1.00 | -
- | | | |
- Blind fuzzing | 1,101 | 1,409 | 1.60 | 0
- Block coverage | 1,255 | 1,649 | 1.48 | 0
- Edge coverage | 1,259 | 1,734 | 1.72 | 0
- AFL model | 1,452 | 2,040 | 3.16 | 1
-```
-
-At noted earlier on, some of the prior work on genetic fuzzing relied on
-maintaining a single test case and evolving it to maximize coverage. At least
-in the tests described above, this "greedy" approach appears to confer no
-substantial benefits over blind fuzzing strategies.
-
-### 4. Culling the corpus
-
-The progressive state exploration approach outlined above means that some of
-the test cases synthesized later on in the game may have edge coverage that
-is a strict superset of the coverage provided by their ancestors.
-
-To optimize the fuzzing effort, AFL periodically re-evaluates the queue using a
-fast algorithm that selects a smaller subset of test cases that still cover
-every tuple seen so far, and whose characteristics make them particularly
-favorable to the tool.
-
-The algorithm works by assigning every queue entry a score proportional to its
-execution latency and file size; and then selecting lowest-scoring candidates
-for each tuple.
-
-The tuples are then processed sequentially using a simple workflow:
-
- 1) Find next tuple not yet in the temporary working set,
- 2) Locate the winning queue entry for this tuple,
- 3) Register *all* tuples present in that entry's trace in the working set,
- 4) Go to #1 if there are any missing tuples in the set.
-
-The generated corpus of "favored" entries is usually 5-10x smaller than the
-starting data set. Non-favored entries are not discarded, but they are skipped
-with varying probabilities when encountered in the queue:
-
- - If there are new, yet-to-be-fuzzed favorites present in the queue, 99%
- of non-favored entries will be skipped to get to the favored ones.
- - If there are no new favorites:
- * If the current non-favored entry was fuzzed before, it will be skipped
- 95% of the time.
- * If it hasn't gone through any fuzzing rounds yet, the odds of skipping
- drop down to 75%.
-
-Based on empirical testing, this provides a reasonable balance between queue
-cycling speed and test case diversity.
-
-Slightly more sophisticated but much slower culling can be performed on input
-or output corpora with `afl-cmin`. This tool permanently discards the redundant
-entries and produces a smaller corpus suitable for use with `afl-fuzz` or
-external tools.
-
-## 5. Trimming input files
-
-File size has a dramatic impact on fuzzing performance, both because large
-files make the target binary slower, and because they reduce the likelihood
-that a mutation would touch important format control structures, rather than
-redundant data blocks. This is discussed in more detail in perf_tips.md.
-
-The possibility that the user will provide a low-quality starting corpus aside,
-some types of mutations can have the effect of iteratively increasing the size
-of the generated files, so it is important to counter this trend.
-
-Luckily, the instrumentation feedback provides a simple way to automatically
-trim down input files while ensuring that the changes made to the files have no
-impact on the execution path.
-
-The built-in trimmer in afl-fuzz attempts to sequentially remove blocks of data
-with variable length and stepover; any deletion that doesn't affect the checksum
-of the trace map is committed to disk. The trimmer is not designed to be
-particularly thorough; instead, it tries to strike a balance between precision
-and the number of `execve()` calls spent on the process, selecting the block size
-and stepover to match. The average per-file gains are around 5-20%.
-
-The standalone `afl-tmin` tool uses a more exhaustive, iterative algorithm, and
-also attempts to perform alphabet normalization on the trimmed files. The
-operation of `afl-tmin` is as follows.
-
-First, the tool automatically selects the operating mode. If the initial input
-crashes the target binary, afl-tmin will run in non-instrumented mode, simply
-keeping any tweaks that produce a simpler file but still crash the target.
-The same mode is used for hangs, if `-H` (hang mode) is specified.
-If the target is non-crashing, the tool uses an instrumented mode and keeps only
-the tweaks that produce exactly the same execution path.
-
-The actual minimization algorithm is:
-
- 1) Attempt to zero large blocks of data with large stepovers. Empirically,
- this is shown to reduce the number of execs by preempting finer-grained
- efforts later on.
- 2) Perform a block deletion pass with decreasing block sizes and stepovers,
- binary-search-style.
- 3) Perform alphabet normalization by counting unique characters and trying
- to bulk-replace each with a zero value.
- 4) As a last result, perform byte-by-byte normalization on non-zero bytes.
-
-Instead of zeroing with a 0x00 byte, `afl-tmin` uses the ASCII digit '0'. This
-is done because such a modification is much less likely to interfere with
-text parsing, so it is more likely to result in successful minimization of
-text files.
-
-The algorithm used here is less involved than some other test case
-minimization approaches proposed in academic work, but requires far fewer
-executions and tends to produce comparable results in most real-world
-applications.
-
-## 6. Fuzzing strategies
-
-The feedback provided by the instrumentation makes it easy to understand the
-value of various fuzzing strategies and optimize their parameters so that they
-work equally well across a wide range of file types. The strategies used by
-afl-fuzz are generally format-agnostic and are discussed in more detail here:
-
- https://lcamtuf.blogspot.com/2014/08/binary-fuzzing-strategies-what-works.html
-
-It is somewhat notable that especially early on, most of the work done by
-`afl-fuzz` is actually highly deterministic, and progresses to random stacked
-modifications and test case splicing only at a later stage. The deterministic
-strategies include:
-
- - Sequential bit flips with varying lengths and stepovers,
- - Sequential addition and subtraction of small integers,
- - Sequential insertion of known interesting integers (`0`, `1`, `INT_MAX`, etc),
-
-The purpose of opening with deterministic steps is related to their tendency to
-produce compact test cases and small diffs between the non-crashing and crashing
-inputs.
-
-With deterministic fuzzing out of the way, the non-deterministic steps include
-stacked bit flips, insertions, deletions, arithmetics, and splicing of different
-test cases.
-
-The relative yields and `execve()` costs of all these strategies have been
-investigated and are discussed in the aforementioned blog post.
-
-For the reasons discussed in historical_notes.md (chiefly, performance,
-simplicity, and reliability), AFL generally does not try to reason about the
-relationship between specific mutations and program states; the fuzzing steps
-are nominally blind, and are guided only by the evolutionary design of the
-input queue.
-
-That said, there is one (trivial) exception to this rule: when a new queue
-entry goes through the initial set of deterministic fuzzing steps, and tweaks to
-some regions in the file are observed to have no effect on the checksum of the
-execution path, they may be excluded from the remaining phases of
-deterministic fuzzing - and the fuzzer may proceed straight to random tweaks.
-Especially for verbose, human-readable data formats, this can reduce the number
-of execs by 10-40% or so without an appreciable drop in coverage. In extreme
-cases, such as normally block-aligned tar archives, the gains can be as high as
-90%.
-
-Because the underlying "effector maps" are local every queue entry and remain
-in force only during deterministic stages that do not alter the size or the
-general layout of the underlying file, this mechanism appears to work very
-reliably and proved to be simple to implement.
-
-## 7. Dictionaries
-
-The feedback provided by the instrumentation makes it easy to automatically
-identify syntax tokens in some types of input files, and to detect that certain
-combinations of predefined or auto-detected dictionary terms constitute a
-valid grammar for the tested parser.
-
-A discussion of how these features are implemented within afl-fuzz can be found
-here:
-
- https://lcamtuf.blogspot.com/2015/01/afl-fuzz-making-up-grammar-with.html
-
-In essence, when basic, typically easily-obtained syntax tokens are combined
-together in a purely random manner, the instrumentation and the evolutionary
-design of the queue together provide a feedback mechanism to differentiate
-between meaningless mutations and ones that trigger new behaviors in the
-instrumented code - and to incrementally build more complex syntax on top of
-this discovery.
-
-The dictionaries have been shown to enable the fuzzer to rapidly reconstruct
-the grammar of highly verbose and complex languages such as JavaScript, SQL,
-or XML; several examples of generated SQL statements are given in the blog
-post mentioned above.
-
-Interestingly, the AFL instrumentation also allows the fuzzer to automatically
-isolate syntax tokens already present in an input file. It can do so by looking
-for run of bytes that, when flipped, produce a consistent change to the
-program's execution path; this is suggestive of an underlying atomic comparison
-to a predefined value baked into the code. The fuzzer relies on this signal
-to build compact "auto dictionaries" that are then used in conjunction with
-other fuzzing strategies.
-
-## 8. De-duping crashes
-
-De-duplication of crashes is one of the more important problems for any
-competent fuzzing tool. Many of the naive approaches run into problems; in
-particular, looking just at the faulting address may lead to completely
-unrelated issues being clustered together if the fault happens in a common
-library function (say, `strcmp`, `strcpy`); while checksumming call stack
-backtraces can lead to extreme crash count inflation if the fault can be
-reached through a number of different, possibly recursive code paths.
-
-The solution implemented in `afl-fuzz` considers a crash unique if any of two
-conditions are met:
-
- - The crash trace includes a tuple not seen in any of the previous crashes,
- - The crash trace is missing a tuple that was always present in earlier
- faults.
-
-The approach is vulnerable to some path count inflation early on, but exhibits
-a very strong self-limiting effect, similar to the execution path analysis
-logic that is the cornerstone of `afl-fuzz`.
-
-## 9. Investigating crashes
-
-The exploitability of many types of crashes can be ambiguous; afl-fuzz tries
-to address this by providing a crash exploration mode where a known-faulting
-test case is fuzzed in a manner very similar to the normal operation of the
-fuzzer, but with a constraint that causes any non-crashing mutations to be
-thrown away.
-
-A detailed discussion of the value of this approach can be found here:
-
- https://lcamtuf.blogspot.com/2014/11/afl-fuzz-crash-exploration-mode.html
-
-The method uses instrumentation feedback to explore the state of the crashing
-program to get past the ambiguous faulting condition and then isolate the
-newly-found inputs for human review.
-
-On the subject of crashes, it is worth noting that in contrast to normal
-queue entries, crashing inputs are *not* trimmed; they are kept exactly as
-discovered to make it easier to compare them to the parent, non-crashing entry
-in the queue. That said, `afl-tmin` can be used to shrink them at will.
-
-## 10 The fork server
-
-To improve performance, `afl-fuzz` uses a "fork server", where the fuzzed process
-goes through `execve()`, linking, and libc initialization only once, and is then
-cloned from a stopped process image by leveraging copy-on-write. The
-implementation is described in more detail here:
-
- https://lcamtuf.blogspot.com/2014/10/fuzzing-binaries-without-execve.html
-
-The fork server is an integral aspect of the injected instrumentation and
-simply stops at the first instrumented function to await commands from
-`afl-fuzz`.
-
-With fast targets, the fork server can offer considerable performance gains,
-usually between 1.5x and 2x. It is also possible to:
-
- - Use the fork server in manual ("deferred") mode, skipping over larger,
- user-selected chunks of initialization code. It requires very modest
- code changes to the targeted program, and With some targets, can
- produce 10x+ performance gains.
- - Enable "persistent" mode, where a single process is used to try out
- multiple inputs, greatly limiting the overhead of repetitive `fork()`
- calls. This generally requires some code changes to the targeted program,
- but can improve the performance of fast targets by a factor of 5 or more - approximating the benefits of in-process fuzzing jobs while still
- maintaining very robust isolation between the fuzzer process and the
- targeted binary.
-
-## 11. Parallelization
-
-The parallelization mechanism relies on periodically examining the queues
-produced by independently-running instances on other CPU cores or on remote
-machines, and then selectively pulling in the test cases that, when tried
-out locally, produce behaviors not yet seen by the fuzzer at hand.
-
-This allows for extreme flexibility in fuzzer setup, including running synced
-instances against different parsers of a common data format, often with
-synergistic effects.
-
-For more information about this design, see parallel_fuzzing.md.
-
-## 12. Binary-only instrumentation
-
-Instrumentation of black-box, binary-only targets is accomplished with the
-help of a separately-built version of QEMU in "user emulation" mode. This also
-allows the execution of cross-architecture code - say, ARM binaries on x86.
-
-QEMU uses basic blocks as translation units; the instrumentation is implemented
-on top of this and uses a model roughly analogous to the compile-time hooks:
-
-```c
- if (block_address > elf_text_start && block_address < elf_text_end) {
-
- cur_location = (block_address >> 4) ^ (block_address << 8);
- shared_mem[cur_location ^ prev_location]++;
- prev_location = cur_location >> 1;
-
- }
-```
-
-The shift-and-XOR-based scrambling in the second line is used to mask the
-effects of instruction alignment.
-
-The start-up of binary translators such as QEMU, DynamoRIO, and PIN is fairly
-slow; to counter this, the QEMU mode leverages a fork server similar to that
-used for compiler-instrumented code, effectively spawning copies of an
-already-initialized process paused at `_start`.
-
-First-time translation of a new basic block also incurs substantial latency. To
-eliminate this problem, the AFL fork server is extended by providing a channel
-between the running emulator and the parent process. The channel is used
-to notify the parent about the addresses of any newly-encountered blocks and to
-add them to the translation cache that will be replicated for future child
-processes.
-
-As a result of these two optimizations, the overhead of the QEMU mode is
-roughly 2-5x, compared to 100x+ for PIN.
-
-## 13. The `afl-analyze` tool
-
-The file format analyzer is a simple extension of the minimization algorithm
-discussed earlier on; instead of attempting to remove no-op blocks, the tool
-performs a series of walking byte flips and then annotates runs of bytes
-in the input file.
-
-It uses the following classification scheme:
-
- - "No-op blocks" - segments where bit flips cause no apparent changes to
- control flow. Common examples may be comment sections, pixel data within
- a bitmap file, etc.
- - "Superficial content" - segments where some, but not all, bitflips
- produce some control flow changes. Examples may include strings in rich
- documents (e.g., XML, RTF).
- - "Critical stream" - a sequence of bytes where all bit flips alter control
- flow in different but correlated ways. This may be compressed data,
- non-atomically compared keywords or magic values, etc.
- - "Suspected length field" - small, atomic integer that, when touched in
- any way, causes a consistent change to program control flow, suggestive
- of a failed length check.
- - "Suspected cksum or magic int" - an integer that behaves similarly to a
- length field, but has a numerical value that makes the length explanation
- unlikely. This is suggestive of a checksum or other "magic" integer.
- - "Suspected checksummed block" - a long block of data where any change
- always triggers the same new execution path. Likely caused by failing
- a checksum or a similar integrity check before any subsequent parsing
- takes place.
- - "Magic value section" - a generic token where changes cause the type
- of binary behavior outlined earlier, but that doesn't meet any of the
- other criteria. This may be an atomically compared keyword, for example.
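A rough sketch of the walking byte flips that drive this classification (purely illustrative; `run_target()` is a hypothetical helper that executes the target once and returns a hash of the observed execution trace):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

extern uint64_t run_target(const uint8_t *buf, size_t len);

/* Returns the baseline trace hash; effect[i] receives the trace hash observed
   after flipping every bit of byte i. */
uint64_t walking_byte_flips(const uint8_t *in, size_t len, uint64_t *effect) {

  uint64_t baseline = run_target(in, len);
  uint8_t *work = malloc(len);
  if (!work) return baseline;
  memcpy(work, in, len);

  for (size_t i = 0; i < len; i++) {

    work[i] ^= 0xFF;                    /* flip all bits of this byte    */
    effect[i] = run_target(work, len);  /* record the resulting behavior */
    work[i] ^= 0xFF;                    /* restore the original content  */

  }

  free(work);
  return baseline;

}
```

Bytes whose `effect[i]` equals the baseline behave like "no-op blocks"; runs of bytes whose hashes change in correlated ways hint at critical streams, length fields, or checksummed regions as described above.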
diff --git a/docs/third_party_tools.md b/docs/third_party_tools.md
index 446d373c..92229e84 100644
--- a/docs/third_party_tools.md
+++ b/docs/third_party_tools.md
@@ -1,33 +1,57 @@
# Tools that help fuzzing with AFL++
Speeding up fuzzing:
- * [libfiowrapper](https://github.com/marekzmyslowski/libfiowrapper) - if the function you want to fuzz requires loading a file, this allows using the shared memory test case feature :-) - recommended.
+* [libfiowrapper](https://github.com/marekzmyslowski/libfiowrapper) - if the
+ function you want to fuzz requires loading a file, this allows using the
+ shared memory test case feature :-) - recommended.
Minimization of test cases:
- * [afl-pytmin](https://github.com/ilsani/afl-pytmin) - a wrapper for afl-tmin that tries to speed up the process of minimization of a single test case by using many CPU cores.
- * [afl-ddmin-mod](https://github.com/MarkusTeufelberger/afl-ddmin-mod) - a variation of afl-tmin based on the ddmin algorithm.
- * [halfempty](https://github.com/googleprojectzero/halfempty) - is a fast utility for minimizing test cases by Tavis Ormandy based on parallelization.
+* [afl-pytmin](https://github.com/ilsani/afl-pytmin) - a wrapper for afl-tmin
+ that tries to speed up the process of minimization of a single test case by
+ using many CPU cores.
+* [afl-ddmin-mod](https://github.com/MarkusTeufelberger/afl-ddmin-mod) - a
+ variation of afl-tmin based on the ddmin algorithm.
+* [halfempty](https://github.com/googleprojectzero/halfempty) - a fast
+ utility for minimizing test cases by Tavis Ormandy based on parallelization.
Distributed execution:
- * [disfuzz-afl](https://github.com/MartijnB/disfuzz-afl) - distributed fuzzing for AFL.
- * [AFLDFF](https://github.com/quantumvm/AFLDFF) - AFL distributed fuzzing framework.
- * [afl-launch](https://github.com/bnagy/afl-launch) - a tool for the execution of many AFL instances.
- * [afl-mothership](https://github.com/afl-mothership/afl-mothership) - management and execution of many synchronized AFL fuzzers on AWS cloud.
- * [afl-in-the-cloud](https://github.com/abhisek/afl-in-the-cloud) - another script for running AFL in AWS.
+* [disfuzz-afl](https://github.com/MartijnB/disfuzz-afl) - distributed fuzzing
+ for AFL.
+* [AFLDFF](https://github.com/quantumvm/AFLDFF) - AFL distributed fuzzing
+ framework.
+* [afl-launch](https://github.com/bnagy/afl-launch) - a tool for the execution
+ of many AFL instances.
+* [afl-mothership](https://github.com/afl-mothership/afl-mothership) -
+ management and execution of many synchronized AFL fuzzers on AWS cloud.
+* [afl-in-the-cloud](https://github.com/abhisek/afl-in-the-cloud) - another
+ script for running AFL in AWS.
Deployment, management, monitoring, reporting:
- * [afl-utils](https://gitlab.com/rc0r/afl-utils) - a set of utilities for automatic processing/analysis of crashes and reducing the number of test cases.
- * [afl-other-arch](https://github.com/shellphish/afl-other-arch) - is a set of patches and scripts for easily adding support for various non-x86 architectures for AFL.
- * [afl-trivia](https://github.com/bnagy/afl-trivia) - a few small scripts to simplify the management of AFL.
- * [afl-monitor](https://github.com/reflare/afl-monitor) - a script for monitoring AFL.
- * [afl-manager](https://github.com/zx1340/afl-manager) - a web server on Python for managing multi-afl.
- * [afl-remote](https://github.com/block8437/afl-remote) - a web server for the remote management of AFL instances.
- * [afl-extras](https://github.com/fekir/afl-extras) - shell scripts to parallelize afl-tmin, startup, and data collection.
+* [afl-utils](https://gitlab.com/rc0r/afl-utils) - a set of utilities for
+ automatic processing/analysis of crashes and reducing the number of test
+ cases.
+* [afl-other-arch](https://github.com/shellphish/afl-other-arch) - a set of
+ patches and scripts for easily adding support for various non-x86
+ architectures for AFL.
+* [afl-trivia](https://github.com/bnagy/afl-trivia) - a few small scripts to
+ simplify the management of AFL.
+* [afl-monitor](https://github.com/reflare/afl-monitor) - a script for
+ monitoring AFL.
+* [afl-manager](https://github.com/zx1340/afl-manager) - a web server on Python
+ for managing multi-afl.
+* [afl-remote](https://github.com/block8437/afl-remote) - a web server for the
+ remote management of AFL instances.
+* [afl-extras](https://github.com/fekir/afl-extras) - shell scripts to
+ parallelize afl-tmin, startup, and data collection.
Crash processing:
- * [afl-crash-analyzer](https://github.com/floyd-fuh/afl-crash-analyzer) - another crash analyzer for AFL.
- * [fuzzer-utils](https://github.com/ThePatrickStar/fuzzer-utils) - a set of scripts for the analysis of results.
- * [atriage](https://github.com/Ayrx/atriage) - a simple triage tool.
- * [afl-kit](https://github.com/kcwu/afl-kit) - afl-cmin on Python.
- * [AFLize](https://github.com/d33tah/aflize) - a tool that automatically generates builds of debian packages suitable for AFL.
- * [afl-fid](https://github.com/FoRTE-Research/afl-fid) - a set of tools for working with input data.
\ No newline at end of file
+* [afl-crash-analyzer](https://github.com/floyd-fuh/afl-crash-analyzer) -
+ another crash analyzer for AFL.
+* [fuzzer-utils](https://github.com/ThePatrickStar/fuzzer-utils) - a set of
+ scripts for the analysis of results.
+* [atriage](https://github.com/Ayrx/atriage) - a simple triage tool.
+* [afl-kit](https://github.com/kcwu/afl-kit) - afl-cmin on Python.
+* [AFLize](https://github.com/d33tah/aflize) - a tool that automatically
+ generates builds of Debian packages suitable for AFL.
+* [afl-fid](https://github.com/FoRTE-Research/afl-fid) - a set of tools for
+ working with input data.
\ No newline at end of file
diff --git a/docs/tutorials.md b/docs/tutorials.md
index cc7ed130..ed8a7eec 100644
--- a/docs/tutorials.md
+++ b/docs/tutorials.md
@@ -1,6 +1,6 @@
# Tutorials
-Here are some good writeups to show how to effectively use AFL++:
+Here are some good write-ups to show how to effectively use AFL++:
* [https://aflplus.plus/docs/tutorials/libxml2_tutorial/](https://aflplus.plus/docs/tutorials/libxml2_tutorial/)
* [https://bananamafia.dev/post/gb-fuzz/](https://bananamafia.dev/post/gb-fuzz/)
@@ -18,9 +18,13 @@ training, then we can highly recommend the following:
If you are interested in fuzzing structured data (where you define what the
structure is), these links have you covered:
-* Superion for AFL++: [https://github.com/adrian-rt/superion-mutator](https://github.com/adrian-rt/superion-mutator)
-* libprotobuf for AFL++: [https://github.com/P1umer/AFLplusplus-protobuf-mutator](https://github.com/P1umer/AFLplusplus-protobuf-mutator)
-* libprotobuf raw: [https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator](https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator)
-* libprotobuf for old AFL++ API: [https://github.com/thebabush/afl-libprotobuf-mutator](https://github.com/thebabush/afl-libprotobuf-mutator)
+* Superion for AFL++:
+ [https://github.com/adrian-rt/superion-mutator](https://github.com/adrian-rt/superion-mutator)
+* libprotobuf for AFL++:
+ [https://github.com/P1umer/AFLplusplus-protobuf-mutator](https://github.com/P1umer/AFLplusplus-protobuf-mutator)
+* libprotobuf raw:
+ [https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator](https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator)
+* libprotobuf for old AFL++ API:
+ [https://github.com/thebabush/afl-libprotobuf-mutator](https://github.com/thebabush/afl-libprotobuf-mutator)
If you find other good ones, please send them to us :-)
\ No newline at end of file
--
cgit 1.4.1
From 9cb32ca1425879be9a7326b5810ede12713c2649 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Thu, 2 Dec 2021 17:03:06 +0100
Subject: Change the word "chapter" to "section"
---
docs/afl-fuzz_approach.md | 2 +-
instrumentation/README.lto.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index e0d5a1c9..4e8e5eaa 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -37,7 +37,7 @@ superior to blind fuzzing or coverage-only tools.
## Understanding the status screen
-This chapter provides an overview of the status screen - plus tips for
+This section provides an overview of the status screen - plus tips for
troubleshooting any warnings and red text shown in the UI.
For the general instruction manual, see [README.md](../README.md).
diff --git a/instrumentation/README.lto.md b/instrumentation/README.lto.md
index a74425dc..24e57b23 100644
--- a/instrumentation/README.lto.md
+++ b/instrumentation/README.lto.md
@@ -202,7 +202,7 @@ bytes or which functions were touched by an input.
## Solving difficult targets
Some targets are difficult because the configure script does unusual stuff that
-is unexpected for afl. See the next chapter `Potential issues` for how to solve
+is unexpected for afl. See the next section `Potential issues` for how to solve
these.
### Example: ffmpeg
--
cgit 1.4.1
From b7395fa46710673602b8fb7257e502e5f129a56c Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Thu, 2 Dec 2021 19:52:10 +0100
Subject: Change "AFL" to "AFL++"
---
README.md | 2 +-
coresight_mode/README.md | 7 +++++--
dictionaries/README.md | 2 +-
docs/INSTALL.md | 6 +++---
docs/afl-fuzz_approach.md | 4 ++--
docs/custom_mutators.md | 3 ++-
docs/env_variables.md | 4 ++--
docs/fuzzing_in_depth.md | 8 ++++----
testcases/README.md | 2 +-
unicorn_mode/README.md | 10 +++++-----
utils/argv_fuzzing/README.md | 4 ++--
utils/libdislocator/README.md | 8 ++++----
utils/libtokencap/README.md | 2 +-
13 files changed, 33 insertions(+), 29 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/README.md b/README.md
index dbf49b20..93c0dd10 100644
--- a/README.md
+++ b/README.md
@@ -99,7 +99,7 @@ Step-by-step quick start:
To add a dictionary, add `-x /path/to/dictionary.txt` to afl-fuzz.
If the program takes input from a file, you can put `@@` in the program's
- command line; AFL will put an auto-generated file name in there for you.
+ command line; AFL++ will put an auto-generated file name in there for you.
4. Investigate anything shown in red in the fuzzer UI by promptly consulting
[docs/afl-fuzz_approach.md#understanding-the-status-screen](docs/afl-fuzz_approach.md#understanding-the-status-screen).
diff --git a/coresight_mode/README.md b/coresight_mode/README.md
index cd1bccab..1a39d347 100644
--- a/coresight_mode/README.md
+++ b/coresight_mode/README.md
@@ -3,7 +3,7 @@
CoreSight mode enables binary-only fuzzing on ARM64 Linux using CoreSight (ARM's hardware tracing technology).
NOTE: CoreSight mode is in the early development stage. Not applicable for production use.
-Currently the following hardware boards are supported:
+Currently the following hardware boards are supported:
* NVIDIA Jetson TX2 (NVIDIA Parker)
* NVIDIA Jetson Nano (NVIDIA Tegra X1)
* GIGABYTE R181-T90 (Marvell ThunderX2 CN99XX)
@@ -12,7 +12,10 @@ Currently the following hardware boards are supported:
Please read the [RICSec/coresight-trace README](https://github.com/RICSecLab/coresight-trace/blob/master/README.md) and check the prerequisites (capstone) before getting started.
-CoreSight mode supports the AFL fork server mode to reduce `exec` system call overhead. To support it for binary-only fuzzing, it needs to modify the target ELF binary to re-link to the patched glibc. We employ this design from [PTrix](https://github.com/junxzm1990/afl-pt).
+CoreSight mode supports the AFL++ fork server mode to reduce `exec` system call
+overhead. To support it for binary-only fuzzing, the target ELF binary needs to
+be modified to re-link to the patched glibc. We employ this design from
+[PTrix](https://github.com/junxzm1990/afl-pt).
Check out all the git submodules in the `cs_mode` directory:
diff --git a/dictionaries/README.md b/dictionaries/README.md
index 7c587abb..2c0056f6 100644
--- a/dictionaries/README.md
+++ b/dictionaries/README.md
@@ -1,4 +1,4 @@
-# AFL dictionaries
+# AFL++ dictionaries
(See [../README.md](../README.md) for the general instruction manual.)
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index cfa20dea..ab6e735b 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -20,7 +20,7 @@ The easiest choice is to build and install everything:
sudo apt-get update
sudo apt-get install -y build-essential python3-dev automake git flex bison libglib2.0-dev libpixman-1-dev python3-setuptools
# try to install llvm 11 and install the distro default if that fails
-sudo apt-get install -y lld-11 llvm-11 llvm-11-dev clang-11 || sudo apt-get install -y lld llvm llvm-dev clang
+sudo apt-get install -y lld-11 llvm-11 llvm-11-dev clang-11 || sudo apt-get install -y lld llvm llvm-dev clang
sudo apt-get install -y gcc-$(gcc --version|head -n1|sed 's/.* //'|sed 's/\..*//')-plugin-dev libstdc++-$(gcc --version|head -n1|sed 's/.* //'|sed 's/\..*//')-dev
sudo apt-get install -y ninja-build # for qemu_mode
git clone https://github.com/AFLplusplus/AFLplusplus
@@ -114,8 +114,8 @@ This means two things:
- Fuzzing will be probably slower than on Linux. In fact, some folks report
considerable performance gains by running the jobs inside a Linux VM on
MacOS X.
- - Some non-portable, platform-specific code may be incompatible with the
- AFL forkserver. If you run into any problems, set `AFL_NO_FORKSRV=1` in the
+ - Some non-portable, platform-specific code may be incompatible with the AFL++
+ forkserver. If you run into any problems, set `AFL_NO_FORKSRV=1` in the
environment before starting afl-fuzz.
User emulation mode of QEMU does not appear to be supported on MacOS X, so black-box instrumentation mode (`-Q`) will not work.
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 4e8e5eaa..3e4faaec 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -348,7 +348,7 @@ That last bit is actually fairly interesting: it measures the consistency of
observed traces. If a program always behaves the same for the same input data,
it will earn a score of 100%. When the value is lower but still shown in purple,
the fuzzing process is unlikely to be negatively affected. If it goes into red,
-you may be in trouble, since AFL will have difficulty discerning between
+you may be in trouble, since AFL++ will have difficulty discerning between
meaningful and "phantom" effects of tweaking the input file.
Now, most targets will just get a 100% score, but when you see lower figures,
@@ -506,7 +506,7 @@ directory. This includes:
- `edges_found` - how many edges have been found
- `var_byte_count` - how many edges are non-deterministic
- `afl_banner` - banner text (e.g. the target name)
-- `afl_version` - the version of AFL used
+- `afl_version` - the version of AFL++ used
- `target_mode` - default, persistent, qemu, unicorn, non-instrumented
- `command_line` - full command line used for the fuzzing session
diff --git a/docs/custom_mutators.md b/docs/custom_mutators.md
index 2caba560..3a2ec3b2 100644
--- a/docs/custom_mutators.md
+++ b/docs/custom_mutators.md
@@ -21,7 +21,8 @@ fuzzing by using libraries that perform mutations according to a given grammar.
The custom mutator is passed to `afl-fuzz` via the `AFL_CUSTOM_MUTATOR_LIBRARY`
or `AFL_PYTHON_MODULE` environment variable, and must export a fuzz function.
-Now AFL also supports multiple custom mutators which can be specified in the same `AFL_CUSTOM_MUTATOR_LIBRARY` environment variable like this.
+Now AFL++ also supports multiple custom mutators which can be specified in the
+same `AFL_CUSTOM_MUTATOR_LIBRARY` environment variable like this.
```bash
export AFL_CUSTOM_MUTATOR_LIBRARY="full/path/to/mutator_first.so;full/path/to/mutator_second.so"
```
diff --git a/docs/env_variables.md b/docs/env_variables.md
index 6c90e84c..715a60cb 100644
--- a/docs/env_variables.md
+++ b/docs/env_variables.md
@@ -307,7 +307,7 @@ checks or alter some of the more exotic semantics of the tool:
(`-i in`). This is an important feature to set when resuming a fuzzing
session.
- - Setting `AFL_CRASH_EXITCODE` sets the exit code AFL treats as crash. For
+ - Setting `AFL_CRASH_EXITCODE` sets the exit code AFL++ treats as crash. For
example, if `AFL_CRASH_EXITCODE='-1'` is set, each input resulting in a `-1`
return code (i.e. `exit(-1)` got called), will be treated as if a crash had
occurred. This may be beneficial if you look for higher-level faulty
@@ -493,7 +493,7 @@ checks or alter some of the more exotic semantics of the tool:
This is especially useful when running multiple instances (`-M/-S` for
example). Applied tags are `banner` and `afl_version`. `banner` corresponds
to the name of the fuzzer provided through `-M/-S`. `afl_version`
- corresponds to the currently running AFL version (e.g. `++3.0c`). Default
+ corresponds to the currently running AFL++ version (e.g. `++3.0c`). Default
(empty/non present) will add no tags to the metrics. For more information,
see [rpc_statsd.md](rpc_statsd.md).
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index 8188a18e..4d2884f6 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -106,9 +106,9 @@ You can select the mode for the afl-cc compiler by:
MODE can be one of: LTO (afl-clang-lto*), LLVM (afl-clang-fast*), GCC_PLUGIN
(afl-g*-fast) or GCC (afl-gcc/afl-g++) or CLANG(afl-clang/afl-clang++).
-Because no AFL specific command-line options are accepted (beside the --afl-MODE
-command), the compile-time tools make fairly broad use of environment variables,
-which can be listed with `afl-cc -hh` or by reading
+Because no AFL++ specific command-line options are accepted (besides the
+--afl-MODE command), the compile-time tools make fairly broad use of environment
+variables, which can be listed with `afl-cc -hh` or by reading
[env_variables.md](env_variables.md).
### b) Selecting instrumentation options
@@ -213,7 +213,7 @@ is more effective).
If the target has features that make fuzzing more difficult, e.g. checksums,
HMAC, etc. then modify the source code so that checks for these values are
removed. This can even be done safely for source code used in operational
-products by eliminating these checks within these AFL specific blocks:
+products by eliminating these checks within these AFL++ specific blocks:
```
#ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
diff --git a/testcases/README.md b/testcases/README.md
index ef38d3c4..a2f74d68 100644
--- a/testcases/README.md
+++ b/testcases/README.md
@@ -1,4 +1,4 @@
-# AFL starting test cases
+# AFL++ starting test cases
(See [../README.md](../README.md) for the general instruction manual.)
diff --git a/unicorn_mode/README.md b/unicorn_mode/README.md
index ed85e687..4c95e8f3 100644
--- a/unicorn_mode/README.md
+++ b/unicorn_mode/README.md
@@ -8,7 +8,8 @@ The CompareCoverage and NeverZero counters features are by Andrea Fioraldi
Date: Thu, 2 Dec 2021 20:37:21 +0100
Subject: Remove the word "we"
---
TODO.md | 2 +-
docs/afl-fuzz_approach.md | 27 +++++++++++++--------------
docs/custom_mutators.md | 4 ++--
docs/fuzzing_binary-only_targets.md | 8 ++++----
docs/fuzzing_in_depth.md | 2 +-
frida_mode/DEBUGGING.md | 2 +-
instrumentation/README.instrument_list.md | 2 +-
unicorn_mode/samples/c/COMPILE.md | 2 +-
unicorn_mode/samples/persistent/COMPILE.md | 4 ++--
unicorn_mode/samples/speedtest/README.md | 2 +-
utils/aflpp_driver/README.md | 2 +-
utils/autodict_ql/readme.md | 2 +-
utils/qbdi_mode/README.md | 5 +++--
13 files changed, 32 insertions(+), 32 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/TODO.md b/TODO.md
index 77fb080f..b8ac22ef 100644
--- a/TODO.md
+++ b/TODO.md
@@ -20,7 +20,7 @@ qemu_mode/frida_mode:
- non colliding instrumentation
- rename qemu specific envs to AFL_QEMU (AFL_ENTRYPOINT, AFL_CODE_START/END,
AFL_COMPCOV_LEVEL?)
- - add AFL_QEMU_EXITPOINT (maybe multiple?), maybe pointless as we have
+ - add AFL_QEMU_EXITPOINT (maybe multiple?), maybe pointless as there is
persistent mode
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 3e4faaec..242104f7 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -103,8 +103,8 @@ will be allowed to run for months.
There's one important thing to watch out for: if the tool is not finding new
paths within several minutes of starting, you're probably not invoking the
-target binary correctly and it never gets to parse the input files we're
-throwing at it; other possible explanations are that the default memory limit
+target binary correctly and it never gets to parse the input files that are
+thrown at it; other possible explanations are that the default memory limit
(`-m`) is too restrictive and the program exits after failing to allocate a
buffer very early on; or that the input files are patently invalid and always
fail a basic header check.
@@ -172,10 +172,9 @@ processed path is not "favored" (a property discussed later on).
The section provides some trivia about the coverage observed by the
instrumentation embedded in the target binary.
-The first line in the box tells you how many branch tuples we have already hit,
-in proportion to how much the bitmap can hold. The number on the left describes
-the current input; the one on the right is the value for the entire input
-corpus.
+The first line in the box tells you how many branch tuples already were hit, in
+proportion to how much the bitmap can hold. The number on the left describes the
+current input; the one on the right is the value for the entire input corpus.
Be wary of extremes:
@@ -194,7 +193,7 @@ Be wary of extremes:
The other line deals with the variability in tuple hit counts seen in the
binary. In essence, if every taken branch is always taken a fixed number of
-times for all the inputs we have tried, this will read `1.00`. As we manage to
+times for all the inputs that were tried, this will read `1.00`. As we manage to
trigger other hit counts for every branch, the needle will start to move toward
`8.00` (every bit in the 8-bit map hit), but will probably never reach that
extreme.
@@ -295,9 +294,9 @@ exceed it by a margin sufficient to be classified as hangs.
+-----------------------------------------------------+
```
-This is just another nerd-targeted section keeping track of how many paths we
-have netted, in proportion to the number of execs attempted, for each of the
-fuzzing strategies discussed earlier on. This serves to convincingly validate
+This is just another nerd-targeted section keeping track of how many paths were
+netted, in proportion to the number of execs attempted, for each of the fuzzing
+strategies discussed earlier on. This serves to convincingly validate
assumptions about the usefulness of the various approaches taken by afl-fuzz.
The trim strategy stats in this section are a bit different than the rest. The
@@ -339,10 +338,10 @@ fuzzing yet. The same stat is also given for "favored" entries that the fuzzer
really wants to get to in this queue cycle (the non-favored entries may have to
wait a couple of cycles to get their chance).
-Next, we have the number of new paths found during this fuzzing section and
-imported from other fuzzer instances when doing parallelized fuzzing; and the
-extent to which identical inputs appear to sometimes produce variable behavior
-in the tested binary.
+Next is the number of new paths found during this fuzzing section and imported
+from other fuzzer instances when doing parallelized fuzzing; and the extent to
+which identical inputs appear to sometimes produce variable behavior in the
+tested binary.
That last bit is actually fairly interesting: it measures the consistency of
observed traces. If a program always behaves the same for the same input data,
diff --git a/docs/custom_mutators.md b/docs/custom_mutators.md
index 7d362950..4018d633 100644
--- a/docs/custom_mutators.md
+++ b/docs/custom_mutators.md
@@ -204,8 +204,8 @@ trimmed input. Here's a quick API description:
- `trim` (optional)
This method is called for each trimming operation. It doesn't have any
- arguments because we already have the initial buffer from `init_trim` and we
- can memorize the current state in the data variables. This can also save
+ arguments because there is already the initial buffer from `init_trim` and
+ we can memorize the current state in the data variables. This can also save
reparsing steps for each iteration. It should return the trimmed input
buffer.
diff --git a/docs/fuzzing_binary-only_targets.md b/docs/fuzzing_binary-only_targets.md
index 2d57d0dc..c3204212 100644
--- a/docs/fuzzing_binary-only_targets.md
+++ b/docs/fuzzing_binary-only_targets.md
@@ -201,10 +201,10 @@ target at load time and then let it run - or save the binary with the changes.
This is great for some things, e.g. fuzzing, and not so effective for others,
e.g. malware analysis.
-So, what we can do with Dyninst is taking every basic block and put AFL++'s
-instrumentation code in there - and then save the binary. Afterwards, we can
-just fuzz the newly saved target binary with afl-fuzz. Sounds great? It is. The
-issue though - it is a non-trivial problem to insert instructions, which change
+So, what you can do with Dyninst is taking every basic block and putting AFL++'s
+instrumentation code in there - and then save the binary. Afterwards, just fuzz
+the newly saved target binary with afl-fuzz. Sounds great? It is. The issue
+though - it is a non-trivial problem to insert instructions, which change
addresses in the process space, so that everything is still working afterwards.
Hence, more often than not binaries crash when they are run.
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index 4d2884f6..92b3cf86 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -391,7 +391,7 @@ to be used in fuzzing! :-)
## 3. Fuzzing the target
-In this final step we fuzz the target. There are not that many important options
+In this final step, fuzz the target. There are not that many important options
to run the target - unless you want to use many CPU cores/threads for the
fuzzing, which will make the fuzzing much more useful.
diff --git a/frida_mode/DEBUGGING.md b/frida_mode/DEBUGGING.md
index b703ae43..207a48bf 100644
--- a/frida_mode/DEBUGGING.md
+++ b/frida_mode/DEBUGGING.md
@@ -95,7 +95,7 @@ gdb \
```
Note:
-- We have to manually set the `__AFL_PERSISTENT` environment variable which is
+- You have to manually set the `__AFL_PERSISTENT` environment variable which is
usually passed by `afl-fuzz`.
- Setting breakpoints etc. is likely to interfere with FRIDA and cause spurious
errors.
diff --git a/instrumentation/README.instrument_list.md b/instrumentation/README.instrument_list.md
index b412b600..3ed64807 100644
--- a/instrumentation/README.instrument_list.md
+++ b/instrumentation/README.instrument_list.md
@@ -128,4 +128,4 @@ Note that whitespace is ignored and comments (`# foo`) are supported.
### 3b) UNIX-style pattern matching
You can add UNIX-style pattern matching in the "instrument file list" entries.
-See `man fnmatch` for the syntax. We do not set any of the `fnmatch` flags.
\ No newline at end of file
+See `man fnmatch` for the syntax. None of the `fnmatch` flags are set.
\ No newline at end of file
diff --git a/unicorn_mode/samples/c/COMPILE.md b/unicorn_mode/samples/c/COMPILE.md
index 7da140f7..4e3cf568 100644
--- a/unicorn_mode/samples/c/COMPILE.md
+++ b/unicorn_mode/samples/c/COMPILE.md
@@ -19,4 +19,4 @@ was built in case you want to rebuild it or recompile it for any reason.
The pre-built binary (persistent_target_x86_64) was built using -g -O0 in gcc.
-We then load the binary and execute the main function directly.
+Then load the binary and execute the main function directly.
diff --git a/unicorn_mode/samples/persistent/COMPILE.md b/unicorn_mode/samples/persistent/COMPILE.md
index 9f2ae718..5e607aef 100644
--- a/unicorn_mode/samples/persistent/COMPILE.md
+++ b/unicorn_mode/samples/persistent/COMPILE.md
@@ -3,7 +3,7 @@
This shows a simple persistent harness for unicornafl in C.
In contrast to the normal c harness, this harness manually resets the unicorn
state on each new input.
-Thanks to this, we can rerun the test case in unicorn multiple times, without
+Thanks to this, you can rerun the test case in unicorn multiple times, without
the need to fork again.
## Compiling sample.c
@@ -25,4 +25,4 @@ was built in case you want to rebuild it or recompile it for any reason.
The pre-built binary (persistent_target_x86_64.bin) was built using -g -O0 in
gcc.
-We then load the binary and we execute the main function directly.
\ No newline at end of file
+Then load the binary and execute the main function directly.
\ No newline at end of file
diff --git a/unicorn_mode/samples/speedtest/README.md b/unicorn_mode/samples/speedtest/README.md
index 3c1184a2..496d75cd 100644
--- a/unicorn_mode/samples/speedtest/README.md
+++ b/unicorn_mode/samples/speedtest/README.md
@@ -44,7 +44,7 @@ was built in case you want to rebuild it or recompile it for any reason.
The pre-built binary (simple_target_x86_64.bin) was built using -g -O0 in gcc.
-We then load the binary and execute the main function directly.
+Then load the binary and execute the main function directly.
## Addresses for the harness:
To find the address (in hex) of main, run:
diff --git a/utils/aflpp_driver/README.md b/utils/aflpp_driver/README.md
index 4560be2b..d534cd7f 100644
--- a/utils/aflpp_driver/README.md
+++ b/utils/aflpp_driver/README.md
@@ -25,7 +25,7 @@ or `@@` as command line parameters.
Note that you can use the driver too for frida_mode (`-O`).
aflpp_qemu_driver is used for libfuzzer `LLVMFuzzerTestOneInput()` targets that
-are to be fuzzed in qemu_mode. So we compile them with clang/clang++, without
+are to be fuzzed in qemu_mode. So compile them with clang/clang++, without
-fsanitize=fuzzer or afl-clang-fast, and link in libAFLQemuDriver.a:
`clang++ -o fuzz fuzzer_harness.cc libAFLQemuDriver.a [plus required linking]`.
diff --git a/utils/autodict_ql/readme.md b/utils/autodict_ql/readme.md
index a28f1725..491ec85b 100644
--- a/utils/autodict_ql/readme.md
+++ b/utils/autodict_ql/readme.md
@@ -37,7 +37,7 @@ sudo apt install build-essential libtool-bin python3-dev python3 automake git vi
```
The usage of Autodict-QL is pretty easy. But let's describe it as:
-1. First of all, you need to have CodeQL installed on the system. we make this possible with `build-codeql.sh` bash script. This script will install CodeQL completety and will set the required environment variables for your system.
+1. First of all, you need to have CodeQL installed on the system. We make this possible with the `build-codeql.sh` bash script. This script will install CodeQL completely and will set the required environment variables for your system.
Do the following :
```shell
# chmod +x codeql-build.sh
diff --git a/utils/qbdi_mode/README.md b/utils/qbdi_mode/README.md
index cd59fb9c..c8d46fca 100755
--- a/utils/qbdi_mode/README.md
+++ b/utils/qbdi_mode/README.md
@@ -131,7 +131,8 @@ int target_func(char *buf, int size) {
This could be built to `libdemo.so`.
-Then we should load the library in template.cpp and find the `target` function address.
+Then load the library in template.cpp and find the `target` function address:
+
```c
void *handle = dlopen(lib_path, RTLD_LAZY);
..........................................
@@ -140,7 +141,7 @@ Then we should load the library in template.cpp and find the `target` function a
p_target_func = (target_func)dlsym(handle, "target_func");
```
-then we read the data from file and call the function in `fuzz_func`
+Then read the data from file and call the function in `fuzz_func`:
```c
QBDI_NOINLINE int fuzz_func() {
--
cgit 1.4.1
From 65c3db86256b3907404623fe1c52e01c9d12ff97 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Thu, 2 Dec 2021 21:03:59 +0100
Subject: Fix punctuation in connection with "e.g."
---
.github/ISSUE_TEMPLATE/bug_report.md | 5 +++--
CONTRIBUTING.md | 2 +-
docs/FAQ.md | 4 ++--
docs/INSTALL.md | 3 ++-
docs/afl-fuzz_approach.md | 2 +-
docs/best_practices.md | 4 ++--
docs/custom_mutators.md | 7 ++++---
docs/env_variables.md | 22 +++++++++++-----------
docs/fuzzing_binary-only_targets.md | 6 +++---
docs/fuzzing_in_depth.md | 32 ++++++++++++++++----------------
docs/ideas.md | 2 +-
docs/important_changes.md | 2 +-
instrumentation/README.llvm.md | 2 +-
instrumentation/README.lto.md | 2 +-
utils/README.md | 2 +-
utils/afl_network_proxy/README.md | 6 ++++--
16 files changed, 54 insertions(+), 49 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index 31152cd2..0d80f4a3 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -8,8 +8,9 @@ assignees: ''
---
**IMPORTANT**
-1. You have verified that the issue to be present in the current `dev` branch
-2. Please supply the command line options and relevant environment variables, e.g. a copy-paste of the contents of `out/default/fuzzer_setup`
+1. You have verified that the issue is present in the current `dev` branch.
+2. Please supply the command line options and relevant environment variables,
+ e.g., a copy-paste of the contents of `out/default/fuzzer_setup`.
Thank you for making AFL++ better!
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 0268b2e5..0ab4f8ec 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -17,7 +17,7 @@ project, or added a file in a directory we already format, otherwise run:
Regarding the coding style, please follow the AFL style.
No camel case at all and use AFL's macros wherever possible
-(e.g. WARNF, FATAL, MAP_SIZE, ...).
+(e.g., WARNF, FATAL, MAP_SIZE, ...).
Remember that AFL++ has to build and run on many platforms, so
generalize your Makefiles/GNUmakefile (or your patches to our pre-existing
diff --git a/docs/FAQ.md b/docs/FAQ.md
index 49444999..27250415 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -21,7 +21,7 @@ If you find an interesting or important question missing, submit it via
This already resulted in a much more advanced AFL.
Until the end of 2019, the AFL++ team had grown to four active developers who then implemented their own research and features, making it now by far the most flexible and feature-rich guided fuzzer available as open source.
- And in independent fuzzing benchmarks it is one of the best fuzzers available, e.g. [Fuzzbench Report](https://www.fuzzbench.com/reports/2020-08-03/index.html).
+ And in independent fuzzing benchmarks it is one of the best fuzzers available, e.g., [Fuzzbench Report](https://www.fuzzbench.com/reports/2020-08-03/index.html).
@@ -123,7 +123,7 @@ If you find an interesting or important question missing, submit it via
Sending the same input again and again should take the exact same path through the target every time.
If that is the case, the stability is 100%.
- If, however, randomness happens, e.g. a thread reading other external data,
+ If, however, randomness happens, e.g., a thread reading other external data,
reaction to timing, etc., then in some of the re-executions with the same data
the edge coverage result will be different across runs. Those edges that
change are then flagged "unstable".
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index ab6e735b..c1e22e36 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -69,7 +69,8 @@ These build options exist:
* NO_PYTHON - disable python support
* NO_SPLICING - disables splicing mutation in afl-fuzz, not recommended for normal fuzzing
* AFL_NO_X86 - if compiling on non-intel/amd platforms
-* LLVM_CONFIG - if your distro doesn't use the standard name for llvm-config (e.g. Debian)
+* LLVM_CONFIG - if your distro doesn't use the standard name for llvm-config
+ (e.g., Debian)
e.g.: `make ASAN_BUILD=1`
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 242104f7..68f45891 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -504,7 +504,7 @@ directory. This includes:
- `peak_rss_mb` - max rss usage reached during fuzzing in MB
- `edges_found` - how many edges have been found
- `var_byte_count` - how many edges are non-deterministic
-- `afl_banner` - banner text (e.g. the target name)
+- `afl_banner` - banner text (e.g., the target name)
- `afl_version` - the version of AFL++ used
- `target_mode` - default, persistent, qemu, unicorn, non-instrumented
- `command_line` - full command line used for the fuzzing session
diff --git a/docs/best_practices.md b/docs/best_practices.md
index 15f8870c..6a406bde 100644
--- a/docs/best_practices.md
+++ b/docs/best_practices.md
@@ -48,7 +48,7 @@ this with persistent mode [instrumentation/README.persistent_mode.md](../instrum
and you have a performance gain of x10 instead of a performance loss of over
x10 - that is a x100 difference!).
-If modifying the source is not an option (e.g. because you only have a binary
+If modifying the source is not an option (e.g., because you only have a binary
and perform binary fuzzing) you can also use a shared library with AFL_PRELOAD
to emulate the network. This is also much faster than the real network would be.
See [utils/socket_fuzzing/](../utils/socket_fuzzing/).
@@ -123,7 +123,7 @@ Four steps are required to do this and it also requires quite some knowledge of
Only exclude those functions from instrumentation that provide no value for
coverage - that is if it does not process any fuzz data directly or
- indirectly (e.g. hash maps, thread management etc.). If, however, a
+ indirectly (e.g., hash maps, thread management etc.). If, however, a
function directly or indirectly handles fuzz data, then you should not put
the function in a deny instrumentation list and rather live with the
instability it comes with.
diff --git a/docs/custom_mutators.md b/docs/custom_mutators.md
index fc5ecbf9..6bee5413 100644
--- a/docs/custom_mutators.md
+++ b/docs/custom_mutators.md
@@ -124,7 +124,7 @@ def deinit(): # optional for Python
additional test case.
Note that this function is optional - but it makes sense to use it.
You would only skip this if `post_process` is used to fix checksums etc.
- so if you are using it e.g. as a post processing library.
+ so if you are using it, e.g., as a post processing library.
Note that a length > 0 *must* be returned!
- `describe` (optional):
@@ -191,8 +191,9 @@ trimmed input. Here's a quick API description:
This method is called at the start of each trimming operation and receives
the initial buffer. It should return the amount of iteration steps possible
- on this input (e.g. if your input has n elements and you want to remove them
- one by one, return n, if you do a binary search, return log(n), and so on).
+ on this input (e.g., if your input has n elements and you want to remove
+ them one by one, return n; if you do a binary search, return log(n); and so
+ on).
If your trimming algorithm doesn't allow to determine the amount of
(remaining) steps easily (esp. while running), then you can alternatively
diff --git a/docs/env_variables.md b/docs/env_variables.md
index 715a60cb..771bf157 100644
--- a/docs/env_variables.md
+++ b/docs/env_variables.md
@@ -80,9 +80,9 @@ fairly broad use of environment variables instead:
Setting `AFL_INST_RATIO` to 0 is a valid choice. This will instrument only
the transitions between function entry points, but not individual branches.
- Note that this is an outdated variable. A few instances (e.g. afl-gcc) still
- support these, but state-of-the-art (e.g. LLVM LTO and LLVM PCGUARD) do not
- need this.
+ Note that this is an outdated variable. A few instances (e.g., afl-gcc)
+ still support these, but state-of-the-art (e.g., LLVM LTO and LLVM PCGUARD)
+ do not need this.
- `AFL_NO_BUILTIN` causes the compiler to generate code suitable for use with
libtokencap.so (but perhaps running a bit slower than without the flag).
@@ -319,7 +319,7 @@ checks or alter some of the more exotic semantics of the tool:
afl-fuzz), setting `AFL_PYTHON_MODULE` to a Python module can also provide
additional mutations. If `AFL_CUSTOM_MUTATOR_ONLY` is also set, all
mutations will solely be performed with the custom mutator. This feature
- allows to configure custom mutators which can be very helpful, e.g. fuzzing
+ allows configuring custom mutators which can be very helpful, e.g., fuzzing
XML or other highly flexible structured input. For details, see
[custom_mutators.md](custom_mutators.md).
@@ -449,7 +449,7 @@ checks or alter some of the more exotic semantics of the tool:
not crash the target again when the test case is given. To be able to still
re-trigger these crashes, you can use the `AFL_PERSISTENT_RECORD` variable
with a value of how many previous fuzz cases to keep prior to a crash. If set to
- e.g. 10, then the 9 previous inputs are written to out/default/crashes as
+ e.g., 10, then the 9 previous inputs are written to out/default/crashes as
RECORD:000000,cnt:000000 to RECORD:000000,cnt:000008 and
RECORD:000000,cnt:000009 being the crash case. NOTE: This option needs to be
enabled in config.h first!
@@ -493,7 +493,7 @@ checks or alter some of the more exotic semantics of the tool:
This is especially useful when running multiple instances (`-M/-S` for
example). Applied tags are `banner` and `afl_version`. `banner` corresponds
to the name of the fuzzer provided through `-M/-S`. `afl_version`
- corresponds to the currently running AFL++ version (e.g. `++3.0c`). Default
+ corresponds to the currently running AFL++ version (e.g., `++3.0c`). Default
(empty/non present) will add no tags to the metrics. For more information,
see [rpc_statsd.md](rpc_statsd.md).
@@ -535,11 +535,11 @@ The QEMU wrapper used to instrument binary-only code supports several settings:
- `AFL_DEBUG` will print the found entry point for the binary to stderr. Use
this if you are unsure if the entry point might be wrong - but use it
- directly, e.g. `afl-qemu-trace ./program`.
+ directly, e.g., `afl-qemu-trace ./program`.
- `AFL_ENTRYPOINT` allows you to specify a specific entry point into the
binary (this can be very good for the performance!). The entry point is
- specified as hex address, e.g. `0x4004110`. Note that the address must be
+ specified as hex address, e.g., `0x4004110`. Note that the address must be
the address of a basic block.
- Setting `AFL_INST_LIBS` causes the translator to also instrument the code
@@ -595,7 +595,7 @@ QEMU driver to provide a `main` loop for a user provided
`stdin` rather than using in-memory test cases.
* `AFL_FRIDA_EXCLUDE_RANGES` - See `AFL_QEMU_EXCLUDE_RANGES`
* `AFL_FRIDA_INST_COVERAGE_FILE` - File to write DynamoRio format coverage
-information (e.g. to be loaded within IDA lighthouse).
+information (e.g., to be loaded within IDA lighthouse).
* `AFL_FRIDA_INST_DEBUG_FILE` - File to write raw assembly of original blocks
and their instrumented counterparts during block compilation.
* `AFL_FRIDA_INST_JIT` - Enable the instrumentation of Just-In-Time compiled
@@ -617,13 +617,13 @@ child on fork.
* `AFL_FRIDA_INST_RANGES` - See `AFL_QEMU_INST_RANGES`
* `AFL_FRIDA_INST_SEED` - Sets the initial seed for the hash function used to
generate block (and hence edge) IDs. Setting this to a constant value may be
-useful for debugging purposes, e.g. investigating unstable edges.
+useful for debugging purposes, e.g., investigating unstable edges.
* `AFL_FRIDA_INST_TRACE` - Log to stdout the address of executed blocks,
implies `AFL_FRIDA_INST_NO_OPTIMIZE`.
* `AFL_FRIDA_INST_TRACE_UNIQUE` - As per `AFL_FRIDA_INST_TRACE`, but each edge
is logged only once, requires `AFL_FRIDA_INST_NO_OPTIMIZE`.
* `AFL_FRIDA_INST_UNSTABLE_COVERAGE_FILE` - File to write DynamoRio format
-coverage information for unstable edges (e.g. to be loaded within IDA
+coverage information for unstable edges (e.g., to be loaded within IDA
lighthouse).
* `AFL_FRIDA_JS_SCRIPT` - Set the script to be loaded by the FRIDA scripting
engine. See [here](Scripting.md) for details.
diff --git a/docs/fuzzing_binary-only_targets.md b/docs/fuzzing_binary-only_targets.md
index c3204212..a786fd8b 100644
--- a/docs/fuzzing_binary-only_targets.md
+++ b/docs/fuzzing_binary-only_targets.md
@@ -113,7 +113,7 @@ If you want to fuzz a binary-only library, then you can fuzz it with frida-gum
via frida_mode/. You will have to write a harness to call the target function in
the library, use afl-frida.c as a template.
-You can also perform remote fuzzing with frida, e.g. if you want to fuzz on
+You can also perform remote fuzzing with frida, e.g., if you want to fuzz on
iPhone or Android devices, for this you can use
[https://github.com/ttdennis/fpicker/](https://github.com/ttdennis/fpicker/) as
an intermediate that uses AFL++ for fuzzing.
@@ -198,8 +198,8 @@ It is at about 80-85% performance.
Dyninst is a binary instrumentation framework similar to Pintool and DynamoRIO.
However, whereas Pintool and DynamoRIO work at runtime, Dyninst instruments the
target at load time and then let it run - or save the binary with the changes.
-This is great for some things, e.g. fuzzing, and not so effective for others,
-e.g. malware analysis.
+This is great for some things, e.g., fuzzing, and not so effective for others,
+e.g., malware analysis.
So, what you can do with Dyninst is taking every basic block and putting AFL++'s
instrumentation code in there - and then save the binary. Afterwards, just fuzz
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index 96e709ab..4e1e001e 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -167,7 +167,7 @@ allows you to find bugs that would not necessarily result in a crash.
Note that sanitizers have a huge impact on CPU (= less executions per second)
and RAM usage. Also you should only run one afl-fuzz instance per sanitizer
-type. This is enough because a use-after-free bug will be picked up, e.g. by
+type. This is enough because a use-after-free bug will be picked up, e.g., by
ASAN (address sanitizer) anyway when syncing to other fuzzing instances, so not
all fuzzing instances need to be instrumented with ASAN.
@@ -179,7 +179,7 @@ The following sanitizers have built-in support in AFL++:
local variable that is defined and read before it is even set. Enabled with
`export AFL_USE_MSAN=1` before compiling.
* UBSAN = Undefined Behavior SANitizer, finds instances where - by the C and C++
- standards - undefined behavior happens, e.g. adding two signed integers
+ standards - undefined behavior happens, e.g., adding two signed integers
together where the result is larger than a signed integer can hold. Enabled
with `export AFL_USE_UBSAN=1` before compiling.
* CFISAN = Control Flow Integrity SANitizer, finds instances where the control
@@ -202,15 +202,15 @@ be looked up in the sanitizer documentation of llvm/clang. afl-fuzz, however,
requires some specific parameters important for fuzzing to be set. If you want
to set your own, it might bail and report what it is missing.
-Note that some sanitizers cannot be used together, e.g. ASAN and MSAN, and
-others often cannot work together because of target weirdness, e.g. ASAN and
+Note that some sanitizers cannot be used together, e.g., ASAN and MSAN, and
+others often cannot work together because of target weirdness, e.g., ASAN and
CFISAN. You might need to experiment which sanitizers you can combine in a
target (which means more instances can be run without a sanitized target, which
is more effective).
### d) Modifying the target
-If the target has features that make fuzzing more difficult, e.g. checksums,
+If the target has features that make fuzzing more difficult, e.g., checksums,
HMAC, etc. then modify the source code so that checks for these values are
removed. This can even be done safely for source code used in operational
products by eliminating these checks within these AFL++ specific blocks:
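As a sketch of this pattern (assuming the `FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION` define set by AFL++'s compilers; the parsing helpers are hypothetical), a checksum check can be compiled out for fuzzing builds like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers of the target being fuzzed. */
extern uint32_t compute_crc32(const uint8_t *buf, size_t len);
extern uint32_t read_le32(const uint8_t *p);
extern int      parse_body(const uint8_t *buf, size_t len);

int parse_packet(const uint8_t *buf, size_t len) {

  if (len < 4) return -1;

#ifndef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
  /* Enforce the trailing CRC only in production builds, so the fuzzer does
     not have to guess a valid checksum for every mutated input. */
  if (compute_crc32(buf, len - 4) != read_le32(buf + len - 4)) return -1;
#endif

  return parse_body(buf, len - 4);

}
```

Crashing inputs found this way need their checksums fixed up again (e.g., with a `post_process` custom mutator or a small repair script) before they are reported against the unmodified target.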
@@ -250,7 +250,7 @@ Then build the target. (Usually with `make`)
reporting via `export AFL_QUIET=1`.
2. sometimes configure and build systems error on warnings - these should be
- disabled (e.g. `--disable-werror` for some configure scripts).
+ disabled (e.g., `--disable-werror` for some configure scripts).
3. in case the configure/build system complains about AFL++'s compiler and
aborts then set `export AFL_NOOPT=1` which will then just behave like the
@@ -354,7 +354,7 @@ You can find many good examples of starting files in the
Use the AFL++ tool `afl-cmin` to remove inputs from the corpus that do not
produce a new path in the target.
-Put all files from step a) into one directory, e.g. INPUTS.
+Put all files from step a) into one directory, e.g., INPUTS.
If the target program is to be called by fuzzing as `bin/target -d INPUTFILE`
then run afl-cmin like this:
@@ -380,8 +380,8 @@ for i in *; do
done
```
-This step can also be parallelized, e.g. with `parallel`. Note that this step is
-rather optional though.
+This step can also be parallelized, e.g., with `parallel`. Note that this step
+is rather optional though.
### Done!
@@ -503,7 +503,7 @@ can set the cache size (in MB) by setting the environment variable
`AFL_TESTCACHE_SIZE`.
There should be one main fuzzer (`-M main-$HOSTNAME` option) and as many
-secondary fuzzers (e.g. `-S variant1`) as you have cores that you use. Every
+secondary fuzzers (e.g., `-S variant1`) as you have cores that you use. Every
-M/-S entry needs a unique name (that can be whatever), however, the same -o
output directory location has to be used for all instances.
@@ -522,7 +522,7 @@ All other secondaries should be used like this:
* a quarter to a third with the MOpt mutator enabled: `-L 0`
* run with a different power schedule, recommended are:
`fast (default), explore, coe, lin, quad, exploit and rare` which you can set
- with e.g. `-p explore`
+ with, e.g., `-p explore`
* a few instances should use the old queue cycling with `-Z`
Also, it is recommended to set `export AFL_IMPORT_FIRST=1` to load test cases
@@ -547,7 +547,7 @@ A long list can be found at
However, you can also sync AFL++ with honggfuzz, libfuzzer with `-entropic=1`,
etc. Just show the main fuzzer (-M) with the `-F` option where the queue/work
-directory of a different fuzzer is, e.g. `-F /src/target/honggfuzz`. Using
+directory of a different fuzzer is, e.g., `-F /src/target/honggfuzz`. Using
honggfuzz (with `-n 1` or `-n 2`) and libfuzzer in parallel is highly
recommended!
@@ -615,8 +615,8 @@ To restart an afl-fuzz run, just reuse the same command line but replace the `-i
directory` with `-i -` or set `AFL_AUTORESUME=1`.
If you want to add new seeds to a fuzzing campaign you can run a temporary
-fuzzing instance, e.g. when your main fuzzer is using `-o out` and the new seeds
-are in `newseeds/` directory:
+fuzzing instance, e.g., when your main fuzzer is using `-o out` and the new
+seeds are in `newseeds/` directory:
```
AFL_BENCH_JUST_ONE=1 AFL_FAST_CAL=1 afl-fuzz -i newseeds -o out -S newseeds -- ./target
@@ -665,9 +665,9 @@ then you will not touch any of the other library APIs and features.
### h) How long to fuzz a target?
This is a difficult question. Basically if no new path is found for a long time
-(e.g. for a day or a week) then you can expect that your fuzzing won't be
+(e.g., for a day or a week) then you can expect that your fuzzing won't be
fruitful anymore. However, often this just means that you should switch out
-secondaries for others, e.g. custom mutator modules, sync to very different
+secondaries for others, e.g., custom mutator modules, sync to very different
fuzzers, etc.
Keep the queue/ directory (for future fuzzings of the same or similar targets)
diff --git a/docs/ideas.md b/docs/ideas.md
index 325e7031..8193983b 100644
--- a/docs/ideas.md
+++ b/docs/ideas.md
@@ -32,7 +32,7 @@ Mentor: any
## Support other programming languages
Other programming languages also use llvm hence they could (easily?) be supported
-for fuzzing, e.g. mono, swift, go, kotlin native, fortran, ...
+for fuzzing, e.g., mono, swift, go, kotlin native, fortran, ...
GCC also supports: Objective-C, Fortran, Ada, Go, and D
(according to [Gcc homepage](https://gcc.gnu.org/))
diff --git a/docs/important_changes.md b/docs/important_changes.md
index 6cd00791..82de054f 100644
--- a/docs/important_changes.md
+++ b/docs/important_changes.md
@@ -44,7 +44,7 @@ behaviors and defaults:
* `-i` input directory option now descends into subdirectories. It also
does not fatal on crashes and too large files, instead it skips them
and uses them for splicing mutations
- * -m none is now default, set memory limits (in MB) with e.g. -m 250
+ * -m none is now default, set memory limits (in MB) with, e.g., -m 250
* deterministic fuzzing is now disabled by default (unless using -M) and
can be enabled with -D
* a caching of test cases can now be performed and can be modified by
diff --git a/instrumentation/README.llvm.md b/instrumentation/README.llvm.md
index d16049fa..ac8f2f2a 100644
--- a/instrumentation/README.llvm.md
+++ b/instrumentation/README.llvm.md
@@ -40,7 +40,7 @@ The idea and much of the initial implementation came from Laszlo Szekeres.
## 2a) How to use this - short
-Set the `LLVM_CONFIG` variable to the clang version you want to use, e.g.
+Set the `LLVM_CONFIG` variable to the clang version you want to use, e.g.:
```
LLVM_CONFIG=llvm-config-9 make
diff --git a/instrumentation/README.lto.md b/instrumentation/README.lto.md
index b97e5799..a20175b1 100644
--- a/instrumentation/README.lto.md
+++ b/instrumentation/README.lto.md
@@ -71,7 +71,7 @@ use an outdated Linux distribution, read the next section.
Installing the llvm snapshot builds is easy and mostly painless:
In the following line, change `NAME` for your Debian or Ubuntu release name
-(e.g. buster, focal, eon, etc.):
+(e.g., buster, focal, eon, etc.):
```
echo deb http://apt.llvm.org/NAME/ llvm-toolchain-NAME NAME >> /etc/apt/sources.list
diff --git a/utils/README.md b/utils/README.md
index b8df0b47..b7eead8e 100644
--- a/utils/README.md
+++ b/utils/README.md
@@ -13,7 +13,7 @@ Here's a quick overview of the stuff you can find in this directory:
- afl_proxy - skeleton file example to show how to fuzz
something where you gather coverage data via
- different means, e.g. hw debugger
+ different means, e.g., hw debugger
- afl_untracer - fuzz binary-only libraries much faster but with
less coverage than qemu_mode
diff --git a/utils/afl_network_proxy/README.md b/utils/afl_network_proxy/README.md
index d2c00be2..c478319a 100644
--- a/utils/afl_network_proxy/README.md
+++ b/utils/afl_network_proxy/README.md
@@ -6,7 +6,8 @@ Note that the impact on fuzzing speed will be huge, expect a loss of 90%.
## When to use this
1. when you have to fuzz a target that has to run on a system that cannot
- contain the fuzzing output (e.g. /tmp too small and file system is read-only)
+ contain the fuzzing output (e.g., /tmp too small and file system is
+ read-only)
2. when the target instantly reboots on crashes
3. ... any other reason you would need this
@@ -28,6 +29,7 @@ For most targets this hurts performance though so it is disabled by default.
Run `afl-network-server` with your target with the -m and -t values you need.
The important parameter is -i, which is the TCP port to listen on.
e.g.:
+
```
afl-network-server -i 1111 -m 25M -t 1000 -- /bin/target -f @@
```
@@ -50,7 +52,7 @@ value itself should be 500-1000 higher than the one on afl-network-server.
The TARGET can be an IPv4 or IPv6 address, or a host name that resolves to
either. Note that the outgoing interface can also be specified with a '%' for
-`afl-network-client`, e.g. `fe80::1234%eth0`.
+`afl-network-client`, e.g., `fe80::1234%eth0`.
Also make sure your default TCP window size is larger than your MAP_SIZE
(130kb is a good value).
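Putting both halves together, a rough sketch of a full setup (port, timeout, and target are placeholders, the interface-qualified IPv6 address is the example from the text above, and the exact client invocation may differ slightly from this sketch):

```
# System that runs the target: listen on TCP port 1111 and execute the target.
afl-network-server -i 1111 -m 25M -t 1000 -- /bin/target -f @@

# Fuzzing system: afl-fuzz drives afl-network-client, which forwards each test
# case to TARGET PORT. Use a -t value 500-1000 higher than the server's.
afl-fuzz -i in -o out -t 2000 -- afl-network-client fe80::1234%eth0 1111
```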
--
cgit 1.4.1
From 0594bcb0cbefcb5e99f101800d4bd0b89f6689f1 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sat, 4 Dec 2021 19:31:32 +0100
Subject: Remove old references
---
dictionaries/README.md | 4 +---
docs/afl-fuzz_approach.md | 3 +--
2 files changed, 2 insertions(+), 5 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/dictionaries/README.md b/dictionaries/README.md
index 2c0056f6..f3b8a9e5 100644
--- a/dictionaries/README.md
+++ b/dictionaries/README.md
@@ -4,9 +4,7 @@
This subdirectory contains a set of dictionaries that can be used in
conjunction with the -x option to allow the fuzzer to effortlessly explore the
-grammar of some of the more verbose data formats or languages. The basic
-principle behind the operation of fuzzer dictionaries is outlined in section 10
-of the "main" README.md for the project.
+grammar of some of the more verbose data formats or languages.
These sets were done by Michal Zalewski, various contributors, and imported
from oss-fuzz, go-fuzz and libfuzzer.
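As a usage sketch (the dictionary file name is only an example, so pick the one matching your target's input format, and the target itself is a placeholder):

```
# Pass a format-specific dictionary with -x so the fuzzer can splice known
# keywords of the grammar (here assumed to be SQL) into its test cases.
afl-fuzz -i input -o output -x dictionaries/sql.dict -- ./target @@
```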
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 68f45891..702e020d 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -424,8 +424,7 @@ There are three subdirectories created within the output directory and updated
in real-time:
- queue/ - test cases for every distinctive execution path, plus all the
- starting files given by the user. This is the synthesized corpus
- mentioned in section 2.
+ starting files given by the user. This is the synthesized corpus.
Before using this corpus for any other purposes, you can shrink
it to a smaller size using the afl-cmin tool. The tool will find
--
cgit 1.4.1
From 13eedcd5e8128419ae1b3e04d56a775eeea6f471 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sat, 4 Dec 2021 19:42:47 +0100
Subject: Fix punctuation in connection with "etc."
---
README.md | 2 +-
docs/afl-fuzz_approach.md | 2 +-
docs/fuzzing_in_depth.md | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/README.md b/README.md
index 93c0dd10..08363149 100644
--- a/README.md
+++ b/README.md
@@ -86,7 +86,7 @@ Step-by-step quick start:
```
2. Get a small but valid input file that makes sense to the program. When
- fuzzing verbose syntax (SQL, HTTP, etc), create a dictionary as described in
+ fuzzing verbose syntax (SQL, HTTP, etc.), create a dictionary as described in
[dictionaries/README.md](dictionaries/README.md), too.
3. If the program reads from stdin, run `afl-fuzz` like so:
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 702e020d..fefde029 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -523,7 +523,7 @@ into each of them or deploy scripts to read the fuzzer statistics. Using
to your favorite StatsD server. Depending on your StatsD server, you will be
able to monitor, trigger alerts, or perform actions based on these metrics (e.g:
alert on slow exec/s for a new build, threshold of crashes, time since last
-crash > X, etc).
+crash > X, etc.).
The selected metrics are a subset of all the metrics found in the status and in
the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index 011ba783..d408aa91 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -722,7 +722,7 @@ just for AFL++).
Here are some of the most important caveats for AFL++:
- AFL++ detects faults by checking for the first spawned process dying due to a
- signal (SIGSEGV, SIGABRT, etc). Programs that install custom handlers for
+ signal (SIGSEGV, SIGABRT, etc.). Programs that install custom handlers for
these signals may need to have the relevant code commented out. In the same
vein, faults in child processes spawned by the fuzzed target may evade
detection unless you manually add some code to catch that.
--
cgit 1.4.1
From bcd81c377d22cf26812127881a8ac15ed9c022ad Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sat, 4 Dec 2021 20:38:00 +0100
Subject: Fix line length and formatting
---
CONTRIBUTING.md | 11 ++--
README.md | 26 +++++---
TODO.md | 9 ++-
docs/FAQ.md | 82 ++++++++++++++++---------
docs/INSTALL.md | 79 +++++++++++++++---------
docs/afl-fuzz_approach.md | 4 +-
docs/best_practices.md | 114 +++++++++++++++++++++++------------
docs/custom_mutators.md | 80 ++++++++++++------------
docs/env_variables.md | 96 ++++++++++++++---------------
docs/fuzzing_binary-only_targets.md | 6 +-
docs/fuzzing_in_depth.md | 50 ++++++++-------
docs/ideas.md | 55 +++++++++--------
docs/important_changes.md | 29 ++++-----
docs/rpc_statsd.md | 73 ++++++++++++++++------
frida_mode/Scripting.md | 20 +++---
instrumentation/README.gcc_plugin.md | 4 +-
16 files changed, 435 insertions(+), 303 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 0ab4f8ec..fb13b91a 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -15,10 +15,9 @@ project, or added a file in a directory we already format, otherwise run:
./.custom-format.py -i file-that-you-have-created.c
```
-Regarding the coding style, please follow the AFL style.
-No camel case at all and use AFL's macros wherever possible
-(e.g., WARNF, FATAL, MAP_SIZE, ...).
+Regarding the coding style, please follow the AFL style. No camel case at all
+and use AFL's macros wherever possible (e.g., WARNF, FATAL, MAP_SIZE, ...).
-Remember that AFL++ has to build and run on many platforms, so
-generalize your Makefiles/GNUmakefile (or your patches to our pre-existing
-Makefiles) to be as generic as possible.
+Remember that AFL++ has to build and run on many platforms, so generalize your
+Makefiles/GNUmakefile (or your patches to our pre-existing Makefiles) to be as
+generic as possible.
\ No newline at end of file
diff --git a/README.md b/README.md
index 08363149..b70eb1ab 100644
--- a/README.md
+++ b/README.md
@@ -44,8 +44,8 @@ Here is some information to get you started:
## Building and installing AFL++
-To have AFL++ easily available with everything compiled, pull the image
-directly from the Docker Hub:
+To have AFL++ easily available with everything compiled, pull the image directly
+from the Docker Hub:
```shell
docker pull aflplusplus/aflplusplus
@@ -53,8 +53,8 @@ docker run -ti -v /location/of/your/target:/src aflplusplus/aflplusplus
```
This image is automatically generated when a push to the stable repo happens
-(see [branches](#branches)). You will find your target source
-code in `/src` in the container.
+(see [branches](#branches)). You will find your target source code in `/src` in
+the container.
To build AFL++ yourself, continue at [docs/INSTALL.md](docs/INSTALL.md).
@@ -120,8 +120,8 @@ Questions? Concerns? Bug reports?
* The contributors can be reached via
[https://github.com/AFLplusplus/AFLplusplus](https://github.com/AFLplusplus/AFLplusplus).
-* Take a look at our [FAQ](docs/FAQ.md). If you find an interesting or
- important question missing, submit it via
+* Take a look at our [FAQ](docs/FAQ.md). If you find an interesting or important
+ question missing, submit it via
[https://github.com/AFLplusplus/AFLplusplus/discussions](https://github.com/AFLplusplus/AFLplusplus/discussions).
* There is a mailing list for the AFL/AFL++ project
([browse archive](https://groups.google.com/group/afl-users)). To compare
@@ -133,10 +133,16 @@ Questions? Concerns? Bug reports?
The following branches exist:
-* [release](https://github.com/AFLplusplus/AFLplusplus/tree/release): the latest release
-* [stable/trunk](https://github.com/AFLplusplus/AFLplusplus/): stable state of AFL++ - it is synced from dev from time to time when we are satisfied with its stability
-* [dev](https://github.com/AFLplusplus/AFLplusplus/tree/dev): development state of AFL++ - bleeding edge and you might catch a checkout which does not compile or has a bug. *We only accept PRs in dev!!*
-* (any other): experimental branches to work on specific features or testing new functionality or changes.
+* [release](https://github.com/AFLplusplus/AFLplusplus/tree/release): the latest
+ release
+* [stable/trunk](https://github.com/AFLplusplus/AFLplusplus/): stable state of
+ AFL++ - it is synced from dev from time to time when we are satisfied with its
+ stability
+* [dev](https://github.com/AFLplusplus/AFLplusplus/tree/dev): development state
+ of AFL++ - bleeding edge and you might catch a checkout which does not compile
+ or has a bug. *We only accept PRs in dev!!*
+* (any other): experimental branches to work on specific features or testing new
+ functionality or changes.
## Help wanted
diff --git a/TODO.md b/TODO.md
index b8ac22ef..04f3abab 100644
--- a/TODO.md
+++ b/TODO.md
@@ -23,11 +23,10 @@ qemu_mode/frida_mode:
- add AFL_QEMU_EXITPOINT (maybe multiple?), maybe pointless as there is
persistent mode
-
## Ideas
- LTO/sancov: write current edge to prev_loc and use that information when
- using cmplog or __sanitizer_cov_trace_cmp*. maybe we can deduct by follow
- up edge numbers that both following cmp paths have been found and then
- disable working on this edge id -> cmplog_intelligence branch
- - use cmplog colorization taint result for havoc locations?
+  using cmplog or __sanitizer_cov_trace_cmp*. maybe we can deduce by follow-up
+ edge numbers that both following cmp paths have been found and then disable
+ working on this edge id -> cmplog_intelligence branch
+ - use cmplog colorization taint result for havoc locations?
\ No newline at end of file
diff --git a/docs/FAQ.md b/docs/FAQ.md
index 671957ef..7869ee61 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -8,35 +8,45 @@ If you find an interesting or important question missing, submit it via
What is the difference between AFL and AFL++?
- AFL++ is a superior fork to Google's AFL - more speed, more and better mutations, more and better instrumentation, custom module support, etc.
+ AFL++ is a superior fork to Google's AFL - more speed, more and better
+ mutations, more and better instrumentation, custom module support, etc.
- American Fuzzy Lop (AFL) was developed by Michał "lcamtuf" Zalewski starting in 2013/2014, and when he left Google end of 2017 he stopped developing it.
+ American Fuzzy Lop (AFL) was developed by Michał "lcamtuf" Zalewski starting
+ in 2013/2014, and when he left Google end of 2017 he stopped developing it.
At the end of 2019, the Google fuzzing team took over maintenance of AFL,
however, it is only accepting PRs from the community and is not developing
enhancements anymore.
- In the second quarter of 2019, 1 1/2 years later, when no further development of AFL had happened and it became clear there would none be coming, AFL++ was born, where initially community patches were collected and applied for bug fixes and enhancements.
- Then from various AFL spin-offs - mostly academic research - features were integrated.
- This already resulted in a much advanced AFL.
-
- Until the end of 2019, the AFL++ team had grown to four active developers which then implemented their own research and features, making it now by far the most flexible and feature rich guided fuzzer available as open source.
- And in independent fuzzing benchmarks it is one of the best fuzzers available, e.g., [Fuzzbench Report](https://www.fuzzbench.com/reports/2020-08-03/index.html).
+ In the second quarter of 2019, 1 1/2 years later, when no further development
+  of AFL had happened and it became clear none would be coming, AFL++ was
+ born, where initially community patches were collected and applied for bug
+ fixes and enhancements. Then from various AFL spin-offs - mostly academic
+  research - features were integrated. This already resulted in a much more
+  advanced AFL.
+
+ Until the end of 2019, the AFL++ team had grown to four active developers
+  who then implemented their own research and features, making it now by far
+  the most flexible and feature-rich guided fuzzer available as open source. And
+ in independent fuzzing benchmarks it is one of the best fuzzers available,
+ e.g., [Fuzzbench
+ Report](https://www.fuzzbench.com/reports/2020-08-03/index.html).
Where can I find tutorials?
- We compiled a list of tutorials and exercises, see [tutorials.md](tutorials.md).
+ We compiled a list of tutorials and exercises, see
+ [tutorials.md](tutorials.md).
What is an "edge"?
A program contains `functions`, `functions` contain the compiled machine code.
- The compiled machine code in a `function` can be in a single or many `basic blocks`.
- A `basic block` is the largest possible number of subsequent machine code
- instructions that has exactly one entry point (which can be be entered by
+ The compiled machine code in a `function` can be in a single or many `basic
+ blocks`. A `basic block` is the largest possible number of subsequent machine
+  code instructions that has exactly one entry point (which can be entered by
multiple other basic blocks) and runs linearly without branching or jumping to
other addresses (except at the end).
@@ -60,7 +70,8 @@ If you find an interesting or important question missing, submit it via
Every code block between two jump locations is a `basic block`.
- An `edge` is then the unique relationship between two directly connected `basic blocks` (from the code example above):
+ An `edge` is then the unique relationship between two directly connected
+ `basic blocks` (from the code example above):
```
Block A
@@ -75,8 +86,8 @@ If you find an interesting or important question missing, submit it via
Block E
```
- Every line between two blocks is an `edge`.
- Note that a few basic block loop to itself, this too would be an edge.
+  Every line between two blocks is an `edge`. Note that a few basic blocks
+  loop to themselves; this, too, would be an edge.
## Targets
@@ -86,7 +97,8 @@ If you find an interesting or important question missing, submit it via
AFL++ is a great fuzzer if you have the source code available.
- However, if there is only the binary program and no source code available, then the standard non-instrumented mode is not effective.
+ However, if there is only the binary program and no source code available,
+ then the standard non-instrumented mode is not effective.
To learn how these binaries can be fuzzed, read
[fuzzing_binary-only_targets.md](fuzzing_binary-only_targets.md).
@@ -97,15 +109,19 @@ If you find an interesting or important question missing, submit it via
The short answer is - you cannot, at least not "out of the box".
- For more information on fuzzing network services, see [best_practices.md#fuzzing-a-network-service](best_practices.md#fuzzing-a-network-service).
+ For more information on fuzzing network services, see
+ [best_practices.md#fuzzing-a-network-service](best_practices.md#fuzzing-a-network-service).
How can I fuzz a GUI program?
- Not all GUI programs are suitable for fuzzing. If the GUI program can read the fuzz data from a file without needing any user interaction, then it would be suitable for fuzzing.
+ Not all GUI programs are suitable for fuzzing. If the GUI program can read the
+ fuzz data from a file without needing any user interaction, then it would be
+ suitable for fuzzing.
- For more information on fuzzing GUI programs, see [best_practices.md#fuzzing-a-gui-program](best_practices.md#fuzzing-a-gui-program).
+ For more information on fuzzing GUI programs, see
+ [best_practices.md#fuzzing-a-gui-program](best_practices.md#fuzzing-a-gui-program).
## Performance
@@ -113,27 +129,33 @@ If you find an interesting or important question missing, submit it via
How can I improve the fuzzing speed?
- There are a few things you can do to improve the fuzzing speed, see [best_practices.md#improving-speed](best_practices.md#improving-speed).
+ There are a few things you can do to improve the fuzzing speed, see
+ [best_practices.md#improving-speed](best_practices.md#improving-speed).
Why is my stability below 100%?
- Stability is measured by how many percent of the edges in the target are "stable".
- Sending the same input again and again should take the exact same path through the target every time.
- If that is the case, the stability is 100%.
+  Stability is measured by what percentage of the edges in the target are
+ "stable". Sending the same input again and again should take the exact same
+ path through the target every time. If that is the case, the stability is
+ 100%.
If, however, randomness happens, e.g., a thread reading other external data,
reaction to timing, etc., then in some of the re-executions with the same data
the edge coverage result will be different across runs. Those edges that
change are then flagged "unstable".
- The more "unstable" edges, the more difficult for AFL++ to identify valid new paths.
+  The more "unstable" edges there are, the more difficult it is for AFL++ to
+  identify valid new paths.
- A value above 90% is usually fine and a value above 80% is also still ok, and even a value above 20% can still result in successful finds of bugs.
- However, it is recommended that for values below 90% or 80% you should take countermeasures to improve stability.
+  A value above 90% is usually fine, a value above 80% is still OK, and even a
+  value above 20% can still result in successful finds of bugs. However, for
+  values below 90% or 80%, you should take countermeasures to improve
+  stability.
- For more information on stability and how to improve the stability value, see [best_practices.md#improving-stability](best_practices.md#improving-stability).
+ For more information on stability and how to improve the stability value, see
+ [best_practices.md#improving-stability](best_practices.md#improving-stability).
## Troubleshooting
@@ -141,7 +163,8 @@ If you find an interesting or important question missing, submit it via
I got a weird compile error from clang.
- If you see this kind of error when trying to instrument a target with afl-cc/afl-clang-fast/afl-clang-lto:
+ If you see this kind of error when trying to instrument a target with
+ afl-cc/afl-clang-fast/afl-clang-lto:
```
/prg/tmp/llvm-project/build/bin/clang-13: symbol lookup error: /usr/local/bin/../lib/afl//cmplog-instructions-pass.so: undefined symbol: _ZNK4llvm8TypeSizecvmEv
@@ -155,7 +178,8 @@ If you find an interesting or important question missing, submit it via
********************
```
- Then this means that your OS updated the clang installation from an upgrade package and because of that the AFL++ llvm plugins do not match anymore.
+ Then this means that your OS updated the clang installation from an upgrade
+ package and because of that the AFL++ llvm plugins do not match anymore.
Solution: `git pull ; make clean install` of AFL++.
\ No newline at end of file
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index c1e22e36..08d3283e 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -3,7 +3,8 @@
## Linux on x86
An easy way to install AFL++ with everything compiled is available via docker:
-You can use the [Dockerfile](../Dockerfile) (which has gcc-10 and clang-11 - hence afl-clang-lto is available!) or just pull directly from the Docker Hub:
+You can use the [Dockerfile](../Dockerfile) (which has gcc-10 and clang-11 -
+hence afl-clang-lto is available!) or just pull directly from the Docker Hub:
```shell
docker pull aflplusplus/aflplusplus
@@ -13,8 +14,8 @@ docker run -ti -v /location/of/your/target:/src aflplusplus/aflplusplus
This image is automatically generated when a push to the stable repo happens.
You will find your target source code in /src in the container.
-If you want to build AFL++ yourself, you have many options.
-The easiest choice is to build and install everything:
+If you want to build AFL++ yourself, you have many options. The easiest choice
+is to build and install everything:
```shell
sudo apt-get update
@@ -29,10 +30,13 @@ make distrib
sudo make install
```
-It is recommended to install the newest available gcc, clang and llvm-dev possible in your distribution!
+It is recommended to install the newest gcc, clang and llvm-dev available in
+your distribution!
-Note that "make distrib" also builds instrumentation, qemu_mode, unicorn_mode and more.
-If you just want plain AFL++, then do "make all". However, compiling and using at least instrumentation is highly recommended for much better results - hence in this case choose:
+Note that "make distrib" also builds instrumentation, qemu_mode, unicorn_mode
+and more. If you just want plain AFL++, then do "make all". However, compiling
+and using at least instrumentation is highly recommended for much better
+results - hence in this case choose:
```shell
make source-only
@@ -41,19 +45,25 @@ make source-only
These build targets exist:
* all: just the main AFL++ binaries
-* binary-only: everything for binary-only fuzzing: qemu_mode, unicorn_mode, libdislocator, libtokencap
-* source-only: everything for source code fuzzing: instrumentation, libdislocator, libtokencap
+* binary-only: everything for binary-only fuzzing: qemu_mode, unicorn_mode,
+ libdislocator, libtokencap
+* source-only: everything for source code fuzzing: instrumentation,
+ libdislocator, libtokencap
* distrib: everything (for both binary-only and source code fuzzing)
* man: creates simple man pages from the help option of the programs
* install: installs everything you have compiled with the build options above
* clean: cleans everything compiled, not downloads (unless not on a checkout)
* deepclean: cleans everything including downloads
* code-format: format the code, do this before you commit and send a PR please!
-* tests: runs test cases to ensure that all features are still working as they should
+* tests: runs test cases to ensure that all features are still working as they
+ should
* unit: perform unit tests (based on cmocka)
* help: shows these build options
-[Unless you are on Mac OS X](https://developer.apple.com/library/archive/qa/qa1118/_index.html), you can also build statically linked versions of the AFL++ binaries by passing the `STATIC=1` argument to make:
+[Unless you are on Mac OS
+X](https://developer.apple.com/library/archive/qa/qa1118/_index.html), you can
+also build statically linked versions of the AFL++ binaries by passing the
+`STATIC=1` argument to make:
```shell
make STATIC=1
@@ -67,7 +77,8 @@ These build options exist:
* PROFILING - compile with profiling information (gprof)
* INTROSPECTION - compile afl-fuzz with mutation introspection
* NO_PYTHON - disable python support
-* NO_SPLICING - disables splicing mutation in afl-fuzz, not recommended for normal fuzzing
+* NO_SPLICING - disables splicing mutation in afl-fuzz, not recommended for
+ normal fuzzing
* AFL_NO_X86 - if compiling on non-intel/amd platforms
* LLVM_CONFIG - if your distro doesn't use the standard name for llvm-config
(e.g., Debian)
@@ -76,15 +87,17 @@ e.g.: `make ASAN_BUILD=1`
## MacOS X on x86 and arm64 (M1)
-MacOS X should work, but there are some gotchas due to the idiosyncrasies of the platform.
-On top of this, we have limited release testing capabilities and depend mostly on user feedback.
+MacOS X should work, but there are some gotchas due to the idiosyncrasies of the
+platform. On top of this, we have limited release testing capabilities and
+depend mostly on user feedback.
-To build AFL, install llvm (and perhaps gcc) from brew and follow the general instructions for Linux.
-If possible, avoid Xcode at all cost.
+To build AFL, install llvm (and perhaps gcc) from brew and follow the general
+instructions for Linux. If possible, avoid Xcode at all cost.
`brew install wget git make cmake llvm gdb`
-Be sure to setup `PATH` to point to the correct clang binaries and use the freshly installed clang, clang++ and gmake, e.g.:
+Be sure to set up `PATH` to point to the correct clang binaries and use the
+freshly installed clang, clang++ and gmake, e.g.:
```
export PATH="/usr/local/Cellar/llvm/12.0.1/bin/:$PATH"
@@ -97,20 +110,20 @@ cd ..
gmake install
```
-`afl-gcc` will fail unless you have GCC installed, but that is using outdated instrumentation anyway.
-You don't want that.
-Note that `afl-clang-lto`, `afl-gcc-fast` and `qemu_mode` are not working on MacOS.
+`afl-gcc` will fail unless you have GCC installed, but that is using outdated
+instrumentation anyway. You don't want that. Note that `afl-clang-lto`,
+`afl-gcc-fast` and `qemu_mode` do not work on MacOS.
-The crash reporting daemon that comes by default with MacOS X will cause problems with fuzzing.
-You need to turn it off:
+The crash reporting daemon that comes by default with MacOS X will cause
+problems with fuzzing. You need to turn it off:
```
launchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.ReportCrash.Root.plist
```
-The `fork()` semantics on OS X are a bit unusual compared to other unix systems and definitely don't look POSIX-compliant.
-This means two things:
+The `fork()` semantics on OS X are a bit unusual compared to other unix systems
+and definitely don't look POSIX-compliant. This means two things:
- Fuzzing will be probably slower than on Linux. In fact, some folks report
considerable performance gains by running the jobs inside a Linux VM on
@@ -119,11 +132,13 @@ This means two things:
forkserver. If you run into any problems, set `AFL_NO_FORKSRV=1` in the
environment before starting afl-fuzz.
-User emulation mode of QEMU does not appear to be supported on MacOS X, so black-box instrumentation mode (`-Q`) will not work.
-However, Frida mode (`-O`) should work on x86 and arm64 MacOS boxes.
+User emulation mode of QEMU does not appear to be supported on MacOS X, so
+black-box instrumentation mode (`-Q`) will not work. However, Frida mode (`-O`)
+should work on x86 and arm64 MacOS boxes.
-MacOS X supports SYSV shared memory used by AFL's instrumentation, but the default settings aren't usable with AFL++.
-The default settings on 10.14 seem to be:
+MacOS X supports SYSV shared memory used by AFL's instrumentation, but the
+default settings aren't usable with AFL++. The default settings on 10.14 seem to
+be:
```bash
$ ipcs -M
@@ -136,14 +151,16 @@ shminfo:
shmall: 1024 (max amount of shared memory in pages)
```
-To temporarily change your settings to something minimally usable with AFL++, run these commands as root:
+To temporarily change your settings to something minimally usable with AFL++,
+run these commands as root:
```bash
sysctl kern.sysv.shmmax=8388608
sysctl kern.sysv.shmall=4096
```
-If you're running more than one instance of AFL, you likely want to make `shmall` bigger and increase `shmseg` as well:
+If you're running more than one instance of AFL, you likely want to make
+`shmall` bigger and increase `shmseg` as well:
```bash
sysctl kern.sysv.shmmax=8388608
@@ -151,4 +168,6 @@ sysctl kern.sysv.shmseg=48
sysctl kern.sysv.shmall=98304
```
-See [https://www.spy-hill.com/help/apple/SharedMemory.html](https://www.spy-hill.com/help/apple/SharedMemory.html) for documentation for these settings and how to make them permanent.
\ No newline at end of file
+See
+[https://www.spy-hill.com/help/apple/SharedMemory.html](https://www.spy-hill.com/help/apple/SharedMemory.html)
+for documentation for these settings and how to make them permanent.
\ No newline at end of file
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index fefde029..3804f5a0 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -445,8 +445,8 @@ involve any state transitions not seen in previously-recorded faults. If a
single bug can be reached in multiple ways, there will be some count inflation
early in the process, but this should quickly taper off.
-The file names for crashes and hangs are correlated with the parent, non-faulting
-queue entries. This should help with debugging.
+The file names for crashes and hangs are correlated with the parent,
+non-faulting queue entries. This should help with debugging.
## Visualizing
diff --git a/docs/best_practices.md b/docs/best_practices.md
index 6a406bde..e6b252f6 100644
--- a/docs/best_practices.md
+++ b/docs/best_practices.md
@@ -18,7 +18,8 @@
### Fuzzing a target with source code available
-To learn how to fuzz a target if source code is available, see [fuzzing_in_depth.md](fuzzing_in_depth.md).
+To learn how to fuzz a target if source code is available, see
+[fuzzing_in_depth.md](fuzzing_in_depth.md).
### Fuzzing a binary-only target
@@ -27,11 +28,16 @@ For a comprehensive guide, see
### Fuzzing a GUI program
-If the GUI program can read the fuzz data from a file (via the command line, a fixed location or via an environment variable) without needing any user interaction, then it would be suitable for fuzzing.
+If the GUI program can read the fuzz data from a file (via the command line, a
+fixed location or via an environment variable) without needing any user
+interaction, then it would be suitable for fuzzing.
-Otherwise, it is not possible without modifying the source code - which is a very good idea anyway as the GUI functionality is a huge CPU/time overhead for the fuzzing.
+Otherwise, it is not possible without modifying the source code - which is a
+very good idea anyway as the GUI functionality is a huge CPU/time overhead for
+the fuzzing.
-So create a new `main()` that just reads the test case and calls the functionality for processing the input that the GUI program is using.
+So create a new `main()` that just reads the test case and calls the
+functionality for processing the input that the GUI program is using.
### Fuzzing a network service
@@ -40,13 +46,16 @@ Fuzzing a network service does not work "out of the box".
Using a network channel is inadequate for several reasons:
- it has a slow-down of x10-20 on the fuzzing speed
- it does not scale to fuzzing multiple instances easily,
-- instead of one initial data packet often a back-and-forth interplay of packets is needed for stateful protocols (which is totally unsupported by most coverage aware fuzzers).
+- instead of one initial data packet often a back-and-forth interplay of packets
+ is needed for stateful protocols (which is totally unsupported by most
+ coverage aware fuzzers).
-The established method to fuzz network services is to modify the source code
-to read from a file or stdin (fd 0) (or even faster via shared memory, combine
-this with persistent mode [instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
-and you have a performance gain of x10 instead of a performance loss of over
-x10 - that is a x100 difference!).
+The established method to fuzz network services is to modify the source code to
+read from a file or stdin (fd 0) (or even faster via shared memory, combine this
+with persistent mode
+[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md)
+and you have a performance gain of x10 instead of a performance loss of over
+x10 - that is a x100 difference!).
If modifying the source is not an option (e.g., because you only have a binary
and perform binary fuzzing) you can also use a shared library with AFL_PRELOAD
@@ -64,13 +73,25 @@ allows you to define network state with different type of data packets.
### Improving speed
-1. Use [llvm_mode](../instrumentation/README.llvm.md): afl-clang-lto (llvm >= 11) or afl-clang-fast (llvm >= 9 recommended).
-2. Use [persistent mode](../instrumentation/README.persistent_mode.md) (x2-x20 speed increase).
-3. Instrument just what you are interested in, see [instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
-4. If you do not use shmem persistent mode, use `AFL_TMPDIR` to put the input file directory on a tempfs location, see [env_variables.md](env_variables.md).
-5. Improve Linux kernel performance: modify `/etc/default/grub`, set `GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then `update-grub` and `reboot` (warning: makes the system less secure).
-6. Running on an `ext2` filesystem with `noatime` mount option will be a bit faster than on any other journaling filesystem.
-7. Use your cores ([fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores))!
+1. Use [llvm_mode](../instrumentation/README.llvm.md): afl-clang-lto (llvm >=
+ 11) or afl-clang-fast (llvm >= 9 recommended).
+2. Use [persistent mode](../instrumentation/README.persistent_mode.md) (x2-x20
+ speed increase).
+3. Instrument just what you are interested in, see
+ [instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
+4. If you do not use shmem persistent mode, use `AFL_TMPDIR` to put the input
+   file directory on a tmpfs location (see the sketch after this list and
+   [env_variables.md](env_variables.md)).
+5. Improve Linux kernel performance: modify `/etc/default/grub`, set
+ `GRUB_CMDLINE_LINUX_DEFAULT="ibpb=off ibrs=off kpti=off l1tf=off mds=off
+ mitigations=off no_stf_barrier noibpb noibrs nopcid nopti
+ nospec_store_bypass_disable nospectre_v1 nospectre_v2 pcid=off pti=off
+ spec_store_bypass_disable=off spectre_v2=off stf_barrier=off"`; then
+ `update-grub` and `reboot` (warning: makes the system less secure).
+6. Running on an `ext2` filesystem with `noatime` mount option will be a bit
+ faster than on any other journaling filesystem.
+7. Use your cores
+ ([fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores))!
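The sketch referenced in item 4 above, with a RAM-backed path chosen as an example (any tmpfs mount works; paths and target are placeholders):

```
# Keep the input file afl-fuzz writes for the target on a tmpfs mount to avoid
# disk I/O on every execution.
mkdir -p /dev/shm/afl-tmp
export AFL_TMPDIR=/dev/shm/afl-tmp
afl-fuzz -i input -o output -- ./target @@
```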
### Improving stability
@@ -78,46 +99,60 @@ For fuzzing a 100% stable target that covers all edges is the best case. A 90%
stable target that covers all edges is, however, better than a 100% stable
target that ignores 10% of the edges.
-With instability, you basically have a partial coverage loss on an edge, with ignored functions you have a full loss on that edges.
+With instability, you basically have a partial coverage loss on an edge; with
+ignored functions, you have a full loss on those edges.
There are functions that are unstable, but also provide value to coverage, e.g.,
init functions that use fuzz data as input. If, however, a function that has
nothing to do with the input data is the source of instability, e.g., checking
jitter, or is a hash map function etc., then it should not be instrumented.
-To be able to exclude these functions (based on AFL++'s measured stability), the following process will allow to identify functions with variable edges.
+To be able to exclude these functions (based on AFL++'s measured stability), the
+following process will allow you to identify functions with variable edges.
-Four steps are required to do this and it also requires quite some knowledge of coding and/or disassembly and is effectively possible only with `afl-clang-fast` `PCGUARD` and `afl-clang-lto` `LTO` instrumentation.
+Four steps are required to do this and it also requires quite some knowledge of
+coding and/or disassembly and is effectively possible only with `afl-clang-fast`
+`PCGUARD` and `afl-clang-lto` `LTO` instrumentation.
1. Instrument to be able to find the responsible function(s):
- a) For LTO instrumented binaries, this can be documented during compile time, just set `export AFL_LLVM_DOCUMENT_IDS=/path/to/a/file`.
- This file will have one assigned edge ID and the corresponding function per line.
-
- b) For PCGUARD instrumented binaries, it is much more difficult. Here you can either modify the `__sanitizer_cov_trace_pc_guard` function in `instrumentation/afl-llvm-rt.o.c` to write a backtrace to a file if the ID in `__afl_area_ptr[*guard]` is one of the unstable edge IDs.
- (Example code is already there).
- Then recompile and reinstall `llvm_mode` and rebuild your target.
- Run the recompiled target with `afl-fuzz` for a while and then check the file that you wrote with the backtrace information.
- Alternatively, you can use `gdb` to hook `__sanitizer_cov_trace_pc_guard_init` on start, check to which memory address the edge ID value is written, and set a write breakpoint to that address (`watch 0x.....`).
-
- c) In other instrumentation types, this is not possible.
- So just recompile with the two mentioned above.
- This is just for identifying the functions that have unstable edges.
+ a) For LTO instrumented binaries, this can be documented during compile
+ time, just set `export AFL_LLVM_DOCUMENT_IDS=/path/to/a/file`. This file
+ will have one assigned edge ID and the corresponding function per line.
+
+ b) For PCGUARD instrumented binaries, it is much more difficult. Here you
+ can either modify the `__sanitizer_cov_trace_pc_guard` function in
+ `instrumentation/afl-llvm-rt.o.c` to write a backtrace to a file if the
+ ID in `__afl_area_ptr[*guard]` is one of the unstable edge IDs. (Example
+ code is already there). Then recompile and reinstall `llvm_mode` and
+ rebuild your target. Run the recompiled target with `afl-fuzz` for a
+ while and then check the file that you wrote with the backtrace
+ information. Alternatively, you can use `gdb` to hook
+ `__sanitizer_cov_trace_pc_guard_init` on start, check to which memory
+ address the edge ID value is written, and set a write breakpoint to that
+ address (`watch 0x.....`).
+
+ c) In other instrumentation types, this is not possible. So just recompile
+ with the two mentioned above. This is just for identifying the functions
+ that have unstable edges.
2. Identify which edge ID numbers are unstable.
Run the target with `export AFL_DEBUG=1` for a few minutes then terminate.
The out/fuzzer_stats file will then show the edge IDs that were identified
- as unstable in the `var_bytes` entry. You can match these numbers
- directly to the data you created in the first step.
- Now you know which functions are responsible for the instability
+ as unstable in the `var_bytes` entry. You can match these numbers directly
+ to the data you created in the first step. Now you know which functions are
+   responsible for the instability (a combined command sketch follows below).
3. Create a text file with the filenames/functions
- Identify which source code files contain the functions that you need to remove from instrumentation, or just specify the functions you want to skip for instrumentation.
- Note that optimization might inline functions!
+ Identify which source code files contain the functions that you need to
+ remove from instrumentation, or just specify the functions you want to skip
+ for instrumentation. Note that optimization might inline functions!
+
+ Follow this document on how to do this:
+ [instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
- Follow this document on how to do this: [instrumentation/README.instrument_list.md](../instrumentation/README.instrument_list.md).
If `PCGUARD` is used, then you need to follow this guide (needs llvm 12+!):
[https://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation](https://clang.llvm.org/docs/SanitizerCoverage.html#partially-disabling-instrumentation)
@@ -132,4 +167,5 @@ Four steps are required to do this and it also requires quite some knowledge of
Recompile, fuzz it, be happy :)
- This link explains this process for [Fuzzbench](https://github.com/google/fuzzbench/issues/677).
+ This link explains this process for
+ [Fuzzbench](https://github.com/google/fuzzbench/issues/677).
\ No newline at end of file
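The combined command sketch referenced in step 2 above, for the LTO variant (build command, paths, and target are placeholders; the grep is just one way to read the `var_bytes` entry):

```
# 1. Build with afl-clang-lto and record the edge-ID-to-function mapping.
export AFL_LLVM_DOCUMENT_IDS=/tmp/edge_ids.txt
make CC=afl-clang-lto CXX=afl-clang-lto++

# 2. Fuzz briefly with AFL_DEBUG=1, terminate after a few minutes, then look
#    up the unstable edge IDs reported in the stats file.
AFL_DEBUG=1 afl-fuzz -i input -o out -- ./target @@
grep var_bytes out/fuzzer_stats

# 3. Match those IDs against /tmp/edge_ids.txt to find the functions to
#    exclude from instrumentation (see README.instrument_list.md).
```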
diff --git a/docs/custom_mutators.md b/docs/custom_mutators.md
index 6bee5413..2a77db82 100644
--- a/docs/custom_mutators.md
+++ b/docs/custom_mutators.md
@@ -4,13 +4,13 @@ This file describes how you can implement custom mutations to be used in AFL.
For now, we support C/C++ library and Python module, collectivelly named as the
custom mutator.
-There is also experimental support for Rust in `custom_mutators/rust`.
-For documentation, refer to that directory.
-Run ```cargo doc -p custom_mutator --open``` in that directory to view the
-documentation in your web browser.
+There is also experimental support for Rust in `custom_mutators/rust`. For
+documentation, refer to that directory. Run `cargo doc -p custom_mutator --open`
+in that directory to view the documentation in your web browser.
Implemented by
-- C/C++ library (`*.so`): Khaled Yakdan from Code Intelligence ()
+- C/C++ library (`*.so`): Khaled Yakdan from Code Intelligence
+ ()
- Python module: Christian Holler from Mozilla ()
## 1) Introduction
@@ -29,7 +29,8 @@ export AFL_CUSTOM_MUTATOR_LIBRARY="full/path/to/mutator_first.so;full/path/to/mu
For details, see [APIs](#2-apis) and [Usage](#3-usage).
-The custom mutation stage is set to be the first non-deterministic stage (right before the havoc stage).
+The custom mutation stage is set to be the first non-deterministic stage (right
+before the havoc stage).
Note: If `AFL_CUSTOM_MUTATOR_ONLY` is set, all mutations will solely be
performed with the custom mutator.
@@ -103,7 +104,8 @@ def deinit(): # optional for Python
- `init`:
- This method is called when AFL++ starts up and is used to seed RNG and set up buffers and state.
+ This method is called when AFL++ starts up and is used to seed RNG and set
+ up buffers and state.
- `queue_get` (optional):
@@ -121,18 +123,17 @@ def deinit(): # optional for Python
- `fuzz` (optional):
This method performs custom mutations on a given input. It also accepts an
- additional test case.
- Note that this function is optional - but it makes sense to use it.
- You would only skip this if `post_process` is used to fix checksums etc.
- so if you are using it, e.g., as a post processing library.
+ additional test case. Note that this function is optional - but it makes
+ sense to use it. You would only skip this if `post_process` is used to fix
+    checksums etc., i.e., if you use the module only as a post-processing library.
Note that a length > 0 *must* be returned!
- `describe` (optional):
When this function is called, it shall describe the current test case,
- generated by the last mutation. This will be called, for example,
- to name the written test case file after a crash occurred.
- Using it can help to reproduce crashing mutations.
+ generated by the last mutation. This will be called, for example, to name
+ the written test case file after a crash occurred. Using it can help to
+ reproduce crashing mutations.
- `havoc_mutation` and `havoc_mutation_probability` (optional):
@@ -144,21 +145,21 @@ def deinit(): # optional for Python
- `post_process` (optional):
For some cases, the format of the mutated data returned from the custom
- mutator is not suitable to directly execute the target with this input.
- For example, when using libprotobuf-mutator, the data returned is in a
- protobuf format which corresponds to a given grammar. In order to execute
- the target, the protobuf data must be converted to the plain-text format
- expected by the target. In such scenarios, the user can define the
- `post_process` function. This function is then transforming the data into the
- format expected by the API before executing the target.
+ mutator is not suitable to directly execute the target with this input. For
+ example, when using libprotobuf-mutator, the data returned is in a protobuf
+ format which corresponds to a given grammar. In order to execute the target,
+ the protobuf data must be converted to the plain-text format expected by the
+ target. In such scenarios, the user can define the `post_process` function.
+    This function then transforms the data into the format expected by the
+ API before executing the target.
This can return any python object that implements the buffer protocol and
supports PyBUF_SIMPLE. These include bytes, bytearray, etc.
- `queue_new_entry` (optional):
- This methods is called after adding a new test case to the queue.
- If the contents of the file was changed return True, False otherwise.
+    This method is called after adding a new test case to the queue. If the
+    contents of the file were changed, return True; otherwise, return False.
- `introspection` (optional):
@@ -170,8 +171,8 @@ def deinit(): # optional for Python
The last method to be called, deinitializing the state.
-Note that there are also three functions for trimming as described in the
-next section.
+Note that there are also three functions for trimming as described in the next
+section.
### Trimming Support
@@ -179,8 +180,8 @@ The generic trimming routines implemented in AFL++ can easily destroy the
structure of complex formats, possibly leading to a point where you have a lot
of test cases in the queue that your Python module cannot process anymore but
your target application still accepts. This is especially the case when your
-target can process a part of the input (causing coverage) and then errors out
-on the remaining input.
+target can process a part of the input (causing coverage) and then errors out on
+the remaining input.
In such cases, it makes sense to implement a custom trimming routine. The API
consists of multiple methods because after each trimming step, we have to go
@@ -213,10 +214,10 @@ trimmed input. Here's a quick API description:
- `post_trim` (optional)
This method is called after each trim operation to inform you if your
- trimming step was successful or not (in terms of coverage). If you receive
- a failure here, you should reset your input to the last known good state.
- In any case, this method must return the next trim iteration index (from 0
- to the maximum amount of steps you returned in `init_trim`).
+ trimming step was successful or not (in terms of coverage). If you receive a
+ failure here, you should reset your input to the last known good state. In
+ any case, this method must return the next trim iteration index (from 0 to
+ the maximum amount of steps you returned in `init_trim`).
Omitting any of three trimming methods will cause the trimming to be disabled
and trigger a fallback to the built-in default trimming routine.
@@ -227,10 +228,10 @@ Optionally, the following environment variables are supported:
- `AFL_CUSTOM_MUTATOR_ONLY`
- Disable all other mutation stages. This can prevent broken test cases
- (those that your Python module can't work with anymore) to fill up your
- queue. Best combined with a custom trimming routine (see below) because
- trimming can cause the same test breakage like havoc and splice.
+ Disable all other mutation stages. This can prevent broken test cases (those
+    that your Python module can't work with anymore) from filling up your queue.
+    Best combined with a custom trimming routine (see below) because trimming
+    can cause the same test case breakage as havoc and splice.
- `AFL_PYTHON_ONLY`
@@ -270,9 +271,10 @@ For C/C++ mutators, the source code must be compiled as a shared object:
```bash
gcc -shared -Wall -O3 example.c -o example.so
```
-Note that if you specify multiple custom mutators, the corresponding functions will
-be called in the order in which they are specified. e.g first `post_process` function of
-`example_first.so` will be called and then that of `example_second.so`.
+Note that if you specify multiple custom mutators, the corresponding functions
+will be called in the order in which they are specified, e.g., first the
+`post_process` function of `example_first.so` will be called and then that of
+`example_second.so`.
### Run
@@ -300,4 +302,4 @@ See [example.c](../custom_mutators/examples/example.c) and
- [bruce30262/libprotobuf-mutator_fuzzing_learning](https://github.com/bruce30262/libprotobuf-mutator_fuzzing_learning/tree/master/4_libprotobuf_aflpp_custom_mutator)
- [thebabush/afl-libprotobuf-mutator](https://github.com/thebabush/afl-libprotobuf-mutator)
- [XML Fuzzing@NullCon 2017](https://www.agarri.fr/docs/XML_Fuzzing-NullCon2017-PUBLIC.pdf)
- - [A bug detected by AFL + XML-aware mutators](https://bugs.chromium.org/p/chromium/issues/detail?id=930663)
+ - [A bug detected by AFL + XML-aware mutators](https://bugs.chromium.org/p/chromium/issues/detail?id=930663)
\ No newline at end of file
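Tying the compile step above to a run, a minimal sketch (paths and the target are placeholders; both environment variables appear earlier in this file):

```bash
# Build the custom mutator as shown above, then point afl-fuzz at it.
gcc -shared -Wall -O3 example.c -o example.so
export AFL_CUSTOM_MUTATOR_LIBRARY="$PWD/example.so"
# Optionally restrict mutations to the custom mutator only:
# export AFL_CUSTOM_MUTATOR_ONLY=1
afl-fuzz -i input -o output -- ./target @@
```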
diff --git a/docs/env_variables.md b/docs/env_variables.md
index 771bf157..3f7bdadb 100644
--- a/docs/env_variables.md
+++ b/docs/env_variables.md
@@ -590,79 +590,81 @@ the preferred way to configure FRIDA mode is through its
* `AFL_FRIDA_DEBUG_MAPS` - See `AFL_QEMU_DEBUG_MAPS`
* `AFL_FRIDA_DRIVER_NO_HOOK` - See `AFL_QEMU_DRIVER_NO_HOOK`. When using the
-QEMU driver to provide a `main` loop for a user provided
-`LLVMFuzzerTestOneInput`, this option configures the driver to read input from
-`stdin` rather than using in-memory test cases.
+ QEMU driver to provide a `main` loop for a user provided
+ `LLVMFuzzerTestOneInput`, this option configures the driver to read input from
+ `stdin` rather than using in-memory test cases.
* `AFL_FRIDA_EXCLUDE_RANGES` - See `AFL_QEMU_EXCLUDE_RANGES`
* `AFL_FRIDA_INST_COVERAGE_FILE` - File to write DynamoRio format coverage
-information (e.g., to be loaded within IDA lighthouse).
+ information (e.g., to be loaded within IDA lighthouse).
* `AFL_FRIDA_INST_DEBUG_FILE` - File to write raw assembly of original blocks
-and their instrumented counterparts during block compilation.
+ and their instrumented counterparts during block compilation.
* `AFL_FRIDA_INST_JIT` - Enable the instrumentation of Just-In-Time compiled
-code. Code is considered to be JIT if the executable segment is not backed by a
-file.
+ code. Code is considered to be JIT if the executable segment is not backed by
+ a file.
* `AFL_FRIDA_INST_NO_OPTIMIZE` - Don't use optimized inline assembly coverage
-instrumentation (the default where available). Required to use
-`AFL_FRIDA_INST_TRACE`.
+ instrumentation (the default where available). Required to use
+ `AFL_FRIDA_INST_TRACE`.
* `AFL_FRIDA_INST_NO_BACKPATCH` - Disable backpatching. At the end of executing
-each block, control will return to FRIDA to identify the next block to execute.
+ each block, control will return to FRIDA to identify the next block to
+ execute.
* `AFL_FRIDA_INST_NO_PREFETCH` - Disable prefetching. By default the child will
-report instrumented blocks back to the parent so that it can also instrument
-them and they be inherited by the next child on fork, implies
-`AFL_FRIDA_INST_NO_PREFETCH_BACKPATCH`.
+ report instrumented blocks back to the parent so that it can also instrument
+  them and they can be inherited by the next child on fork, implies
+ `AFL_FRIDA_INST_NO_PREFETCH_BACKPATCH`.
* `AFL_FRIDA_INST_NO_PREFETCH_BACKPATCH` - Disable prefetching of stalker
-backpatching information. By default the child will report applied backpatches
-to the parent so that they can be applied and then be inherited by the next
-child on fork.
+ backpatching information. By default the child will report applied backpatches
+ to the parent so that they can be applied and then be inherited by the next
+ child on fork.
* `AFL_FRIDA_INST_RANGES` - See `AFL_QEMU_INST_RANGES`
* `AFL_FRIDA_INST_SEED` - Sets the initial seed for the hash function used to
-generate block (and hence edge) IDs. Setting this to a constant value may be
-useful for debugging purposes, e.g., investigating unstable edges.
-* `AFL_FRIDA_INST_TRACE` - Log to stdout the address of executed blocks,
-implies `AFL_FRIDA_INST_NO_OPTIMIZE`.
+ generate block (and hence edge) IDs. Setting this to a constant value may be
+ useful for debugging purposes, e.g., investigating unstable edges.
+* `AFL_FRIDA_INST_TRACE` - Log to stdout the address of executed blocks, implies
+ `AFL_FRIDA_INST_NO_OPTIMIZE`.
* `AFL_FRIDA_INST_TRACE_UNIQUE` - As per `AFL_FRIDA_INST_TRACE`, but each edge
-is logged only once, requires `AFL_FRIDA_INST_NO_OPTIMIZE`.
+ is logged only once, requires `AFL_FRIDA_INST_NO_OPTIMIZE`.
* `AFL_FRIDA_INST_UNSTABLE_COVERAGE_FILE` - File to write DynamoRio format
-coverage information for unstable edges (e.g., to be loaded within IDA
-lighthouse).
+ coverage information for unstable edges (e.g., to be loaded within IDA
+ lighthouse).
* `AFL_FRIDA_JS_SCRIPT` - Set the script to be loaded by the FRIDA scripting
-engine. See [here](Scripting.md) for details.
+ engine. See [here](Scripting.md) for details.
* `AFL_FRIDA_OUTPUT_STDOUT` - Redirect the standard output of the target
-application to the named file (supersedes the setting of `AFL_DEBUG_CHILD`)
+ application to the named file (supersedes the setting of `AFL_DEBUG_CHILD`)
* `AFL_FRIDA_OUTPUT_STDERR` - Redirect the standard error of the target
-application to the named file (supersedes the setting of `AFL_DEBUG_CHILD`)
+ application to the named file (supersedes the setting of `AFL_DEBUG_CHILD`)
* `AFL_FRIDA_PERSISTENT_ADDR` - See `AFL_QEMU_PERSISTENT_ADDR`
* `AFL_FRIDA_PERSISTENT_CNT` - See `AFL_QEMU_PERSISTENT_CNT`
* `AFL_FRIDA_PERSISTENT_DEBUG` - Insert a Breakpoint into the instrumented code
-at `AFL_FRIDA_PERSISTENT_HOOK` and `AFL_FRIDA_PERSISTENT_RET` to allow the user
-to detect issues in the persistent loop using a debugger.
+ at `AFL_FRIDA_PERSISTENT_HOOK` and `AFL_FRIDA_PERSISTENT_RET` to allow the
+ user to detect issues in the persistent loop using a debugger.
* `AFL_FRIDA_PERSISTENT_HOOK` - See `AFL_QEMU_PERSISTENT_HOOK`
* `AFL_FRIDA_PERSISTENT_RET` - See `AFL_QEMU_PERSISTENT_RET`
* `AFL_FRIDA_SECCOMP_FILE` - Write a log of any syscalls made by the target to
-the specified file.
+ the specified file.
* `AFL_FRIDA_STALKER_ADJACENT_BLOCKS` - Configure the number of adjacent blocks
- to fetch when generating instrumented code. By fetching blocks in the same
- order they appear in the original program, rather than the order of execution
- should help reduce locallity and adjacency. This includes allowing us to vector
- between adjancent blocks using a NOP slide rather than an immediate branch.
+  to fetch when generating instrumented code. Fetching blocks in the same
+  order they appear in the original program, rather than in the order of
+  execution, should help with locality and adjacency. This includes allowing us
+  to vector between adjacent blocks using a NOP slide rather than an immediate
+  branch.
* `AFL_FRIDA_STALKER_IC_ENTRIES` - Configure the number of inline cache entries
-stored along-side branch instructions which provide a cache to avoid having to
-call back into FRIDA to find the next block. Default is 32.
+  stored alongside branch instructions which provide a cache to avoid having to
+ call back into FRIDA to find the next block. Default is 32.
* `AFL_FRIDA_STATS_FILE` - Write statistics information about the code being
-instrumented to the given file name. The statistics are written only for the
-child process when new block is instrumented (when the
-`AFL_FRIDA_STATS_INTERVAL` has expired). Note that just because a new path is
-found does not mean a new block needs to be compiled. It could be that
-the existing blocks instrumented have been executed in a different order.
+ instrumented to the given file name. The statistics are written only for the
+  child process when a new block is instrumented (when the
+ `AFL_FRIDA_STATS_INTERVAL` has expired). Note that just because a new path is
+ found does not mean a new block needs to be compiled. It could be that the
+ existing blocks instrumented have been executed in a different order.
* `AFL_FRIDA_STATS_INTERVAL` - The maximum frequency to output statistics
-information. Stats will be written whenever they are updated if the given
-interval has elapsed since last time they were written.
+ information. Stats will be written whenever they are updated if the given
+ interval has elapsed since last time they were written.
* `AFL_FRIDA_TRACEABLE` - Set the child process to be traceable by any process
-to aid debugging and overcome the restrictions imposed by YAMA. Supported on
-Linux only. Permits a non-root user to use `gcore` or similar to collect a core
-dump of the instrumented target. Note that in order to capture the core dump you
-must set a sufficient timeout (using `-t`) to avoid `afl-fuzz` killing the
-process whilst it is being dumped.
+ to aid debugging and overcome the restrictions imposed by YAMA. Supported on
+ Linux only. Permits a non-root user to use `gcore` or similar to collect a
+ core dump of the instrumented target. Note that in order to capture the core
+ dump you must set a sufficient timeout (using `-t`) to avoid `afl-fuzz`
+ killing the process whilst it is being dumped.
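As an example of how two of the FRIDA mode variables above combine (the binary is a placeholder; `-O` selects FRIDA mode, as noted elsewhere in these docs):

```
# AFL_FRIDA_INST_TRACE requires AFL_FRIDA_INST_NO_OPTIMIZE, so set both to log
# the address of every executed block while fuzzing in FRIDA mode.
AFL_FRIDA_INST_NO_OPTIMIZE=1 AFL_FRIDA_INST_TRACE=1 \
  afl-fuzz -O -i input -o output -- ./target_binary @@
```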
## 8) Settings for afl-cmin
diff --git a/docs/fuzzing_binary-only_targets.md b/docs/fuzzing_binary-only_targets.md
index a786fd8b..b3d9ca02 100644
--- a/docs/fuzzing_binary-only_targets.md
+++ b/docs/fuzzing_binary-only_targets.md
@@ -84,7 +84,8 @@ Wine, python3, and the pefile python package installed.
It is included in AFL++.
-For more information, see [qemu_mode/README.wine.md](../qemu_mode/README.wine.md).
+For more information, see
+[qemu_mode/README.wine.md](../qemu_mode/README.wine.md).
### Frida_mode
@@ -169,7 +170,8 @@ Fore more information, see
## Binary rewriters
-An alternative solution are binary rewriters. They are faster then the solutions native to AFL++ but don't always work.
+Binary rewriters are an alternative solution. They are faster than the
+solutions native to AFL++ but don't always work.
### ZAFL
ZAFL is a static rewriting platform supporting x86-64 C/C++,
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index d408aa91..9611d6b7 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -259,6 +259,7 @@ Then build the target. (Usually with `make`)
#### configure
For `configure` build systems this is usually done by:
+
`CC=afl-clang-fast CXX=afl-clang-fast++ ./configure --disable-shared`
Note that if you are using the (better) afl-clang-lto compiler you also have to
@@ -268,6 +269,7 @@ described in [instrumentation/README.lto.md](../instrumentation/README.lto.md).
#### cmake
For `cmake` build systems this is usually done by:
+
`mkdir build; cd build; cmake -DCMAKE_C_COMPILER=afl-cc -DCMAKE_CXX_COMPILER=afl-c++ ..`
Note that if you are using the (better) afl-clang-lto compiler you also have to
@@ -307,8 +309,8 @@ it for a hobby and not professionally :-).
### g) libfuzzer fuzzer harnesses with LLVMFuzzerTestOneInput()
-libfuzzer `LLVMFuzzerTestOneInput()` harnesses are the defacto standard
-for fuzzing, and they can be used with AFL++ (and honggfuzz) as well!
+libfuzzer `LLVMFuzzerTestOneInput()` harnesses are the de facto standard for
+fuzzing, and they can be used with AFL++ (and honggfuzz) as well!
Compiling them is as simple as:
@@ -358,8 +360,11 @@ Put all files from step a) into one directory, e.g., INPUTS.
If the target program is to be called by fuzzing as `bin/target -d INPUTFILE`
then run afl-cmin like this:
+
`afl-cmin -i INPUTS -o INPUTS_UNIQUE -- bin/target -d @@`
-Note that the INPUTFILE argument that the target program would read from has to be set as `@@`.
+
+Note that the INPUTFILE argument that the target program would read from has to
+be set as `@@`.
If the target reads from stdin instead, just omit the `@@` as this is the
default.
@@ -420,22 +425,25 @@ as test data in there.
If you do not want anything special, the defaults are already usually best,
hence all you need is to specify the seed input directory with the result of
step [2a) Collect inputs](#a-collect-inputs):
+
`afl-fuzz -i input -o output -- bin/target -d @@`
-Note that the directory specified with -o will be created if it does not exist.
+
+Note that the directory specified with `-o` will be created if it does not
+exist.
It can be valuable to run afl-fuzz in a screen or tmux shell so you can log off,
or afl-fuzz is not aborted if you are running it in a remote ssh session where
-the connection fails in between.
-Only do that though once you have verified that your fuzzing setup works!
-Run it like `screen -dmS afl-main -- afl-fuzz -M main-$HOSTNAME -i ...`
-and it will start away in a screen session. To enter this session, type
-`screen -r afl-main`. You see - it makes sense to name the screen session
-same as the afl-fuzz -M/-S naming :-)
-For more information on screen or tmux, check their documentation.
+the connection fails in between. Only do that though once you have verified that
+your fuzzing setup works! Run it like `screen -dmS afl-main -- afl-fuzz -M
+main-$HOSTNAME -i ...` and it will start detached in a screen session. To enter
+this session, type `screen -r afl-main`. You see - it makes sense to name the
+screen session the same as the afl-fuzz -M/-S naming :-) For more information on
+screen or tmux, check their documentation.
If you need to stop and re-start the fuzzing, use the same command line options
(or even change them by selecting a different power schedule or another mutation
mode!) and switch the input directory with a dash (`-`):
+
`afl-fuzz -i - -o output -- bin/target -d @@`
Adding a dictionary is helpful. See the directory
@@ -457,12 +465,13 @@ handling in the target. Play around with various -m values until you find one
that safely works for all your input seeds (if you have good ones) and then
double or quadruple that.
-By default afl-fuzz never stops fuzzing. To terminate AFL++, press
-Control-C or send a signal SIGINT. You can limit the number of executions or
-approximate runtime in seconds with options also.
+By default afl-fuzz never stops fuzzing. To terminate AFL++, press Control-C or
+send a SIGINT signal. You can also limit the number of executions or the
+approximate runtime in seconds with options.
When you start afl-fuzz you will see a user interface that shows what the status
is:
+

All labels are explained in [status_screen.md](status_screen.md).
@@ -528,8 +537,8 @@ All other secondaries should be used like this:
Also, it is recommended to set `export AFL_IMPORT_FIRST=1` to load test cases
from other fuzzers in the campaign first.
-If you have a large corpus, a corpus from a previous run or are fuzzing in
-a CI, then also set `export AFL_CMPLOG_ONLY_NEW=1` and `export AFL_FAST_CAL=1`.
+If you have a large corpus, a corpus from a previous run or are fuzzing in a CI,
+then also set `export AFL_CMPLOG_ONLY_NEW=1` and `export AFL_FAST_CAL=1`.
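+
+As a minimal sketch (the instance name and target command line are
+illustrative), a secondary started with these variables could look like this:
+
+```shell
+# illustrative secondary instance - adjust the name and target to your campaign
+export AFL_IMPORT_FIRST=1
+export AFL_CMPLOG_ONLY_NEW=1
+export AFL_FAST_CAL=1
+afl-fuzz -S secondary-1 -i input -o output -- bin/target -d @@
+```
+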
You can also use different fuzzers. If you are using AFL spinoffs or AFL
conforming fuzzers, then just use the same -o directory and give it a unique
@@ -553,11 +562,10 @@ recommended!
### d) Using multiple machines for fuzzing
-Maybe you have more than one machine you want to fuzz the same target on.
-Start the `afl-fuzz` (and perhaps libfuzzer, honggfuzz, ...)
-orchestra as you like, just ensure that your have one and only one `-M`
-instance per server, and that its name is unique, hence the recommendation
-for `-M main-$HOSTNAME`.
+Maybe you have more than one machine you want to fuzz the same target on. Start
+the `afl-fuzz` (and perhaps libfuzzer, honggfuzz, ...) orchestra as you like,
+just ensure that you have one and only one `-M` instance per server, and that
+its name is unique, hence the recommendation for `-M main-$HOSTNAME`.
Now there are three strategies on how you can sync between the servers:
* never: sounds weird, but this makes every server an island and has the chance
diff --git a/docs/ideas.md b/docs/ideas.md
index 8193983b..1a578313 100644
--- a/docs/ideas.md
+++ b/docs/ideas.md
@@ -1,31 +1,29 @@
# Ideas for AFL++
-In the following, we describe a variety of ideas that could be implemented
-for future AFL++ versions.
+In the following, we describe a variety of ideas that could be implemented for
+future AFL++ versions.
## Analysis software
-Currently analysis is done by using afl-plot, which is rather outdated.
-A GTK or browser tool to create run-time analysis based on fuzzer_stats,
-queue/id* information and plot_data that allows for zooming in and out,
-changing min/max display values etc. and doing that for a single run,
-different runs and campaigns vs campaigns.
-Interesting values are execs, and execs/s, edges discovered (total, when
-each edge was discovered and which other fuzzer share finding that edge),
-test cases executed.
-It should be clickable which value is X and Y axis, zoom factor, log scaling
-on-off, etc.
+Currently, analysis is done by using afl-plot, which is rather outdated. A GTK
+or browser tool could create run-time analyses based on fuzzer_stats, queue/id*
+information and plot_data, allowing for zooming in and out, changing min/max
+display values, etc., and doing that for a single run, different runs, and
+campaigns vs. campaigns. Interesting values are execs, execs/s, edges discovered
+(total, when each edge was discovered, and which other fuzzers share finding
+that edge), and test cases executed. It should be selectable which value is on
+the X and Y axis, plus the zoom factor, log scaling on/off, etc.
Mentor: vanhauser-thc
## WASM Instrumentation
Currently, AFL++ can be used for source code fuzzing and traditional binaries.
-With the rise of WASM as compile target, however, a novel way of
-instrumentation needs to be implemented for binaries compiled to Webassembly.
-This can either be done by inserting instrumentation directly into the
-WASM AST, or by patching feedback into a WASM VMs of choice, similar to
-the current Unicorn instrumentation.
+With the rise of WASM as a compile target, however, a novel way of
+instrumentation needs to be implemented for binaries compiled to WebAssembly.
+This can either be done by inserting instrumentation directly into the WASM
+AST, or by patching feedback into a WASM VM of choice, similar to the current
+Unicorn instrumentation.
Mentor: any
@@ -34,25 +32,26 @@ Mentor: any
Other programming languages also use llvm, hence they could (easily?) be supported
for fuzzing, e.g., mono, swift, go, kotlin native, fortran, ...
-GCC also supports: Objective-C, Fortran, Ada, Go, and D
-(according to [Gcc homepage](https://gcc.gnu.org/))
+GCC also supports: Objective-C, Fortran, Ada, Go, and D (according to
+[Gcc homepage](https://gcc.gnu.org/))
-LLVM is also used by: Rust, LLGo (Go), kaleidoscope (Haskell), flang (Fortran), emscripten (JavaScript, WASM), ilwasm (CIL (C#))
-(according to [LLVM frontends](https://gist.github.com/axic/62d66fb9d8bccca6cc48fa9841db9241))
+LLVM is also used by: Rust, LLGo (Go), kaleidoscope (Haskell), flang (Fortran),
+emscripten (JavaScript, WASM), ilwasm (CIL (C#)) (according to
+[LLVM frontends](https://gist.github.com/axic/62d66fb9d8bccca6cc48fa9841db9241))
Mentor: vanhauser-thc
## Machine Learning
-Something with machine learning, better than [NEUZZ](https://github.com/dongdongshe/neuzz) :-)
-Either improve a single mutator thorugh learning of many different bugs
-(a bug class) or gather deep insights about a single target beforehand
-(CFG, DFG, VFG, ...?) and improve performance for a single target.
+Something with machine learning, better than
+[NEUZZ](https://github.com/dongdongshe/neuzz) :-) Either improve a single
+mutator through learning of many different bugs (a bug class) or gather deep
+insights about a single target beforehand (CFG, DFG, VFG, ...?) and improve
+performance for a single target.
Mentor: domenukk
## Your idea!
-Finally, we are open to proposals!
-Create an issue at https://github.com/AFLplusplus/AFLplusplus/issues and let's discuss :-)
-
+Finally, we are open to proposals! Create an issue at
+https://github.com/AFLplusplus/AFLplusplus/issues and let's discuss :-)
\ No newline at end of file
diff --git a/docs/important_changes.md b/docs/important_changes.md
index 82de054f..9d4523e8 100644
--- a/docs/important_changes.md
+++ b/docs/important_changes.md
@@ -1,6 +1,7 @@
# Important changes in AFL++
-This document lists important changes in AFL++, for example, major behavior changes.
+This document lists important changes in AFL++, for example, major behavior
+changes.
## From version 3.00 onwards
@@ -10,8 +11,8 @@ iOS etc.
With AFL++ 3.15 we introduced the following changes from previous behaviors:
* Also -M main mode does not do deterministic fuzzing by default anymore
- * afl-cmin and afl-showmap -Ci now descent into subdirectories like
- afl-fuzz -i does (but note that afl-cmin.bash does not)
+ * afl-cmin and afl-showmap -Ci now descend into subdirectories like afl-fuzz
+ -i does (but note that afl-cmin.bash does not)
With AFL++ 3.14 we introduced the following changes from previous behaviors:
* afl-fuzz: deterministic fuzzing is not a default for -M main anymore
@@ -31,22 +32,22 @@ behaviors and defaults:
* The gcc_plugin was replaced with a new version submitted by AdaCore that
supports more features. Thank you!
* qemu_mode got upgraded to QEMU 5.1, but to be able to build this a current
- ninja build tool version and python3 setuptools are required.
- qemu_mode also got new options like snapshotting, instrumenting specific
- shared libraries, etc. Additionally QEMU 5.1 supports more CPU targets so
- this is really worth it.
+ ninja build tool version and python3 setuptools are required. qemu_mode also
+ got new options like snapshotting, instrumenting specific shared libraries,
+ etc. Additionally QEMU 5.1 supports more CPU targets so this is really worth
+ it.
* When instrumenting targets, afl-cc will not supersede optimizations anymore
if any were given. This allows fuzzing targets built regularly, like those
for debug or release versions.
* afl-fuzz:
- * if neither -M or -S is specified, `-S default` is assumed, so more
- fuzzers can easily be added later
- * `-i` input directory option now descends into subdirectories. It also
- does not fatal on crashes and too large files, instead it skips them
- and uses them for splicing mutations
+ * if neither -M nor -S is specified, `-S default` is assumed, so more fuzzers
+ can easily be added later
+ * the `-i` input directory option now descends into subdirectories. It also
+ no longer aborts on crashes and too-large files; instead, it skips them and
+ uses them for splicing mutations
* -m none is now default, set memory limits (in MB) with, e.g., -m 250
- * deterministic fuzzing is now disabled by default (unless using -M) and
- can be enabled with -D
+ * deterministic fuzzing is now disabled by default (unless using -M) and can
+ be enabled with -D
* a caching of test cases can now be performed and can be modified by
editing config.h for TESTCASE_CACHE or by specifying the environment
variable `AFL_TESTCACHE_SIZE` (in MB). Good values are between 50-500
diff --git a/docs/rpc_statsd.md b/docs/rpc_statsd.md
index 9b3d8d40..003b9c79 100644
--- a/docs/rpc_statsd.md
+++ b/docs/rpc_statsd.md
@@ -1,18 +1,29 @@
# Remote monitoring and metrics visualization
-AFL++ can send out metrics as StatsD messages. For remote monitoring and visualization of the metrics, you can set up a tool chain. For example, with Prometheus and Grafana. All tools are free and open source.
+AFL++ can send out metrics as StatsD messages. For remote monitoring and
+visualization of the metrics, you can set up a tool chain. For example, with
+Prometheus and Grafana. All tools are free and open source.
-This enables you to create nice and readable dashboards containing all the information you need on your fuzzer instances. There is no need to write your own statistics parsing system, deploy and maintain it to all your instances, and sync with your graph rendering system.
+This enables you to create nice and readable dashboards containing all the
+information you need on your fuzzer instances. There is no need to write your
+own statistics parsing system, deploy and maintain it to all your instances, and
+sync with your graph rendering system.
-Compared to the default integrated UI of AFL++, this can help you to visualize trends and the fuzzing state over time. You might be able to see when the fuzzing process has reached a state of no progress and visualize what are the "best strategies" for your targets (according to your own criteria). You can do so without logging into each instance individually.
+Compared to the default integrated UI of AFL++, this can help you to visualize
+trends and the fuzzing state over time. You might be able to see when the
+fuzzing process has reached a state of no progress and visualize what the
+"best strategies" are for your targets (according to your own criteria). You
+can do so without logging into each instance individually.

-This is an example visualization with Grafana. The dashboard can be imported with [this JSON template](resources/grafana-afl++.json).
+This is an example visualization with Grafana. The dashboard can be imported
+with [this JSON template](resources/grafana-afl++.json).
## AFL++ metrics and StatsD
-StatsD allows you to receive and aggregate metrics from a wide range of applications and retransmit them to a backend of your choice.
+StatsD allows you to receive and aggregate metrics from a wide range of
+applications and retransmit them to a backend of your choice.
From AFL++, StatsD can receive the following metrics:
- cur_path
@@ -36,35 +47,57 @@ From AFL++, StatsD can receive the following metrics:
- var_byte_count
- variable_paths
-Depending on your StatsD server, you will be able to monitor, trigger alerts, or perform actions based on these metrics (for example: alert on slow exec/s for a new build, threshold of crashes, time since last crash > X, and so on).
+Depending on your StatsD server, you will be able to monitor, trigger alerts, or
+perform actions based on these metrics (for example: alert on slow exec/s for a
+new build, threshold of crashes, time since last crash > X, and so on).
## Setting environment variables in AFL++
-1. To enable the StatsD metrics collection on your fuzzer instances, set the environment variable `AFL_STATSD=1`. By default, AFL++ will send the metrics over UDP to 127.0.0.1:8125.
+1. To enable the StatsD metrics collection on your fuzzer instances, set the
+ environment variable `AFL_STATSD=1`. By default, AFL++ will send the metrics
+ over UDP to 127.0.0.1:8125.
-2. To enable tags for each metric based on their format (banner and afl_version), set the environment variable `AFL_STATSD_TAGS_FLAVOR`. By default, no tags will be added to the metrics.
+2. To enable tags for each metric based on their format (banner and
+ afl_version), set the environment variable `AFL_STATSD_TAGS_FLAVOR`. By
+ default, no tags will be added to the metrics.
The available values are the following:
- `dogstatsd`
- `influxdb`
- `librato`
- `signalfx`
-
- For more information on environment variables, see [env_variables.md](env_variables.md).
- Note: When using multiple fuzzer instances with StatsD it is *strongly* recommended to set up `AFL_STATSD_TAGS_FLAVOR` to match your StatsD server. This will allow you to see individual fuzzer performance, detect bad ones, and see the progress of each strategy.
+ For more information on environment variables, see
+ [env_variables.md](env_variables.md).
-3. Optional: To set the host and port of your StatsD daemon, set `AFL_STATSD_HOST` and `AFL_STATSD_PORT`. The default values are `localhost` and `8125`.
+ Note: When using multiple fuzzer instances with StatsD it is *strongly*
+ recommended to set up `AFL_STATSD_TAGS_FLAVOR` to match your StatsD server.
+ This will allow you to see individual fuzzer performance, detect bad ones,
+ and see the progress of each strategy.
+
+3. Optional: To set the host and port of your StatsD daemon, set
+ `AFL_STATSD_HOST` and `AFL_STATSD_PORT`. The default values are `localhost`
+ and `8125`.
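+
+Putting these variables together, a single instance could be started like this
+(host, port, and the target command line are illustrative):
+
+```shell
+# illustrative values - point AFL_STATSD_HOST/AFL_STATSD_PORT at your own daemon
+AFL_STATSD=1 AFL_STATSD_HOST=127.0.0.1 AFL_STATSD_PORT=8125 \
+  AFL_STATSD_TAGS_FLAVOR=dogstatsd afl-fuzz -i in -o out -- ./bin/my-application @@
+```
+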
## Installing and setting up StatsD, Prometheus, and Grafana
-The easiest way to install and set up the infrastructure is with Docker and Docker Compose.
+The easiest way to install and set up the infrastructure is with Docker and
+Docker Compose.
-Depending on your fuzzing setup and infrastructure, you may not want to run these applications on your fuzzer instances. This setup may be modified before use in a production environment; for example, adding passwords, creating volumes for storage, tweaking the metrics gathering to get host metrics (CPU, RAM, and so on).
+Depending on your fuzzing setup and infrastructure, you may not want to run
+these applications on your fuzzer instances. This setup may be modified before
+use in a production environment; for example, adding passwords, creating volumes
+for storage, tweaking the metrics gathering to get host metrics (CPU, RAM, and
+so on).
-For all your fuzzing instances, only one instance of Prometheus and Grafana is required. The [statsd exporter](https://registry.hub.docker.com/r/prom/statsd-exporter) converts the StatsD metrics to Prometheus. If you are using a provider that supports StatsD directly, you can skip this part of the setup."
+For all your fuzzing instances, only one instance of Prometheus and Grafana is
+required. The
+[statsd exporter](https://registry.hub.docker.com/r/prom/statsd-exporter)
+converts the StatsD metrics to Prometheus. If you are using a provider that
+supports StatsD directly, you can skip this part of the setup.
-You can create and move the infrastructure files into a directory of your choice. The directory will store all the required configuration files.
+You can create and move the infrastructure files into a directory of your
+choice. The directory will store all the required configuration files.
To install and set up Prometheus and Grafana:
@@ -76,6 +109,7 @@ To install and set up Prometheus and Grafana:
```
2. Create a `docker-compose.yml` containing the following:
+
```yml
version: '3'
@@ -109,7 +143,7 @@ To install and set up Prometheus and Grafana:
- "8125:9125/udp"
networks:
- statsd-net
-
+
grafana:
image: grafana/grafana
container_name: grafana
@@ -134,7 +168,8 @@ To install and set up Prometheus and Grafana:
```
4. Create a `statsd_mapping.yml` containing the following:
- ```yml
+
+ ```yml
mappings:
- match: "fuzzing.*"
name: "fuzzing"
@@ -152,4 +187,4 @@ To run your fuzzing instances:
AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -M test-fuzzer-1 -i i -o o [./bin/my-application] @@
AFL_STATSD_TAGS_FLAVOR=dogstatsd AFL_STATSD=1 afl-fuzz -S test-fuzzer-2 -i i -o o [./bin/my-application] @@
...
-```
+```
\ No newline at end of file
diff --git a/frida_mode/Scripting.md b/frida_mode/Scripting.md
index fd4282db..63ab1718 100644
--- a/frida_mode/Scripting.md
+++ b/frida_mode/Scripting.md
@@ -334,8 +334,8 @@ Interceptor.replace(LLVMFuzzerTestOneInput, cm.My_LLVMFuzzerTestOneInput);
### Hooking `main`
-Lastly, it should be noted that using FRIDA mode's scripting support to hook
-the `main` function is a special case. This is because the `main` function is
+Lastly, it should be noted that using FRIDA mode's scripting support to hook the
+`main` function is a special case. This is because the `main` function is
already hooked by the FRIDA mode engine itself and hence the function `main` (or
at least its first basic block) has already been compiled by Stalker, ready for
execution. Hence any attempt to use `Interceptor.replace` like in the example
@@ -405,22 +405,22 @@ Consider the [following](test/js/test2.c) test code...
#include
const uint32_t crc32_tab[] = {
- 0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f,
+ 0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f,
...
- 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+ 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
};
uint32_t
crc32(const void *buf, size_t size)
{
- const uint8_t *p = buf;
- uint32_t crc;
- crc = ~0U;
- while (size--)
- crc = crc32_tab[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
- return crc ^ ~0U;
+ const uint8_t *p = buf;
+ uint32_t crc;
+ crc = ~0U;
+ while (size--)
+ crc = crc32_tab[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
+ return crc ^ ~0U;
}
/*
diff --git a/instrumentation/README.gcc_plugin.md b/instrumentation/README.gcc_plugin.md
index f251415b..ef38662b 100644
--- a/instrumentation/README.gcc_plugin.md
+++ b/instrumentation/README.gcc_plugin.md
@@ -1,7 +1,7 @@
# GCC-based instrumentation for afl-fuzz
-For the general instruction manual, see [../README.md](../README.md).
-For the LLVM-based instrumentation, see [README.llvm.md](README.llvm.md).
+For the general instruction manual, see [../README.md](../README.md). For the
+LLVM-based instrumentation, see [README.llvm.md](README.llvm.md).
This document describes how to build and use `afl-gcc-fast` and `afl-g++-fast`,
which instrument the target with the help of gcc plugins.
--
cgit 1.4.1
From 415be06c54a61ae87fd8a99da2ee12d1ea5d1638 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sat, 4 Dec 2021 21:29:15 +0100
Subject: Add links to orphaned files
---
README.md | 3 ++-
docs/afl-fuzz_approach.md | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/README.md b/README.md
index b70eb1ab..21724696 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,8 @@ Here is some information to get you started:
* For releases, see the
[Releases tab](https://github.com/AFLplusplus/AFLplusplus/releases) and
[branches](#branches). Also take a look at the list of
- [important changes in AFL++](docs/important_changes.md).
+ [important changes in AFL++](docs/important_changes.md) and the list of
+ [features](docs/features.md).
* If you want to use AFL++ for your academic work, check the
[papers page](https://aflplus.plus/papers/) on the website.
* To cite our work, look at the [Cite](#cite) section.
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 3804f5a0..a72087c2 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -466,6 +466,7 @@ cd ../../
sudo make install
```
+To learn more about remote monitoring and metrics visualization with StatsD, see [rpc_statsd.md](rpc_statsd.md).
### Addendum: status and plot files
--
cgit 1.4.1
From 89df436290c67b1c03122bfe5c68cf4f92e581c0 Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sun, 5 Dec 2021 19:03:48 +0100
Subject: Fix broken links - 1st run
---
docs/INSTALL.md | 9 ++++----
docs/afl-fuzz_approach.md | 14 ++++++------
docs/env_variables.md | 11 +++++-----
docs/fuzzing_binary-only_targets.md | 8 +++----
docs/fuzzing_in_depth.md | 9 ++++----
frida_mode/Scripting.md | 4 ++--
instrumentation/README.llvm.md | 43 ++++++++++++++++++++++++++++++++++++-
utils/README.md | 2 +-
8 files changed, 72 insertions(+), 28 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index 9d1309fe..906d3f8e 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -60,10 +60,9 @@ These build targets exist:
* unit: perform unit tests (based on cmocka)
* help: shows these build options
-[Unless you are on Mac OS
-X](https://developer.apple.com/library/archive/qa/qa1118/_index.html), you can
-also build statically linked versions of the AFL++ binaries by passing the
-`STATIC=1` argument to make:
+[Unless you are on Mac OS X](https://developer.apple.com/library/archive/qa/qa1118/_index.html),
+you can also build statically linked versions of the AFL++ binaries by passing
+the `STATIC=1` argument to make:
```shell
make STATIC=1
@@ -169,5 +168,5 @@ sysctl kern.sysv.shmall=98304
```
See
-[https://www.spy-hill.com/help/apple/SharedMemory.html](https://www.spy-hill.com/help/apple/SharedMemory.html)
+[http://www.spy-hill.com/help/apple/SharedMemory.html](http://www.spy-hill.com/help/apple/SharedMemory.html)
for documentation for these settings and how to make them permanent.
\ No newline at end of file
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index a72087c2..01888935 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -243,9 +243,10 @@ now. It tells you about the current stage, which can be any of:
together two random inputs from the queue at some arbitrarily selected
midpoint.
- sync - a stage used only when `-M` or `-S` is set (see
- [parallel_fuzzing.md](parallel_fuzzing.md)). No real fuzzing is involved, but
- the tool scans the output from other fuzzers and imports test cases as
- necessary. The first time this is done, it may take several minutes or so.
+ [fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores)).
+ No real fuzzing is involved, but the tool scans the output from other fuzzers
+ and imports test cases as necessary. The first time this is done, it may take
+ several minutes or so.
The remaining fields should be fairly self-evident: there's the exec count
progress indicator for the current stage, a global exec counter, and a benchmark
@@ -254,8 +255,8 @@ to another, but the benchmark should be ideally over 500 execs/sec most of the
time - and if it stays below 100, the job will probably take very long.
The fuzzer will explicitly warn you about slow targets, too. If this happens,
-see the [perf_tips.md](perf_tips.md) file included with the fuzzer for ideas on
-how to speed things up.
+see [best_practices.md#improving-speed](best_practices.md#improving-speed)
+for ideas on how to speed things up.
### Findings in depth
@@ -396,7 +397,8 @@ comparing it to the number of logical cores on the system.
If the value is shown in green, you are using fewer CPU cores than available on
your system and can probably parallelize to improve performance; for tips on how
-to do that, see [parallel_fuzzing.md](parallel_fuzzing.md).
+to do that, see
+[fuzzing_in_depth.md:3c) Using multiple cores](fuzzing_in_depth.md#c-using-multiple-cores).
If the value is shown in red, your CPU is *possibly* oversubscribed, and running
additional fuzzers may not give you any benefits.
diff --git a/docs/env_variables.md b/docs/env_variables.md
index 86ebf25c..0952b960 100644
--- a/docs/env_variables.md
+++ b/docs/env_variables.md
@@ -583,10 +583,11 @@ The QEMU wrapper used to instrument binary-only code supports several settings:
The FRIDA wrapper used to instrument binary-only code supports many of the same
options as `afl-qemu-trace`, but also has a number of additional advanced
-options. These are listed in brief below (see [here](../frida_mode/README.md)
-for more details). These settings are provided for compatibiltiy with QEMU mode,
-the preferred way to configure FRIDA mode is through its
-[scripting](../frida_mode/Scripting.md) support.
+options. These are listed in brief below (see
+[frida_mode/README.md](../frida_mode/README.md) for more details). These
+settings are provided for compatibiltiy with QEMU mode, the preferred way to
+configure FRIDA mode is through its [scripting](../frida_mode/Scripting.md)
+support.
* `AFL_FRIDA_DEBUG_MAPS` - See `AFL_QEMU_DEBUG_MAPS`
* `AFL_FRIDA_DRIVER_NO_HOOK` - See `AFL_QEMU_DRIVER_NO_HOOK`. When using the
@@ -627,7 +628,7 @@ the preferred way to configure FRIDA mode is through its
coverage information for unstable edges (e.g., to be loaded within IDA
lighthouse).
* `AFL_FRIDA_JS_SCRIPT` - Set the script to be loaded by the FRIDA scripting
- engine. See [here](Scripting.md) for details.
+ engine. See [frida_mode/Scripting.md](../frida_mode/Scripting.md) for details.
* `AFL_FRIDA_OUTPUT_STDOUT` - Redirect the standard output of the target
application to the named file (supersedes the setting of `AFL_DEBUG_CHILD`)
* `AFL_FRIDA_OUTPUT_STDERR` - Redirect the standard error of the target
diff --git a/docs/fuzzing_binary-only_targets.md b/docs/fuzzing_binary-only_targets.md
index eaed3a91..fd18b5c1 100644
--- a/docs/fuzzing_binary-only_targets.md
+++ b/docs/fuzzing_binary-only_targets.md
@@ -107,10 +107,10 @@ For additional instructions and caveats, see
[frida_mode/README.md](../frida_mode/README.md).
If possible, you should use the persistent mode, see
-[qemu_frida/README.md](../qemu_frida/README.md). The mode is approximately 2-5x
-slower than compile-time instrumentation, and is less conducive to
-parallelization. But for binary-only fuzzing, it gives a huge speed improvement
-if it is possible to use.
+[instrumentation/README.persistent_mode.md](../instrumentation/README.persistent_mode.md).
+The mode is approximately 2-5x slower than compile-time instrumentation, and is
+less conducive to parallelization. But for binary-only fuzzing, it gives a huge
+speed improvement if it is possible to use.
If you want to fuzz a binary-only library, then you can fuzz it with frida-gum
via frida_mode/. You will have to write a harness to call the target function in
diff --git a/docs/fuzzing_in_depth.md b/docs/fuzzing_in_depth.md
index 4a1ddf45..29e8f817 100644
--- a/docs/fuzzing_in_depth.md
+++ b/docs/fuzzing_in_depth.md
@@ -153,12 +153,12 @@ only instrument parts of the target that you are interested in:
There are many more options and modes available, however, these are most of the
time less effective. See:
-* [instrumentation/README.ctx.md](../instrumentation/README.ctx.md)
-* [instrumentation/README.ngram.md](../instrumentation/README.ngram.md)
+* [instrumentation/README.llvm.md#6) AFL++ Context Sensitive Branch Coverage](../instrumentation/README.llvm.md#6-afl-context-sensitive-branch-coverage)
+* [instrumentation/README.llvm.md#7) AFL++ N-Gram Branch Coverage](../instrumentation/README.llvm.md#7-afl-n-gram-branch-coverage)
AFL++ performs "never zero" counting in its bitmap. You can read more about this
here:
-* [instrumentation/README.neverzero.md](../instrumentation/README.neverzero.md)
+* [instrumentation/README.llvm.md#8-neverzero-counters](../instrumentation/README.llvm.md#8-neverzero-counters)
### c) Selecting sanitizers
@@ -474,7 +474,8 @@ is:

-All labels are explained in [status_screen.md](status_screen.md).
+All labels are explained in
+[afl-fuzz_approach.md#understanding-the-status-screen](afl-fuzz_approach.md#understanding-the-status-screen).
### b) Keeping memory use and timeouts in check
diff --git a/frida_mode/Scripting.md b/frida_mode/Scripting.md
index 63ab1718..ad86fdd3 100644
--- a/frida_mode/Scripting.md
+++ b/frida_mode/Scripting.md
@@ -109,8 +109,8 @@ Afl.setPersistentAddress(address);
A persistent hook can be implemented using a conventional shared object, sample
source code for a hook suitable for the prototype of `LLVMFuzzerTestOneInput`
-can be found in [hook/hook.c](hook/hook.c). This can be configured using code
-similar to the following.
+can be found in [hook/](hook/). This can be configured using code similar to the
+following.
```js
const path = Afl.module.path;
diff --git a/instrumentation/README.llvm.md b/instrumentation/README.llvm.md
index fa025643..ca9ce933 100644
--- a/instrumentation/README.llvm.md
+++ b/instrumentation/README.llvm.md
@@ -234,4 +234,45 @@ are 2-16.
It is highly recommended to increase the MAP_SIZE_POW2 definition in config.h to
at least 18 and maybe up to 20 for this as otherwise too many map collisions
-occur.
\ No newline at end of file
+occur.
+
+## 8) NeverZero counters
+
+In larger, complex, or reiterative programs, the byte-sized counters that
+collect the edge coverage can easily fill up and wrap around. This is not that
+much of an issue - unless, by chance, it wraps just to a value of zero when the
+program execution ends. In this case, afl-fuzz is not able to see that the edge
+has been accessed and will ignore it.
+
+NeverZero prevents this behavior. If a counter wraps, it jumps over the value 0
+directly to a 1. This improves path discovery (by a very small amount) at a very
+low cost (one instruction per edge).
+
+(The alternative of saturated counters has been tested also and proved to be
+inferior in terms of path discovery.)
+
+This is implemented in afl-gcc and afl-gcc-fast; however, for llvm_mode this is
+optional if multithread safe counters are selected or the llvm version is below
+9 - as there are severe performance costs in these cases.
+
+If you want to enable this for llvm versions below 9 or thread safe counters,
+then set
+
+```
+export AFL_LLVM_NOT_ZERO=1
+```
+
+In case you are on llvm 9 or greater and you do not want this behavior, then you
+can set:
+
+```
+AFL_LLVM_SKIP_NEVERZERO=1
+```
+
+If the target does not have extensive loops or functions that are called a lot
+then this can give a small performance boost.
+
+Please note that the default counter implementations are not thread safe!
+
+Support for thread safe counters in LLVM CLASSIC mode can be activated by
+setting `AFL_LLVM_THREADSAFE_INST=1`.
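+
+As a sketch (the single-file build is illustrative; a real target would use its
+own build system), combining thread safe counters with NeverZero could look
+like this:
+
+```shell
+# illustrative single-file build with thread safe counters and NeverZero
+AFL_LLVM_THREADSAFE_INST=1 AFL_LLVM_NOT_ZERO=1 afl-clang-fast -o target target.c
+```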
\ No newline at end of file
diff --git a/utils/README.md b/utils/README.md
index 5f5745b9..debc86e8 100644
--- a/utils/README.md
+++ b/utils/README.md
@@ -48,7 +48,7 @@ Here's a quick overview of the stuff you can find in this directory:
- defork - intercept fork() in targets
- distributed_fuzzing - a sample script for synchronizing fuzzer instances
- across multiple machines (see parallel_fuzzing.md).
+ across multiple machines.
- libdislocator - like ASAN but lightweight.
--
cgit 1.4.1
From bb506de0b809f97a4221ee1b6e040dcb5f9ca56a Mon Sep 17 00:00:00 2001
From: llzmb <46303940+llzmb@users.noreply.github.com>
Date: Sun, 5 Dec 2021 19:04:45 +0100
Subject: Fix various missed issues - 1st run
---
custom_mutators/gramatron/README.md | 43 ++++-----
dictionaries/README.md | 20 ++--
docs/afl-fuzz_approach.md | 11 ++-
docs/custom_mutators.md | 2 +-
docs/env_variables.md | 2 +-
docs/features.md | 4 +-
utils/autodict_ql/readme.md | 180 ++++++++++++++++++++++--------------
utils/libdislocator/README.md | 29 +++---
8 files changed, 168 insertions(+), 123 deletions(-)
(limited to 'docs/afl-fuzz_approach.md')
diff --git a/custom_mutators/gramatron/README.md b/custom_mutators/gramatron/README.md
index 5e10f97b..8aa0cc44 100644
--- a/custom_mutators/gramatron/README.md
+++ b/custom_mutators/gramatron/README.md
@@ -1,17 +1,17 @@
# GramaTron
GramaTron is a coverage-guided fuzzer that uses grammar automatons to perform
-grammar-aware fuzzing. Technical details about our framework are available
-in the [ISSTA'21 paper](https://nebelwelt.net/files/21ISSTA.pdf).
-The artifact to reproduce the experiments presented in the paper are present
-in `artifact/`. Instructions to run a sample campaign and incorporate new
-grammars is presented below:
+grammar-aware fuzzing. Technical details about our framework are available in
+the [ISSTA'21 paper](https://nebelwelt.net/files/21ISSTA.pdf). The artifact to
+reproduce the experiments presented in the paper is present in `artifact/`.
+Instructions to run a sample campaign and incorporate new grammars are presented
+below:
-# Compiling
+## Compiling
Execute `./build_gramatron_mutator.sh`.
-# Running
+## Running
You have to set the grammar file to use with `GRAMATRON_AUTOMATION`:
@@ -23,26 +23,27 @@ export GRAMATRON_AUTOMATION=grammars/ruby/source_automata.json
afl-fuzz -i in -o out -- ./target
```
-# Adding and testing a new grammar
+## Adding and testing a new grammar
-- Specify in a JSON format for CFG. Examples are correspond `source.json` files
+- Specify the grammar in a JSON format for the CFG. Examples are the
+  corresponding `source.json` files.
- Run the automaton generation script (in `src/gramfuzz-mutator/preprocess`)
which will place the generated automaton in the same folder.
-```
-./preprocess/prep_automaton.sh [stack_limit]
+ ```
+ ./preprocess/prep_automaton.sh <grammar_file> <start_symbol> [stack_limit]
-E.g., ./preprocess/prep_automaton.sh ~/grammars/ruby/source.json PROGRAM
-```
+ E.g., ./preprocess/prep_automaton.sh ~/grammars/ruby/source.json PROGRAM
+ ```
-- If the grammar has no self-embedding rules then you do not need to pass the
- stack limit parameter. However, if it does have self-embedding rules then you
+- If the grammar has no self-embedding rules, then you do not need to pass the
+ stack limit parameter. However, if it does have self-embedding rules, then you
need to pass the stack limit parameter. We recommend starting with `5` and
- then increasing it if you need more complexity
-- To sanity-check that the automaton is generating inputs as expected you can use the `test` binary housed in `src/gramfuzz-mutator`
+ then increasing it if you need more complexity.
+- To sanity-check that the automaton is generating inputs as expected, you can
+ use the `test` binary housed in `src/gramfuzz-mutator`.
-```
-./test SanityCheck
+ ```
+ ./test SanityCheck <automaton_file>
-E.g., ./test SanityCheck ~/grammars/ruby/source_automata.json
-```
\ No newline at end of file
+ E.g., ./test SanityCheck ~/grammars/ruby/source_automata.json
+ ```
\ No newline at end of file
diff --git a/dictionaries/README.md b/dictionaries/README.md
index f3b8a9e5..0b3b4d90 100644
--- a/dictionaries/README.md
+++ b/dictionaries/README.md
@@ -2,17 +2,17 @@
(See [../README.md](../README.md) for the general instruction manual.)
-This subdirectory contains a set of dictionaries that can be used in
-conjunction with the -x option to allow the fuzzer to effortlessly explore the
-grammar of some of the more verbose data formats or languages.
+This subdirectory contains a set of dictionaries that can be used in conjunction
+with the -x option to allow the fuzzer to effortlessly explore the grammar of
+some of the more verbose data formats or languages.
-These sets were done by Michal Zalewski, various contributors, and imported
-from oss-fuzz, go-fuzz and libfuzzer.
+These sets were done by Michal Zalewski, various contributors, and imported from
+oss-fuzz, go-fuzz and libfuzzer.
Custom dictionaries can be added at will. They should consist of a
reasonably-sized set of rudimentary syntax units that the fuzzer will then try
-to clobber together in various ways. Snippets between 2 and 16 bytes are
-usually the sweet spot.
+to clobber together in various ways. Snippets between 2 and 16 bytes are usually
+the sweet spot.
Custom dictionaries can be created in two ways:
@@ -34,9 +34,9 @@ In the file mode, every name field can be optionally followed by @, e.g.:
`keyword_foo@1 = "foo"`
Such entries will be loaded only if the requested dictionary level is equal or
-higher than this number. The default level is zero; a higher value can be set
-by appending @ to the dictionary file name, like so:
+higher than this number. The default level is zero; a higher value can be set by
+appending @ to the dictionary file name, like so:
`-x path/to/dictionary.dct@2`
-Good examples of dictionaries can be found in xml.dict and png.dict.
+Good examples of dictionaries can be found in xml.dict and png.dict.
\ No newline at end of file
diff --git a/docs/afl-fuzz_approach.md b/docs/afl-fuzz_approach.md
index 01888935..2da61cc4 100644
--- a/docs/afl-fuzz_approach.md
+++ b/docs/afl-fuzz_approach.md
@@ -468,7 +468,8 @@ cd ../../
sudo make install
```
-To learn more about remote monitoring and metrics visualization with StatsD, see [rpc_statsd.md](rpc_statsd.md).
+To learn more about remote monitoring and metrics visualization with StatsD, see
+[rpc_statsd.md](rpc_statsd.md).
### Addendum: status and plot files
@@ -524,9 +525,9 @@ into each of them or deploy scripts to read the fuzzer statistics. Using
`AFL_STATSD` (and the other related environment variables `AFL_STATSD_HOST`,
`AFL_STATSD_PORT`, `AFL_STATSD_TAGS_FLAVOR`) you can automatically send metrics
to your favorite StatsD server. Depending on your StatsD server, you will be
-able to monitor, trigger alerts, or perform actions based on these metrics (e.g:
-alert on slow exec/s for a new build, threshold of crashes, time since last
-crash > X, etc.).
+able to monitor, trigger alerts, or perform actions based on these metrics
+(e.g.: alert on slow exec/s for a new build, threshold of crashes, time since
+last crash > X, etc.).
The selected metrics are a subset of all the metrics found in the status and in
the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
@@ -537,6 +538,6 @@ the plot file. The list is the following: `cycle_done`, `cycles_wo_finds`,
definitions can be found in the addendum above.
When using multiple fuzzer instances with StatsD, it is *strongly* recommended
-to setup the flavor (AFL_STATSD_TAGS_FLAVOR) to match your StatsD server. This
+to set up the flavor (`AFL_STATSD_TAGS_FLAVOR`) to match your StatsD server. This
will allow you to see individual fuzzer performance, detect bad ones, see the
progress of each strategy...
\ No newline at end of file
diff --git a/docs/custom_mutators.md b/docs/custom_mutators.md
index 2f632e1f..7b4e0516 100644
--- a/docs/custom_mutators.md
+++ b/docs/custom_mutators.md
@@ -276,7 +276,7 @@ gcc -shared -Wall -O3 example.c -o example.so
```
Note that if you specify multiple custom mutators, the corresponding functions
-will be called in the order in which they are specified. e.g. first
+will be called in the order in which they are specified. E.g., the first
`post_process` function of `example_first.so` will be called and then that of
`example_second.so`.
diff --git a/docs/env_variables.md b/docs/env_variables.md
index 0952b960..c45f4ab9 100644
--- a/docs/env_variables.md
+++ b/docs/env_variables.md
@@ -585,7 +585,7 @@ The FRIDA wrapper used to instrument binary-only code supports many of the same
options as `afl-qemu-trace`, but also has a number of additional advanced
options. These are listed in brief below (see
[frida_mode/README.md](../frida_mode/README.md) for more details). These
-settings are provided for compatibiltiy with QEMU mode, the preferred way to
+settings are provided for compatibility with QEMU mode; the preferred way to
configure FRIDA mode is through its [scripting](../frida_mode/Scripting.md)
support.
diff --git a/docs/features.md b/docs/features.md
index 06b1bcbe..431d9eb1 100644
--- a/docs/features.md
+++ b/docs/features.md
@@ -1,7 +1,7 @@
# Important features of AFL++
AFL++ supports llvm from 3.8 up to version 12, very fast binary fuzzing with
-QEMU 5.1 with laf-intel and redqueen, frida mode, unicorn mode, gcc plugin, full
+QEMU 5.1 with laf-intel and redqueen, FRIDA mode, unicorn mode, gcc plugin, full
*BSD, Mac OS, Solaris and Android support and much, much, much more.
| Feature/Instrumentation | afl-gcc | llvm | gcc_plugin | FRIDA mode(9) | QEMU mode(10) |unicorn_mode(10) |coresight_mode(11)|
@@ -30,7 +30,7 @@ QEMU 5.1 with laf-intel and redqueen, frida mode, unicorn mode, gcc plugin, full
versions that write to a file to use with afl-fuzz' `-x`
8. the snapshot LKM is currently unmaintained due to too many kernel changes
coming too fast :-(
-9. frida mode is supported on Linux and MacOS for Intel and ARM
+9. FRIDA mode is supported on Linux and MacOS for Intel and ARM
10. QEMU/Unicorn is only supported on Linux
11. Coresight mode is only available on AARCH64 Linux with a CPU with Coresight
extension
diff --git a/utils/autodict_ql/readme.md b/utils/autodict_ql/readme.md
index 789cd152..f61026b7 100644
--- a/utils/autodict_ql/readme.md
+++ b/utils/autodict_ql/readme.md
@@ -2,21 +2,35 @@
## What is this?
-`Autodict-QL` is a plugin system that enables fast generation of Tokens/Dictionaries in a handy way that can be manipulated by the user (unlike The LLVM Passes that are hard to modify). This means that autodict-ql is a scriptable feature which basically uses CodeQL (a powerful semantic code analysis engine) to fetch information from a code base.
+`Autodict-QL` is a plugin system that enables fast generation of
+Tokens/Dictionaries in a handy way that can be manipulated by the user (unlike
+the LLVM passes that are hard to modify). This means that autodict-ql is a
+scriptable feature which basically uses CodeQL (a powerful semantic code
+analysis engine) to fetch information from a code base.
-Tokens are useful when you perform fuzzing on different parsers. The AFL++ `-x` switch enables the usage of dictionaries through your fuzzing campaign. If you are not familiar with Dictionaries in fuzzing, take a look [here](https://github.com/AFLplusplus/AFLplusplus/tree/stable/dictionaries) .
+Tokens are useful when you perform fuzzing on different parsers. The AFL++ `-x`
+switch enables the usage of dictionaries through your fuzzing campaign. If you
+are not familiar with Dictionaries in fuzzing, take a look
+[here](https://github.com/AFLplusplus/AFLplusplus/tree/stable/dictionaries).
-## Why CodeQL ?
+## Why CodeQL?
-We basically developed this plugin on top of the CodeQL engine because it gives the user scripting features, it's easier and it's independent of the LLVM system. This means that a user can write his CodeQL scripts or modify the current scripts to improve or change the token generation algorithms based on different program analysis concepts.
+We basically developed this plugin on top of the CodeQL engine because it gives
+the user scripting features, it's easier and it's independent of the LLVM
+system. This means that a user can write his CodeQL scripts or modify the
+current scripts to improve or change the token generation algorithms based on
+different program analysis concepts.
## CodeQL scripts
-Currently, we pushed some scripts as defaults for Token generation. In addition, we provide every CodeQL script as an standalone script because it's easier to modify or test.
+Currently, we pushed some scripts as defaults for Token generation. In addition,
+we provide every CodeQL script as a standalone script because it's easier to
+modify or test.
-Currently we provided the following CodeQL scripts :
+Currently, we provide the following CodeQL scripts:
-`strcmp-str.ql` is used to extract strings that are related to the `strcmp` function.
+`strcmp-str.ql` is used to extract strings that are related to the `strcmp`
+function.
`strncmp-str.ql` is used to extract the strings from the `strncmp` function.
@@ -24,13 +38,18 @@ Currently we provided the following CodeQL scripts :
`litool.ql` extracts Magic numbers as Hexadecimal format.
-`strtool.ql` extracts strings with uses of a regex and dataflow concept to capture the string comparison functions. If `strcmp` is rewritten in a project as Mystrcmp or something like strmycmp, then this script can catch the arguments and these are valuable tokens.
+`strtool.ql` extracts strings using a regex and dataflow concepts to
+capture the string comparison functions. If `strcmp` is rewritten in a project
+as Mystrcmp or something like strmycmp, then this script can catch the arguments
+and these are valuable tokens.
-You can write other CodeQL scripts to extract possible effective tokens if you think they can be useful.
+You can write other CodeQL scripts to extract possible effective tokens if you
+think they can be useful.
## Usage
-Before you proceed to installation make sure that you have the following packages by installing them:
+Before you proceed to installation, make sure that you have the following
+packages installed:
```shell
sudo apt install build-essential libtool-bin python3-dev python3 automake git vim wget -y
@@ -38,66 +57,91 @@ sudo apt install build-essential libtool-bin python3-dev python3 automake git vi
The usage of Autodict-QL is pretty easy. But let's describe it as:
-1. First of all, you need to have CodeQL installed on the system. We make this possible with `build-codeql.sh` bash script. This script will install CodeQL completety and will set the required environment variables for your system.
-Do the following:
-
-```shell
-# chmod +x codeql-build.sh
-# ./codeql-build.sh
-# source ~/.bashrc
-# codeql
-```
-
-Then you should get:
-
-```shell
-Usage: codeql ...
-Create and query CodeQL databases, or work with the QL language.
-
-GitHub makes this program freely available for the analysis of open-source software and certain other uses, but it is
-not itself free software. Type codeql --license to see the license terms.
-
- --license Show the license terms for the CodeQL toolchain.
-Common options:
- -h, --help Show this help text.
- -v, --verbose Incrementally increase the number of progress messages printed.
- -q, --quiet Incrementally decrease the number of progress messages printed.
-Some advanced options have been hidden; try --help -v for a fuller view.
-Commands:
- query Compile and execute QL code.
- bqrs Get information from .bqrs files.
- database Create, analyze and process CodeQL databases.
- dataset [Plumbing] Work with raw QL datasets.
- test Execute QL unit tests.
- resolve [Deep plumbing] Helper commands to resolve disk locations etc.
- execute [Deep plumbing] Low-level commands that need special JVM options.
- version Show the version of the CodeQL toolchain.
- generate Generate formatted QL documentation.
- github Commands useful for interacting with the GitHub API through CodeQL.
-```
-
-2. Compile your project with CodeQL: For using the Autodict-QL plugin, you need to compile the source of the target you want to fuzz with CodeQL. This is not something hard.
- - First you need to create a CodeQL database of the project codebase, suppose we want to compile `libxml` with codeql. Go to libxml and issue the following commands:
- - `./configure --disable-shared`
- - `codeql create database libxml-db --language=cpp --command=make`
- - Now you have the CodeQL database of the project :-)
-3. The final step is to update the CodeQL database you created in step 2 (Suppose we are in `aflplusplus/utils/autodict_ql/` directory):
- - `codeql database upgrade /home/user/libxml/libxml-db`
+1. First of all, you need to have CodeQL installed on the system. We make this
+ possible with the `build-codeql.sh` bash script. This script will install CodeQL
+ completely and will set the required environment variables for your system.
+ Do the following:
+
+ ```shell
+ # chmod +x codeql-build.sh
+ # ./codeql-build.sh
+ # source ~/.bashrc
+ # codeql
+ ```
+
+ Then you should get:
+
+ ```shell
+ Usage: codeql ...
+ Create and query CodeQL databases, or work with the QL language.
+
+ GitHub makes this program freely available for the analysis of open-source software and certain other uses, but it is
+ not itself free software. Type codeql --license to see the license terms.
+
+ --license Show the license terms for the CodeQL toolchain.
+ Common options:
+ -h, --help Show this help text.
+ -v, --verbose Incrementally increase the number of progress messages printed.
+ -q, --quiet Incrementally decrease the number of progress messages printed.
+ Some advanced options have been hidden; try --help -v for a fuller view.
+ Commands:
+ query Compile and execute QL code.
+ bqrs Get information from .bqrs files.
+ database Create, analyze and process CodeQL databases.
+ dataset [Plumbing] Work with raw QL datasets.
+ test Execute QL unit tests.
+ resolve [Deep plumbing] Helper commands to resolve disk locations etc.
+ execute [Deep plumbing] Low-level commands that need special JVM options.
+ version Show the version of the CodeQL toolchain.
+ generate Generate formatted QL documentation.
+ github Commands useful for interacting with the GitHub API through CodeQL.
+ ```
+
+2. Compile your project with CodeQL: For using the Autodict-QL plugin, you need
+ to compile the source of the target you want to fuzz with CodeQL. This is not
+ something hard.
+ - First you need to create a CodeQL database of the project codebase. Suppose
+ we want to compile `libxml` with codeql. Go to libxml and issue the
+ following commands:
+ - `./configure --disable-shared`
+ - `codeql create database libxml-db --language=cpp --command=make`
+ - Now you have the CodeQL database of the project :-)
+3. The final step is to update the CodeQL database you created in step 2
+ (Suppose we are in `aflplusplus/utils/autodict_ql/` directory):
+ - `codeql database upgrade /home/user/libxml/libxml-db`
4. Everything is set! Now you should issue the following to get the tokens:
- - `python3 autodict-ql.py [CURRECT_DIR] [CODEQL_DATABASE_PATH] [TOKEN_PATH]`
- - example : `python3 /home/user/AFLplusplus/utils/autodict_ql/autodict-ql.py $PWD /home/user/libxml/libxml-db tokens`
- - This will create the final `tokens` dir for you and you are done, then pass the tokens path to AFL++'s `-x` flag.
+ - `python3 autodict-ql.py [CURRECT_DIR] [CODEQL_DATABASE_PATH] [TOKEN_PATH]`
+ - example: `python3 /home/user/AFLplusplus/utils/autodict_ql/autodict-ql.py
+ $PWD /home/user/libxml/libxml-db tokens`
+ - This will create the final `tokens` dir for you and you are done, then
+ pass the tokens path to AFL++'s `-x` flag.
5. Done!
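+
+Following the steps above, the generated tokens directory can then be passed to
+AFL++ with `-x` (the paths are illustrative; `AFL_MAX_DET_EXTRAS` should be at
+least the number of generated token files, as noted below):
+
+```shell
+# illustrative - set AFL_MAX_DET_EXTRAS to at least the number of token files
+AFL_MAX_DET_EXTRAS=1000 afl-fuzz -i input -o output -x tokens -- ./target @@
+```
+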
## More on dictionaries and tokens
-Core developer of the AFL++ project Marc Heuse also developed a similar tool named `dict2file` which is a LLVM pass which can automatically extract useful tokens, in addition with LTO instrumentation mode, this dict2file is automatically generates token extraction. `Autodict-QL` plugin gives you scripting capability and you can do whatever you want to extract from the Codebase and it's up to you. In addition it's independent from LLVM system.
-On the other hand, you can also use Google dictionaries which have been made public in May 2020, but the problem of using Google dictionaries is that they are limited to specific file formats and specifications. For example, for testing binutils and ELF file format or AVI in FFMPEG, there are no pre-built dictionaries, so it is highly recommended to use `Autodict-QL` or `Dict2File` features to automatically generate dictionaries based on the target.
-
-I've personally prefered to use `Autodict-QL` or `dict2file` rather than Google dictionaries or any other manually generated dictionaries as `Autodict-QL` and `dict2file` are working based on the target.
-In overall, fuzzing with dictionaries and well-generated tokens will give better results.
-
-There are 2 important points to remember :
-
-- If you combine `Autodict-QL` with AFL++ cmplog, you will get much better code coverage and hence better chances to discover new bugs.
-- Do not forget to set `AFL_MAX_DET_EXTRAS` at least to the number of generated dictionaries. If you forget to set this environment variable, then AFL++ uses just 200 tokens and use the rest of them only probabilistically. So this will guarantee that your tokens will be used by AFL++.
\ No newline at end of file
+The core developer of the AFL++ project, Marc Heuse, also developed a similar
+tool named `dict2file`, an LLVM pass which can automatically extract useful
+tokens; in addition, with the LTO instrumentation mode, dict2file token
+extraction is generated automatically. The `Autodict-QL` plugin gives you
+scripting capability, so what you want to extract from the codebase is up to
+you, and it is independent from the LLVM system. On the other hand, you can
+also use the Google dictionaries which have been made public in May 2020, but
+the problem with the Google dictionaries is that they are limited to specific
+file formats and specifications. For example, for testing binutils and the ELF
+file format or AVI in FFMPEG, there are no pre-built dictionaries, so it is
+highly recommended to use the `Autodict-QL` or `Dict2File` features to
+automatically generate dictionaries based on the target.
+
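+If you want to try the `dict2file` approach, a minimal sketch might look like
+this (it assumes the `AFL_LLVM_DICT2FILE` environment variable of the LLVM
+instrumentation mode and placeholder paths; adjust them to your setup):
+
+```
+# write tokens found during compilation to an absolute dictionary path
+export AFL_LLVM_DICT2FILE=/home/user/libxml/libxml.dict
+CC=afl-clang-fast ./configure --disable-shared
+make clean all
+
+# use the generated dictionary when fuzzing (target path is a placeholder)
+afl-fuzz -i input -o output -x /home/user/libxml/libxml.dict -- ./target @@
+```
+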
+I have personally preferred to use `Autodict-QL` or `dict2file` rather than the
+Google dictionaries or any other manually generated dictionaries, as
+`Autodict-QL` and `dict2file` work based on the target itself. Overall, fuzzing
+with dictionaries and well-generated tokens will give better results.
+
+There are 2 important points to remember:
+
+- If you combine `Autodict-QL` with AFL++ cmplog, you will get much better code
+ coverage and hence better chances to discover new bugs.
+- Do not forget to set `AFL_MAX_DET_EXTRAS` to at least the number of generated
+  dictionary entries. If you forget to set this environment variable, AFL++ uses
+  just 200 tokens and uses the rest of them only probabilistically. Setting it
+  guarantees that your tokens will be used by AFL++ (see the sketch below).
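+
+A minimal sketch of these two points (the corpus directories, the
+cmplog-instrumented binary, and the target are placeholders for illustration):
+
+```
+# suppose the tokens directory contains 2000 generated entries
+export AFL_MAX_DET_EXTRAS=2000
+
+# -x passes the generated tokens, -c adds a cmplog-instrumented build
+afl-fuzz -i input -o output -x tokens -c ./target.cmplog -- ./target @@
+```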
\ No newline at end of file
diff --git a/utils/libdislocator/README.md b/utils/libdislocator/README.md
index 64a5f14c..7150c205 100644
--- a/utils/libdislocator/README.md
+++ b/utils/libdislocator/README.md
@@ -10,8 +10,8 @@ heap-related security bugs in several ways:
subsequent PROT_NONE page, causing most off-by-one reads and writes to
immediately segfault,
- - It adds a canary immediately below the allocated buffer, to catch writes
- to negative offsets (won't catch reads, though),
+ - It adds a canary immediately below the allocated buffer, to catch writes to
+ negative offsets (won't catch reads, though),
- It sets the memory returned by malloc() to garbage values, improving the
odds of crashing when the target accesses uninitialized data,
@@ -19,35 +19,34 @@ heap-related security bugs in several ways:
- It sets freed memory to PROT_NONE and does not actually reuse it, causing
most use-after-free bugs to segfault right away,
- - It forces all realloc() calls to return a new address - and sets
- PROT_NONE on the original block. This catches use-after-realloc bugs,
+ - It forces all realloc() calls to return a new address - and sets PROT_NONE
+ on the original block. This catches use-after-realloc bugs,
- - It checks for calloc() overflows and can cause soft or hard failures
- of alloc requests past a configurable memory limit (AFL_LD_LIMIT_MB,
+ - It checks for calloc() overflows and can cause soft or hard failures of
+ alloc requests past a configurable memory limit (AFL_LD_LIMIT_MB,
AFL_LD_HARD_FAIL).
- Optionally, in platforms supporting it, huge pages can be used by passing
USEHUGEPAGE=1 to make.
- - Size alignment to `max_align_t` can be enforced with AFL_ALIGNED_ALLOC=1.
- In this case, a tail canary is inserted in the padding bytes at the end
- of the allocated zone. This reduce the ability of libdislocator to detect
+ - Size alignment to `max_align_t` can be enforced with AFL_ALIGNED_ALLOC=1. In
+ this case, a tail canary is inserted in the padding bytes at the end of the
+   allocated zone. This reduces the ability of libdislocator to detect
    off-by-one bugs, but it also makes libdislocator compliant with the C
    standard (see the sketch after this list).
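+
+A sketch of how these options might be combined (the library path, memory limit,
+and target binary are placeholders; `AFL_PRELOAD` is assumed here as the way to
+load the library into the target):
+
+```
+# build libdislocator with huge page support (run in utils/libdislocator)
+make USEHUGEPAGE=1
+
+# enforce size alignment, cap allocations at 512 MB, and preload the library
+AFL_ALIGNED_ALLOC=1 AFL_LD_LIMIT_MB=512 \
+AFL_PRELOAD=/path/to/libdislocator.so \
+afl-fuzz -i input -o output -- ./target @@
+```
+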
Basically, it is inspired by some of the non-default options available for the
OpenBSD allocator - see malloc.conf(5) on that platform for reference. It is
-also somewhat similar to several other debugging libraries, such as gmalloc
-and DUMA - but is simple, plug-and-play, and designed specifically for fuzzing
-jobs.
+also somewhat similar to several other debugging libraries, such as gmalloc and
+DUMA - but is simple, plug-and-play, and designed specifically for fuzzing jobs.
Note that it does nothing for stack-based memory handling errors. The
-fstack-protector-all setting for GCC / clang, enabled when using AFL_HARDEN,
can catch some subset of that.
The allocator is slow and memory-intensive (even the tiniest allocation uses up
-4 kB of physical memory and 8 kB of virtual mem), making it completely unsuitable
-for "production" uses; but it can be faster and more hassle-free than ASAN / MSAN
-when fuzzing small, self-contained binaries.
+4 kB of physical memory and 8 kB of virtual mem), making it completely
+unsuitable for "production" uses; but it can be faster and more hassle-free than
+ASAN / MSAN when fuzzing small, self-contained binaries.
To use this library, run AFL++ like so:
--
cgit 1.4.1