author    | van Hauser <vh@thc.org>     | 2024-02-03 11:55:51 +0100
committer | GitHub <noreply@github.com> | 2024-02-03 10:55:51 +0000
commit    | 602eceed8b56eef62d673a54b6011541bf1ab60a (patch)
tree      | 01a13c6bf88bf3c2bf68a2c4a0614c8f86ed2e92 /src/afl-fuzz-one.c
parent    | eda770fd32b804e3ebd6a43738c0002f6118a463 (diff)
download  | afl++-602eceed8b56eef62d673a54b6011541bf1ab60a.tar.gz
push to stable (#1983)
* Output afl-clang-fast stuff only if necessary (#1912)
* afl-cc header
* afl-cc common declarations
  - Add afl-cc-state.c
  - Strip includes, find_object, debug/be_quiet/have_*/callname setting from afl-cc.c
  - Use debugf_args in main
  - Modify execvp stuff to fit the new aflcc struct
* afl-cc show usage
* afl-cc mode selecting (see the sketch after this list)
  1. compiler_mode by callname in argv[0]
  2. compiler_mode by env "AFL_CC_COMPILER"
  3. compiler_mode/instrument_mode by command line options "--afl-..."
  4. instrument_mode/compiler_mode by various env vars including "AFL_LLVM_INSTRUMENT"
  5. final checking steps
  6. print "... - mode: %s-%s\n"
  7. determine real argv[0] according to compiler_mode
* afl-cc macro defs
* afl-cc linking behaviors
* afl-cc fsanitize behaviors
* afl-cc misc
* afl-cc body update
* afl-cc all-in-one formatted with custom-format.py
* nits
---------
Co-authored-by: vanhauser-thc <vh@thc.org>
* changelog
* update grammar mutator
* lto llvm 12+
* docs(custom_mutators): fix missing ':' (#1953)
* Fix broken LTO mode and response file support (#1948)
* Strip `-Wl,-no-undefined` during compilation (#1952)
  Make the compiler wrapper strip `-Wl,-no-undefined` in addition to `-Wl,--no-undefined`.
  Both versions of the flag are accepted by clang and are therefore used by build systems
  in the wild (e.g., samba will not build without this fix).
* Remove dead code in write_to_testcase (#1955)
  The custom_mutators_count check in the if case duplicates the if condition. The else
  case is custom_mutators_count == 0, so neither the custom_mutator_list iteration nor
  the sent check is needed.
  Signed-off-by: Xeonacid <h.dwwwwww@gmail.com>
* update qemuafl
* WIP: Add ability to generate drcov trace using QEMU backend (#1956)
* Document new drcov QEMU plugin
* Add link to lightkeeper for QEMU drcov file loading
---------
Co-authored-by: Jean-Romain Garnier <jean-romain.garnier@airbus.com>
* code format
* changelog
* sleep on uid != 0 afl-system-config
* fix segv about skip_next, warn on unsupported cases of linking options (#1958)
* todos
* ensure afl-cc only allows available compiler modes
* update grammar mutator
* disable aslr on apple
* fix for arm64
* help selective instrumentation
* typos
* macos
* add compiler test script
* apple fixes
* bump nyx submodules (#1963)
* fix docs
* update changelog
* update grammar mutator
* improve compiler test script
* gcc asan workaround (#1966)
* fix github merge fuckup
* fix
* Fix afl-cc (#1968)
  - Check whether there are too many cmdline params, each time before inserting a new param.
  - Check that a param is "-fsanitize=..." before acting on it.
  - Remove improper param_st transfer.
* Avoid adding llvmnative instrumentation when linking rust sanitizer runtime (#1969)
* Dynamic instrumentation filtering for LLVM native (#1971)
* Add two dynamic instrumentation filter methods to runtime
* Always use pc-table with native pcguard
* Add make_symbol_list.py and README
* changelog
* todos
* new forkserver check
* fix
* nyx test for CI
* improve nyx docs
* Fixes to afl-cc and documentation (#1974)
* Always compile with -ldl when building for CODE_COVERAGE
  When building with CODE_COVERAGE, the afl runtime contains code that calls `dladdr`,
  which requires -ldl. Under most circumstances, clang already adds this (e.g. when
  building with pc-table), but there are some circumstances where it isn't added
  automatically.
* Add visibility declaration to __afl_connected
  When building with hidden visibility, using __AFL_LOOP inside such code can cause
  linker errors due to __afl_connected being declared "hidden".
* Update docs to clarify that CODE_COVERAGE=1 is required for dynamic_covfilter
* nits
* nyx build script updates
* test error output
* debug ci
* debug ci
* Improve afl-cc (#1975)
* update response file support
  - full support of rsp files
  - fix some segv issues
* Improve afl-cc
  - remove dead code about allow/denylist options of sancov
  - missing `if (!aflcc->have_msan)`
  - add docs for each function
  - typo
* enable nyx
* debug ci
* debug ci
* debug ci
* debug ci
* debug ci
* debug ci
* debug ci
* debug ci
* fix ci
* clean test script
* NO_NYX
* NO_NYX
* fix ci
* debug ci
* fix ci
* finalize ci fix
* Enhancement on Deterministic stage (#1972)
* fuzzer: init commit based on aflpp 60dc37a8cf09f8e9048e4b6a2204d6c90b27655a
* fuzzers: adding the skip variables and initialize
* log: profile the det/havoc finding
* log: add profile log output
* fuzzers: separate log/skipdet module
* fuzzers: add quick eff_map calc
* fuzzers: add skip_eff_map in fuzz_one
* fuzzers: mark whole input space in eff_map
* fuzzers: add undet bit threshold to skip some seeds
* fuzzers: fix one byte overflow
* fuzzers: fix overflow
* fix code format
* add havoc only again
* code format
* remove log to INTROSPECTION, rename skipdet module
* rename skipdet module
* remove log to stats
* clean redundant code
* code format
* remove redundant code format check
* remove redundant doc
* remove redundant objects
* clean files
* change -d to default skipdet
* disable deterministic when using CUSTOM_MUTATOR
* revert fix
* final touches for skipdet
* remove unused var
* remove redundant eff struct (#1977)
* update QEMU-Nyx submodule (#1978)
* update QEMU-Nyx submodule (#1980)
* Fix typo in AFL_NOOPT env variable in afl-cc help message (#1982)
* nits
* 2024 v4.10c release
* fixes
---------
Signed-off-by: Xeonacid <h.dwwwwww@gmail.com>
Co-authored-by: Sonic <50692172+SonicStark@users.noreply.github.com>
Co-authored-by: Xeonacid <h.dwwwwww@gmail.com>
Co-authored-by: Nils Bars <nils.bars@rub.de>
Co-authored-by: Jean-Romain Garnier <7504819+JRomainG@users.noreply.github.com>
Co-authored-by: Jean-Romain Garnier <jean-romain.garnier@airbus.com>
Co-authored-by: Sergej Schumilo <sergej@schumilo.de>
Co-authored-by: Christian Holler (:decoder) <choller@mozilla.com>
Co-authored-by: Han Zheng <35988108+kdsjZh@users.noreply.github.com>
Co-authored-by: Khaled Yakdan <yakdan@code-intelligence.com>
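The "afl-cc mode selecting" item above describes a precedence order for picking the compiler mode. Below is a minimal, illustrative sketch of that precedence in C; the enum, the helper names, and the fallback default are invented for this example and do not mirror the actual afl-cc implementation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { MODE_UNSET, MODE_GCC_PLUGIN, MODE_LLVM, MODE_LTO } cc_mode_t;

/* Map a name such as "afl-clang-lto" or "LLVM" onto a mode (illustrative only). */
static cc_mode_t mode_from_name(const char *s) {
  if (strstr(s, "lto") || strstr(s, "LTO")) return MODE_LTO;
  if (strstr(s, "gcc-fast") || strstr(s, "g++-fast") || strstr(s, "GCC_PLUGIN"))
    return MODE_GCC_PLUGIN;
  if (strstr(s, "fast") || strstr(s, "LLVM")) return MODE_LLVM;
  return MODE_UNSET;
}

/* Steps 1-5 from the commit message: callname, then AFL_CC_COMPILER, then
   "--afl-..." command line options, then other env vars, then a final fallback. */
static cc_mode_t select_mode(int argc, char **argv) {
  cc_mode_t mode = mode_from_name(argv[0]);                   /* 1: callname   */

  const char *env = getenv("AFL_CC_COMPILER");                /* 2: env        */
  if (mode == MODE_UNSET && env) mode = mode_from_name(env);

  for (int i = 1; i < argc; i++)                              /* 3: --afl-...  */
    if (!strncmp(argv[i], "--afl-", 6)) mode = mode_from_name(argv[i] + 6);

  if (mode == MODE_UNSET && getenv("AFL_LLVM_INSTRUMENT"))    /* 4: other envs */
    mode = MODE_LLVM;

  if (mode == MODE_UNSET) mode = MODE_LLVM;                   /* 5: fallback   */
  return mode;
}

int main(int argc, char **argv) {
  static const char *names[] = {"unset", "gcc-plugin", "llvm", "lto"};
  printf("mode: %s\n", names[select_mode(argc, argv)]);
  return 0;
}
```

Invoked as `./a.out --afl-lto`, the sketch prints `mode: lto`; the real afl-cc additionally derives instrument_mode and rewrites argv[0] (steps 6-7), which is omitted here.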
Diffstat (limited to 'src/afl-fuzz-one.c')
-rw-r--r-- | src/afl-fuzz-one.c | 237 |
1 file changed, 114 insertions, 123 deletions
diff --git a/src/afl-fuzz-one.c b/src/afl-fuzz-one.c
index 01e34b69..d9c074ec 100644
--- a/src/afl-fuzz-one.c
+++ b/src/afl-fuzz-one.c
@@ -9,7 +9,7 @@
    Andrea Fioraldi <andreafioraldi@gmail.com>

    Copyright 2016, 2017 Google Inc. All rights reserved.
-   Copyright 2019-2023 AFLplusplus Project. All rights reserved.
+   Copyright 2019-2024 AFLplusplus Project. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
@@ -329,9 +329,9 @@ u8 fuzz_one_original(afl_state_t *afl) {

   u32 len, temp_len;
   u32 j;
   u32 i;
-  u8 *in_buf, *out_buf, *orig_in, *ex_tmp, *eff_map = 0;
+  u8 *in_buf, *out_buf, *orig_in, *ex_tmp;
   u64 havoc_queued = 0, orig_hit_cnt, new_hit_cnt = 0, prev_cksum, _prev_cksum;
-  u32 splice_cycle = 0, perf_score = 100, orig_perf, eff_cnt = 1;
+  u32 splice_cycle = 0, perf_score = 100, orig_perf;

   u8 ret_val = 1, doing_det = 0;
@@ -545,12 +545,37 @@ u8 fuzz_one_original(afl_state_t *afl) {

   }

+  u64 before_det_time = get_cur_time();
+
+#ifdef INTROSPECTION
+
+  u64 before_havoc_time;
+  u32 before_det_findings = afl->queued_items,
+      before_det_edges = count_non_255_bytes(afl, afl->virgin_bits),
+      before_havoc_findings, before_havoc_edges;
+  u8  is_logged = 0;
+
+#endif
+
+  if (!afl->skip_deterministic) {
+
+    if (!skip_deterministic_stage(afl, in_buf, out_buf, len, before_det_time)) {
+
+      goto abandon_entry;
+
+    }
+
+  }
+
+  u8 *skip_eff_map = afl->queue_cur->skipdet_e->skip_eff_map;
+
   /* Skip right away if -d is given, if it has not been chosen sufficiently
      often to warrant the expensive deterministic stage (fuzz_level), or
      if it has gone through deterministic testing in earlier, resumed runs
      (passed_det). */
+  /* if skipdet decide to skip the seed or no interesting bytes found,
+     we skip the whole deterministic stage as well */

   if (likely(afl->skip_deterministic) || likely(afl->queue_cur->passed_det) ||
+      likely(!afl->queue_cur->skipdet_e->quick_eff_bytes) ||
       likely(perf_score <
              (afl->queue_cur->depth * 30 <= afl->havoc_max_mult * 100
                   ? afl->queue_cur->depth * 30
@@ -609,6 +634,10 @@ u8 fuzz_one_original(afl_state_t *afl) {

     afl->stage_cur_byte = afl->stage_cur >> 3;

+    if (!skip_eff_map[afl->stage_cur_byte]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     FLIP_BIT(out_buf, afl->stage_cur);

 #ifdef INTROSPECTION
@@ -725,6 +754,10 @@ u8 fuzz_one_original(afl_state_t *afl) {

     afl->stage_cur_byte = afl->stage_cur >> 3;

+    if (!skip_eff_map[afl->stage_cur_byte]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     FLIP_BIT(out_buf, afl->stage_cur);
     FLIP_BIT(out_buf, afl->stage_cur + 1);
@@ -760,6 +793,10 @@ u8 fuzz_one_original(afl_state_t *afl) {

     afl->stage_cur_byte = afl->stage_cur >> 3;

+    if (!skip_eff_map[afl->stage_cur_byte]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     FLIP_BIT(out_buf, afl->stage_cur);
     FLIP_BIT(out_buf, afl->stage_cur + 1);
     FLIP_BIT(out_buf, afl->stage_cur + 2);
@@ -787,34 +824,6 @@ u8 fuzz_one_original(afl_state_t *afl) {

   afl->queue_cur->stats_mutated += afl->stage_max;
 #endif

-  /* Effector map setup. These macros calculate:
-
-     EFF_APOS      - position of a particular file offset in the map.
-     EFF_ALEN      - length of a map with a particular number of bytes.
-     EFF_SPAN_ALEN - map span for a sequence of bytes.
-
-   */
-
-#define EFF_APOS(_p) ((_p) >> EFF_MAP_SCALE2)
-#define EFF_REM(_x) ((_x) & ((1 << EFF_MAP_SCALE2) - 1))
-#define EFF_ALEN(_l) (EFF_APOS(_l) + !!EFF_REM(_l))
-#define EFF_SPAN_ALEN(_p, _l) (EFF_APOS((_p) + (_l)-1) - EFF_APOS(_p) + 1)
-
-  /* Initialize effector map for the next step (see comments below). Always
-     flag first and last byte as doing something. */
-
-  eff_map = afl_realloc(AFL_BUF_PARAM(eff), EFF_ALEN(len));
-  if (unlikely(!eff_map)) { PFATAL("alloc"); }
-  memset(eff_map, 0, EFF_ALEN(len));
-  eff_map[0] = 1;
-
-  if (EFF_APOS(len - 1) != 0) {
-
-    eff_map[EFF_APOS(len - 1)] = 1;
-    ++eff_cnt;
-
-  }
-
   /* Walking byte. */

   afl->stage_name = "bitflip 8/8";
@@ -828,6 +837,10 @@ u8 fuzz_one_original(afl_state_t *afl) {

     afl->stage_cur_byte = afl->stage_cur;

+    if (!skip_eff_map[afl->stage_cur_byte]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     out_buf[afl->stage_cur] ^= 0xFF;

 #ifdef INTROSPECTION
@@ -837,59 +850,19 @@ u8 fuzz_one_original(afl_state_t *afl) {

     if (common_fuzz_stuff(afl, out_buf, len)) { goto abandon_entry; }

-    /* We also use this stage to pull off a simple trick: we identify
-       bytes that seem to have no effect on the current execution path
-       even when fully flipped - and we skip them during more expensive
-       deterministic stages, such as arithmetics or known ints. */
-
-    if (!eff_map[EFF_APOS(afl->stage_cur)]) {
-
-      u64 cksum;
-
-      /* If in non-instrumented mode or if the file is very short, just flag
-         everything without wasting time on checksums. */
-
-      if (!afl->non_instrumented_mode && len >= EFF_MIN_LEN) {
-
-        cksum = hash64(afl->fsrv.trace_bits, afl->fsrv.map_size, HASH_CONST);
-
-      } else {
-
-        cksum = ~prev_cksum;
-
-      }
-
-      if (cksum != prev_cksum) {
-
-        eff_map[EFF_APOS(afl->stage_cur)] = 1;
-        ++eff_cnt;
-
-      }
-
-    }
-
     out_buf[afl->stage_cur] ^= 0xFF;

   }

-  /* If the effector map is more than EFF_MAX_PERC dense, just flag the
-     whole thing as worth fuzzing, since we wouldn't be saving much time
-     anyway. */
-
-  if (eff_cnt != (u32)EFF_ALEN(len) &&
-      eff_cnt * 100 / EFF_ALEN(len) > EFF_MAX_PERC) {
+  /* New effective bytes calculation. */

-    memset(eff_map, 1, EFF_ALEN(len));
+  for (i = 0; i < len; i++) {

-    afl->blocks_eff_select += EFF_ALEN(len);
-
-  } else {
-
-    afl->blocks_eff_select += eff_cnt;
+    if (skip_eff_map[i]) afl->blocks_eff_select += 1;

   }

-  afl->blocks_eff_total += EFF_ALEN(len);
+  afl->blocks_eff_total += len;

   new_hit_cnt = afl->queued_items + afl->saved_crashes;
@@ -914,12 +887,9 @@ u8 fuzz_one_original(afl_state_t *afl) {

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)] && !eff_map[EFF_APOS(i + 1)]) {
-
-      --afl->stage_max;
-      continue;
+    if (!skip_eff_map[i]) continue;

-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -959,13 +929,10 @@ u8 fuzz_one_original(afl_state_t *afl) {

   for (i = 0; i < len - 3; ++i) {

     /* Let's consult the effector map... */
-    if (!eff_map[EFF_APOS(i)] && !eff_map[EFF_APOS(i + 1)] &&
-        !eff_map[EFF_APOS(i + 2)] && !eff_map[EFF_APOS(i + 3)]) {
-      --afl->stage_max;
-      continue;
+    if (!skip_eff_map[i]) continue;

-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1016,12 +983,9 @@ skip_bitflip:

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)]) {
-
-      afl->stage_max -= 2 * ARITH_MAX;
-      continue;
+    if (!skip_eff_map[i]) continue;

-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1103,12 +1067,9 @@ skip_bitflip:

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)] && !eff_map[EFF_APOS(i + 1)]) {
-
-      afl->stage_max -= 4 * ARITH_MAX;
-      continue;
+    if (!skip_eff_map[i]) continue;

-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1236,13 +1197,9 @@ skip_bitflip:

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)] && !eff_map[EFF_APOS(i + 1)] &&
-        !eff_map[EFF_APOS(i + 2)] && !eff_map[EFF_APOS(i + 3)]) {
-
-      afl->stage_max -= 4 * ARITH_MAX;
-      continue;
+    if (!skip_eff_map[i]) continue;

-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1374,12 +1331,9 @@ skip_arith:

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)]) {
-
-      afl->stage_max -= sizeof(interesting_8);
-      continue;
+    if (!skip_eff_map[i]) continue;

-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1437,12 +1391,9 @@ skip_arith:

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)] && !eff_map[EFF_APOS(i + 1)]) {
+    if (!skip_eff_map[i]) continue;

-      afl->stage_max -= sizeof(interesting_16);
-      continue;
-
-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1528,13 +1479,9 @@ skip_arith:

     /* Let's consult the effector map... */

-    if (!eff_map[EFF_APOS(i)] && !eff_map[EFF_APOS(i + 1)] &&
-        !eff_map[EFF_APOS(i + 2)] && !eff_map[EFF_APOS(i + 3)]) {
+    if (!skip_eff_map[i]) continue;

-      afl->stage_max -= sizeof(interesting_32) >> 1;
-      continue;
-
-    }
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }

     afl->stage_cur_byte = i;
@@ -1626,6 +1573,10 @@ skip_interest:

     u32 last_len = 0;

+    if (!skip_eff_map[i]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     afl->stage_cur_byte = i;

     /* Extras are sorted by size, from smallest to largest. This means
@@ -1643,9 +1594,7 @@ skip_interest:

       if ((afl->extras_cnt > afl->max_det_extras &&
            rand_below(afl, afl->extras_cnt) >= afl->max_det_extras) ||
           afl->extras[j].len > len - i ||
-          !memcmp(afl->extras[j].data, out_buf + i, afl->extras[j].len) ||
-          !memchr(eff_map + EFF_APOS(i), 1,
-                  EFF_SPAN_ALEN(i, afl->extras[j].len))) {
+          !memcmp(afl->extras[j].data, out_buf + i, afl->extras[j].len)) {

         --afl->stage_max;
         continue;
@@ -1693,6 +1642,10 @@ skip_interest:

   for (i = 0; i <= (u32)len; ++i) {

+    if (!skip_eff_map[i % len]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     afl->stage_cur_byte = i;

     for (j = 0; j < afl->extras_cnt; ++j) {
@@ -1755,6 +1708,10 @@ skip_user_extras:

     u32 last_len = 0;

+    if (!skip_eff_map[i]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     afl->stage_cur_byte = i;

     u32 min_extra_len = MIN(afl->a_extras_cnt, (u32)USE_AUTO_EXTRAS);
@@ -1763,9 +1720,7 @@ skip_user_extras:

       /* See the comment in the earlier code; extras are sorted by size. */

       if (afl->a_extras[j].len > len - i ||
-          !memcmp(afl->a_extras[j].data, out_buf + i, afl->a_extras[j].len) ||
-          !memchr(eff_map + EFF_APOS(i), 1,
-                  EFF_SPAN_ALEN(i, afl->a_extras[j].len))) {
+          !memcmp(afl->a_extras[j].data, out_buf + i, afl->a_extras[j].len)) {

         --afl->stage_max;
         continue;
@@ -1813,6 +1768,10 @@ skip_user_extras:

   for (i = 0; i <= (u32)len; ++i) {

+    if (!skip_eff_map[i % len]) continue;
+
+    if (is_det_timeout(before_det_time, 0)) { goto custom_mutator_stage; }
+
     afl->stage_cur_byte = i;

     for (j = 0; j < afl->a_extras_cnt; ++j) {
@@ -2020,6 +1979,19 @@ custom_mutator_stage:

 havoc_stage:

+#ifdef INTROSPECTION
+
+  if (!is_logged) {
+
+    is_logged = 1;
+    before_havoc_findings = afl->queued_items;
+    before_havoc_edges = count_non_255_bytes(afl, afl->virgin_bits);
+    before_havoc_time = get_cur_time();
+
+  }
+
+#endif
+
   if (unlikely(afl->custom_only)) {

     /* Force UI update */
@@ -3430,6 +3402,25 @@ retry_splicing:

   ret_val = 0;

+#ifdef INTROSPECTION
+
+  afl->havoc_prof->queued_det_stage =
+      before_havoc_findings - before_det_findings;
+  afl->havoc_prof->queued_havoc_stage =
+      afl->queued_items - before_havoc_findings;
+  afl->havoc_prof->total_queued_det += afl->havoc_prof->queued_det_stage;
+  afl->havoc_prof->edge_det_stage = before_havoc_edges - before_det_edges;
+  afl->havoc_prof->edge_havoc_stage =
+      count_non_255_bytes(afl, afl->virgin_bits) - before_havoc_edges;
+  afl->havoc_prof->total_det_edge += afl->havoc_prof->edge_det_stage;
+  afl->havoc_prof->det_stage_time = before_havoc_time - before_det_time;
+  afl->havoc_prof->havoc_stage_time = get_cur_time() - before_havoc_time;
+  afl->havoc_prof->total_det_time += afl->havoc_prof->det_stage_time;
+
+  plot_profile_data(afl, afl->queue_cur);
+
+#endif
+
   /* we are through with this queue entry - for this iteration */

 abandon_entry:
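The hunks above replace the old per-entry effector map (the EFF_* macros and the checksum-based byte classification in the walking-byte stage) with a per-seed skip_eff_map that the skipdet module fills in ahead of time (taken from afl->queue_cur->skipdet_e after skip_deterministic_stage() runs), consulted at the top of every deterministic loop together with a time-budget check that jumps to custom_mutator_stage. Below is a minimal sketch of that gating pattern; the buffer contents, map values, budget, and the det_timeout() helper are made up for illustration and are not the AFL++ API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Stand-in for is_det_timeout(): give the deterministic stage a wall-clock budget. */
static int det_timeout(time_t start, time_t budget_sec) {
  return time(NULL) - start > budget_sec;
}

/* Stand-in for one deterministic pass (e.g. the walking-byte flips above). */
static void det_stage(uint8_t *buf, size_t len, const uint8_t *skip_eff_map,
                      time_t before_det, time_t budget_sec) {
  for (size_t i = 0; i < len; i++) {
    if (!skip_eff_map[i]) continue;                   /* byte judged uninteresting */
    if (det_timeout(before_det, budget_sec)) return;  /* time budget exhausted     */

    buf[i] ^= 0xFF;                                   /* mutate ...                */
    /* ... run the target on buf here ... */
    buf[i] ^= 0xFF;                                   /* ... and restore           */
  }
}

int main(void) {
  uint8_t buf[16];
  uint8_t skip_eff_map[16] = {0};

  memset(buf, 'A', sizeof buf);
  memset(skip_eff_map + 4, 1, 4);  /* pretend only bytes 4..7 look interesting */

  det_stage(buf, sizeof buf, skip_eff_map, time(NULL), 5);
  return 0;
}
```

Compared with the removed eff_map, which was rebuilt during bitflip 8/8 for every queue entry from coverage checksums, the skip_eff_map in the diff is shared by all deterministic stages of a seed, and is_det_timeout() additionally bounds how long those stages may run before the fuzzer falls through to the custom mutator and havoc stages.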