Evaluating Synthetic Bugs
· 2021
· Open Access
· DOI: https://doi.org/10.1145/3433210.3453096
· OA: W3170526652
Fuzz testing has been used to find bugs in programs since the 1990s, but despite decades of dedicated research, there is still no consensus on which fuzzing techniques work best. One reason for this is the paucity of ground truth: bugs in real programs with known root causes and triggering inputs are difficult to collect at a meaningful scale. Bug injection technologies that add synthetic bugs into real programs seem to offer a solution, but the differences in finding these synthetic bugs versus organic bugs have not previously been explored at a large scale. Using over 80 years of CPU time, we ran eight fuzzers across 20 targets from the Rode0day bug-finding competition and the LAVA-M corpus. Experiments were standardized with respect to compute resources and metrics gathered. These experiments show differences in fuzzer performance as well as the impact of various configuration options. For instance, it is clear that integrating symbolic execution with mutational fuzzing is very effective and that using dictionaries improves performance. Other conclusions are less clear-cut; for example, no single fuzzer beat all others on all tests. It is noteworthy that no fuzzer found any organic bugs (i.e., bugs reported in CVEs), despite 50 such bugs being available for discovery in the fuzzing corpus. A close analysis of the results revealed a possible explanation: a dramatic difference between where synthetic and organic bugs live with respect to the "main path" discovered by fuzzers. We find that recent updates to bug injection systems have made synthetic bugs more difficult to discover, but they are still significantly easier to find than organic bugs in our target programs. Finally, this study identifies flaws in bug injection techniques and suggests a number of axes along which synthetic bugs should be improved.
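
To make the notion of an injected bug concrete, the sketch below (not code from the paper; function names and the magic constant are hypothetical) shows the general shape of a LAVA-style synthetic bug: a few attacker-controlled input bytes are copied aside (LAVA calls this a "DUA") and later compared against a magic constant at an attack point, where a match corrupts a pointer and crashes the program.

    /* Illustrative sketch of a LAVA-style injected bug; all names are
     * hypothetical and chosen for clarity, not taken from the paper. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t lava_dua;              /* siphoned-off input bytes */

    static void record_dua(const uint8_t *buf, size_t len) {
        if (len >= 8)
            memcpy(&lava_dua, buf + 4, 4); /* copy 4 input bytes unchanged */
    }

    static void attack_point(char *out) {
        /* The trigger fires only on an exact 4-byte match, so random
         * mutation alone rarely reaches it. */
        if (lava_dua == 0x6c617661u)       /* hypothetical magic value */
            out += lava_dua;               /* corrupt pointer -> crash */
        strcpy(out, "ok");
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;
        uint8_t buf[256] = {0};
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        record_dua(buf, n);
        char scratch[16];
        attack_point(scratch);
        return 0;
    }

Because such a trigger is an exact multi-byte comparison on data carried unchanged from the input, it is the kind of condition that symbolic execution can solve directly and that a dictionary containing the magic token can help a mutational fuzzer hit, which is consistent with the configuration effects reported above.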