Security and Privacy Research
Democratizing
Fuzzing at Scale
Abhishek Arya
May 27, 2024
About me
● Engineering Director, Google Open Source and
Supply Chain Security
● Founding member and TAC representative,
Open Source Security Foundation (OpenSSF)
● Founding Chrome Security member
What is fuzzing?
Automated bug finding with
unexpected inputs
Fuzzing: art of controlled chaos
Reward = Security vulnerability ||
Stability bug ||
State assertion
Input = Malicious or unexpected data
Agenda
History: The Early Days
Platform: Pillars of Fuzzing
Community: Scaling Research
AI/ML: The Next Frontier
Future: Trends and Challenges
History
The Early Days
1988: The origin story: Barton Miller's CS736
(1) Operating System Utility Program Reliability −
The Fuzz Generator: The goal of this project is to evaluate the
robustness of various UNIX utility programs, given an unpredictable
input stream. This project has two parts. First, you will build a fuzz
generator. This is a program that will output a random character
stream. Second, you will take the fuzz generator and use it to attack
as many UNIX utilities as possible, with the goal of trying to break
them. For the utilities that break, you will try to determine what type
of input cause the break.
2008: Microsoft SAGE: Automated Whitebox Fuzz Testing
…evaluates the recorded trace, and
gathers constraints on inputs
capturing how the program uses these.
The collected constraints are then
negated one by one and solved with a
constraint solver, producing new inputs
that exercise different control paths in
the program. This process is repeated
with the help of a code-coverage
maximizing heuristic designed to find
defects as fast as possible.
2009: Tavis Ormandy: Automated Corpus Distillation
…simply calculate the cardinality of our large
corpus, and then attempt to find the smallest
sub-collection such that the union of those
inputs has the same cardinality.
…Just simple mutation of our distilled corpus
would break most software (or a corpus distilled
using coverage data for program A would break
similar program B without modification!)
2010-11: Structured File Format Fuzzing
● Randomized, black-box testing
with no feedback loop
● Good understanding of file
formats (parsers, pits, etc)
● Mutations focused on generating
almost-valid testcases
Platform
Pillars of Modern Fuzzing
Platform Goals
Find regressions before they impact users
Reliably reproduce faulting testcases with negligible overhead
Automate all parts of the continuous fuzzing pipeline, including build
management, crash handling, regression analysis, and fix verification
Make fuzzer unit tests simple to write and easy to integrate into
day-to-day developer workflows
Testing
Instrumentation
Automation
Scale
Testing: AFL (American Fuzzy Lop)
● First widely adopted coverage-guided fuzzer
● Supports both fast compiler instrumentation
and QEMU mode for binary-only apps
● Efficient forkserver: new processes without execve()
● Novel mutation strategies - bit flipping,
input fragment splicing, dictionary insertions, etc.
● Several triage features, e.g. testcase minimization
Testing: libFuzzer
● First in-process evolutionary fuzzer (later “persistent” mode in AFL)
● Foundation for developer-focused fuzzer unit tests
● Novel mutation strategies, e.g. value profiling
● Support for custom mutators - libprotobuf-mutator (also FuzzTest)
● Natively integrated in the LLVM toolchain
#include "libxml/parser.h"
#include "libxml/tree.h"

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (auto doc = xmlReadMemory(reinterpret_cast<const char *>(data), size,
                               "noname.xml", NULL, 0))
    xmlFreeDoc(doc);
  return 0;
}
Instrumentation: catch bugs reliably
● Sanitizers for all platforms
● Static instrumentation ≫ dynamic binary instrumentation (1.5-2x vs. 10-50x slowdown)
● Reliable, comprehensive coverage
for bug classes (e.g. stack, global,
container overflows, undef behavior)
● Enable Security ASSERTs.
Automation: The ClusterFuzz Platform
● Continuous fuzzing on main/master
● Automated build management, crash deduplication,
triage, regression ranges, and fix verification
● Automated corpus cross-pollination,
variant analysis, corpus culling, etc
● Support for custom mutators
● Ensemble fuzzing, including support for
popular fuzzing engines and tools
ClusterFuzz: Sample Testcase Report
Scale: catch regressions before stable
● OSS-Fuzz: Large-scale Linux cluster on GCP
● ClusterFuzz also supports Win/Android/Mac,
though these are less relevant for fuzzer unit tests
● Auto-scale based on project criticality, new fuzzers,
coverage changes, roadblocks, etc
● ~77% of all bugs are regressions
100k cores
Community
Scaling Research through Collaboration
OSS-Fuzz: continuous fuzzing for OSS
● Finds Heartbleed in a few seconds
● Project integration in <100 LoC
● Focus on automation, ease-of-use for
resource-constrained OSS devs
● 1.2K Projects, 12K vulns, 91% fix rate
● Follows Google's 90-day disclosure policy
OSS-Fuzz Rewards: fueling a Safer OSS
Initial integration — Up to $5,000
● Fuzz targets must be checked into the upstream repository and integrated into the build system with sanitizer support.
● Projects are accepted by the OSS-Fuzz team based on their criticality, e.g. a criticality score >= 0.7, or use in critical infrastructure and/or a large user base.

Ideal fuzzing integration — Up to $15,000, based on the following criteria:
○ The upstream development process has CIFuzz enabled to fuzz all pull requests.
○ Fuzzing coverage is at least 50% across the entire project, and targets are efficient.
○ At least 2 reported bugs are fixed.
○ A discretionary bonus recognizes outstanding work.
Fuzzing Research: Lost in the Noise
Evaluating Fuzz Testing
George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, Michael Hicks
…Such new ideas are primarily evaluated experimentally so an important
question is: What experimental setup is needed to produce trustworthy results?
We surveyed the recent research literature and assessed the experimental
evaluations carried out by 32 fuzzing papers. We found problems in every
evaluation we considered. We then performed our own extensive experimental
evaluation using an existing fuzzer. Our results showed that the general problems
we found in existing experimental evaluations can indeed translate to actual
wrong or misleading assessments.
Fuzzer Benchmarking: FuzzBench and Magma
FuzzBench (initially coverage-based) | Magma (bug-based)
FuzzBench: community benchmarking service
● Foster innovations beyond afl / libFuzzer
● Understand capability differences of
current fuzzing engines
● Zero-cost research experiments
● Diverse, real-world OSS-Fuzz benchmarks
● Fully reproducible results
● Code coverage and bug based evals
● Support for private experiments
FuzzBench: impact stories (e.g. AFL++)
Preregistration-based publication process
Stage 1: Evaluate for novelty and
significance of idea / approach.
Authors submit a full paper, including
a detailed description of the
methodology to be used to obtain the
study results, as well as preliminary
results demonstrating the feasibility of
the approach minus the results of
the proposed study.
Stage 2: Validate agreed methodology
and correct interpretation of results.
Authors submit the full paper, including
the results of their study and
non-design related revisions if any.
AI-powered Fuzzing
The Next Frontier of Bug Hunting
The formidable barrier: code coverage wall
“After weeks or months of continuous testing, fuzzing
can hit an unexpected plateau, limiting the ability to find
critical vulnerabilities in unexplored code paths”
FuzzIntrospector: Interesting functions to fuzz
Function name                                                    | Source file                | Accumulated cyclomatic complexity | Code coverage
tinyxml2::XMLElement::ShallowClone(tinyxml2::XMLDocument*)       | /src/tinyxml2/tinyxml2.cpp | 115 | 0.0%
tinyxml2::XMLDocument::LoadFile(char const*)                     | /src/tinyxml2/tinyxml2.cpp | 112 | 0.0%
tinyxml2::XMLElement::SetAttribute(char const*, char const*)     | /src/tinyxml2/tinyxml2.h   | 106 | 0.0%
tinyxml2::XMLPrinter::VisitEnter(tinyxml2::XMLElement const&, …) | /src/tinyxml2/tinyxml2.cpp | 104 | 0.0%
tinyxml2::XMLDocument::LoadFile(_IO_FILE*)                       | /src/tinyxml2/tinyxml2.cpp | 102 | 0.0%
tinyxml2::XMLElement::FindOrCreateAttribute(char const*)         | /src/tinyxml2/tinyxml2.cpp | 102 | 0.0%
tinyxml2::XMLElement::BoolText(bool) const                       | /src/tinyxml2/tinyxml2.cpp | 101 | 0.0%
tinyxml2::XMLElement::QueryBoolText(bool*) const                 | /src/tinyxml2/tinyxml2.cpp |  99 | 0.0%
tinyxml2::XMLDocument::SaveFile(char const*, bool)               | /src/tinyxml2/tinyxml2.cpp |  92 | 0.0%
tinyxml2::XMLElement::Int64Text(long) const                      | /src/tinyxml2/tinyxml2.cpp |  91 | 0.0%
OSS-Fuzz-Gen: LLM-powered fuzzing framework
● OSS-Fuzz supplies an existing project; OSS-Fuzz-Gen extracts a function signature + project context for the LLM
● The LLM writes fuzz targets, which are built and fuzzed
● Compilation errors and runtime crashes extracted from the raw compilation & runtime logs are fed back to the LLM
● The refined fuzz targets are built and evaluated
OSS-Fuzz-Gen: tinyxml2 case study
Five LLM-generated fuzz targets; coverage gains: #1 +11.12%, #2 +10.25%, #3 +3.54%, #4 +2.07%, #5 +2.04%

Fuzz target #1 (+11.12% coverage):

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  std::string data_string(reinterpret_cast<const char*>(data), size);
  tinyxml2::XMLDocument doc;
  doc.Parse(data_string.c_str());
  return 0;
}

The remaining targets extend this harness with further API calls, e.g.:

doc.SaveFile("out.xml", true);

XMLNode *node = doc.FirstChild();
if (node) {
  XMLElement *element = node->ToElement();
  if (element) {
    element->ShallowClone(&doc);
  }
}

XMLElement *root = doc.FirstChildElement("root");
if (root == nullptr) {
  root = doc.NewElement("root");
  doc.InsertFirstChild(root);
}
XMLElement *child = doc.NewElement("child");
root->InsertEndChild(child);
child->SetAttribute("name", "foo");
child->SetAttribute("value", 1);

tinyxml2::XMLNode* node = doc.FirstChild();
if (node != nullptr) {
  node->ShallowEqual(node);
}

tinyxml2::XMLPrinter printer;
doc.Print(&printer);
OSS-Fuzz-Gen: early impact on 160+ OSS projects
Future of Fuzzing
Trends and Challenges
Fuzzing: Open Challenges
1. LLM-powered Fuzz Target Writing
Given a project's source code, use an AI model to generate new, efficient fuzz targets
2. LLM-powered Fuzzer Generator
Given a project's source code, use an AI model to suggest code that can generate valid testcases
3. Coverage-guided AI Testing
Find reproducible cases of unexpected behavior in AI models (e.g. prompt injection)
Thank you!
We look forward to collaborating
closely with you on fuzzing research
