pw_fuzzer: Adding Fuzzers Using FuzzTest
pw_fuzzer: Better C++ code through easier fuzzing
Note
FuzzTest is currently only supported on Linux and macOS using Clang.
Step 0: Set up FuzzTest for your project
Note
This workflow only needs to be done once for a project.
FuzzTest and its dependencies are not included in Pigweed and need to be added to your project. This includes FuzzTest itself along with its dependencies, such as Abseil-C++ and GoogleTest.
You may not want to use upstream GoogleTest all the time, as it may not be supported on your target device. In that case, you can limit it to a specific toolchain used for fuzzing. For example:
import("$dir_pw_toolchain/host/target_toolchains.gni")
my_toolchains = {
...
clang_fuzz = {
name = "my_clang_fuzz"
forward_variables_from(pw_toolchain_host.clang_fuzz, "*", ["name"])
pw_unit_test_MAIN = "$dir_pw_fuzzer:fuzztest_main"
pw_unit_test_BACKEND = "$dir_pw_fuzzer:gtest"
}
...
}
FuzzTest is enabled by setting several CMake variables. The easiest way to set these is to extend your toolchain.cmake file. For example:
include(my_project_toolchain.cmake)

set(dir_pw_third_party_fuzztest
    "path/to/fuzztest"
    CACHE STRING "" FORCE
)
set(dir_pw_third_party_googletest
    "path/to/googletest"
    CACHE STRING "" FORCE
)
set(pw_unit_test_BACKEND
    "pw_third_party.fuzztest"
    CACHE STRING "" FORCE
)
Include Abseil-C++ and GoogleTest in your WORKSPACE file. For example:
http_archive(
    name = "com_google_absl",
    sha256 = "338420448b140f0dfd1a1ea3c3ce71b3bc172071f24f4d9a57d59b45037da440",
    strip_prefix = "abseil-cpp-20240116.0",
    url = "https://github.com/abseil/abseil-cpp/releases/download/20240116.0/abseil-cpp-20240116.0.tar.gz",
)

git_repository(
    name = "com_google_googletest",
    commit = "3b6d48e8d5c1d9b3f9f10ac030a94008bfaf032b",
    remote = "https://pigweed.googlesource.com/third_party/github/google/googletest",
)
Then, import the FuzzTest build configurations in your .bazelrc file by adding and adapting the following:
# Include FuzzTest build configurations.
try-import %workspace%/path/to/pigweed/pw_fuzzer/fuzztest.bazelrc
Step 1: Write a unit test for the target
As noted previously, the very first step is to identify one or more target behaviors that would benefit from testing. See FuzzTest Use Cases for more details on how to identify this code.
Once identified, it is useful to start from a unit test. You may already have a unit test written, but if not, it is likely still helpful to write one first. Many developers are more familiar with writing unit tests, and there are detailed guides available. See, for example, the GoogleTest documentation.
This guide will use code from //pw_fuzzer/examples/fuzztest/. This code includes the following object as an example of code that would benefit from fuzzing for undefined behavior and from roundtrip fuzzing.
Note
To keep the example simple, this code uses the standard library. As a result, this code may not work with certain devices.
// Represents a named value. In order to transmit these values efficiently,
// they can be referenced by fixed-length, generated keys instead of names.
struct Metric {
  using Key = uint16_t;
  using Value = uint32_t;

  static constexpr size_t kMaxNameLen = 32;

  Metric() = default;
  Metric(std::string_view name_, Value value_);

  InlineString<kMaxNameLen> name;
  Key key = 0;
  Value value = 0;
};

// Represents a set of measurements from a particular source.
//
// In order to transmit metrics efficiently, the names of metrics are hashed
// internally into fixed-length keys. The name-to-key mapping can be shared
// once via `GetMetrics` and `SetMetrics`, after which metrics can be
// efficiently shared via `Serialize` and `Deserialize`.
class Metrics {
 public:
  static constexpr size_t kMaxMetrics = 32;
  static constexpr size_t kMaxSerializedSize =
      sizeof(size_t) +
      kMaxMetrics * (sizeof(Metric::Key) + sizeof(Metric::Value));

  // Returns the value of a named metric, or no value if the metric has not
  // been set. The name must consist of printable ASCII characters.
  std::optional<Metric::Value> GetValue(std::string_view name) const;

  // Sets the value of a named metric. The name must consist of printable ASCII
  // characters, and will be added to the mapping of names to keys.
  Status SetValue(std::string_view name, Metric::Value value);

  // Returns the current mapping of names to keys.
  const Vector<Metric>& GetMetrics() const;

  // Replaces the current mapping of names to keys.
  Status SetMetrics(const Vector<Metric>& metrics);

  // Serializes this object to the given `buffer`. Does not write more bytes
  // than `buffer.size()`. Returns the number of bytes written, or an error if
  // there is insufficient space.
  StatusWithSize Serialize(pw::ByteSpan buffer) const;

  // Populates this object from the data in the given `buffer`.
  // Returns whether this buffer could be deserialized.
  Status Deserialize(pw::ConstByteSpan buffer);

 private:
  Vector<Metric, kMaxMetrics> metrics_;
};
Unit tests for this class might attempt to deserialize previously serialized objects and to deserialize invalid data:
TEST(MetricsTest, SerializeAndDeserialize) {
  std::array<std::byte, Metrics::kMaxSerializedSize> buffer;

  // Add and copy the names only.
  Metrics src, dst;
  EXPECT_TRUE(src.SetValue("one", 0).ok());
  EXPECT_TRUE(src.SetValue("two", 0).ok());
  EXPECT_TRUE(src.SetValue("three", 0).ok());
  EXPECT_TRUE(dst.SetMetrics(src.GetMetrics()).ok());

  // Modify the values.
  EXPECT_TRUE(src.SetValue("one", 1).ok());
  EXPECT_TRUE(src.SetValue("two", 2).ok());
  EXPECT_TRUE(src.SetValue("three", 3).ok());

  // Transfer the data and check.
  EXPECT_TRUE(src.Serialize(buffer).ok());
  EXPECT_TRUE(dst.Deserialize(buffer).ok());
  EXPECT_EQ(dst.GetValue("one").value_or(0), 1U);
  EXPECT_EQ(dst.GetValue("two").value_or(0), 2U);
  EXPECT_EQ(dst.GetValue("three").value_or(0), 3U);
}

TEST(MetricsTest, DeserializeDoesNotCrash) {
  std::array<std::byte, Metrics::kMaxSerializedSize> buffer;
  std::fill(buffer.begin(), buffer.end(), std::byte(0x5C));

  // Just make sure this does not crash.
  Metrics dst;
  dst.Deserialize(buffer).IgnoreError();
}
Step 2: Convert your unit test to a function
Examine your unit tests and identify any places you have fixed values that could vary. Turn your unit test into a function that takes those values as parameters. Since fuzzing may not occur on all targets, you should preserve your unit test by calling the new function with the previously fixed values.
void ArbitrarySerializeAndDeserialize(const Vector<Metric>& metrics) {
  std::array<std::byte, Metrics::kMaxSerializedSize> buffer;

  // Add and copy the names only.
  Metrics src, dst;
  for (const auto& metric : metrics) {
    EXPECT_TRUE(src.SetValue(metric.name, 0).ok());
  }
  EXPECT_TRUE(dst.SetMetrics(src.GetMetrics()).ok());

  // Modify the values.
  for (const auto& metric : metrics) {
    EXPECT_TRUE(src.SetValue(metric.name, metric.value).ok());
  }

  // Transfer the data and check.
  EXPECT_TRUE(src.Serialize(buffer).ok());
  EXPECT_TRUE(dst.Deserialize(buffer).ok());
  for (const auto& metric : metrics) {
    EXPECT_EQ(dst.GetValue(metric.name).value_or(0), metric.value);
  }
}

// This unit test will run on host and may run on target devices (if supported).
TEST(MetricsTest, SerializeAndDeserialize) {
  Vector<Metric, 3> metrics;
  metrics.emplace_back("one", 1);
  metrics.emplace_back("two", 2);
  metrics.emplace_back("three", 3);
  ArbitrarySerializeAndDeserialize(metrics);
}
void ArbitraryDeserialize(pw::ConstByteSpan buffer) {
  // Just make sure this does not crash.
  Metrics dst;
  dst.Deserialize(buffer).IgnoreError();
}

// This unit test will run on host and may run on target devices (if supported).
TEST(MetricsTest, DeserializeDoesNotCrash) {
  ArbitraryDeserialize(std::vector<std::byte>(100, std::byte(0x5C)));
}
Note that in ArbitrarySerializeAndDeserialize we no longer assume the marshalling will always be successful, and exit early if it is not. You may need to make similar modifications to your unit tests if constraints on parameters are not expressed by domains as described below.
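For reference, such an early exit on a failed serialization might look like the following sketch. This is illustrative only; the helper name is hypothetical and the exact handling depends on your code.

void ArbitrarySerializeAndDeserializeTolerant(const Vector<Metric>& metrics) {
  std::array<std::byte, Metrics::kMaxSerializedSize> buffer;
  Metrics src, dst;
  for (const auto& metric : metrics) {
    EXPECT_TRUE(src.SetValue(metric.name, metric.value).ok());
  }
  // Serialization may fail for some generated inputs; treat that as an
  // uninteresting input and exit early rather than failing the test.
  if (!src.Serialize(buffer).ok()) {
    return;
  }
  // A successfully serialized buffer is still expected to deserialize.
  EXPECT_TRUE(dst.Deserialize(buffer).ok());
}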
Step 3: Add a FUZZ_TEST macro invocation
Now, include "fuzztest/fuzztest.h"
and pass the test suite name and your
function name to the FUZZ_TEST
macro. Call WithDomains
on the returned
object to specify the input domain for each parameter of the function. For
example:
auto ArbitraryMetric() {
  return ConstructorOf<Metric>(PrintableAsciiString<Metric::kMaxNameLen>(),
                               Arbitrary<uint32_t>());
}

// This fuzz test will only run on host.
FUZZ_TEST(MetricsTest, ArbitrarySerializeAndDeserialize)
    .WithDomains(VectorOf<Metrics::kMaxMetrics>(ArbitraryMetric()));
// This fuzz test will only run on host.
FUZZ_TEST(MetricsTest, ArbitraryDeserialize)
    .WithDomains(VectorOf<Metrics::kMaxSerializedSize>(Arbitrary<std::byte>()));
You may know of specific values that are “interesting”, i.e. values that represent boundary conditions, involve special handling, etc. To guide the fuzzer towards these code paths, you can include them as seeds. However, as noted in the comments of the examples, it is recommended to include a unit test with the original parameters to ensure the code is tested on target devices.
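Seeds can be supplied with FuzzTest's WithSeeds method on the object returned by FUZZ_TEST. The following is a minimal sketch using a hypothetical two-parameter fuzz test (MathTest.CheckedDivide is illustrative and not part of the example sources); each seed is a tuple of argument values that must lie within the declared domains:

// Sketch only: seed a hypothetical fuzz test with known boundary values so the
// fuzzer starts from inputs that exercise interesting edge cases.
FUZZ_TEST(MathTest, CheckedDivide)
    .WithDomains(Arbitrary<uint32_t>(), InRange<uint32_t>(1, 100))
    .WithSeeds({{0u, 1u}, {std::numeric_limits<uint32_t>::max(), 100u}});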
FuzzTest provides more detailed documentation on these topics. For example:
Refer to The FUZZ_TEST Macro reference for more details on how to use this macro.
Refer to the FuzzTest Domain Reference for details on all the different types of domains supported by FuzzTest and how they can be combined.
Refer to the Test Fixtures reference for how to create fuzz tests from unit tests that use GoogleTest fixtures.
Step 4: Add the fuzzer to your build
Next, indicate that the unit test includes one or more fuzz tests. The pw_fuzz_test template can be used to add the necessary FuzzTest dependency and generate test metadata. For example, consider the following BUILD.gn:
1pw_test("metrics_fuzztest") {
2 sources = [ "metrics_fuzztest.cc" ]
3 deps = [
4 ":metrics_lib",
5 "$dir_pw_fuzzer:fuzztest", # <- Added!
6 ]
7
8 # TODO: b/283156908 - Re-enable with a fixed seed.
9 enable_if = false
10}
11
Unit tests can support fuzz tests by simply adding a dependency on FuzzTest. For example, consider the following CMakeLists.txt:
pw_add_test(pw_fuzzer.examples.fuzztest.metrics_fuzztest
  SOURCES
    metrics_fuzztest.cc
  PRIVATE_DEPS
    pw_fuzzer.fuzztest  # <- Added!
    pw_fuzzer.examples.fuzztest.metrics_lib
  GROUPS
    modules
    pw_fuzzer
)
Unit tests can support fuzz tests by simply adding a dependency on FuzzTest. For example, consider the following BUILD.bazel:
pw_cc_test(
    name = "metrics_fuzztest",
    srcs = ["metrics_fuzztest.cc"],
    deps = [
        ":metrics_lib",
        "//pw_fuzzer:fuzztest",  # <- Added!
        "//pw_unit_test",
    ],
)
Step 5: Build the fuzzer
Build using ninja on a target that includes your fuzzer with a fuzzing toolchain.
Pigweed includes a //:fuzzers target that builds all tests, including those with fuzzers, using a fuzzing toolchain. You may wish to add a similar top-level target to your project. For example:
group("fuzzers") {
deps = [ ":pw_module_tests.run($dir_pigweed/targets/host:host_clang_fuzz)" ]
}
Build using cmake with the FuzzTest and GoogleTest variables set. For example:
cmake ... \
-Ddir_pw_third_party_fuzztest=path/to/fuzztest \
-Ddir_pw_third_party_googletest=path/to/googletest \
-Dpw_unit_test_BACKEND=pw_third_party.fuzztest
By default, bazel will simply omit the fuzz tests and build unit tests. To build these tests as fuzz tests, specify the fuzztest config. For example:
bazel build //... --config=fuzztest
Step 6: Running the fuzzer locally
When building, most toolchains will simply omit the fuzz tests and build and run the unit tests. A fuzzing toolchain will include the fuzzers, but only run them for a limited time. This makes them suitable for automated testing, such as in CQ.
If you used the top-level //:fuzzers target described in the previous section, you can find available fuzzers using the generated JSON test metadata file:
jq '.[] | select(contains({tags: ["fuzztest"]}))' \
out/host_clang_fuzz/obj/pw_module_tests.testinfo.json
To run a fuzz test with different options, you can pass additional flags to the fuzzer binary. This binary will be in a subdirectory related to the toolchain. For example:
out/host_clang_fuzz/obj/my_module/test/metrics_test \
--fuzz=MetricsTest.Roundtrip
Additional sanitizer flags may be passed using environment variables.
When built with FuzzTest and GoogleTest, the fuzzer binaries can be run directly from the CMake build directory. By default, the fuzzers will only run for a limited time. This makes them suitable for automated testing, such as in CQ. To run a fuzz test with different options, you can pass additional flags to the fuzzer binary.
For example:
build/my_module/metrics_test --fuzz=MetricsTest.Roundtrip
By default, bazel will simply omit the fuzz tests and build and run unit tests. To build these tests as fuzz tests, specify the “fuzztest” config. For example:
bazel test //... --config=fuzztest
This will build the tests as fuzz tests, but only run them for a limited time. This makes them suitable for automated testing, such as in CQ.
To run a fuzz test with different options, you can use bazel run and pass additional flags to the fuzzer binary. For example:
bazel run //my_module:metrics_test --config=fuzztest \
--fuzz=MetricsTest.Roundtrip
Running the fuzzer should produce output similar to the following:
[.] Sanitizer coverage enabled. Counter map size: 21290, Cmp map size: 262144
Note: Google Test filter = MetricsTest.Roundtrip
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from MetricsTest
[ RUN ] MetricsTest.Roundtrip
[*] Corpus size: 1 | Edges covered: 131 | Fuzzing time: 504.798us | Total runs: 1.00e+00 | Runs/secs: 1
[*] Corpus size: 2 | Edges covered: 133 | Fuzzing time: 934.176us | Total runs: 3.00e+00 | Runs/secs: 3
[*] Corpus size: 3 | Edges covered: 134 | Fuzzing time: 2.384383ms | Total runs: 5.30e+01 | Runs/secs: 53
[*] Corpus size: 4 | Edges covered: 137 | Fuzzing time: 2.732274ms | Total runs: 5.40e+01 | Runs/secs: 54
[*] Corpus size: 5 | Edges covered: 137 | Fuzzing time: 7.275553ms | Total runs: 2.48e+02 | Runs/secs: 248