pw_allocator: Flexible, safe, and measurable memory allocation

Get started

Add @pigweed//pw_allocator to the deps list in your Bazel target:

cc_library(
    name = "...",
    # ...
    deps = [
        # ...
        "@pigweed//pw_allocator",
        # ...
    ],
)

For a narrower dependency, depend on subtargets, e.g. @pigweed//pw_allocator:tracking_allocator.


This assumes that your Bazel WORKSPACE has a repository named @pigweed that points to the upstream Pigweed repository.

Add dir_pw_allocator to the deps list in your build target:

pw_executable("...") {
  # ...
  deps = [
    # ...
    dir_pw_allocator,
    # ...
  ]
}

For a narrower dependency, depend on subtargets, e.g. "$dir_pw_allocator:tracking_allocator".

If the build target is a dependency of other targets and includes allocators in its public interface, e.g. its header file, use public_deps instead of deps.

Add pw_allocator to your pw_add_library or similar CMake target:

pw_add_library(my_library STATIC
  # ...
  PRIVATE_DEPS
    # ...
    pw_allocator
)

For a narrower dependency, depend on subtargets, e.g. pw_allocator.tracking_allocator.

If the build target is a dependency of other targets and includes allocators in its public interface, e.g. its header file, use PUBLIC_DEPS instead of PRIVATE_DEPS.

Inject allocators

Routines that need to allocate memory dynamically should do so using the generic Allocator interface. By using dependency injection, memory users can be kept decoupled from the details of how memory is provided. This yields the most flexibility for modifying or replacing the specific allocators.

Use the Allocator interface

Write or refactor your memory-using routines to take a pointer or reference to such an object and use the Allocate, Deallocate, Reallocate, and Resize methods to manage memory. For example:

using pw::allocator::Layout;

void* AllocateNamedU32(pw::Allocator& allocator) {
  return allocator.Allocate(Layout::Of<NamedU32>());
}

void DeallocateNamedU32(pw::Allocator& allocator, void* ptr) {
  allocator.Deallocate(ptr);
}

Invoking methods or objects that use injected allocators requires an allocator to have been constructed first. The exact location where allocators should be instantiated will vary from project to project, but it’s likely to be early in a program’s lifecycle and in a device-specific location of the source tree.

For initial testing on host, a simple allocator such as LibCAllocator can be used. This allocator is trivially constructed and simply wraps malloc and free.
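For intuition, here is a minimal standard-C++ sketch of what a libc-backed allocator does. MallocBackedAllocator is a hypothetical stand-in for illustration only, not the real LibCAllocator API:

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical stand-in for a libc-backed allocator: allocation requests are
// forwarded straight to malloc and free, which makes it trivial to construct
// for host-side testing.
class MallocBackedAllocator {
 public:
  void* Allocate(size_t size) { return std::malloc(size); }
  void Deallocate(void* ptr) { std::free(ptr); }
};
```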

Use New and Delete

In addition to managing raw memory, an Allocator can also be used to create and destroy objects:

NamedU32* NewNamedU32(pw::Allocator& allocator,
                      std::string_view name,
                      uint32_t value) {
  return allocator.New<NamedU32>(name, value);
}

void DeleteNamedU32(pw::Allocator& allocator, NamedU32* named_u32) {
  allocator.Delete<NamedU32>(named_u32);
}

Use UniquePtr<T>

Where possible, RAII is recommended to make memory management easier and less error-prone. UniquePtr is a smart pointer that makes allocating and deallocating memory more transparent:

pw::UniquePtr<NamedU32> MakeNamedU32(pw::Allocator& allocator,
                                     std::string_view name,
                                     uint32_t value) {
  return allocator.MakeUnique<NamedU32>(name, value);
}

Determine an allocation’s Layout

Several of the Allocator methods take a parameter of the Layout type. This type combines the size and alignment requirements of an allocation. It can be constructed directly, or if allocating memory for a specific type, by using a templated static method:

template <typename T, typename... Args>
T* my_new(Args... args) {
  void* ptr = allocator.Allocate(pw::allocator::Layout::Of<T>());
  return ptr == nullptr ? nullptr : new (ptr) T(std::forward<Args>(args)...);
}
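To make concrete what a Layout carries, here is a minimal standard-C++ sketch. MyLayout and NamedU32Like are hypothetical illustrations; the real type is pw::allocator::Layout:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a layout type: it pairs the size and alignment
// requirements of an allocation, and Of<T> derives both from the type.
struct MyLayout {
  size_t size;
  size_t alignment;

  template <typename T>
  static constexpr MyLayout Of() {
    return MyLayout{sizeof(T), alignof(T)};
  }
};

// Example payload type, stand-in for the NamedU32 used elsewhere in the docs.
struct NamedU32Like {
  const char* name;
  uint32_t value;
};
```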

As stated above, you should generally try to keep allocator implementation details abstracted behind the Allocator interface. One exception to this guidance is when integrating allocators into existing code that assumes malloc and free semantics. Notably, free does not take any parameters beyond a pointer describing the memory to be freed.

void* my_malloc(size_t size) {
  return allocator.Allocate(
      pw::allocator::Layout(size, alignof(std::max_align_t)));
}

void my_free(void* ptr) { allocator.Deallocate(ptr); }

Use standard library containers

All of C++’s standard library containers are AllocatorAwareContainers, with the exception of std::array. These types include a template parameter used to specify an allocator type, and a constructor which takes a reference to an object of this type.

While there are Differences with C++ polymorphic allocators, an Allocator can be used with these containers by wrapping them with a PMR adapter type, AsPmrAllocator:

#include <map>
#include <string>
#include <string_view>

#include "pw_allocator/allocator.h"
#include "pw_allocator/as_pmr_allocator.h"

namespace examples {

class LibraryIndex {
 public:
  using StringType = ::std::pmr::string;
  using MapType = ::std::pmr::multimap<StringType, StringType>;
  using Iterator = typename MapType::iterator;

  LibraryIndex(pw::allocator::Allocator& allocator)
      : allocator_(allocator), by_author_(allocator_) {}

  void AddBook(std::string_view title, std::string_view author) {
    by_author_.emplace(author, title);
  }

  std::pair<Iterator, Iterator> FindByAuthor(std::string_view author) {
    return by_author_.equal_range(StringType(author, allocator_));
  }

 private:
  pw::allocator::AsPmrAllocator allocator_;
  MapType by_author_;
};

}  // namespace examples


Some of the standard library containers may add a significant amount of additional code size and/or memory overhead. In particular, implementations of std::deque are known to preallocate significant memory in order to meet the container’s complexity requirements, e.g. O(1) insertion at the front of the deque.


The standard library containers expect their allocators to throw an exception on allocation failure, and do not check for failure themselves. If exceptions are disabled, AsPmrAllocator instead asserts that allocation succeeded. Care must be taken in this case to ensure that memory is not exhausted.
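The same pattern can be exercised with only the standard library, which may help when prototyping on host before wiring in AsPmrAllocator. This sketch uses std::pmr::monotonic_buffer_resource as a stand-in for a Pigweed allocator:

```cpp
#include <array>
#include <cstddef>
#include <map>
#include <memory_resource>
#include <string>

// Standard-library analogue of the adapter pattern above: a PMR container
// draws its memory from a std::pmr::memory_resource rather than the global
// heap. Here a monotonic buffer over a stack array stands in for an
// allocator injected from elsewhere.
std::size_t CountBooksWithPmr() {
  std::array<std::byte, 1024> buffer;
  std::pmr::monotonic_buffer_resource resource(buffer.data(), buffer.size());
  // The container and its string elements all allocate from `resource`.
  std::pmr::multimap<std::pmr::string, std::pmr::string> by_author(&resource);
  by_author.emplace("A. Author", "A Book");
  by_author.emplace("A. Author", "Another Book");
  return by_author.size();
}
```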

Choose the right allocator

Once one or more routines are using allocators, you can consider how best to implement memory allocation for each particular scenario.

Concrete allocator implementations

This module provides several allocator implementations. The following is an overview. Consult the API reference for additional details.

  • LibCAllocator: Uses malloc, realloc, and free. This should only be used if the libc in use provides those functions. This allocator is a stateless singleton that may be referenced using GetLibCAllocator().

  • NullAllocator: Always fails. This may be useful if allocations should be disallowed under specific circumstances. This allocator is a stateless singleton that may be referenced using GetNullAllocator().

  • BumpAllocator: Allocates objects out of a region of memory and only frees them all at once when the allocator is destroyed.

  • BuddyAllocator: Allocates objects out of chunks whose sizes are powers of two. Chunks are split evenly for smaller allocations and merged on free.

  • BlockAllocator: Tracks memory using Block. Derived types use specific strategies for how to choose a block to use to satisfy a request. See also pw::allocator::Block. Derived types include:

    • FirstFitBlockAllocator: Chooses the first block that’s large enough to satisfy a request. This strategy is very fast, but may increase fragmentation.

    • LastFitBlockAllocator: Chooses the last block that’s large enough to satisfy a request. This strategy is fast, and may fragment memory less than FirstFitBlockAllocator when satisfying aligned memory requests.

    • BestFitBlockAllocator: Chooses the smallest block that’s large enough to satisfy a request. This strategy maximizes the available space for large allocations, but may increase fragmentation and is slower.

    • WorstFitBlockAllocator: Chooses the largest block if it’s large enough to satisfy a request. This strategy minimizes the amount of memory in unusably small blocks, but is slower.

    • DualFirstFitBlockAllocator: Acts like FirstFitBlockAllocator or LastFitBlockAllocator depending on whether a request is larger or smaller, respectively, than a given threshold value. This strategy preserves the speed of the two other strategies, while fragmenting memory less by co-locating allocations of similar sizes.

    • BucketBlockAllocator: Sorts and stores free blocks in Buckets, each with a given maximum chunk size.

  • TypedPool: Efficiently creates and destroys objects of a single given type.
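The simplest of these strategies, bump allocation, can be sketched in a few lines of standard C++. BumpSketch is illustrative only and is not the BumpAllocator API; it assumes power-of-two alignments and a suitably aligned region:

```cpp
#include <cstddef>

// Illustrative sketch of the bump-allocation strategy described above:
// each allocation advances an offset through a fixed region, and individual
// frees are not supported; the whole region is reclaimed at once.
class BumpSketch {
 public:
  BumpSketch(std::byte* region, size_t capacity)
      : region_(region), capacity_(capacity) {}

  // `alignment` must be a power of two; the region itself is assumed to be
  // aligned at least that strictly.
  void* Allocate(size_t size, size_t alignment) {
    size_t aligned = (offset_ + alignment - 1) & ~(alignment - 1);
    if (aligned + size > capacity_) {
      return nullptr;  // Out of space; nothing is ever freed individually.
    }
    offset_ = aligned + size;
    return region_ + aligned;
  }

 private:
  std::byte* region_;
  size_t capacity_;
  size_t offset_ = 0;
};
```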

Forwarding allocator implementations

This module provides several “forwarding” allocators, as described in Forwarding allocator concept. The following is an overview. Consult the API reference for additional details.

  • FallbackAllocator: Dispatches first to a primary allocator, and, if that fails, to a secondary allocator.

  • AsPmrAllocator: Adapts an allocator to be a std::pmr::polymorphic_allocator, which can be used with standard library containers that use allocators, such as std::pmr::vector<T>.

  • SynchronizedAllocator: Synchronizes access to another allocator, allowing it to be used by multiple threads.

  • TrackingAllocator: Wraps another allocator and records its usage.
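The FallbackAllocator dispatch rule is easy to state in code. This sketch models allocators as plain function pointers rather than the real Allocator interface:

```cpp
#include <cstddef>

// Stand-in for an allocator's allocation entry point.
using AllocFn = void* (*)(size_t);

// Sketch of the fallback rule: try the primary allocator first, and use the
// secondary only when the primary fails (returns null).
void* FallbackAllocate(AllocFn primary, AllocFn secondary, size_t size) {
  void* ptr = primary(size);
  return ptr != nullptr ? ptr : secondary(size);
}
```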

Custom allocator implementations

If none of the allocator implementations provided by this module meet your needs, you can implement your own allocator and pass it into any routine that uses the generic interface.

Allocator uses the NVI (non-virtual interface) pattern. To add a custom allocator implementation, you must at a minimum implement the DoAllocate and DoDeallocate methods.

For example, the following is a forwarding allocator that simply writes to the log whenever a threshold is exceeded:

#include <cstddef>

#include "pw_allocator/allocator.h"

namespace examples {

class CustomAllocator : public pw::Allocator {
 public:
  using Layout = pw::allocator::Layout;

  CustomAllocator(pw::Allocator& allocator, size_t threshold);

 private:
  /// @copydoc pw::Allocator::Allocate
  void* DoAllocate(Layout layout) override;

  /// @copydoc pw::Allocator::Deallocate
  void DoDeallocate(void* ptr) override;

  /// @copydoc pw::Allocator::Deallocate
  void DoDeallocate(void* ptr, Layout) override { DoDeallocate(ptr); }

  pw::Allocator& allocator_;
  size_t used_ = 0;
  size_t threshold_ = 0;
};

}  // namespace examples
#include "examples/custom_allocator.h"

#include <cstdint>

#include "pw_allocator/capability.h"
#include "pw_log/log.h"
#include "pw_result/result.h"

namespace examples {

CustomAllocator::CustomAllocator(Allocator& allocator, size_t threshold)
    : Allocator(pw::allocator::Capabilities()),
      allocator_(allocator),
      threshold_(threshold) {}

// Allocates, and reports if allocated memory exceeds its threshold.
void* CustomAllocator::DoAllocate(Layout layout) {
  void* ptr = allocator_.Allocate(layout);
  if (ptr == nullptr) {
    return nullptr;
  }
  size_t prev = used_;
  pw::Result<Layout> allocated = GetAllocatedLayout(allocator_, ptr);
  if (allocated.ok()) {
    used_ += allocated->size();
  }
  if (prev <= threshold_ && threshold_ < used_) {
    PW_LOG_INFO("more than %zu bytes allocated.", threshold_);
  }
  return ptr;
}

void CustomAllocator::DoDeallocate(void* ptr) {
  if (ptr == nullptr) {
    return;
  }
  pw::Result<Layout> allocated = GetAllocatedLayout(allocator_, ptr);
  if (allocated.ok()) {
    used_ -= allocated->size();
  }
  allocator_.Deallocate(ptr);
}

}  // namespace examples

There are also several optional methods you can provide:

  • If an implementation of DoResize isn’t provided, then Resize will always return false.

  • If an implementation of DoReallocate isn’t provided, then Reallocate will try to Resize, and, if unsuccessful, try to Allocate, copy, and Deallocate.

  • If an implementation of DoGetInfo isn’t provided, then GetInfo will always return pw::Status::Unimplemented.
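The Reallocate fallback described in the list above can be sketched in plain C++. This is an illustration, not the real pw_allocator code; Resize here always fails, modeling an allocator without DoResize, and malloc/free stand in for Allocate/Deallocate:

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Stand-in for a resize attempt on an allocator that does not implement
// DoResize: it always fails.
bool Resize(void*, size_t) { return false; }

// Sketch of the default fallback: try to resize in place; if that fails,
// allocate a new block, copy the old contents, and free the old block.
void* Reallocate(void* old_ptr, size_t old_size, size_t new_size) {
  if (Resize(old_ptr, new_size)) {
    return old_ptr;
  }
  void* new_ptr = std::malloc(new_size);
  if (new_ptr == nullptr) {
    return nullptr;  // Allocation failed; the old block is left intact.
  }
  std::memcpy(new_ptr, old_ptr, old_size < new_size ? old_size : new_size);
  std::free(old_ptr);
  return new_ptr;
}
```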

Custom allocators can indicate which optional methods they implement and what optional behaviors they want from the base class by specifying Capabilities when invoking the base class constructor.

Measure memory usage

You can observe how much memory is being used for a particular use case using a TrackingAllocator.

struct CustomMetrics {
  PW_ALLOCATOR_METRICS_ENABLE(allocated_bytes);
  PW_ALLOCATOR_METRICS_ENABLE(peak_allocated_bytes);
};

constexpr pw::metric::Token kToken = PW_TOKENIZE_STRING("CustomMetrics");
pw::allocator::TrackingAllocatorImpl<CustomMetrics> tracker(kToken, allocator);

Metric data can be retrieved according to the steps described in Exporting metrics, or by using the Dump method of Group:

tracker.metric_group().Dump();

The CustomMetrics type used in the example above is a struct provided by the developer. You can create your own metrics structs that enable zero or more of the following metrics:

  • requested_bytes: The number of bytes currently requested from this allocator.

  • peak_requested_bytes: The most bytes requested from this allocator at any one time.

  • cumulative_requested_bytes: The total number of bytes that have been requested from this allocator across its lifetime.

  • allocated_bytes: The number of bytes currently allocated by this allocator.

  • peak_allocated_bytes: The most bytes allocated by this allocator at any one time.

  • cumulative_allocated_bytes: The total number of bytes that have been allocated by this allocator across its lifetime.

  • num_allocations: The number of allocation requests this allocator has successfully completed.

  • num_deallocations: The number of deallocation requests this allocator has successfully completed.

  • num_resizes: The number of resize requests this allocator has successfully completed.

  • num_reallocations: The number of reallocation requests this allocator has successfully completed.

  • num_failures: The number of requests this allocator has failed to complete.

If you only want a subset of these metrics, you can implement your own metrics struct. For example:

struct CustomMetrics {
  PW_ALLOCATOR_METRICS_ENABLE(allocated_bytes);
  PW_ALLOCATOR_METRICS_ENABLE(peak_allocated_bytes);
};

If you have multiple routines that share an underlying allocator that you want to track separately, you can create multiple tracking allocators that forward to the same underlying allocator:

using MyTrackingAllocator =
    pw::allocator::TrackingAllocatorImpl<pw::allocator::internal::AllMetrics>;

constexpr pw::metric::Token kToken0 = PW_TOKENIZE_STRING("Combined");
MyTrackingAllocator combined(kToken0, allocator);

constexpr pw::metric::Token kToken1 = PW_TOKENIZE_STRING("Tracker1");
MyTrackingAllocator tracker1(kToken1, combined);

constexpr pw::metric::Token kToken2 = PW_TOKENIZE_STRING("Tracker2");
MyTrackingAllocator tracker2(kToken2, combined);

combined.metric_group().Add(tracker1.metric_group());
combined.metric_group().Add(tracker2.metric_group());

Measure fragmentation

If you are using a BlockAllocator, you can use the MeasureFragmentation method to examine how fragmented the heap is. This method returns a Fragmentation struct, which includes the “sum of squares” and the sum of the inner sizes of the current free blocks. On a platform or host with floating point support, you can divide the square root of the sum of squares by the sum to obtain a number that ranges from 0 to 1, indicating maximal and minimal fragmentation, respectively. Subtracting this number from 1 gives a more intuitive “fragmentation score”.

For example, consider a heap consisting of the following blocks:

  • 100 bytes in use.

  • 200 bytes free.

  • 50 bytes in use.

  • 10 bytes free.

  • 200 bytes in use.

  • 300 bytes free.

For such a heap, MeasureFragmentation will return 130100 and 510. The above calculation gives a fragmentation score of 1 - sqrt(130100) / 510, which is approximately 0.29.
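The calculation above can be expressed directly. FragmentationScore is an illustrative helper working from the free-block inner sizes, not part of the pw_allocator API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Computes the fragmentation score described above from the inner sizes of
// the current free blocks: 1 - sqrt(sum of squares) / sum. A single free
// block scores 0 (no fragmentation); many tiny blocks approach 1.
double FragmentationScore(const std::vector<size_t>& free_block_sizes) {
  double sum = 0.0;
  double sum_of_squares = 0.0;
  for (size_t size : free_block_sizes) {
    sum += static_cast<double>(size);
    sum_of_squares += static_cast<double>(size) * static_cast<double>(size);
  }
  if (sum == 0.0) {
    return 0.0;  // No free blocks at all.
  }
  return 1.0 - std::sqrt(sum_of_squares) / sum;
}
```

For the example heap above, FragmentationScore({200, 10, 300}) reproduces the score of approximately 0.29.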

Detect memory corruption

The pw::allocator::Block class provides a few different mechanisms to help detect memory corruptions when they happen. First, on every deallocation it will check the integrity of the block header and assert if it has been modified.

Additionally, you can enable poisoning to detect additional memory corruptions such as use-after-frees. The Block type has a template parameter that enables the Poison and CheckPoison methods. Allocators can use these methods to “poison” blocks on deallocation with a set pattern, and later check on allocation that the pattern is intact. If it’s not, some routine has modified unallocated memory.

The BlockAllocator class has a kPoisonInterval template parameter to control how frequently blocks are poisoned on deallocation. This allows projects to stochastically sample allocations for memory corruptions while mitigating the performance impact.

// Poisons every third deallocation.
pw::allocator::LastFitBlockAllocator<uint16_t, 3> allocator(buffer);
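The poisoning scheme itself is simple to sketch in standard C++. These free functions are illustrative only, not the Block class's Poison and CheckPoison methods:

```cpp
#include <cstddef>
#include <cstdint>

// Arbitrary illustrative poison pattern.
constexpr uint8_t kPoison = 0xA5;

// Fills freed memory with the poison pattern.
void Poison(uint8_t* data, size_t size) {
  for (size_t i = 0; i < size; ++i) {
    data[i] = kPoison;
  }
}

// Verifies the pattern is intact before the memory is reused. A failure
// means some routine wrote to unallocated memory (e.g. a use-after-free).
bool CheckPoison(const uint8_t* data, size_t size) {
  for (size_t i = 0; i < size; ++i) {
    if (data[i] != kPoison) {
      return false;
    }
  }
  return true;
}
```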

Test custom allocators

If you create your own allocator implementation, it’s strongly recommended that you test it as well. If you’re creating a forwarding allocator, you can use AllocatorForTest. This simple allocator provides its own backing storage and automatically frees any outstanding allocations when it goes out of scope. It also tracks the most recent values provided as parameters to the interface methods.

For example, the following tests the custom allocator from Custom allocator implementations:

// TODO: b/328076428 - Use pwrev/193642.
TEST(CustomAllocatorExample, MakeUnique) {
  AllocatorForTest allocator;
  constexpr size_t kThreshold = sizeof(examples::NamedU32) * 3;
  examples::CustomAllocator custom(allocator, kThreshold);
  // log_basic::test::LogCache<32> logs;

  auto ptr1 = custom.MakeUnique<examples::NamedU32>("test", 111);
  auto ptr2 = custom.MakeUnique<examples::NamedU32>("test", 222);
  auto ptr3 = custom.MakeUnique<examples::NamedU32>("test", 333);
  // EXPECT_FALSE(logs.HasNext());

  auto ptr4 = custom.MakeUnique<examples::NamedU32>("test", 444);
  // ASSERT_TRUE(logs.HasNext());
  // log_basic::test::LogMessage log = logs.Next();
  // EXPECT_EQ(log.level, PW_LOG_LEVEL_INFO);

  // StringBuffer<40> expected;
  // expected << "more than " << kThreshold << " bytes allocated.";
  // EXPECT_STREQ(log.message.c_str(), expected.c_str());
}

You can also extend the TestHarness to perform pseudorandom sequences of allocations and deallocations, e.g. as part of a performance test:

#include <cstddef>

#include "examples/custom_allocator.h"
#include "pw_allocator/test_harness.h"
#include "pw_allocator/testing.h"

namespace examples {

class CustomAllocatorTestHarness : public pw::allocator::test::TestHarness<64> {
 public:
  static constexpr size_t kCapacity = 0x1000;
  static constexpr size_t kThreshold = 0x800;

  CustomAllocatorTestHarness() : custom_(allocator_, kThreshold) {}
  ~CustomAllocatorTestHarness() override = default;

  pw::Allocator* Init() override { return &custom_; }

 private:
  pw::allocator::test::AllocatorForTest<kCapacity> allocator_;
  CustomAllocator custom_;
};

}  // namespace examples
#include <cstdint>

#include "examples/custom_allocator_test_harness.h"
#include "pw_perf_test/perf_test.h"
#include "pw_perf_test/state.h"
#include "pw_random/xor_shift.h"

namespace examples {

void PerformAllocations(pw::perf_test::State& state, uint64_t seed) {
  static CustomAllocatorTestHarness harness;
  pw::random::XorShiftStarRng64 prng(seed);
  while (state.KeepRunning()) {
    harness.GenerateRequest(prng, CustomAllocatorTestHarness::kCapacity);
  }
  harness.Reset();
}

PW_PERF_TEST(CustomAllocatorPerfTest, PerformAllocations, 1);

}  // namespace examples

Even better, you can easily add fuzz tests for your allocator. This module uses the TestHarness to integrate with pw_fuzzer and provide FuzzTest support.

void NeverCrashes(const pw::Vector<pw::allocator::test::Request>& requests) {
  static examples::CustomAllocatorTestHarness harness;
  harness.HandleRequests(requests);
}

constexpr size_t kMaxRequests = 256;
constexpr size_t kMaxSize = examples::CustomAllocatorTestHarness::kCapacity / 2;
FUZZ_TEST(CustomAllocatorFuzzTest, NeverCrashes)
    .WithDomains(
        pw::allocator::test::ArbitraryRequests<kMaxRequests, kMaxSize>());

Measure custom allocator size

If you create your own allocator implementation, you may wish to measure its code size, similar to measurements in the module’s own Code size analysis. You can use pw_bloat and SizeReporter to create size reports as described in Defining size reports in GN.

For example, the C++ code for a size report binary might look like:

#include <cstdint>

#include "examples/custom_allocator.h"
#include "pw_allocator/first_fit_block_allocator.h"
#include "pw_allocator/size_reporter.h"

int main() {
  pw::allocator::SizeReporter reporter;
  reporter.SetBaseline();

  pw::allocator::FirstFitBlockAllocator<uint16_t> allocator(reporter.buffer());
  examples::CustomAllocator custom(allocator, 128);
  reporter.Measure(custom);

  return 0;
}
The resulting binary could be compared with the binary produced from pw_allocator/size_report/ to identify just the code added in this case by CustomAllocator.

For example, the GN build rule to generate a size report might look like:

pw_size_diff("size_report") {
  title = "Example size report"
  binaries = [
    {
      target = ":size_report"
      base = "$dir_pw_allocator/size_report:first_fit_block_allocator"
      label = "CustomAllocator"
    },
  ]
}

The size report produced by this rule renders as a table comparing the sizes of the two binaries.