API reference#
pw_allocator: Flexible, safe, and measurable memory allocation
This module provides the following:
Generic allocator interfaces that can be injected into routines that need dynamic memory. These include Allocator, as well as the Layout type that is passed to it and the UniquePtr returned from it.
Concrete allocator implementations used to provide memory dynamically.
“Forwarding” allocators, as described by Forwarding allocator concept.
Additional allocator utility classes. These are typically used by allocator implementers.
Test utilities for testing allocator implementations. These are typically used by allocator implementers.
Core interfaces#
This module defines several types as part of a generic interface for memory users.
Layout#
A request for memory includes a requested size and alignment as a Layout:
-
class Layout#
Describes the layout of a block of memory.
Layouts are passed to allocators, and consist of a (possibly padded) size and a power-of-two alignment no larger than the size. Layouts can be constructed for a type T using Layout::Of.
Example:

```cpp
struct MyStruct {
  uint8_t field1[3];
  uint32_t field2[3];
};
constexpr Layout layout_for_struct = Layout::Of<MyStruct>();
```
Allocator#
Allocator
is the most commonly used interface. It can be used to request
memory of different layouts:
-
class Allocator : public pw::allocator::Deallocator#
Abstract interface for variable-layout memory allocation.
The interface makes no guarantees about its implementation. Consumers of the generic interface must not make any assumptions around allocator behavior, thread safety, or performance.
Subclassed by pw::allocator::TrackingAllocator< internal::AllMetrics >, pw::allocator::BuddyAllocator< kMinChunkSize, kNumBuckets >, pw::allocator::BumpAllocator, pw::allocator::FallbackAllocator, pw::allocator::LibCAllocator, pw::allocator::NullAllocator, pw::allocator::SynchronizedAllocator< LockType >, pw::allocator::TrackingAllocator< MetricsType >, pw::allocator::internal::GenericBlockAllocator, pw::allocator::test::AllocatorForTest< kBufferSize, MetricsType >
Public Functions
-
inline void *Allocate(Layout layout)#
Allocates a block of memory with the specified size and alignment.
Returns nullptr if the allocation cannot be made, or if the layout has a size of 0.
- Parameters:
layout – [in] Describes the memory to be allocated.
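As a minimal sketch of raw allocation (the routine name and buffer size here are illustrative, and the header path assumes the standard pw_allocator include layout):

```cpp
#include <cstddef>

#include "pw_allocator/allocator.h"

// Hypothetical routine that receives an injected allocator.
void UseRawMemory(pw::allocator::Allocator& allocator) {
  constexpr pw::allocator::Layout kLayout(64, alignof(std::max_align_t));
  void* ptr = allocator.Allocate(kLayout);
  if (ptr == nullptr) {
    return;  // Allocation failed; handle the error.
  }
  // ... use the memory ...
  allocator.Deallocate(ptr);
}
```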
-
template<typename T, int&... ExplicitGuard, typename ...Args>
inline T *New(Args&&... args)#
Constructs an object of type T from the given args.
The return value is nullable, as allocating memory for the object may fail. Callers must check for this error before using the resulting pointer.
- Parameters:
args... – [in] Arguments passed to the object constructor.
-
template<typename T, int&... ExplicitGuard, typename ...Args>
inline UniquePtr<T> MakeUnique(Args&&... args)#
Constructs an object of type T from the given args, and wraps it in a UniquePtr.
The returned value may contain null if allocating memory for the object fails. Callers must check for null before using the UniquePtr.
- Parameters:
args... – [in] Arguments passed to the object constructor.
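A sketch of object construction via New and MakeUnique (the Point type and its values are illustrative assumptions):

```cpp
#include "pw_allocator/allocator.h"

struct Point {
  Point(int x, int y) : x_(x), y_(y) {}
  int x_;
  int y_;
};

// Hypothetical example: construct objects using an injected allocator.
void MakeObjects(pw::allocator::Allocator& allocator) {
  // New returns a nullable raw pointer; the caller owns it.
  Point* raw = allocator.New<Point>(1, 2);
  if (raw != nullptr) {
    allocator.Delete(raw);
  }

  // MakeUnique wraps the object in a UniquePtr that deallocates on scope exit.
  pw::allocator::UniquePtr<Point> owned = allocator.MakeUnique<Point>(3, 4);
  if (owned == nullptr) {
    return;  // Allocation failed.
  }
}
```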
-
inline bool Resize(void *ptr, size_t new_size)#
Modifies the size of a previously-allocated block of memory without copying any data.
Returns true if its size was changed without copying data to a new allocation; otherwise returns false.
In particular, it always returns true if the old_layout.size() equals new_size, and always returns false if the given pointer is null, the old_layout.size() is 0, or the new_size is 0.
- Parameters:
ptr – [in] Pointer to previously-allocated memory.
new_size – [in] Requested new size for the memory allocation.
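A sketch of attempting an in-place resize; whether the call succeeds depends on the concrete allocator, so both outcomes are handled (sizes here are illustrative):

```cpp
#include "pw_allocator/allocator.h"

// Hypothetical example: try to grow an allocation in place.
void GrowBuffer(pw::allocator::Allocator& allocator) {
  constexpr pw::allocator::Layout kLayout(128, 1);
  void* buffer = allocator.Allocate(kLayout);
  if (buffer == nullptr) {
    return;
  }
  if (allocator.Resize(buffer, 256)) {
    // The block now has at least 256 usable bytes; no data was copied.
  } else {
    // The block is unchanged and still holds 128 bytes.
  }
  allocator.Deallocate(buffer);
}
```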
-
inline bool Resize(void *ptr, Layout layout, size_t new_size)#
Deprecated version of Resize that takes a Layout. Do not use this method. It will be removed. TODO(b/326509341): Remove when downstream consumers migrate.
-
inline void *Reallocate(void *ptr, Layout new_layout)#
Modifies the size of a previously-allocated block of memory.
Returns a pointer to the modified block of memory, or nullptr if the memory could not be modified.
The data stored by the memory being modified must be trivially copyable. If it is not, callers should themselves attempt to Resize, then Allocate, move the data, and Deallocate as needed.
If nullptr is returned, the block of memory is unchanged. In particular, if the new_layout has a size of 0, the given pointer will NOT be deallocated.
TODO(b/331290408): This error condition needs to be better communicated to module users, who may assume the pointer is freed.
Unlike Resize, providing a null pointer will return a new allocation.
If the request can be satisfied using Resize, the alignment parameter may be ignored.
- Parameters:
ptr – [in] Pointer to previously-allocated memory.
new_layout – [in] Describes the memory to be allocated.
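A sketch of Reallocate with the failure path handled explicitly (sizes and the fill pattern are illustrative; note the data must be trivially copyable):

```cpp
#include <cstdint>
#include <cstring>

#include "pw_allocator/allocator.h"

// Hypothetical example: grow an allocation, possibly moving its contents.
void Renew(pw::allocator::Allocator& allocator) {
  const pw::allocator::Layout old_layout(64, alignof(uint32_t));
  void* ptr = allocator.Allocate(old_layout);
  if (ptr == nullptr) {
    return;
  }
  std::memset(ptr, 0xAB, 64);

  // On success, the first 64 bytes are preserved, possibly at a new address.
  void* bigger =
      allocator.Reallocate(ptr, pw::allocator::Layout(128, alignof(uint32_t)));
  if (bigger == nullptr) {
    // Failure leaves the original block valid and unchanged.
    allocator.Deallocate(ptr);
    return;
  }
  allocator.Deallocate(bigger);
}
```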
-
inline void *Reallocate(void *ptr, Layout old_layout, size_t new_size)#
Deprecated version of Reallocate that takes a Layout. Do not use this method. It will be removed. TODO(b/326509341): Remove when downstream consumers migrate.
-
inline Result<Layout> GetLayout(const void *ptr) const#
Returns the layout used to allocate a given pointer.
NOTE: This method will eventually be deprecated. Use GetAllocatedLayout instead.
- Returns:
Code
Description
Returns the actual layout of allocated memory.
The allocator does not recognize the pointer as one of its allocations.
Allocator cannot recover allocation details.
Pool#
Pool
differs from Allocator
in that it can be used to request memory of
a single, fixed layout:
-
class Pool : public pw::allocator::Deallocator#
Abstract interface for fixed-layout memory allocation.
The interface makes no guarantees about its implementation. Consumers of the generic interface must not make any assumptions around allocator behavior, thread safety, or performance.
Subclassed by pw::allocator::AllocatorAsPool, pw::allocator::ChunkPool
Public Functions
-
inline void *Allocate()#
Returns a chunk of memory with this object’s fixed layout.
Like pw::allocator::Allocate, returns null if memory is exhausted.
- Returns:
The allocated memory, or null if the pool is exhausted.
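A sketch of drawing fixed-size chunks from a Pool (the routine name is illustrative, and the header path assumes the standard pw_allocator include layout):

```cpp
#include "pw_allocator/pool.h"

// Hypothetical routine that receives an injected pool.
void UseChunks(pw::allocator::Pool& pool) {
  void* chunk = pool.Allocate();  // No Layout argument: the layout is fixed.
  if (chunk == nullptr) {
    return;  // Pool exhausted.
  }
  // ... use the chunk ...
  pool.Deallocate(chunk);
}
```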
Deallocator#
Both Allocator
and Pool
derive from and extend Deallocator
. This
type is intended for allocator implementers and not for module consumers.
-
class Deallocator#
Abstract interface for releasing memory.
Subclassed by pw::allocator::Allocator, pw::allocator::Pool
Public Functions
-
inline bool HasCapability(Capability capability) const#
Returns whether a given capability is enabled for this object.
-
inline void Deallocate(void *ptr)#
Releases a previously-allocated block of memory.
The given pointer must have been previously provided by this memory resource; otherwise the behavior is undefined.
- Parameters:
ptr – [in] Pointer to previously-allocated memory.
-
inline void Deallocate(void *ptr, Layout layout)#
Deprecated version of Deallocate that takes a Layout. Do not use this method. It will be removed. TODO(b/326509341): Remove when downstream consumers migrate.
-
template<typename T>
inline void Delete(T *ptr)#
Destroys the object at ptr and deallocates the associated memory.
The given pointer must have been previously obtained from a call to New using the same object; otherwise the behavior is undefined.
- Parameters:
ptr – [in] Pointer to previously-allocated object.
-
inline StatusWithSize GetCapacity() const#
Returns the total amount of memory provided by this object.
This is an optional method. Some memory providers may not have an easily defined capacity, e.g. the system allocator. If implemented, the returned capacity may be less than the memory originally given to an allocator, e.g. if the allocator must align the region of memory, its capacity may be reduced.
-
inline bool IsEqual(const Deallocator &other) const#
Returns whether the given object is the same as this one.
This method is used instead of operator== in keeping with std::pmr::memory_resource::is_equal. There currently is no corresponding virtual DoIsEqual, as objects that would require dynamic_cast to properly determine equality are not supported. This method will allow the interface to remain unchanged should a future need for such objects arise.
- Parameters:
other – [in] Object to compare with this object.
Capabilities#
Types deriving from MemoryResource
can communicate about their optional
methods and behaviors using Capabilities
. This type is intended for
allocator implementers and not for module consumers.
-
class Capabilities#
A collection of Capability values.
Concrete allocators should declare a constant set of capabilities, and pass it to the Allocator constructor:

```cpp
class MyConcreteAllocator : public Allocator {
 public:
  static constexpr Capabilities kCapabilities = kCapability1 | kCapability2;

  MyConcreteAllocator() : Allocator(kCapabilities) {}
};
```

Forwarding allocators should pass the underlying allocator’s capabilities, potentially with modifications:

```cpp
class MyForwardingAllocator : public Allocator {
 public:
  MyForwardingAllocator(Allocator& allocator)
      : Allocator(allocator.capabilities() | kCapability3),
        allocator_(allocator) {}
};
```
UniquePtr#
The UniquePtr
smart pointer type can be created by any type deriving from
MemoryResource
.
-
template<typename T>
class UniquePtr : public pw::allocator::internal::BaseUniquePtr#
An RAII pointer to a value of type T stored in memory provided by a Deallocator.
This is analogous to std::unique_ptr, but includes a few differences in order to support Deallocator and encourage safe usage. Most notably, UniquePtr<T> cannot be constructed from a T*.
Public Functions
-
inline constexpr UniquePtr()#
Creates an empty (nullptr) instance.
NOTE: Instances of this type are most commonly constructed using Deallocator::MakeUnique.
-
inline constexpr UniquePtr(std::nullptr_t)#
Creates an empty (nullptr) instance.
NOTE: Instances of this type are most commonly constructed using Deallocator::MakeUnique.
-
template<typename U>
inline UniquePtr(UniquePtr<U> &&other) noexcept#
Move-constructs a UniquePtr<T> from a UniquePtr<U>.
This allows not only pure move construction where T == U, but also converting construction where T is a base class of U, like UniquePtr<Base> base(deallocator.MakeUnique<Child>());.
-
template<typename U>
inline UniquePtr &operator=(UniquePtr<U> &&other) noexcept#
Move-assigns a UniquePtr<T> from a UniquePtr<U>.
This operation destructs and deallocates any value currently stored in this.
This allows not only pure move assignment where T == U, but also converting assignment where T is a base class of U, like UniquePtr<Base> base = deallocator.MakeUnique<Child>();.
-
inline UniquePtr &operator=(std::nullptr_t)#
Sets this UniquePtr to null, destructing and deallocating any currently-held value.
After this function returns, this UniquePtr will be in an “empty” (nullptr) state until a new value is assigned.
-
inline ~UniquePtr()#
Destructs and deallocates any currently-held value.
-
inline Deallocator *deallocator() const#
Returns a pointer to the object that can destroy the value.
-
inline T *Release()#
Releases a value from the UniquePtr without destructing or deallocating it.
After this call, the object will have an “empty” (nullptr) value.
-
inline void Reset()#
Destructs and deallocates any currently-held value.
After this function returns, this UniquePtr will be in an “empty” (nullptr) state until a new value is assigned.
-
explicit operator bool() const = delete#
operator bool is not provided in order to ensure that there is no confusion surrounding if (foo) vs. if (*foo). nullptr checking should instead use if (foo == nullptr).
-
inline bool operator==(std::nullptr_t) const#
Returns whether this UniquePtr is in an “empty” (nullptr) state.
-
inline bool operator!=(std::nullptr_t) const#
Returns whether this UniquePtr is not in an “empty” (nullptr) state.
-
inline T *operator->()#
Permits accesses to members of T via my_unique_ptr->Member.
The behavior of this operation is undefined if this UniquePtr is in an “empty” (nullptr) state.
-
inline T &operator*()#
Returns a reference to any underlying value.
The behavior of this operation is undefined if this UniquePtr is in an “empty” (nullptr) state.
-
inline UniquePtr(PrivateConstructorType, T *value, Deallocator *deallocator)#
Private constructor that is public only for use with emplace and other in-place construction functions.
Constructs a UniquePtr from an already-allocated value.
NOTE: Instances of this type are most commonly constructed using Deallocator::MakeUnique.
Allocator implementations#
This module provides several concrete allocator implementations of the Allocator interface:
BlockAllocator#
-
template<typename OffsetType, uint16_t kPoisonInterval, uint16_t kAlign>
class BlockAllocator : public pw::allocator::internal::GenericBlockAllocator#
A memory allocator that uses a list of blocks.
This class does not implement ChooseBlock and cannot be used directly. Instead, use one of its specializations.
NOTE!! Do NOT use memory returned from this allocator as the backing for another allocator. If this is done, the Query method may incorrectly think pointers returned by that allocator were created by this one, and report that this allocator can de/reallocate them.
Subclassed by pw::allocator::BestFitBlockAllocator< OffsetType, kPoisonInterval, kAlign >, pw::allocator::DualFirstFitBlockAllocator< OffsetType, kPoisonInterval, kAlign >, pw::allocator::FirstFitBlockAllocator< OffsetType, kPoisonInterval, kAlign >, pw::allocator::LastFitBlockAllocator< OffsetType, kPoisonInterval, kAlign >, pw::allocator::WorstFitBlockAllocator< OffsetType, kPoisonInterval, kAlign >
Public Functions
-
inline constexpr BlockAllocator()#
Constexpr constructor. Callers must explicitly call Init.
-
inline explicit BlockAllocator(ByteSpan region)#
Non-constexpr constructor that automatically calls Init.
Errors are fatal.
- Parameters:
region – [in] The memory region for this allocator.
-
Range blocks() const#
Returns a Range of blocks tracking the memory of this allocator.
-
Status Init(ByteSpan region)#
Sets the memory region to be used by this allocator.
This method will instantiate an initial block using the memory region.
- Parameters:
region – [in] The memory region for this allocator.
- Returns:
Code
Description
The allocator is initialized.
The memory region is null.
The region is too small for BlockType.
The region is too large for BlockType.
-
Status Init(BlockType *begin, BlockType *end = nullptr)#
Sets the blocks to be used by this allocator.
This method will use the sequence of blocks as-is, which must be valid. If end is not provided, the sequence extends to a block marked “last”.
- Parameters:
region – [in] The memory region for this allocator.
- Returns:
Code
Description
The allocator is initialized.
The block sequence is empty.
-
void Reset()#
Resets the allocator to an uninitialized state.
At the time of the call, there MUST NOT be any outstanding allocated blocks from this allocator.
FirstFitBlockAllocator#
-
template<typename OffsetType = uintptr_t, size_t kPoisonInterval = 0, size_t kAlign = alignof(OffsetType)>
class FirstFitBlockAllocator : public pw::allocator::BlockAllocator<uintptr_t, 0, alignof(uintptr_t)>#
Block allocator that uses a “first-fit” allocation strategy.
In this strategy, the allocator handles an allocation request by starting at the beginning of the range of blocks and looking for the first one which can satisfy the request.
This strategy may result in slightly worse fragmentation than the corresponding “last-fit” strategy, since the alignment may result in unused fragments both before and after an allocated block.
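A sketch of setting up a first-fit allocator over a static buffer (the buffer size and header path are illustrative assumptions; the default template parameters are used):

```cpp
#include <array>
#include <cstddef>

#include "pw_allocator/block_allocator.h"

// Hypothetical setup: a first-fit block allocator backed by a static buffer.
std::array<std::byte, 0x1000> buffer;
pw::allocator::FirstFitBlockAllocator<> allocator(buffer);
```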
LastFitBlockAllocator#
-
template<typename OffsetType = uintptr_t, size_t kPoisonInterval = 0, size_t kAlign = alignof(OffsetType)>
class LastFitBlockAllocator : public pw::allocator::BlockAllocator<uintptr_t, 0, alignof(uintptr_t)>#
Block allocator that uses a “last-fit” allocation strategy.
In this strategy, the allocator handles an allocation request by starting at the end of the range of blocks and looking for the last one which can satisfy the request.
This strategy may result in slightly better fragmentation than the corresponding “first-fit” strategy, since even with alignment it will result in at most one unused fragment before the allocated block.
BestFitBlockAllocator#
-
template<typename OffsetType = uintptr_t, size_t kPoisonInterval = 0, size_t kAlign = alignof(OffsetType)>
class BestFitBlockAllocator : public pw::allocator::BlockAllocator<uintptr_t, 0, alignof(uintptr_t)>#
Block allocator that uses a “best-fit” allocation strategy.
In this strategy, the allocator handles an allocation request by looking at all unused blocks and finding the smallest one which can satisfy the request.
This algorithm may make better use of available memory by wasting less on unused fragments, but may also lead to worse fragmentation as those fragments are more likely to be too small to be useful to other requests.
WorstFitBlockAllocator#
-
template<typename OffsetType = uintptr_t, size_t kPoisonInterval = 0, size_t kAlign = alignof(OffsetType)>
class WorstFitBlockAllocator : public pw::allocator::BlockAllocator<uintptr_t, 0, alignof(uintptr_t)>#
Block allocator that uses a “worst-fit” allocation strategy.
In this strategy, the allocator handles an allocation request by looking at all unused blocks and finding the biggest one which can satisfy the request.
This algorithm may lead to less fragmentation as any unused fragments are more likely to be large enough to be useful to other requests.
DualFirstFitBlockAllocator#
-
template<typename OffsetType = uintptr_t, size_t kPoisonInterval = 0, size_t kAlign = alignof(OffsetType)>
class DualFirstFitBlockAllocator : public pw::allocator::BlockAllocator<uintptr_t, 0, alignof(uintptr_t)>#
Block allocator that uses a “dual first-fit” allocation strategy split between large and small allocations.
This strategy includes a threshold value. Requests for more than this threshold are handled similarly to FirstFit, while requests for less than this threshold are handled similarly to LastFit.
This algorithm approaches the performance of FirstFit and LastFit while improving on those algorithms’ fragmentation.
Public Functions
-
inline void set_threshold(size_t threshold)#
Sets the threshold value for which requests are considered “large”.
BuddyAllocator#
-
template<size_t kMinChunkSize, size_t kNumBuckets>
class BuddyAllocator : public pw::allocator::Allocator#
Allocator that uses the buddy memory allocation algorithm.
This allocator allocates chunks of memory whose sizes are powers of two. This allows the allocator to satisfy requests to acquire and release memory very quickly, at the possible cost of higher internal fragmentation. In particular:
The maximum alignment for this allocator is kMinChunkSize.
The minimum size of an allocation is kMinChunkSize. Less may be requested, but it will be satisfied by a minimal chunk.
The maximum size of an allocation is kMinChunkSize << (kNumBuckets - 1).
Use this allocator if you know the needed sizes are close to but less than chunk sizes and you need high allocator performance.
- Template Parameters:
kMinChunkSize – Size of the smallest allocatable chunk. Must be a power of two. All allocations will use at least this much memory.
kNumBuckets – Number of buckets. Must be at least 1. Each additional bucket allows combining chunks into larger chunks.
Public Functions
-
inline BuddyAllocator(ByteSpan region)#
Construct an allocator.
- Parameters:
region – [in] Region of memory to use when satisfying allocation requests. The region must be large enough to fit at least one minimally-sized chunk aligned to the size of the chunk.
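A sketch of the chunk-size arithmetic: with kMinChunkSize = 16 and kNumBuckets = 5, allocations range from 16 bytes up to 16 << (5 - 1) == 256 bytes (the buffer size and header path are illustrative assumptions):

```cpp
#include <array>
#include <cstddef>

#include "pw_allocator/buddy_allocator.h"

// Hypothetical setup: chunks from 16 bytes up to 16 << (5 - 1) == 256 bytes.
std::array<std::byte, 1024> buffer;
pw::allocator::BuddyAllocator<16, 5> allocator(buffer);
```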
BumpAllocator#
-
class BumpAllocator : public pw::allocator::Allocator#
Allocator that does not automatically delete.
A “bump” or “arena” allocator provides memory by simply incrementing a pointer to a memory region in order to “allocate” memory for requests, and doing nothing on deallocation. All memory is freed only when the allocator itself is destroyed. As a result, this allocator is extremely fast.
Bump allocators are useful for short-lived allocation of objects from small buffers with almost zero overhead. Since memory is not deallocated, a bump allocator with a longer lifespan than any of its allocations will end up holding unusable memory. An example of a good use case might be decoding RPC messages that may require a variable amount of space only for as long as it takes to decode a single message.
On destruction, the destructors for any objects allocated using New or MakeUnique are NOT called. To have these destructors invoked, you can allocate “owned” objects using NewOwned and MakeUniqueOwned. This adds a small amount of overhead to the allocation.
Public Functions
-
template<typename T, int&... ExplicitGuard, typename ...Args>
inline T *NewOwned(Args&&... args)#
Constructs an “owned” object of type T from the given args.
Owned objects will have their destructors invoked when the allocator goes out of scope.
The return value is nullable, as allocating memory for the object may fail. Callers must check for this error before using the resulting pointer.
- Parameters:
args... – [in] Arguments passed to the object constructor.
-
template<typename T, int&... ExplicitGuard, typename ...Args>
inline UniquePtr<T> MakeUniqueOwned(Args&&... args)#
Constructs an object of type T from the given args, and wraps it in a UniquePtr.
Owned objects will have their destructors invoked when the allocator goes out of scope.
The returned value may contain null if allocating memory for the object fails. Callers must check for null before using the UniquePtr.
- Parameters:
args... – [in] Arguments passed to the object constructor.
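A sketch contrasting New with NewOwned on a bump allocator (the Message type, buffer size, and header path are illustrative assumptions):

```cpp
#include <array>
#include <cstddef>

#include "pw_allocator/bump_allocator.h"

struct Message {
  ~Message() { /* release resources */ }
};

void Decode() {
  std::array<std::byte, 256> buffer;
  pw::allocator::BumpAllocator allocator(buffer);

  // Plain New: the destructor will NOT run when the allocator is destroyed.
  Message* transient = allocator.New<Message>();

  // NewOwned: the destructor runs when `allocator` goes out of scope.
  Message* owned = allocator.NewOwned<Message>();
  (void)transient;
  (void)owned;
}  // `allocator` is destroyed here; only `owned` is destructed.
```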
ChunkPool#
-
class ChunkPool : public pw::allocator::Pool#
Implementation of Pool that uses a list of free chunks.
The first sizeof(void*) bytes of each free chunk are used to store a pointer to the next free chunk, or null for the last free chunk.
Subclassed by pw::allocator::TypedPool< T >
Public Functions
-
ChunkPool(ByteSpan region, const Layout &layout)#
Construct a Pool that allocates from a region of memory.
- Parameters:
region – The memory to allocate from. Must be large enough to allocate at least one chunk with the given layout.
layout – The size and alignment of the memory to be returned from this pool.
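A sketch of constructing a ChunkPool over a static buffer (the chunk size, alignment, buffer size, and header path are illustrative assumptions):

```cpp
#include <array>
#include <cstddef>

#include "pw_allocator/chunk_pool.h"

// Hypothetical setup: a pool of 16-byte, 4-byte-aligned chunks.
std::array<std::byte, 256> buffer;
pw::allocator::ChunkPool pool(buffer, pw::allocator::Layout(16, 4));
```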
LibCAllocator#
NullAllocator#
-
class NullAllocator : public pw::allocator::Allocator#
A memory allocator that always fails to allocate memory.
A null allocator may be useful as part of a larger framework if allocation should be disallowed under certain circumstances. For example, a function that returns different allocators based on an input parameter may return a null allocator when given an invalid or unsupported parameter value.
TypedPool#
-
template<typename T>
class TypedPool : public pw::allocator::ChunkPool#
Typed pool that can be used for slab allocation.
This class is a special-purpose pool designed to allocate objects of one specific type, T. It is useful when you need a dynamic pool of objects with very low performance and memory overhead costs. For example, a dispatcher might use such an allocator to manage memory for a set of task objects.
- Template Parameters:
T – The type of object to allocate memory for.
Public Functions
-
template<size_t kNumObjects>
inline TypedPool(Buffer<kNumObjects> &buffer)#
Construct a TypedPool.
This constructor uses the Buffer type to minimize wasted memory. For example:

```cpp
TypedPool<MyObject>::Buffer<100> buffer;
TypedPool<MyObject> my_pool(buffer);
```
- Parameters:
buffer – The memory to allocate from.
-
inline TypedPool(ByteSpan region)#
Construct a TypedPool.
To minimize wasted memory, align the buffer provided to the allocator to the object type’s alignment. For example:

```cpp
alignas(MyObject) std::array<std::byte, 0x1000> buffer;
TypedPool<MyObject> my_pool(buffer);
```
- Parameters:
region – The memory to allocate from. Must be large enough to allocate memory for at least one object.
-
template<int&... ExplicitGuard, typename ...Args>
inline T *New(Args&&... args)#
Constructs an object from the given args.
This method is similar to Allocator::New, except that it is specific to the pool’s object type.
- Parameters:
args... – [in] Arguments passed to the object constructor.
-
template<int&... ExplicitGuard, typename ...Args>
inline UniquePtr<T> MakeUnique(Args&&... args)#
Constructs an object from the given args, and wraps it in a UniquePtr.
This method is similar to Allocator::MakeUnique, except that it is specific to the pool’s object type.
- Parameters:
args... – [in] Arguments passed to the object constructor.
Public Static Functions
-
static inline constexpr size_t SizeNeeded(size_t num_objects)#
Returns the amount of memory needed to allocate num_objects.
-
static inline constexpr size_t AlignmentNeeded()#
Returns the optimal alignment for the backing memory region.
-
template<size_t kNumObjects>
struct Buffer#
Provides aligned storage for kNumObjects of type T.
Forwarding Allocators#
This module provides several “forwarding” allocators, as described in Forwarding allocator concept.
AllocatorAsPool#
FallbackAllocator#
-
class FallbackAllocator : public pw::allocator::Allocator#
This class dispatches between a primary and secondary allocator. Any attempt to allocate memory will first be handled by the primary allocator. If it cannot allocate memory, e.g. because it is out of memory, the secondary allocator will try to allocate memory instead.
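A sketch of composing a fallback allocator from two block allocators (the constructor signature, buffer sizes, and header paths are illustrative assumptions):

```cpp
#include <array>
#include <cstddef>

#include "pw_allocator/block_allocator.h"
#include "pw_allocator/fallback_allocator.h"

// Hypothetical setup: try a small region first, then fall back to a larger one.
std::array<std::byte, 256> primary_buffer;
std::array<std::byte, 1024> secondary_buffer;
pw::allocator::FirstFitBlockAllocator<> primary(primary_buffer);
pw::allocator::FirstFitBlockAllocator<> secondary(secondary_buffer);
pw::allocator::FallbackAllocator allocator(primary, secondary);
```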
SynchronizedAllocator#
-
template<typename LockType>
class SynchronizedAllocator : public pw::allocator::Allocator#
Wraps an Allocator with a lock to synchronize access.
Depending on the LockType, this object may be thread- and/or interrupt-safe. For example, SynchronizedAllocator<pw::sync::Mutex> is thread-safe, while SynchronizedAllocator<pw::sync::InterruptSpinLock> is thread- and interrupt-safe.
- Template Parameters:
LockType – The type of the lock used to synchronize allocator access. Must be default-constructible.
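A sketch of wrapping a block allocator for thread- and interrupt-safe use (the constructor signature, buffer size, and header paths are illustrative assumptions):

```cpp
#include <array>
#include <cstddef>

#include "pw_allocator/block_allocator.h"
#include "pw_allocator/synchronized_allocator.h"
#include "pw_sync/interrupt_spin_lock.h"

// Hypothetical setup: share a block allocator across threads and interrupts.
std::array<std::byte, 0x400> buffer;
pw::allocator::FirstFitBlockAllocator<> base(buffer);
pw::allocator::SynchronizedAllocator<pw::sync::InterruptSpinLock> allocator(
    base);
```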
TrackingAllocator#
-
template<typename MetricsType>
class TrackingAllocator : public pw::allocator::Allocator#
Wraps an Allocator and records details of its usage.
Metric collection is performed using the provided template parameter type. Callers cannot instantiate this class directly, as it lacks a public constructor. Instead, callers should use derived classes which provide the template parameter type, such as TrackingAllocator which uses the default metrics implementation, or TrackingAllocatorForTest which always uses the real metrics implementation.
If the requested_bytes metric is enabled, then this allocator will add overhead to each allocation. It uses one of two layouts:
If the underlying allocator implements GetAllocated, this allocator will store the requested size of an allocation after that allocation’s usable space. For example, assume Layout{.size=256, .alignment=16}, sizeof(size_t) == 4, and that ‘U’ indicates “usable space”. An allocation might look like:

```
..1f0 | UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU
..... | ...
..2e0 | UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU
..2f0 | 00 00 01 00 .. .. .. .. .. .. .. .. .. .. .. ..  // size suffix
```

If the underlying allocator does NOT implement GetAllocated, this allocator will store a Layout of the requested bytes and alignment before the usable space. This is more expensive, as alignment bytes must be added to keep the usable space aligned. For example, with the same assumptions as before and ‘x’ indicating padding, an allocation might look like:

```
..1f0 | xx xx xx xx xx xx xx xx 00 00 01 00 00 00 00 10
..2e0 | UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU
..... | ...
..2f0 | UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU UU
```

If the requested_bytes metric is disabled, no additional overhead will be added.
Utility Classes#
In addition to providing allocator implementations themselves, this module includes some utility classes.
Block#
-
template<typename OffsetType = uintptr_t, size_t kAlign = alignof(OffsetType), bool kCanPoison = false>
class Block#
Memory region with links to adjacent blocks.
The blocks do not encode their size directly. Instead, they encode offsets to the next and previous blocks using the type given by the OffsetType template parameter. The encoded offsets are simply the offsets divided by the minimum block alignment, kAlignment.
The kAlignment constant provided by the derived block is typically the minimum value of alignof(OffsetType). Since the addressable range of a block is given by std::numeric_limits<OffsetType>::max() * kAlignment, it may be advantageous to set a higher alignment if it allows using a smaller offset type, even if this wastes some bytes in order to align block headers.
Blocks will always be aligned to a kAlignment boundary. Block sizes will always be rounded up to a multiple of kAlignment.
If kCanPoison is set, allocators may call Poison to overwrite the contents of a block with a poison pattern. This pattern will subsequently be checked when allocating blocks, and can detect memory corruptions such as use-after-frees.
As an example, the diagram below represents two contiguous Block<uint32_t, true, 8>s. The indices indicate byte offsets:

```
Block 1:
+---------------------+------+--------------+
| Header              | Info | Usable space |
+----------+----------+------+--------------+
| Prev     | Next     |      |              |
| 0......3 | 4......7 | 8..9 | 10.......280 |
| 00000000 | 00000046 | 8008 | <app data>   |
+----------+----------+------+--------------+
Block 2:
+---------------------+------+--------------+
| Header              | Info | Usable space |
+----------+----------+------+--------------+
| Prev     | Next     |      |              |
| 0......3 | 4......7 | 8..9 | 10......1056 |
| 00000046 | 00000106 | 6008 | f7f7....f7f7 |
+----------+----------+------+--------------+
```

The overall size of the block (e.g. 280 bytes) is given by its next offset multiplied by the alignment (e.g. 0x46 * 4). Also, the next offset of a block matches the previous offset of its next block. The first block in a list is denoted by having a previous offset of 0.
- Template Parameters:
OffsetType – Unsigned integral type used to encode offsets. Larger types can address more memory, but consume greater overhead.
kCanPoison – Indicates whether to enable poisoning free blocks.
kAlign – Sets the overall alignment for blocks. Minimum is alignof(OffsetType) (the default). Larger values can address more memory, but consume greater overhead.
Public Functions
-
inline size_t OuterSize() const#
- Returns:
The total size of the block in bytes, including the header.
-
inline size_t InnerSize() const#
- Returns:
The number of usable bytes inside the block.
-
inline std::byte *UsableSpace()#
- Returns:
A pointer to the usable space inside this block.
-
Status CanAllocFirst(size_t inner_size, size_t alignment) const#
Checks if an aligned block could be split from the start of the block.
This method will return the same status as AllocFirst without performing any modifications.
- Pre:
The block must not be in use.
- Returns:
Code
Description
The split would complete successfully.
This block is in use and cannot be split.
The requested size plus padding needed for alignment is greater than the current size.
-
Status CanAllocLast(size_t inner_size, size_t alignment) const#
Checks if an aligned block could be split from the end of the block.
This method will return the same status as AllocLast without performing any modifications.
- Pre:
The block must not be in use.
- Returns:
  - The split would complete successfully.
  - This block is in use and cannot be split.
  - The requested size is greater than the current size.
  - The remaining space is too small to hold a new block.
-
Block *Next() const#
Fetches the block immediately after this one.
For performance, this always returns a block pointer, even if the returned pointer is invalid. The pointer is valid if and only if
Last()
is false. Typically, after calling Init, callers may save a pointer past the end of the list using Next(). This makes it easy to subsequently iterate over the list:
auto result = Block<>::Init(byte_span);
Block<>* begin = *result;
Block<>* end = begin->Next();
...
for (auto* block = begin; block != end; block = block->Next()) {
  // Do something with each block.
}
-
Block *Prev() const#
- Returns:
The block immediately before this one, or a null pointer if this is the first block.
-
inline size_t Alignment() const#
Returns the current alignment of a block.
-
inline bool Used() const#
Indicates whether the block is in use.
- Returns:
true if the block is in use, or false if not.
-
inline bool Last() const#
Indicates whether this block is the last block or not (i.e. whether
Next()
points to a valid block or not). This is needed becauseNext()
points to the end of this block, whether there is a valid block there or not.- Returns:
true if this is the last block, or false if not.
-
inline void MarkUsed()#
Marks this block as in use.
-
inline void MarkFree()#
Marks this block as free.
-
inline void MarkLast()#
Marks this block as the last one in the chain.
-
inline void ClearLast()#
Clears the last bit from this block.
-
void Poison(bool should_poison = true)#
Poisons the block’s usable space.
This method does nothing if
kCanPoison
is false, or if the block is in use, or if should_poison
is false. The decision to poison a block is deferred to the allocator to allow for more nuanced strategies than simply all or nothing. For example, an allocator may want to balance security and performance by only poisoning every n-th free block.- Parameters:
should_poison – Indicates that the block should be poisoned, if poisoning is enabled.
-
inline bool IsValid() const#
Checks if a block is valid.
- Returns:
true if and only if the following conditions are met:
  - The block is aligned.
  - The prev/next fields match those of the previous and next blocks.
  - The poisoned bytes are not damaged (if poisoning is enabled).
-
void CrashIfInvalid() const#
Crashes with an informational message if a block is invalid.
Does nothing if the block is valid.
Public Static Functions
-
static Result<Block*> Init(ByteSpan region)#
Creates the first block for a given memory region.
- Returns:
  - Returns a block representing the region.
  - The region is null.
  - The region is too small for a block.
  - The region is too big to be addressed using OffsetType.
-
template<int&... DeducedTypesOnly, typename PtrType, bool is_const_ptr = std::is_const_v<std::remove_pointer_t<PtrType>>, typename BytesPtr = std::conditional_t<is_const_ptr, const std::byte*, std::byte*>, typename BlockPtr = std::conditional_t<is_const_ptr, const Block*, Block*>>
static inline BlockPtr FromUsableSpace(PtrType usable_space)#
This is the inverse of
UsableSpace()
.Warning
This method does not do any checking; passing a random pointer will return a non-null pointer.
- Returns:
A pointer to a
Block
, given a pointer to the start of the usable space inside the block.
-
static Status AllocFirst(Block *&block, size_t inner_size, size_t alignment)#
Splits an aligned block from the start of the block, and marks it as used.
If successful,
block
will be replaced by a block that has an inner size of at leastinner_size
, and whose starting address is aligned to analignment
boundary. If unsuccessful,block
will be unmodified.This method is static in order to consume and replace the given block pointer with a pointer to the new, smaller block. In total, up to two additional blocks may be created: one to pad the returned block to an alignment boundary and one for the trailing space.
- Pre:
The block must not be in use.
- Returns:
  - The split completed successfully.
  - This block is in use and cannot be split.
  - The requested size plus padding needed for alignment is greater than the current size.
-
static Status AllocLast(Block *&block, size_t inner_size, size_t alignment)#
Splits an aligned block from the end of the block, and marks it as used.
If successful,
block
will be replaced by a block that has an inner size of at leastinner_size
, and whose starting address is aligned to analignment
boundary. If unsuccessful,block
will be unmodified.This method is static in order to consume and replace the given block pointer with a pointer to the new, smaller block. An additional block may be created for the leading space.
- Pre:
The block must not be in use.
- Returns:
  - The split completed successfully.
  - This block is in use and cannot be split.
  - The requested size is greater than the current size.
  - The remaining space is too small to hold a new block.
-
static void Free(Block *&block)#
Marks the block as free and merges it with any free neighbors.
This method is static in order to consume and replace the given block pointer. If neither neighbor is free, the returned pointer will point to the original block. Otherwise, it will point to the new, larger block created by merging adjacent free blocks together.
-
static Status Resize(Block *&block, size_t new_inner_size)#
Grows or shrinks the block.
If successful,
block
may be merged with the block after it in order to provide additional memory (when growing) or to merge released memory (when shrinking). If unsuccessful,block
will be unmodified.This method is static in order to consume and replace the given block pointer with a pointer to the new, smaller block.
- Pre:
The block must be in use.
- Returns:
  - The resize completed successfully.
  - This block is not in use.
  - The requested size is greater than the available space.
-
static Result<Block*> Split(Block *&block, size_t new_inner_size)#
Attempts to split this block.
If successful, the block will have an inner size of
new_inner_size
, rounded up to akAlignment
boundary. The remaining space will be returned as a new block.This method may fail if the remaining space is too small to hold a new block. If this method fails for any reason, the original block is unmodified.
This method is static in order to consume and replace the given block pointer with a pointer to the new, smaller block.
- Pre:
The block must not be in use.
- Returns:
  - The split completed successfully.
  - This block is in use and cannot be split.
  - The requested size for this block is greater than the current inner_size.
  - The remaining space is too small to hold a new block.
-
static Status MergeNext(Block *&block)#
Merges this block with the one that comes after it.
This method is static in order to consume and replace the given block pointer with a pointer to the new, larger block.
- Pre:
The blocks must not be in use.
- Returns:
  - The merge was successful.
  - The given block is the last block.
  - One or more of the blocks is in use.
-
class Iterator#
Represents an iterator that moves forward through a list of blocks.
This class is not typically instantiated directly, but rather using a range-based for-loop using
Block::Range
.
-
class Range#
Represents a range of blocks that can be iterated over.
The typical usage of this class is in a range-based for-loop, e.g.
for (auto* block : Range(first, last)) { ... }
-
class ReverseIterator#
Represents an iterator that moves backward through a list of blocks.
This class is not typically instantiated directly, but rather using a range-based for-loop using
Block::ReverseRange
.
-
class ReverseRange#
Represents a range of blocks that can be iterated over in the reverse direction.
The typical usage of this class is in a range-based for-loop, e.g.
for (auto* block : ReverseRange(last, first)) { ... }
Tip
Avoid converting pointers to allocations into Block
instances, even if
you know your memory is coming from a BlockAllocator
. Breaking the
abstraction in this manner will limit your flexibility to change to a
different allocator in the future.
Metrics#
-
template<typename MetricsType>
class Metrics# Encapsulates the metrics struct for
pw::allocator::TrackingAllocator
.This class uses the type traits from
PW_ALLOCATOR_METRICS_DECLARE
to conditionally include or exclude code to update metrics based on calls to the pw::allocator::Allocator
API. This minimizes code size without adding additional conditions to be evaluated at runtime.- Template Parameters:
MetricsType – The struct defining which metrics are enabled.
Public Functions
-
void ModifyRequested(size_t increase, size_t decrease)#
Updates how much memory was requested and successfully allocated.
This will update the current, peak, and cumulative amounts of memory requests that were satisfied by an allocator.
- Parameters:
increase – How much memory was requested to be allocated.
decrease – How much memory was requested to be freed.
-
void ModifyAllocated(size_t increase, size_t decrease)#
Updates how much memory is allocated.
This will update the current, peak, and cumulative amounts of memory that have actually been allocated or freed. This method acts as if it frees memory before allocating. If a routine such as
Reallocate
allocates before freeing, the update should be separated into two calls, e.g.:
ModifyAllocated(increase, 0);
ModifyAllocated(0, decrease);
- Parameters:
increase – How much memory was allocated.
decrease – How much memory was freed.
-
void IncrementAllocations()#
Records that a call to
Allocate
was made.
-
void IncrementDeallocations()#
Records that a call to
Deallocate
was made.
-
void IncrementResizes()#
Records that a call to
Resize
was made.
-
void IncrementReallocations()#
Records that a call to
Reallocate
was made.
-
void RecordFailure(size_t requested)#
Records that a call to
Allocate
,Resize
, orReallocate
failed.This may indicated that memory becoming exhausted and/or highly fragmented.
- Parameters:
requested – How much memory was requested in the failed call.
This class is templated on a MetricsType
struct. See
Allocator metrics for additional details on how the
struct, this class, and TrackingAllocator
interact.
This module also includes two concrete metrics types that can be used as the template parameter of this class:
AllMetrics, which enables all metrics.
NoMetrics, which disables all metrics.
Additionally, module consumers can define their own metrics structs using the following macros:
-
PW_ALLOCATOR_METRICS_DECLARE(metric_name)#
Declares the names of metrics used by
pw::allocator::Metrics
.Only the names of declared metrics may be passed to
PW_ALLOCATOR_METRICS_ENABLE
as part of a metrics struct definition.This macro generates trait types that are used by
Metrics
to conditionally include metric-related code.Note: if enabling `
peak_allocated_bytes
orcumulative_allocated_bytes
,allocated_bytes
should also be enabled.
-
PW_ALLOCATOR_METRICS_ENABLE(metric_name)#
Enables a metric in a metrics struct.
The
pw::allocator::TrackingAllocator
template takes a struct that enables zero or more of the metrics enumerated by PW_ALLOCATOR_METRICS_DECLARE. This struct may be one of AllMetrics or NoMetrics, or may be a custom struct that selects a subset of metrics. Note that this must be fully-qualified since the metric struct may be defined in any namespace.
Example:
struct MyMetrics {
  PW_ALLOCATOR_METRICS_ENABLE(allocated_bytes);
  PW_ALLOCATOR_METRICS_ENABLE(peak_allocated_bytes);
  PW_ALLOCATOR_METRICS_ENABLE(num_failures);
};
SizeReporter#
This module includes a utility class for generating size reports. It is intended for allocator implementers and not for module consumers.
-
class SizeReporter#
Utility class for generating allocator size reports.
The
pw_bloat
module can be used to compare the size of binaries. This class facilitates creating binaries with and without a given allocator type.To create a size report:
Make a copy of //pw_allocator/size_report/base.cc
Instantiate your allocator and pass it to
MeasureAllocator
.Create build target(s) for your binary, and a
pw_size_diff
target that compares it to “$dir_pw_allocator/size_report:base”.
Public Functions
-
struct Bar#
Nested type used for exercising an allocator.
-
struct Baz#
Nested type used for exercising an allocator.
-
struct Foo#
Nested type used for exercising an allocator.
Buffer management#
-
template<typename T, size_t kBufferSize, size_t kAlignment = 1>
class WithBuffer# Associates a default-constructed type with a memory buffer.
Although the type is arbitrary, the intended purpose of this class is to provide allocators with memory to use, e.g. when testing.
This class uses composition instead of inheritance in order to allow the wrapped type’s destructor to reference the memory without risk of a use-after-free. As a result, the specific methods of the wrapped type are not directly accessible. Instead, they can be accessed using the * and -> operators, e.g.:
WithBuffer<MyAllocator, 256> allocator;
allocator->MethodSpecificToMyAllocator();
Note that this class does NOT initialize the allocator, since initialization is not specified as part of the
Allocator
interface and may vary from allocator to allocator. As a result, typical usage includes deriving a class that initializes the wrapped allocator with the buffer in a constructor. SeeAllocatorForTest
for an example.- Template Parameters:
T – The wrapped object.
kBufferSize – The size of the backing memory, in bytes.
kAlignment – Buffer memory will be aligned to this alignment boundary.
The following utility functions are available to allocator implementers. They are not intended to be used by module consumers.
-
Result<ByteSpan> pw::allocator::GetAlignedSubspan(ByteSpan bytes, size_t alignment)#
Returns the largest aligned subspan of a given byte span.
The subspan will start and end on alignment boundaries.
- Returns:
  - Returns the aligned subspan.
  - The given span does not contain an alignment boundary.
-
bool pw::allocator::IsWithin(const void *ptr, size_t size, ConstByteSpan outer)#
Returns whether one region is completely contained within another.
- Parameters:
ptr – [in] Points to the start of the subregion.
size – [in] The size of the subregion in bytes.
outer – [in] The memory that may contain the subregion.
Test support#
This module includes test utilities for allocator implementers. These facilitate writing unit tests and fuzz tests for both concrete and forwarding allocator implementations. They are not intended to be used by module consumers.
AllocatorForTest#
-
template<size_t kBufferSize, typename MetricsType = internal::AllMetrics>
class AllocatorForTest : public pw::allocator::Allocator# An
AllocatorForTest
that is automatically initialized on construction.
AllocatorTestHarness#
-
template<size_t kMaxConcurrentAllocations>
class AllocatorTestHarness : public pw::allocator::test::AllocatorTestHarnessGeneric# Associates an
Allocator
with a vector to store allocated pointers.This class differes from its base class only in that it uses its template parameter to explicitly size the vector used to store allocated pointers.
This class does NOT implement
WithAllocationsGeneric::Init
. It must be extended further with a method that provides an initialized allocator. For example, one can create a fuzzer for
MyAllocator
that verifies it never crashes by adding the following class, function, and macro:
constexpr size_t kMaxRequests = 256;
constexpr size_t kMaxAllocations = 128;
constexpr size_t kMaxSize = 2048;

class MyAllocatorFuzzer : public AllocatorTestHarness<kMaxAllocations> {
 private:
  Allocator* Init() override { return &allocator_; }
  MyAllocator allocator_;
};

void MyAllocatorNeverCrashes(const Vector<AllocatorRequest>& requests) {
  static MyAllocatorFuzzer fuzzer;
  fuzzer.HandleRequests(requests);
}

FUZZ_TEST(MyAllocator, MyAllocatorNeverCrashes)
    .WithDomains(ArbitraryAllocatorRequests<kMaxRequests, kMaxSize>());
FuzzTest support#
-
fuzzer::Domain<AllocatorRequest> pw::allocator::test::ArbitraryAllocatorRequest(size_t max_size)#
Returns a FuzzTest domain for producing arbitrary allocator requests.
This method integrates with FuzzTest to use code coverage to produce guided mutations.
See https://github.com/google/fuzztest/blob/main/doc/domains-reference.md
- Parameters:
max_size – Size of the largest allocation that can be requested.
-
template<size_t kMaxRequests, size_t kMaxSize>
auto pw::allocator::test::ArbitraryAllocatorRequests()# Returns a FuzzTest domain for producing sequences of arbitrary allocator requests.
This method can be used to drive an
AllocatorTestHarness
as part of a fuzz test.See https://github.com/google/fuzztest/blob/main/doc/domains-reference.md
- Parameters:
max_size – Size of the largest allocation that can be requested.
-
template<size_t kIndex, typename ...Args>
AllocatorRequest pw::allocator::test::MakeAllocatorRequest(Args... args)# Builds an AllocatorRequest from an index and values.
Unfortunately, the reproducer emitted by FuzzTest for vectors of AllocatorRequests cannot simply be copied and pasted. To create a reproducer, create a
pw::Vector
of the appropriate size, and populate it using this method with the correct index. For example, consider the following sample output:
The test fails with input:

argument 0: {(index=0, value={0, 1}), (index=0, value={1209, 8}), (index=2, value={9223372036854775807, 2048})}
=================================================================
=== Reproducer test

TEST(MyAllocatorFuzzTest, NeverCrashesU16Regression) {
  NeverCrashes(
    {{0, 1}, {1209, 8}, {9223372036854775807, 2048},
  );
}
A valid reproducer might be:
TEST(MyAllocatorFuzzTest, NeverCrashesU16Regression) {
  Vector<test::AllocatorRequest, 3> input({
    test::MakeAllocatorRequest<0>(0u, 1u),
    test::MakeAllocatorRequest<0>(1209u, 8u),
    test::MakeAllocatorRequest<2>(9223372036854775807u, 2048u),
  });
  NeverCrashes(input);
}