Futures#

pw_async2: Cooperative async tasks for embedded

A Future is an object that represents the value of an asynchronous operation which may not yet be complete. Upon completion, the future produces the result of the operation, if it has one.

Futures are the core interface to pw_async2 asynchronous APIs.

Core concepts#

Futures operate using the informed poll model on which pw_async2 is built. This model is summarized below, but it is recommended to read the full description for important background knowledge.

Future API#

Futures use a standard API. There is no Future class; futures are unique types with a common interface, but no shared base. In C++20 and later, pw::async2::Future is a C++ concept that describes the future interface.

A Future<T> exposes the following API:

  • value_type: Type alias for the value produced by the future.

  • Poll<value_type> Pend(Context& cx): Calling Pend advances the asynchronous operation until no further progress is possible. Returns Ready if the operation completes. Otherwise, uses the provided Context to store a waker and returns Pending. The waker wakes the task when Pend should be called again.

  • bool is_complete(): Returns whether the future has already completed and had its result consumed.

Futures are single-use and track their completion status. It is an error to poll a future after it has already completed.

pw_async2 provides a pw::async2::Future concept that future implementations must satisfy. Futures do not share a common base class, but may use common helpers such as FutureCore.
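
For illustration, a skeleton of a type exposing this interface might look like the following. This is only a sketch: ExampleFuture is a hypothetical name, and just the members required by the interface are shown.

// Hypothetical skeleton of a type satisfying the future interface.
// A real implementation would store a waker from the Context before
// returning Pending (see "Waking mechanism" below).
class ExampleFuture {
 public:
  using value_type = int;

  pw::async2::Poll<int> Pend(pw::async2::Context& cx);
  bool is_complete() const;
};

// With C++20, conformance can be checked against the concept, e.g.:
// static_assert(pw::async2::Future<ExampleFuture>);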

Ownership and lifetime#

Futures are owned by the caller of an asynchronous operation. The task that receives the future is responsible for storing and polling it.

The provider of a future must either outlive the future or arrange for the future to be resolved in an error state when the provider is destroyed.

Polling#

Futures are lazy and do nothing on their own. The task owning a future must poll it to drive it to completion. Calling a future’s Pend function advances its operation and returns a Poll containing one of two values:

  • Pending(): The asynchronous operation has not yet finished. The value is not available. The task polling the future will be scheduled to wake when the future can make additional progress.

    Typically, your task should propagate a Pending return upwards to notify the dispatcher that it is blocked and should sleep.

  • Ready(T): The operation has completed, and the value is now available.

Once a future returns Ready, its state is final. Attempting to poll it again results in an assertion failure.

This polling model allows a single thread to manage many concurrent operations without blocking.
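
As a minimal sketch of this model, a task's DoPend might handle a future's Poll by hand as follows. This assumes a hypothetical future_ member of type ValueFuture<int>; the PW_TRY_READY_ASSIGN helper shown later is usually more convenient.

pw::async2::Poll<> DoPend(pw::async2::Context& cx) override {
  pw::async2::Poll<int> poll = future_.Pend(cx);
  if (!poll.IsReady()) {
    // Propagate Pending upwards so the dispatcher puts this task to sleep
    // until the future's waker fires.
    return pw::async2::Pending();
  }
  PW_LOG_INFO("Received: %d", poll.value());
  return pw::async2::Ready();
}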

Composability#

The power of futures is their ability to compose to construct complex asynchronous logic from smaller building blocks.

Futures can be classified into two categories: leaf futures and composite futures. Leaf futures represent a specific asynchronous operation, such as a read from a channel, or waiting for a timer. They contain the required state for their operations and manage the task waiting on them.

Composite futures are built on top of other futures, combining their results to build advanced asynchronous execution graphs. For example, a Join future waits for multiple other futures to complete, returning all of their results at once. Composite futures can be used to express complex logic in a declarative way.

Coroutine support#

Futures’ simple Pend API makes them easy to use with async2’s coroutine adapter. You can co_await the future returned by an asynchronous function directly; the coroutine machinery automatically polls the future to completion.

Working with futures#

Calling functions that return futures#

Consider some asynchronous call which produces a simple value on completion. Pigweed provides ValueFuture<T> for this common case. The async function has the following signature:

class NumberGenerator {
  ValueFuture<int> GetNextNumber();
};

You would write a task that calls this operation as follows:

class MyTask : public pw::async2::Task {
 private:
  pw::async2::Poll<> DoPend(pw::async2::Context& cx) override {
    // Obtain and store the future, then poll it to completion.
    if (!future_.has_value()) {
      future_.emplace(generator_.GetNextNumber());
    }

    PW_TRY_READY_ASSIGN(int number, future_->Pend(cx));
    PW_LOG_INFO("Received number: %d", number);

    return pw::async2::Ready();
  }

  NumberGenerator& generator_;

  // The future is stored in an optional so it can be lazily initialized
  // inside DoPend. Most concrete futures are not default constructible.
  std::optional<ValueFuture<int>> future_;
};

pw::async2::Coro<pw::Status> MyCoroutineFunction(pw::async2::CoroContext&,
                                                 NumberGenerator& generator) {
  // Pigweed's coroutine integration allows futures to be awaited directly.
  int number = co_await generator.GetNextNumber();
  PW_LOG_INFO("Received number: %d", number);
  co_return pw::OkStatus();
}

Writing functions that return futures#

All future-based pw_async2 APIs have the signature

Future<T> DoThing(Args... args);

where Future<T> is some concrete future implementation (e.g. ValueFuture) which resolves to a value of type T, and Args represents any arguments to the operation.

When defining an asynchronous API, the function should always return a Future directly — not a Result<Future> or std::optional<Future>. If the operation is fallible, that should be expressed by the future’s output, e.g. Future<Result<T>>.

This is necessary for proper composability. It makes asynchronous APIs consistent to use and lets higher-level futures which compose other futures work cleanly. Returning a Future directly is also essential for coroutine support: the returned future can be co_awaited directly, resolving to its value (e.g. a Result<T> for a fallible operation).
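
As an example of this convention, a fallible API and its coroutine usage might look like the following sketch (SensorDriver and ReadSensor are made-up names; ValueFuture is used as the concrete future type):

// The future is returned directly; failure is expressed in the resolved value.
class SensorDriver {
 public:
  ValueFuture<pw::Result<int>> ReadSensor();
};

// In a coroutine, co_await resolves directly to the pw::Result<int>:
//   pw::Result<int> reading = co_await driver.ReadSensor();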

Implementing a future#

pw_async2 provides futures like ValueFuture for common asynchronous patterns. However, you may want to implement a custom leaf future if your operation has complex logic where Pend() would benefit from reaching deeper into the underlying system, e.g. waiting for a hardware interrupt.

FutureCore is the primary tool for creating futures.

FutureCore#

This class provides the essential machinery for most custom leaf futures:

  • It stores the Waker of the task that polls it.

  • It manages its membership in an intrusive list of futures.

  • It tracks completion internally.

Future implementations typically have a FutureCore member.

FutureList#

After you vend a future from an asynchronous operation, you need a way to track and resolve it once the operation has completed. FutureCores can be stored in a FutureList, which wraps a pw::IntrusiveForwardList.

FutureList allows multiple concurrent tasks to wait on an operation. Pending futures are pushed to the list. When an operation completes, futures are popped from the list and resolved.

FutureList stores its futures as a linked list of FutureCores in its BaseFutureList base. This maximizes code reuse between different future implementations.

A FutureList is declared with a pointer to the future implementation’s FutureCore member: FutureList<&FutureType::future_core_>. For example:

class MyFuture {
 public:
  // Future API: value_type, Pend(), is_complete()

 private:
  friend class MyFutureProvider;

  void Resolve(int) { /* ... */ }

  // The FutureCore is a member of the future.
  pw::async2::FutureCore core_;
};

class MyFutureProvider {
 public:
  MyFuture Get() {
    MyFuture future;
    std::lock_guard lock(lock_);
    futures_.Push(future);
    return future;
  }

  void ResolveOne() {
    std::lock_guard lock(lock_);
    // Pop the future from the list as a MyFuture&.
    futures_.Pop().Resolve(123);
  }

 private:
  pw::sync::InterruptSpinLock lock_;

  // FutureList is declared with a pointer to the future type's FutureCore.
  pw::async2::FutureList<&MyFuture::core_> futures_ PW_GUARDED_BY(lock_);
};

Waking mechanism#

When a task polls a future and it returns Pending, the future must store the task’s Waker from the provided Context. This is handled automatically by FutureCore::DoPend.

On the other side of the asynchronous operation (e.g., in an interrupt handler), when the operation completes, the provider is used to retrieve the future, and its Wake() function is called. This notifies the dispatcher that the task waiting on this future is ready to make progress and should be polled again.

Example: Waiting for a GPIO interrupt#

Below is an example of a custom future that waits for a GPIO button press using interfaces from pw_digital_io.

class ButtonReceiver;

class ButtonFuture {
 public:
  // Provide a descriptive reason which can be used to debug blocked tasks.
  static constexpr const char kWaitReason[] = "Waiting for button press";

  // FutureCore is movable and handles list management automatically.
  ButtonFuture(ButtonFuture&&) = default;
  ButtonFuture& operator=(ButtonFuture&&) = default;

  // Polls the future to see if the button has been pressed.
  pw::async2::Poll<> Pend(pw::async2::Context& cx) {
    return core_.DoPend(*this, cx);
  }

 private:
  friend class ButtonReceiver;
  friend class pw::async2::FutureCore;

  // Private constructor used by ButtonReceiver.
  explicit ButtonFuture() : core_(pw::async2::FutureCore::kPending) {}

  // Callback invoked by FutureCore::DoPend.
  pw::async2::Poll<> DoPend(pw::async2::Context&) {
    if (core_.is_ready()) {
      return pw::async2::Ready();
    }
    return pw::async2::Pending();
  }

  pw::async2::FutureCore core_;
};

class ButtonReceiver {
 public:
  explicit ButtonReceiver(pw::digital_io::DigitalInterrupt& line)
      : line_(line) {
    PW_CHECK_OK(line_.SetInterruptHandler(
        pw::digital_io::InterruptTrigger::kActivatingEdge,
        [this](pw::digital_io::State) { HandleInterrupt(); }));
    PW_CHECK_OK(line_.EnableInterruptHandler());
  }

  // Returns a future that completes when the button is pressed.
  ButtonFuture WaitForPress() {
    std::lock_guard lock(lock_);
    ButtonFuture future;
    // Only allow one waiter at a time.
    list_.PushRequireEmpty(future);
    return future;
  }

 private:
  // Executed in interrupt context.
  void HandleInterrupt() {
    std::lock_guard lock(lock_);
    list_.ResolveAll();
  }

  pw::digital_io::DigitalInterrupt& line_;
  pw::sync::InterruptSpinLock lock_;
  pw::async2::FutureList<&ButtonFuture::core_> list_ PW_GUARDED_BY(lock_);
};

This example demonstrates the core mechanics of creating a custom future. This pattern of waiting for a single value from a producer is so common that pw_async2 provides ValueFuture and ValueProvider to handle it. In practice, you would return a VoidFuture (alias for ValueFuture<void>) from WaitForPress instead of writing a custom ButtonFuture.

Combinators#

Combinators allow you to compose multiple futures into a single future to express complex control flow.

Join#

Join() waits for multiple futures to complete and returns a tuple of their results.

#include "pw_async2/join.h"

ValueFuture<pw::Status> DoWork(int id);
class JoinTask : public pw::async2::Task {
 private:
  pw::async2::Poll<> DoPend(pw::async2::Context& cx) override {
    if (!future_.has_value()) {
      // Start three futures concurrently and wait for all of them
      // to complete.
      future_.emplace(pw::async2::Join(DoWork(1), DoWork(2), DoWork(3)));
    }

    PW_TRY_READY_ASSIGN(auto results, future_->Pend(cx));
    auto [status1, status2, status3] = results;

    if (!status1.ok() || !status2.ok() || !status3.ok()) {
      PW_LOG_ERROR("Operation failed");
    } else {
      PW_LOG_INFO("All operations succeeded");
    }

    return pw::async2::Ready();
  }

  std::optional<JoinFuture<ValueFuture<pw::Status>,
                           ValueFuture<pw::Status>,
                           ValueFuture<pw::Status>>>
      future_;
};

pw::async2::Coro<pw::Status> JoinExample(pw::async2::CoroContext&) {
  // Start three futures concurrently and wait for all of them to complete.
  auto [status1, status2, status3] =
      co_await pw::async2::Join(DoWork(1), DoWork(2), DoWork(3));

  if (!status1.ok() || !status2.ok() || !status3.ok()) {
    PW_LOG_ERROR("Operation failed");
    co_return pw::Status::Internal();
  }
  PW_LOG_INFO("All operations succeeded");
  co_return pw::OkStatus();
}

Select#

Select() waits for the first of multiple futures to complete. It returns a SelectFuture which resolves to an OptionalTuple containing the result. If additional futures happen to complete between the first future completing and the task re-running, the tuple stores all of their results.

#include "pw_async2/select.h"

ValueFuture<int> DoWork();
ValueFuture<int> DoOtherWork();
class SelectTask : public pw::async2::Task {
 private:
  pw::async2::Poll<> DoPend(pw::async2::Context& cx) override {
    if (!future_.has_value()) {
      // Race two futures and wait for the first one to complete.
      future_.emplace(pw::async2::Select(DoWork(), DoOtherWork()));
    }

    PW_TRY_READY_ASSIGN(auto results, future_->Pend(cx));

    // Check which future(s) completed.
    // In this example, we check all of them, but it's common to return
    // after the first result.
    if (results.has_value<0>()) {
      PW_LOG_INFO("DoWork completed with: %d", results.get<0>());
    }
    if (results.has_value<1>()) {
      PW_LOG_INFO("DoOtherWork completed with: %d", results.get<1>());
    }

    return pw::async2::Ready();
  }

  std::optional<SelectFuture<ValueFuture<int>, ValueFuture<int>>> future_;
};

pw::async2::Coro<pw::Status> SelectExample(pw::async2::CoroContext&) {
  // Race two futures and wait for the first one to complete.
  auto results = co_await pw::async2::Select(DoWork(), DoOtherWork());

  // Check which future(s) completed.
  // In this example, we check all of them, but it's common to return
  // after the first result.
  if (results.has_value<0>()) {
    int result = results.get<0>();
    PW_LOG_INFO("DoWork completed with: %d", result);
  }

  if (results.has_value<1>()) {
    int result = results.get<1>();
    PW_LOG_INFO("DoOtherWork completed with: %d", result);
  }

  co_return pw::OkStatus();
}

Setting up wakers#

You can set up a waker to a non-empty value using one of the four macros we provide (a usage sketch follows this list):

  • PW_ASYNC_STORE_WAKER and PW_ASYNC_CLONE_WAKER

    The first creates a waker for a given context. The second clones an existing waker, allowing the original and/or the clone to wake the task.

    This pair of macros ensures that a single task will be woken. They assert if a waker for a different task is created (or cloned) while the destination waker is already set up for some task.

  • PW_ASYNC_TRY_STORE_WAKER and PW_ASYNC_TRY_CLONE_WAKER

    These are alternatives to PW_ASYNC_STORE_WAKER and PW_ASYNC_CLONE_WAKER that return false instead of asserting. This lets the pendable signal to the caller that the Pend() operation failed so it can be handled in some other way.
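
As a sketch of how these macros are used, here is a hypothetical pendable that stores a waker while waiting and wakes the task when a value arrives (the class and its members are illustrative, not a pw_async2 API):

class MyPendable {
 public:
  pw::async2::Poll<int> Pend(pw::async2::Context& cx) {
    if (ready_) {
      return pw::async2::Ready(value_);
    }
    // Store a waker for the polling task, with a reason to aid debugging.
    // PW_ASYNC_TRY_STORE_WAKER could be used instead to handle failure
    // without asserting.
    PW_ASYNC_STORE_WAKER(cx, waker_, "waiting for MyPendable value");
    return pw::async2::Pending();
  }

  void SetValue(int value) {
    value_ = value;
    ready_ = true;
    // Wake the waiting task (a no-op if no task has polled yet).
    std::move(waker_).Wake();
  }

 private:
  bool ready_ = false;
  int value_ = 0;
  pw::async2::Waker waker_;
};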

Timing-out Futures#

If you create a future, you can also combine it with a TimeFuture to get a new composite future (a FutureWithTimeout) that can time out.

There are three main factory functions that construct useful variants of the composite type.

  • Timeout(future, [time_provider,] delay)

    This function returns a composite future that times out after the specified delay. If no time provider is given, the function will default to GetSystemTimeProvider. The time provider will then be used to construct the TimeFuture to use in the composite future.

    If the original value future is for a value of type T, the created composite future resolves to a pw::Result<T>. On timeout, the status associated with that result will be pw::Status::DeadlineExceeded() to make it clear no value is available.

    The composite future handles waiting on both underlying futures. If both are ready when next pended, it prefers to resolve to the value from the original future rather than the timeout.

  • TimeoutOr(future, [time_provider,] delay, sentinel_value_or_func)

    Like the first function, this function will construct a composite future that will time out after the specified delay.

    On timeout, however, this version resolves to a sentinel value: either the value passed in directly, or the result of calling the function, if a function is passed instead.

    Note that a copy of the passed value is stored as part of the future's internal data. For a small, trivially constructible type this makes sense, but for a large type, or one that is not trivially constructible, prefer passing a function which constructs the value.

    Caution

    You should only use a sentinel when there is no chance of confusing the sentinel value with the normal values you would obtain from the future when it does not time out.

  • TimeoutOrClosed(channel_future, [time_provider,] delay)

    Like the first function, but intended to be used with the SendFuture, ReceiveFuture, and ReserveSendFuture futures returned from using a channel.

    On timeout, these act like the channel was closed while waiting, and release their reference to the channel. For SendFuture, this means it resolves to false, and for the other two it means resolving to std::nullopt.

Example#

Using these functions to construct a composite future is straightforward:

// Obtain a basic ValueFuture<T> or similar from some provider.
auto value_future = value_provider.Get();

// Construct the composite future, which will either resolve to a `T`, or
// time out after 15ms using the system clock.
auto future_with_timeout_ex1 = Timeout(std::move(value_future), 15ms);

// You can also construct one this way.
auto future_with_timeout_ex2 = Timeout(value_provider.Get(), 15ms);

// For a sentinel with a simple constant:
auto future_with_timeout_ex3 = TimeoutOr(int_value_provider.Get(), 15ms, -1);

// To use a function to obtain a sentinel value:
auto future_with_timeout_ex4 =
    TimeoutOr(int_value_provider.Get(), 15ms, []() { return -1; });

You can find more examples showing how to use these functions in pw_async2/examples/timeout_test.cc.