std::execution, Sender/Receiver, and the Continuation Monad

Some thoughts on the std::execution proposal and my understanding of the underlying theory.

What’s proposed

From the paper’s Introduction:

This paper proposes a self-contained design for a Standard C++ framework for managing asynchronous execution on generic execution contexts. It is based on the ideas in [P0443R14] and its companion papers.

Which doesn’t tell you much.

It proposes a framework where the principal abstractions are Senders, Receivers, and Schedulers.

Sender
A composable unit of work.
Receiver
Delimits work, handling completion, exceptions, or cancellation.
Scheduler
Arranges the execution context in which work is done.

The primary user-facing concept is the sender. Values and functions can be lifted directly into senders. Senders can be stacked together, with a sender passing its value on to another function, and exception or cancellation handling can be stacked the same way.

Receivers handle the three ways a sender can complete, by returning a value, throwing an exception, or being canceled. As described, receivers are most likely to be implemented within particular algorithms that combine senders, such as `then` or `retry`.

Schedulers provide access to execution contexts. Inline execution, a single thread, a thread pool, a GPU, and so on would each have a scheduler that provides for putting a sender into the context it manages.
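
As a rough sketch of how the three pieces fit together, here is a minimal pipeline written against the stdexec reference implementation (the standardized names live in std::execution; the thread pool used here is stdexec’s exec::static_thread_pool, which is not itself part of the proposal):

```c++
#include <stdexec/execution.hpp>
#include <exec/static_thread_pool.hpp>
#include <cstdio>

int main() {
    // A scheduler hands out access to an execution context, here a thread pool.
    exec::static_thread_pool pool{4};
    auto sched = pool.get_scheduler();

    // schedule() is a sender that completes on the pool; then() chains
    // ordinary functions onto it. Nothing runs until the work is started.
    auto work = stdexec::schedule(sched)
              | stdexec::then([] { return 41; })
              | stdexec::then([](int i) { return i + 1; });

    // sync_wait() connects a receiver, starts the work, and blocks for the result.
    auto [result] = stdexec::sync_wait(std::move(work)).value();
    std::printf("%d\n", result);  // 42
}
```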

There’s a fairly large API surface being proposed. But there’s an underlying theory about this, governing what algorithms need to be there and how the pieces fit together.

Continuation Passing Style and the Continuation Monad

Continuation passing style is a transformation from a normal function, which returns its result to its caller on the call stack, into one that sends the result on to a “continuation” without returning. This means the function’s context can be cleaned up. Delimited continuations are a slight variation, where instead of an unbounded “rest of the program”, the continuation has an end point and a value. It’s essentially a function, and can be handled as such. There is a purely mechanical method for converting all of the lambda calculus terms into CPS form, and this can be profitable for compilers based on the lambda calculus or related logics, like System F.
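
A minimal illustration of the transformation, with no library machinery involved: the direct-style function returns its result to the caller, while the CPS version hands it to a continuation instead.

```c++
#include <cstdio>

// Direct style: compute and return to the caller.
int add(int a, int b) { return a + b; }

// Continuation passing style: never return; pass the result on to k.
// The caller's frame can be discarded as soon as k is invoked.
template <class K>
void add_cps(int a, int b, K k) { k(a + b); }

int main() {
    std::printf("%d\n", add(1, 2));                            // direct
    add_cps(1, 2, [](int sum) { std::printf("%d\n", sum); });  // CPS
}
```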

The mechanical transformation also means that all the control structures, like loops, gotos, coroutines, exceptions, have CPS equivalents.

CPS is tedious, though. Having to explicitly add a continuation to everything is complicated.

However, there’s also a typeclass, or concept, that lets you convert regular functions into continuation passing style automatically. It’s then rather straightforward to attach concerns like where the work is being run to the thing that wraps up the entire computation, even switching back and forth between contexts. That’s the continuation monad.
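
A toy version of that typeclass in C++, just to make the shape concrete; the names cont, pure, and and_then here are mine for illustration, not anything from the proposal:

```c++
#include <cstdio>
#include <functional>
#include <utility>

// A computation that, instead of holding a T, knows how to deliver a T
// to whatever continuation it is eventually given.
template <class T>
struct cont {
    std::function<void(std::function<void(T)>)> run;
};

// unit / "return": lift a plain value into the continuation monad.
template <class T>
cont<T> pure(T value) {
    return { [value](auto k) { k(value); } };
}

// bind: run c, then feed its result to f, which decides the next computation.
template <class T, class F>
auto and_then(cont<T> c, F f) {
    using Next = decltype(f(std::declval<T>()));  // some cont<U>
    return Next{ [c, f](auto k) {
        c.run([f, k](T value) { f(value).run(k); });
    } };
}

int main() {
    auto pipeline = and_then(pure(21), [](int x) { return pure(x * 2); });
    pipeline.run([](int v) { std::printf("%d\n", v); });  // prints 42
}
```

The function passed to and_then never mentions a continuation; the plumbing threads it through, which is the point.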

And unfortunately monads became an organizing principle in programming language theory one or two decades after most CS programs were standardized. So it’s all complicated and involves things we weren’t trained on. Fitting it into C++ has been an ongoing challenge, and until we had generic lambdas it was neither reasonably concise nor idiomatic.

See, however, the new monadic interface additions to std::optional for why you want this. Or Ranges, which are solidly based on the ‘list’, or non-deterministic, monad.
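
For instance, std::optional’s C++23 additions are exactly the bind and map operations of the ‘maybe’ monad; the helpers below are only an illustration:

```c++
#include <charconv>
#include <optional>
#include <string_view>

// A step that can fail: parse an int, or produce an empty optional.
std::optional<int> parse_int(std::string_view s) {
    int value{};
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{}) return std::nullopt;
    return value;
}

std::optional<int> half_of_even(std::string_view s) {
    return parse_int(s)
        .and_then([](int i) -> std::optional<int> {  // bind: next step may fail
            if (i % 2 != 0) return std::nullopt;
            return i;
        })
        .transform([](int i) { return i / 2; });      // map: a plain function
}

// half_of_even("42") == 21; half_of_even("7") and half_of_even("x") are empty.
```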

We have a poor relationship with Theory

There is no satisfactory PL theory for object oriented programming. There’s lots of work, but it mostly ends up describing something that OO programmers don’t think is quite the same as what they do. Even the ones who spend a lot of time doing theory.

Yet OO was, and is, a successful discipline. Working with identity, behavior, and state has produced remarkable results. Temporal calculi, not so much.

For a long while, we as a discipline thought that multi-threading was similar. There was poor theory, but we had hardware, and libraries that let us use that hardware to do concurrent work correctly.

That turned out not to be the case.

Concurrency can’t be just a library, unfortunately. Concurrency models that hardware vendors will commit to won’t promise not to violate causality. That makes producing a programming model programmers can use frighteningly difficult.

Which is why having a sound theory for std::execution is a good thing, even if the theory is unfamiliar.

But as a group, we learned the wrong lessons from the 80s and thought it was a researcher’s job to take the successes of practitioners and put a sound basis under them, ignoring that it is a feedback loop. In the 60s and 70s, those researchers were also the practitioners. It’s not wrong to get out ahead of theory, but we do need to check back.

p2300 std::execution

Senders, via the Decorator pattern, lift ordinary functions into the continuation passing style. People writing functions only need to be concerned with handling the arguments they are passed, without concern for execution context or continuations. Functions used by senders act like, and are, normal functions.
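
A sketch of that lift, again in stdexec spellings: greet below is an ordinary function of its argument, and then() is what puts it into continuation passing style.

```c++
#include <stdexec/execution.hpp>
#include <string>

// An ordinary function: no continuations or execution contexts in sight.
std::string greet(int n) { return "hello #" + std::to_string(n); }

// just(7) lifts the value 7 into a sender; then(greet) lifts greet itself.
// The result is a sender whose value channel carries a std::string.
auto make_greeting() {
    return stdexec::just(7) | stdexec::then(greet);
}

// Usage: auto [msg] = stdexec::sync_wait(make_greeting()).value();
```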

Senders manage a bundle of channels, representing the normal return of a value, the throwing of an exception, and an error channel for cancellation or for errors outside the bounds of ordinary functions. All of these channels can be composed by passing the result to another function, or monadically with a function returning a sender, where that function can determine the kind of sender based on the values of the arguments. The channels can be combined or rerouted, connecting one to another, or presenting a variant containing result, exception, and/or error to the continuation function.
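
A sketch of those compositions, assuming stdexec spellings: let_value is the monadic form, choosing the next sender from the value it receives, and upon_error reroutes the error channel back into an ordinary value.

```c++
#include <stdexec/execution.hpp>
#include <exception>

auto fetch_or_fallback(int id) {
    return stdexec::just(id)
           // let_value: the continuation returns a *sender*, chosen from the
           // value, so the shape of the remaining work can depend on the data.
         | stdexec::let_value([](int i) {
               if (i < 0)
                   return stdexec::just(0);        // cheap fallback sender
               return stdexec::just(i * 10);       // the "real" work sender
           })
           // upon_error: fold the error channel back into the value channel.
         | stdexec::upon_error([](std::exception_ptr) { return -1; });
}
```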

Although senders form a logical graph of units of work, the physical type model is containment, much like expression templates. The result of binding senders together via an algorithm is a sender that contains the bound together senders. There are no nodes or allocations inherent to the model, just function calls.
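
Hypothetical, stripped-down types to show the shape of that containment (real implementations add receivers, operation states, and queries):

```c++
// Illustrative only: composing senders nests values by containment,
// the way expression templates nest expression nodes.
template <class T>
struct just_sender { T value; };            // leaf: owns its value

template <class Child, class F>
struct then_sender { Child child; F fn; };  // node: owns its child sender

int main() {
    auto graph = then_sender{ then_sender{ just_sender<int>{42},
                                           [](int i) { return i + 1; } },
                              [](int i) { return i * 2; } };
    // The whole logical graph is one stack object whose static type records
    // the structure; there are no heap-allocated nodes.
    (void)graph;
}
```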

C++ coroutines fit into this model. From the outside, a coroutine is a function with rules about how callers interact with the returned value. Making a coroutine-owning type a sender, and a sender co_awaitable, is possible and has been demonstrated.
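
A sketch of that interoperation, assuming stdexec’s exec::task coroutine type (the proposal itself specifies only the hooks that make such a type possible):

```c++
#include <stdexec/execution.hpp>
#include <exec/task.hpp>

// exec::task<int> is a coroutine type that is itself a sender; inside it,
// co_await adapts a sender's value channel into an ordinary value.
exec::task<int> answer() {
    int base = co_await stdexec::just(40);
    co_return base + 2;
}

int main() {
    // Because the task models sender, the usual algorithms apply to it.
    auto [v] = stdexec::sync_wait(answer()).value();
    return v == 42 ? 0 : 1;
}
```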

std::execution takes the Continuation Monad and fits it to C++ control flow, return or exception, and adds cancellation, which incidentally allows a channel for failures from execution contexts. The thread pool can potentially signal failure via the error channel, without aliasing problems from application function code. However, for advanced users, these can be folded back into the normal function arguments and handled by application code. Policy decisions are not burned into the ROM of std::execution, but there are defaults that can be provided by application infrastructure authors.
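
A sketch of that folding, again with stdexec spellings: stopped_as_optional turns the cancellation channel into an empty optional on the value channel, so ordinary application code can branch on it.

```c++
#include <stdexec/execution.hpp>
#include <optional>

int main() {
    auto work = stdexec::just(42);  // stands in for some cancellable work

    // set_stopped() becomes an empty optional; set_value(i) becomes optional{i}.
    auto folded = stdexec::stopped_as_optional(std::move(work))
                | stdexec::then([](std::optional<int> v) {
                      return v.value_or(-1);   // application-chosen default
                  });

    auto [result] = stdexec::sync_wait(std::move(folded)).value();
    return result == 42 ? 0 : 1;
}
```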

Those infrastructure authors do not have to be std library vendors. The protocols, rendered as concepts, are available to normal users.

Networking TS

Eppur si muove
And yet it moves

I do not believe ASIO’s model is a firm foundation for all async programming. However, it is well proven, and exists. It works.

And …

I have confidence that a networking library can and will be built using p2300. I am less confident that can be done in the timeframe for C++26. I do not believe for a moment we could have one for C++23, even with an existence proof of a networking library appearing now. It’s simply too late to review and agree. We’re in the same place as coroutines: we can have the machinery, but without all of the application-level, user-facing infrastructure we should have.

I think this was the right choice with coroutines, and I think providing the machinery for general continuation-based async in the standard library, so that we can build on top of it, is the right choice. The authors have committed to making sure all the facilities are available for programmers, in particular the pipe syntax (an issue for ranges), as well as providing bases or adapters for coroutine promises and typed senders. We can experiment and add existing practice as we go.

Disclaimer

This is all my personal opinion, based on my own understanding. I’ve been in the meetings, I’ve been in discussions, asked questions. But if I’m wrong about some aspect of the proposal, that’s on me. Certainly not a formal opinion of Bloomberg, where I work. While we do lots of network services, and async programming, this isn’t what our tech looks like at all. Getting from here to there is an open question, but it would be for ASIO, too.

At least it isn’t CORBA.

