# uvco 0.1

C++20 standard library coroutines running on libuv.
Currently a bit of an experiment - but it works for real! I am aiming for an ergonomic, intuitive, asynchronous experience. In some parts, uvco implements the bare minimum to still be joyful to use. Eventually, all of libuv's functionality should be available with low overhead.
Supported functionality:

- Name resolution (`getaddrinfo`)
- Timers (`sleep`, `tick`)
- File system operations (`read`, `write`, `mkdir`, `unlink`, ...)
- `SelectSet` for polling multiple promises at once

Promises (backed by coroutines) are run eagerly; you don't have to schedule or await them for the underlying coroutine to run.
Where I/O or other activity causes a coroutine to be resumed, the coroutine will typically be run by the scheduler, which you don't need to care about. Depending on the `RunMode`, pending coroutines are either run once per event loop turn (`Deferred`) or immediately from the libuv callback (`Immediate`). By default, they are all run at once in every event loop turn (`Deferred`). While you can set the run mode for I/O events in `uvco::runMain()` (`Deferred` vs. `Immediate`), the externally visible behavior should be the same, and code will work in both modes. If it doesn't: that's a bug in uvco.
Some interfaces - like buffers filled by sockets - use simple types such as `std::string`, which are easy to handle but not especially efficient. This may need to be generalized.
The goal: provide ergonomic asynchronous abstractions of all libuv functionality, at satisfactory performance.
To run a coroutine, you need to set up an event loop. This is done by calling `uvco::runMain` with a callable that takes a single `const Loop&` argument and returns a `uvco::Promise<T>`. `runMain()` either returns the resulting value after the event loop has finished, or throws an exception if a coroutine threw one.
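A minimal skeleton might look like the following sketch. The header names and the exact `runMain` signature are assumptions based on the description above, not the verified API; check the real uvco headers before copying.

```cpp
#include <string>
// Assumed header names -- consult the actual uvco sources.
#include <uvco/run.h>
#include <uvco/promise/promise.h>

uvco::Promise<std::string> greet(const uvco::Loop &loop) {
  // Initiate asynchronous work here (name resolution, timers, ...)
  // using the loop reference; then:
  co_return "hello from a coroutine";
}

int main() {
  // Runs the libuv event loop until greet()'s promise is fulfilled, then
  // returns the promised value (or rethrows a coroutine's exception).
  std::string result = uvco::runMain<std::string>(greet);
  return result.empty() ? 1 : 0;
}
```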
A `Promise<T>` is a coroutine promise and can be awaited. It is the basic unit of, and only access to, concurrency; there is no `Task` or similar. Awaiting a promise saves the current execution state and resumes it as soon as the promise is ready. Currently, promises cannot be properly cancelled; however, suspended coroutines can be cancelled - although the coroutine they're waiting on will still run to completion (that's the downside of not having a `Task` primitive).
When in doubt, refer to the examples in `test/`; they are actively maintained.
Return a promise from the main function run by `runMain()`. `runMain()` will return the promised result, or throw an exception if a coroutine threw one. The event loop runs until all callbacks are finished and all coroutines have completed. Callbacks (by libuv) trigger coroutine resumption from the event loop, which is defined in `src/run.cc`.
Here we download a single file. The `Curl` class is a wrapper around libcurl and provides a `download` method: a generator method returning a `MultiPromise`. The `MultiPromise` yields `std::optional<std::string>`, which is `std::nullopt` once the download has finished. An exception is thrown if the download fails. To build the `curl-test` binary, which demonstrates this using a real server, make sure to have `libcurl` and its headers installed; CMake should find it automatically.
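A consuming loop might look like this sketch. The `Curl` constructor and the awaiting interface of `MultiPromise` are assumptions inferred from the description above, not the verified API:

```cpp
// Hypothetical sketch: consuming the MultiPromise returned by Curl::download().
uvco::Promise<void> fetch(const uvco::Loop &loop) {
  Curl curl{loop};  // assumed constructor
  uvco::MultiPromise<std::optional<std::string>> chunks =
      curl.download("https://example.com/file");
  while (true) {
    std::optional<std::string> chunk = co_await chunks;  // assumed await interface
    if (!chunk) {
      break;  // std::nullopt signals that the download has finished
    }
    // process *chunk ...
  }
}
```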
Of course, more than one download can be triggered at once: the `MultiPromise`s returned by `download()` are independent.
Build the project and run the `test-http10` binary. It works like the following code:
Some more examples can be found in the `test/` directory. The test files ending in `.exe.cc` are end-to-end binaries which also show how to set up the event loop.
Passing references and pointers into a coroutine (i.e. a function returning `[Multi]Promise<T>`) is fine as long as the referenced value outlives the coroutine. Typically, this is done like this:
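A hypothetical sketch of the pattern (names are illustrative, not the verified uvco API):

```cpp
// Passing by value: the argument is copied into the coroutine frame,
// so it stays alive across suspensions.
uvco::Promise<void> echo(std::string line) {
  // `line` may be used after any number of suspensions; the copy lives
  // in the coroutine frame.
  co_return;
}

uvco::Promise<void> run() {
  // Passing a temporary is safe here: it is moved into the frame.
  co_await echo(std::string{"temporary"});
}
```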
The temporary value is kept alive in the coroutine frame, which has been allocated dynamically.
It's a different story when moving promises around: if the calling coroutine returns before the awaited promise is finished, the result is an illegal memory access. Don't do this :) Instead, make sure to e.g. use a `shared_ptr` instead of a reference, or a `std::string` instead of a `std::string_view`.
Be extra careful of the following dangerous pattern:
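A hypothetical sketch of the danger (illustrative signatures, not the verified API): a coroutine holds a reference into its caller, but the caller finishes first.

```cpp
uvco::Promise<void> use(const std::string &s);  // suspends while holding `s`

uvco::Promise<void> danger() {
  std::string local = "short-lived";
  uvco::Promise<void> p = use(local);  // `use` keeps a reference to `local`
  co_return;
  // `danger`'s frame (and `local` with it) is destroyed here, but `p` may
  // still be pending -- when `use` resumes, its reference dangles.
}
```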
I may try to change the uvco types to prevent this pattern, but it's not easy to do so without impairing the ease of use in other places.
The `Loop` is singular and outlives all coroutines running on it; therefore it is passed as `const Loop&` to any coroutine needing to initiate I/O.
If your application exits prematurely and receives an `EAGAIN` error and "unwrap called on unfulfilled promise", it means that the event loop - either libuv's loop or uvco's - has decided that there is no more work to do, and has therefore left the loop. In a properly written application, this happens once all the work has been done, i.e. there are no more open handles on the libuv event loop and no coroutines waiting to be resumed; for example, every unit test is required to behave like this (see `test/`) and finish all operations before terminating.
Forgetting to `co_await` a Promise can lead to this condition, but a bug in uvco is also a potential explanation. The most frequent mistake leading to this kind of error is forgetting to add `co_await obj.close();` at the end of `obj`'s lifetime; many sockets, clients, streams, etc. have an asynchronous `close()` method that must be awaited (and can therefore not be part of a destructor call). Check the documentation for whether you need to do this.
Exceptions are propagated through the coroutine stack. If a coroutine throws an exception, it will be thrown at the point of the `co_await` that started the coroutine.
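A sketch of what that looks like in practice (illustrative, assuming the `Promise` type described above):

```cpp
uvco::Promise<void> inner() {
  // The exception is captured by the coroutine machinery...
  throw std::runtime_error{"boom"};
  co_return;
}

uvco::Promise<void> outer() {
  try {
    co_await inner();  // ...and rethrown here, at the awaiting co_await.
  } catch (const std::runtime_error &e) {
    // Handle the error inside asynchronous code.
  }
}
```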
There are two difficulties:

1. Resources must be `close()`d explicitly, because closing is an asynchronous operation. Complaints will be printed to the console if you forget to close a resource. Usually, though, the resource will still be closed asynchronously, although a small amount of memory may be leaked (see `StreamBase::~StreamBase()`).
2. An exception will only propagate out of the `runMain()` call once the event loop has finished. If a single active libuv handle is still present, this will not be the case, and the application will appear to hang. Therefore, prefer handling exceptions within your asynchronous code.

Standard cmake build:
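A typical out-of-source build might look like this (generator and options are assumptions, not prescribed by the project):

```shell
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build
```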
You can then use uvco in your own projects by linking against `uvco`. CMake packages are exported, so you can use it in your cmake project as follows:
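A minimal consumer `CMakeLists.txt` might look like this, assuming the exported package and target are both named `uvco` (an assumption; check the exported package files):

```cmake
find_package(uvco REQUIRED)

add_executable(myapp main.cc)
target_link_libraries(myapp PRIVATE uvco)
```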
This will result in the following compiler invocations:
Please let me know if this doesn't work for you (although I don't promise any help, I'm tired enough of cmake already).
The code is tested by unit tests in `test/`; coverage is currently above 90%. Unit tests are especially helpful when built and run with `-DENABLE_ASAN=1 -DENABLE_COVERAGE=1`, detecting memory leaks and illegal accesses - the most frequent bugs when writing asynchronous code. For coverage information, you need `gcovr`.
Generally, run it like this:
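A plausible invocation, using the flags mentioned above (the use of CTest to run the tests is an assumption):

```shell
cmake -S . -B build -DENABLE_ASAN=1 -DENABLE_COVERAGE=1
cmake --build build
ctest --test-dir build
```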
You can obtain coverage information using `make coverage` or `ninja coverage`. The report is stored in `build/coverage/uvco.html` and generated by gcovr, which must be installed. Alternatively, use `make grcov` in order to use the `grcov` tool. The coverage HTML is in `build/coverage/uvco.html` or `build/grcov/html/index.html`, respectively.
For coverage, I recommend using `clang++` (`-DCMAKE_CXX_COMPILER=clang++`), because `g++` does not take into account lines within coroutines - which is kind of pointless in a coroutine library. The `gcovr` invocation defined in `CMakeLists.txt` handles both cases, invoking `llvm-cov` when compiling with `clang++`.
Documentation can be built using `doxygen`:
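Assuming a `Doxyfile` at the repository root (an assumption), the invocation would be:

```shell
doxygen
```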
and is delivered to the `doxygen/` directory.