uvco 0.1
uvco::Scheduler Class Reference

#include <scheduler.h>

Public Types

enum class  RunMode { Immediate = 0 , Deferred = 1 }
 

Public Member Functions

 Scheduler (RunMode mode=RunMode::Deferred)
 
 Scheduler (const Scheduler &)=delete
 
 Scheduler (Scheduler &&)=delete
 
Scheduler & operator= (const Scheduler &)=delete
 
Scheduler & operator= (Scheduler &&)=delete
 
 ~Scheduler ()
 
void setUpLoop (uv_loop_t *loop)
 
void enqueue (std::coroutine_handle<> handle)
 Schedule a coroutine for resumption.
 
void runAll ()
 Run all scheduled coroutines sequentially.
 
void close ()
 
bool empty () const
 

Private Attributes

std::vector< std::coroutine_handle<> > resumableActive_
 
std::vector< std::coroutine_handle<> > resumableRunning_
 
RunMode run_mode_
 

Detailed Description

The Scheduler is attached to the UV loop via its data field and implements the coroutine scheduler. Currently, it works on a fairly simple basis: callbacks can add coroutines for resumption to the scheduler, and the scheduler runs all scheduled coroutines once per event loop turn, right after the callbacks.
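
A minimal sketch of this model, assuming the two private vectors act as a simple active/running double buffer (the actual implementation may differ): callbacks hand coroutine handles to enqueue(), and runAll() drains the queue once per loop turn.

    #include <coroutine>
    #include <utility>
    #include <vector>

    class SchedulerSketch {
    public:
      // Called from libuv callbacks: remember the coroutine for later.
      void enqueue(std::coroutine_handle<> handle) {
        resumableActive_.push_back(handle);
      }

      // Called once per event loop turn, after all callbacks have run:
      // resume every queued coroutine until it finishes or suspends again.
      void runAll() {
        while (!resumableActive_.empty()) {
          std::swap(resumableRunning_, resumableActive_);
          for (auto handle : resumableRunning_) {
            handle.resume();
          }
          resumableRunning_.clear();
        }
      }

    private:
      std::vector<std::coroutine_handle<>> resumableActive_;
      std::vector<std::coroutine_handle<>> resumableRunning_;
    };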

This is in contrast to the "conventional" model on which most asynchronous code in uvco is built: almost all resumptions are triggered by libuv callbacks during I/O polling, and the resumed coroutines run on the callback's stack. When awaiting promises, the waiting coroutine is resumed directly on the stack of the coroutine that resolves the promise in question.
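
For comparison, a hypothetical libuv read callback under the conventional model (illustrative only, not uvco's actual code) could look like this; the waiting coroutine's handle is stashed in the stream's data field and resumed right on the callback's stack:

    #include <coroutine>
    #include <uv.h>

    void onRead(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
      // Recover the handle of the coroutine awaiting this read
      // (storing it in stream->data is an assumption made for this sketch).
      auto waiter = *static_cast<std::coroutine_handle<> *>(stream->data);
      // ... make nread/buf available to the awaiter ...
      waiter.resume();  // the awaiting coroutine runs inside this callback frame
    }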

This has the lowest latency, but it also has downsides: mainly, execution of coroutines is heavily interleaved, and unexpected internal states may arise while executing such an interleaved system of coroutines.

In contrast, this scheduler works through all waiting (but ready) coroutines sequentially. Each resumed coroutine returns when it is either finished or has reached its next suspension point. This is easier to understand, but incurs the cost of first enqueuing a coroutine and only executing it after all I/O has been polled.
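
One way the two RunModes could map onto this (an assumption for illustration; the documentation does not spell out the exact semantics): Immediate resumes a handle on the caller's stack, while Deferred queues it for the next runAll() pass.

    // Hypothetical enqueue() body, not taken from the real implementation.
    void enqueue(std::coroutine_handle<> handle) {
      if (run_mode_ == RunMode::Immediate) {
        handle.resume();                     // resume right away, interleaved
      } else {
        resumableActive_.push_back(handle);  // resume later, sequentially in runAll()
      }
    }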

This is currently only used for UDP sockets. It is intended to be 100% compatible with the conventional model. The benefit of using this scheduler is currently unclear (theoretically, it simplifies the execution stack and makes bugs less likely, or at least easier to find, thanks to non-interleaved execution of coroutines), so it has not yet been introduced everywhere.

