general architecture
This page describes how libevfibers does its job.
First of all, one needs to understand what coroutines are and how they work in plain C. Wikipedia defines a coroutine as follows:
Coroutines are computer program components that generalize subroutines to allow multiple entry points for suspending and resuming execution at certain locations.
High-level scripting languages (e.g. Lua) have built-in support for coroutines, while the C standard does not define coroutines or any similar entity.
There are, however, libraries that implement coroutines for C. The one used in libevfibers is libcoro.
To understand the basics of coroutines, let's analyse how libcoro switches from one coroutine to another on the amd64 architecture:
coro_transfer:
push %rbp
push %rbx
push %r12
push %r13
push %r14
push %r15
mov %rsp,(%rdi)
mov (%rsi),%rsp
pop %r15
pop %r14
pop %r13
pop %r12
pop %rbx
pop %rbp
retq
That's it: 15 instructions for a coroutine switch. But what does this function do? Let's have a look at its C prototype:
void coro_transfer(coro_context *prev, coro_context *next);
So first of all it stores some registers on the stack; the exact list is platform- and ABI-specific. According to the System V AMD64 ABI, which I believe my Linux laptop adheres to, RDI and RSI hold the first and the second function parameters respectively. So in this line:
mov %rsp,(%rdi)
we store the current stack pointer into the prev context, which is defined as follows for the asm backend:
struct coro_context {
void **sp;
};
Basically it's just a stack pointer. And in this line
mov (%rsi),%rsp
we load the stack pointer stored in next. Afterwards we pop the same set of registers, this time from the new stack (they were pushed there by an earlier coro_transfer call, just as we pushed ours onto the old stack).
This function is called on one stack, switches to another, and returns on the second stack. Later, code on the second stack can call coro_transfer with the parameters in reverse order to jump back to the first stack.
In C, the stack is the execution context, and a coroutine is just a separate stack.
We've discussed how we can switch back and forth between coroutines. Now let's take a look at how a new coroutine is created. The following function:
void coro_create(
coro_context *ctx, /* an uninitialised coro_context */
coro_func coro, /* the coroutine code to be executed */
void *arg, /* a single pointer passed to the coro */
void *sptr, /* start of stack area */
long ssize); /* size of stack area */
creates a new coroutine. Comments describe what each parameter means. coro_func is defined as follows:
typedef void (*coro_func)(void *);
The caller is expected to allocate a suitable stack (e.g. using malloc()) and pass its pointer along with its size. The coro_create prototype should be familiar to pthread users: you pass a function and a void * argument, and the function will be called on a separate stack. But in the case of coro_create, it will not be called until you call coro_transfer explicitly.
The documentation says the following about the initial context:
As a special case, if coro, arg, sptr and ssize are all zero, then an "empty" coro_context will be created that is suitable as an initial source for coro_transfer.
This special case gives you something to feed to coro_transfer when you want to jump to a coroutine from your main().
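Putting the pieces together, a minimal libcoro program might look like the following sketch (it assumes libcoro is built and linked with the asm backend; all API calls used here are the ones quoted on this page):

```c
#include <stdio.h>
#include <stdlib.h>
#include "coro.h"

static coro_context main_ctx, worker_ctx;

static void worker(void *arg)
{
    printf("worker: got %s\n", (const char *)arg);
    /* Yield back to main(); a coroutine must never simply return. */
    coro_transfer(&worker_ctx, &main_ctx);
}

int main(void)
{
    long ssize = 64 * 1024;
    void *sptr = malloc(ssize);

    /* The special case: an "empty" context, usable only as a
     * source for coro_transfer. */
    coro_create(&main_ctx, NULL, NULL, NULL, 0);
    coro_create(&worker_ctx, worker, "hello", sptr, ssize);

    coro_transfer(&main_ctx, &worker_ctx); /* run worker until it yields */
    printf("main: back\n");
    free(sptr);
    return 0;
}
```

Depending on the backend, libcoro may also provide its own stack helpers; plain malloc() is used here for brevity.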
Now we will establish a link between coroutines and fibers.
First, let's take a look at the classic asynchronous programming approach. This paradigm implies a running event loop (libevfibers uses libev as the event loop), which dispatches arriving events to user-defined callbacks.
With this approach, business logic usually ends up split across multiple callbacks, each typically designed to be called multiple times, e.g. until all data has arrived on a socket.
Humans understand sequential, deterministic programs better than asynchronous callback flows. What if we create a coroutine that is resumed by the event loop when an expected event arrives? This way we get sequential, blocking-style code that is easy to reason about.
With this approach, one does not interact with the event loop directly, but instead creates a coroutine controlled by a special scheduler. Such a coroutine is called a fiber.
The scheduler provides fibers with a blocking API and manages the execution of multiple fibers, all of which run in a single OS thread.
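To make the contrast concrete, here is a hedged sketch of the two styles; the callback half uses real libev idioms, while the fiber half assumes an fbr_read()-style wrapper (the exact signature shown is an assumption, not the verbatim libevfibers API):

```c
/* Callback style: the read logic is split across an event handler
 * and whatever state must be carried between invocations. */
static void on_readable(EV_P_ ev_io *w, int revents)
{
    struct conn *c = w->data;        /* manually carried state */
    ssize_t n = read(w->fd, c->buf + c->filled,
                     sizeof(c->buf) - c->filled);
    if (n > 0)
        c->filled += n;
    /* ...check for completion, re-arm or stop the watcher, etc... */
}

/* Fiber style: the same logic reads top to bottom. The read call
 * blocks only this fiber, not the OS thread. */
static void fiber_body(struct fbr_context *fctx, int fd)
{
    char buf[4096];
    size_t filled = 0;
    while (filled < sizeof(buf)) {
        ssize_t n = fbr_read(fctx, fd, buf + filled,
                             sizeof(buf) - filled); /* assumed signature */
        if (n <= 0)
            break;
        filled += n;
    }
}
```

The fiber version keeps all its state in local variables on the fiber's own stack, which is exactly what the callback version has to simulate by hand.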
Using fibers simplifies the asynchronous programmer's life considerably without introducing much overhead. But the libevfibers API is organized in a way that does not prevent you from using the libev event loop directly.
Sometimes you just want a timer, not a full-blown fiber that sleeps most of the time.
Fibers are organized into a call stack of fixed size. The so-called root fiber sits at the bottom of this stack; it is the initial main() coroutine context and also acts as the scheduler.
When execution is transferred from one fiber (A) to another (B), B is pushed onto the stack. When B has nothing left to do, it yields, which moves execution back to the previous fiber on the stack, i.e. A; B is then removed from the stack.
The root fiber runs the event loop, which in turn invokes special callbacks that transfer execution to the fibers waiting for certain events, such as I/O on a socket.
The function fbr_ev_wait() is the central dispatch point of libevfibers. It serves the same purpose as select(): it waits for a number of events to arrive. The following event types are supported:
- FBR_EV_WATCHER --- libev watcher event
- FBR_EV_MUTEX --- mutex event
- FBR_EV_COND_VAR --- conditional variable event
- FBR_EV_EIO --- libeio event
These event types cover all the functionality supported by libevfibers. fbr_ev_wait() receives an array of event watchers and yields execution to the scheduler until at least one event arrives. When the scheduler transfers control back to the fiber, at least one of the events will have its arrived flag set.
A specialized version of fbr_ev_wait() exists for the case when only one event is awaited: fbr_ev_wait_one(). It serves as the basis for all higher-level wrappers like fbr_read() and fbr_write(), both of which create a libev watcher event, initialize it and call fbr_ev_wait_one() on it.
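The shape of such a wrapper can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not libevfibers source: the helper fbr_ev_watcher_init() and the FBR_P_/FBR_A_ context-passing macros follow the library's naming style, but their exact signatures here are assumptions.

```c
/* Hypothetical wrapper built on fbr_ev_wait_one(): arm a libev
 * read watcher, yield this fiber until the fd is readable, then
 * perform the actual read. */
ssize_t read_wrapper(FBR_P_ int fd, void *buf, size_t count)
{
    struct fbr_ev_watcher watcher;
    ev_io io;

    ev_io_init(&io, NULL, fd, EV_READ);              /* libev watcher */
    fbr_ev_watcher_init(FBR_A_ &watcher, (ev_watcher *)&io);

    fbr_ev_wait_one(FBR_A_ &watcher.ev_base);        /* yield to scheduler */

    return read(fd, buf, count);                     /* fd is now readable */
}
```

The key point is the middle call: the fiber yields to the scheduler inside fbr_ev_wait_one() and is transferred back only when the event loop reports the fd as readable.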
All POSIX wrappers provided by libevfibers block only the calling fiber, not the whole OS thread, because they are built on fbr_ev_wait_one(). Any third-party blocking function will not play nicely with fibers out of the box: either it needs to be modified to be fiber-aware, or it needs to be wrapped with fbr_eio_custom(). The latter is the preferred approach if the wrapped function is not called frequently, as thread synchronization has a cost.