
Communication architecture

Sevoris Doe edited this page Jun 23, 2023 · 4 revisions


Norgopolis uses the following architecture:

  • a singleton server, norgopolis-server, which runs in a standalone process in the background
  • norgopolis-clients, which are lightweight clients for connecting to the server and run as part of the application process to which they provide access.

Server and clients communicate with each other using the gRPC protocol at localhost:62020. By default, communication occurs over two-way streams; however, either party may emulate single-message behavior by closing its stream after a single message. Streaming is provided for the transfer of large data structures over time (such as when large amounts of tree-sitter CSTs are being consumed by a module).

(Figure: Norgopolis architecture overview, 2023-05-20 architecture drawing, page 1)

The server is responsible for "loading" modules (native executables or interpreted code) in child threads of itself. Modules provide Neorg ecosystem services to each other, as well as to frontend modules that run as part of the applications which also house the clients. The purpose of the modules is to house stateful logic which triggers updates on the file system beyond the scope of the Neorg files currently loaded into a buffer, or similar services which you want to centralize in one provider across many frontends. Modules and the server communicate with each other using RPC protocol buffers piped through stdin/stdout connections.

In between, the server acts as a router of calls from clients (and utilities calling through them) to modules loaded in the server, as well as between modules (when one module consumes services from another for its own functions). The server is also responsible for managing the loading, maintenance and unloading of modules.

The Norgopolis server and clients maintain stateful, reference-counted connections. Individual connections are modelled on Tokio connections and are implicitly heartbeat as a consequence. When the last connection is dropped, a timer (ten minutes by default) starts counting down. If no new client connects within those ten minutes, Norgopolis exits: it first sends shutdown information to all connected modules and emits debug information on the gRPC port, then terminates. This allows modules that have chosen to likewise become fully independent processes to implement their own graceful shutdown if they wish.

Routing behavior

There are two main kinds of communications mediated by Norgopolis.

Type 1: Between client and module:

(Figure: client-to-module routing, 2023-05-20 architecture drawing)

In this case, the connection is routed between the internal stdin/stdout-based RPC connections and the external gRPC connection at the server's port. These connections are used to provide services to frontend clients.

Note that any application that can replicate the protocol buffers of the Norgopolis standard over the gRPC protocol can communicate with the gRPC frontend. You are responsible for handling the details of the communication. If you want to register yourself as a full client, be aware of the stateful logic required to be counted as an active connection; otherwise the Norgopolis server may exit, along with all loaded modules, once the last active connection has closed.

Type 2: Between module and module:

(Figure: module-to-module routing, 2023-05-20 architecture drawing)

This is internal communication which allows modules to consume each other's services. Here routing occurs completely over the stdin/stdout-based connections.

Any executable that conforms to the stdin/stdout-based communication using the specified protocol buffers, and that can be spawned as a child process using Command::new(), can register as a module. Ideally modules persist until the Norgopolis server exits, but transient existence is allowed; you should, however, understand the consequences that can follow. Note that in either case Norgopolis is responsible for launching and attaching the child process; the stdin/stdout-based connections are not exposed otherwise.

Override hooks

Modules may register override hooks with the server. These can be used to route calls headed for a specific module, or a specific module-and-function tuple, to a different destination instead. This is our way of supporting "decorator" or quasi-polymorphic patterns: replacing a module's services wholesale or in part, or otherwise changing behavior.

Override hooks should be used with care. Note that only one function can claim reception of a call. If you want to split messages to n recipients, this requires custom behavior; it is not offered as standard, in order to keep routing behavior and return properties predictable. Note also that if you override calls, you must still conform to the MessagePack spec of the call procedure being executed.

Communication specifications

gRPC protocol buffer

The gRPC- and stdin/stdout-transported protocol buffers used between clients and the server, and between modules and the server, follow a similar schema and standard. In both cases, a two-way streaming connection is exposed for use; this allows modules and clients to stream information over time, without the need for caching and chunking. (This may be abstracted by the methods wrapping the connection.) Single call-and-response is modelled by closing the stream in the outgoing direction after a single message.

Protocol buffers establishing a new connection contain a set of routing information specifying:

  • which module is the intended recipient,
  • which function of that module is to be invoked,
  • and a MessagePack binary containing function arguments as well as other data.

MessagePack content

To transmit arbitrary data inside standard protocol buffers, MessagePack is used to encapsulate the data. See the MessagePack spec for insights into structure building. MessagePack content is not inspected by the server. The packaging structure must match the expectations of the function the call is routed to, insofar as the function must be able to recover all expected, named variables from the MessagePack binary. Failure to comply with this expectation will return an error.

Note: without custom logic on the call receiver's side, method-overloading patterns cannot readily be supported. Consider using unique method names to keep the binding of requests workable.