
Add blockchain event notifier service #2378

Closed
wants to merge 17 commits

Conversation

ecioppettini
Contributor

@ecioppettini ecioppettini commented Jun 16, 2020

Add a Notifier service

relates to #2354

  • Add websocket endpoint
  • Process block events and send message
  • Process tip change event and send message
  • Add automated test (technically done, but still needs some thought)
  • Add setting to limit the number of connections (and set a default)
  • Define the message format. Right now the messages are JSON strings serialized with serde (originally just manually crafted strings: New Block: hash, New Tip: hash); see the sketch after this list. Maybe we can also provide a binary format, depending on the client.
  • Add this to documentation.
  • Introduce some idea of versioning the API? Something like
    • Add a version in the endpoint path, maybe the same as the REST API.
    • Use websocket subprotocols
  • Send connection close event when shutting down
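
For the message format item above, a minimal sketch of what the serde-serialized JSON could look like (the Notification enum and the exact JSON layout are illustrative assumptions, not the PR's actual types):

// Sketch only: enum name and JSON shape are assumptions, not this PR's code.
use serde::Serialize;

#[derive(Serialize)]
#[serde(tag = "type", content = "payload", rename_all = "snake_case")]
enum Notification {
    NewBlock { hash: String },
    NewTip { hash: String },
}

fn to_message(n: &Notification) -> String {
    // e.g. {"type":"new_block","payload":{"hash":"..."}}
    serde_json::to_string(n).expect("serializing a plain enum cannot fail")
}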

The max_connections setting is under the rest settings, for example:

rest:
  listen: "127.0.0.1:8443"
  notifier:
      max_connections: 5
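
For reference, a rough sketch of how that section might map onto serde settings structs; apart from the max_connections key, the struct names and the default value below are hypothetical placeholders:

// Hypothetical settings structs mirroring the YAML above; only the
// `max_connections` key comes from this PR, the rest is illustrative.
use serde::Deserialize;
use std::net::SocketAddr;

#[derive(Deserialize)]
struct Config {
    rest: Rest,
}

#[derive(Deserialize)]
struct Rest {
    listen: SocketAddr,
    // Optional section; deserializes to None when the key is omitted.
    notifier: Option<Notifier>,
}

#[derive(Deserialize)]
struct Notifier {
    // Placeholder default; the PR only says that a default should be set.
    #[serde(default = "default_max_connections")]
    max_connections: usize,
}

fn default_max_connections() -> usize {
    16
}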

Questions

NewBlock events are always sent even when NewTip has the same hash. I'm not sure if we want to filter these or not.

It can be manually quick-tested by opening a websocket from a browser, for example:

<!DOCTYPE html>
<html>
    <head>
        <title>Notifier reader</title>
    </head>
    <body>
        <h1>new blocks</h1>
        <div id="status">
            <p><em>Connecting...</em></p>
        </div>
        <script type="text/javascript">
        var uri = 'ws://localhost:8443/notifier';
        var ws = new WebSocket(uri);
        function message(data) {
            var line = document.createElement('p');
            line.innerText = data;
            var status = document.getElementById('status');
            status.appendChild(line);
        }
        ws.onopen = function() {
            var status = document.getElementById('status');
            status.innerHTML = "<p><em>Connected!</em></p>";
        }
        ws.onmessage = function(msg) {
            message(msg.data);
        };
        </script>
    </body>
</html>

@ecioppettini ecioppettini self-assigned this Jun 16, 2020
@NicolasDP NicolasDP added enhancement New feature or request A-jormungandr Area: Issues affecting jörmungandr subsys-rest issues or PRs for the REST interface labels Jun 17, 2020
@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch 2 times, most recently from 533e789 to 2caba0d Compare June 30, 2020 13:44
Contributor

@NicolasDP NicolasDP left a comment


We need to find a way not to have an async mutex over the whole set of client connections. This will create issues in the long run.

One way to explore this is to use a tokio::sync::watch, so you can notify multiple services that a value has been updated and have them read the latest version.
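
Roughly what that could look like with a recent tokio (names are illustrative, not this PR's code): each connection handler owns its own watch::Receiver and only ever reads the latest value, so no shared lock over the client list is needed.

// Illustrative sketch of tokio::sync::watch fanning out the latest tip.
use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, rx) = watch::channel(String::from("genesis"));

    // Each websocket handler would own its own cloned receiver.
    let mut handler_rx = rx.clone();
    let handler = tokio::spawn(async move {
        while handler_rx.changed().await.is_ok() {
            let tip = handler_rx.borrow().clone();
            println!("would push new tip to the client: {}", tip);
        }
    });

    // The blockchain task only needs the sender to publish updates.
    tx.send(String::from("block-hash-1")).unwrap();
    drop(tx); // dropping the sender lets the receivers terminate
    handler.await.unwrap();
}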

Outdated review threads (all marked resolved): Cargo.lock, jormungandr/src/rest/mod.rs, testing/jormungandr-integration-tests/Cargo.toml, jormungandr/src/notifier/mod.rs (3 threads)
@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch from 958c1f8 to 5ca7915 Compare July 7, 2020 19:51
@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch from 5ca7915 to 5c7253c Compare September 2, 2020 15:10
@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch from 5c7253c to 10afb4e Compare September 10, 2020 16:55
@ecioppettini ecioppettini requested a review from a team September 10, 2020 18:33
@eugene-babichenko
Contributor

eugene-babichenko commented Sep 15, 2020

The overall idea is good, but I'd go for a different approach to sending messages to clients: namely, have an actor that collects all events from the other actors, builds WS messages right away, and just broadcasts these messages to the client connection handlers. As an optimization you could skip serialization when there are no listeners.

Such an approach would perform the message serialization only once per message; right now it is done n times, where n is the number of subscriptions. It will also simplify the WS handling code. In fact, that code will become universal, because all handlers will receive the same type of message.
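
A hedged sketch of that shape (all names made up for illustration): a central task serializes each event once into a String and broadcasts the ready-to-send frame to every connection handler through tokio::sync::broadcast, skipping serialization entirely when nobody is subscribed.

// Illustrative only: serialize each event once, then fan the text frame out.
use tokio::sync::broadcast;

#[derive(serde::Serialize)]
#[serde(tag = "type")]
enum Event {
    NewTip { hash: String },
}

#[tokio::main]
async fn main() {
    let (tx, _) = broadcast::channel::<String>(16);

    // A websocket handler just forwards whatever strings it receives.
    let mut rx = tx.subscribe();
    let handler = tokio::spawn(async move {
        while let Ok(msg) = rx.recv().await {
            println!("send text frame: {}", msg);
        }
    });

    let event = Event::NewTip { hash: "abc123".into() };
    // Skip the serialization work when there are no listeners.
    if tx.receiver_count() > 0 {
        let msg = serde_json::to_string(&event).unwrap();
        let _ = tx.send(msg); // serialized once, delivered to every subscriber
    }

    drop(tx); // closing the channel lets the handlers shut down
    handler.await.unwrap();
}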

@ecioppettini
Contributor Author

@eugene-babichenko I'm not sure I get your point. The difference I see between the current code and what you are describing is the

builds WS messages right away

part. Which I think I get: we can move the serialization earlier in the flow. Right now we are just serializing ids, so it's probably not that expensive, but it makes sense.

Maybe you are referring to something different, though.

@eugene-babichenko
Contributor

@Enzoc4

I mean the following data flow:

+----------------+                                                       +-----------+
| block events   +--------------+                      +---------------->+ ws 1      |
+----------------+      +-------v-------+      +---------------+         +-----------+
                        | make message  +----->+ serialize msg |
+----------------+      +-------^-------+      +---------------+         +-----------+
| tip   events   +--------------+                      +---------------->+ ws 2      |
+----------------+                                                       +-----------+

@ecioppettini
Contributor Author

@Enzoc4

I mean the following data flow:

+----------------+                                                       +-----------+
| block events   +--------------+                      +---------------->+ ws 1      |
+----------------+      +-------v-------+      +---------------+         +-----------+
                        | make message  +----->+ serialize msg |
+----------------+      +-------^-------+      +---------------+         +-----------+
| tip   events   +--------------+                      +---------------->+ ws 2      |
+----------------+                                                       +-----------+

I think the data flow is already just like that, but I don't see how it simplifies any code, to be honest. The websocket handlers were always receiving text anyway. It probably still makes sense to serialize only once, though.

@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch from 941a3f0 to 7c6c721 Compare October 30, 2020 23:01
@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch from 7c6c721 to 5c92bf8 Compare November 25, 2020 15:33
• add also setting to configuration builder, in order to test it, and improve the panic handling of the notifier client
• in order to avoid platform specific dependencies in Cargo.lock
• the new connection requests are handled through a new message
• tip changes are handled with tokio::watch because we only care about the latest state, but block changes use a broadcast to keep old ones too
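
A compact sketch of the channel layout described in the last note above (struct and field names are assumptions, not this PR's code): the tip goes through a watch channel that only keeps the latest value, while blocks go through a broadcast channel with a bounded buffer so recent ones are retained for subscribers.

// Sketch of the described channel split; names are illustrative.
use tokio::sync::{broadcast, watch};

struct NotifierChannels {
    // Only the most recent tip matters; late readers just see the latest value.
    tip_tx: watch::Sender<String>,
    // Blocks are broadcast with a bounded buffer, so recent ones are kept
    // until subscribers have read them (or are overwritten when lagging).
    block_tx: broadcast::Sender<String>,
}

impl NotifierChannels {
    fn new() -> (Self, watch::Receiver<String>) {
        let (tip_tx, tip_rx) = watch::channel(String::new());
        let (block_tx, _) = broadcast::channel(32);
        (Self { tip_tx, block_tx }, tip_rx)
    }

    // Each new websocket client gets its own block receiver.
    fn subscribe_blocks(&self) -> broadcast::Receiver<String> {
        self.block_tx.subscribe()
    }

    fn new_tip(&self, hash: String) {
        let _ = self.tip_tx.send(hash);
    }

    fn new_block(&self, hash: String) {
        let _ = self.block_tx.send(hash);
    }
}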
@ecioppettini ecioppettini force-pushed the add-blockchain-event-notifier-service branch from 5c92bf8 to 1cc7bf3 Compare March 3, 2021 19:50
@ecioppettini ecioppettini deleted the add-blockchain-event-notifier-service branch October 20, 2021 02:07