
Updated "real network interface ... for delegate server" with per-client messages support #242

Conversation


@uhbif19 uhbif19 commented Apr 3, 2023

Based on @MangoIV's PR #217

Changes:

Docstrings may be out of sync now.

MangoIV and others added 16 commits March 27, 2023 22:23
- add Delegate Server and types
- rewrite the Main to work with a proper server
- some changes to handle serialisation
- two channels, an incoming and an outgoing one
- incoming channel receives from all clients and gets flushed by the
  consumer
- outgoing channel is a broadcast channel that gets cloned by each
  client and then sends all events back to the frontend
- factor out Tracing in a separate module
- remove tagless final style where sensible
- haddock
- fixes that Gregory requested
- nicer logging
- some fork off handler
…ent-real-network-interface-to-be-used-by-frontend-for-delegate-server
- derive Pretty via Show instead of using viaShow directly
- more documentation
- more where clauses where possible
- separation of the startWorker function
- factor out some part of the serverlog
…ace-to-be-used-by-frontend-for-delegate-server
Also some renaming of workers.

Merge remote-tracking branch 'origin/uhbif19/split-delegate-step' into uhbif19/175-implement-real-network-interface-to-be-used-by-frontend-for-delegate-server
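The two-channel layout described in the commit messages above (one incoming queue flushed by a single consumer, one broadcast channel duplicated by each client) can be sketched with the stm package. This is a minimal illustration; the names are placeholders, not the PR's actual identifiers:

```haskell
import Control.Concurrent.STM

type ClientId = Int

-- Incoming: every client writes into one shared queue that a single
-- consumer flushes.
pushRequest :: TQueue (ClientId, String) -> ClientId -> String -> IO ()
pushRequest q cid req = atomically (writeTQueue q (cid, req))

-- Outgoing: a broadcast channel; each client duplicates it, and every
-- copy sees everything written after the dupTChan call.
subscribe :: TChan String -> IO (TChan String)
subscribe = atomically . dupTChan

main :: IO ()
main = do
  incoming <- newTQueueIO
  outgoing <- newBroadcastTChanIO
  copyA <- subscribe outgoing
  copyB <- subscribe outgoing
  pushRequest incoming 1 "bid"
  atomically (writeTChan outgoing "state-update")
  req <- atomically (readTQueue incoming)
  a <- atomically (readTChan copyA)
  b <- atomically (readTChan copyB)
  print (req, a, b)  -- both duplicated copies receive the broadcast
```

Writing to a channel created by newBroadcastTChanIO only reaches readers that called dupTChan first, which is exactly the per-client fan-out described above.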
@uhbif19 uhbif19 requested a review from a team as a code owner April 3, 2023 23:06
Comment on lines +133 to +158
receiveAndPutInQueue ::
  Tracer IO DelegateServerLog ->
  Connection ->
  ClientId ->
  IO ()
receiveAndPutInQueue tracer connection clientId =
  runWithTracer' tracer $ do
    inp <- liftIO $ receiveData connection
    case eitherDecode @FrontendRequest inp of
      Left msg -> do
        let err = FrontendNoParse msg
        trace (DelegateError err)
        liftIO $ sendToClient connection err
      Right request -> do
        trace $ FrontendInput request
        liftIO . atomically $
          writeTQueue frontendRequestQueue (clientId, request)

sendFromChannel ::
  Connection ->
  ClientId ->
  TChan (ClientResponseScope, DelegateResponse) ->
  IO ()
sendFromChannel connection clientId broadcastCopy = do
  (scope, message) <- atomically (readTChan broadcastCopy)
  when (clientIsInScope clientId scope) $
    sendToClient connection message
Contributor:

I think it was good documentation to have it return an IO Void to signal that it's running infinitely. Now, with forever in the block, you will have to read the whole do block again; wasn't that what you were trying to avoid?

Contributor Author:

> I think it was good documentation to have it return an IO Void to signal that it's running infinitely

Yes.

> you will have to read the whole do block again

But how else should one get to the send/receive function signatures, other than by reading the do-block? They are local functions.
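For readers outside the thread, the two styles under discussion can be sketched like this (a hypothetical receiveLoop, not the PR's code). Returning IO Void records non-termination in the signature itself, while forever records it only in the body:

```haskell
import Control.Monad (forever)
import Data.Void (Void)

-- Style preferred by the reviewer: the signature alone says the action
-- never returns normally, since no value of type Void can be produced.
receiveLoop :: IO () -> IO Void
receiveLoop step = forever step

-- Style in the PR: an ordinary IO () signature; the reader must find
-- the 'forever' in the body to learn that the action loops.
receiveLoopUnit :: IO () -> IO ()
receiveLoopUnit = forever
```

Both compile because forever has type `Applicative f => f a -> f b`, so the loop's result type can be instantiated to Void or to ().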

Comment on lines +168 to +169
TQueue DelegateEvent ->
TQueue (ClientId, FrontendRequest) ->
Contributor:

Why do we have two queues here? I don't understand why we need more than one queue. Why are FrontendRequests not an "event" anymore?

Contributor Author:

With DelegateEvent, the delegateStep typing seems to be wrong for broadcasting. You can see why in #234
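A minimal sketch of why the two queues carry different payloads (the constructors here are assumed placeholders, not the PR's real ones): delegate events arrive with no client attached, while frontend requests must keep their ClientId so replies can be scoped back to the right client.

```haskell
import Control.Concurrent.STM

type ClientId = Int

-- Placeholder constructors; the real DelegateEvent in the PR wraps
-- Hydra node events, timers, etc.
data DelegateEvent = Start | Stop deriving (Show, Eq)
data FrontendRequest = QueryCurrentDelegateState deriving (Show, Eq)

main :: IO ()
main = do
  eventQueue <- newTQueueIO :: IO (TQueue DelegateEvent)
  requestQueue <- newTQueueIO :: IO (TQueue (ClientId, FrontendRequest))
  atomically (writeTQueue eventQueue Start)
  atomically (writeTQueue requestQueue (42, QueryCurrentDelegateState))
  e <- atomically (readTQueue eventQueue)
  (cid, r) <- atomically (readTQueue requestQueue)
  print (e, cid, r)
```

Folding requests back into the event type would erase the ClientId pairing that per-client response scoping depends on.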

where
  freshClientIdGenerator clientCounter = do
    v <- takeMVar clientCounter
    putMVar clientCounter (v + 1)
Contributor:

Why do we not return the fresh one but the old one? That sounds dangerous if we don't wrap the MVar.

Contributor Author:

What is the difference? Does this change atomicity somehow?

Collaborator:

I'm curious about this as well; what scenario are you thinking of, @MangoIV?

Contributor:

It's not so important, but if we take out the MVar and expect to get the same next value in the counter, then that's an incorrect assumption, because it will be the next next value.

Contributor Author:

> and expect to get the same next value in the counter

Why would somebody expect that?
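The "wrapped" alternative raised in the thread can be sketched with modifyMVar, which brackets the take and put as one operation; the returned pair lets the caller choose what to hand back. Returning the old value, as the PR's freshClientIdGenerator does, yields ids 0, 1, 2, and so on (freshClientId is an illustrative name):

```haskell
import Control.Concurrent.MVar

-- Atomically bump the counter and return the value it held before the
-- bump; the counter itself moves on to v + 1.
freshClientId :: MVar Int -> IO Int
freshClientId counter = modifyMVar counter (\v -> pure (v + 1, v))

main :: IO ()
main = do
  c <- newMVar 0
  a <- freshClientId c
  b <- freshClientId c
  print (a, b)  -- (0,1): each caller gets a distinct id
```

modifyMVar is also exception-safe: if the update function throws, the original value is put back, which the bare takeMVar/putMVar pair does not guarantee.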

Comment on lines +131 to +132
sendToClient :: forall a. ToJSON a => Connection -> a -> IO ()
sendToClient connection = sendTextData connection . encode
Contributor:

This name has less information than before; sendTextData also sends to the client. I don't know how this is better than encodeSend, which at least expresses everything it does.

Contributor Author:

All the local functions send something, either to a queue or to a client, so while reading this code I tried to understand which part sends where.

And encoding to text or to bytes is an implementation detail that only matters to the client; it does not matter for the design of this module.

newtype ViaShow a = ViaShow {unViaShow :: a}
  deriving stock (Eq, Ord, Generic)

instance Show a => Pretty (ViaShow a) where
Collaborator:

I guess the issue is that if we say Show a => Pretty a then we make instance resolution non-deterministic... but why can't GHC pick a Pretty instance if it has one in scope, and only if it can't find one follow the Show branch?

Contributor:

Because you don't want that. This is the motivation for deriving via. You don't want to have catch-all instances, because of the open-world assumption of Haskell type classes.

Contributor Author:

> but why can't GHC pick a Pretty instance if it has one in scope, and only if it can't find one then follow the Show branch?

Isn't that what OVERLAPPABLE does?

> newtype ViaShow

It is strange to me that Pretty does not already have this implemented. Probably because deriving via is a recent thing.

Contributor:

The good thing is that the whole point of deriving via is that we do not have to care whether it's implemented already; we can derive instances ad hoc by just defining a newtype wrapper.
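The ad-hoc derivation described above can be sketched end to end (this assumes the prettyprinter package's Pretty class; AuctionPhase is an illustrative type, not from the PR). Any type with a Show instance can opt in to a Show-based Pretty instance through the newtype, with no catch-all `Show a => Pretty a` instance in scope:

```haskell
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE DerivingVia #-}

import Prettyprinter (Pretty (pretty), viaShow)

-- The wrapper from the diff: its Pretty instance delegates to Show.
newtype ViaShow a = ViaShow {unViaShow :: a}

instance Show a => Pretty (ViaShow a) where
  pretty = viaShow . unViaShow

-- An illustrative type opting in via the wrapper.
data AuctionPhase = Announced | BiddingStarted
  deriving stock (Show)
  deriving (Pretty) via (ViaShow AuctionPhase)

main :: IO ()
main = print (pretty BiddingStarted)  -- rendered through the Show instance
```

Types that want a hand-written Pretty instance simply don't use the via clause, so instance resolution stays deterministic.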

DelegateEvent (HydraEvent ReadyToFanout) -> return []
DelegateEvent (HydraEvent _) -> return []
(ClientId, FrontendRequest) ->
DelegateRunnerT m [(ClientResponseScope, DelegateResponse)]
Collaborator:

Why do we have a list in the return type here? I think the current interface (Broadcast/PerClient) lets us do everything we need.

Contributor Author:

For the case of returning multiple responses, like one broadcast and one per-request. It is not used now, but maybe there is some case where this is required. Probably we do not need it, but it does not seem to hurt for now, no?

Contributor:

I mean, it's a bit of a distinction between a list of pairs and a pair of lists (besides, perhaps, performance characteristics).

Contributor Author:

A pair of lists does not seem to work here, because we need to process the responses in pairs.

Also, we may need to change this pair into an ADT later, because delegateXStep may produce other kinds of responses, like adding a DelegateEvent.

Collaborator:

@uhbif19 I think you are right. As a matter of fact, in your next PR I see a good case for returning multiple responses.

When a client places a newBid we want to confirm the bid is placed and give the user the ClosingTxTemplate, but also broadcast an update of the delegate state.
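That newBid case can be sketched as a single response list (the types here are simplified stand-ins for the PR's ClientResponseScope and DelegateResponse, and handleNewBid is a hypothetical step):

```haskell
type ClientId = Int

data ClientResponseScope = Broadcast | PerClient ClientId
  deriving (Show, Eq)

-- Simplified stand-in responses.
data DelegateResponse = BidConfirmed | DelegateStateUpdated
  deriving (Show, Eq)

-- Mirrors the scoping check used when draining the broadcast channel.
clientIsInScope :: ClientId -> ClientResponseScope -> Bool
clientIsInScope _ Broadcast = True
clientIsInScope cid (PerClient c) = cid == c

-- One step answers the bidder privately and broadcasts the state change;
-- the list return type is what makes emitting both possible.
handleNewBid :: ClientId -> [(ClientResponseScope, DelegateResponse)]
handleNewBid bidder =
  [ (PerClient bidder, BidConfirmed)
  , (Broadcast, DelegateStateUpdated)
  ]

main :: IO ()
main = print (filter (clientIsInScope 7 . fst) (handleNewBid 7))
```

A pair of lists could express the same payloads, but the list of pairs keeps each response next to its scope, which is what the sender loop consumes.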

@uhbif19 uhbif19 requested a review from MangoIV April 6, 2023 11:47
@uhbif19 uhbif19 merged commit 140bd7a into staging Apr 8, 2023