Updated "real network interface ... for delegate server" with per-client messages support #242
Conversation
- add Delegate Server and types - rewrite the Main to work with a proper server - some changes to handle serialisation
- two channels, an incoming and an outgoing one - incoming channel receives from all clients and gets flushed by the consumer - outgoing channel is a broadcast channel that gets cloned by each client and then sends all events back to the frontend (see the channel sketch after these commit notes)
- factor out Tracing in a separate module - remove tagless final style where sensible - haddock
- fixes that Gregory requested - nicer logging - some fork off handler
…ent-real-network-interface-to-be-used-by-frontend-for-delegate-server
- derive pretty via show instead of using viaShow directly - more documentation - more where clauses where possible - separation of the startWorker function - factor out some part of the serverlog
…ace-to-be-used-by-frontend-for-delegate-server
Also some renaming of workers. Merge remote-tracking branch 'origin/uhbif19/split-delegate-step' into uhbif19/175-implement-real-network-interface-to-be-used-by-frontend-for-delegate-server
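For orientation, the two-channel layout described in the commit notes can be sketched roughly like this. This is a minimal sketch; Channels, newChannels and clientCopy are illustrative names, not the actual module API.

import Control.Concurrent.STM
  (TChan, TQueue, atomically, dupTChan, newBroadcastTChanIO, newTQueueIO)

type ClientId = Int

data Channels request response = Channels
  { incomingQueue :: TQueue (ClientId, request)
    -- ^ shared by all clients, flushed by the single consumer
  , outgoingChan :: TChan response
    -- ^ broadcast channel; the consumer writes, each client reads its own copy
  }

newChannels :: IO (Channels request response)
newChannels = Channels <$> newTQueueIO <*> newBroadcastTChanIO

-- Each client thread duplicates the broadcast channel, so every response
-- written by the consumer reaches every connected client.
clientCopy :: Channels request response -> IO (TChan response)
clientCopy = atomically . dupTChan . outgoingChan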
receiveAndPutInQueue ::
  Tracer IO DelegateServerLog ->
  Connection ->
  ClientId ->
  IO ()
receiveAndPutInQueue tracer connection clientId =
  runWithTracer' tracer $ do
    inp <- liftIO $ receiveData connection
    case eitherDecode @FrontendRequest inp of
      Left msg -> do
        let err = FrontendNoParse msg
        trace (DelegateError err)
        liftIO $ sendToClient connection err
      Right request -> do
        trace $ FrontendInput request
        liftIO . atomically $
          writeTQueue frontendRequestQueue (clientId, request)

sendFromChannel ::
  Connection ->
  ClientId ->
  TChan (ClientResponseScope, DelegateResponse) ->
  IO ()
sendFromChannel connection clientId broadcastCopy = do
  (scope, message) <- atomically (readTChan broadcastCopy)
  when (clientIsInScope clientId scope) $
    sendToClient connection message
I think it was good documentation to have it return an IO Void to signal that it's running infinitely. Now with forever in the block, you will have to read the whole do block again; wasn't that what you were trying to avoid?
"it was good documentation to have it return an IO Void to signal that it's running infinitely"

Yes.

"you will have to read the whole do block again"

But how would one get to the send/receive functions' signatures other than by reading the do-block? They are local functions.
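For context, the two styles being compared look roughly like this (placeholder names and bodies, not the PR's actual workers):

import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import Data.Void (Void)

-- Variant A: the signature itself documents that the action never returns.
pollForever :: IO Void
pollForever = forever (threadDelay 1000000)

-- Variant B: a single step returning IO (), looped with `forever` at the call site.
pollOnce :: IO ()
pollOnce = threadDelay 1000000

runWorker :: IO ()
runWorker = forever pollOnce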
TQueue DelegateEvent ->
TQueue (ClientId, FrontendRequest) ->
Why do we have two queues here? I don't understand why we need more than one queue. Why are FrontendRequests not an "event" anymore?
With DelegateEvent, the delegateStep typing seems to be wrong for broadcasting. You can see why in #234.
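To make the distinction concrete, the two payload types force two differently typed step functions. The sketch below uses placeholder type bodies and illustrative function names; the ClientResponseScope constructors are an assumption along the lines of Broadcast | PerClient.

type ClientId = Int

data ClientResponseScope = Broadcast | PerClient ClientId

-- Placeholder stand-ins for the real types from the PR.
data DelegateEvent = SomeDelegateEvent
data FrontendRequest = SomeFrontendRequest
data DelegateResponse = SomeDelegateResponse

-- Delegate events are not tied to any client, so their responses can only
-- be broadcast; frontend requests keep the ClientId so the step function
-- can scope responses to the requesting client. That is why the two queues
-- cannot carry the same payload type.
delegateEventStep :: Monad m => DelegateEvent -> m [DelegateResponse]
delegateEventStep _ = pure []

frontendRequestStep ::
  Monad m =>
  (ClientId, FrontendRequest) ->
  m [(ClientResponseScope, DelegateResponse)]
frontendRequestStep _ = pure []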
where
  freshClientIdGenerator clientCounter = do
    v <- takeMVar clientCounter
    putMVar clientCounter (v + 1)
Why do we not return the fresh one but the old one? That sounds dangerous if we don't wrap the MVar.
What is the difference? Does this change atomicity somehow?
I'm curious about this as well; what scenario are you thinking of, @MangoIV?
It's not so important, but if we take out the MVar and expect to get the same next value in the counter, that's an incorrect assumption, because it will be the next-next value.
"and expect to get the same next value in the counter"

Why would somebody expect that?
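For reference, the take/put pair can also be written as a single modifyMVar call, which makes the increment exception-safe and makes explicit that the caller gets the value that was just allocated, not the next one. This is a sketch, not necessarily how the PR should do it; freshClientId is an illustrative name.

import Control.Concurrent.MVar (MVar, modifyMVar, newMVar)

type ClientId = Int

-- Atomically bump the counter and hand back the id that was just allocated.
freshClientId :: MVar ClientId -> IO ClientId
freshClientId counter = modifyMVar counter (\v -> pure (v + 1, v))

main :: IO ()
main = do
  counter <- newMVar 0
  a <- freshClientId counter -- 0
  b <- freshClientId counter -- 1
  print (a, b)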
sendToClient :: forall a. ToJSON a => Connection -> a -> IO ()
sendToClient connection = sendTextData connection . encode
This name has less information than before; sendTextData also sends to the client. I don't know how this is better than encodeSend, which at least expresses everything it does.
All local functions send something, either to a queue or to a client, so while reading this code I tried to understand which part sends where. Encoding to text or to bytecode is an implementation detail that is only important to the client and does not matter for the design of this module.
newtype ViaShow a = ViaShow {unViaShow :: a}
  deriving stock (Eq, Ord, Generic)

instance Show a => Pretty (ViaShow a) where
I guess the issue is that if we say Show a => Pretty a then we make instance resolution non-deterministic... but why can't GHC pick a Pretty instance if it has one in scope, and only if it can't find one follow the Show branch?
Because you don't want that. This is the motivation for deriving via: you don't want catch-all instances, because of the open-world assumption of Haskell classes.
"but why can't GHC pick a Pretty instance if it has one in scope, and only if it can't find one follow the Show branch?"

Isn't that what OVERLAPPABLE does?

Regarding newtype ViaShow: it is strange to me that Pretty does not already have that implemented. Probably because deriving via is a recent thing.
The good thing is that the whole point of deriving via is that we do not have to care whether it's implemented already; we can create ad-hoc derivable instances by just defining a newtype wrapper.
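For illustration, the deriving-via pattern being discussed looks like this. The Prettyprinter import, the instance body, and the DelegateServerLog constructors are assumptions for the example, not necessarily the project's actual code.

{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE DerivingVia #-}

import Prettyprinter (Pretty (..), viaShow)

-- The wrapper from the diff, repeated so the example is self-contained.
newtype ViaShow a = ViaShow {unViaShow :: a}

instance Show a => Pretty (ViaShow a) where
  pretty = viaShow . unViaShow

-- Opt in per type instead of writing a catch-all `Show a => Pretty a` instance.
data DelegateServerLog = Started | Stopped -- hypothetical constructors
  deriving stock (Show)
  deriving (Pretty) via (ViaShow DelegateServerLog)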
DelegateEvent (HydraEvent ReadyToFanout) -> return []
DelegateEvent (HydraEvent _) -> return []

(ClientId, FrontendRequest) ->
  DelegateRunnerT m [(ClientResponseScope, DelegateResponse)]
Why do we have a list in the return type here? I think the current interface (Broadcast/PerClient) lets us do everything we need.
For the case of returning multiple responses, like one for broadcast and one per request. It is not used now, but maybe there is some case where this is required. Probably we do not need it, but it does not seem to hurt for now, no?
I mean it's a bit of a distinction between a list of pairs and a pair of lists (besides, perhaps, performance characteristics).
A pair of lists does not seem to work here, because we need to process in pairs. Also, we may need to change this pair into an ADT later, because delegateXStep may produce other kinds of responses, like adding a DelegateEvent.
@uhbif19 I think you are right. As a matter of fact, in your next PR I see a good case for returning multiple responses.
When a client places a newBid, we want to confirm the bid is placed and give the user the ClosingTxTemplate, but also broadcast an update of the delegate state.
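A sketch of that newBid case with the list return type. The constructor names (BidPlaced, NewDelegateState) and handleNewBid are illustrative; only the list-of-scoped-responses shape comes from the PR.

type ClientId = Int

data ClientResponseScope = Broadcast | PerClient ClientId

data DelegateResponse
  = BidPlaced -- confirmation, eventually carrying the ClosingTxTemplate
  | NewDelegateState String

handleNewBid :: Monad m => ClientId -> m [(ClientResponseScope, DelegateResponse)]
handleNewBid clientId =
  pure
    [ (PerClient clientId, BidPlaced) -- answer only the bidder
    , (Broadcast, NewDelegateState "updated state") -- everyone sees the new state
    ]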
Based on @MangoIV's PR #217.
Changes: split delegateStep into two parts for per-client response support (#234).
Docstrings may be out of sync now.