Serialize required requests internally #562
please serialize requests
@jl777 We're now queueing requests one at a time to resolve this issue. However, it might make more sense for you to implement the queue logic on your end inside marketmaker. Another benefit of queueing on your end is that if you know internally which commands need to be queued and which are safe to call asynchronously, you can bypass the queue for those commands. This can be a pretty huge performance win for GUI applications, where a very quick request needed to show a UI element may be blocked by a slow query queued ahead of it.
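As a rough illustration of the client-side workaround described above, here's a minimal sketch of a serialized request queue with a bypass set, assuming a generic `rpc` helper; the command names are placeholders, not marketmaker's actual API surface:

```ts
type Rpc = (method: string, params?: object) => Promise<unknown>;

// Hypothetical set of commands known to be safe to run concurrently.
const ASYNC_SAFE = new Set(['orderbook']);

function makeQueuedRpc(rpc: Rpc): Rpc {
  let tail: Promise<unknown> = Promise.resolve();

  return (method, params) => {
    // Async-safe commands skip the queue entirely.
    if (ASYNC_SAFE.has(method)) return rpc(method, params);

    // Everything else is chained so only one request is in flight at a time.
    const result = tail.then(() => rpc(method, params));
    tail = result.catch(() => {}); // keep the chain alive after a failure
    return result;
  };
}
```

The downside is exactly the one described above: with an empty bypass set, every fast request waits behind whatever slow query happens to be queued ahead of it.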
all good points. however the current version has been tested under load and it is working. I am reluctant to make such a big change as it would require requalifying it, especially under load. so this is a wontfix for now, but certainly a good feature for version 2.0
Understandable. Can we reopen this issue to keep it tracked? Maybe under a different title such as "Serialize required requests internally".
ok, go ahead
Title changed from "marketmaker doesn't seem to be able to handle concurrent connections" to "Serialize required requests internally".
Updated the title, doesn't look like I have permission to reopen.
Just been speaking to @pondsea and he said if I could give you a little more insight into how this is causing issues you might be able to implement some async logic.

So to give you an example, there's no socket interface for things like balance or UTXO count that we can subscribe to, so to provide a good user experience on the exchange page and keep all data in sync we need to poll. However, due to the problems mentioned previously in this issue, we are now adding all API requests to a queue and running them one at a time. Any slow requests block everything queued behind them.

Lots of these requests are totally unrelated, so they should be able to be executed asynchronously. For example, it doesn't make sense that a slow UTXO lookup should delay us from updating the order book or enabling a new Electrum coin.

It would be ideal if you could implement the queueing on your side, because you know what information depends on what. We could then make as many requests as the UI needs in parallel; if you can return the data instantly you do, and if not, the request waits in your internal queue. That way a single slow request won't hang the entire app.

Does that make sense? Is this something you can implement? It's causing pretty major usability issues with our GUI. If you can implement this we just need to remove the queue logic from our API client and these performance issues will just disappear.

TLDR: please queue the requests that need it internally in marketmaker so we can drop our client-side queue.
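To make that concrete, once the queueing lives inside marketmaker the GUI could fire everything in parallel and render each piece of data as it arrives; a sketch, with hypothetical command names:

```ts
type Rpc = (method: string) => Promise<unknown>;

// Hypothetical command names; the point is that nothing waits on anything else.
async function refreshExchangePage(rpc: Rpc) {
  const orderbook = rpc('orderbook'); // fast: render as soon as it resolves
  const balance = rpc('balance');     // slow SPV lookup: must not block the rest

  orderbook.then(data => console.log('orderbook ready', data));
  balance.then(data => console.log('balance ready', data));

  await Promise.all([orderbook, balance]);
}
```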
What you are talking about would require rewriting a significant part of mm and practically speaking will need to wait until v2.

However, I am wondering why you don't just query electrum servers directly? a lot of the mm api functions are just convenience functions and as such they are not implemented for high performance. If you want a fully asynchronous everything, then split out all the electrum calls and do them however you wish. No sense to use the limited mm variant when you can directly query the SPV servers, however many you want to, using whatever logic you want.

I think I can safely add queuing of adding an electrum coin if that is a specific slowness that you need fixed. listunspent you surely can do directly and bypass the serialization.
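For reference, querying an Electrum server directly is a single newline-delimited JSON-RPC exchange over TCP. A rough sketch follows; the host/port are placeholders, and the address-based method shown here has been replaced by `blockchain.scripthash.listunspent` on newer ElectrumX servers:

```ts
import net from 'node:net';

// Ask an Electrum server for an address's unspent outputs, bypassing mm.
function electrumListUnspent(host: string, port: number, address: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const socket = net.connect(port, host, () => {
      socket.write(JSON.stringify({
        id: 1,
        method: 'blockchain.address.listunspent',
        params: [address],
      }) + '\n');
    });

    let buffer = '';
    socket.on('data', chunk => {
      buffer += chunk.toString();
      const newline = buffer.indexOf('\n');
      if (newline !== -1) { // responses are newline-terminated JSON
        socket.end();
        resolve(JSON.parse(buffer.slice(0, newline)).result);
      }
    });
    socket.on('error', reject);
  });
}
```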
The Electrum stuff was just an example, but yes, moving all Electrum/balance/utxo stuff internally is definitely something we can look into to reduce the load on marketmaker.

However one concern of mine is that then we have two separate electrum implementations, which could result in data inconsistency, with our numbers and mm's numbers disagreeing. There are also issues if our code or your code fails over to a different electrum server: they could be at different heights, meaning the UI thinks we have one balance but mm calculates another. The inconsistencies that could occur with two isolated Electrum instances in the same app could cause very unpredictable behaviour. But still, definitely something we can look in to.

However, even if we manage to reliably move all balance/Electrum stuff over to our end, we still need to poll marketmaker for everything else. So we'd still need a reliable way to poll without requests being dropped.

One alternative solution might be for you to provide a socket that we can connect to and subscribe to events. So for example we could connect and subscribe to the events we need, and you'd push updates whenever the underlying data changes.
the problem with the electrum support as it is is due to its support being added at the last minute; it wasn't part of the initial design, so unfortunately all the fancy things will need to be for 2.0.

I am worried that if each client makes 10 SPV requests that increases the overall load on the servers 10x, which will make people want to issue 20 SPV requests, ... not a good feedback loop

the balance calcs mm uses are just a sum of unspent utxos; for swaps in progress they are updated as spent in electrumX (at least with the latest versions that made 0conf support much better)

I am not seeing any crashing based on polling mm. if you have a test setup that crashes mm on polling, I need to fix that. seems better to fix the few remaining bugs than to rearchitect major dataflows at this late stage. So let us try to be specific with reproducible bugs, preferably at the cli level, ie. a script with a bunch of curl requests to the mm api that causes a crash.
That's fair enough, I understand.

I actually had an idea re electrum support that could solve some of the concerns I had. We could implement all the groovy Electrum logic on our end in our app. We handle it robustly, connecting to multiple servers and using the first response we get, so it's decentralised, fast, and resistant to a single electrum server going down. Then we open a local socket for each electrum coin that just proxies everything through our Electrum logic, and mm connects to that.

So as far as mm is concerned it's just connected to a single Electrum server running on localhost. In reality it's connecting to our proxy and will run through our routing logic that will ping multiple servers etc. It'll be totally transparent as far as mm is concerned and require no code changes on your end. But our GUI and mm will both be using the same electrum servers routed via the same logic.

Does that seem like a good solution to you?

Re the dropped requests, I'll try to get a repro over to you.
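A bare-bones sketch of that proxy idea, to make it concrete: mm connects to 127.0.0.1 as if it were one Electrum server, and each request line is fanned out to several upstreams, with the first response per request id winning. The upstream addresses are placeholders, and reconnection, failover, and subscription notifications (which carry no id) are all omitted:

```ts
import net from 'node:net';

const UPSTREAMS = [
  { host: 'electrum1.example.com', port: 50001 }, // placeholder servers
  { host: 'electrum2.example.com', port: 50001 },
];

net.createServer(client => {
  const answered = new Set<number>();
  const upstreams = UPSTREAMS.map(({ host, port }) => {
    const socket = net.connect(port, host);
    let buffer = '';
    socket.on('data', chunk => {
      buffer += chunk.toString();
      let newline;
      while ((newline = buffer.indexOf('\n')) !== -1) {
        const line = buffer.slice(0, newline + 1);
        buffer = buffer.slice(newline + 1);
        const id = JSON.parse(line).id;
        if (!answered.has(id)) { // first response for this request wins
          answered.add(id);
          client.write(line);
        }
      }
    });
    socket.on('error', () => {}); // a dead upstream simply stops answering
    return socket;
  });

  // Fan every request from mm out to all upstream servers.
  client.on('data', chunk => upstreams.forEach(s => s.write(chunk)));
  client.on('close', () => upstreams.forEach(s => s.end()));
}).listen(50001, '127.0.0.1');
```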
pretty clever! I like it.
This came about due to a dream he had last night lol....
From your message above: #562 (comment)
Is this still an option? When the user logs in to the app we show them a loading screen while we wait for the mm process to initialise and set everything up. Currently, adding electrum servers is a major bottleneck. If you could do this concurrently and safely, that would give a drastic reduction in login times. Obviously safety is the priority; if there's a chance it's going to cause instability then it's not worth the risk.
anything async could make things unstable. one question is why do you need to add all electrum servers before completing login? only one coin would be needed for login to have occurred
We have certain coins enabled by default and users can enable any extras manually. These are all enabled at login and then we call portfolio so the user's full balance is visible straight away. If we just enabled a single coin at login their portfolio would appear to be missing funds and would cause confusion for users.

Obviously better Electrum support for v2 or moving all Electrum queries in-app is the ultimate solution, but we're just working with what we've got available at the moment.
on startup the prevaccess time is 0, so it would issue the SPV call. I could start things out so it would get the balance from the cache, but of course if balance has changed it would be showing the old values at first
The issue is that we initialise each electrum server one after the other and each call takes quite a long time. Obviously if the user has 20-30 currencies enabled it's going to take a very long time to log in. If you can make it async it should instantly solve the issue. If not, then anything you can do to improve the electrum enabling speed is great.

Regarding your last comment, does that just mean the balances would be wrong for a few seconds? So if we're polling portfolio every few seconds it would all sync up pretty fast?
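To illustrate the difference on the GUI side (the payload shape is a hypothetical stand-in for the real enable call):

```ts
type Rpc = (method: string, params?: object) => Promise<unknown>;

// Sequential: login time grows linearly with the number of enabled coins.
async function enableCoinsSerially(rpc: Rpc, coins: string[]) {
  for (const coin of coins) {
    await rpc('electrum', { coin }); // hypothetical payload shape
  }
}

// Concurrent: login time is bounded by the slowest single coin, but this is
// only viable if mm can safely handle simultaneous electrum activations.
async function enableCoinsConcurrently(rpc: Rpc, coins: string[]) {
  await Promise.all(coins.map(coin => rpc('electrum', { coin })));
}
```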
yes, changing the initial state of the usecache logic would be in the safe category. messing with electrum async, can't guarantee it won't destabilize. I will add the using-cache-files-first logic; it won't always speed it up but should avoid the long wait for the initial setting of the cache
OK, it's there in the dev branch
Awesome, thanks James! Btw, does this mean that however long it takes to sync the balances properly will be the same amount of time it would have taken to log in without the optimisation? Or will this get us to a logged-in state with synced balances sooner?
I am not 100% sure, I hope the latter
Ok, thanks, will test.
@jl777 Just tested on the latest dev and it seems to have messed up the portfolio command. Not sure if that's from the electrum changes or the portfolio changes (858fbf7#commitcomment-28912115). But I have eight currencies enabled in electrum mode and when I run the portfolio command the response only contains three currencies. Even after leaving mm running for ~10 minutes and polling portfolio, the missing currencies never show up.
Looks like it was only the currencies that had balances previously that are now showing up. But I even tried sending coins to a new currency and it's not showing. Although that seems like a possibly unrelated issue, as I'm not seeing the new balance in a previous version of mm either.
Possibly related: #833
I guess I need to revert the portfolio caching
it's in dev now without the caching of portfolio. it was issuing a getbalance in the portfolio call, which would trigger a listunspent. so a portfolio call will again be making SPV calls, but the listunspent cache will make it not so slow.
Working, thanks!
If I make a few concurrent connections to `marketmaker`, the connections are just dropped, which makes it hard to know what's going on or to provide useful information in a GUI implementation. It doesn't have to be a huge amount of requests, just a small amount happening concurrently. For example, making 4 simultaneous requests to enable Electrum coins regularly results in one of the requests being dropped, which gives an ugly `net::ERR_SOCKET_NOT_CONNECTED` error in Chromium that doesn't give a lot of insight into what's going on.

If we queue all our API requests and limit them to only execute one at a time, everything works, but there are pretty large performance issues with doing this in our GUI.
Is this expected behaviour or a bug? Are we supposed to be able to issue commands concurrently?
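A sketch of the failure mode, in the spirit of the reproducible-bug script requested above; the port and payload shape are assumptions standing in for the real mm API:

```ts
// Fire several enable requests at once; one regularly fails, surfacing as
// net::ERR_SOCKET_NOT_CONNECTED in Chromium.
async function repro() {
  const coins = ['KMD', 'LTC', 'DOGE', 'VTC'];

  const results = await Promise.allSettled(coins.map(coin =>
    fetch('http://127.0.0.1:7783', {
      method: 'POST',
      body: JSON.stringify({ method: 'electrum', coin }), // hypothetical payload
    })
  ));

  results.forEach((r, i) => console.log(coins[i], r.status));
}

repro();
```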