server: Support concurrent getdata requests.
Currently, each peer only supports one getdata request at a time and the inbound handler is blocked until all of the requested data is served. This means messages that would otherwise be fast to process, such as pings, can potentially be delayed for long periods of time, which is clearly not ideal. Moreover, the existing behavior makes it theoretically possible for a pair of peers to reach a state where neither one can make progress because each is waiting on the other to serve data, which would ultimately result in hitting idle timeouts and disconnection. In practice, that livelock essentially never happens today due to a variety of other factors, such as duplicate request filtering and not notifying peers about inventory they are known to have. However, the combination of factors that prevents the situation will not necessarily apply to newer data types, such as those for mesh-based mixing.

The blocking behavior is a holdover from very early network code that relied on it once upon a time, but it is no longer relevant. Thus, to improve overall throughput and address the root of the aforementioned concerns, this changes the semantics to serve getdata requests asynchronously so other inbound messages can be processed concurrently. The remote peer is then free to send other messages while waiting for the requested data to be served.

Next, limits on the number of concurrent getdata requests and on the total number of pending individual data item requests are now imposed in order to prevent peers from being able to consume copious amounts of memory and to guard against other malicious behavior this change would otherwise allow. The limits are enforced such that peers may mix and match simultaneous getdata requests for varying amounts of data items so long as they do not exceed the maximum number of allowed simultaneous pending getdata messages or the maximum total number of pending individual data item requests. Applying the limits on both dimensions offers more flexibility and the potential for future efficiency gains while still keeping memory usage bounded to reasonable limits and protecting against other malicious behavior.
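As a rough illustration of the dual-limit idea described above, the following Go sketch shows one way a per-peer limiter might track both dimensions at once. All identifiers and constant values here (getDataLimiter, maxConcurrentGetData, maxPendingDataItems, tryAcquire, release) are hypothetical and are not taken from the actual change.

```go
package main

import (
	"fmt"
	"sync"
)

// Placeholder limits for illustration only; the real values are chosen by
// the server implementation.
const (
	// maxConcurrentGetData is the maximum number of getdata messages a
	// peer may have pending at once.
	maxConcurrentGetData = 3

	// maxPendingDataItems is the maximum total number of individual data
	// item requests a peer may have outstanding across all of its
	// pending getdata messages.
	maxPendingDataItems = 5000
)

// getDataLimiter tracks a single peer's outstanding getdata requests and
// enforces both limits together.
type getDataLimiter struct {
	mtx          sync.Mutex
	numRequests  int // pending getdata messages
	numDataItems int // total pending individual data item requests
}

// tryAcquire reserves capacity for a new getdata request carrying numItems
// individual data items.  It returns false when accepting the request would
// exceed either limit.
func (l *getDataLimiter) tryAcquire(numItems int) bool {
	l.mtx.Lock()
	defer l.mtx.Unlock()

	if l.numRequests+1 > maxConcurrentGetData ||
		l.numDataItems+numItems > maxPendingDataItems {
		return false
	}
	l.numRequests++
	l.numDataItems += numItems
	return true
}

// release returns the capacity reserved for a completed getdata request so
// the peer may issue further requests.
func (l *getDataLimiter) release(numItems int) {
	l.mtx.Lock()
	l.numRequests--
	l.numDataItems -= numItems
	l.mtx.Unlock()
}

func main() {
	var limiter getDataLimiter

	// Simulate two overlapping getdata requests from the same peer.  The
	// first is accepted and served asynchronously; the second exceeds the
	// total data item limit and is rejected.
	if limiter.tryAcquire(200) {
		go func() {
			defer limiter.release(200)
			// Serve the requested data items here without blocking
			// the inbound message handler.
		}()
	}
	if !limiter.tryAcquire(maxPendingDataItems) {
		fmt.Println("request rejected: limits exceeded")
	}
}
```

Tracking both counters under a single mutex keeps the check-and-reserve step atomic, so a burst of simultaneous requests cannot slip past either limit between the check and the update.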