feat: add `waitAsRateLimit` option on http transport #3698
+151 −76
This PR introduces an option to enable a batch queue system by setting `batch.waitAsRateLimit` to `true`. I was searching for a solution until I discovered #1305, which motivated me to explore ways to update the HTTP client.

The core idea is to modify the `batchScheduler` by changing the `shouldSplitBatch` parameter to a `getBatchSize` parameter. Once the scheduler can determine the batch size, it becomes capable of queuing requests. I also updated multicall to use `getBatchSize`, with behavior matching the previous version.

The main drawback is that if too many requests are queued, the queue can grow without bound, potentially causing delays or hitting cache limits. I have added a warning about this to the documentation.
Still, for users interacting with rate-limited endpoints, this option can be extremely beneficial.
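For context, usage would look roughly like the following sketch (the endpoint URL is a placeholder, and `waitAsRateLimit` is the option proposed in this PR, so it is only available with this change applied; `batchSize` and `wait` are viem's existing batch options):

```typescript
import { createPublicClient, http } from 'viem'
import { mainnet } from 'viem/chains'

const client = createPublicClient({
  chain: mainnet,
  transport: http('https://eth-mainnet.example.com', {
    batch: {
      batchSize: 10,         // max JSON-RPC requests per batch
      wait: 200,             // ms the scheduler waits between flushes
      waitAsRateLimit: true, // queue excess requests for later flushes
                             // instead of splitting into parallel batches
    },
  }),
})
```

With these settings, at most 10 requests go out every 200 ms, so the transport stays under a 50-requests-per-second endpoint limit as long as the queue keeps draining.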