Kernel queue #164
We should deal with priorities because I think the most important part is to keep commands running; otherwise users wouldn't understand why their commands don't do anything. Second, this could be mitigated by having plugins crawl (like …)
When the GitHub App installation token is rate limited, the kernel stops executing any command in the chain, so prioritizing plugins in the chain doesn't prevent rate limiting.
Yes, right now. To sum up, there are actually 3 issues:
I've temporarily removed the time estimate because the spec is not clear. As far as I understand from the docs, the GitHub App installation token … @gentlementlegen What if we restrict plugins' GitHub API usage this way:
This way plugin developers will have to optimize the GitHub usage of their plugins. This solves the issue with GitHub API "griefing".
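One way the proposed per-plugin restriction could look is a simple budget that the kernel checks before proxying any plugin request to GitHub. This is a hypothetical sketch: the `ApiBudget` name, the `tryConsume` method, and the idea of a single dispatch choke point are all assumptions, not the kernel's actual API.

```typescript
// Hypothetical per-plugin GitHub API budget, assuming the kernel routes
// all plugin API calls through one choke point where this check can run.
class ApiBudget {
  private used = new Map<string, number>();

  constructor(
    // e.g. a slice of the 5,000 req/h installation-token quota
    private readonly limitPerWindow: number,
  ) {}

  /** Returns true and records the call if the plugin is under its budget. */
  tryConsume(pluginId: string, cost = 1): boolean {
    const current = this.used.get(pluginId) ?? 0;
    if (current + cost > this.limitPerWindow) return false;
    this.used.set(pluginId, current + cost);
    return true;
  }

  /** Reset at the top of every rate-limit window. */
  reset(): void {
    this.used.clear();
  }
}
```

A plugin that exhausts its slice would get rejected locally instead of burning the shared installation quota, which is what pushes developers to optimize their plugins' GitHub usage.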
My storage layer app needs to be installed by each org and gives us a new access token that is bound to each individual org, and each org can be tracked via installs easily. If we create a storage repo instead of using the config repo as the storage location …
I don't have much context regarding the storage feature, so it's not really clear:
https://github.com/ubq-testing/telegram--bot/blob/gh-storage/src/adapters/github/storage-layer.ts
Maybe I'm mistaken, but isn't it possible that with an abundance of partners all using the same token, plugin devs could optimize to respect this threshold but still have the rate limit consumed to nil because of the size of the partner network? There are 3 or 4 right now, all ours and not super active, but onboarding a couple of huge blue-chip partners might be different. It's just a suggestion, and obviously there are overlaps that would need to be ironed out, but my core suggestion is for plugins to use a token bound to the partner via a GitHub App's installs, which would require exposing app_private_key. In this way I believe we obtain a partner-bound token.
Yes, this approach prevents partners from abusing GitHub API rate limits, but it doesn't solve the scalability issue when there are simply too many partners and thus too many incoming webhook events. We need some kind of GitHub API cache or queue system, whichever is simpler/faster to implement and is enough for "scalability v1".
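Of the two "scalability v1" options mentioned, a read-through cache for GitHub GET requests is probably the smaller change. A minimal sketch, assuming a string cache key and a fixed TTL (the `GithubCache` name, key scheme, and fetcher signature are illustrative, not the kernel's real interface):

```typescript
// Read-through cache: only misses hit the GitHub API; repeated webhooks
// within the TTL are served from memory.
type Fetcher<T> = () => Promise<T>;

class GithubCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  async get(key: string, fetch: Fetcher<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit, no API call
    const value = await fetch(); // cache miss, one real API call
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

The config fetch that appears in the error below (`GET /repos/{owner}/{repo}/contents/{path}`) is a natural first candidate to cache, since the same org config is re-fetched for every event in a burst.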
I updated my comment just as you posted, sorry.
Plugins would use the partner-bound token, and the kernel could also use it per-org, I believe (@whilefoo @gentlementlegen rfc?), which would help, but you're right, it's still very likely to max out with things as-is.
It would be nice if somehow these quotas would count against their app/repo instead; no idea if this can be achieved. Otherwise yes, that could work, though I don't grasp the complexity of such a change.
I feel like I need a primer on the kernel. I actually thought that the kernel uses …
I don't think there's an incentive to grief, because the rate limits are bound to a specific org and don't interrupt other orgs. The partner can just disable the plugin if it's using all of their rate limits.
That's correct, the kernel uses org-bound installation tokens and passes it to plugins
The tokens passed to plugins are already bound to the partner/org so I don't quite understand how the storage solution would help to solve this
If the rate limit is hit then we can't even download the config, though, unless there is a separate rate limit for that.
Oh, I didn't realise that was already the case, my mistake.
See how each plugin has conditions that will cause it to "skip" or stop execution; some plugins have these checks right at the entry point, and the check itself is very simple. If we can expose these conditions from the manifest of plugins and have the kernel perform them first, then we'd save a lot of false starts, for actions at least. It is surely possible to reduce the boolean logic down to something the kernel can consume. This is less of a global solve and more of a potential optimization.
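To make the idea above concrete, here is one possible shape for such manifest conditions and a kernel-side evaluator. The condition schema (`path`/`equals`/`startsWith`) and function names are hypothetical, meant only to show that entry-point skip checks can be reduced to data the kernel evaluates before dispatching a plugin.

```typescript
// A plugin manifest could declare simple predicates over the webhook payload;
// the kernel runs them first and skips dispatch when any predicate fails,
// avoiding a worker/action start that would immediately "skip" anyway.
type Condition =
  | { path: string; equals: unknown }
  | { path: string; startsWith: string };

// Resolve a dotted path like "comment.body" inside the payload.
function getByPath(payload: Record<string, unknown>, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (node, key) => (node as Record<string, unknown> | undefined)?.[key],
    payload,
  );
}

/** True only if every declared condition holds for this webhook payload. */
function shouldRun(conditions: Condition[], payload: Record<string, unknown>): boolean {
  return conditions.every((c) => {
    const value = getByPath(payload, c.path);
    if ("equals" in c) return value === c.equals;
    return typeof value === "string" && value.startsWith(c.startsWith);
  });
}
```

For example, a command plugin could declare `{ path: "comment.body", startsWith: "/" }` so the kernel never starts it for comments that aren't slash commands.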
There might be cases when the kernel receives too many webhook events, for example through the ubiquity-os app. When the kernel receives too many webhooks, it exceeds GitHub API rate limits, which makes the kernel unresponsive and throws the

Rate limit hit with "GET /repos/{owner}/{repo}/contents/{path}", retrying in 1469 seconds

error. There should not be any downtime in the kernel's work.
As a part of this issue we should implement a webhook queue:
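Tying this back to the priority discussion above, the queue could drain user commands before crawler-like plugin work so commands stay responsive. A minimal in-memory sketch under those assumptions (event shape and class name are illustrative; a production version would need a durable backing store rather than process memory):

```typescript
// Two-level webhook queue: command events always dequeue before
// background/crawler events, so users' commands keep running even
// under a burst of webhooks.
interface WebhookEvent {
  id: string;
  isCommand: boolean; // e.g. an issue_comment whose body starts with "/"
}

class WebhookQueue {
  private commands: WebhookEvent[] = [];
  private background: WebhookEvent[] = [];

  enqueue(event: WebhookEvent): void {
    (event.isCommand ? this.commands : this.background).push(event);
  }

  /** Commands first; background work only when no commands are pending. */
  dequeue(): WebhookEvent | undefined {
    return this.commands.shift() ?? this.background.shift();
  }
}
```

A worker would then pull from this queue at a rate that stays under the installation token's limit, instead of handling every webhook the moment it arrives.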