Use the proper configuration (repo scoped) #68
@0x4007 Continuing the discussion here to not pollute the other issue. I think the proper solution is to only run the
I disagree. There are many repos without much activity and a few with most of it, so the former would be left largely unmaintained by the disqualifier. Instead, the disqualifier should build a mapping of every (merged) config to every repo. Ideally it should use the identical SDK logic the kernel uses to interpret the configs.
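A minimal sketch of how such a repo-to-config mapping could be built, assuming a repo-level config path, an org-level default in the `.github` repo, and a shallow merge; the real path and merge semantics belong to the kernel/SDK and may differ:

```typescript
import { Octokit } from "@octokit/rest";
import YAML from "yaml";

// Hypothetical config path; the actual path is defined by the kernel/SDK.
const CONFIG_PATH = ".github/.ubiquity-os.config.yml";

async function fetchConfig(octokit: Octokit, owner: string, repo: string): Promise<object | null> {
  try {
    const { data } = await octokit.rest.repos.getContent({ owner, repo, path: CONFIG_PATH });
    if ("content" in data) {
      return YAML.parse(Buffer.from(data.content, "base64").toString("utf8"));
    }
  } catch {
    // Missing file or inaccessible repo: treat as "no repo-level config".
  }
  return null;
}

async function buildConfigMap(octokit: Octokit, org: string): Promise<Map<string, object>> {
  // Org-wide defaults are assumed to live in the org's `.github` repo.
  const orgConfig = (await fetchConfig(octokit, org, ".github")) ?? {};
  const repos = await octokit.paginate(octokit.rest.repos.listForOrg, { org, per_page: 100 });
  const map = new Map<string, object>();
  for (const repo of repos) {
    const repoConfig = (await fetchConfig(octokit, org, repo.name)) ?? {};
    // Shallow merge as a stand-in for the kernel's real merge logic.
    map.set(repo.name, { ...orgConfig, ...repoConfig });
  }
  return map;
}
```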
Then we should simply ensure that the disqualifier runs on time intervals rather than on events. Having the disqualifier read configs itself would just lead to code duplication and complex workflow logic, which should be avoided. Also, if the base org does not have the disqualifier enabled but a repo does, events would only be triggered from that repo while its config would end up applied to other repos (which should not happen, since the org does not have the plugin enabled).
I'm not convinced. I think the SDK handles most of the config-related logic, so duplication is fine because it's imported.
The SDK does not handle the configuration for plugins; only the kernel knows how to read, decode, and validate it. Plugins are only aware of their own configuration, which is sent to them by the kernel. Consider the following scenario:
in this case, the
@whilefoo rfc. I'm not sure what else can be done here.
To me the solutions are either:
The root of the problem is calling all the repositories for a single event.
Another idea is to hand off the event from the kernel to a GitHub Action and have the Action keep retrying, with an exponential backoff, if there are failures. In less than six hours it should be able to sort itself out with this strategy.
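A minimal sketch of that retry loop, assuming the Action wraps a hypothetical `processEvent` handler; the attempt count and delays are illustrative:

```typescript
// Retry a hypothetical event handler with exponential backoff inside the Action.
async function runWithBackoff(processEvent: () => Promise<void>, maxAttempts = 8): Promise<void> {
  let delayMs = 60_000; // start at one minute
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await processEvent();
      return;
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      console.warn(`Attempt ${attempt} failed, retrying in ${delayMs / 1000}s`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // 1m, 2m, 4m, ... roughly two hours in total, well under six
    }
  }
}
```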
Yes, but the run itself is triggered from the kernel, which tells the plugin to run. The problem is that this plugin runs on the whole organization using its own configuration, which should be specific to its repository only.
No, my idea is that it should run on every repository in the organization. The problem we are trying to solve is related to rate limits, and exponential backoff is a solution to that.
Yes, but this issue was filed to fix the following scenario:
The last step is incorrect, and would be solved by running
There's no easy answer to this.
I had an idea that might solve our problems without needing to change anything inside the kernel or implement crons within our system. To handle the cron part, we could have an Action script. The workflow would be the following:
This way, no useless runs would happen from the cron, we would properly watch repositories, and we would use the correct configuration during runs. This would also lower our API usage significantly.
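A minimal sketch of what that cron-driven entrypoint could look like, assuming a `buildConfigMap` helper like the one sketched earlier and a hypothetical `isDisqualifierEnabled` check on the merged config; both are stand-ins, not the real plugin API:

```typescript
import { Octokit } from "@octokit/rest";

// Hypothetical stand-ins for the real config/SDK logic.
declare function buildConfigMap(octokit: Octokit, org: string): Promise<Map<string, object>>;
declare function isDisqualifierEnabled(config: object): boolean;

async function cronRun(octokit: Octokit, org: string): Promise<void> {
  const configs = await buildConfigMap(octokit, org);
  for (const [repo, config] of configs) {
    // Skip repos whose merged config never enabled the plugin: no useless runs.
    if (!isDisqualifierEnabled(config)) continue;
    // Only now spend API calls, and only on this repo's open, assigned issues.
    const issues = await octokit.paginate(octokit.rest.issues.listForRepo, {
      owner: org,
      repo,
      state: "open",
      assignee: "*",
    });
    for (const issue of issues) {
      // ...apply the disqualifier checks using `config`, scoped to this repo only.
      void issue;
    }
  }
}
```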
If I understand correctly, every time
You are correct. For the installation token, we have
We can try your suggestion. I just hope it's not too brittle, but that depends on how well it's implemented.
Originally posted by @gentlementlegen in ubiquity/business-development#103 (comment)