Please describe what you would like to see in this project
With the current design, the python deployer only works properly as the default deployer via the config file, and no other deployers can be used inline in the workflow file. This is because the plugin path references for the python deployer point directly to module code rather than to containers. Because all plugins in a workflow must first be run by the default deployer in order to read the schemas from the ATP, all plugin paths must be compatible with the default deployer.
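For illustration only, here is a minimal sketch of that setup, assuming an engine config roughly in the usual shape (the python deployer options shown are assumptions and may differ by version):

```yaml
# config.yaml -- illustrative sketch only; the python deployer keys are assumed
log:
  level: debug
deployer:
  type: python   # the default deployer for every step in the workflow
  # Because schema discovery over the ATP happens through this default
  # deployer, every plugin reference in the workflow must be something the
  # python deployer can resolve (a module/package path, not a container image).
```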
Please describe your use case
For a python deployer use case where all steps run only with the default deployer, this could be acceptable. However, if a workflow also needs to trigger container-based plugins, there is currently no way for the python deployer, as the default, to run the containers to read the ATP.
A potential use case is a workflow that runs a local plugin via the python deployer that has Kubernetes API connections built-in to the plugin, such as the kill-pod plugin, but then also wants to run a system benchmark or load plugin such as sysbench via Arcaflow's integrated connections to the Kubernetes API. The sysbench plugin would be defined in the workflow to use the kubernetes deployer and the plugin path would point to the container image, but the container image could not be loaded by the default python deployer to read the ATP, creating a catch-22 situation.
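A rough sketch of such a mixed workflow, assuming the usual step layout (the step keys, plugin references, and inputs below are placeholders, not verified paths):

```yaml
# workflow.yaml -- illustrative sketch; plugin references and inputs are placeholders
steps:
  kill_pod:
    # Runs under the default python deployer, so the plugin reference is a
    # python module/package path that the deployer can import.
    plugin: arcaflow_plugin_kill_pod        # placeholder module reference
    input:
      namespace_pattern: .*                 # placeholder input
  sysbench:
    # Intended to run via the kubernetes deployer, so the plugin reference is
    # a container image. The default python deployer would still have to load
    # it first to read the schema over the ATP, which it cannot do -- the catch-22.
    plugin: quay.io/arcalot/arcaflow-plugin-sysbench:latest
    deploy:
      type: kubernetes
    input:
      operation: cpu                        # placeholder input
```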
Additional context
The scope of this change is probably outside of just this repo, but I've opened the issue here to start the discussion and planning.
One possibility that has been discussed is to create a central registry of supported plugins. This could simply be a JSON file that we maintain via CI, which, for each plugin, could contain a simple name, a description, a version/build tag, and links to both the python module path and the container path for the matching versions/builds. The workflow could then reference the plugin simply by name and version, and we could resolve the correct path depending on the deployer that is used.
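As a strawman, a registry entry could look something like the following (the field names and the module/image references are invented purely for illustration):

```json
{
  "plugins": [
    {
      "name": "sysbench",
      "description": "System benchmark workload plugin",
      "version": "0.1.0",
      "python_module": "arcaflow-plugin-sysbench==0.1.0",
      "container_image": "quay.io/arcalot/arcaflow-plugin-sysbench:0.1.0"
    }
  ]
}
```

A workflow step would then reference the plugin by name and version (here, sysbench at 0.1.0), and the engine or a resolver layer would substitute the python module path when the python deployer is in use and the container image path otherwise.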