
Added deployment run script #127

Open

JordyBottelier wants to merge 3 commits into master
Conversation

@JordyBottelier

Hi,

I've been using this plugin for a while now and I love it, but I wanted to add some functionality around deployments. I sometimes want to run a job/script on one of the pods of a deployment (chosen at random), with the option of automatically retrying the job if it fails.

I tested the functionality both locally and on a Kubernetes cluster, and it seems to work fine. I reused a lot of code from pods-run-script.py and extracted some shared parts into common.py.

If there are any questions or improvements you'd like to see, let me know.
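To make the behavior concrete, the logic boils down to something like the sketch below, using the kubernetes Python client. The function name and details here are illustrative, not the exact code in this PR:

```python
import logging
import random

from kubernetes import client, config
from kubernetes.stream import stream

log = logging.getLogger(__name__)


def run_script_on_random_pod(namespace, deployment, command, retries=3):
    """Exec a command on one randomly chosen pod of a deployment,
    retrying on another random pod if it fails. Illustrative sketch."""
    config.load_kube_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    # Find the deployment's pods via its label selector.
    dep = apps.read_namespaced_deployment(deployment, namespace)
    selector = ",".join(
        "{}={}".format(k, v) for k, v in dep.spec.selector.match_labels.items()
    )
    pods = core.list_namespaced_pod(namespace, label_selector=selector).items
    if not pods:
        raise RuntimeError("No pods found for deployment %s" % deployment)

    for attempt in range(1, retries + 1):
        pod = random.choice(pods)
        try:
            return stream(
                core.connect_get_namespaced_pod_exec,
                pod.metadata.name,
                namespace,
                command=["/bin/sh", "-c", command],
                stderr=True, stdin=False, stdout=True, tty=False,
            )
        except Exception as exc:
            log.warning("Attempt %d on pod %s failed: %s",
                        attempt, pod.metadata.name, exc)
    raise RuntimeError("Script failed after %d attempts" % retries)
```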

@kdebisschop

I'm facing a similar problem...

Because the pod resource model can have multiple pods per deployment, if I try to run a script across all my deployments, it actually runs across all my pods -- which can be a problem if two pods are doing the same thing at the same time and could deadlock or conflict.

If I had a list of deployments or namespaces, I could use that to run this updated code. But it seems that I still end up running the code once per pod if I derive the list of deployment names from my nodes/pods. I have no way to de-duplicate those nodes so there is only one per deployment.

In Rancher 1.x, which we used before k8s, I updated the plugin to keep a counter that incremented each time it found a container within the service/deployment. It worked, but was a bit of a hack.

Do you have a way to run only one pod per deployment based on the node inventory with your approach here?
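To make the de-duplication I'm after concrete, here is roughly what I would want (a sketch assuming the kubernetes Python client; the ReplicaSet-name heuristic is my assumption, not existing plugin code):

```python
from collections import defaultdict


def one_pod_per_deployment(pods):
    """Reduce a pod list to a single pod per owning deployment.
    Sketch only: assumes each pod is owned by a ReplicaSet whose
    name looks like "<deployment>-<pod-template-hash>"."""
    by_deployment = defaultdict(list)
    for pod in pods:
        owner = next((ref for ref in (pod.metadata.owner_references or [])
                      if ref.kind == "ReplicaSet"), None)
        if owner is None:
            continue  # standalone pod; not part of a deployment
        deployment = owner.name.rsplit("-", 1)[0]
        by_deployment[deployment].append(pod)
    # Keep one arbitrary pod per deployment.
    return {name: members[0] for name, members in by_deployment.items()}
```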

exit(1)

if not resp:
    log.error("Namespace %s does not exits.", namespace)


spelling: "exits" should be "exist"

@JordyBottelier
Author

Thanks for the review, I fixed the spelling mistake :)

@JordyBottelier
Author

> Do you have a way to run only one pod per deployment based on the node inventory with your approach here?

I don't fully understand what you are actually trying to accomplish here. The code I wrote executes a job on exactly one pod per deployment, but I don't quite follow your use case.

@kdebisschop

You're right -- my comment was a bit unclear. A minor difference is that I want to run a command rather than a script. The main difference is that I want to use the node selection interface in Rundeck -- so I can run across all deployments and have the node set update automatically, running on exactly one pod per deployment. With your approach, it looks like I would have to add a job and specify the deployment each time I added a deployment to my k8s environment.

@kdebisschop

I have just created a pull request (#131) to illustrate the approach I have taken. It may not be as aesthetically correct as defining a node resource based on the ReplicaSet (which is what I think I would need to do to use node selection to identify one node per deployment), but it is effective, and it is only a few lines of code change that lets me use all the rest of the code in the pods-* Python scripts without adding new copied code.

@JordyBottelier
Author

@kdebisschop I checked out your code and left a comment

Our use cases are somewhat similar; however, they are still quite different :)

@ltamaster
Contributor

Hi @JordyBottelier

I think you can do the same using an Orchestrator plugin and a node filter with the deployment label, something like this:

[Two screenshots from 2022-08-25: Rundeck job configuration showing an Orchestrator plugin and a node filter on the deployment label]
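Concretely, that would be a node filter such as `deployment: my-app` (assuming the plugin exposes the pod's deployment label as a node attribute of that name), combined with an orchestrator that limits execution to a single matched node -- for example Rundeck's built-in random-subset orchestrator configured for 1 node -- so each run touches exactly one pod per deployment.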
