
Feature request: "IncludeEnvironment" or allow passing some environment variables to job environment #321

Open
Ahuge opened this issue Apr 29, 2024 · 4 comments

Comments

@Ahuge
Contributor

Ahuge commented Apr 29, 2024

Use Case

There are cases where we want to provide information such as license servers, or DCC configuration settings to the job environment when submitting.

Simple examples include RLM license server environment variables; a more complicated example might tie into the pipeline, or allow jobs to run on a single fleet while using environment variables to target specific versions of DCCs.

Proposed Solution

I propose that the default Deadline OpenJD submission also include an environment_values.yaml file which can be referenced in the OpenJD template (as a sidecar like asset references and parameter_values).
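For illustration, such a sidecar might look something like the following. The schema here is hypothetical (only `parameter_values` and asset references exist today); the variable names and paths are placeholders:

```yaml
# environment_values.yaml (hypothetical sidecar, mirroring parameter_values)
env:
  - name: RLM_LICENSE
    value: "5053@licenses.example.com"
  - name: NUKE_ADAPTOR_NUKE_EXECUTABLE
    value: "/opt/Nuke14.0v5/Nuke14.0"
```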

On the UI side, I was thinking of some tableView that would let users add and edit these variables before submitting.

@epmog
Contributor

epmog commented May 9, 2024

I'd think a lot of these use cases, at least the infrastructure-related ones (i.e. DCC versions, license servers, etc.), work well within a queue environment.

What sort of use cases are you looking at that don't fit within that?

@Ahuge
Contributor Author

Ahuge commented May 13, 2024

Hi @epmog

A queue environment is what I've ended up using in the interim. It doesn't 100% solve my problem, but at the time of this feature request I didn't fully understand queue environments, so you're correct that a QE does most of what's needed.

I think one use case that I would like to be able to support is having a single render fleet with multiple versions of a DCC installed, and then using environment variables such as "NUKE_ADAPTOR_NUKE_EXECUTABLE" (for the Nuke submitter, for example) to control which DCC version gets used to run the job.

My best understanding is that the only other way to support that would be to add a separate fleet for each version of the DCC that is required. However, perhaps I am mistaken and there is an easier way.

Disclaimer that I haven't looked into Conda packaging yet so I don't know how to run my own conda packaging environments. If you have any good resources for that, perhaps that will help.

@epmog
Contributor

epmog commented May 13, 2024

> I think one use case that I would like to be able to support is having a single render fleet with multiple versions of a DCC on them and then being able to use environment variables such as "NUKE_ADAPTOR_NUKE_EXECUTABLE" (for the Nuke submitter for example) to be able to control the DCC version that gets used to run the job.

So there's a few ways to approach this depending on how you want to organize your resources. The way I see it, users probably have multiple projects going on with different versions of software/plugins being used. So you could set up each project with a queue with one or many queue environments (and budgets, etc.). These queue environments would configure the version of the DCC to use. This is effectively how we leverage our conda packages in our integrations: the adaptors expect the application executables they're adapting to be discoverable via PATH, and the conda package adds the location of the executable in the conda environment to PATH.
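A minimal queue environment along these lines might look like the sketch below, using Open Job Description's environment template format; the environment name and executable path are illustrative, not a real configuration:

```yaml
# Queue environment sketch: pins the Nuke version for all jobs on this queue.
specificationVersion: 'environment-2023-09'
environment:
  name: Nuke15
  variables:
    # Hypothetical install path; the Nuke adaptor reads this variable
    # to locate the executable it should launch.
    NUKE_ADAPTOR_NUKE_EXECUTABLE: /opt/Nuke15.0v2/Nuke15.0
```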

You would then create a queue-fleet association from your one fleet to all of those queues. That way, depending on which queue users submit to, the available version of the application changes, all while still just specifying nuke, etc.
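If you're driving this from the AWS CLI, the association would be created with something like the command below. The IDs are placeholders, and you should verify the exact parameters against `aws deadline create-queue-fleet-association help`:

```shell
# Associate one fleet with a queue; repeat per queue sharing the fleet.
aws deadline create-queue-fleet-association \
    --farm-id  farm-EXAMPLE \
    --queue-id queue-EXAMPLE \
    --fleet-id fleet-EXAMPLE
```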

Thoughts?

There are workflows that queue environments will not solve; off the top of my head, one would be user-specific information that should be captured and applied. So continuing that part of the conversation is still important. Capturing environment variables at submission time does sound interesting and seems applicable there.

Any thoughts on how the environment_values.yaml would work? Would it have the values already baked in when created? How would we apply them, at the job level? Should users be able to choose job vs. step level?

@Ahuge
Contributor Author

Ahuge commented May 14, 2024

Hi @epmog, that makes sense.
And in the interim until I get conda workflows up, I can just use the nice adaptor environment variables to point to the correct version of the DCC.

Regarding the environment_values.yaml file. What I was thinking was the file would be created at submit time (bundle creation time or something) and then there would be a step included in the job which would load in and apply the environment for the length of the job.

I think in a production environment we would need to decide whether we are replacing, or appending/inserting into, the environment that the worker already has (i.e. from a queue environment or similar).
I think appending/inserting is what most people would expect; that would allow queue environments to continue to work as intended.
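To make the append/insert idea concrete, here is a small sketch of the merge semantics being described. The helper name and the set of list-like variables are my own illustration, not anything that exists in the submitter today: job-supplied variables are layered on top of the worker's environment rather than replacing it, and path-style variables are prepended so the job's entries win while queue-environment entries stay visible:

```python
import os

# List-like variables get prepended rather than overwritten (assumption:
# this set would be configurable in a real implementation).
LIST_LIKE = {"PATH", "PYTHONPATH", "LD_LIBRARY_PATH"}


def merge_environment(worker_env: dict, job_env: dict) -> dict:
    """Layer job-supplied variables on top of the worker's environment."""
    merged = dict(worker_env)  # never mutate the worker's own environment
    for name, value in job_env.items():
        if name in LIST_LIKE and name in merged:
            # Prepend so the job's entries take precedence, but entries
            # from queue environments remain further down the search path.
            merged[name] = value + os.pathsep + merged[name]
        else:
            merged[name] = value
    return merged
```

With these semantics, a job setting `PATH=/opt/nuke/bin` on a worker whose queue environment already set `PATH=/usr/bin` would see both directories, job entries first.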

I am going to continue working on my STEP hook PR this week and use that functionality to implement this myself for testing locally.

Let me know if you have any thoughts about my proposed methodology.

Thanks
-Alex
