Updated for FLU v8 #1

Open · wants to merge 12 commits into master
Conversation

@rjq commented Jul 8, 2019

  • Tweaked the Docker container and start.sh to work with the new (very different) version 8 of the Foundry License Utility.

  • The extra ISV port mapping lets a client on another machine point at the container and use it as the license server.
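
For illustration, here's roughly the run command I have in mind; the image name and the exact port numbers are just examples from my own setup, not anything fixed by the repo:

    # Map the RLM port, the web admin port, and a pinned ISV port to the host.
    # 4101/4102/4500 and the image name are placeholders from my setup.
    docker run -d \
        -p 4101:4101 \
        -p 4102:4102 \
        -p 4500:4500 \
        foundry-license-server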

@tokejepsen (Owner)

Thank you for the PR!

I'll take a look as soon as I get a spare moment.

Initially I was wondering about the spare ISV port. We tend to map the port from the container to the host, then connect to the host port. Is this a different workflow?
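
For context, the workflow I mean looks something like this on the client side (hostname and port are just examples, and I believe Foundry apps read the foundry_LICENSE variable):

    # Point the client at the host-mapped port, not at the container directly
    export foundry_LICENSE=4101@license-host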

@rjq (Author) commented Jul 8, 2019

You're welcome. Thank you for starting the project. It's been super helpful.

I was having issues grabbing a license while the container was running on a separate host on my network. I explicitly mapped the ISV port because I have it set to a consistent port in my license file. I think that might have been the missing piece for my particular issue, but you don't need to add it to the code. If you do take that part of the commit, there may need to be a note about how to explicitly set an ISV port.

Here is the article where I learned about the possible solution:
Q100374: How to make the RLM server use a dedicated ISV port

It's possible that I did something else to get the connection working. I'm still fairly new to Docker, and network admin stuff has never been my strong suit.

Please tweak anything I did to make it better. I'd love to hear your thoughts.

@rjq (Author) commented Jul 9, 2019

After some testing this morning and a re-read of the support article, I can shed some more light on my situation.

According to the article, the RLM port (the one the client machine should use when setting the license info, e.g. [port]@[host]) defaults to 4101. The port for accessing the web admin defaults to 4102. The ISV port is assigned randomly unless explicitly set in the license file.
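
To pin the ISV port, the article has you add a port= setting to the ISV line of the license file. Mine looks roughly like this (hostname and MAC address are placeholders, and 4500 is just the port the article happened to use):

    HOST license-host 001122334455 4101
    ISV foundry port=4500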

I tested this morning whether pinning my ISV port was the kicker for my client machine being unable to grab a license, and in fact it was. I updated my fork with some tweaks to the comments in the Dockerfile and to the ports used, to be more consistent with the Foundry support article. The ISV port number is arbitrary; I only chose 4500 because that was what was used in the article.
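
The port-related part of the Dockerfile now looks something like this (paraphrased, not a verbatim copy of my fork):

    # RLM port: clients license against [port]@[host]
    EXPOSE 4101
    # RLM web admin interface
    EXPOSE 4102
    # ISV port: arbitrary, but must match the port= value in the license file
    EXPOSE 4500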

We can close this Pull Request and I can do another with the new code. I can also switch the ISV port number back to 5053 in my code if you like.

@tokejepsen (Owner) commented Jul 9, 2019

> We can close this Pull Request and I can do another with the new code.

No, you can just continue committing to your fork and update this PR.

Better to keep the discussion and information in one PR.

> I can also switch the ISV port number back to 5053 in my code if you like.

If this is possible, then I would prefer it, since we already have machines set up to look at port 5053.

@rjq (Author) commented Jul 9, 2019

Easy to set it back.

> No, you can just continue committing to your fork and update this PR.

Done

> If this is possible, then I would prefer it, since we already have machines set up to look at port 5053.

Done

@tokejepsen (Owner)

Did a quick test and getting this error:

standard_init_linux.go:207: exec user process caused "no such file or directory"

BTW what OS are you running on?

@rjq (Author) commented Jul 11, 2019

Hmmm... did you get that error on the client machine, the host for the container, or the container itself?
What OS are you using for the host running the container?

Host for the container: Ubuntu Server 19.04
Client machine: Ubuntu 18.04 LTS.

This doesn't seem to correspond with the bug you are seeing, but worth noting...
I just ran a quick test with the container on a Mac running 10.13.6. My hunch is that Docker Desktop, which you have to use on macOS, runs containers inside a virtual machine, which the FLU doesn't like by default. I suspect you'll hit the same issue on Windows, because Docker there also runs containers through a desktop client by default.

This support article mentions what to do if you're running on a VM. I'm not sure if this is new in FLU v8 or if that's always been the case.
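
One other thing worth ruling out, though this is just a guess on my end: that particular exec error can come from Windows-style line endings sneaking into start.sh. A quick check and fix from the host:

    # "with CRLF line terminators" in the output means Windows line endings
    file start.sh
    # strip the carriage returns in place
    sed -i 's/\r$//' start.sh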

@tokejepsen (Owner)

This is on the host machine when running the container with the arguments outlined in the README.

I'm running on a Windows machine with Docker Desktop.

We already have the additional license required, and have been running this container for a while on Windows with Docker Desktop.

I'll try digging into the issue when I have some more time. Did the previous version of the container work? If so, what was the use case for trying to upgrade RLM?

@rjq (Author) commented Jul 12, 2019

> Did the previous version of the container work?

The previous version of the container did work.

> If so, what was the use case for trying to upgrade RLM?

My client machine kept showing that it could not obtain a license for the version of Nuke that I was running. The only fix I could find was using v8 of the FLU, hence my modification to the code.

> I'll try digging into the issue when I have some more time.

Yeah, if yours is working fine with v7 and no one else is reporting issues, then there's no rush. We can even keep them separate. I don't have a Windows box readily available to diagnose the cause of the problem, or else I would.
