Bring-your-own RabbitMQ #523
We do not need to add our own RabbitMQ for running ServiceX on arm64-based processors. I have updated the versions of all the Helm charts in requirements.yaml and also suggested changes to the ncsa/checks chart. I have done local testing, and it works on both amd64- and arm64-based processors. There is a branch on the ServiceX repo called rabbitmq-rosetta-fix that has the Helm chart changes. In ncsa/checks, we have to change the base image in the Dockerfile to Python 3.10 and also replace the psycopg2-binary package with the constraint psycopg2>=2.9.3,<3.
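For reference, a minimal sketch of the kind of requirements.yaml change described above, assuming a standard Helm chart dependency list. The version numbers and the `rabbitmq.enabled` condition are assumptions for illustration; the actual bumps live on the rabbitmq-rosetta-fix branch.

```yaml
# requirements.yaml (illustrative sketch, not the actual file from the branch)
dependencies:
  - name: rabbitmq
    version: "11.1.1"                     # hypothetical bumped version
    repository: https://charts.bitnami.com/bitnami
    condition: rabbitmq.enabled           # would let a deployment skip the bundled RabbitMQ
```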
@BenGalewsky it sounds to me like you've decided to remove this from the roadmap?
As a ServiceX admin, I want to use an existing RabbitMQ instance so that I don't have to deploy RabbitMQ specifically for a ServiceX deployment.
Assumptions
a. App deployment
b. CERN OpenData DID Finder deployment
c. Rucio DID Finder deployment
Acceptance Criteria
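As a rough illustration of this story, a bring-your-own deployment might disable the bundled chart and point the components listed under Assumptions at an existing instance. The keys and endpoint below are hypothetical, not the chart's actual values schema; the real configuration would be defined by whatever implements this story.

```yaml
# values.yaml sketch (hypothetical keys and URI, for illustration only)
rabbitmq:
  enabled: false                                                  # skip the bundled RabbitMQ chart
app:
  rabbitmqUri: "amqp://user:password@rabbitmq.example.org:5672/"  # existing instance
didFinderCERNOpenData:
  rabbitmqUri: "amqp://user:password@rabbitmq.example.org:5672/"
didFinderRucio:
  rabbitmqUri: "amqp://user:password@rabbitmq.example.org:5672/"
```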