I'm wondering whether the postgres container is really the best place to store the SQL dump of the Galaxy database. The main caveat is that you could unknowingly be using incompatible tags of the Galaxy and postgres containers (Galaxy requires one version of the schema, but the postgres container ships a different one). In this scenario, I see two alternatives:
1.- The dump file lives in one of the Galaxy containers (one that can actually connect to the database, so probably the web container). The dump is produced as part of the Galaxy release cycle, so the schema and the Galaxy code stay in sync, and the SQL load is invoked from that container. Bonus: we then don't need a separate postgres container and can use the upstream base image directly in the orchestration, pinned to a defined tag.
2.- The dump file stays with the postgres container, but the container is tagged to include not only the postgres version but also the revision number of the schema. You can then always request a particular schema revision by setting the corresponding container tag.
Option 1 makes more sense to me since it avoids having to build/maintain one more container and reduces sync issues.
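To make option 1 concrete, here is a minimal docker-compose sketch of what the orchestration could look like: the stock upstream postgres image pinned by tag, with the release dump shipped inside the galaxy-web image and loaded from there. Service names, image tags, paths, and credentials below are illustrative assumptions, not the project's actual setup.

```yaml
# Hypothetical sketch of option 1 (assumed names and tags).
# The postgres service uses the plain upstream image at a pinned tag,
# so no custom postgres container needs to be built or maintained.
services:
  postgres:
    image: postgres:9.6          # upstream base image, defined tag
    environment:
      POSTGRES_USER: galaxy
      POSTGRES_DB: galaxy
  galaxy-web:
    image: galaxy-web:17.05      # dump baked in at release time,
    depends_on:                  # so schema and code are in sync
      - postgres
    # On first start, an entrypoint step in this container could load
    # the bundled dump, e.g.:
    #   psql -h postgres -U galaxy -f /galaxy/schema-dump.sql galaxy
```

Under option 2, by contrast, the postgres image itself would carry a compound tag (something like `galaxy-postgres:9.6-schema134`), trading the extra image build for explicit schema pinning.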
In my current case, I'm using a Galaxy revision based on 17.05, which expects schema revision 134, but the current postgres container loads schema 135.
Are the galaxy-postgres containers stored in a registry (Docker Hub or quay.io) from which I could pull a revision-134 image?
Thanks!
@pcm32 sorry, I'm currently at our yearly de.NBI meeting. I'm following your work, and it's great to know that you got this working.
I have nothing against including the schema in the galaxy-web container; I think that's a good idea. The real problem is that I haven't yet implemented a nice way to distribute the compose setup/images. Should we build them automatically or push them ourselves?
I'm worried that building them automatically on Docker Hub/Quay.io will result in out-of-sync builds. buildlocal.sh at least builds everything in one go, and we can push the result. Any opinions here?