
prune docker containers (continuously) #37

Merged: 4 commits from topic/docker-prune-1st into galaxyproject:main on Jul 24, 2023

Conversation

bernt-matthias (Collaborator)

otherwise the 1st job does not profit from this

There was a recent discussion with @mvdbeek (galaxyproject/tools-iuc#4935 (comment)) about doing this in a loop running in the background. I thought about this: I think the problem is that the containers would also be pruned in between tests of the same tool / tool repo with the same requirements (and therefore the same container).

@mvdbeek (Member) commented Feb 14, 2023

> I think the problem is that the containers would also be pruned in between tests of the same tool / tool repo with the same requirements (and therefore the same container).

theoretically, but they are now in the same chunk, so I don't think that's a problem.

@bernt-matthias (Collaborator, Author)

So you are thinking of starting an infinite loop like the following at the beginning of the test job (not sure about the sleep interval):

while true; do docker system prune --all --volumes; sleep ?; done &

The advantage would be to have only one Galaxy startup.
A potential disadvantage is that data in database/files/ might accumulate (job dirs are fine since they are pruned).

Also wondering:

  • If the prune hits the gap between two executions of tests of the same tool, then the image is pruned unnecessarily. If the sleep is long enough, this should not happen.

I guess we do not have to consider workflow tests since those take the images from CVMFS?

@mvdbeek (Member) commented Feb 14, 2023

Active images aren't deleted. If disk space becomes an issue we can always tell the interactor to purge histories. sleep 60 seems fine. Only tool data comes from CVMFS.
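
Putting these two comments together, a minimal sketch of the background loop (assuming the 60-second interval suggested above; --force is added here because docker system prune otherwise asks for confirmation):

  # Prune unused containers, images, and volumes roughly once a minute.
  # Active images are not removed, so the currently running test is unaffected.
  while true; do
      docker system prune --all --volumes --force
      sleep 60
  done &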

Commit: otherwise the 1st job does not profit from this
@bernt-matthias changed the title from "prune docker containers at the start of the loop" to "prune docker containers (continuously)" on Jul 17, 2023
@mvdbeek (Member) left a comment

Let's do it! Then we can also drop the for loop in the action and just run a single galaxy process for all tool tests.
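
A rough sketch of that simplification (the planemo invocation and the tools/ path are illustrative assumptions, not the action's actual commands):

  # background prune as discussed above
  while true; do docker system prune --all --volumes --force; sleep 60; done &
  # a single Galaxy startup covering all tool tests, instead of one planemo test per chunk
  planemo test --biocontainers tools/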

@mvdbeek merged commit d70a31a into galaxyproject:main on Jul 24, 2023
14 checks passed
@bernt-matthias deleted the topic/docker-prune-1st branch on July 25, 2023 at 14:15
@bernt-matthias (Collaborator, Author)

> Then we can also drop the for loop in the action

Can do. But then we should also purge histories after the tests, or is this already done?

@mvdbeek (Member) commented Jul 26, 2023

I don't think it is, but I think that'd be a good option. Maybe we should have a CI profile in planemo that collects all those options. FYI galaxy-tool-test has a bunch of relevant options we should also expose in planemo:

  --history-per-suite   Create new history per test suite (all tests in same history).
  --history-per-test-case
                        Create new history per test case.
  --history-name HISTORY_NAME
                        Override default history name
  --no-history-reuse    Do not reuse histories if a matching one already exists.
  --no-history-cleanup  Preserve histories created for testing.
  --publish-history     Publish test history. Useful for CI testing
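
For reference, a hypothetical invocation that combines these history options; the -u/-k/-t flags and their values are assumptions about the rest of the galaxy-tool-test CLI, not taken from this thread:

  galaxy-tool-test -u http://localhost:8080 -k "$GALAXY_API_KEY" -t my_tool \
      --history-per-test-case --publish-history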

@bernt-matthias (Collaborator, Author)

OK. Then I would leave it as is and prepare a new release if you agree.

I filed an issue for the history options: galaxyproject/planemo#1380

@mvdbeek (Member) commented Jul 27, 2023

Sounds good, thank you!
