While looking for resources on what `KillMode` to use for Concourse, I came across this: concourse/concourse#480

From my preliminary searches:

- It seems like leaving `TasksMax` unset makes it default to `DefaultTasksMax`, which defaults to:

  > Defaults to 15%, which equals 4915 with the kernel's defaults on the host, but might be smaller in OS containers.

- `MemoryLimit` seems to be deprecated and replaced with `MemoryMax`.

So far we've not had any problems hitting high CPU/memory utilisation on `t3.large` instances, but this warrants further investigation...
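For context, `KillMode`, `TasksMax` and `MemoryMax` are all `[Service]`-section directives, so they would sit alongside the worker unit's existing settings. A minimal sketch, where the unit name and the values are purely illustrative (not what #480 or Concourse recommends):

```ini
# /etc/systemd/system/concourse-worker.service (illustrative excerpt)
[Service]
# Placeholder value; the right KillMode for Concourse is what #480 discusses.
KillMode=process
# Lift the per-unit task (PID/thread) cap above DefaultTasksMax (15% of the kernel's pid limit).
TasksMax=infinity
# MemoryMax supersedes the deprecated MemoryLimit on recent systemd.
MemoryMax=6G
```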
Relevant issue for potential problems we could encounter: systemd/systemd#3211
For their smoke tests, Concourse uses the following systemd configuration (relevant parts highlighted): https://github.com/concourse/concourse/blob/master/ci/deployments/smoke/systemd/concourse-worker.service#L9-L12
Example of current limits in our own environment for workers
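For anyone who wants to compare against other workers, the effective limits for a unit can be read back with `systemctl show`; the unit name below is an assumption (`concourse-worker.service`):

```shell
# Inspect the limits systemd has actually applied to the worker unit.
# MemoryMax needs a reasonably recent systemd; older versions expose MemoryLimit instead.
systemctl show concourse-worker.service -p TasksMax -p TasksCurrent -p LimitNOFILE -p MemoryMax
```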
I would suggest we increase our systemd concourse config to `LimitNOFILE=4096:8192`.
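If we go that route, one way to apply it without touching the packaged unit is a drop-in override (created with `systemctl edit concourse-worker.service`); a sketch, again assuming that unit name:

```ini
# /etc/systemd/system/concourse-worker.service.d/override.conf (illustrative)
[Service]
# soft:hard open-file limit for the worker process
LimitNOFILE=4096:8192
```

After writing the override, a `systemctl daemon-reload` and a restart of the unit are needed for the new limit to take effect.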