[systemd] limit memory usage of a user #71
Labels: documentation, enhancement, question
To limit memory for `docker` users, try the solution in #60 instead. For bare-metal users, read on.
Lately I wrote a script to `kill` the process using the most RAM. But for a friendlier approach, less prone to unexpected system behavior, here is another way. I found that the method below did NOT work as I expected; maybe that's because it is not a hard limit, or maybe my processes have something to do with the `root` user. Anyway, you can try it yourself to see if it works.

step 1: find the `uid` of a user:

```
id -u username
```
step 2: create the following file as `/etc/systemd/system/user-1000.slice`, where `1000` is the `uid` (replace it with the `uid` you found) and 24GB is the RAM limit.
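The slice unit could look like the following; a minimal sketch assuming uid 1000 and a 24GB cap (`[Slice]` and `MemoryHigh=` are standard systemd directives, but the values are illustrative):

```shell
# Sketch: generate user-1000.slice with a 24G soft memory cap.
# Written to the current directory here; on a real system, install it
# as root to /etc/systemd/system/user-1000.slice.
uid=1000
cat > "user-${uid}.slice" <<EOF
[Unit]
Description=Memory limit slice for UID ${uid}

[Slice]
MemoryHigh=24G
EOF
```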
then update `systemd`:

```
systemctl enable user-1000.slice
```
step 3: let's occupy 68GB in 60s, using `stress`:

```
sudo apt install stress
stress --vm 1 --vm-bytes 68G --timeout 60s
```

In my case, the allocation still went through; check on your own system if needed.
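For contrast, here is what a true hard cap feels like. This is not systemd, just a per-process analogy using POSIX rlimits via Python's `resource` module: a sketch showing that a hard limit denies the oversized allocation outright, instead of merely pressuring the process the way a soft limit does.

```python
# Per-process analogy (not systemd): cap this process's address space
# with RLIMIT_AS, then try to allocate past the cap. On Linux, the
# oversized allocation fails immediately with MemoryError.
import resource

cap = 1 * 1024**3  # 1 GiB address-space cap (soft and hard)
resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

try:
    buf = bytearray(2 * 1024**3)  # try to grab 2 GiB, past the cap
    outcome = "allocation succeeded"
except MemoryError:
    outcome = "allocation denied by the hard limit"

print(outcome)
```

The behavior is Linux-specific (`RLIMIT_AS` is not enforced the same way on all platforms), but it mirrors how a cgroup-level hard cap refuses memory rather than reclaiming it.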
### Note: this method is for `systemd`; for other systems, you may want to use `cgroups`, `selinux`, etc.

References:
- https://unix.stackexchange.com/questions/351466/set-a-default-resource-limit-for-all-users-with-systemd-cgroups
Note: You may want to use `MemoryMax`; keep reading.

When you set the `MemoryHigh` control in the `user-1000.slice` configuration, you are setting a soft limit on memory usage for the processes that run under this slice. If the processes within this slice go over the `MemoryHigh` limit, the kernel begins to apply pressure to them to reduce their memory usage, but it does not enforce the limit as a hard cap; it is a "best effort" threshold.

This means the kernel will throttle the processes within the slice to try to bring memory usage down, starting by reclaiming memory from the page cache and then moving on to swap and direct reclaim, but it does not guarantee that processes cannot allocate more memory than the `MemoryHigh` limit. That is especially true when memory demand is spiky or grows very rapidly, as can happen with a tool like `stress`.

The `stress` command above tries to allocate 68G of memory very quickly, which can exceed the `MemoryHigh` value before the system's memory-management mechanisms have a chance to react.

There are a few points to consider:
- **System and process state:** If there is enough free memory or swap space available, the system may temporarily allow more memory than your `MemoryHigh` limit until memory pressure forces a reduction.
- **Limit enforcement:** The system enforces the limits in different ways. If the system is not under memory pressure (i.e., there is plenty of free memory), the kernel may not actively throttle the processes immediately, even if they exceed `MemoryHigh`.
- **Behavior under pressure:** When the system is under memory pressure, exceeding `MemoryHigh` has a much more noticeable effect: the offending processes are the first to be throttled and, in the case of hitting a hard limit (`MemoryMax`), potentially killed by the OOM (Out of Memory) killer.

To enforce a hard limit on memory usage that triggers OOM-killer actions once exceeded, you should also set the `MemoryMax` directive for the slice. With `MemoryMax` set, once processes under `user-1000.slice` try to allocate more than 25GB, they are strictly denied additional memory and may be killed if no more memory can be freed.

Remember, setting a `MemoryMax` limit that is too low can result in processes being killed unexpectedly, leading to system instability or loss of work. Always plan and test your configuration changes carefully, especially on production systems.
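A sketch of the hardened slice file, combining the soft limit from step 2 with a 25GB hard cap (`MemoryMax=` is the standard systemd directive; the values here are illustrative):

```ini
# /etc/systemd/system/user-1000.slice
[Slice]
MemoryHigh=24G
MemoryMax=25G
```

After changing the file, run `systemctl daemon-reload` so `systemd` picks up the new values.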