Problem: There is currently no automatic or dynamic way to control the memory usage of the php-fpm worker processes.
I am noticing many OOMKiller events / pod restarts in Kubernetes. The reason is that my PHP pod has a memory limit of 3 GB, and sooner or later it hits that limit because the pool consumes more and more memory over time. Currently I can't tell whether the cause is simply too many workers or a few workers using too much memory. Either way, I think a smart way to control the overall memory footprint of php-fpm is needed, but currently there is none, and I have no proper advice yet that works in all cases.
With the default settings in my scenario (a Symfony application) I can do the following calculation: one PHP worker at 400 MB * 50 workers is 20 GB per pod, plus maybe 1 GB for Apache. It is obvious that the desired overall footprint should rather be something like 1-4 GB by default.
bash-5.1$ grep -i "max_children" /etc/php-fpm.d/www.conf
;   static - a fixed number (pm.max_children) of child processes;
; pm.max_children - the maximum number of children that can
; pm.max_children - the maximum number of children that
pm.max_children = 50
bash-5.1$ ps -ylC php-fpm --sort:rss
S UID PID PPID C PRI NI RSS SZ WCHAN TTY TIME CMD
S 1001 63 1 0 80 0 15376 294786 ep_pol ? 00:00:00 php-fpm
S 1001 66 63 0 80 0 202108 318985 skb_wa ? 00:00:17 php-fpm
R 1001 501 63 1 80 0 266220 338824 - ? 00:00:05 php-fpm
S 1001 367 63 0 80 0 317184 349248 skb_wa ? 00:00:22 php-fpm
S 1001 65 63 3 80 0 321776 349415 skb_wa ? 00:03:00 php-fpm
S 1001 64 63 0 80 0 331004 350440 skb_wa ? 00:00:32 php-fpm
S 1001 292 63 1 80 0 333932 352134 skb_wa ? 00:01:49 php-fpm
S 1001 365 63 0 80 0 341204 390597 skb_wa ? 00:00:28 php-fpm
S 1001 68 63 0 80 0 345152 352108 skb_wa ? 00:00:24 php-fpm
S 1001 67 63 0 80 0 406312 406638 skb_wa ? 00:00:33 php-fpm
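For reference, the combined resident memory of the pool can be summed up directly; a minimal sketch using the same php-fpm process name as in the listing above:
# Sum the RSS (in KiB) of all php-fpm processes and print the total in MiB.
ps --no-headers -o rss -C php-fpm | awk '{ sum += $1 } END { printf "%.0f MiB\n", sum / 1024 }'
In the listing above, only nine workers already add up to roughly 2.8 GB, so a 3 GB pod limit with pm.max_children = 50 cannot hold.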
What do you think?
Reproducer
No response
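One possible approach, as a rough sketch: at container start, derive pm.max_children from the memory limit reported by the cgroup-limits helper (MEMORY_LIMIT_IN_BYTES) divided by an assumed per-child footprint (PHP_MEMORY_PER_CHILD, defaulting to 500 MB here), and patch www.conf accordingly: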
export_vars=$(cgroup-limits); export $export_vars
export PHP_MEMORY_PER_CHILD=${PHP_MEMORY_PER_CHILD:-500}
max_childs_computed=$((MEMORY_LIMIT_IN_BYTES/1024/1024/$PHP_MEMORY_PER_CHILD))
# The pm.max_children should never be lower than pm.min_spare_servers, which is set to 5.
[[ $max_childs_computed -le 5 ]] && max_childs_computed=5
export PHP_MAX_CHILDREN=${PHP_MAX_CHILDREN:-$max_childs_computed}
echo "-> pm.max_children and pm.max_spare_servers is set to $PHP_MAX_CHILDREN, using PHP_MEMORY_PER_CHILD=${PHP_MEMORY_PER_CHILD}"
sed -i "s/.*pm.max_children.*=.*/pm.max_children = $PHP_MAX_CHILDREN/g" /etc/php-fpm.d/www.conf
sed -i "s/.*pm.max_spare_servers.*=.*/pm.max_spare_servers = $PHP_MAX_CHILDREN/g" /etc/php-fpm.d/www.conf
Container platform
No response
Version
Any with FPM.
OS version of the container image
RHEL 8
Bugzilla, Jira
No response