Current Behavior
Radarr will randomly start using 100% CPU on the VM that hosts all of my Arrs, causing issues; I have to go into Portainer and restart the container to get it to stop. At first it would fill up the swap, so I disabled swap two days ago, and then it started hitting 100% CPU yesterday and today. The VM has 4 vCPUs and 16 GB RAM. I also run a separate 4K Radarr instance as part of the same stack, and it is not exhibiting this behavior.
Expected Behavior
A single container should not peg the CPU at 100%. I don't know what else to put here.
Steps To Reproduce
It happens at random, but to reproduce: set up a VM with Ubuntu 22.04, install Docker, Compose, and Portainer, then disable the swap space on the VM. Let Radarr run for a day or two and it should peg the CPU at 100%.
Environment
- OS: Ubuntu 22.04
- How docker service was installed: As part of a single stack containing all of the Arrs, on the same Docker network.
Container logs

```
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:24 +00:00", the heartbeat has been running for "00:00:01.3019212" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:27 +00:00", the heartbeat has been running for "00:00:01.6742217" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:33 +00:00", the heartbeat has been running for "00:00:01.6027067" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:38 +00:00", the heartbeat has been running for "00:00:01.6800331" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:43 +00:00", the heartbeat has been running for "00:00:01.7568994" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:47 +00:00", the heartbeat has been running for "00:00:01.7845453" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:53 +00:00", the heartbeat has been running for "00:00:01.2124929" which is longer than "00:00:01". This could be caused by thread pool starvation.
```
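Warnings like these can be triaged by pulling the heartbeat overrun durations out of the log to see how often starvation episodes line up with the CPU spikes. A minimal sketch, assuming the warning format shown above; the sample file and its path are illustrative, so point the `grep` at the container's real log file instead:

```shell
# Illustrative sample file; substitute your actual Radarr log.
cat > /tmp/radarr-heartbeat.log <<'EOF'
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:24 +00:00", the heartbeat has been running for "00:00:01.3019212" which is longer than "00:00:01". This could be caused by thread pool starvation.
[Warn] Microsoft.AspNetCore.Server.Kestrel: As of "10/03/2024 18:05:27 +00:00", the heartbeat has been running for "00:00:01.6742217" which is longer than "00:00:01". This could be caused by thread pool starvation.
EOF

# Extract each overrun duration; a steady stream of values well above
# 00:00:01 correlates with the 100% CPU episodes.
grep -o 'running for "[^"]*"' /tmp/radarr-heartbeat.log | cut -d'"' -f2
```

Counting these per minute (e.g. piping through `uniq -c` on the timestamp field) gives a rough timeline of when the thread pool starts starving.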
CPU architecture
x86-64
Docker creation
```
# movie management
  radarr:
    image: ghcr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - /portainer/Files/AppData/Config/radarr:/config
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone
      - /home/{redacted-user}/stuff:/stuff
      - /home/{redacted-user}/backup:/backup
    ports:
      - 7878:7878
```
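One way to keep a runaway container from starving the whole VM, regardless of the root cause, is to cap the service's resources in the same compose file; Docker Compose supports `cpus` and `mem_limit` at the service level for non-swarm deployments. A sketch with illustrative values (not from the original report):

```yaml
services:
  radarr:
    image: ghcr.io/linuxserver/radarr:latest
    container_name: radarr
    restart: unless-stopped
    # Illustrative caps, tune to taste: limit Radarr to 2 of the 4 vCPUs
    # and 4 GB of the 16 GB RAM so a runaway process cannot starve the
    # other Arrs in the stack.
    cpus: "2.0"
    mem_limit: 4g
```

This does not fix the underlying CPU spin, but it keeps the other containers responsive while the spike is diagnosed.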