The Portmaster process is a CPU hog #422
Hey @boistordu, thanks for reporting this!
This is very interesting, because even if the Portmaster uses more resources while processing updates, that should never result in dropped packets.
This is the case.
Are you sure the host did not have access to anything? How would it then be able to download and process updates, and thus use more resources? Please elaborate. Another great way to help us better understand the issue is if you could create a screen recording where we can see the Portmaster logs and htop. You can view the logs in the console using:
As I've stated in the torrents thread, the interface is unresponsive because the process is taking too many resources, and since your interface is waiting on answers from it... well, you can imagine the problem. So unless there is a command line to force it? I can capture the journal from the CLI, that's for sure. Also, most of the time it happens when, for example, my Pi 4 loses the VPN connection in the sense that the VPN is no longer answering even though the Pi 4 still has the tun0 interface active. But not only that, because then I would not notice the packets also dropping in other cases. And since it's not happening all the time, and somewhat randomly, I'm mostly sure it happens when Portmaster is updating its lists or something like that, because of the high CPU demand when it does so.
Thanks for elaborating on the issue.
And I responded there.
$ curl http://127.0.0.1:817/api/v1/debug/core?style=github

Version 0.7.6
Platform: ubuntu 21.10
Status: Trusted
No Module Error
Unexpected Logs
Goroutine Stack
Got lucky during one of the loops, but since no torrent program was running, I don't think it's related to the other post.
Sweet! Well, I've found the hog: the Portmaster was trying to attribute about 200 network packets to a process at the same time. This would obviously cause a massive slowdown. The Portmaster only does this for the first packet of a connection and for incoming DNS requests.
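For context, attributing a packet to a process on Linux typically means mapping the connection to a socket inode and then scanning /proc/<pid>/fd for the process that owns it. The sketch below is not Portmaster's actual code, just a minimal illustration of one such lookup; the cost of a single scan is what makes doing this for hundreds of packets at once so expensive.

```go
// Illustrative sketch only, not Portmaster's code: map a socket inode to the
// owning PID by scanning /proc/<pid>/fd. One call walks every process and
// every file descriptor, so running it for hundreds of packets at once is
// very expensive.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func findPIDBySocketInode(inode uint64) (int, error) {
	target := fmt.Sprintf("socket:[%d]", inode)
	procEntries, err := os.ReadDir("/proc")
	if err != nil {
		return 0, err
	}
	for _, entry := range procEntries {
		pid, err := strconv.Atoi(entry.Name())
		if err != nil {
			continue // not a process directory
		}
		fdDir := filepath.Join("/proc", entry.Name(), "fd")
		fds, err := os.ReadDir(fdDir)
		if err != nil {
			continue // process exited or is not accessible
		}
		for _, fd := range fds {
			link, err := os.Readlink(filepath.Join(fdDir, fd.Name()))
			if err == nil && link == target {
				return pid, nil
			}
		}
	}
	return 0, fmt.Errorf("no process found for socket inode %d", inode)
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: findpid <socket-inode>")
		os.Exit(1)
	}
	inode, err := strconv.ParseUint(os.Args[1], 10, 64)
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid inode:", err)
		os.Exit(1)
	}
	pid, err := findPIDBySocketInode(inode)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(pid)
}
```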
Okay. Don't you have any more information about those packets? The destination or type of those packets, or the process number, or the port? If it's DNS resolution, then the only explanation is related to the Docker or QEMU networks created for containers or VMs.
Now that it's happening, I would say it is due to a QEMU VM. Also, the result of this CPU hog is a message from the UI that Portmaster has not been able to update. So maybe, just maybe, it is because of your own update process, which is somehow blocked (maybe by my Pi-hole filtering), and so you have 200+ identical packets all trying to do the same thing, which is reaching the update repository for your app.
No, the debug data tries to not include any sensitive data, which connections very much are.
The Portmaster does not currently support Docker or VMs (see #166), but this really depends on the exact technology. Most of them use the forward chain in the firewall and therefore bypass the Portmaster.
The update process only runs once at a time and is not parallelized in any way, so there'd only be one connection active from the update process at any time.
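As a rough illustration of what "only one update run at a time" usually looks like in Go (this is a sketch, not Portmaster's implementation), a single mutex around the update job guarantees at most one active update connection:

```go
// Minimal sketch, not Portmaster's code: serialize update runs so that at
// most one is active, and therefore at most one update connection exists.
package updates

import (
	"context"
	"sync"
)

var updateMu sync.Mutex

// RunUpdate skips the run entirely if another update is already in progress.
func RunUpdate(ctx context.Context, doUpdate func(context.Context) error) error {
	if !updateMu.TryLock() { // requires Go 1.18+
		return nil // an update is already running
	}
	defer updateMu.Unlock()
	return doUpdate(ctx) // the actual (sequential) download and apply steps
}
```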
I will do that soon.
The new v0.7.11 (in the beta release channel) has a couple of improvements that might affect this too.
Hey @boistordu, has this been resolved with v0.7.11?
I'm at 0.7.10 for the moment, not yet the one you are talking about. I'm going to check and then monitor it for a couple of days.
Version 0.7.12 is again a CPU hog.
Hope this is the right one, because I've made two and I didn't save them at the time.
Port 5001 seems to cause some problems; it goes straight to my mobile setup router, so I would guess it's something about the lpdserver.
I've looked into the logs, and the second one shows well over 3000 UDP connections that the Portmaster is trying to attribute to a process at the same time. This crippled the Portmaster. The connection you posted is TCP, and I've only seen a couple dozen of those in the logs, so I don't think they are the problem. Do you know of any application making such a massive amount of connections?
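One way such a burst can be kept from crippling a process, shown here purely as a sketch rather than as how Portmaster actually handles it, is to bound the number of concurrent attributions so excess packets wait instead of all triggering expensive lookups at once. Packet and attribute are placeholder names:

```go
// Hypothetical sketch: bound concurrent process attribution with a
// channel-based semaphore so a burst of thousands of UDP packets queues up
// instead of spawning thousands of expensive lookups at the same time.
package attribution

// Packet is a placeholder for whatever connection data the lookup needs.
type Packet struct{ /* connection 5-tuple, payload, ... */ }

// at most 8 expensive attributions at a time; the rest block and wait
var attributionSlots = make(chan struct{}, 8)

func attributeWithLimit(pkt Packet, attribute func(Packet)) {
	attributionSlots <- struct{}{}        // acquire a slot
	defer func() { <-attributionSlots }() // release it when done
	attribute(pkt)                        // the expensive /proc lookup happens here
}
```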
Okay, so here are a few of the logs over time, with various pieces of software I've identified as problematic. From 0.7.12 to 0.7.18 there has been some regression in Portmaster's efficiency, or maybe it is handling more kinds of packets? I've also identified another package that might be causing problems: Portmaster is still consuming too many CPU resources while loading new pages in Firefox, but at least it stops relatively sooner and doesn't get into a stalled state. The problem is intimately related to the nature of the Intel Y-series hardware: Portmaster is very demanding, so from 3.2 GHz the thermal protection kicks in immediately and drops the clock to 2.5 and then 1.6 GHz. Which means that Portmaster needs to be optimized to use less CPU while handling the few hundred TCP connections that Firefox makes when loading a page, or it should be able to control its own CPU consumption so that it stays multi-threaded but doesn't use 100% of the CPU when it doesn't need to. There should also be a view-only mode, where Portmaster just reports the packets/connections being made, and a control mode where it actually analyzes and filters the packets. I could provide a tcpdump or Wireshark dump if you want, so you can see which connections are being made at the moment a new page is opened in Firefox, and see at the same time why Portmaster is using 250% of the CPU (htop value).
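On the "control its own CPU consumption" idea: as far as this thread shows, Portmaster exposes no such option, but a hedged sketch of what throttling could look like in Go is to cap the scheduler with GOMAXPROCS and rate-limit how many new connections are inspected per second. The numbers below are arbitrary examples:

```go
// Hypothetical sketch of self-throttling, not an existing Portmaster option:
// cap how many OS threads run Go code and rate-limit new-connection checks.
package throttle

import (
	"context"
	"runtime"

	"golang.org/x/time/rate"
)

// Allow at most 200 new-connection inspections per second, in bursts of 50.
var inspectLimiter = rate.NewLimiter(200, 50)

func init() {
	// Cap the Go runtime at roughly 2 CPU cores, even under packet bursts.
	runtime.GOMAXPROCS(2)
}

func inspectConnection(ctx context.Context, inspect func() error) error {
	if err := inspectLimiter.Wait(ctx); err != nil { // blocks to respect the rate
		return err
	}
	return inspect()
}
```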
Here is a tcpdump and debug info from when Portmaster is stalling: portmaster_stalled_with_nothing_running.txt. The problem is that at that moment I had closed nearly everything except Teams, Session, ProtonMail Bridge, etc., but they were not using any resources at all at that moment, so it can't be that.
It's at least the same result as mine, yes.
@dhaavi, got some new logs for you. This one was copied before I noticed that Portmaster is CPU-heavy; at first I suspected the SPN because the network was slow/laggy. This one was copied some time later, no more than a few minutes; it may be almost the same as the first one. In both logs I noticed a lot of errors, some bad file descriptors and some pre-auth failures. I've also got a screenshot and a short screencast, but GitHub's AWS S3 is timing out for some reason for me right now... I also couldn't upload an image to Google Images to search with it like 6 hours ago... that was fixed by restarting the computer; restarting Portmaster did not help. I also couldn't go above 1 MB/s over the local network before the reboot; now it works flawlessly and I can go up to 25 MB/s. SPN behaved weirdly (please notice the cute little cat next to the Portmaster icon not running at 99%; that means it wasn't consuming all the CPU at this moment, but the network was already unusable). htop of the Portmaster CPU hog (please notice the cute little cat next to Portmaster running at 99%).
Got another debug log: https://support.safing.io/privatebin/?1b4875121f4fd9a7#B9mAnqrQurz59ZUrbcNpCR18AD1shJ6XZxy9GWCfmHxB and also some logs; there are some weird "verdict" lines: https://support.safing.io/privatebin/?ee24159b91306ae4#8PgcUoiMtsJWjNLbwWyBxZbSMp5yg5UAbsJx4kK9t6N5
It's still happening. And all logs contain something like this (copied from the debug info)... I'm posting it here because it looks like this issue has low priority, and all the other logs I've posted have expired already... :(
Here are more logs from syslog, expiring in 1 year. It looks like the CPU usage gets back to normal immediately after those nfqueue logs stop. https://support.safing.io/privatebin/?ebb1bed14cf6b96c#BYaL2MCgsowsNPzbyCZpuv8bCaNCRjmkJWEqNTt6AzRj
It's always that nfq verdict thing. Any updates on this? I would be happy just to know that it happens to you as well, or something :D It's nothing that doesn't work; it just spins my CPU for a minute or so a few times a week.
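Since the repeated "nfq verdict" lines point at the nfqueue path, one way to check whether packets are piling up in the kernel queue while the CPU spins is to read /proc/net/netfilter/nfnetlink_queue, which lists per-queue backlog and drop counters. This is only a diagnostic sketch, and the column positions are an assumption that should be verified against the running kernel:

```go
// Diagnostic sketch: print the nfnetlink_queue backlog while the CPU spins.
// The column layout follows the kernel's nfnetlink_queue proc output; treat
// the field positions as an assumption and double-check on your kernel.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/net/netfilter/nfnetlink_queue")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		f := strings.Fields(line)
		if len(f) < 7 {
			continue
		}
		// f[0]=queue number, f[2]=packets waiting for a verdict,
		// f[5]=queue drops, f[6]=user-space drops
		fmt.Printf("queue %s: waiting=%s queue_dropped=%s user_dropped=%s\n",
			f[0], f[2], f[5], f[6])
	}
}
```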
Today it happened like 20 times already: https://support.safing.io/privatebin/?39e99bc98354b296#E63MiZvpwTLjvJHLU5KMvAZqY4JSCmDypygFVHnHj47h
It is still doing it... |
Auto-closing this issue after waiting for input for a month. If anyone finds the time to provide the requested information, please re-open the issue and we will continue handling it. |
Pre-Submit Checklist:
What happened:
Ubuntu 21.10
Each time Portmaster updates, it takes up all the CPU resources, which results in dropped packets, ICMP packets, DNS requests, etc.
What did you expect to happen?:
Good programming would mean that if there is no pingable connection to the outside world, the update process should not run at all. Or, if it fails, it should time out after 10 seconds and retry 10 minutes later.
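For illustration, the suggested behaviour could look roughly like this in Go; it is a sketch of the reporter's idea, not Portmaster's actual update scheduler, and the update host is a placeholder:

```go
// Sketch of the suggested behaviour, not Portmaster's implementation:
// probe connectivity with a 10-second timeout and, on failure, skip the
// update and retry 10 minutes later.
package updatecheck

import (
	"context"
	"fmt"
	"net"
	"time"
)

// updateHost is a placeholder; substitute the real update endpoint.
const updateHost = "updates.example.org:443"

func maybeUpdate(ctx context.Context, runUpdate func(context.Context) error) {
	d := net.Dialer{Timeout: 10 * time.Second}
	conn, err := d.DialContext(ctx, "tcp", updateHost)
	if err != nil {
		fmt.Println("no connectivity, retrying update in 10 minutes:", err)
		time.AfterFunc(10*time.Minute, func() { maybeUpdate(ctx, runUpdate) })
		return
	}
	conn.Close()
	if err := runUpdate(ctx); err != nil {
		fmt.Println("update failed:", err)
	}
}
```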
How did you reproduce it?:
Just put Portmaster on a low-end device, like a device with a U or Y CPU, and keep your connection active but in a way where the host doesn't have access to anything. You'll see the update process eating the whole CPU.
Debug Information:
Will come later