Grype scan command appears to hang when downloading db or listing file #1731
Comments
Hi @githala-deepak, thanks for the report. It sounds like grype is having trouble downloading its updated vulnerability DB, which it will try to do about once per day. If you download the db directly, with a command like this:
curl -vvv -o /tmp/db.tar.gz 'https://toolbox-data.anchore.io/grype/databases/vulnerability-db_v5_2024-02-28T01:23:28Z_ea5efb77a61bf939917f.tar.gz'
do you see any errors? Does the download succeed? I think you probably need to troubleshoot a network issue, and that curl command will start you in the right direction.
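A few of grype's own `db` subcommands can also help narrow down whether the problem is the local DB state or the network path to the CDN. A minimal sketch:

```sh
# Show where the local DB lives and when it was built
grype db status

# Ask the hosted listing whether a newer DB is available (no full download)
grype db check

# Force an update attempt with verbose logging to see exactly where it stalls
grype db update -vv
```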
I am having a similar issue.
I have recently noticed that occasionally requests to fetch the listing and database files are extremely slow or hang entirely. Additionally, it seems as though there is no retry/timeout logic on the db update process, so that may also be an area worth improving. Are the DB files located in S3, in an S3 bucket fronted by Cloudflare, or in Cloudflare R2 directly? Some examples from earlier today, in case it's helpful for you to look into logs on your end: the following requests were made around Tue, 12 Mar 2024 15:42:00 GMT.
And then some requests are quick, as if we're hitting a bad/slow backend in the rotation.
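Until grype ships its own timeouts, one stopgap (just a sketch, not an official recommendation; the image name is a placeholder) is to bound the whole run with GNU coreutils' `timeout` so a stalled DB download fails fast instead of hanging a CI job:

```sh
# Abort after 5 minutes instead of hanging indefinitely on a stalled download;
# exit status 124 means the timeout fired.
timeout 300 grype registry.example.com/my-image:latest || {
  echo "grype timed out or failed" >&2
  exit 1
}
```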
Thanks for the detailed info @mathrock! I've also seen grype db updates be slow, but haven't yet figured out why. We're investigating on our end.
Hi all! Thanks for reporting this. We've changed some configs with our CDN to try to fix the issue. Since it's only intermittent, it's hard to know for sure that it's fixed, so please let us know if you continue having any more slowness or hangs with grype database downloads. We'll also look into putting some timeouts in grype, since that should prevent the client from hanging regardless of the behavior of the CDN / database download. I'll leave this issue open while we continue to monitor, and until we have client-side timeouts merged.
I'm having the issue today. It's stuck on the last line of output: "[0000] INFO downloading new vulnerability DB"
FYI: It fixed itself after a few hours.
Hey everyone! Check out the latest release of grype, which now includes default timeouts (user-configurable as well). PR that was merged: #1777. We're currently looking into why the CDN that hosts the listing and db files ever gets into a state where it connects but fails to transfer the bytes.
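For anyone wanting to tune those new timeouts, the sketch below shows the general pattern; the exact setting names are an assumption (grype maps config keys to `GRYPE_`-prefixed environment variables), so confirm them against the output of `grype config` or the release notes for your version:

```sh
# Print the effective configuration to find the DB timeout keys and defaults
# (assumes your grype version has the `config` subcommand).
grype config

# Override the assumed timeout settings for a single run; the variable names
# below follow grype's usual GRYPE_<SECTION>_<KEY> mapping but are not verified.
GRYPE_DB_UPDATE_AVAILABLE_TIMEOUT=60s \
GRYPE_DB_UPDATE_DOWNLOAD_TIMEOUT=300s \
grype alpine:3.19
```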
@spiffcs Any update on why the CDN is acting so slow?
Hi @Fajkowsky, can you tell us a bit about when you're seeing this slowness? The only deterministic slowness we've found comes right after a new Grype DB is published: all the Grype invocations shortly afterwards download the new DB, but once that initial burst of traffic passes, most Grype clients have the new DB cached and the download traffic drops off sharply. We're looking at ways to put some jitter in there. So when you see the slow downloads, is it shortly after 5 AM UTC or so? If so, we expect this situation to improve when we introduce some jitter/staggering in when different Grype installs download the new DB. If it's at a different time, we would really appreciate some more details if you don't mind sharing them, like what time the slow runs were at and what geographic region they were in. (Feel free to join the community Slack and DM one of us if you'd rather not post that information publicly.)
Hi @willmurphyscode, today is the day. The transfer is so slow that downloading the JSON listing file took 31 seconds.
We also have a similar complaint over on scan-action: anchore/scan-action#306
Related issue at #1939
Another related issue at #1885. It seems like a number of users are still having CDN problems after the last round of attempted fixes. We will investigate and see what can be improved on the CDN side.
Hi all! After some discussion on our Discourse instance, we are going to try to reduce the probability that Grype checks for an updated DB by building in a delay where, if Grype's local database was built more recently than N hours ago, Grype will not check whether a new database is available, thus saving a network call. I think N will be configurable, and I'll post an update when this is rolled out and we'll see whether there's some improvement here. Thanks for your patience!
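In the meantime, CI users can cut the per-scan network call themselves by refreshing the DB once and then disabling auto-update for the individual scans. A small sketch (image names are placeholders):

```sh
# Refresh the vulnerability DB once, up front; this is the only networked step.
grype db update

# Subsequent scans reuse the cached DB and skip the update check entirely.
export GRYPE_DB_AUTO_UPDATE=false
grype alpine:3.19
grype debian:bullseye
```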
Hi all, we have rolled out a change to Grype's DB hosting infrastructure that reduces the number of bytes Grype needs to download when checking for a new database by about 95%. This change is server-side only, so you don't need to upgrade grype to benefit. We have also set up some metrics on this, and so far the fix seems to have helped. You can read more here. Please let us know if you're still impacted by slow checks for new grype databases. If the metrics improvements hold for the next week or so and there aren't new complaints, we'll close this issue. Thanks for your patience on this one.
Hi all! Our metrics indicate that the reduced size of the listing file has fixed this problem. There are more details on the measurements we did on the community Discourse. If we've missed something, please let us know on Discourse or by opening a new issue. Thanks!
We've been seeing issues with this again (see also #846), e.g. this Tuesday the 13th grype tried for nearly 2 hours before giving up.
And again today I've got an invocation which has been stuck for 2h35m and counting without any progress...
Are there ongoing infrastructure issues?
There seem to be continued issues downloading the database; see also anchore/scan-action#306. As noted earlier, we believe that shrinking the listing file solved the issues with downloading the listing, but it's not possible to shrink the database itself in a similar manner, which is where the failures have now moved. @sparrowt we have not been able to identify any specific issues that are within our power to fix with our current CDN hosting setup, unfortunately. We do have a number of options to pursue. But are you using the latest version of Grype? There should be a significantly shorter timeout than 2 hours.
I was also experiencing this issue with the download of today's db not completing. Workaround: I was able to manually download yesterday's vulnerability db and import it. I did the following to obtain links to the dbs and import one; see the sketch below. Hope this helps until the root issue is resolved.
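A plausible reconstruction of that workaround, assuming the public listing URL used by grype's default update check and the `grype db import` subcommand; the chosen database URL is left as a placeholder:

```sh
# List URLs of recent schema v5 databases from the public listing file.
curl -s https://toolbox-data.anchore.io/grype/databases/listing.json \
  | grep -o 'https://[^"]*vulnerability-db_v5[^"]*\.tar\.gz' \
  | head -n 5

# Download one of them (e.g. yesterday's build); '<url-from-listing>' is a placeholder.
curl -o /tmp/vulnerability-db.tar.gz '<url-from-listing>'

# Import the archive so grype uses it without attempting its own download.
grype db import /tmp/vulnerability-db.tar.gz
```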
Many thanks @vica-atlassian for the workaround; it saved me a lot of time. Funny: I was just able to wget yesterday's vulnerability database in 18 secs (~10.6 MiB/sec).
We are also having the same issue. It started yesterday morning. We had been using an old version of Grype, so we updated it to the latest version. The problem seemed to be intermittent and resolved itself. However, it is now happening consistently again.
Our team is also experiencing this issue, with the current day's DB download not completing (the previous day's DB works fine). We are looking at workarounds :(
Hey everyone. Can GRYPE_DB_AUTO_UPDATE=false be used as an input in GitHub Actions? I don't see it listed: https://github.com/anchore/scan-action
@jonathanbro yes, the action supports all grype settings via environment variables set on the workflow step that runs it.
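To illustrate (a sketch, not the action's documented interface): the action ultimately invokes the grype binary, so an environment variable set for that step should reach grype the same way it would in a local shell:

```sh
# Locally, the variable just needs to be in grype's environment:
GRYPE_DB_AUTO_UPDATE=false grype my-image:latest

# In a GitHub Actions workflow, the equivalent is an `env:` entry on the step
# that runs the scan action, e.g.:
#   - uses: anchore/scan-action@v3
#     with:
#       image: my-image:latest
#     env:
#       GRYPE_DB_AUTO_UPDATE: "false"
```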
For those affected by this issue: the team deployed the changes to how the grype vulnerability database is served late last night (UK time), so runs should no longer exhibit the same network stalling. Please report if you see any further issues.
Confirming it works now for me.
It's also working for us now. Thank you for your speedy action.
Hi all, we've made a change to our database hosting that we believe should fix these issues; there is some more information on Discourse.
What happened:
Grype command gets stuck and I get the following error after 3 hours:
failed to load vulnerability db: unable to update vulnerability database: unable to download db: stream error: stream ID 1; INTERNAL_ERROR; received from peer
What you expected to happen:
Grype scan should get completed in under a minute
How to reproduce it (as minimally and precisely as possible):
Occurs randomly, can't reproduce
Anything else we need to know?:
Environment:
Output of grype version:
Application: grype
Version: 0.74.5
BuildDate: 2024-02-07T21:34:47Z
GitCommit: 7478090
GitDescription: v0.74.5
Platform: linux/amd64
GoVersion: go1.21.6
Compiler: gc
Syft Version: v0.104.0
Supported DB Schema: 5
OS (e.g. cat /etc/os-release or similar):
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"