
@smithy/node-http-handler:WARN socket usage at capacity and additional requests are enqueued #89

Open
siebrand opened this issue Feb 26, 2024 · 4 comments
Labels: bug

Comments


siebrand commented Feb 26, 2024

I very recently started using the latest s3p (3.5.x), and I'm regularly seeing the message "@smithy/node-http-handler:WARN socket usage at capacity and additional requests are enqueued". That output seems unexpected.

$ time npx s3p sync --bucket <source> --to-bucket <target> --overwrite --storage-class INTELLIGENT_TIERING --large-copy-concurrency 20
s3p: d: 1s 29ms, items: 3039, items/s: 2953, listRequests: 8, listWorkers: 4, copied 0/0 0_B/0_B 0_B/s inFlight: 0_B copyWorkers: 0 + 0 same: 3039(6.68gB) toDelete: 0(0_B)
s3p: d: 2s 29ms, items: 6039, items/s: 2976, listRequests: 14, listWorkers: 12, copied 84/1587 84.29mB/2.99gB 41.54mB/s inFlight: 1.04gB copyWorkers: 500 + 0 copyQueue: 1003 + 0 same: 4452(9.58gB) toDelete: 0(0_B)
s3p: d: 3s 29ms, items: 6039, items/s: 1993, listRequests: 14, listWorkers: 12, copied 475/1587 811.33mB/2.99gB 267.84mB/s inFlight: 1.05gB copyWorkers: 500 + 0 copyQueue: 612 + 0 same: 4452(9.58gB) toDelete: 0(0_B)
s3p: d: 4s 29ms, items: 10042, items/s: 2492, listRequests: 22, listWorkers: 16, copied 744/4460 1.29gB/8.53gB 328.54mB/s inFlight: 1.08gB copyWorkers: 500 + 0 copyQueue: 3216 + 0 same: 5582(12.89gB) toDelete: 0(0_B)
s3p: d: 5s 36ms, items: 14042, items/s: 2788, listRequests: 30, listWorkers: 15, copied 1081/8298 2.00gB/15.86gB 407.63mB/s inFlight: 1.04gB copyWorkers: 500 + 0 copyQueue: 6717 + 0 same: 5744(13.68gB) toDelete: 0(0_B)
s3p: d: 6s 36ms, items: 17044, items/s: 2823, listRequests: 36, listWorkers: 28, copied 1350/10300 2.51gB/19.44gB 424.98mB/s inFlight: 1.01gB copyWorkers: 500 + 0 copyQueue: 8450 + 0 same: 6744(15.95gB) toDelete: 0(0_B)
s3p: d: 7s 55ms, items: 19044, items/s: 2699, listRequests: 40, listWorkers: 27, copied 1644/12273 3.06gB/22.63gB 443.74mB/s inFlight: 932.37mB copyWorkers: 500 + 0 copyQueue: 10129 + 0 same: 6771(16.11gB) toDelete: 0(0_B)
s3p: d: 8s 182ms, items: 31044, items/s: 3794, listRequests: 64, listWorkers: 48, copied 1818/14196 3.39gB/26.38gB 424.65mB/s inFlight: 881.52mB copyWorkers: 500 + 0 copyQueue: 11878 + 0 same: 16848(36.96gB) toDelete: 0(0_B)
s3p: d: 9s 185ms, items: 31044, items/s: 3379, listRequests: 64, listWorkers: 50, copied 2157/14196 3.90gB/26.38gB 434.61mB/s inFlight: 1.10gB copyWorkers: 500 + 0 copyQueue: 11539 + 0 same: 16848(36.96gB) toDelete: 0(0_B)
@smithy/node-http-handler:WARN socket usage at capacity=50 and 510 additional requests are enqueued. See https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler config.
s3p: d: 10s 345ms, items: 43044, items/s: 4161, listRequests: 88, listWorkers: 69, copied 2303/14196 4.18gB/26.38gB 414.21mB/s inFlight: 1.07gB copyWorkers: 500 + 0 copyQueue: 11393 + 0 same: 28848(62.51gB) toDelete: 0(0_B)
s3p: d: 11s 450ms, items: 57044, items/s: 4981, listRequests: 116, listWorkers: 76, copied 2415/14196 4.40gB/26.38gB 393.76mB/s inFlight: 1.10gB copyWorkers: 500 + 0 copyQueue: 11281 + 0 same: 42848(92.43gB) toDelete: 0(0_B)
s3p: d: 12s 450ms, items: 57044, items/s: 4581, listRequests: 116, listWorkers: 100, listQueue: 4, copied 2609/14196 4.75gB/26.38gB 390.36mB/s inFlight: 1.14gB copyWorkers: 500 + 0 copyQueue: 11087 + 0 same: 42848(92.43gB) toDelete: 0(0_B)
s3p: d: 13s 791ms, items: 69044, items/s: 5006, listRequests: 140, listWorkers: 100, listQueue: 14, copied 2791/14260 5.14gB/26.56gB 381.52mB/s inFlight: 1.16gB copyWorkers: 500 + 0 copyQueue: 10969 + 0 same: 54784(117.00gB) toDelete: 0(0_B)
s3p: d: 15s 87ms, items: 81044, items/s: 5371, listRequests: 164, listWorkers: 100, listQueue: 36, copied 2930/14260 5.46gB/26.56gB 370.71mB/s inFlight: 1.08gB copyWorkers: 500 + 0 copyQueue: 10830 + 0 same: 66784(142.27gB) toDelete: 0(0_B)
s3p: d: 16s 238ms, items: 94055, items/s: 5792, listRequests: 190, listWorkers: 100, listQueue: 46, copied 2976/14260 5.63gB/26.56gB 354.98mB/s inFlight: 1.04gB copyWorkers: 500 + 0 copyQueue: 10784 + 0 same: 79795(170.18gB) toDelete: 0(0_B)
s3p: d: 17s 773ms, items: 105059, items/s: 5911, listRequests: 212, listWorkers: 100, listQueue: 74, copied 3086/14260 5.86gB/26.56gB 337.74mB/s inFlight: 1.03gB copyWorkers: 500 + 0 copyQueue: 10674 + 0 same: 90799(194.22gB) toDelete: 0(0_B)
s3p: d: 18s 831ms, items: 107059, items/s: 5685, listRequests: 216, listWorkers: 100, listQueue: 92, copied 3278/14260 6.06gB/26.56gB 329.74mB/s inFlight: 1.15gB copyWorkers: 500 + 0 copyQueue: 10482 + 0 same: 92799(198.13gB) toDelete: 0(0_B)
s3p: d: 19s 967ms, items: 118059, items/s: 5912, listRequests: 238, listWorkers: 100, listQueue: 93, copied 3430/14281 6.41gB/26.63gB 328.71mB/s inFlight: 1.15gB copyWorkers: 500 + 0 copyQueue: 10351 + 0 same: 103778(221.29gB) toDelete: 0(0_B)
s3p: d: 21s 225ms, items: 130059, items/s: 6127, listRequests: 262, listWorkers: 100, listQueue: 111, copied 3510/14281 6.71gB/26.63gB 323.74mB/s inFlight: 1.01gB copyWorkers: 500 + 0 copyQueue: 10271 + 0 same: 115778(247.60gB) toDelete: 0(0_B)
s3p: d: 22s 631ms, items: 142059, items/s: 6277, listRequests: 286, listWorkers: 100, listQueue: 134, copied 3617/14281 6.86gB/26.63gB 310.29mB/s inFlight: 1.05gB copyWorkers: 500 + 0 copyQueue: 10164 + 0 same: 127778(272.90gB) toDelete: 0(0_B)
s3p: d: 23s 651ms, items: 150059, items/s: 6344, listRequests: 302, listWorkers: 100, listQueue: 149, copied 3736/14281 7.14gB/26.63gB 309.00mB/s inFlight: 998.59mB copyWorkers: 500 + 0 copyQueue: 10045 + 0 same: 135778(289.80gB) toDelete: 0(0_B)
s3p: d: 24s 669ms, items: 158059, items/s: 6407, listRequests: 318, listWorkers: 100, listQueue: 163, copied 3861/14281 7.39gB/26.63gB 306.77mB/s inFlight: 911.81mB copyWorkers: 500 + 0 copyQueue: 9920 + 0 same: 143778(306.67gB) toDelete: 0(0_B)
@smithy/node-http-handler:WARN socket usage at capacity=50 and 546 additional requests are enqueued. See https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler config.
s3p: d: 25s 907ms, items: 168059, items/s: 6486, listRequests: 338, listWorkers: 100, listQueue: 173, copied 4041/14281 7.66gB/26.63gB 302.71mB/s inFlight: 875.40mB copyWorkers: 500 + 0 copyQueue: 9740 + 0 same: 153778(326.92gB) toDelete: 0(0_B)
s3p: d: 27s 246ms, items: 181062, items/s: 6645, listRequests: 364, listWorkers: 100, listQueue: 182, copied 4129/14281 7.91gB/26.63gB 297.20mB/s inFlight: 790.07mB copyWorkers: 500 + 0 copyQueue: 9652 + 0 same: 166781(353.71gB) toDelete: 0(0_B)
s3p: d: 28s 334ms, items: 189062, items/s: 6672, listRequests: 380, listWorkers: 100, listQueue: 213, copied 4264/14281 8.12gB/26.63gB 293.50mB/s inFlight: 757.96mB copyWorkers: 500 + 0 copyQueue: 9517 + 0 same: 174781(370.54gB) toDelete: 0(0_B)

The warning appears to be pretty harmless, but it's annoying to see the s3p output polluted. It would be great if something could be done to suppress it. From https://github.com/smithy-lang/smithy-typescript/blob/3b3774052b1985a680c3db7c244c59d03357b73c/packages/node-http-handler/src/node-http-handler.ts#L71:

        /**
         * Running at maximum socket usage can be intentional and normal.
         * That is why this warning emits at a delay which can be seen
         * at the call site's setTimeout wrapper. The warning will be cancelled
         * if the request finishes in a reasonable amount of time regardless
         * of socket saturation.
         *
         * Additionally, when the warning is emitted, there is an interval
         * lockout.
         */
        if (socketsInUse >= maxSockets && requestsEnqueued >= 2 * maxSockets) {
          console.warn(
            "@smithy/node-http-handler:WARN",
            `socket usage at capacity=${socketsInUse} and ${requestsEnqueued} additional requests are enqueued.`,
            "See https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html",
            "or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler config."
          );

Caused by smithy-lang/smithy-typescript#1164. The warning appears to have been added quite recently.
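
For reference, the second knob the warning text mentions can be set when constructing the handler directly. A minimal sketch (not how s3p currently wires things up; the timeout value is illustrative):

    // Sketch only: raising socketAcquisitionWarningTimeout on a hand-built
    // NodeHttpHandler, as suggested by the warning text. 60s is illustrative.
    import { S3Client } from "@aws-sdk/client-s3";
    import { NodeHttpHandler } from "@smithy/node-http-handler";

    const s3 = new S3Client({
      requestHandler: new NodeHttpHandler({
        socketAcquisitionWarningTimeout: 60_000, // milliseconds
      }),
    });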

@shanebdavis (Member)

Hmm. This appears to be something we do have control over. It looks like it's a Node.js thing, and the AWS SDK has a way to override the socket count. This is probably slowing down S3P...

https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html
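
The linked page boils down to passing a custom https.Agent to the SDK's request handler. A minimal sketch (the socket count is illustrative, not a recommendation):

    // Sketch following the linked AWS docs: raise maxSockets via a custom
    // https.Agent passed to NodeHttpHandler. 500 is an illustrative value.
    import { Agent } from "https";
    import { S3Client } from "@aws-sdk/client-s3";
    import { NodeHttpHandler } from "@smithy/node-http-handler";

    const s3 = new S3Client({
      requestHandler: new NodeHttpHandler({
        httpsAgent: new Agent({ keepAlive: true, maxSockets: 500 }),
      }),
    });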

shanebdavis (Member) commented Mar 13, 2024

Yup, that's it. The default max appears to be 50, and you get the message right after you hit 50 list-workers:

s3p: d: 9s 185ms, items: 31044, items/s: 3379, listRequests: 64, listWorkers: 50, copied 2157/14196 3.90gB/26.38gB 434.61mB/s inFlight: 1.10gB copyWorkers: 500 + 0 copyQueue: 11539 + 0 same: 16848(36.96gB) toDelete: 0(0_B)
@smithy/node-http-handler:WARN socket usage at capacity=50 and 510 additional requests are enqueued. See https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html or increase socketAcquisitionWarningTimeout=(millis) in the NodeHttpHandler config.

So, for now, you can probably avoid that warning by using this option: --list-concurrency 50. Let me know if that works.
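
Applied to the command from the original report, that would be something like:

npx s3p sync --bucket <source> --to-bucket <target> --overwrite --storage-class INTELLIGENT_TIERING --large-copy-concurrency 20 --list-concurrency 50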

shanebdavis added the bug label Mar 13, 2024
@shanebdavis (Member)

I should be able to set the AWS client's max-sockets to match the --list-concurrency setting. Something for the future.
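
A rough sketch of what that could look like, assuming s3p builds its own S3Client and already has the concurrency values on hand (the option names below are hypothetical, not s3p's current code):

    // Hypothetical sketch: derive maxSockets from the configured concurrency so
    // the SDK never queues requests behind the default 50-socket limit.
    import { Agent } from "https";
    import { S3Client } from "@aws-sdk/client-s3";
    import { NodeHttpHandler } from "@smithy/node-http-handler";

    const makeS3Client = (opts: { listConcurrency: number; copyConcurrency: number }) =>
      new S3Client({
        requestHandler: new NodeHttpHandler({
          httpsAgent: new Agent({
            keepAlive: true,
            maxSockets: opts.listConcurrency + opts.copyConcurrency,
          }),
        }),
      });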


octavd commented Aug 12, 2024

Hello @shanebdavis ,

We are having the same error, and we've tried even with the --list-concurrency setting; it's the same issue as explained above.

npx s3p sync --bucket "$SOURCE_BUCKET" --to-bucket "$DESTINATION_BUCKET" --list-concurrency 100 > "${LOG_FILE}_$CNT"
