Non-blocking GET request not being closed properly #1191
Comments
Sorry for my late reply, my focus has been on a few other things, @vecerek. I have not looked at everything yet.
You need to make sure you handle promises in the entire chain. I'll get back to this.
const makeS3Tokenizer = () => {
  return makeTokenizer(s3, { // Note the return here, you don't want to ignore the promise
    Bucket: "my-bucket",
    Key: "my-key",
    VersionId: "dummy-version-id",
  });
};

const res = await fileType({
  makeS3Tokenizer, // Not sure if you await the resulting promise elsewhere
  tracer,
})();
No worries, mate.
I do. This is the part of the code example where the tokenizer promise is awaited:

const s3Tokenizer = await env.tracer.trace(
  "makeS3Tokenizer",
  {},
  env.makeS3Tokenizer
);

cc @Borewit
This is how it looks after the instrumentation of the AWS SDK was added; it could be that the problem is on their end. In this screenshot there are two calls to S3 now because the code changed to use the … I have no idea why.
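For context, here is a minimal sketch of how the AWS SDK instrumentation is typically wired up with dd-trace. The issue does not show the tracer configuration, so treat this setup as an assumption rather than the author's actual code:

```js
// Hypothetical tracer bootstrap (not from the issue). dd-trace must be initialised
// before the AWS SDK client is imported for the aws-sdk integration to hook in.
const tracer = require("dd-trace").init();

// Enable the built-in aws-sdk plugin so S3 calls appear as spans in Datadog.
tracer.use("aws-sdk");

module.exports = tracer;
```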
Hi there 👋
I noticed something strange the other day in my Datadog traces. The HTTP GET request fired by RangeRequestFactory's initTokenizer method keeps running even after makeTokenizer resolves. This is what it looks like in the trace.
This trace maps to code looking something like this (in the real app these pieces are properly layered; I just crammed everything into one view and removed the irrelevant bits so that it's a more-or-less complete example):
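A minimal sketch of what that code might look like, assembled from the fragments quoted elsewhere in this thread (the makeS3Tokenizer factory, the env.tracer.trace call, the fileType(env)() invocation) and the dependency list at the bottom of the issue; the imports, the S3 client wiring, and the FileType.fromTokenizer call are assumptions rather than the author's exact code:

```js
const { S3 } = require("@aws-sdk/client-s3");
const { makeTokenizer } = require("@tokenizer/s3");
const FileType = require("file-type");
const tracer = require("dd-trace").init();

const s3 = new S3({});

// Factory for the S3-backed tokenizer; the caller is responsible for awaiting the promise.
const makeS3Tokenizer = () =>
  makeTokenizer(s3, {
    Bucket: "my-bucket",
    Key: "my-key",
    VersionId: "dummy-version-id",
  });

// Roughly what the asset-validation endpoint does: build the tokenizer inside a traced
// span, then hand it to file-type to detect the content type.
const fileType = (env) => async () => {
  const s3Tokenizer = await env.tracer.trace(
    "makeS3Tokenizer",
    {},
    env.makeS3Tokenizer
  );
  return FileType.fromTokenizer(s3Tokenizer);
};

async function main() {
  // This is the call whose underlying GET request keeps running after the response is sent.
  const res = await fileType({ makeS3Tokenizer, tracer })();
  console.log(res); // e.g. { ext: "png", mime: "image/png" }
}

main().catch(console.error);
```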
I wasn't sure whether to report it in this repo or the tokenizer/range one. Ideally, I would want the library to close all HTTP connections before makeTokenizer returns. I tried to look for the issue, but this.getHeadRequestInfo seems to be properly awaited here. I'm not quite sure where this "leakage" happens. The asset validation endpoint I've built usually responds within 300ms, so it's a bit worrying that some resources are still being used for another 5.7 seconds after the response was sent.
@Borewit, do you have any ideas where to look further?
I've enabled the debug logging for range-request-reader and was able to collect some logs, although I'm not sure how useful they may be in this case. I hope it helps somewhat.
Collected logs (.csv)
Dependency list
- Node: 18.13.0
- file-type: 16.5.4
- @tokenizer/s3: 0.2.3
- @aws-sdk/client-s3: 3.264.0