Use cleanhttp.DefaultPooledClient() by default #1580
If this solved the problem for you, that's a good default. To be honest, we're using this library as-is in a number of fairly large projects, and I've never seen an issue with it. Your traffic must be significantly different from ours. But then again, using a nicely configured, custom HTTP client is a good idea.
I am surprised that my pattern would be special in any way: making hundreds of additional requests without batching seems to produce that many idle connections. BTW, from the same people who made the cleanhttp client, there is also a retryablehttp client. I have not yet tested it, but it might be a better API to not have retrying code inside this package and simply leave it to the caller to specify an HTTP client which knows how to retry (rough sketch below). I am mentioning this because, when I was looking at the code, I was worried a bit that if there is any retrying,
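For the retryablehttp idea, this is roughly the wiring I have in mind, as an untested sketch (the import paths and the `RetryMax` value are my assumptions, not something I have verified against this library):

```go
package main

import (
	retryablehttp "github.com/hashicorp/go-retryablehttp"
	elastic "github.com/olivere/elastic/v7"
)

// newRetryingElasticClient is a hypothetical helper: the caller supplies an
// HTTP client that already knows how to retry, so no retry logic is needed
// inside the elastic package itself.
func newRetryingElasticClient() (*elastic.Client, error) {
	rc := retryablehttp.NewClient()
	rc.RetryMax = 3 // placeholder retry budget
	return elastic.NewClient(
		elastic.SetHttpClient(rc.StandardClient()),
	)
}
```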
As a library user, I would expect the client to use the standard HTTP client from the Go standard library by default. Anything else would be confusing to me. I agree that you would want to configure a custom HTTP client for your production workloads, but I would do that explicitly, not implicitly. Also, it's yet another dependency, which I really try to avoid these days.
Which version of Elastic are you using?
[x] elastic.v7 (for Elasticsearch 7.x)
[ ] elastic.v6 (for Elasticsearch 6.x)
[ ] elastic.v5 (for Elasticsearch 5.x)
[ ] elastic.v3 (for Elasticsearch 2.x)
[ ] elastic.v2 (for Elasticsearch 1.x)
I have been doing many `.Get` requests in a loop (10k per second or so) and I noticed that soon I get a `dial tcp 172.17.0.2:9200: connect: cannot assign requested address` error. After debugging and trying different things (fixing `SetURL` to a fixed value, disabling `SetSniff`), I realized that at one point idle connections just start growing and growing (I used `netstat` and grepped for `:9200`), up to 30k or so, after which the program crashed with the error above.

I tried different things, e.g., adding another `defer` to `PerformRequest`, because the body has to be both fully consumed and closed for the connection to be returned to the pool (I suspected that when `LimitReader` is used, the body is not fully read):
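Roughly something along these lines, a minimal sketch of the idea rather than the exact patch (the helper name is mine):

```go
package main

import (
	"io"
	"net/http"
)

// drainAndClose (hypothetical name) fully consumes and then closes a response
// body, e.g. the bytes a LimitReader left unread, so the underlying connection
// can be returned to the transport's keep-alive pool instead of staying idle.
func drainAndClose(res *http.Response) {
	if res == nil || res.Body == nil {
		return
	}
	io.Copy(io.Discard, res.Body)
	res.Body.Close()
}
```

with a `defer drainAndClose(res)` added right after the request returns.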
I also read the suggestions from the Docker wiki page, but they do not apply in my case: my program can access ES at both `127.0.0.1:9200` and `172.17.0.2:9200`. It does happen that it starts with `127.0.0.1:9200` and then sniffing changes it to `172.17.0.2:9200`, which leaves it with a bunch of idle connections and can make the issue reported here happen faster (this might be fixed by #1507, I haven't tested it), but fundamentally that is not the reason it collects 30k idle connections; it is only a 2x factor for a while.

But the extra `defer` did not help. What did solve the issue in the end was using `cleanhttp.DefaultPooledClient` as the default client:
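Roughly like this, a sketch of the caller-side wiring rather than my exact code (the URL is just an example):

```go
package main

import (
	cleanhttp "github.com/hashicorp/go-cleanhttp"
	elastic "github.com/olivere/elastic/v7"
)

// newElasticClient hands elastic a pooled client from cleanhttp instead of
// relying on http.DefaultClient, so keep-alive connections are actually
// reused rather than piling up as idle sockets.
func newElasticClient() (*elastic.Client, error) {
	return elastic.NewClient(
		elastic.SetURL("http://127.0.0.1:9200"),
		elastic.SetHttpClient(cleanhttp.DefaultPooledClient()),
	)
}
```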
I suspect it addresses the issue because it has a larger `MaxIdleConnsPerHost` value.

So maybe this should be the default instead of `http.DefaultClient`?