@jiten1551 Are you saying that this is a bug, or a feature that you might want? The default RequestsHttpConnection does not support chunked transfer encoding; it would have to be a new flag in the connection class to allow for that. But just to separate things: SigV4 works with compressed requests using http_compress. What you're asking for is compressing and chunking, which could be a new feature?
I think it's a feature request: enable chunked transfer encoding (and ensure it works with SigV4). A similar problem in the Java client was that setting compression would also automatically turn on chunked transfer encoding, which worked, except with SigV4.
Python requests does chunked transfer automatically if a generator is passed. In fact, one could arguably bypass the API and call the connection's perform_request directly with a generator, as long as http_compress is disabled (so gzip.compress doesn't run) and the body argument is just happily passed along to requests...
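To illustrate that requests behavior with a minimal, hedged sketch (hypothetical endpoint and document shape, not going through the client): when `data` is a generator, requests cannot compute a Content-Length and falls back to Transfer-Encoding: chunked.

```python
import requests

def ndjson_lines():
    # Each yielded bytes object is sent as one HTTP chunk; because no
    # Content-Length can be computed for a generator, requests switches
    # to Transfer-Encoding: chunked automatically.
    for i in range(3):
        yield ('{"index": {}}\n{"field": %d}\n' % i).encode()

# Hypothetical endpoint, for illustration only.
resp = requests.post(
    "https://localhost:9200/my-index/_bulk",
    data=ndjson_lines(),
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.status_code)
```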
What is the bug?
The OpenSearch Python client uses the Content-Length header and does not support chunked transfer encoding with compression enabled.
How can one reproduce the bug?
Steps to reproduce the behavior:
Repro code: make a request with http_compress enabled; the gzipped body is sent with a Content-Length header, causing the call to pass. But what if the content is too large and you want to use chunked transfer encoding together with compression? (A sketch of the kind of call described follows below.)
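The original repro snippet is not included above; a minimal sketch of the kind of call being described, assuming opensearch-py 2.x (where AWSV4SignerAuth and RequestsHttpConnection are importable from the top-level package) and hypothetical host, region, and index names, might look like this:

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

# Hypothetical domain and region, for illustration only.
host = "my-domain.us-east-1.es.amazonaws.com"
region = "us-east-1"
credentials = boto3.Session().get_credentials()

client = OpenSearch(
    hosts=[{"host": host, "port": 443}],
    http_auth=AWSV4SignerAuth(credentials, region),
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
    http_compress=True,  # gzip the request body
)

# The whole body is gzip-compressed in memory and sent with a
# Content-Length header; there is no option to stream it with
# Transfer-Encoding: chunked, which hurts for very large payloads.
large_ndjson_body = '{"index": {}}\n{"field": "value"}\n' * 100_000
client.bulk(body=large_ndjson_body, index="my-index")
```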
What is the expected behavior?
It should support chunked transfer encoding with SigV4 so that large payloads work.
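To make the ask concrete, here is a hedged sketch at the raw requests level, not something the client supports today, of what "compressing and chunking" a large bulk body could look like: compress incrementally with zlib in gzip mode and hand requests a generator so the body is streamed with Transfer-Encoding: chunked. The endpoint and document shapes are hypothetical; the unresolved part in this issue is getting the SigV4 signer to handle a streamed body, since it normally hashes the complete payload.

```python
import zlib
import requests

def ndjson_docs(n):
    """Yield NDJSON action/source pairs for a hypothetical bulk request."""
    for i in range(n):
        yield b'{"index": {"_index": "my-index"}}\n'
        yield ('{"field": %d}\n' % i).encode()

def gzip_stream(chunks):
    """Incrementally gzip an iterable of bytes, yielding compressed chunks."""
    # wbits = 16 + MAX_WBITS selects the gzip container format.
    compressor = zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED,
                                  16 + zlib.MAX_WBITS)
    for chunk in chunks:
        out = compressor.compress(chunk)
        if out:
            yield out
    yield compressor.flush()

# Because data is a generator, requests sends it with
# Transfer-Encoding: chunked instead of buffering it for Content-Length.
resp = requests.post(
    "https://my-domain.us-east-1.es.amazonaws.com/_bulk",  # hypothetical
    data=gzip_stream(ndjson_docs(1_000_000)),
    headers={
        "Content-Type": "application/x-ndjson",
        "Content-Encoding": "gzip",
    },
)
```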
similar issue: opensearch-project/OpenSearch#3640
What is your host/environment?
Do you have any screenshots?
If applicable, add screenshots to help explain your problem.
Do you have any additional context?
opensearch-project/OpenSearch#3640