Recreate HTTPSConnection after errors #567

Open · wants to merge 2 commits into base: v3-v2021-02-25

Conversation

dgilmanAIDENTIFIED
Contributor

This is a fix to #556

If a network error happens while self.__conn.request is being called, the underlying HTTPSConnection is left in a bad state. If you then retry, the next self.__conn.request call raises CannotSendRequest.

This PR keeps track of whether self.__conn.getresponse() completed successfully and recreates the connection object if the previous request did not.
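
In other words, the flow is roughly the following (a minimal sketch, not the actual client code; the class layout and attribute names here are assumptions modeled on the diff reviewed below):

```python
import http.client


class ApiConnection:
    """Minimal sketch of the reset-on-failure pattern described above."""

    def __init__(self, host):
        self._host = host
        self._create_conn()

    def _create_conn(self):
        # Fresh connection; nothing has failed on it yet.
        self.__conn = http.client.HTTPSConnection(self._host)
        self.__needs_reset = False

    def request(self, method, path, body=None, headers=None):
        # Every __conn must have getresponse() called on it successfully
        # or it must be thrown away.
        if self.__needs_reset:
            self._create_conn()
        # Assume the worst until the response is read back successfully.
        self.__needs_reset = True
        self.__conn.request(method, path, body, headers or {})
        response = self.__conn.getresponse()
        data = response.read()
        # The full request/response cycle succeeded; the connection is reusable.
        self.__needs_reset = False
        return response.status, data
```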

@ridersofrohan

This would probably also fix #378.

@dgilmanAIDENTIFIED
Contributor Author

We've been using this successfully in production for a year now, so I hope it can be merged; it fixes what was a frequent issue for us. I am going to leave this PR open in the hope that the Recurly team can take it over and merge the fix. However, I will no longer maintain it, as we are no longer Recurly customers.

@davidemerritt

@bhelx, will this be implemented? I just ran into a similar problem.

# Every __conn must have getresponse() called on it successfully
# or it must be thrown away
if self.__needs_reset:
    self._create_conn()

Contributor

It seems like recreating the entire connection on every call might be wasteful for this problem. However, I'm not sure what the result of this is. Does it also recreate the underlying connection or just the connection object? Is there a more precise way to detect when the connection needs to be reset?

Contributor Author

__needs_reset is toggled back to False after a successful Response() (see below), so this path is only ever triggered when an earlier exception prevented that toggle.

Contributor

Seems simple enough!

Contributor Author

Another way of looking at it: if you were creating a new Recurly object for each API call, you avoided this bug, but you made a lot of redundant connection objects and Recurly instances. If you didn't make a new Recurly object each time, your Recurly instance would occasionally break when the unhandled exception was raised below. With this change, you reuse the connection object when no exception is raised and re-create it when an exception has left it in the broken state.

@bhelx
Contributor

bhelx commented Jul 5, 2023

@davidemerritt I haven't worked for Recurly since 2020, so I'm pinging @douglasmiller for this issue.

@bhelx
Contributor

bhelx commented Jul 5, 2023

Just a note: resetting the connection on every call seems kind of extreme. Perhaps you can just catch the CannotSendRequest exception, recreate the connection, and then retry the request.

But there could be some deeper problem worth looking into.
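
For illustration, that catch-and-retry alternative might look something like this (a sketch only, using plain http.client rather than the real client classes; all names are illustrative):

```python
from http.client import CannotSendRequest, HTTPSConnection


class RetryingConnection:
    """Sketch of the catch-and-retry approach suggested above."""

    def __init__(self, host):
        self._host = host
        self._conn = HTTPSConnection(host)

    def get(self, path):
        try:
            self._conn.request("GET", path)
        except CannotSendRequest:
            # An earlier request failed partway through, leaving the
            # connection unable to send; rebuild it and retry once.
            self._conn = HTTPSConnection(self._host)
            self._conn.request("GET", path)
        response = self._conn.getresponse()
        return response.status, response.read()
```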

@davidemerritt

Thanks, @bhelx!

@douglasmiller - I would expect the client = recurly.Client(settings.RECURLY_API_KEY) setup to handle connection pooling and be persistent. If it starts breaking at some point in production, we have no way to re-initialize it, since it is created as a singleton. It would be great to have this addressed. Thank you.
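
Until that happens, a workaround along these lines may help (a hedged sketch; it assumes, per the discussion above, that the stuck connection surfaces as http.client.CannotSendRequest and that constructing a new recurly.Client yields a fresh connection; the Django settings import mirrors the snippet quoted above and is hypothetical):

```python
import http.client

import recurly
from django.conf import settings  # hypothetical: mirrors settings.RECURLY_API_KEY above

_client = recurly.Client(settings.RECURLY_API_KEY)


def call_recurly(operation):
    """Run operation(_client), rebuilding the singleton if its connection is stuck."""
    global _client
    try:
        return operation(_client)
    except http.client.CannotSendRequest:
        # The singleton's underlying HTTPSConnection is in the broken state
        # described in this PR; a fresh Client starts with a clean connection.
        _client = recurly.Client(settings.RECURLY_API_KEY)
        return operation(_client)
```

Call sites would then go through call_recurly(lambda c: ...) instead of using the singleton directly.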
