Overseerr does not handle IPv6-only environments #3151
Comments
I actually have a similar issue, though not in an IPv6-only environment; my container has both IPv6 and IPv4 addresses.
The current version of Node being used (16.17) was the latest LTS at the time and is due for a bump to 18.12, which was promoted to LTS around the end of last year.
I also see what looks like the same behaviour in Kubernetes. My overseerr pod has both IPv4 and IPv6 addresses, and I can resolve names fine within the container, but the app isn't able to resolve anything:
For anyone running into this, I worked around it by disabling IPv6 in this pod with the following securityContext:
You will need to allow this unsafe sysctl in your kubelet args.
I'm sorry, but you shouldn't disable IPv6. I also don't think that securityContext will affect only the pod in question; it could potentially affect others on the same host as well.
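The securityContext snippet itself did not survive this copy of the thread. A plausible reconstruction, based on the "unsafe sysctl" note above, might look like the following (the exact sysctl name is an assumption, not the original poster's snippet):

```yaml
# Hypothetical reconstruction -- the original snippet was lost from this
# copy of the thread. Disables IPv6 inside the pod's network namespace.
securityContext:
  sysctls:
    - name: net.ipv6.conf.all.disable_ipv6
      value: "1"
```

Because this sysctl is considered unsafe, the kubelet would need to be started with something like `--allowed-unsafe-sysctls=net.ipv6.conf.all.disable_ipv6` for the pod to schedule.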
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is a bug, which probably should be fixed.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Commenting for the bot.
I have the same issue.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
The issue is still very much relevant.
Description
I am attempting to run Overseerr on a server that does not have an IPv4 address, so all external connections from this machine have to use IPv6. Most services fall back to AAAA records when the usual A records fail, but the current release of Overseerr does not fall back to IPv6 at all. This appears to be due to older versions of Node not properly handling the mix of IPv4 and IPv6 addresses.
Version
1.30.1
Steps to Reproduce
The Docker container starts normally but fails each time it tries to make an outgoing connection, which breaks almost all functionality in the app.
Entering the container, we can get the correct AAAA address from dig and even connect to these sites with curl. This proves the container's networking is set up correctly; something in the Node application is not handling IPv6 correctly.
Attempting to build the image from the current Dockerfile also gives an error, due to Node attempting to fetch packages from IPv4 addresses:
Upgrading the Node version in the Dockerfile to `node:19-alpine` (from `node:16.17-alpine`) builds successfully. This is most likely due to a change in how Node handles IPv6 addresses introduced in Node 17. I imagine there is a reason Node is still pinned to this version, and I don't want to submit a pull request bumping it without knowing what else it would affect. However, the current version is causing problems for the increasing number of users without IPv4 addresses.

As a note, I am using nat64.net to provide IPv6 reachability for sites that do not have IPv6 set up yet. This service has been working for every other application on my network, though I am happy to try a different one if it would help with troubleshooting.
Screenshots
No response
Logs
No response
Platform
desktop
Device
Docker version 20.10.21, build baeda1f
Operating System
Debian GNU/Linux 11
Browser
N/A
Additional Context
No response