
Launching multiple rockets on the same port does not report an error. #209

Closed
anderejd opened this issue Feb 24, 2017 · 14 comments
Labels: accepted (An accepted request or suggestion), deficiency (Something doesn't work as well as it could)
Milestone: 0.5.0

Comments

@anderejd

To begin with, thank you for this project! The documentation and introduction are fantastic.

I tried to start two processes of the hello world example and expected the second one to fail with an error like "Address already in use", but instead the second process reports success on the same hostname and port as the first process.

I looked through the code for rocket 0.2.0 and hyper 0.10.4 but did not find the problem. When trying std::net::TcpListener directly, I do get the expected error.
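
For reference, here is a minimal sketch of the kind of TcpListener check I mean (assuming an explicit IPv4 address rather than a hostname). Running it in two separate processes makes the second bind fail immediately:

use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Binding an explicit address: the second process to run this fails
    // with "Address already in use" instead of silently succeeding.
    let listener = TcpListener::bind("127.0.0.1:8000")?;
    println!("listening on {}", listener.local_addr()?);
    for _conn in listener.incoming() {
        // hold the port open; connections are accepted and dropped
    }
    Ok(())
}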

@anderejd
Author

Using rocket::custom does trigger the expected error, but rocket::ignite in the hello world example does not.

@SergioBenitez
Member

SergioBenitez commented Feb 24, 2017

The code path for listening for requests is identical for rocket::custom and rocket::ignite, so there shouldn't be a discrepancy. Are you sure you properly controlled the environment?

But I've noticed the general issue as well. Unfortunately, Rocket just asks Hyper to start listening, so it would seem that the issue, if any, is on Hyper's side.

@anderejd
Author

anderejd commented Feb 25, 2017

I've found some interesting details. In my draft PR for port auto-configuration (#210), I changed the logging to print the resulting address from the HttpListener. These are the results when launching three hello_world rockets in series:

First launch:
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 8
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://[::1]:8000...

Second launch:
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 8
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://127.0.0.1:8000...

Third launch attempt:
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 8
🛰 Mounting '/':
=> GET /
Error: Failed to start server.
thread 'main' panicked at 'Address already in use (os error 48)', /Users/abc/dev/Rocket/lib/src/rocket.rs:558
note: Run with RUST_BACKTRACE=1 for a backtrace.

It seems like some part of hyper, or perhaps std::net, is choosing to use IPv4 when the IPv6 port is in use. That seems like a bug, or at least very surprising behavior. :) I have not yet tried the same test with std::net::TcpListener (which hyper depends on).

@SergioBenitez
Member

Thanks for looking into this! This makes it pretty clear what's going on, actually.

localhost is a hostname typically configured for two different addresses: 127.0.0.1 and ::1. What appears to be happening is that something is trying to bind, in sequence, to the addresses the hostname resolves to: it tries the first, and if the bind succeeds it listens there; otherwise it tries the second, and so on.

Are you certain that TcpListener doesn't exhibit this behavior? Its source code certainly implies that this might be happening, and I can't see that Hyper does anything special here.
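
To illustrate the resolve-then-try-in-order behavior (a sketch, not Rocket's or Hyper's actual code): resolving localhost yields both addresses, and the bind keeps the first one that succeeds, so two processes can each "win" on a different address family.

use std::io;
use std::net::{TcpListener, ToSocketAddrs};

// Try each address the host:port string resolves to, in order, and keep the
// first successful bind -- roughly what std::net::TcpListener::bind does for
// a hostname like "localhost:8000".
fn bind_first_available(host_and_port: &str) -> io::Result<TcpListener> {
    let mut last_err = None;
    for addr in host_and_port.to_socket_addrs()? {
        match TcpListener::bind(addr) {
            Ok(listener) => return Ok(listener),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.unwrap_or_else(|| {
        io::Error::new(io::ErrorKind::InvalidInput, "hostname resolved to no addresses")
    }))
}

fn main() -> io::Result<()> {
    // The first process typically gets [::1]:8000, a second one 127.0.0.1:8000.
    let listener = bind_first_available("localhost:8000")?;
    println!("bound to {}", listener.local_addr()?);
    Ok(())
}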

@anderejd
Author

I just tested the same thing with TcpListener, and it behaves in the same way, which seems kind of expected when thinking about it. As you say, the hostname does not specify IPv4 or IPv6.

@SergioBenitez
Member

SergioBenitez commented Feb 25, 2017

localhost is just a hostname. You have a config file on your machine that dictates what it maps to; on Unix-like systems, this is usually /etc/hosts. That mapping does specify whether an address is IPv4 or IPv6. In this case, localhost maps to addresses of both types.

What I would expect bind to do is to bind only if both addresses are available, not just one.
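
For reference, a typical /etc/hosts carries one localhost entry per address family, something like:

127.0.0.1    localhost
::1          localhost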

@anderejd
Author

anderejd commented Feb 25, 2017

That mapping does specify whether an address is IPv4 or IPv6.

I had not thought about that. Mine does indeed specify both ::1 and 127.0.0.1.
So yes, binding to the same port on both IPv6 and IPv4 sounds reasonable, but I have not encountered any IPv6 issues before, so I'm not sure what the expected behavior is in this case. Maybe we should create an issue on https://github.com/rust-lang/rust/issues ?

@SergioBenitez
Member

@anderejd So it looks like the Rust folks are aware of this ambiguity. See https://github.com/alexcrichton/rfcs/blob/net2.1/text/0000-io-net-2.1.md#tosocketaddrs-and-multiple-addresses, for instance.

I'm not sure what Rocket should be doing here. We could certainly open sockets to all addresses resolved by a hostname and then create a new reader that calls select on them, but that seems like more work than it's worth. In my opinion, we should simply check whether all of them are available and only then bind to the first one. This way, the bind is deterministic. Alternatively, we can keep the current behavior and document it.
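
As a rough sketch of the "check all, then bind the first" idea (not actual Rocket code, and note that it is inherently racy between the probe and the real bind):

use std::io;
use std::net::{SocketAddr, TcpListener, ToSocketAddrs};

fn bind_deterministically(host_and_port: &str) -> io::Result<TcpListener> {
    let addrs: Vec<SocketAddr> = host_and_port.to_socket_addrs()?.collect();

    // Probe every resolved address; each temporary listener is dropped right away.
    for addr in &addrs {
        TcpListener::bind(addr)?;
    }

    // Every address was bindable, so deterministically bind the first one.
    let first = addrs.first().ok_or_else(|| {
        io::Error::new(io::ErrorKind::InvalidInput, "hostname resolved to no addresses")
    })?;
    TcpListener::bind(first)
}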

SergioBenitez added the "enhancement", "feedback wanted", and "question" labels on Mar 9, 2017
SergioBenitez added the "upstream" label and removed the "feedback wanted" label on Apr 14, 2017
@SergioBenitez
Member

I think what happens now is suboptimal, but I also don't think this is something Rocket should deal with directly. The standard library, the underlying HTTP library, or both should have clearer semantics. Let's make sure we make this clear by the time #17 is ready to be addressed. Closing this for now.

@awnumar

awnumar commented Oct 19, 2018

Has any more thought gone into the issues raised here and in #491? The standard way to support both IPv4 and IPv6 is to create an IPv6 socket and turn off the IPV6_V6ONLY flag; incoming IPv4 connections are then presented in the IPv4-mapped IPv6 format. Is this the current behavior in Rocket?
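
For illustration, a dual-stack bind along those lines would look roughly like this using the socket2 crate (my assumption for the example; I don't know whether Rocket or Hyper currently do this):

use socket2::{Domain, Protocol, Socket, Type};
use std::io;
use std::net::{SocketAddr, TcpListener};

fn bind_dual_stack(port: u16) -> io::Result<TcpListener> {
    // IPv6 socket with IPV6_V6ONLY cleared: IPv4 clients show up as
    // IPv4-mapped IPv6 addresses (::ffff:a.b.c.d).
    let socket = Socket::new(Domain::IPV6, Type::STREAM, Some(Protocol::TCP))?;
    socket.set_only_v6(false)?;
    let addr: SocketAddr = format!("[::]:{}", port).parse().expect("valid address");
    socket.bind(&addr.into())?;
    socket.listen(128)?;
    Ok(socket.into())
}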

@adambudziak

Hi, based on my (very recent) experience, I'd like to suggest adding a note about this issue in some fairly visible place in the docs. I just wasted about an hour debugging why my server wasn't accepting requests, searching for bugs in my code, only to realize that all the requests were actually going to a different instance of the server that was still running in a tmux session in the background.

It would never have crossed my mind that such a bug is even possible, and I believe Rocket is the first thing many people will blame if they stumble upon this error themselves.

@MattOates

My very first experience of Rust and Rocket was running into this issue... via using siege to benchmark, too (there's another ticket for this). This is a seriously weird and problematic behaviour that I've never run into in my history of developing for the web. It certainly doesn't give me a good impression of this platform for my own projects. At a minimum, Rocket should have some docs about this. I doubt I'm so special that I'm the only one to run into this straight away. Even if it's the underlying server dependencies causing the problem, the user error surfaces via Rocket, so you probably want to poke whoever is responsible some more.

@jebrosen
Collaborator

Coming back to this issue, I think it would make sense to change the default address for development to 127.0.0.1. The staging and production environments already listen on IPv4 only (0.0.0.0), so that would be a bit more consistent. It would also solve the problem of binding to an ambiguous address like localhost.
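
Concretely, the default addresses would then look something like this in a Rocket.toml (a sketch of the proposal, not the final defaults):

[development]
address = "127.0.0.1"

[staging]
address = "0.0.0.0"

[production]
address = "0.0.0.0"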

jebrosen reopened this on Dec 21, 2019
SergioBenitez added the "accepted" label and removed the "enhancement", "question", and "upstream" labels on Jul 25, 2020
SergioBenitez added the "deficiency" label on Jul 25, 2020
SergioBenitez added this to the 0.5.0 milestone on Jul 25, 2020
@SergioBenitez
Member

@jebrosen That seems like a fair compromise! Slating for 0.5.0. I'll tackle this along with #852.
