
Rocket not responding well to siege #541

Closed · diogobaeder opened this issue Jan 15, 2018 · 8 comments
Labels: question (A question; converts to discussion)

@diogobaeder

Versions

rustup 1.9.0
rustc 1.25.0-nightly (3f92e8d89 2018-01-14)
SIEGE 4.0.4

Dependencies

rocket = "0.3.6"
rocket_codegen = "0.3.6"

OS

4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64 GNU/Linux

Description

The "hello world" Rocket application doesn't respond to requests made by the "siege" Linux program (a program used to run benchmarking tests); instead, I get an socket: unable to connect sock.c:249: Connection refused error. This benchmarking I'm doing is with a lot of different platforms and technologies, like NodeJS, several different Python frameworks, and the only one failing to respond is Rocket. However, if I request with curl or requests (the Python library), it works fine. So I think it might be an issue with the way the socket is attempted to get opened (not sure though).

How to replicate

$ siege -v -b -p -g http://localhost:8000 (provided that the app is running under localhost:8000)

Expected result

There should be a brief summary of the benchmarking session with 100% of the requests correctly responded to.

Actual result

[error] socket: unable to connect sock.c:249: Connection refused

Transactions:		           0 hits
Availability:		        0.00 %
Elapsed time:		        0.00 secs
Data transferred:	        0.00 MB
Response time:		        0.00 secs
Transaction rate:	        0.00 trans/sec
Throughput:		        0.00 MB/sec
Concurrency:		        0.00
Successful transactions:           0
Failed transactions:	           1
Longest transaction:	        0.00
Shortest transaction:	        0.00

Thanks!

@SergioBenitez (Member)

How are you running the Rocket application? Are you sure you're running it on port 8000? Note that Rocket defaults to port 80 when the production environment is enabled, which is what Rocket should be running under when benchmarked.
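
For reference, the production address and port can also be set in Rocket.toml instead of via environment variables; a sketch of the 0.3 config format (values illustrative):

    [production]
    address = "0.0.0.0"
    port = 8000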

@diogobaeder (Author)

I'm running it via the executable generated with cargo build --release and passing the port with ROCKET_PORT. If I request that very same URL via curl, like curl http://localhost:8000, it runs just fine. Same for Python's requests, which returns a successful response for requests.get('http://localhost:8000').

I tried with ROCKET_ENV=production, and it works fine. Then I dug deeper and found out that it works just fine when ROCKET_ADDRESS is set to either 0.0.0.0 or 127.0.0.1; the only address it doesn't work with is localhost. That's quite odd, because the name resolution is done by the operating system in this case.
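
To make the two cases concrete (the binary name here is hypothetical):

$ ROCKET_PORT=8000 ROCKET_ADDRESS=localhost ./target/release/hello    (siege: Connection refused)
$ ROCKET_PORT=8000 ROCKET_ADDRESS=127.0.0.1 ./target/release/hello    (siege: works)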

@marcusball (Contributor)

I think I've had the same issue. Networking is not my strong point, so I could be completely wrong, but I think what's happening is that when you set localhost as the listen address, Rust listens on IPv6 while the client (siege) may be connecting over IPv4.

I just tested my own Rocket app, and if I have it listening on localhost:8000, then requests from Firefox to http://localhost:8000 and http://[::1]:8000 work, while http://127.0.0.1:8000 fails to connect.

I just spent way too much time digging through Rocket, Hyper, and the standard library, but everything looks like it should properly return both IPv4 and IPv6 addresses when resolving localhost, so I'm pretty stumped about where the problem is; it's probably not Rocket, though.
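
A quick way to see what the resolver actually hands back, using only the standard library (a sketch):

    use std::net::ToSocketAddrs;

    fn main() {
        // "localhost" typically resolves to both ::1 and 127.0.0.1; the order
        // comes from the OS resolver (getaddrinfo, /etc/hosts), not from Rust.
        for addr in "localhost:8000".to_socket_addrs().expect("resolution failed") {
            println!("{}", addr);
        }
    }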

@SergioBenitez (Member)

It sounds like this is really #209 in disguise. Do you agree?

@marcusball (Contributor)

Yeah, that looks the same to me.

This was nagging me, so I dug further, and I think I found the root cause: when binding to localhost, the ToSocketAddrs trait used by TcpListener::bind does properly return both 127.0.0.1 and ::1; however, each_addr appears to bind only to the first address that works.
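
In other words, the effective logic of std's internal each_addr helper is roughly this (a sketch, not the actual standard library source):

    use std::io::{Error, ErrorKind, Result};
    use std::net::{TcpListener, ToSocketAddrs};

    // Try each resolved address in order and stop at the first successful
    // bind. If "localhost" resolves to ::1 first, no 127.0.0.1 listener is
    // ever created, so IPv4 clients get "Connection refused".
    fn bind_first<A: ToSocketAddrs>(addr: A) -> Result<TcpListener> {
        let mut last_err = None;
        for addr in addr.to_socket_addrs()? {
            match TcpListener::bind(addr) {
                Ok(listener) => return Ok(listener), // remaining addresses are ignored
                Err(e) => last_err = Some(e),
            }
        }
        Err(last_err.unwrap_or_else(|| {
            Error::new(ErrorKind::InvalidInput, "could not resolve to any addresses")
        }))
    }

    fn main() {
        let listener = bind_first("localhost:8000").expect("bind failed");
        println!("bound to {}", listener.local_addr().unwrap());
    }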

@SergioBenitez (Member)

@marcusball That's right. See my comments in #209.

Looks like everything is working as expected, even if it's not as intuitive as we'd like. See #209 for why we can't do any better at the moment. Closing this out for these reasons.

@SergioBenitez added the "no bug" and "question" labels and removed the "no bug" label on Jan 19, 2018
@diogobaeder (Author)

Sounds good to me. Thanks for the clarification, guys!
