Example Request - network threads - listeners and connection handlers #95
can you write an example that shows how one would shut down a listening thread for a network connection as well as all of the open spawned handler threads?
|
I'm unsure what you mean, are you talking about async tasks? Tokio does not spawn a thread per connection. |
let me clarify - (fairly new to Rust) when I spawn an asynchronous function, first for the listener and then for each connection, how do I use your crate to gracefully shut down all of them? Here's example code that spawns each, respectively.
|
I see what you mean now. Would you mind producing a more complete MRE? What is notify_producer_clone?
It might be worth looking into the difference between threads and tasks in tokio (https://docs.rs/tokio/latest/tokio/task/index.html); using those two words incorrectly causes a bunch of confusion. I assume all your 'threads' here are 'tasks'.
So what you do here is spawn one task per connection via tokio::spawn. The task then idles in handle_connection().await.
handle_connection() returns a future, which is then awaitable. But further, this future is also cancellable. There are several ways to cancel a future/task, the most common ones being tokio::select (https://tokio.rs/tokio/tutorial/select) or the one built into this crate, future.cancel_on_shutdown() (https://docs.rs/tokio-graceful-shutdown/0.15.2/tokio_graceful_shutdown/trait.FutureExt.html#tymethod.cancel_on_shutdown).
That said, it is not recommended to spawn one tokio_graceful_shutdown subsystem per connection. It has too much overhead.
Instead, I recommend using this crate for the general plumbing of your app, and then handing over to a more light-weight mechanism for the connections themselves, like tokio_util::task::TaskTracker.
A good example of how this can all work together to create a high-performance webserver can be seen in this crate's hyper example (https://github.com/Finomnis/tokio-graceful-shutdown/blob/main/examples/hyper.rs).
Let me know if you have any further questions regarding this topic.
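To make that concrete, here's a minimal, untested sketch of the cancel_on_shutdown() approach (handle_connection is a placeholder for your own logic, not something from this crate):
```
use tokio_graceful_shutdown::{errors::CancelledByShutdown, FutureExt, SubsystemHandle};

// Placeholder for your per-connection logic.
async fn handle_connection() { /* read/write the socket ... */ }

async fn connection_subsystem(subsys: SubsystemHandle) -> miette::Result<()> {
    // The wrapped future is dropped (and thereby cancelled) once shutdown is requested.
    match handle_connection().cancel_on_shutdown(&subsys).await {
        Ok(()) => tracing::info!("Connection closed by itself."),
        Err(CancelledByShutdown) => tracing::info!("Connection cancelled by shutdown."),
    }
    Ok(())
}
```
|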
that's perfect - yes, I was thinking I'd be able to avoid using `tokio::select` by using your crate. The issue that was causing confusion was how and where to use it in the async function. Let me read your `hyper` example, as I suspect a working example will clear up much of the confusion!
I've been reading your other examples as well - what left me confused was not seeing a `tokio::select` code block alongside a listener loop like the one I described in the code block I shared with you.
thank you!
p.s. I'm aware of the difference between threads and tasks - I was being sloppy in my terminology! my bad!
|
After reading the hyper example, let me know if that cleared it up for you or if I need to add further examples. |
So my question is: if I were to adapt this example to use TCP without HTTP, how would you modify line 59 of hyper.rs to handle a persistent TCP connection?
|
Line 59 is an empty line for me :) Are you sure that number is correct? |
Does this help? https://github.com/Finomnis/tokio-graceful-shutdown/pull/96/files
|
Perfect! That's very helpful!
|
Anything else, or should I close this issue once the example is merged? |
All good - merge! :)
|
Closed #95 as completed via #96.
|
I have another question - how would I modify the 21_tcp_echo_server example to use TLS?
I usually do it like this:
```
// Load the identity from a PKCS12 file
let mut file = File::open("robert.p12").unwrap();
let mut identity = vec![];
file.read_to_end(&mut identity).unwrap();
let identity = Identity::from_pkcs12(&identity, "atakatak").unwrap();
let listener = TcpListener::bind("0.0.0.0:8080").unwrap();
let acceptor = TlsAcceptor::builder(identity).build().unwrap();
let acceptor = Arc::new(acceptor);
```
And then get a new connection …
```
for stream in listener.incoming() {
    match stream {
        Ok(stream) => {
            let acceptor = acceptor.clone();
            let tx = tx.clone();
            let thread_clone = notify_producer_clone.clone();
            tokio::spawn(async move {
                let stream = acceptor.accept(stream).unwrap();
                println!("accepting connection...");
                handle_connection(stream, tx, thread_clone).await;
            });
        }
        Err(e) => eprintln!("Connection failed: {}", e),
    }
}
```
But I'm unclear on how to modify the `async fn connection_handler` to incorporate the TLS acceptor.
|
If you want to take this discussion off of this GitHub issue, feel free to email me directly at ***@***.***
|
Like this?
```
async fn connection_handler(
    subsys: SubsystemHandle,
    listener: TcpListener,
    connection_tracker: TaskTracker,
) -> Result<()> {
    // Load the identity from a PKCS12 file
    let mut file = File::open("robert.p12").unwrap();
    let mut identity = vec![];
    file.read_to_end(&mut identity).unwrap();
    let identity = Identity::from_pkcs12(&identity, "atakatak").unwrap();
    let acceptor = TlsAcceptor::builder(identity).build().unwrap();
    let acceptor = Arc::new(acceptor);

    loop {
        let connection = match listener.accept().cancel_on_shutdown(&subsys).await {
            Ok(connection) => connection,
            Err(CancelledByShutdown) => break,
        };
        let (tcp, addr) = connection
            .into_diagnostic()
            .context("Error while waiting for connection")?;

        // Spawn the handler on the connection tracker to give the parent
        // subsystem the chance to wait for the shutdown to finish
        connection_tracker.spawn({
            let cancellation_token = subsys.create_cancellation_token();
            let acceptor = acceptor.clone();
            async move {
                tracing::info!("Connected to {} ...", addr);
                let mut tcp = acceptor.accept(tcp).unwrap();
                let result = tokio::select! {
                    e = echo_connection(&mut tcp) => e,
                    _ = cancellation_token.cancelled() => {
                        tracing::info!("Shutting down {} ...", addr);
                        echo_connection_shutdown(&mut tcp).await
                    },
                };
                if let Err(err) = result {
                    tracing::warn!("Error serving connection: {:?}", err);
                } else {
                    tracing::info!("Connection to {} closed.", addr);
                }
            }
        });
    }
    Ok(())
}
```
Didn't test it, though. Just from the general idea, that's how it should work.
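One caveat, since this is untested: acceptor.accept(tcp).unwrap() will panic the connection task if the TLS handshake fails, and if acceptor is a tokio_native_tls::TlsAcceptor (an assumption on my side), the accept is itself async and needs an .await. A more defensive variant could look roughly like this:
```
// Assuming `acceptor` is a tokio_native_tls::TlsAcceptor;
// its accept() is async and returns a Result that shouldn't be unwrapped.
let mut tcp = match acceptor.accept(tcp).await {
    Ok(tls_stream) => tls_stream,
    Err(err) => {
        // A failed handshake should only end this one connection,
        // not panic the task.
        tracing::warn!("TLS handshake with {} failed: {:?}", addr, err);
        return;
    }
};
```
|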
That's not going to work - you've read the identity, but you've not used the identity to turn the listener into a TLS acceptor …
This is the straightforward, non-graceful path to doing so after reading in the identity:
```
let listener = TcpListener::bind("0.0.0.0:8080").unwrap();
let acceptor = TlsAcceptor::builder(identity).build().unwrap();
let acceptor = Arc::new(acceptor);
```
That's the part that confuses me.
|
Please respond here on GitHub and not via email; I've already modified my response to include that. Also, your code is really hard to follow without syntax highlighting like that. |
But it honestly feels like I'm starting to do your programming task for you ... do you have actual, real, conceptual problems with the code? What exactly confuses you? Are you stuck somewhere? Why, what exactly puzzles you? |
No worries - sorry to bother you. I'll work through it myself…
|
Feel absolutely free to ask if you hit any real road blocks, though. |
So I was able to modify the example to accommodate tokio TLS. My next question is not related to TLS, but rather how one structures a generic spawn for graceful shutdown, (I'm assuming) using tokio::select.
Here are some generic assumptions:
```
struct FunctionA {
    rx: mpsc::Receiver<CotEvent>,
}

impl FunctionA {
    async fn run(self, subsys: SubsystemHandle) -> Result<()> {
        tracing::info!("CoTEvent Subsystem started. name: {}", self.name);
        subsys.on_shutdown_requested().await;
        // … spawn a thread that does some stuff and reads msgs from self.rx
        tracing::info!("Shutting down CoTEvent Subsystem ...");
        sleep(Duration::from_millis(500)).await;
        tracing::info!("CoTEvent Subsystem stopped.");
        Ok(())
    }
}
```
It's the structuring of the spawned thread with the shutdown semantics that I'm unclear on …
Let me know if I'm unclear or incomplete on anything above…
|
I'm unsure what you mean... For one, I guess you are talking about tasks again, not threads?
Further, please provide more context for your code. Is FunctionA already a subsystem? If so, why do you need another task?
tokio::select does not spawn anything; it runs multiple async branches concurrently on the current task, cancelling all the others once one completes.
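For illustration, a tiny self-contained sketch of that behavior:
```
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    tokio::select! {
        _ = sleep(Duration::from_millis(100)) => {
            // This branch completes first; the other future is dropped (cancelled).
            println!("fast branch won");
        }
        _ = sleep(Duration::from_secs(10)) => {
            println!("slow branch won");
        }
    }
}
```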
|
Yes, task… The word 'thread' just comes out of my mouth as a reflex - I've been using threads/goroutines/pthreads for 30 years… :)
FunctionA is a subsystem that's solely running a task to listen on a channel in a loop.
My question is how I write the message-reading task (which is effectively in a loop) so that it follows the graceful shutdown semantics. Again, this is where I'm expecting to use tokio::select in that loop to determine when to continue reading messages, or when to shut down the message-reading task.
Moreover, I assume that I need to spawn an asynchronous task in the subsystem to read the messages. Perhaps that's not required and it can be accomplished in the subsystem's run function directly?
|
Not sure what to send you, because this is basically what the example I merged already contains... What about it is confusing you? Waiting for a queue item is a future that can be cancelled, as described earlier. Several examples demonstrate this, including the new one.
I kind of understand what you are trying to achieve, I just don't understand the problems you are having. Please provide some code that demonstrates your problem.
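To illustrate what I mean by a cancellable queue read, an untested sketch:
```
use tokio::sync::mpsc;
use tokio_graceful_shutdown::{errors::CancelledByShutdown, FutureExt, SubsystemHandle};

async fn read_messages(rx: &mut mpsc::Receiver<String>, subsys: &SubsystemHandle) {
    loop {
        match rx.recv().cancel_on_shutdown(subsys).await {
            Ok(Some(message)) => tracing::info!("received: {message}"),
            Ok(None) => break,                 // channel closed
            Err(CancelledByShutdown) => break, // shutdown requested
        }
    }
}
```
|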
Okay, I think I get it… but I am getting a timeout on shutdown.
Here's the code I wrote to demonstrate comms between two subsystems using tokio channels:
```
use miette::Result;
use tokio::sync::mpsc;
use tokio::time::{sleep, Duration};
use tokio_graceful_shutdown::{SubsystemBuilder, SubsystemHandle, Toplevel};

struct Subsystem1 {
    tx: mpsc::Sender<String>,
    rx: mpsc::Receiver<String>,
}

impl Subsystem1 {
    async fn run(mut self, subsys: SubsystemHandle) -> Result<()> {
        tracing::info!("Subsystem1 started.");
        if let Err(e) = self.tx.send("hi there from ss1".to_string()).await {
            tracing::info!(">>> Subsystem 1 failed to send to Subsystem 2: {e}");
        } else {
            tracing::info!(">>> Subsystem 1 sent to Subsystem 2");
        }
        let tx2 = self.tx.clone();
        loop {
            tokio::select! {
                _ = subsys.on_shutdown_requested() => {
                    break;
                }
                Some(message) = self.rx.recv() => {
                    tracing::info!("Subsystem 1 received: {}", message);
                    let response = format!("{}", message);
                    // Send a response back to subsystem 2
                    match tx2.send(response.clone()).await {
                        Ok(()) => {
                            tracing::info!("subsystem 1 sent response...");
                        },
                        Err(_e) => {
                            tracing::error!("subsystem 1 failed to send response...");
                        },
                    }
                    tracing::info!("Subsystem 1 sent: {}", response);
                    tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
                }
            }
        }
        subsys.on_shutdown_requested().await;
        tracing::info!("Shutting down Subsystem1 ...");
        sleep(Duration::from_millis(500)).await;
        tracing::info!("Subsystem1 stopped.");
        Ok(())
    }
}

struct Subsystem2 {
    tx: mpsc::Sender<String>,
    rx: mpsc::Receiver<String>,
}

impl Subsystem2 {
    async fn run(mut self, subsys: SubsystemHandle) -> Result<()> {
        tracing::info!("Subsystem2 started.");
        if let Err(e) = self.tx.send("hi there from ss2".to_string()).await {
            tracing::info!(">>> Subsystem 2 failed to send to Subsystem 1: {e}");
        } else {
            tracing::info!(">>> Subsystem 2 sent to Subsystem 1");
        }
        let tx2 = self.tx.clone();
        loop {
            tokio::select! {
                _ = subsys.on_shutdown_requested() => {
                    break;
                }
                Some(message) = self.rx.recv() => {
                    tracing::info!("Subsystem 2 received: {}", message);
                    let response = format!("{}", message);
                    // Send a response back to subsystem 1
                    match tx2.send(response.clone()).await {
                        Ok(()) => {
                            tracing::info!("subsystem 2 sent response...");
                        },
                        Err(_e) => {
                            tracing::error!("subsystem 2 failed to send response...");
                        },
                    }
                    tracing::info!("Subsystem 2 sent: {}", response);
                    tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
                }
            }
        }
        subsys.on_shutdown_requested().await;
        tracing::info!("Shutting down Subsystem2 ...");
        sleep(Duration::from_millis(500)).await;
        tracing::info!("Subsystem2 stopped.");
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Init logging
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();
    // Create channels for communication
    let (tx1, rx1) = mpsc::channel::<String>(100);
    let (tx2, rx2) = mpsc::channel::<String>(100);
    let subsys1 = Subsystem1 { tx: tx2, rx: rx1 };
    let subsys2 = Subsystem2 { tx: tx1, rx: rx2 };
    // Setup and execute subsystem tree
    Toplevel::new(|s| async move {
        s.start(SubsystemBuilder::new("Subsys1", |a| subsys1.run(a)));
        s.start(SubsystemBuilder::new("Subsys2", |a| subsys2.run(a)));
    })
    .catch_signals()
    .handle_shutdown_requests(Duration::from_millis(1000))
    .await
    .map_err(Into::into)
}
```
|
You're almost there.
Your select! does react to the shutdown request, but only while it is waiting for the next message; once a message branch is running (including its one-second sleep), the shutdown branch is not polled until the loop comes back around, which together with your 500 ms shutdown delay can exceed the 1000 ms shutdown timeout. If that's what you want, great; but I assume you want the shutdown to interrupt the message handling as well.
The easiest solution is to split the init/shutdown/cancellation code and the worker code into two separate functions, and cancel the entire worker code on shutdown, like so:
```
use miette::Result;
use tokio::sync::mpsc;
use tokio::time::{sleep, Duration};
use tokio_graceful_shutdown::{SubsystemBuilder, SubsystemHandle, Toplevel};

struct Subsystem1 {
    tx: mpsc::Sender<String>,
    rx: mpsc::Receiver<String>,
}

impl Subsystem1 {
    async fn communicate(&mut self) -> Result<()> {
        while let Some(message) = self.rx.recv().await {
            tracing::info!("Subsystem 1 received: {}", message);
            let response = format!("{}", message);
            // Send a response back to subsystem 2
            match self.tx.send(response.clone()).await {
                Ok(()) => {
                    tracing::info!("subsystem 1 sent response...");
                }
                Err(_e) => {
                    tracing::error!("subsystem 1 failed to send response...");
                }
            }
            tracing::info!("Subsystem 1 sent: {}", response);
            sleep(Duration::from_secs(1)).await;
        }
        Ok(())
    }

    async fn run(mut self, subsys: SubsystemHandle) -> Result<()> {
        tracing::info!("Subsystem1 started.");
        if let Err(e) = self.tx.send("hi there from ss1".to_string()).await {
            tracing::info!(">>> Subsystem 1 failed to send to Subsystem 2: {e}");
        } else {
            tracing::info!(">>> Subsystem 1 sent to Subsystem 2");
        }
        tokio::select! {
            _ = subsys.on_shutdown_requested() => {
                tracing::info!("Subsystem1 received a shutdown request ...");
            }
            err = self.communicate() => {
                err?;
                tracing::info!("Subsystem1 receive channel got closed ...");
            }
        }
        tracing::info!("Subsystem1 stopped.");
        Ok(())
    }
}

struct Subsystem2 {
    tx: mpsc::Sender<String>,
    rx: mpsc::Receiver<String>,
}

impl Subsystem2 {
    async fn communicate(&mut self) -> Result<()> {
        while let Some(message) = self.rx.recv().await {
            tracing::info!("Subsystem 2 received: {}", message);
            let response = format!("{}", message);
            // Send a response back to subsystem 1
            match self.tx.send(response.clone()).await {
                Ok(()) => {
                    tracing::info!("subsystem 2 sent response...");
                }
                Err(_e) => {
                    tracing::error!("subsystem 2 failed to send response...");
                }
            }
            tracing::info!("Subsystem 2 sent: {}", response);
            sleep(Duration::from_secs(1)).await;
        }
        Ok(())
    }

    async fn run(mut self, subsys: SubsystemHandle) -> Result<()> {
        tracing::info!("Subsystem2 started.");
        if let Err(e) = self.tx.send("hi there from ss2".to_string()).await {
            tracing::info!(">>> Subsystem 2 failed to send to Subsystem 1: {e}");
        } else {
            tracing::info!(">>> Subsystem 2 sent to Subsystem 1");
        }
        tokio::select! {
            _ = subsys.on_shutdown_requested() => {
                tracing::info!("Subsystem2 received a shutdown request ...");
            }
            err = self.communicate() => {
                err?;
                tracing::info!("Subsystem2 receive channel got closed ...");
            }
        }
        tracing::info!("Subsystem2 stopped.");
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // Init logging
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();
    // Create channels for communication
    let (tx1, rx1) = mpsc::channel::<String>(100);
    let (tx2, rx2) = mpsc::channel::<String>(100);
    let subsys1 = Subsystem1 { tx: tx2, rx: rx1 };
    let subsys2 = Subsystem2 { tx: tx1, rx: rx2 };
    // Setup and execute subsystem tree
    Toplevel::new(|s| async move {
        s.start(SubsystemBuilder::new("Subsys1", |a| subsys1.run(a)));
        s.start(SubsystemBuilder::new("Subsys2", |a| subsys2.run(a)));
    })
    .catch_signals()
    .handle_shutdown_requests(Duration::from_millis(1000))
    .await
    .map_err(Into::into)
}
```
In case you are wondering how my project is set up:
```
[package]
name = "rust-playground"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
miette = { version = "7.4.0", features = ["fancy"] }
tokio = { version = "1.42.0", features = ["full"] }
tokio-graceful-shutdown = "0.15.2"
tracing = "0.1.41"
tracing-subscriber = "0.3.19"
```
|
In case you only receive this via email again: I edited the previous message. |
Thank you. That makes complete sense - it was the separation of the run function and the worker that was throwing me off when laying out the select. |