refactor: fix flaky unit tests #3338
base: main
Conversation
```diff
 let mut servers = HashMap::new();
 servers.insert("1".to_string(), server_config);

 rumqttd::Config {
     id: 0,
     router: router_config,
     cluster: None,
-    console: Some(console_settings),
+    console: None,
```
I don't know why this was ever enabled, I haven't observed us using it anywhere
It appears to be a defensive diagnostic mechanism kept in place to debug the server when some rare flaky failure occurs, since re-running the test with it turned on may not reproduce the failure.
We can safely ignore these console settings.
```rust
// The messages might get processed out of order, we don't care about the ordering of the messages
requests.sort();
```
This probably isn't technically necessary as I imagine the requests are always sent in the order from the original c8y message, but I think the point still stands that the ordering is irrelevant.
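For illustration, a minimal sketch of the pattern, with made-up payloads rather than the test's real requests; sorting both the received and expected collections makes the assertion independent of delivery order:

```rust
// Hypothetical sketch: assert on message content without depending on delivery order.
fn assert_same_messages(mut requests: Vec<String>, mut expected: Vec<String>) {
    // Sorting both sides makes the comparison independent of arrival order.
    requests.sort();
    expected.sort();
    assert_eq!(requests, expected);
}

fn main() {
    // The broker may deliver these in either order; the test should pass regardless.
    let received = vec!["102,child-two".to_string(), "101,child-one".to_string()];
    let expected = vec!["101,child-one".to_string(), "102,child-two".to_string()];
    assert_same_messages(received, expected);
}
```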
The main fix, ensuring that all the messages to be published are properly acknowledged before disconnecting, looks fine. But the test failures look concerning. Just some queries/comments on the other bits as well.
```rust
Ok(Ok(())) => {
    // I don't know why it happened, but I have observed this once while testing
    // So just log the error and retry starting the broker on a new port
    eprintln!("MQTT-TEST ERROR: `broker.start()` should not terminate until after `spawn_broker` returns")
```
But when this happens and we retry the broker start on the next iteration, the client thread that was started (line 137) in the last iteration must also be aborted somehow, right? Because once that client thread enters the loop, I don't see how it can break out even on a connection error, because of the very specific `if let` check.
The client thread is not part of the loop; it's only started once we have a healthy broker.
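For illustration, a hedged sketch of the retry pattern under discussion; `pick_free_port` and `start_broker` are assumed stand-ins, not the PR's actual helpers. The client task would only be spawned after this function returns a healthy broker's port, so a failed iteration never leaves a client thread behind:

```rust
use std::time::Duration;

// Assumed helper: let the OS pick a free port by binding to port 0.
fn pick_free_port() -> u16 {
    std::net::TcpListener::bind("127.0.0.1:0")
        .unwrap()
        .local_addr()
        .unwrap()
        .port()
}

// Stub standing in for the blocking broker start (e.g. rumqttd's Broker::start).
fn start_broker(_port: u16) { /* blocks until the broker shuts down */ }

async fn spawn_broker_with_retry() -> u16 {
    loop {
        let port = pick_free_port();
        let broker = tokio::task::spawn_blocking(move || start_broker(port));
        // Give the broker a moment to bind its listener.
        tokio::time::sleep(Duration::from_millis(100)).await;
        if !broker.is_finished() {
            // Broker is still running: treat it as healthy. Only now would the
            // client task be spawned, so it never outlives a failed iteration.
            return port;
        }
        eprintln!("MQTT-TEST ERROR: broker terminated early, retrying on a new port");
    }
}
```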
Thank you for finding and fixing this bug on closing the connection a bit too soon. However, one has to do the same for QoS 2.
```rust
if let Poll::Ready(None) = futures::poll!(awaiting_acks.as_mut().peek()) {
    // If the channel is dropped, the sender loop has stopped
    // and we've received an ack for every message published
    mqtt_client.disconnect().await.unwrap();
```
I would rather ignore errors here.
```diff
-    mqtt_client.disconnect().await.unwrap();
+    let _ = mqtt_client.disconnect().await;
```
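On the QoS 2 point, a hedged sketch (not this crate's actual internals) of what ack tracking has to account for with rumqttc: a QoS 1 publish completes on PUBACK, but a QoS 2 publish only completes on the final PUBCOMP, so waiting for PUBACKs alone would disconnect too early for QoS 2 messages:

```rust
use rumqttc::{Event, EventLoop, Packet};

// Hedged sketch: wait until `pending` publishes have been fully acknowledged
// before allowing a disconnect. `pending` counts QoS 1 and QoS 2 publishes.
async fn wait_for_acks(eventloop: &mut EventLoop, mut pending: usize) {
    while pending > 0 {
        match eventloop.poll().await {
            // A QoS 1 publish is complete once the broker sends PUBACK.
            Ok(Event::Incoming(Packet::PubAck(_))) => pending -= 1,
            // A QoS 2 publish is only complete on the final PUBCOMP;
            // the intermediate PUBREC must not be counted as completion.
            Ok(Event::Incoming(Packet::PubComp(_))) => pending -= 1,
            Ok(_) => {}
            Err(e) => panic!("connection error while awaiting acks: {e}"),
        }
    }
}
```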
Co-authored-by: Didier Wenzek <[email protected]>
This should fix thin-edge#3021
Force-pushed from f4bc3fd to 51408a0.
Proposed changes

This fixes some flaky unit tests I observed locally. One was the `mqtt_channel` tests, which were failing due to a genuine bug where the connection was closed before all messages were published. The other was a mapper test, which implicitly assumed messages would be delivered in a fixed order.

A few other things that have changed in this PR:
- Added `--status-level fail` to `cargo nextest run` in `just test`. This stops `cargo nextest` listing every passing test name, which was previously obscuring the error output in my terminal.
- Removed `serial-test` from `mqtt_channel` in favour of using distinct topic/session names against a single broker in each test.
- Replaced `std::env::var` calls with `env!` and some relative file paths with absolute paths so the test processes can be called without running them under `cargo test`. This is important for using tools like `cargo stress` (see the sketch after this list).
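To illustrate the `env!` point with a minimal sketch (not code from this PR): `env!` captures `CARGO_MANIFEST_DIR` at compile time, while `std::env::var` reads it at run time and so fails when the test binary is invoked directly rather than through cargo:

```rust
// CARGO_MANIFEST_DIR is set by cargo at compile time, so env! bakes the path
// into the binary; the test still works when run outside of cargo.
fn manifest_dir_compile_time() -> &'static str {
    env!("CARGO_MANIFEST_DIR")
}

// std::env::var reads the environment at run time; this returns Err when the
// binary is launched directly (e.g. by a stress tool) without cargo's env vars.
fn manifest_dir_run_time() -> Result<String, std::env::VarError> {
    std::env::var("CARGO_MANIFEST_DIR")
}
```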
Types of changes
Paste Link to the issue
I don't believe either of these had associated issues.
Checklist
- `cargo fmt` as mentioned in CODING_GUIDELINES
- `cargo clippy` as mentioned in CODING_GUIDELINES

Further comments