fix(services/fs,hdfs): fix poll_close when retry #4141
Conversation
Thanks a lot for the fix! Would you like to add an edge test for fs? Like what we do in https://github.com/apache/opendal/tree/main/core/edge/file_write_on_full_disk, we can have a file_close_with_retry_on_full_disk, in which we test closing a file on a full disk. We expect to get an error instead of a panic.
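A minimal sketch of what such a test might look like, modeled on the existing file_write_on_full_disk edge test; the write-until-full loop and the exact writer API are assumptions based on the snippets discussed later in this thread:

```rust
use std::env;

use opendal::layers::RetryLayer;
use opendal::services::Fs;
use opendal::{Operator, Result};
use rand::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let mut builder = Fs::default();
    builder.root(&env::var("OPENDAL_FS_ROOT").expect("root must be set for this test"));
    let op = Operator::new(builder)?
        .layer(RetryLayer::new().with_max_times(3))
        .finish();

    let mut rng = thread_rng();
    let mut content = vec![0; 5 * 1024 * 1024];
    rng.fill_bytes(&mut content);

    let mut w = op.writer("test").await?;
    // Keep writing until the deliberately full disk rejects the data...
    while w.write(content.clone()).await.is_ok() {}
    // ...then close must surface an error instead of panicking.
    assert!(
        w.close().await.is_err(),
        "close on a full disk must return an error, not panic"
    );
    Ok(())
}
```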
Force-pushed from 0206988 to a33fb02
Force-pushed from 3b77df9 to d4689a0
Force-pushed from 3056960 to 76ecb8d
Hi, @hoslo, it's best to avoid force-pushing so we can track the changes made since the last commit. We will squash all the commits while merging.
I understand. Is there anything wrong with the test I added? I've never written one of these before.
We can add the env RUST_BACKTRACE: 1 before running the tests. Also, our test failed because close succeeded without an error, which is not expected.
let mut builder = Fs::default();
builder.root(&env::var("OPENDAL_FS_ROOT").expect("root must be set for this test"));
let op = Operator::new(builder)?
    .layer(RetryLayer::new().with_max_times(3))
Let's add logging here so that we know what happened. Also, set RUST_LOG to debug.
It looks like it should return an error, but it returns ok.
futures = "0.3" | ||
opendal = { path = "../../" } | ||
rand = "0.8" | ||
tokio = { version = "1", features = ["full"] } |
OK, let's add tracing-subscriber here to collect the logs:

[dependencies]
tracing-subscriber = "0.3"

use tracing_subscriber;
tracing_subscriber::fmt::init();
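For placement, a minimal sketch of where the init call would sit in the edge test's main (the surrounding body is assumed): the subscriber must be installed before any operator calls so that their logs are captured.

```rust
#[tokio::main]
async fn main() -> opendal::Result<()> {
    // Install the subscriber first so logs from every later
    // operator call in the test are captured.
    tracing_subscriber::fmt::init();

    // ... build the operator and run the close-on-full-disk check ...

    Ok(())
}
```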
builder.root(&env::var("OPENDAL_FS_ROOT").expect("root must be set for this test"));
let op = Operator::new(builder)?
    .layer(RetryLayer::new().with_max_times(3))
    .layer(LoggingLayer::default())
The order of layers is important. Please add the logging layer first, then add retry, as in the sketch below. Also, please change the log level to trace to get deeper logs.
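A sketch of the corrected ordering, assuming opendal applies layers so that the first one added sits closest to the backing service; with logging innermost, each repeated attempt made by the retry layer passes through it and shows up in the logs:

```rust
let op = Operator::new(builder)?
    // Logging first: it wraps the Fs backend directly, so every
    // individual attempt made by the retry layer is logged.
    .layer(LoggingLayer::default())
    .layer(RetryLayer::new().with_max_times(3))
    .finish();
```

Run the test with RUST_LOG=trace to see the per-attempt logs.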
Strange, shouldn't it return an error based on the logs?
Yes, we should encounter an error, but we don't. Something must be wrong.
Hi, @hoslo, I reported the issue to tokio instead: tokio-rs/tokio#6325
Hi, @hoslo, could you fix the HDFS issue first? We have a user facing the exact same problem. We can fix the fs part later.
I created a new PR for the hdfs close fix; please check it.
Closed as stale, conflicted, and superseded. If the fs case is yet to be resolved, we should open a new PR instead.
fixes #4058