-
One of the problems with preparing such a document is that some or most of the steps will be the wrong thing to do in some cases, and people will frantically copy-paste every command in it to try to get their data back before talking to anyone, then complain that "zpool import --delete-all-data" deleted their data, no matter how big a "do not run this except in this one very specific case" warning you put around it. (I would also argue the problem is not necessarily that the metadata protection is inadequate, but that people keep merging code with the philosophy of "ship it as the only testing", unless you have specific examples in mind.)
-
It may not solve most of the problems, but it can prevent the current situation from worsening further and help people calm down.
-
Hello guys,
I have seen many cases of zpool import failures leading to data loss in the community and on the mailing lists. My friends and I have also run into similar scenarios. ZFS monitoring is also very important: I have seen setups where small errors crept in every day and eventually accumulated into an irreversible failure.
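On the monitoring point, a minimal cron-friendly health check can catch those small daily errors before they accumulate. This is only a sketch; the mail address is an assumption, and the supported event-driven approach is the ZFS Event Daemon (zed) shipped with OpenZFS:

```shell
#!/bin/sh
# Hypothetical health check: 'zpool status -x' prints
# "all pools are healthy" only when no pool has problems,
# so anything else is worth an alert.
status="$(zpool status -x)"
if [ "$status" != "all pools are healthy" ]; then
    # admin@example.com is a placeholder, not a real address.
    printf '%s\n' "$status" | mail -s "zpool alert on $(hostname)" admin@example.com
fi
```

Pair this with a regular `zpool scrub` schedule so latent errors are actually surfaced rather than discovered at import time.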
In 2024, it's shocking to see people still relying on 'zpool import -FX'. That command is incredibly dangerous and should only ever be a last resort.
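For anyone reading this later: there is a less destructive escalation path before reaching for -FX. A sketch, assuming a pool named "tank" (substitute your own pool name):

```shell
# 1. Dry-run rewind: -n reports whether discarding the most recent
#    transactions would make the pool importable, without touching disk.
zpool import -F -n tank

# 2. Read-only rewind import: nothing is written back, so the on-disk
#    state is preserved while you copy data off.
zpool import -o readonly=on -F tank

# 3. Extreme rewind, only as a true last resort, and ideally against a
#    block-level copy of the devices rather than the originals.
# zpool import -FX tank
```

The read-only import in step 2 is the key trick: it lets you evacuate data with zfs send or plain copies before attempting anything irreversible.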
People use ZFS precisely because it scales to large storage pools and is expected to handle massive amounts of data reliably.
Every time I encounter a zpool import failure, whether assisting others or recovering my own data, a chill runs down my spine.
From a different perspective, I believe ZFS's metadata protection is inadequate. In many cases the data itself is intact, but the metadata or checksums are corrupted, so even valid data ends up being treated as unreadable.
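For what it's worth, ZFS already stores extra copies of metadata (ditto blocks), and the degree of redundancy is tunable per dataset. A sketch, where "tank/data" is an assumed dataset name:

```shell
# Keep full metadata redundancy ("all" is the default; "most"
# trades some redundancy for write performance).
zfs set redundant_metadata=all tank/data

# Optionally store extra copies of user data as well, at the
# cost of the corresponding extra space.
zfs set copies=2 tank/data
```

This doesn't refute the concern, since extra copies on the same failing device only help against localized corruption, but it is worth checking these properties before concluding the protection itself is lacking.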
Would it be possible to create a community document that standardizes these recovery procedures?
Thanks.