Replies: 5 comments 5 replies
-
Hi @vasflam, I think there is some confusion about these files. RocksDB's application logs are written to files called `LOG`. It is definitely odd that the WAL is filling up at this rate and not getting flushed; usually this aspect of the system is managed by RocksDB and the defaults just work. I don't have an exact number, but I believe a 50 GB RocksDB instance should suffice for at least a few million keys, if not more. Of course, that requires WAL flushes and compaction to be happening to reduce the storage size. Can you share some more information?
-
Hi, the RocksDB configuration can be set in …
-
Thank you all! I learned more from these three messages than from a couple of days of searching on Google. Next week I will try all the suggested solutions and will share any interesting results.
-
After extensive study, I came to the following conclusion: the root of the problem is that the s3g service does not enforce a minimum/maximum multipart upload part size. A user, either unintentionally or maliciously, can set the part size to as little as 1 byte (as @errose28 pointed out, a block is created for each upload part and a RocksDB entry is added for it), which can fill up all the space allocated for metadata and disable the OM service. Here is example code in Java: OzoneOmSpam.java

The only solution I found is to disable WAL log archiving and set a maximum limit on the WAL size.
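To put numbers on the metadata amplification, here is a small back-of-the-envelope sketch (not the OzoneOmSpam.java attachment itself; the class name and sizes are illustrative):

```java
// Back-of-the-envelope: how many parts (and hence blocks / RocksDB
// entries) a 100 GiB upload produces at different part sizes.
public class PartMath {
    static long partsFor(long fileSizeBytes, long partSizeBytes) {
        // Ceiling division: the last part may be smaller than the rest.
        return (fileSizeBytes + partSizeBytes - 1) / partSizeBytes;
    }

    public static void main(String[] args) {
        long fileSize = 100L * 1024 * 1024 * 1024; // 100 GiB test file
        long s3MinPart = 5L * 1024 * 1024;         // S3's 5 MiB minimum

        // With the S3 minimum part size: ~20k parts, a harmless amount of metadata.
        System.out.println(partsFor(fileSize, s3MinPart)); // 20480

        // With 1-byte parts (nothing validated at upload time): one entry per byte.
        System.out.println(partsFor(fileSize, 1));         // 107374182400
    }
}
```

The difference is roughly seven orders of magnitude, which is consistent with a 50 GB metadata disk filling up in minutes.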
The section `[CFOptions "default"]` is mandatory. If it is omitted, the RocksDB configuration will not be applied at all, and no entries will be recorded in the log files (I spent some hours discovering this). The individual options are described in RocksDB's options.h.
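For reference, a minimal sketch of a RocksDB OPTIONS-format ini covering the two levers discussed here. The section names are RocksDB's standard ones; the specific values (a 128 MiB WAL cap, 10 MiB log files, 3 retained logs) are illustrative assumptions, not recommendations, and depending on how the file is loaded a `[Version]` section may also be required:

```ini
[DBOptions]
  # Cap total WAL size; RocksDB flushes memtables to stay under this.
  max_total_wal_size=134217728
  # Limit the application (info) LOG files that were filling the disk.
  max_log_file_size=10485760
  keep_log_file_num=3

# Mandatory even when empty; without it the file is not applied.
[CFOptions "default"]

[TableOptions/BlockBasedTable "default"]
```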
-
@vasflam The minimum part size for multipart uploads is 5 MB, except for the last part. Unfortunately, this is only validated when the multipart upload is completed.
-
Hello, I can't find a way to disable OzoneManager RocksDB logging. I have a test cluster in Kubernetes with one OM node and a 50 GB disk for metadata. When I upload a 100 GB test file into the cluster, within 10-15 minutes the OM node fills the entire disk with log files and hangs.
The folder using the space is om.db; RocksDB generates about 1 GB of logs per minute.
I also tried to set the options from RocksDBConfiguration.java in ozone-site.xml, but with no success.