
Commit written data automatically #179

Closed
domnulvlad opened this issue Feb 25, 2024 · 12 comments

Comments

@domnulvlad

I am very much aware that LittleFS does not actually write any data to flash until the file is closed, or until sync functions are called. This is obviously a concern for power losses, especially considering this file system is designed to be resilient to such events.

I need to write data to a file for logging, and the system's power may be interrupted at any point. My workaround is to set a timer and close and reopen the file every 250ms. Sometimes this procedure takes longer than usual, which introduces unwanted hiccups.

Have any advances been made towards fixing this issue? Is it not possible to have it automatically write data to disk once the RAM buffer is filled to, let's say, the size of a LittleFS block? I don't understand why the user must save data manually.

@BrianPugh
Member

Have you seen/tried my comment here: #144 (comment)

Similarly, have you tried setting CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE?

@domnulvlad
Author

Are you suggesting I run fflush and fsync every time I write anything to the file?

Kconfig has the following description for CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE: "With this feature fflush() will write data to the storage". So, after enabling it, I still have to call fflush after every little fprintf?

@BrianPugh
Member

Correct, see this comment detailing fflush/fsync.

Also, littlefs-project/littlefs#344 (comment) mentions padding prior to fsync-ing to ensure all data is committed.

I think that padding suggestion stemmed from a misunderstanding of fflush/fsync; the padding writes would implicitly trigger an fflush anyway. An fflush followed by an fsync is all that's required to clear the buffers and commit the writes to disk.

So, after enabling it, I still have to call fflush after every little fprintf?

Correct, since fflush flushes the buffers upstream of esp_littlefs. However, with CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE enabled, you will not have to additionally call fsync. Note that there will be a performance penalty for constantly flushing (though I'm not sure how practically significant it is).

@domnulvlad
Author

domnulvlad commented Feb 25, 2024

Then, regarding my initial question: is it not possible for a "commit to disk" to happen automatically and only when necessary (i.e. once enough bytes have been stored by fprintf/fwrite/etc. to fill a LittleFS block without padding it with garbage)?

I'm not completely sure, but from my understanding, calling fflush (and fsync) every time a few bytes of data are stored would cause a commit that needs padding to reach the size of a block, which wastes time. Wouldn't it be more efficient for the system to just commit data when the size of a block is surpassed? That would definitely be more elegant than calling those functions after every write, or on a timer (and to me it sounds like it would also increase resilience to power loss, without relying on the user to figure it out).

@BrianPugh
Member

data is automatically committed to disk as needed; you only need to fflush/fsync/fclose to ensure that the file is in a "good state" that can then be read from again after you lose power.

@domnulvlad
Author

What does "automatically" mean if you need to call extra functions after every write? I would expect it to just save the written data directly, or when enough data was written so that it's the most efficient with its block-based system.

Say my app opens a file and constantly prints data to it, until the end of time. Suddenly, the power is cut. The file now contains nothing, because all the prints stayed in the RAM buffer; the user never fsync'd or fclose'd the file. Is this normal behavior? Is it normal for the user to have to call those functions manually to ensure their data is actually saved?

@BrianPugh
Member

I believe we are miscommunicating around two different things:

  1. LittleFS (and, by extension, esp_littlefs) automatically performs erases/writes (distinct from a flush!) as necessary to flush caches in memory to make room for more data in the memory buffers.

  2. Explicit fsync (which calls the underlying lfs_file_sync). This flushes the littlefs buffers (independent of, and downstream from, the higher-level filesystem buffers in the stack) to disk and updates the file's metadata on disk. This metadata essentially encodes information like "how long the file is" and "where all the chunks are on disk".

Expanding on (2), the point of LittleFS is that the files and filesystem are never in an inconsistent state, i.e. the filesystem won't become corrupt from abrupt power loss. The explicit fsync call tells littlefs "I want to commit this file state, to be reverted to if power is lost." And of course, you should call fflush before fsync so all the data you think has been written gets flushed down to the LittleFS layer.

@domnulvlad
Author

Alright. So if I'm using fprintf to write data, from here I understand I have to call fflush to transfer the C layer buffers to the LittleFS buffers, to make LittleFS "know" about the data I just wrote.

Then, if CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE is disabled, if I don't call fsync, then the data is still written to disk, but the metadata isn't updated, so after a power cycle the system won't know about the file?

@domnulvlad
Author

domnulvlad commented Feb 26, 2024

Ok, I did some testing myself.

Just fprintf-ing in a loop takes 0ms to execute and, as expected, after reboot the file doesn't contain anything. After many prints, "lfs.c:689:error: No more free space 0x172" appears, which I assume means some buffer is full because it didn't get committed to flash.

Adding an fflush after every fprintf (after enabling CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE) makes the first few writes take 1-5ms, and then the time grows to a semi-steady 50-90ms, with the occasional "hiccup" of 150ms. Of course, after rebooting, the file contains data... But these times are completely unacceptable, especially when the rest of the program has to do other tasks in the order of milliseconds. In this case, I would say my method of periodically closing/flushing the file based on a timer is less time-intensive. Is there really no fix for this?

Edit: with CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE enabled, fprintf by itself (no flushes or syncs) still takes 0ms, but the file actually contains the written data after cutting the power. Please confirm this is normal behavior and my ESP32 hasn't gone insane; I hope this really fixed the issue.

@BrianPugh
Member

Ok, so this is more or less behaving as expected. Upstream LittleFS has performance issues when it comes to appending to a file (or similar). I'd recommend bringing this up in the official repo. This repo's focus is only on the glue between esp-idf's virtual filesystem and LittleFS.

@domnulvlad
Author

With the default configuration but CONFIG_LITTLEFS_FLUSH_FILE_EVERY_WRITE enabled, the data is committed automatically to flash every 4096 bytes.

Every write takes 0ms, except for the one that has to commit, which takes ~80ms. I can't really complain, since the original prospect was 80ms EVERY write :)

@BrianPugh
Member

glad you got it working!
