UBIFS: Unexpected behavior (wear leveling)

I have been playing around with UBIFS some. One test I wrote was a stress test to see if the wear-leveling in the system works as expected. In a nutshell, the test:

  • Writes a file with random data to the file system located on the UBI volume
  • Verifies the file contents
  • Deletes the file

This test is run a certain number of times (around 200,000). The "stressed" UBI volume was mounted at a mount point located on another UBI volume. As expected, the maximum erase count for the "stressed" UBI volume went up. What I also noticed is that the maximum erase count for the UBI volume holding the mount point also went up. I would not have expected this.
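
Below is a minimal sketch of what such a stress loop might look like, just to make the access pattern concrete; the mount point, file size, and buffering details here are assumptions, not the actual test code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define FILE_PATH  "/mnt/stressed/testfile"  /* assumed mount point */
    #define FILE_SIZE  (64 * 1024)               /* assumed file size */
    #define ITERATIONS 200000

    int main(void)
    {
        char *wbuf = malloc(FILE_SIZE);
        char *rbuf = malloc(FILE_SIZE);
        if (!wbuf || !rbuf)
            return 1;

        for (long i = 0; i < ITERATIONS; i++) {
            /* Fill the buffer with pseudo-random data. */
            for (size_t j = 0; j < FILE_SIZE; j++)
                wbuf[j] = rand() & 0xff;

            /* Write the file to the volume under test and push it to flash. */
            FILE *f = fopen(FILE_PATH, "wb");
            if (!f || fwrite(wbuf, 1, FILE_SIZE, f) != FILE_SIZE) {
                perror("write");
                return 1;
            }
            fflush(f);
            fsync(fileno(f));
            fclose(f);

            /* Read the file back and verify its contents. */
            f = fopen(FILE_PATH, "rb");
            if (!f || fread(rbuf, 1, FILE_SIZE, f) != FILE_SIZE ||
                memcmp(wbuf, rbuf, FILE_SIZE) != 0) {
                fprintf(stderr, "verify failed at iteration %ld\n", i);
                return 1;
            }
            fclose(f);

            /* Delete the file before the next round. */
            remove(FILE_PATH);
        }
        free(wbuf);
        free(rbuf);
        return 0;
    }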

  1. Anyone know what might cause this? Something in UBI? Or some mechanism in the Linux kernel (like logging)?

  2. Has anyone seen this type of behavior with other file systems that implement wear-leveling?


First guess would be that access-time logging is turned on, or maybe modification-time updates if the tests are being done in the root of the "stressed" volume. Most likely access time: mount the outer filesystem (actually probably both) with the noatime option (mount -o noatime).
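
For illustration, the same remount can be done programmatically via mount(2); "/mnt/outer" is an assumed mount point, and the shell equivalent would be mount -o remount,noatime /mnt/outer:

    /* Remount an already-mounted filesystem with noatime.
     * "/mnt/outer" is an assumed mount point, not from the original post.
     * Note: MS_REMOUNT resets flags that are not passed, so in practice
     * you would OR in the volume's other existing mount flags as well. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("none", "/mnt/outer", NULL, MS_REMOUNT | MS_NOATIME, NULL) != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }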


Two processes in the system communicate via a Unix domain socket. This socket was created on the "mount" UBI volume (I know, not a good location). When I moved this file to a RAM-based location (e.g. /tmp), the writes to the mount UBI volume stopped. During the stress test the socket existed but was not being used. It would be good to know why the file system thinks it needs to write the socket file out on every sync.
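
For illustration, a sketch of binding the socket under /tmp (typically tmpfs) so its inode lives in RAM rather than on flash; the socket path here is made up:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        /* Hypothetical path on a RAM-backed filesystem. */
        strncpy(addr.sun_path, "/tmp/myapp.sock", sizeof(addr.sun_path) - 1);

        unlink(addr.sun_path);  /* remove any stale socket file */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("bind");
            close(fd);
            return 1;
        }
        /* ... listen()/accept() as usual ... */
        close(fd);
        return 0;
    }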
