Recently, driven by work requirements, I ran comprehensive performance tests, evaluations, and analyses of various file systems (ext3, ext4, reiserfs, reiser4, xfs, jfs, btrfs, nilfs2, and logfs) on SSDs. The results provide a reference for selecting an SSD file system in practice, together with performance optimization suggestions. The benchmarks used include postmark, randomio, bonnie++, iozone, and filebench, as well as dd, kernel compilation, and the creation and deletion of massive numbers of files and directories.
▶ SSD File System Selection #
EXT4 and Reiserfs perform well overall: EXT4 delivers impressive data throughput, while Reiserfs excels at IOPS (metadata operations).
Btrfs and nilfs2 perform slightly worse, but both write data in a log-structured fashion; Btrfs additionally has COW/WAFL characteristics and SSD-specific optimizations. This spreads wear evenly across the SSD and extends its lifespan.
For workloads dominated by frequent small-file operations, Reiserfs, ext4, or btrfs is recommended; for large-file workloads, ext4 or btrfs. If extending SSD lifespan is the priority, choose btrfs or nilfs2. For production systems, base the final choice on actual online test results.
▶ About the LogFS File System #
LogFS is also a log-structured file system, and unlike JFFS/YAFFS it can run directly on SSDs (block devices). However, the test results indicate that LogFS is still quite immature and unstable, and not yet up to the standard required for practical use: the kernel compilation, massive file/directory creation and deletion, and postmark benchmarks all failed, and the randomio, bonnie++, iozone, and filebench tests could not be completed because of SSD failures. It is therefore not recommended for real-world use at this stage.
▶ SSD Performance Characteristics #
An SSD's data addressing time is so small as to be negligible, so there is little difference between its sequential and random I/O performance. On SAS and SATA disks, by contrast, random I/O performance is markedly lower than sequential I/O.
An SSD's read performance exceeds its write performance, due to factors such as erase-before-write, erase-block boundary alignment, and wear leveling (see A Comprehensive Guide of SSD Wear Leveling).
The performance of directory creation and deletion on an SSD is similar to that on SAS/SATA disks, because it is governed by the VFS (Virtual File System) layer and by how the file system organizes and manipulates its metadata.
▶ Optimization of SSD File Systems #
1. Cache
If the SSD is equipped with a DRAM cache, please enable the cache.
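One way to enable the on-drive write cache is via hdparm; a minimal sketch, assuming the SSD appears as `/dev/sda` (the device name is a placeholder, and the command needs root):

```shell
# Enable the drive's write cache (device name is an assumption; adjust to your SSD)
hdparm -W1 /dev/sda
# Query the current write-cache setting to verify
hdparm -W /dev/sda
```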
2. Readahead
Enable the block driver's read-ahead, and set the read-ahead window to 256 sectors (refer to EXT3 file system optimization).
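The read-ahead setting above can be applied with blockdev; a sketch assuming the SSD is `/dev/sda` (placeholder device, root required):

```shell
# Set the read-ahead window to 256 sectors (256 * 512 B = 128 KB)
blockdev --setra 256 /dev/sda
# Confirm the current read-ahead value
blockdev --getra /dev/sda
```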
3. I/O scheduler
Because an SSD's addressing time is negligible, there is no benefit in merging or sorting I/O requests, so 'noop' is the most suitable scheduling algorithm.
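Switching the scheduler can be done per device through sysfs; a sketch assuming the SSD is `sda` (placeholder, root required, and the change does not persist across reboots):

```shell
# Select the noop scheduler for this device
echo noop > /sys/block/sda/queue/scheduler
# Check the result: the scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler
```

To make the choice persistent, the `elevator=noop` kernel boot parameter sets noop as the default scheduler system-wide.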
4. Journal
If the file system supports disabling the journal, disable it; otherwise, if 'data=writeback' is supported, specify it at mount time.
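For ext4 this can be done with tune2fs; a sketch assuming the file system lives on `/dev/sdb1` and mounts at `/mnt/ssd` (both placeholders, root required):

```shell
# Removing the journal requires the file system to be unmounted
umount /dev/sdb1
# Turn off the has_journal feature on this ext4 file system
tune2fs -O ^has_journal /dev/sdb1
# Alternatively, keep the journal but relax data ordering at mount time
mount -o data=writeback /dev/sdb1 /mnt/ssd
```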
5. File system parameters
Refer to the Ext3 file system optimization notes. In general, keep the 'defaults' settings; for special cases, consult the 'mkfs' parameters.
6. Mount parameters
ext3: defaults,async,noatime,nodiratime
ext4: defaults,async,noatime,nodiratime,data=writeback,barrier=0
xfs: defaults,async,noatime,nodiratime,barrier=0
reiser4: defaults,async,noatime,nodiratime
reiserfs: defaults,async,noatime,nodiratime,notail,data=writeback
jfs: defaults,async,noatime,nodiratime
btrfs: defaults,async,noatime,nodiratime,ssd
nilfs2: defaults,async,noatime,nodiratime
logfs: defaults,async,noatime,nodiratime,data=writeback,barrier=0
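The option strings above can be applied either in /etc/fstab or on a manual mount; a sketch for the ext4 case, with placeholder device and mount point:

```shell
# One-off mount for testing (device and mount point are assumptions)
mount -t ext4 -o defaults,async,noatime,nodiratime,data=writeback,barrier=0 /dev/sdb1 /mnt/ssd

# Equivalent persistent /etc/fstab entry:
# /dev/sdb1  /mnt/ssd  ext4  defaults,async,noatime,nodiratime,data=writeback,barrier=0  0  0
```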