Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS


A number of Phoronix readers have been asking about fresh file-system comparisons on recent kernels. Without the time right now for the usual kernel version vs. file-system comparison, here are some fresh benchmarks of Btrfs, EXT4, F2FS, and XFS on a speedy WD_BLACK SN850 NVMe solid-state drive.

These quick benchmarks are intended as a reference point for those wondering how the popular mainline choices of Btrfs, EXT4, F2FS, and XFS compare these days on the latest Linux kernel.

All four mainline file-systems were tested off Linux 5.14 Git in their default/out-of-the-box configuration with the default mount options for each. All tests were conducted on a WD_BLACK SN850 NVMe SSD.
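
For anyone replicating this setup, the mount options the kernel actually applied can be double-checked from /proc/mounts. Below is a minimal Python sketch for doing so; the mount point path is a hypothetical placeholder.

```python
# Minimal sketch: confirm the kernel-reported (default) mount options for
# a mounted file-system via /proc/mounts. The mount point is a placeholder.
MOUNT_POINT = "/mnt/test"  # hypothetical benchmark mount point

def effective_mount_options(mount_point: str) -> str:
    """Return 'fstype: options' as the kernel reports for mount_point."""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, where, fstype, options, *_ = line.split()
            if where == mount_point:
                return f"{fstype}: {options}"
    raise ValueError(f"{mount_point} is not mounted")

if __name__ == "__main__":
    print(effective_mount_options(MOUNT_POINT))
```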

Btrfs with its copy-on-write (CoW) design tends to be slower than the others in the database tests, but these days when running multiple SQLite tests concurrently it is faring much better than in the past.
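
As an aside for those running databases on Btrfs, the usual workaround for CoW overhead is flagging the database directory NOCOW with chattr +C, which only affects files created after the flag is set. A minimal sketch with a hypothetical path:

```python
# Sketch: flag a directory NOCOW on Btrfs so database files created inside
# it skip copy-on-write. DB_DIR is a hypothetical path; the +C attribute
# only applies to files created after the flag is set.
import subprocess

DB_DIR = "/mnt/btrfs/sqlite-data"  # hypothetical database directory

subprocess.run(["chattr", "+C", DB_DIR], check=True)
# lsattr -d shows the directory's own attributes; a 'C' indicates NOCOW.
print(subprocess.run(["lsattr", "-d", DB_DIR],
                     capture_output=True, text=True, check=True).stdout)
```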

F2FS still shows much promise in some areas.

See more of these Linux 5.14 file-system benchmarks via this OpenBenchmarking.org result file.


Optane SSD RAID Performance With ZFS On Linux, EXT4, XFS, Btrfs, F2FS


This round of benchmarking fun consisted of packing two Intel Optane 900p high-performance NVMe solid-state drives into a system for fresh RAID Linux benchmarks atop the in-development Linux 5.2 kernel, plus a fresh look at ZFS On Linux 0.8.1 performance.

Two Intel Optane 900p 280GB SSDPED1D280GA PCIe SSDs were the focus of this round of Linux file-system benchmarking. EXT4, XFS, Btrfs, and F2FS were tested both on a single Optane SSD and then in RAID0 and RAID1 with two of these high-performance drives. Additionally, ZFS On Linux 0.8.1 was tested on this system both with a single drive and in RAIDZ. For putting the Optane SSD performance in perspective, a standalone result is also provided for a Samsung 970 EVO 500GB NVMe SSD with EXT4. In case you missed our earlier Optane 900P benchmarks on Linux from 2017, see them here for this still very competitive SSD. While there are now the 905P SSDs, the 900P models remain available and cheaper, hence going with those when picking up two of them for this round of Linux RAID testing.

All of the file-systems were tested using the Linux 5.2 Git kernel and running with their stock/default mount options. The EXT4/XFS/F2FS RAID arrays were assembled with Linux MD RAID, while Btrfs and ZFS used their native RAID capabilities.
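
For context on how such arrays are typically assembled, here is a minimal Python sketch of the three approaches. The article did not publish its exact commands, so the device names and specific invocations below are illustrative assumptions; each numbered approach is an alternative use of the same disk pair.

```python
# Hypothetical sketch of the three RAID approaches used in this comparison.
# Device names are placeholders; run as root against scratch disks only.
import subprocess

DISKS = ["/dev/nvme0n1", "/dev/nvme1n1"]  # placeholder Optane devices

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Linux MD RAID0 with a conventional file-system (EXT4/XFS/F2FS) on top:
run(["mdadm", "--create", "/dev/md0", "--level=0",
     "--raid-devices=2", *DISKS])
run(["mkfs.ext4", "/dev/md0"])

# 2) Btrfs native RAID: stripe data and metadata across both drives:
run(["mkfs.btrfs", "-f", "-d", "raid0", "-m", "raid0", *DISKS])

# 3) ZFS pool spanning both drives (add 'mirror' for a RAID1-style pool):
run(["zpool", "create", "optane", *DISKS])
```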

These two Intel Optane 900p 280GB SSDs were installed in the AMD Ryzen Threadripper 2990WX test system built around the ASUS ROG ZENITH EXTREME motherboard with 4 x 8GB of DDR4-3200 Corsair memory and a Radeon RX Vega 64, running Ubuntu 19.04 with a manual upgrade to Linux 5.2 Git.

All of these Linux storage benchmarks were carried out using the open-source Phoronix Test Suite benchmarking software. For those curious, some fresh Bcachefs benchmarks are also coming next week.
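
For those wanting to run similar tests themselves, the Phoronix Test Suite can be driven non-interactively; a minimal sketch, with pts/fio used as an assumed example test profile:

```python
# Sketch: driving the Phoronix Test Suite from Python. The profile name
# pts/fio is an assumed example; substitute any installed test profile.
import subprocess

for step in (["phoronix-test-suite", "install", "pts/fio"],
             ["phoronix-test-suite", "benchmark", "pts/fio"]):
    subprocess.run(step, check=True)
```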


Linux 4.19 I/O Scheduler SSD Benchmarks With Kyber, BFQ, Deadline, CFQ


As it has been a while since last running Linux I/O scheduler benchmarks, here are some fresh results using the new Linux 4.19 stable kernel, with tests carried out on a 500GB Samsung 860 EVO SATA 3.0 SSD within a 2P EPYC Dell PowerEdge R7425 Linux server.

Given the recent uptick in I/O scheduler interest from Phoronix readers, with Endless OS switching over to the BFQ I/O scheduler while the CK patch set dropped this Budget Fair Queueing scheduler, here are some fresh benchmarks of the different options.

Using the Linux 4.19 stable kernel with Ubuntu 18.10 on this 2P AMD EPYC server, CFQ, the default I/O scheduler on Ubuntu systems, was tested first, followed by deadline and noop. After switching over to the multi-queue block layer code (blk-mq), MQ-Deadline, Kyber (the Facebook-developed I/O scheduler), BFQ (including a low_latency run), and the no-I/O-scheduler "none" mode were tested.
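
For reference, switching schedulers comes down to writing the desired name into the block device's sysfs queue; on kernels of that era, SATA devices were typically moved between the legacy and multi-queue scheduler sets by booting with the scsi_mod.use_blk_mq kernel parameter. A minimal sketch, with the device name as a placeholder:

```python
# Sketch: query and switch the I/O scheduler for a block device via sysfs.
# DEVICE is a placeholder; writing the file requires root privileges.
from pathlib import Path

DEVICE = "sda"  # hypothetical SATA SSD
sched = Path(f"/sys/block/{DEVICE}/queue/scheduler")

print("available/current:", sched.read_text().strip())
# Legacy path example:  noop deadline [cfq]
# blk-mq path example:  [mq-deadline] kyber bfq none
sched.write_text("bfq")  # must appear in the list above to be accepted
print("now:", sched.read_text().strip())
```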

A variety of Linux benchmarks were carried out with these different I/O scheduler options on the current stable kernel.

Cutting to the chase, the deadline I/O scheduler won most often with this Samsung 860 SSD storage on the Dell PowerEdge AMD server, taking 9 out of 26 wins; the other scheduler options each had four wins or fewer. It was interesting to note, though, that many performance regressions remain along the multi-queue code paths for this SATA 3.0 testing. I'll be carrying out some NVMe tests soon; while having no I/O scheduler is generally quite effective in most of those cases, we have seen some upsets with the Facebook-developed Kyber, for example. Those wishing to dig through more data can find all of the benchmark results via OpenBenchmarking.org.