Tag Archives: Benchmarks

Linux 5.0 File-System Benchmarks: Btrfs vs. EXT4 vs. F2FS vs. XFS

With all of the major file-systems seeing clean-up work during the Linux 4.21 merge window (now known as Linux 5.0), and with F2FS in particular seeing fixes as a result of it being picked up by Google for support on Pixel devices, I was curious to see how the current popular mainline file-system choices compare for performance. Btrfs, EXT4, F2FS, and XFS were tested on a SATA 3.0 solid-state drive, a USB SSD, and an NVMe SSD.

As of the Linux Git state from a few days ago following all of the file-system feature pull requests having been honored, I carried out some initial Linux 4.21/5.0 file-system tests using the three solid-state drive configurations with the four tested file-systems. A daily snapshot of Ubuntu 19.04 Disco Dingo was running on the Threadripper setup while using the Linux Git kernel from the mainline PPA. Btrfs, EXT4, F2FS, and XFS were tested in their out-of-the-box state / default mount options.

The SATA 3.0 SSD used was a 250GB Samsung 850 PRO, connected first via SATA and then through a SATA 3.0 to USB 3.0 adapter for the USB testing. For the NVMe SSD testing, an Intel Optane 900p 280GB was used. Via the Phoronix Test Suite, a wide range of Linux storage benchmarks were carried out for this initial Linux 5.0 file-system benchmarking.
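For those wanting to recreate the setup, each file-system was freshly formatted and then mounted with no extra options. A minimal sketch of the commands involved, with /dev/sdX1 and /mnt/fs-test as placeholder names (note the force flag differs per mkfs tool):

```shell
#!/bin/sh
# Sketch: print the format command for each tested file-system. /dev/sdX1 is
# a placeholder device; mounting with no -o flags yields the default mount
# options, which is how each file-system was benchmarked here.
mkfs_cmd() {
    case $1 in
        btrfs) echo "mkfs.btrfs -f $2" ;;
        ext4)  echo "mkfs.ext4 -F $2" ;;
        f2fs)  echo "mkfs.f2fs -f $2" ;;
        xfs)   echo "mkfs.xfs -f $2" ;;
    esac
}

for fs in btrfs ext4 f2fs xfs; do
    mkfs_cmd "$fs" /dev/sdX1
    echo "mount /dev/sdX1 /mnt/fs-test   # no -o => default mount options"
done
```

The actual device node and mount point will of course vary per system and per drive under test.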

Linux Gaming Benchmarks For The ASUS TURBO-RTX2070-8G

After having an EVGA GeForce RTX 2070 XC GAMING retail graphics card fail on me, I ended up buying an ASUS TURBO-RTX2070-8G. The benefit of this ASUS GeForce RTX 2070 graphics card is that it can at times be found for as low as $499 USD, in line with the cheapest RTX 2070 options and lower than many of the other RTX 2070 AIB models and certainly the RTX 2070 Founder’s Edition at $599 USD. Should you be considering the ASUS TURBO-RTX2070-8G, here are some benchmarks on Ubuntu Linux.

The ASUS TURBO-RTX2070-8G can be found for $499~529 USD, making it one of the lower-cost RTX 2070 options should you be looking for a new Linux gaming graphics card at around the $500 price point. While it’s $20~30 cheaper than the likes of the EVGA RTX 2070 XC GAMING, it comes with a lower boost clock speed. This ASUS card has a GPU boost clock of 1620MHz (or 1650MHz in its OC mode, which appears to be activated only by ASUS’ Windows software, unless manually overclocking on Linux) and a base clock of 1410MHz. The XC GAMING card meanwhile has a 1710MHz boost clock speed. As a reminder, the NVIDIA GeForce RTX 2070 reference spec is 1620MHz for the boost clock, the same as this ASUS card, while the Founder’s Edition spec is 1710MHz.
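For a quick sense of scale, the 1620MHz vs. 1710MHz boost difference works out to roughly a 5.6% clock deficit against the XC GAMING, which is trivial to verify:

```shell
# Boost-clock deficit of this ASUS card (1620MHz) relative to the EVGA
# XC GAMING (1710MHz), expressed as a percentage of the ASUS boost clock:
awk 'BEGIN { printf "%.1f\n", (1710 - 1620) * 100 / 1620 }'   # prints 5.6
```

Actual in-game clocks depend on GPU Boost behavior and thermals, so the real-world gap can differ from the paper specification.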

The rest of the ASUS TURBO-RTX2070-8G specifications match what’s expected of the RTX 2070 with 8GB of GDDR6 video memory, HDMI, DisplayPort, and USB-C (VirtualLink) outputs, etc.

This ASUS RTX 2070 graphics card requires 6-pin and 8-pin PCI Express power connections. The TURBO-RTX2070-8G features a blower-style cooler.

Linux 4.14 vs. 4.20 Performance Benchmarks – The Kernel Speed Difference For 2018


As some additional end-of-year kernel benchmarking, here is a look at the Linux 4.14 versus 4.20 kernel benchmarks on the same system for seeing how the kernel performance changed over the course of 2018. Additionally, Linux 4.20 was also tested a second time when disabling the Spectre/Meltdown mitigations that added some performance overhead to the kernel this year.

On a Core i9 7980XE system, Linux 4.14.4 vs. 4.20 Git (with default Spectre/Meltdown mitigations and then again without) were benchmarked.
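For reference, disabling the mitigations on a 4.20-era kernel meant stacking the individual per-issue switches on the kernel command line, since the umbrella mitigations=off option only arrived in later kernels. A sketch of a GRUB configuration fragment, where this particular knob combination is one common approach rather than necessarily the exact set used in this testing:

```shell
# Hypothetical /etc/default/grub fragment: per-issue switches for
# Meltdown (pti), Spectre V2, Speculative Store Bypass, and L1TF.
GRUB_CMDLINE_LINUX_DEFAULT="pti=off spectre_v2=off spec_store_bypass_disable=off l1tf=off"
# Apply with update-grub and reboot; the resulting state can be checked via:
#   grep . /sys/devices/system/cpu/vulnerabilities/*
```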

Here’s a look at a portion of the many kernel benchmarks carried out:

I/O performance was lower on the Linux 4.20 kernel with these tests on EXT4 with an NVMe SSD. Even with the Spectre/Meltdown mitigations disabled, in most cases the performance was still lower than on Linux 4.14 from this point last year.

Disabling these security measures did help in some I/O heavy workloads.

There were performance improvements to note in some of the CPU heavy tests.

But also some performance regressions.

Dozens more benchmark results from this Linux kernel comparison on the Core i9 system can be found via this OpenBenchmarking.org result file.

DragonFlyBSD 5.4 & FreeBSD 12.0 Performance Benchmarks, Comparison Against Linux

Coincidentally, the DragonFlyBSD 5.4 and FreeBSD 12.0 releases lined up to be within a few days of each other, so for an interesting round of benchmarking here is a look at DragonFlyBSD 5.4 vs. 5.2.2 and FreeBSD 12.0 vs. 11.2 on the same hardware, as well as a comparison of those BSD operating system results against Ubuntu 18.04.1 LTS, Clear Linux, and CentOS 7 for some Linux baseline figures.

DragonFlyBSD 5.4 introduced NUMA optimizations, an upgrade from GCC 5 to GCC 8 as the base compiler, HAMMER2 file-system improvements, and many other enhancements built up over the past half-year.

FreeBSD 12.0 meanwhile upgrades its default LLVM Clang compiler, improves support for Threadripper/Ryzen 2 processors, deprecates many of its 10/100 network drivers, extends ext2fs with full read/write support for EXT4, and brings a lot of new hardware support along with other improvements. FreeBSD 12.0 should be officially announced within the next few days; for the purposes of this testing, 12.0-RC3 was used, which is effectively the final build aside from any last-minute fixes.

Testing of these BSDs and Linux distributions was done on the same system (obviously), consisting of an Intel Core i9 7980XE (18 cores / 36 threads at stock speeds), an ASUS PRIME X299-A motherboard, 4 x 4GB DDR4-3200 memory, a 240GB Corsair Force MP510 NVMe SSD, and a GeForce GTX TITAN X graphics card. The operating systems were kept “out of the box” as much as possible to represent the default experience users will see in their vendor-supplied state. Highlights of the operating systems tested:

DragonFlyBSD 5.2.2 – The previous stable release of DragonFly, which shipped with the GCC 5.4.1 compiler and was installed with HAMMER2.

DragonFlyBSD 5.4.0 – The newly-minted DragonFlyBSD update that switches over to GCC 8.1 and many other updates in the process, including more mature HAMMER2 support.

FreeBSD 11.2 – The stock 11.2-RELEASE setup with ZFS and using the default Clang 6 compiler.

FreeBSD 12.0 – The RC3 release was tested with its default Clang 6.0.1 compiler and ZFS file-system.

FreeBSD 12.0 + GCC8 – While the FreeBSD camp remains steadfast in using LLVM/Clang over GCC, for those wondering how the performance changes when switching over to GCC, a secondary run was done with GCC 8.2 installed.

CentOS 7.6 – The current community RHEL7 release with its Linux 3.10 based kernel, GCC 4.8.5 compiler, and XFS file-system.

Clear Linux 26670 – Intel’s open-source Linux distribution that often sets the gold standard for Linux performance thanks to its many optimizations, from patching of various packages to compiler tuning, yielding incredible performance potential without much work/time by its users. Clear Linux 26670 relies upon Linux 4.19 and GCC 8.2.1 with the EXT4 file-system.

Ubuntu 18.04.1 – The current Ubuntu LTS release with Linux 4.15, GCC 7.3, and EXT4 file-system.
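For those curious how the FreeBSD 12.0 + GCC8 configuration above can be approximated: GCC 8 comes from the ports/packages collection, and builds can be pointed at it via make.conf-style variables. A hedged sketch, with the package and binary names assumed from the lang/gcc8 port of that era:

```shell
# First (as root): pkg install gcc8
# Hypothetical /etc/make.conf fragment pointing builds at GCC 8 instead of
# the base Clang; binary names correspond to the lang/gcc8 package.
CC=gcc8
CXX=g++8
CPP=cpp8
```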

Coming up later this month will be a larger Linux vs. BSD server benchmark comparison done on dual-socket Intel Xeon and AMD EPYC hardware, which will include a more diverse range of distributions. The purpose of this comparison on the Core i9 is just to get an idea of the DragonFlyBSD/FreeBSD performance changes out of their new releases, with a few Linux distributions for reference.

All of these BSD and Linux distribution benchmarks were carried out in a fully-automated and reproducible manner using the open-source Phoronix Test Suite benchmarking software.
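Runs like these can be reproduced with a couple of commands once the Phoronix Test Suite is installed. A small sketch that composes a non-interactive (batch) run, where the test profile names are illustrative examples rather than the exact set used here:

```shell
#!/bin/sh
# Sketch: build the command line for a non-interactive batch run over a
# list of test profiles; the profile names below are examples only.
pts_cmd() {
    echo "phoronix-test-suite batch-benchmark $*"
}

# e.g. a storage test and a CPU test:
pts_cmd pts/sqlite pts/compress-7zip
```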

Linux 4.19 I/O Scheduler SSD Benchmarks With Kyber, BFQ, Deadline, CFQ


As it has been a while since last running some Linux I/O scheduler benchmarks, here are some fresh results while using the new Linux 4.19 stable kernel and tests carried out from a 500GB Samsung 860 EVO SATA 3.0 SSD within a 2P EPYC Dell PowerEdge R7425 Linux server.

Given the uptick in I/O scheduler interest from Phoronix readers recently with Endless OS switching over to the BFQ I/O scheduler while the CK patch set dropped this Budget Fair Queuing I/O scheduler, here are some fresh benchmarks of the different options.

Using the Linux 4.19 stable kernel with Ubuntu 18.10 on this 2P AMD EPYC server, CFQ was tested as the default I/O scheduler on Ubuntu systems, followed by deadline and noop. After switching over to the multi-queue block layer code (blk-mq), the MQ-Deadline, Kyber (the Facebook-developed I/O scheduler), BFQ (including a low_latency run), and no-I/O-scheduler MQ configurations were tested.
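Switching schedulers as described above happens through sysfs at runtime. A minimal sketch, where sda is a placeholder device, plus a small helper that pulls the active (bracketed) scheduler out of the sysfs file:

```shell
#!/bin/sh
# The active scheduler is the bracketed entry in the sysfs file, e.g.:
#   $ cat /sys/block/sda/queue/scheduler
#   noop deadline [cfq]
# Switching at runtime is a root-only write to the same file:
#   echo kyber > /sys/block/sda/queue/scheduler
# (On Linux 4.19, SATA devices need scsi_mod.use_blk_mq=1 on the kernel
# command line for the multi-queue schedulers like Kyber/BFQ to appear.)

# Helper: extract the active (bracketed) scheduler from the file contents.
active_sched() {
    printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^\[\(.*\)\]$/\1/p'
}

active_sched "noop deadline [cfq]"   # prints: cfq
```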

A variety of Linux benchmarks were carried out with these different I/O scheduler options on the current stable kernel.

Cutting to the chase, winning most often with this Samsung 860 SSD storage on the Dell PowerEdge AMD server was the deadline I/O scheduler with 9 out of 26 wins. The other scheduler options each had four wins or fewer. It was interesting to note, though, that there were still many performance regressions along the MQ code paths in this SATA 3.0 testing. I’ll be carrying out some NVMe tests soon; while in most cases having no I/O scheduler is generally quite effective there, we have seen some upsets with the Facebook-developed Kyber, for example. Those wishing to dig through more data can find all of the benchmark data via OpenBenchmarking.org.