Tag Archives: BFQ

Chromebooks Switching Over To The BFQ I/O Scheduler


GOOGLE --

On Chromebooks moving to the latest Chrome OS release, which switches over to a Linux 4.19 based kernel, BFQ has become the default I/O scheduler.

BFQ has been maturing nicely, and as of late there has been an uptick in interest around this I/O scheduler, with some also calling for it to be used by default in distributions. Google has decided BFQ is attractive enough to enable by default on Chromebooks to provide better responsiveness.

In our own tests, particularly with slower storage media, BFQ delivers good results on recent kernel releases. BFQ aims for low latency on interactive and soft real-time tasks while still being capable of achieving high throughput, among other benefits.
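For those wanting to check which I/O scheduler a drive is using, or to try out BFQ and its low-latency behavior themselves, the scheduler and its tunables are exposed through sysfs. Below is a minimal Python sketch; the device name (sda) and the availability of the bfq module and its low_latency knob are assumptions about the particular kernel build, not part of Google's Chrome OS change.

```python
#!/usr/bin/env python3
# Minimal sketch: inspect and (as root) switch the I/O scheduler via sysfs.
# Assumes a block device named "sda" and a kernel built with BFQ support.
from pathlib import Path

DEV = "sda"  # assumption: adjust to the block device you want to inspect
sched = Path(f"/sys/block/{DEV}/queue/scheduler")

# The active scheduler is shown in brackets, e.g. "mq-deadline kyber [bfq] none"
print("Schedulers:", sched.read_text().strip())

try:
    # Switching schedulers and touching BFQ's tunables requires root.
    sched.write_text("bfq")
    low_latency = Path(f"/sys/block/{DEV}/queue/iosched/low_latency")
    low_latency.write_text("1")  # BFQ's low-latency heuristics (on by default)
    print("Now using:", sched.read_text().strip())
except OSError as exc:
    print("Could not switch scheduler (need root / BFQ built in?):", exc)
```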

BFQ developer Paolo Valente has published a demo showing the responsiveness of BFQ on Chromebooks.




Linux 4.19 I/O Scheduler SSD Benchmarks With Kyber, BFQ, Deadline, CFQ


HARDWARE --

As it has been a while since last running Linux I/O scheduler benchmarks, here are some fresh results using the new Linux 4.19 stable kernel, with tests carried out on a 500GB Samsung 860 EVO SATA 3.0 SSD inside a 2P AMD EPYC Dell PowerEdge R7425 Linux server.

Given the recent uptick in I/O scheduler interest from Phoronix readers, with Endless OS switching over to the BFQ I/O scheduler while the CK patch set dropped this Budget Fair Queuing I/O scheduler, here are some fresh benchmarks of the different options.

Using the Linux 4.19 stable kernel with Ubuntu 18.10 on this 2P AMD EPYC server, CFQ (the default I/O scheduler on Ubuntu systems) was tested first, followed by deadline and noop. After switching over to the multi-queue block layer code (blk-mq), MQ-Deadline, Kyber (the Facebook-developed I/O scheduler), BFQ (including a low_latency run), and no I/O scheduler in MQ mode were tested.
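For readers wanting to reproduce a similar sweep, on a 4.19-era kernel the legacy versus multi-queue block paths are selected at boot (e.g. via the scsi_mod.use_blk_mq=1 parameter for SATA devices), while the scheduler itself can be flipped at runtime through sysfs. The following Python sketch cycles through the blk-mq schedulers named above; the device name and the boot parameter are assumptions about a comparable setup, not a description of the exact test configuration used here.

```python
#!/usr/bin/env python3
# Sketch: cycle through the blk-mq schedulers mentioned above (run as root).
# Assumes the SATA device sits on the multi-queue path, e.g. after booting a
# 4.19-era kernel with scsi_mod.use_blk_mq=1, so these scheduler names exist.
from pathlib import Path

DEV = "sda"  # assumption: the SSD under test
sched = Path(f"/sys/block/{DEV}/queue/scheduler")

# The scheduler file lists all options, with the active one in brackets.
available = sched.read_text().replace("[", "").replace("]", "").split()

for name in ("mq-deadline", "kyber", "bfq", "none"):
    if name not in available:
        print(f"{name}: not offered by this kernel, skipping")
        continue
    sched.write_text(name)
    print("Active scheduler:", sched.read_text().strip())
    # ...run the benchmark workload here (e.g. fio or the Phoronix Test Suite)
    # against the device and record the results for this scheduler...
```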

A variety of Linux benchmarks were carried out with these different I/O scheduler options on the current stable kernel.

Cutting to the chase, winning most often with this Samsung 860 SSD storage on the Dell PowerEdge AMD server was the deadline I/O scheduler, with 9 out of 26 wins. The other scheduler options each had four wins or less. It was interesting to note, though, that many performance regressions still remain along the MQ code paths for this SATA 3.0 testing. I'll be carrying out some NVMe tests soon; although in most cases having no I/O scheduler is generally quite effective there, we have seen some upsets with the Facebook-developed Kyber, for example. Those wishing to dig through more data can find all of the benchmark results via OpenBenchmarking.org.