Tag Archives: Benchmarks

Some Additional Chrome vs. Firefox Benchmarks With WebRender, 67 Beta / 68 Alpha


A few days ago I posted some Chrome vs. Firefox benchmarks using the latest Linux builds. Some readers suggested Firefox could be more competitive if WebRender were forced on and/or the latest nightly builds were used, so here are some complementary data sets looking at those combinations.

In addition to Firefox 66 stable and Chrome 73 stable, results were obtained with Firefox 67 Beta 4 and Firefox 68 Alpha 1, the latest development builds at the time of testing. Additional runs were done on each of those two development channels after forcing WebRender with the “MOZ_ACCELERATED=1 MOZ_WEBRENDER=1” environment variables.
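For readers wanting to replicate these runs, a minimal sketch of how the environment variables are applied; the firefox invocation in the comment is illustrative, and here the variables are simply passed to a child process to show the mechanism:

```shell
# The environment variables used in the article to force WebRender on.
# In an actual session the browser itself would be launched, e.g.:
#   MOZ_ACCELERATED=1 MOZ_WEBRENDER=1 firefox
# Here we just confirm the variables reach the child process:
env MOZ_ACCELERATED=1 MOZ_WEBRENDER=1 sh -c 'echo "MOZ_WEBRENDER=$MOZ_WEBRENDER"'
```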

Here are the benchmark results via the Phoronix Test Suite:

In the case of ARES-6, Firefox 67 Beta 4 was faster than Firefox 66 stable while Firefox 68 was slightly slower. Either way, Firefox still wasn’t competitive with Chrome in this benchmark.

In the old Octane browser benchmark, the newer releases came in a little bit slower than Firefox 66 stable.

WebXPRT is the lone test where Firefox beats out Google Chrome 73, though the newer Firefox releases offered no additional benefit there.

With Basemark, Firefox is still a great deal behind Chrome.

MotionMark, with its focus on graphics performance, is the benchmark where WebRender is stressed most and does pay off, albeit still not enough to match Google Chrome.

There wasn’t much difference out of the Speedometer web browser benchmark.

Lastly is a look at the geometric mean of the benchmarks carried out. Personally, as a devout Firefox user going back to the Firebird/Phoenix days, this is sad to see, albeit I am seeing similar results between Chrome and Firefox on other Linux desktop systems too. If any premium supporters have any other web browser benchmark requests, be sure to let me know.

GCC 9 Compiler Tuning Benchmarks On Intel Skylake AVX-512

Recently I carried out a number of GCC 9 compiler benchmarks on AMD EPYC looking at the performance benefits of “znver1” compiler tuning and varying optimization levels to see when this level of compiler tuning pays off. That article generated interest in seeing some fresh Intel Skylake-X / AVX-512 figures, so here are benchmarks of GCC 9 with various tuning options and their impact on the performance of the generated binaries.

This round of testing was done with an Intel Core i9 7980XE as the most powerful AVX-512 HEDT CPU I have available for testing. The Core i9 7980XE was running Ubuntu 18.10 with the Linux 4.18 kernel and I had manually built the GCC 9.0.1 2019-02-17 compiler snapshot (the most recent at the time of testing) in its release/optimized form.

The CFLAGS/CXXFLAGS used for these GCC 9 compiler tuning benchmarks were:

-O2 -march=skylake-avx512

-O3 -march=x86-64

-O3 -march=skylake

-O3 -march=skylake-avx512

-O3 -march=skylake-avx512 -flto

-Ofast -march=skylake-avx512

This offers a look across the standard optimization levels, comparing generic x86-64 vs. Skylake vs. Skylake-AVX512 tuning, the benefits of link-time optimization on this new compiler, and also being aggressive on performance at the cost of potentially unsafe math via the “-Ofast” level.
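As a sketch of how one of these flag sets would be applied in practice, a build typically just exports the flags before configuring and compiling; the configure/make step below is illustrative and not specific to any project in this article:

```shell
# Export one of the tested flag combinations before building.
export CFLAGS="-O3 -march=skylake-avx512 -flto"
export CXXFLAGS="$CFLAGS"
# ./configure && make    # illustrative; any autotools/CMake project is similar
echo "Building with: $CFLAGS"
```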

Seventy-one benchmarks were run at each of these optimization levels on the Intel Core i9 7980XE system. All of these compiler benchmarks were carried out in a fully-automated and reproducible manner using the open-source Phoronix Test Suite benchmarking software.

ASRock Rack EPYCD8-2T Makes For A Great Linux/BSD EPYC Workstation – 7-Way OS AMD 7351P Benchmarks

If you are looking to assemble an AMD EPYC workstation, a great ATX motherboard up for the task is the ASRock Rack EPYCD8-2T, which accommodates a single EPYC processor, eight SATA 3.0 ports (including SAS HD), dual M.2 PCIe slots, dual 10 Gigabit Ethernet ports, and four PCI Express 3.0 x16 slots, all within ATX’s 12 x 9.6-inch footprint. This motherboard has been running well not only with various Linux distributions but also DragonFlyBSD and FreeBSD.

I picked up the ASRock EPYCD8-2T several weeks back and it has been working out very well as an EPYC 1P board. It is especially appealing for a desktop/workstation-oriented EPYC build, but it can work just fine as a server board as well thanks to its common ASpeed AST2500 BMC controller. With the single SP3 socket are eight DDR4 memory slots to keep EPYC happy with its eight DDR4-2666 memory channels, compared to four on Threadripper. For plenty of connectivity this motherboard has four PCI Express 3.0 x16 slots as well as three PCI Express 3.0 x8 slots. The PCIe slots and ATX size make this board practical should you want a multi-GPU workstation for scientific workloads, which can also commonly leverage the eight memory channels of EPYC. For storage there are plenty of SATA 3.0 ports as well as two SAS HD headers and two OCuLink ports for U.2 SSDs.

On the networking side there are dual 10 Gigabit RJ45 connections via Intel X550 controllers plus a third RJ45 for the IPMI LAN port. It’s great having dual 10 Gigabit LAN and the rest of this feature set considering this ATX EPYC motherboard retails for just above $500 USD, which is not out of line with other single-socket EPYC motherboards retailing these days from just under $400 to $700 USD at major Internet retailers.

Rear I/O panel ports include serial, VGA for the ASpeed AST2500 controller, two USB 3.0 ports, and the three RJ45 jacks (dual 10 Gigabit, IPMI LAN). It would have been nice to see more than two USB 3.0 ports on the rear if you intend to use this board in a workstation-style setup, but two is certainly sufficient for servers, and there are always USB hubs or one of the many PCIe slots for an extra adapter.

ASRock Rack officially supports this motherboard for Windows Server 2012/2016 as well as RHEL 6.9, RHEL/CentOS 7, SUSE Linux Enterprise Server 11, and Ubuntu 16.04. Beyond those enterprise targets, the EPYCD8-2T also works with other Linux distributions, especially up-to-date releases of Fedora, Ubuntu, Arch, and others. These days any Linux distribution released in the past year or two works fine with AMD EPYC processors. I personally tested this ASRock EPYCD8-2T with Fedora Workstation 29, CentOS 7, Debian 9.8, Clear Linux 27910, and openSUSE Leap 15.0. The experience was pleasant and without any issues to report on the Linux side.

While Linux distributions work well with all the AMD EPYC tests we run at Phoronix, some of the servers/motherboards we have tested have run into various issues with the BSD operating systems. Fortunately, the EPYCD8-2T is also in good shape there: both DragonFlyBSD 5.4.1 and FreeBSD 12.0 booted up, installed, and subsequently ran without any problems on this motherboard. It’s great to see all of the major operating systems running nicely on this EPYC ATX board!

NVIDIA GeForce GTX 1660 Ti Linux Gaming Benchmarks Review

Last week NVIDIA unveiled the GeForce GTX 1660 Ti as their first Turing graphics card shipping without the RT and tensor cores, which allowed the company to introduce their first sub-$300 graphics card of this new generation. I bought an EVGA GeForce GTX 1660 Ti XC Black graphics card for delivering Linux OpenGL/Vulkan gaming benchmarks of this TU116 GPU and have the initial results to share today, compared against a total of 16 different NVIDIA GeForce / AMD Radeon graphics cards on the latest Linux graphics drivers.

The GeForce GTX 1660 Ti features 1536 CUDA cores and the GPU base clock frequency is 1500MHz with a 1770MHz boost clock frequency for the reference specifications. The GTX 1660 Ti features 6GB of GDDR6 video memory yielding 288 GB/s of video memory bandwidth.

The GeForce GTX 1660 Ti has a 120 Watt power rating and requires a single 8-pin PCIe power connector.

With no pre-launch access this time around, I ended up buying the EVGA GeForce GTX 1660 Ti XC Black for delivering Linux benchmarks of this new graphics card. This model could be found on launch day at its $279 USD MSRP with immediate availability, which is why I went with it; it also runs at NVIDIA’s reference clock speeds.

For being a reference-clocked graphics card and the board power being just 120 Watts, the cooler is massive… It’s a triple-slot graphics card! This caught me by surprise. But, hey, the card did end up running very efficiently during our several days of benchmarking thus far and was very quiet with the fan seldom ramping up. Thermal results later in this article.

This graphics card has a dual-link DVI output as well as DisplayPort and HDMI.

For those curious, the EVGA box does not mention Linux… It’s still hit or miss whether graphics card AIB partners mention Linux support. But the packaging does note “OpenGL 4/5” support… There is no OpenGL 5.0, at least not yet. If it was intended to read OpenGL 4.5, NVIDIA has already been supporting OpenGL 4.6 for a year and a half.

Early Intel i965 vs. Iris Gallium3D OpenGL Benchmarks On UHD Graphics 620 With Mesa 19.1

With yesterday’s somewhat of a surprise announcement that Intel is ready to mainline their experimental Iris Gallium3D driver as their “modern” Linux OpenGL driver with numerous design advantages over their long-standing “classic” i965 Mesa driver, here are some fresh benchmarks of that latest driver compared to the current state of their OpenGL driver in Mesa 19.1.

I’ll be working on more Intel Iris OpenGL driver benchmarks in the days ahead as yesterday’s merge request caught me a bit off-guard, but since then I kicked things off by checking out the Iris driver support using the common UHD Graphics 620 as found on many current generation notebooks/ultrabooks. This Intel OpenGL Linux driver comparison was done with a Dell XPS 9370 featuring an Intel Core i7 8550U Kabylake-R with UHD Graphica 620 that top out at 1.15GHz.

This Dell XPS laptop was running an Ubuntu 19.04 daily snapshot while upgraded to the Linux 5.0 Git kernel, with Mesa 19.1.0-devel built from Ken’s Iris development branch as of 20 February. This was the first time I tested the Iris Gallium3D driver since last December’s benchmarks originally evaluating its performance. Since then the Iris driver has filled in more missing pieces of OpenGL support and picked up some performance optimizations, but more work remains.

As outlined in many Phoronix articles now, the Iris driver is exclusively designed for Broadwell “Gen 8” graphics and newer. Those with Haswell graphics and older will not see Iris driver support, but for those users the i965 Mesa driver remains within the Mesa tree and will still be supported and used on those older generations of hardware. Using Iris also requires a sufficiently new kernel, namely Linux 4.16 or newer.
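As an aside not stated in the article (so treat this as an assumption): Mesa’s standard runtime mechanism for opting into an alternative driver is the MESA_LOADER_DRIVER_OVERRIDE environment variable, which is how Iris is typically selected while i965 remains the default:

```shell
# Assumed mechanism (not from the article): selecting Iris via Mesa's loader override.
# A real invocation would wrap an OpenGL program, e.g.:
#   MESA_LOADER_DRIVER_OVERRIDE=iris glxinfo | grep "OpenGL renderer"
# Here the variable is just shown reaching a child process:
env MESA_LOADER_DRIVER_OVERRIDE=iris sh -c 'echo "override=$MESA_LOADER_DRIVER_OVERRIDE"'
```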

Overall from this latest Iris open-source driver testing I did overnight, it’s in better shape than my testing from December. There still are some areas where the older i965 driver remains faster, but that’s to be expected since the Intel Open-Source Technology Center crew haven’t exhausted their optimization work yet; in fact they are just getting started in squeezing more performance out of this Gallium3D-based driver.

For today’s tests I ran various Linux OpenGL games and synthetic tests that worked fine for the UHD Graphics 620. Tests on other hardware coming up soon but the situation should get really interesting later this year with Icelake graphics having much more graphics horsepower followed by the first of Intel’s discrete GPU offerings in 2020.

This testing was done using the development Iris branch of Mesa, but the driver has already landed in today’s Mesa 19.1 Git, and more tests of that mainline code will be on the way.