Tag Archives: Performance

Ubuntu 19.04 Radeon Linux Gaming Performance: Popular Desktops Benchmarked, Wayland vs. X.Org


Leading up to the Ubuntu 19.04 release, several premium supporters requested fresh results showing the X.Org vs. Wayland performance overhead for gaming, how GNOME Shell compares to KDE Plasma for current AMD Linux gaming, and related desktop graphics/gaming metrics. Here are those benchmarks, run on Ubuntu 19.04 “Disco Dingo” while testing GNOME Shell both with X.Org and Wayland, plus Xfce, MATE, Budgie, KDE Plasma, LXQt, and Openbox.

A Radeon RX Vega 64 graphics card with the stock Ubuntu 19.04 components was used for this desktop graphics/gaming benchmark comparison. Ubuntu 19.04 ships with the Linux 5.0 kernel, Mesa 19.0.2, and X.Org Server 1.20.4 as the most prominent components for this comparison. GNOME Shell 3.32.0, Xfce 4.12, MATE 1.20.4, KDE Plasma 5.15.4, Budgie, LXQt 0.14.1, and Openbox 3.6.1 are the desktop versions tested. KDE Plasma with Wayland wasn’t tested since on this system the session would not start successfully when selecting the Wayland version of Plasma from the log-in manager. The Radeon RX Vega 64 was driven by the Core i9 9900K common to many of our graphics tests, paired with an ASUS PRIME Z390-A motherboard, 16GB of RAM, a Samsung 970 EVO 256GB NVMe SSD, and a 4K display.

Via the Phoronix Test Suite a range of gaming and other desktop graphics benchmarks were carried out under these different Ubuntu 19.04 desktop options. Here are those results. Additional Ubuntu 19.04 performance tests will be coming up on Phoronix soon.
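For those wanting to run comparable tests on their own systems, the Phoronix Test Suite is driven from the command line. The commands below are a minimal sketch; the test profiles named (pts/unigine-superposition, pts/xonotic) are illustrative examples rather than the exact profiles used for this article.

    # Install the Phoronix Test Suite from the Ubuntu archive (also available from phoronix-test-suite.com)
    sudo apt install phoronix-test-suite

    # See which test profiles are available to install
    phoronix-test-suite list-available-tests

    # Run graphics/gaming benchmarks; tests are downloaded and installed on first run
    phoronix-test-suite benchmark pts/unigine-superposition pts/xonotic

    # Repeat the same run under each desktop session (GNOME on X.Org, GNOME on Wayland,
    # KDE Plasma, Xfce, ...) and compare the saved results afterwards
    phoronix-test-suite list-saved-results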


At Least 27% Of Gentoo’s Portage Can Be Easily LTO Optimized For Better Performance


OPERATING SYSTEMS --

GentooLTO is a configuration overlay for Gentoo that makes it easy to enable Link Time Optimization (LTO) and other compiler optimizations for squeezing better performance out of Gentoo packages. GentooLTO appears to be inspired in part by the likes of Clear Linux, which employs LTO and other compiler optimization techniques like AutoFDO to yield better performance than what is conventionally shipped by Linux distributions. The GentooLTO developers and users have wrapped up their survey looking at how practical this overlay configuration is across the massive Portage collection.

The initial GentooLTO survey has been running since last October and has collected data from more than 30 users. The survey found that of the 18,765 packages in Gentoo’s Portage as of writing, at least 5,146 of them are working with the GentooLTO configuration.

While the survey is user-driven and not a systematic test of every available package, the current numbers point to a minimum of 27% of Gentoo’s Portage working nicely with link-time optimizations without any workarounds, and the total number of working packages is likely considerably higher.
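For context, enabling LTO globally on Gentoo comes down to adding the relevant compiler and linker flags in /etc/portage/make.conf. The excerpt below is a generic, hand-rolled sketch of that idea rather than the GentooLTO overlay’s actual configuration, which layers per-package workarounds and additional flags on top of something like this.

    # /etc/portage/make.conf (illustrative excerpt)
    # -flto enables link-time optimization; -ffat-lto-objects also keeps regular
    # object code so packages that inspect object files still build.
    COMMON_FLAGS="-O2 -pipe -march=native -flto -ffat-lto-objects"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"
    LDFLAGS="${LDFLAGS} -flto"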

The survey did not look at the performance differences from LTO optimizations on these packages. Those interested in the results can find the survey data here. Those wanting to look more at the GentooLTO project itself can find it on GitHub.


A Year Later, Speculative Page Fault Code Revised For Possible Performance Benefits


LINUX KERNEL --

It’s been nearly one year since the previous patch series working on speculative page faults for the Linux kernel was sent out for review. Fortunately, IBM’s Laurent Dufour has once again updated these patches against the latest code and sent them out for the newest round of discussions.

The simple summary is that this set of 31 kernel patches can potentially improve concurrency for highly threaded processes. The improvement comes from handling user-space page faults without holding the mmap semaphore, in turn eliminating some waits within the page fault handler.

When using a “popular in memory multi-threaded database product”, IBM found performance with earlier revisions of these patches to be as much as 30% better in transactions per second. They are still testing these new “v12” patches but are hoping for a similar outcome.

More details via this patch message. Assuming the performance benefits pan out, hopefully it won’t be another year before seeing the next round of revisions or finding the code mainlined within the Linux kernel.


OpenSUSE’s Spectre Mitigation Approach Is One Of The Reasons For Its Slower Performance


SUSE --

OpenSUSE defaults to IBRS for its Spectre Variant Two mitigations rather than the Retpolines approach and that is one of the reasons for the distribution’s slower out-of-the-box performance compared to other Linux distributions.

A Phoronix reader pointed out this opensuse-factory mailing list thread citing a “huge single-core performance loss” on a Lenovo laptop when using openSUSE. There’s a roughly 21% loss in single-threaded performance attributed to the Spectre Variant Two mitigations, which itself isn’t surprising as we’ve shown time and time again the performance costs of the Spectre/Meltdown mitigations.

OpenSUSE’s kernel is using IBRS (Indirect Branch Restricted Speculation) with the latest Intel CPU microcode images while most Linux distributions are relying upon Retpolines (return trampolines). The IBRS mitigation has been known to incur a greater performance hit than Retpolines due to its more restricted speculation behavior when paired with the updated Intel CPU microcode.

Switching over to Retpolines for the workload in question restored the performance, per the mailing list discussion.

OpenSUSE users wanting to use that non-default approach can opt for it using the spectre_v2=retpoline,generic kernel command line parameter, which matches the behavior of most other Linux distributions’ kernels.
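As a quick sanity check, the running kernel reports which Spectre Variant Two mitigation is in effect through sysfs, and the boot parameter can be made persistent via GRUB. The steps below are a rough sketch for openSUSE; paths and the GRUB regeneration command vary slightly between distributions.

    # Show the Spectre Variant Two mitigation currently in effect
    cat /sys/devices/system/cpu/vulnerabilities/spectre_v2

    # To make the Retpoline choice persistent, append the parameter to
    # GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="... spectre_v2=retpoline,generic"
    # then regenerate the GRUB configuration (openSUSE path shown) and reboot
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg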

As for openSUSE changing its defaults, at least from the aforelinked mailing list discussion it doesn’t appear their kernel engineers have any interest in changing their Spectre mitigation default; instead the poor performance is being written off as Intel’s problem.

Some have also suggested the openSUSE installer pick up a toggle for letting users express their security vs. performance preferences, in order to provide sane, informed defaults, but so far we haven’t seen any action taken to make that happen. It would make sense though, considering some of openSUSE’s conservative defaults do have performance ramifications compared to most other Linux distributions, which we’ve shown in past benchmarks, albeit written off by openSUSE as “mostly crap.”

Previously a barrier to Retpolines usage was the need for Retpoline-capable compiler support, but that support has now been available for quite some time. There were also reported Retpoline issues with Skylake in the past, but those appear to have been resolved as well.


Troubleshooting Network Performance in Cloud Architectures


Troubleshooting within public or hybrid clouds can be a challenge when end users begin complaining of network and application performance problems. The loss of visibility of the underlying cloud network renders some traditional troubleshooting methods and tools ineffective. Thus, we must come up with alternative ways to regain that visibility. Let’s look at five tips on how to better troubleshoot application performance in public cloud or hybrid cloud environments.

Tip 1: Verify the application and all services are operational from end to end

The first step in the troubleshooting process should be to verify that the cloud provider is not having an issue on their end. Depending on whether your service uses a SaaS, PaaS or IaaS model, the verification process will change. For example, Salesforce’s SaaS platform has a status page where you can see if there are any incidents/outages or maintenance windows that may be impacting your users.

Also, don’t forget to check other dependent services that can impact access or performance to cloud services. Services such as DHCP and internal/external DNS are common dependencies that can cause problems — making it look like there is something wrong with the network. Depending on where the end user connects from in relation to the cloud application they are trying to access, the DHCP and DNS servers used will vary greatly. Verifying that end users are receiving proper IPs and can resolve domains properly can save a great deal of time and headaches.
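A few quick client-side checks cover most of the DHCP and DNS ground. The commands below are Linux examples with placeholder hostnames and resolver addresses; ipconfig /all and nslookup are the rough Windows equivalents.

    # Confirm the client received a sane DHCP lease (address, default gateway, DNS servers)
    ip addr show
    ip route show
    resolvectl status        # or: cat /etc/resolv.conf on hosts without systemd-resolved

    # Confirm the relevant names resolve, and against a specific resolver if needed
    dig login.salesforce.com +short
    dig app.internal.example.com @10.0.0.53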

Tip 2: Review recent network configuration changes

If a performance problem with a cloud app seemingly crops up out of nowhere, it’s likely a recent network change is to blame. On the corporate LAN, review any firewall, NAT or VLAN adds/changes to verify they didn’t inadvertently cause an outage for a portion of your users. The same types of network changes should also be verified within IaaS clouds.

QoS or other traffic shaping changes can also accidentally degrade performance between the corporate LAN and remote cloud services. Automated tools can be used to verify that applications are being properly marked — and those markings are being adhered to on a hop-by-hop basis between the end user and as far out to the cloud application or service as possible.
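One lightweight way to spot-check those markings is a packet capture filtered on the DSCP field. The tcpdump example below looks for EF-marked (DSCP 46) voice/video traffic; the interface name is a placeholder, and the same capture can be repeated at successive hops to see where a marking gets rewritten or stripped.

    # DSCP occupies the top six bits of the IP ToS byte (ip[1]); EF = 46, and 46 << 2 = 184
    sudo tcpdump -n -v -i eth0 '(ip[1] & 0xfc) == 184'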

Tip 3: Use traditional network monitoring and troubleshooting tools

Depending on the cloud architecture model you’re using, traditional network troubleshooting tools can be more or less effective when troubleshooting performance degradation. For instance, if you use IaaS such as AWS EC2 or Microsoft Azure, you have enough visibility to use most network troubleshooting and support tools such as ping, traceroute, and SNMP. You can even get NetFlow/IPFIX data streamed to a collector, or run packet captures in a limited fashion. However, when troubleshooting PaaS or SaaS cloud models, these tools become far less useful. Thus, you end up having to trust your service provider that everything is operating as it should on their end.
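As a concrete illustration against an IaaS-hosted instance, the commands below cover basic reachability, path, and device-counter checks. Hostnames and the SNMP community string are placeholders, and the SNMP query assumes the standard IF-MIB is available on the queried device.

    # Reachability and latency over a longer sample
    ping -c 50 app-vm.example.com

    # Path plus per-hop loss/latency; mtr combines traceroute and ping in one report
    traceroute app-vm.example.com
    mtr -rwc 100 app-vm.example.com

    # Interface names from an SNMP-enabled device along the path
    snmpwalk -v2c -c public edge-router.example.com IF-MIB::ifDescr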

Tip 4: Use built-in application diagnostics and assessment tools

Many enterprise applications have built-in or supplemental diagnostic tools that IT departments can use for troubleshooting purposes. These tools often provide detailed information that helps you determine whether performance is an application-related issue — or a problem with the network or infrastructure. For example, if you’re having issues with Microsoft Teams through Office 365, you can test and verify sufficient end-to-end network performance using their Skype for Business Network Assessment Tool. Although this tool is most commonly used to verify whether Teams is a viable option pre-deployment, it can also be used post-deployment for troubleshooting purposes.

Tip 5: Consider SD-WAN built-in analytics or pure-play network analytics tools

Network analytics tools and platforms are the latest way for administrators to troubleshoot network and application performance problems. Network analytics platforms collect streaming telemetry and network health information using several methods and protocols. All data is then combined and analyzed using artificial intelligence (AI). The results of the analysis help pinpoint areas on the corporate network or cloud where network performance problems are occurring.

If you have extended your SD-WAN architecture to the public cloud, you can leverage the myriad analytics components that are commonly included in these platforms. Alternatively, there are a growing number of pure-play vendors that sell multi-vendor network analytics tools that can be deployed across entire corporate LANs and into public clouds. While these two methods can be expensive and more complicated to deploy initially, they have been shown to speed up performance troubleshooting and root cause analysis processes dramatically.


