
Linux 4.14 vs. 4.20 Performance Benchmarks – The Kernel Speed Difference For 2018



As some additional end-of-year kernel benchmarking, here is a look at Linux 4.14 versus Linux 4.20 benchmarks on the same system, to see how kernel performance changed over the course of 2018. Linux 4.20 was also tested a second time with the Spectre/Meltdown mitigations disabled, since those mitigations added some performance overhead to the kernel this year.

On a Core i9 7980XE system, Linux 4.14.4 and Linux 4.20 Git were benchmarked, the latter first with its default Spectre/Meltdown mitigations and then again with them disabled.
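As an aside, on Linux 4.15 and later the kernel exposes its mitigation state through sysfs, which is a convenient way to verify what a given boot is actually running with. Below is a minimal Python sketch of reading that interface; the exact boot parameters used for the unmitigated run are not stated in this article, and note the interface is absent on 4.14-era kernels.

```python
#!/usr/bin/env python3
# Minimal sketch: print the kernel's reported Spectre/Meltdown mitigation
# state from sysfs. The vulnerabilities directory was added around Linux
# 4.15, so it is absent on 4.14-era kernels; entries vary by kernel and CPU.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations() -> None:
    if not VULN_DIR.is_dir():
        print("sysfs vulnerabilities interface not available on this kernel")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file contains a one-line status, e.g. "Mitigation: PTI",
        # "Not affected", or "Vulnerable".
        print(f"{entry.name:28s} {entry.read_text().strip()}")

if __name__ == "__main__":
    report_mitigations()
```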

Here’s a look at a portion of the many kernel benchmarks carried out:

I/O performance was lower on the Linux 4.20 kernel in these tests on EXT4 with an NVMe SSD. Even with the Spectre/Meltdown mitigations disabled, in most cases performance was still lower than it was at this point last year.

Disabling these security measures did help in some I/O-heavy workloads.

There were performance improvements to note in some of the CPU-heavy tests.

But there were also some performance regressions.

Dozens more benchmark results from this Linux kernel comparison on the Core i9 system can be found via this OpenBenchmarking.org result file.


Converged Vs. Hyperconverged Infrastructure: What’s The Difference?


Traditionally, the responsibility of assembling IT infrastructure falls to the IT team. Vendors provide some guidelines, but the IT staff ultimately does the hard work of integrating the components. The ability to pick and choose components is a benefit, but it requires effort in vendor qualification, validation for regulatory compliance, procurement, and deployment.

Converged and hyperconverged infrastructure provide an alternative. In this blog, I’ll examine how they evolved from the traditional infrastructure model and compare how their features and capabilities differ.

Reference architectures

Reference architectures, which provide blueprints of compatible configurations, help alleviate some of the burden of IT infrastructure integration. Hardware or software vendors define the expected behavior and performance for selected combinations of hardware devices, software, and configuration parameters. However, since reference architectures may involve multiple vendors, it can be unclear whom an IT group needs to call for support.

Furthermore, since these systems combine components from multiple vendors, systems management remains difficult. For example, visibility into all levels of the hardware and software stack is not possible, since management tools can’t assume how the infrastructure was set up. Even with systems management standards and APIs, tools aren’t comprehensive enough to understand device-specific information.

Converged infrastructure: ready-made

Converged infrastructure takes the idea of a reference architecture and integrates the system before it ships to the customer; systems arrive pre-tested and pre-configured. One unpacks the box, plugs it into the network and power, and the system is ready to use.

IT organizations choose converged systems for ease of deployment and management rather than for the benefits of an open, interoperable system with a choice of components. Simplicity wins out over choice.

Hyperconverged: The building-block approach

Hyperconverged systems take the convergence concept one step further. These systems are preconfigured, but provide integration via software-defined capabilities and interfaces. Software interfaces act as the glue that supplements the pre-integrated hardware components.

In hyperconverged systems, functions such as storage are integrated through software interfaces, as opposed to traditional physical cabling, configuration, and connections. This capability is typically delivered using virtualization and can exploit commodity servers and hardware.

Local storage not a key differentiator

While converged systems may include traditional storage delivered via a discrete NAS or Fibre Channel SAN, hyperconverged systems can take different forms of storage (rotating disk or flash) and present them via software in a unified way.

A hyperconverged system may use local storage, but it can also use an external system with software interfaces to present a unified storage pool. Some vendors get caught up in the definition of whether the storage is implemented locally (as disks within the server) or as a separate storage system. I think that’s missing the bigger picture. What’s more important is the ability of the systems to scale.

Scale-out is key

Software enables hyperconverged systems to be used as scale-out building blocks. In the enterprise, storage is often an area of interest, since it has been difficult to scale out storage the way compute capacity expands by incrementally adding servers.

Hyperconverged building blocks enable graceful scale-out, as capacity can increase without re-architecting the hardware infrastructure. The goal is to unify as many services as possible using software that acts as a layer separating the hardware infrastructure from the workload. That extra layer may carry some performance tradeoff, but some vendors believe the systems are fast enough for most non-critical workloads.
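To make the building-block idea concrete, here is a toy Python sketch of the concept; every class, device name, and capacity below is invented for illustration and does not reflect any vendor’s API. The software layer presents local and external devices as one pool, and adding a node grows capacity in place, with no re-cabling or re-architecting.

```python
# Toy sketch of the hyperconverged storage idea (Python 3.9+):
# software unifies heterogeneous storage -- local disk, local flash,
# or an external array -- into one pool that scales out node by node.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str          # e.g. "nvme0n1" or "array-lun7" (hypothetical names)
    kind: str          # "local-flash", "local-disk", or "external"
    capacity_gb: int

@dataclass
class Node:
    hostname: str
    devices: list[Device] = field(default_factory=list)

class StoragePool:
    """Software-defined layer that presents all nodes' storage as one pool."""
    def __init__(self) -> None:
        self.nodes: list[Node] = []

    def add_node(self, node: Node) -> None:
        # Scale-out: adding a node grows the pool without re-architecting.
        self.nodes.append(node)

    @property
    def capacity_gb(self) -> int:
        # Consumers see one unified capacity, wherever the devices live.
        return sum(d.capacity_gb for n in self.nodes for d in n.devices)

pool = StoragePool()
pool.add_node(Node("hci-node-1", [Device("nvme0n1", "local-flash", 960)]))
pool.add_node(Node("hci-node-2", [Device("sda", "local-disk", 4000),
                                  Device("array-lun7", "external", 8000)]))
print(f"unified capacity: {pool.capacity_gb} GB across {len(pool.nodes)} nodes")
```

A real hyperconverged product’s software layer also handles replication, data placement, and failure domains; the sketch only shows the pooling and scale-out shape of the approach.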

Making a choice

How do enterprises choose between converged and hyperconverged systems? ESG’s research shows that enterprises choose converged infrastructure for mission-critical workloads, citing better performance, reliability, and scalability. Enterprises choose hyperconverged systems for consolidating multiple functions into one platform, ease of use, and deploying tier-2 workloads.

Converged and hyperconverged systems continue to gain interest since they enable the creation of on-premises clouds with elastic workloads and resource pooling. However, they can’t solve all problems for all customers. An ESG survey shows that, even five years out, over half of respondents plan to base their on-premises infrastructure strategy on best-of-breed components rather than on converged or hyperconverged infrastructure.

Thus, I recommend that IT organizations examine these technologies, but realize that they can’t solve every problem for every organization.

Hear more from Dan Conde live and in person at Interop ITX, where he will co-present “Things to Know Before You (Hyper) Converge Your Infrastructure,” with Jack Poller, senior lab analyst at Enterprise Strategy Group. Register now for Interop ITX, May 15-19 in Las Vegas.


