
How Spectre and Meltdown Impact Data Center Storage


IT news over the last few weeks has been dominated by stories of vulnerabilities found in Intel x86 chips and almost all other modern processors. The two exposures, Spectre and Meltdown, exploit the speculative execution that CPUs use to anticipate the flow of code execution and keep their internal instruction pipelines as full as possible. It's been reported that the fixes for Spectre and Meltdown can have an impact on I/O performance, and that means storage products could be affected. So, what are the impacts, and what should data center operators and storage pros do?

Speculative execution

Speculative execution is a performance-improvement technique used by modern processors in which instructions are executed before the processor knows for certain that they will be needed. Imagine some code that branches on the result of a logic comparison. Without speculative execution, the processor would have to wait for that comparison to complete before reading ahead, resulting in a drop in performance. With speculative execution, the processor predicts which branch will be taken and runs ahead down that path; if the prediction turns out to be wrong, the speculative results are discarded, and in the meantime the processor has been kept busy.
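As a concrete illustration, the published Spectre (variant 1) research centers on exactly this kind of bounds-checked branch. The C sketch below shows the pattern; the array names are illustrative, and on its own the snippet leaks nothing, since the real attack layers a cache-timing side channel on top of the speculative reads.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: the classic bounds-checked read that speculative
 * execution can run ahead of. If the branch predictor guesses the
 * "in bounds" path, the two array reads below may execute before the
 * comparison resolves; the architectural result is discarded on a
 * mispredict, but the cache footprint is not. */
uint8_t array1[16];
uint8_t array2[256 * 4096];
size_t  array1_size = 16;

uint8_t victim_read(size_t x) {
    if (x < array1_size) {                 /* branch the CPU predicts  */
        return array2[array1[x] * 4096];   /* may run speculatively    */
    }
    return 0;
}
```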

Both Spectre and Meltdown pose the risk of unauthorized access to data through this speculative execution process. A more detailed breakdown is available in the two papers covering the vulnerabilities. Vendors have released OS and BIOS workarounds for the exposures. The Meltdown fixes have noticeably impacted performance on systems with high I/O activity because of the extra work needed to isolate user and kernel memory on every context switch into the kernel (system calls and interrupts). Reports range from 5% to 50% additional CPU overhead, depending on the specific platform and workload.
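To get a feel for that overhead on your own systems, a tiny system-call microbenchmark run before and after patching is often enough. The C sketch below is one way to do it, assuming a Linux box and a C compiler; the iteration count is arbitrary, and absolute numbers will vary widely by kernel and hardware.

```c
/* Rough sketch: measure average syscall round-trip time before and
 * after applying the Meltdown (KPTI) patches. getpid() is close to a
 * pure user/kernel transition, so the delta approximates the
 * per-syscall overhead that page-table isolation adds. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const long iterations = 5 * 1000 * 1000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; i++) {
        syscall(SYS_getpid);   /* bypass any libc caching of getpid() */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 +
                (end.tv_nsec - start.tv_nsec);
    printf("avg syscall time: %.1f ns\n", ns / iterations);
    return 0;
}
```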

Storage repercussions

How could this impact storage appliances and software? Over the last few years, almost all storage appliances and arrays have migrated to the Intel x86 architecture. Many are now built on Linux or Unix kernels, which means they are directly affected by the processor vulnerabilities; the patches, once applied, result in increased system load and higher latency.

Software-defined storage products are also potentially affected, as they run on generic operating systems like Linux and Windows. The same applies to virtual storage appliances running in VMs, to hyperconverged infrastructure and, of course, to public cloud storage instances and I/O-intensive cloud applications. Quantifying the impact is difficult because it depends on how many system calls the storage software makes; some products will be more affected than others.

Vendor response

Storage vendors have had mixed responses to the CPU vulnerabilities. For appliances or arrays deemed to be "closed systems" that cannot run user code, the vendors' stance is that these systems are unaffected and won't be patched.

Appliances that can run external code, such as Pure Storage's FlashArray with its Purity Run feature for executing user code, will need to be patched. Similarly, end users running SDS solutions on generic operating systems will need to patch. HCI and hypervisor vendors have already started to make announcements about patching, although the results have been varied. VMware, for instance, released a set of patches only to recommend against installing them because of customer issues. Intel's advisory earlier this week warning of problems with its own patches has added to the confusion.

Some vendors, such as Dell EMC, haven't made public statements about the impact of the vulnerabilities on all of their products. For example, information on legacy Dell storage products is openly available, while information about Dell EMC products is only available behind support firewalls. If you're a user of those platforms you will presumably have access; for the wider market, however, a consolidated response would have made it easier to assess the risk.

Reliability

So far, the patches released don't seem to be very stable. Some have been withdrawn; others have crashed machines or made them unbootable. Vendors are in a difficult position, because details of the vulnerabilities weren't widely circulated in the community before the news became public, and some storage vendors only found out about the issue when it broke in the press. As a result, some patches may have been rushed to market without full testing of their impact.

To patch or not?

What should end users do? First, it's worth evaluating the risk and impact of applying, or not applying, the patches. Computers that are regularly exposed to the internet, such as desktops and public cloud instances (including virtual storage appliances running in a cloud instance), are likely to be most at risk, whereas storage appliances behind a corporate firewall on a dedicated storage management network are at the lowest risk. Weigh this risk against the impact of applying the patches and what could go wrong. Applying patches to a storage platform supporting hundreds or thousands of users, for example, is a process that needs thinking through.

Action plan

Start by talking to your storage vendors. Ask them why they believe their platforms are, or are not, exposed. Ask what testing has been done on the patches, from both a stability and a performance perspective. If you have a lab environment, do some before/after testing with standard workloads; if you don't, ask your vendor for support.
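If you don't have a dedicated benchmarking tool to hand, even a small read-latency probe run before and after patching gives a useful signal. The C sketch below is one rough way to do that for 4 KB random reads; the file path, read count and block size are placeholder assumptions, the test file should be pre-created (several GB) on the storage under test, and a purpose-built tool will produce far better data.

```c
/* Sketch of a simple before/after storage check: time 4 KB random reads
 * against a pre-created test file on the storage under test. O_DIRECT
 * bypasses the page cache so the patched kernel path, not cached data,
 * dominates the measurement. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK 4096
#define READS 10000

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "/mnt/test/testfile"; /* placeholder */
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = lseek(fd, 0, SEEK_END);
    if (size < BLOCK) { fprintf(stderr, "test file too small\n"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK)) return 1;  /* O_DIRECT needs alignment */

    srand(42);
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < READS; i++) {
        off_t off = ((off_t)rand() % (size / BLOCK)) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%d reads in %.2fs -> %.0f IOPS, %.1f us avg latency\n",
           READS, secs, READS / secs, secs / READS * 1e6);
    close(fd);
    return 0;
}
```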

As there are no known exploits in the wild for Spectre/Meltdown, a wise approach is probably to wait a little before applying patches. Let the version 1 fixes be tested in the wild by other folks first. Invariably issues are found that then get corrected by another point release. Waiting a little also gives time for vendors to develop more efficient patches, rather than ones that simply act as a workaround. In any event, your approach will depend on your particular set of circumstances.




8 Ways Data Center Storage Will Change in 2018


The storage industry was on a roller coaster in 2017, with the decline of traditional SAN gear offset by enterprise interest in hyperconverged infrastructure, software-only solutions, and solid-state drives. We have seen enterprises shift from hard disks to solid state as the performance boost from SSDs transforms data center storage.

2018 will build on these trends and add some new items to the storage roadmap. SSDs are still evolving rapidly on four fronts: core technology, performance, capacity, and price. NVMe has already boosted flash IOPS and GB-per-second throughput into the stratosphere, and we stand on the brink of mainstream adoption of NVMe over Ethernet, with broad implications for how storage systems are configured going forward.

Vendors are shipping 32TB SSDs, leaving the largest HDDs far behind at 16TB. With 3D die technology hitting its stride, we should see 50TB and 100TB drives in 2018, especially if 4-bit (QLC) storage cells hit their goals. Much of the supply shortage in flash die is behind us, and prices should begin to drop again, though demand may grow faster than expected and slow the decline.

Outside of the drives themselves, RAID arrays are in trouble. The inherent performance bottleneck in their controller design makes handling more than a few SSDs a real challenge. Meanwhile, small storage appliances, which are essentially inexpensive commercial off-the-shelf (COTS) servers, meet the needs of object stores and hyperconverged nodes. This migration is fueled by startups like Excelero, which connect drives directly to the cluster fabric at RDMA speeds using NVMe over Ethernet.

A look at recent financial results reflects the industry's shift to COTS. With the exception of NetApp, traditional storage vendors are seeing single-digit revenue growth, while original design manufacturers, which supply huge volumes of COTS hardware to cloud providers, are collectively growing at 44%. Behind that growth is the increasing availability of unbundled storage software. The combination of cheap storage platforms and low-cost software is rapidly commoditizing the storage market. This trend will accelerate in 2018 as software-defined storage (SDS) begins to shape the market.

SDS is a broad concept, but inherently unbundles control and service software from hardware platforms. The concept has been very successful in networking and in cloud servers, so extending it to storage is not only logical, but required. We’ll see more SDS solutions and competition in 2018 than we’ve had in any year of the last decade.

NVMe will continue to replace SAS and SATA as the interface for enterprise drives. Over and above the savings in CPU overhead that it brings, NVMe supports new drive form factors. We can expect 32TB+ SSDs in the 2.5-inch form factor in 2018, as well as servers using M.2 storage variants.

This has massive implications. Intel has showcased a long, M.2-style "ruler" blade drive with 33+ TB capacities that can be mounted in a 1U server with 32 slots; 32 slots at roughly 32-33TB each works out to about 1 petabyte of ultra-fast storage in a single 1U box. Other vendors are talking up similar densities, signaling an important trend. Storage boxes will get smaller, hold huge capacities and, thanks to SSD speed, outperform acres of HDD arrays. You'll be able to go to the CIO and say, "I really can shrink the data center!"

There's more, though! High-performance SSDs enable deduplication and compression of data as an invisible background job, with the services doing the work riding on the excess bandwidth of the drives. For most commercial use cases, effective capacity ends up 5X or more the raw capacity. That data reduction cuts the number of small appliances needed, making SSD storage much cheaper than hard drives overall.
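The arithmetic behind that claim is simple enough to sketch. In the example below, all of the prices and the 5:1 reduction ratio are illustrative assumptions, not quoted figures; the point is only that a healthy reduction ratio can push flash below hard disk on a cost-per-effective-terabyte basis.

```c
/* Back-of-the-envelope math behind the "effective capacity" argument.
 * All prices and the 5:1 reduction ratio are illustrative assumptions;
 * plug in your own numbers. */
#include <stdio.h>

int main(void) {
    double raw_tb          = 100.0;  /* raw flash purchased            */
    double reduction_ratio = 5.0;    /* dedupe + compression, assumed  */
    double ssd_cost_per_tb = 400.0;  /* $/raw TB, assumed              */
    double hdd_cost_per_tb = 100.0;  /* $/raw TB, assumed              */

    double effective_tb = raw_tb * reduction_ratio;
    printf("Effective SSD capacity: %.0f TB from %.0f TB raw\n",
           effective_tb, raw_tb);
    printf("SSD cost per effective TB: $%.0f\n",
           ssd_cost_per_tb * raw_tb / effective_tb);
    printf("HDD cost per raw TB (little or no reduction assumed): $%.0f\n",
           hdd_cost_per_tb);
    return 0;
}
```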

Let’s delve into the details of all these storage trends we can expect to see in the data center this year.

(Image: Olga Salt/Shutterstock)





Data Center Architecture: Converged, HCI, and Hyperscale


A comparison of three approaches to enterprise infrastructure.

If you are planning an infrastructure refresh or designing a greenfield data center from scratch, the hype around converged infrastructure, hyperconverged infrastructure (HCI) and hyperscale might have you scratching your head. In this blog, I’ll compare and contrast the three approaches and consider scenarios where one infrastructure architecture would be a better fit than the others.

Converged infrastructure

Converged infrastructure (CI) incorporates compute, storage, and networking in a pre-packaged, turnkey solution. The primary driver behind convergence was server virtualization: extending its flexibility to the storage and network components. With CI, administrators can use automation and management tools to control the core components of the data center, allowing a single admin to provision, de-provision, and make compute, storage, or networking changes on the fly.

Converged infrastructure platforms use the same silo-centric infrastructure components as traditional data centers; they're simply pre-architected and pre-configured by the manufacturers, with specialized management software acting as the glue that unifies the components. One of the earliest and most popular CI examples is Virtual Computing Environment (VCE), a joint venture by Cisco Systems, EMC, and VMware that developed and sold various sizes of converged infrastructure solutions known as Vblock. Today, Vblock systems are sold by the combined Dell-EMC entity, Dell Technologies.

CI solutions are a great choice for infrastructure pros who want an all-in-one solution that's easy to buy and arrives pre-packaged direct from the factory. CI is also easier from a support standpoint: if you maintain support contracts on your CI system, the manufacturer will assist in troubleshooting end to end. That said, many vendors are shifting their focus toward hyperconverged infrastructure.

Hyperconverged infrastructure

HCI builds on CI. In addition to combining the three core components of the data center, hyperconverged infrastructure leverages software to integrate compute, network, and storage into a single unit rather than using separate components. This design offers performance advantages and eliminates a great deal of physical cabling compared with silo- and CI-based data centers.

Hyperconverged solutions also provide far more capability in terms of unified management and orchestration. The mobility of applications and data is greatly improved, as is the setup and management of functions like backups, snapshots, and restores. These operational efficiencies make HCI architectures more attractive on a cost-benefit basis than traditional converged infrastructure solutions.

In the end, a hyperconverged solution is all about simplicity and speed. A great use case for HCI would be a new virtual desktop infrastructure (VDI) deployment. Using the orchestration and automation tools available, you have the ideal platform to easily roll out hundreds or thousands of virtual desktops.

Hyperscale

The key attribute of hyperscale computing is the decoupling of compute, network, and storage software from the hardware. That's right: while HCI combines everything into a single chassis, hyperscale decouples the components.

This approach, as practiced by hyperscale companies like Facebook and Google, provides more flexibility than hyperconverged solutions, which tend to grow in a linear fashion. For example, if you need more storage on your HCI system, you typically must add a node blade that includes both compute and built-in storage. Some hyperconverged solutions are better than others in this regard, but most fall prey to linear scaling problems if your workloads don’t scale in step.
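A quick back-of-the-envelope comparison makes the point. The node specifications and workload figures in the C sketch below are hypothetical, but they show how a storage-heavy, compute-light workload forces an HCI buyer to over-purchase compute, while a disaggregated design scales each resource on its own.

```c
/* Illustration of the linear-scaling problem: with coupled HCI nodes you
 * buy compute you may not need just to get capacity. Node specs and the
 * target workload are hypothetical. Build: cc scale.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    /* hypothetical building blocks */
    double hci_node_tb    = 20.0;   /* storage per HCI node            */
    double hci_node_cores = 32.0;   /* cores per HCI node              */
    double shelf_tb       = 100.0;  /* storage per disaggregated shelf */

    /* hypothetical workload: storage-heavy, compute-light */
    double need_tb    = 400.0;
    double need_cores = 64.0;

    int hci_nodes = (int)ceil(need_tb / hci_node_tb);  /* capacity drives the count */
    printf("HCI: %d nodes -> %.0f cores bought for %.0f cores needed\n",
           hci_nodes, hci_nodes * hci_node_cores, need_cores);

    int shelves = (int)ceil(need_tb / shelf_tb);
    int servers = (int)ceil(need_cores / hci_node_cores);
    printf("Hyperscale: %d storage shelves + %d compute servers\n",
           shelves, servers);
    return 0;
}
```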

Another benefit of hyperscale architectures is that you can manage both virtual and bare-metal servers on a single system, which is ideal for databases that tend to run non-virtualized. Hyperscale is most useful in situations where you need to scale out one resource independently of the others. A good example is IoT, which requires a lot of data storage but not much compute. A hyperscale architecture also helps where it's beneficial to keep running bare-metal compute resources while managing storage in elastic pools.


