Tag Archives: Data

Data Storage War: Flash Vs. Hard Drives


The struggle for market share between flash-based drives and hard-disk drives is so similar to physical conflict that we can apply the same language. Wars are essentially territorial, with outcomes determined by who owns the battlefield. Logistics chains and technological leadership often decide individual battles, and superior resources are always a major factor in sustaining an attack.

We often see the flash/HDD battle portrayed as a cloudy whole, forgetting that there are in fact a series of battlefronts. The devil is in the details and a map of the storage world today shows how, insidiously, flash-based products have gained almost all the available territory.

Storage today ranges from small drives used in medical and industrial gear to super-fast all-flash arrays that deliver millions of I/O operations per second. Factors such as weight, physical size, and power help determine the best storage match, together with price and performance. Logistics — for example, the availability of component die — are a major factor in setting storage prices and thus the balance of competitiveness.

To complicate matters, however, flash and HDDs are miles apart when it comes to performance. Using solid-state drives may make a server run three or more times faster than the same configuration using HDDs. This is the technology component of the battlefront. Many comparisons of HDD and SSD prices ignore the impact of this performance difference on overall TCO, and consequently they overstate the cost of SSD-based solutions. That oversight slowed SSD sales for years, though the industry today has mostly savvied up.
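A back-of-the-envelope comparison shows why the per-drive price gap can mislead. The server counts and prices in the sketch below are illustrative assumptions, not measured figures; the only premise taken from the text is that an SSD configuration runs roughly three times faster.

```python
# Rough server-level TCO: an I/O-bound workload that needs nine HDD-based
# servers but, at roughly 3x the per-server throughput, only three
# SSD-based servers. All figures are illustrative assumptions.
servers_hdd = 9
servers_ssd = 3           # ~3x faster per server for the same workload

server_cost = 8_000       # chassis, CPU, RAM (USD)
storage_hdd = 1_200       # drives per server (USD)
storage_ssd = 3_600       # drives per server (USD), 3x the raw drive cost

tco_hdd = servers_hdd * (server_cost + storage_hdd)
tco_ssd = servers_ssd * (server_cost + storage_ssd)

print(f"HDD build: ${tco_hdd:,}")   # HDD build: $82,800
print(f"SSD build: ${tco_ssd:,}")   # SSD build: $34,800
```

Even with the drives themselves priced three times higher, the smaller server count leaves the flash configuration well ahead on total cost.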

As we fly over the storage drive battlefields, what do we see? SSDs have established total technological dominance in most areas. For example, 15K and 10K RPM hard drives are topped out and starved of future investment; they just can’t keep up with SSDs and they cost more. This concedes the enterprise drive space to SSDs, with a resulting decline in RAID arrays and SAN gear. It’s interesting that SANs aren’t surrendering yet, but I’ll touch on that later.

The mobile PC space faces a race to the bottom, which has forced vendors to enrich configurations to make any margin. An obvious play is to go all-flash, implying longer battery life and less weight, among other benefits. SSDs now own most of this territory.

As we go territory by territory, we see that flash has won or is winning handily. The one exception is nearline bulk storage, where Seagate, the top hard-disk drive vendor, projects 10 more years of HDD dominance. I don’t buy that story, and you’ll see why in this slideshow!

Note that battles may be won, but storage is a conservative market and change takes years. Businesses are still buying 15K RPM hard drives, and nearline SATA drives won’t go away overnight!

(Image: Satakorn/Shutterstock)




Data Center Tech ‘Graduation:’ What IT Pros Have Learned


As schools around the country hold graduation ceremonies, classic songs like Green Day’s “Good Riddance (Time of Your Life)” will be sung, played, or reminisced about by students everywhere as they reflect on fond memories and lessons learned in school. Graduation is a symbol of transition and change, a milestone that represents growth, progress, and transformation.

Just as education fosters growth in students, digital transformation drives progress in an organization and ultimately leads to innovations in the data center, but not without a few lessons learned from setbacks and failures.

In the spirit of graduation season, we asked our THWACK IT community to tell us what technology they “graduated” to in 2018. According to the SolarWinds 2018 IT Trends Report, 94% of surveyed IT professionals indicated that cloud and/or hybrid IT is the most important technology in their IT organization’s technology strategy today. But what else have organizations experimented with over the last year? Check out some of the most popular technologies that THWACK community members tell us they have implemented this past year, in their words.

(Image: Nirat.pix/Shutterstock)




6 Reasons SSDs Will Take Over the Data Center


The first samples of flash-based SSDs surfaced 12 years ago, but only now does the technology appear poised to supplant hard drives in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are as much as 1,000x faster than hard-disk drives for random I/O.

Partly, it has been a misunderstanding that overlooks whole systems and focuses instead on individual storage elements and CPUs. This led the industry to fixate on cost per terabyte, when the real focus should have been the total cost of a solution with or without flash. Simply put, most systems are I/O bound, and using flash inevitably means needing fewer systems for the same workload. That typically offsets the cost difference.
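To make the systems-level point concrete, here is a minimal sketch comparing cost per terabyte with cost per solution. The IOPS figures and prices are assumptions chosen only to show the shape of the argument, not benchmark results.

```python
import math

# Cost per terabyte vs. cost per solution for an I/O-bound workload.
# All figures below are illustrative assumptions.
workload_iops = 300_000
capacity_tb   = 100

hdd = {"usd_per_tb": 30,  "iops_per_server": 20_000,  "server_usd": 8_000}
ssd = {"usd_per_tb": 120, "iops_per_server": 300_000, "server_usd": 8_000}

def solution_cost(tier):
    servers = math.ceil(workload_iops / tier["iops_per_server"])
    return servers, servers * tier["server_usd"] + capacity_tb * tier["usd_per_tb"]

print(solution_cost(hdd))  # (15, 123000) cheaper per terabyte
print(solution_cost(ssd))  # (1, 20000)   cheaper as a solution
```

On a per-terabyte basis the flash tier looks four times more expensive, yet the full solution costs a fraction of the HDD build because so few servers are needed.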

The turning point in the storage industry came with all-flash arrays: simple drop-in devices that instantly and dramatically boosted SAN performance. This has evolved into a model of two-tier storage with SSDs as the primary tier and a slower, but cheaper, secondary tier of HDDs.

Applying the new flash model to servers provides much higher server performance, just as price points for SSDs are dropping below enterprise hard drive prices. With favorable economics and much better performance, SSDs are now the preferred choice for primary tier storage.

We are now seeing the rise of Non-Volatile Memory Express (NVMe), which aims to replace SAS and SATA as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far more than its predecessors. In the last year, NVMe pricing has come close to SAS drive prices, making the solution even more attractive. This year, we’ll see most server motherboards supporting NVMe ports, likely as SATA Express, which also supports SATA drives.
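One way to see why NVMe scales so far past its predecessors is the command-queue math. The tally below uses the AHCI and NVMe specification limits, which are well established; the SAS figure is a typical per-device queue depth rather than a hard maximum.

```python
# Outstanding-command capacity of the interfaces discussed above.
# AHCI/SATA and NVMe limits come from their specifications; SAS queue
# depth varies by device, so 254 is a typical figure, not a maximum.
interfaces = {
    "SATA (AHCI)":   1 * 32,           # one queue, 32 commands
    "SAS (typical)": 1 * 254,          # per-device queue depth
    "NVMe":          65_535 * 65_536,  # up to ~64K I/O queues x ~64K commands each
}
for name, depth in interfaces.items():
    print(f"{name:>13}: {depth:,} outstanding commands")
```

That headroom, plus a much shorter command path, is what lets NVMe drives reach the millions of IOPS mentioned above.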

NVMe is internal to servers, but a new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from a server out to arrays of NVMe drives and to all-flash and other storage appliances, complementing, among other things, the new hyper-converged infrastructure (HCI) model for cluster design.

The story isn’t all about performance, though. Vendors have promised to produce SSDs with 32TB and 64TB capacities this year. That’s far larger than the biggest HDD, which is currently just 16TB and stuck at a dead end at least until HAMR is worked out.

The brutal reality, however, is that solid state opens up form-factor options that hard-disk drives can’t achieve. Large HDDs will need to stay in the 3.5-inch form factor. We already have 32TB SSDs in a 2.5-inch size, plus new form factors such as M.2 and the “ruler” (an elongated M.2) that allow for a lot of capacity in a small appliance. Intel and Samsung are talking petabyte-sized storage in 1U boxes.
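That petabyte-per-1U figure is easy to sanity-check. The 32-slot chassis below is an assumption for illustration; actual ruler enclosures vary.

```python
# Sanity check on the "petabyte in 1U" claim, assuming a 32-slot 1U
# enclosure of ruler drives at the 32TB capacity point.
slots_per_1u = 32        # assumed slot count; real chassis designs vary
tb_per_drive = 32
print(slots_per_1u * tb_per_drive, "TB per rack unit")   # 1024 TB, roughly 1 PB
```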

The secondary storage market is slow and cheap, making for a stronger barrier to entry against SSDs. The rise of 3D NAND and new Quad-Level Cell (QLC) flash devices will close the price gap to a great extent, while the huge capacity per drive will offset what remains by reducing the number of appliances needed.

Solid-state drives have a secret weapon in the battle for the secondary tier. Deduplication and compression become feasible because of the extra bandwidth in the whole storage structure, effectively multiplying capacity by factors of 5X to 10X. This brings the cost of QLC-flash solutions below that of HDDs in price per available terabyte.
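Here is how that arithmetic plays out. The drive prices below are assumptions, and the reduction ratio uses the low end of the 5X to 10X range cited above.

```python
# Price per *effective* terabyte once data reduction is applied.
# Drive prices and the 5x reduction ratio are illustrative assumptions.
qlc_usd_per_raw_tb = 100
hdd_usd_per_raw_tb = 25
data_reduction     = 5        # combined deduplication and compression

qlc_effective = qlc_usd_per_raw_tb / data_reduction   # $20 per usable TB
hdd_effective = hdd_usd_per_raw_tb                    # reduction rarely practical on HDDs

print(qlc_effective, hdd_effective)   # 20.0 25
```

Even a modest 5X reduction puts the flash tier under the raw HDD price per usable terabyte, before counting the savings from fewer appliances.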

In the end, perhaps in just three or four years flash and SSDs will take over the data center and kill hard drives off for all but the most conservative and stubborn users. On the next pages, I drill down into how SSDs will dominate data center storage.

(Image: Timofeev Vladimir/Shutterstock)




SNIA Releases Data Protection Guidance for Storage Pros


Data storage professionals may not be accustomed to dealing with data security and privacy issues like due diligence, but with the European Union’s General Data Protection Regulation about to take effect, many will need to learn some new concepts.

That’s what makes a new white paper from the Storage Networking Industry Association especially timely, Eric Hibbard, chair of SNIA’s Security Technical Work Group, told me in an interview. SNIA, a nonprofit focused on developing storage standards and best practices, put together a document that provides guidance on data protection, specifically as it relates to storage.

“The storage industry has for many years been insulated from having to worry about traditional security and, to a lesser degree, privacy issues,” Hibbard said. “With GDPR, the definition of a data breach moved from unauthorized access to include things like unauthorized data destruction or corruption. Why is that important to storage professionals? If you make an update to a storage system that causes corruption of data, and if that’s the only copy of that data, it could constitute a data breach under GDPR. That’s the kind of thing we want to make sure the storage industry and consumers are aware of.”

The GDPR, which sets mandatory requirements for businesses, becomes enforceable May 25. It applies to any business storing data of EU citizens.

The white paper builds on the ISO/IEC 27040 storage security standard, which doesn’t directly address data protection, by providing specific guidance on topics such as data classification, retention and preservation, data authenticity and integrity, monitoring and auditing, and data disposition/sanitization.

For example, the issue of data preservation, retention, and archiving is barely touched on in the standard, so the paper expands on that and explains what the potential security issues are from a storage perspective, said Hibbard, who holds several certifications, including CISSP-ISSAP, and serves roles in other industry groups such as the Cloud Security Alliance.

The paper explains the importance of due diligence and due care – concepts that storage managers aren’t used to dealing with, Hibbard said.

“In many instances, the regulations associated with data protection of personal data or PII (privacy) do not include details on the specific security controls that must be used,” SNIA wrote in its paper. “Instead, organizations are required to implement appropriate technical and organizational measures that meet their obligations to mitigate risks based on the context of their operations. Put another way, organizations must exercise sufficient due care and due diligence to avoid running afoul of the regulations.”

Failure to take steps to understand and address data exposure risks can demonstrate lack of due care and due diligence, the paper warns, adding: “Storage systems and ecosystems are such integral parts of ICT infrastructure that these concepts frequently apply, but this situation may not be understood by storage managers and administrators who are responsible and accountable.”

One of the components of due diligence is data disposition and sanitization. “When you’re done with data, how do you make sure it actually goes away so that it doesn’t become a source of a data breach?” Hibbard said.

The SNIA paper spends some time defining data protection, noting that the term means different things depending on whether someone works in storage, privacy, or information security. SNIA defines data protection as “assurance that data is not corrupted, is accessible for authorized purposes only, and is in compliance with applicable requirements.”

The association’s Storage Security: Data Protection white paper is one of many it produces, all freely available. Other papers cover topics such as cloud storage, Ethernet storage, hyperscaler storage, and software-defined storage.




Facebook Debuts Data Center Fabric Aggregator


At the Open Compute Project Summit in San Jose on Tuesday, Facebook engineers showcased their latest disaggregated networking design, taking the wraps off new data center hardware. Microsoft, meanwhile, announced an effort to disaggregate solid-state drives to make them more flexible for the cloud.

The Fabric Aggregator, built on Facebook’s 100GbE Wedge 100 top-of-rack switch and Facebook Open Switching System (FBOSS) software, is designed as a distributed network system to accommodate the social media giant’s rapid growth. The company is planning to build its twelfth data center and is expanding one in Nebraska from two buildings to six.

“We had tremendous growth of east-west traffic,” said Sree Sankar, technical product manager at Facebook, referring to the traffic flowing between buildings in a Facebook data center region. “We needed a change in the aggregation tier. We were already using the largest chassis switch.”

The company needed a system that would provide power efficiency and have a flexible design, she said. Engineers used Wedge 100 and FBOSS as building blocks and developed a cabling assembly unit to emulate the backplane. The design provides operational efficiency, 60% better power efficiency, and higher port density. Sankar said Facebook was able to deploy it quickly in its data center regions in the past nine months. Engineers can easily scale Fabric Aggregator up or down according to data center demands.

“It redefines network capacity in our data centers,” she said.

Facebook engineers wrote a detailed description of Fabric Aggregator in a blog post. They submitted the specifications for all the backplane options to the OCP, continuing their sharing tradition. Facebook’s networking contributions to OCP include its Wedge switch and Edge Fabric traffic control system. The company has been a major proponent of network disaggregation, saying traditional proprietary network gear doesn’t provide the flexibility and agility they need.

Seven years ago, Facebook spearheaded the creation of the Open Compute Project with a focus on open data center components such as racks and servers. The OCP now counts more than 4,000 engineers involved in its various projects and more than 370 specification and design packages, OCP CEO Rocky Bullock said in kicking off this week’s OCP Summit, which drew some 3,000 attendees.  

Microsoft unveils Project Denali

While Facebook built on its disaggregated networking approach, Microsoft announced Project Denali, an effort to create new standards for flash storage to optimize it for the cloud through disaggregation.

Kushagra Vaid, general manager of Azure Infrastructure at Microsoft, said cloud providers are top consumers of flash storage, which amounts to billions of dollars in annual spending. SSDs, however, with their “monolithic architecture,” aren’t designed to be cloud friendly, he said.

Any SSD innovation requires that the entire device be tested, and new functionality isn’t provided in a consistent manner, he said. “At cloud scale, we want to drive every bit of efficiency,” Vaid said. Microsoft engineers wanted to figure out a way to provide the same kind of flexibility and agility with SSDs as disaggregation brought to networking.

“Why can’t we do the same thing with SSDs?” he said.

Project Denali “standardizes the SSD firmware interfaces by disaggregating the functionality for software-defined data layout and media management,” Vaid wrote in a blog post.

“Project Denali is a standardization and evolution of Open Channel that defines the roles of SSD vs. that of the host in a standard interface. Media management, error correction, mapping of bad blocks and other functionality specific to the flash generation stays on the device while the host receives random writes, transmits streams of sequential writes, maintains the address map, and performs garbage collection. Denali allows for support of FPGAs or microcontrollers on the host side,” he wrote.
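As a rough illustration of that split, here is a minimal host-side sketch. The class, zone size, and method names are hypothetical stand-ins, not the actual Project Denali interface; they only mirror the division of labor the quote describes.

```python
# Toy host-side flash layer: the host accepts random logical writes, turns
# them into sequential appends on the device, and owns the logical-to-physical
# map. Media management, error correction, and bad-block handling stay on the
# device, below this layer. Illustrative only; not the Denali API.

class HostFlashLayer:
    def __init__(self, zone_size=256):
        self.zone_size = zone_size
        self.l2p = {}          # logical block -> (zone, offset), maintained by the host
        self.open_zone = 0
        self.write_ptr = 0

    def write(self, lba, data):
        # Random logical writes become strictly sequential device writes.
        if self.write_ptr == self.zone_size:
            self.open_zone += 1       # move on to the next sequential zone
            self.write_ptr = 0
        self._device_append(self.open_zone, self.write_ptr, data)
        self.l2p[lba] = (self.open_zone, self.write_ptr)
        self.write_ptr += 1

    def _device_append(self, zone, offset, data):
        pass   # stand-in for the real sequential-write command sent to the SSD

    # Garbage collection (copying still-valid blocks out of mostly-stale
    # zones and resetting them) would also live here, on the host.

# Example: layer = HostFlashLayer(); layer.write(42, b"sector data")
```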

Vaid said this disaggregation provides a lot of benefits. “The point of creating a standard is to give choice and provide flexibility… You can start to think at a bigger scale because of this disaggregation, and have each layer focus on what it does best.”

Microsoft is working with several partners including CNEX Labs and Intel on Project Denali, which it plans to contribute to the OCP.



