Tag Archives: Data

Linux 4.19.8 Released With BLK-MQ Fix To The Recent Data Corruption Bug


LINUX KERNEL

Hopefully you can set aside some time this weekend to upgrade to Linux 4.19.8, as it carries the BLK-MQ fix for the recent “EXT4 corruption issue” that was plaguing many users of Linux 4.19.

Greg Kroah-Hartman just released a number of stable kernel point releases. Linux 4.19.8 brings only minor additions: support for the ELAN0621 touchpad, a quirk covering all PDP Xbox One gamepads for better support, and assorted small fixes. It wouldn’t be worthy of a shout-out had it not been for Jens Axboe’s BLK-MQ patches, which are part of this release.

Earlier this week the Linux 4.19+ data corruption issue was resolved and turned out not to be an EXT4 problem but rather an issue with the multi-queue block I/O queuing mechanism that could cause some data corruption when running without an I/O scheduler. Once that was figured out, the Linux 4.20 kernel quickly picked up the fixes and now it’s been back-ported to the 4.19.8 release. So particularly if using BLK-MQ with “none” as your I/O scheduler selection, make sure you upgrade to this latest release for data safety.
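For readers unsure whether their machine is exposed, a quick check is to compare the running kernel version against 4.19.8 and look at which I/O scheduler each block device has selected (the bracketed entry in sysfs). The sketch below does both; the vulnerable version window follows the report above, and the helper names are my own:

```python
import glob
import platform

def kernel_tuple(release):
    """Parse a release string like '4.19.7-arch1' into (4, 19, 7)."""
    parts = release.split("-")[0].split(".")
    return tuple(int(p) for p in parts[:3])

def active_schedulers():
    """Collect the selected I/O scheduler (the [bracketed] token)
    for every block device exposed under sysfs."""
    active = []
    for path in glob.glob("/sys/block/*/queue/scheduler"):
        with open(path) as f:
            for token in f.read().split():
                if token.startswith("["):
                    active.append(token.strip("[]"))
    return active

def at_risk(release, schedulers):
    """The corruption window was 4.19.0 through 4.19.7 when a
    multi-queue device runs with the 'none' scheduler selected."""
    vulnerable = (4, 19, 0) <= kernel_tuple(release) < (4, 19, 8)
    return vulnerable and any(s == "none" for s in schedulers)

if __name__ == "__main__":
    print(at_risk(platform.release(), active_schedulers()))
```

Any device showing `[none]` on a 4.19 kernel older than 4.19.8 is the combination the fix addresses.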

Greg also released Linux 4.14.87 and 4.9.144 as the latest versions of these LTS kernels, albeit with no high-profile changes.


Data Storage War: Flash Vs. Hard Drives


The struggle for market share between flash-based drives and hard-disk drives is so similar to physical conflict that we can apply the same language. Wars are essentially territorial, with outcomes determined by who owns the battlefield. Logistics chains and technological leadership often decide individual battles, and superiority of resources is always a major factor in sustaining an attack.

We often see the flash/HDD battle portrayed as a cloudy whole, forgetting that there are in fact a series of battlefronts. The devil is in the details and a map of the storage world today shows how, insidiously, flash-based products have gained almost all the available territory.

Storage today ranges from small drives used in medical and industrial gear to super-fast all-flash arrays that deliver millions of I/O operations per second. Factors such as weight, physical size, and power help determine the best storage match, together with price and performance. Logistics — for example, the availability of component die — are a major factor in setting storage prices and thus the balance of competitiveness.

To complicate matters, however, flash and HDDs are miles apart when it comes to performance. Using a solid-state drive may make a server run three or more times faster than the same configuration using HDDs. This is the technology component of the battlefront. Many comparisons of HDD and SSD prices ignore the impact of this difference on overall TCO and consequently overstate the cost of SSD-based solutions. That oversight slowed SSD sales for years, though the industry today has mostly wised up.

As we fly over the storage drive battlefields, what do we see? SSDs have established total technological dominance in most areas. For example, 15K and 10K RPM hard drives are topped out and starved of future investment; they just can’t keep up with SSDs and they cost more. This concedes the enterprise drive space to SSDs, with a resulting decline in RAID arrays and SAN gear. It’s interesting that SANs aren’t surrendering yet, but I’ll touch on that later.

The mobile PC space faces a race to the bottom, which has forced vendors to enrich configurations to make any margin. An obvious play is to go all-flash, implying longer battery life and less weight, among other benefits. SSDs now own most of this territory.

As we go territory by territory, we see that flash has won or is winning handily. The one exception is nearline bulk storage, where Seagate, the top hard-disk drive vendor, projects 10 more years of HDD dominance. I don’t buy that story, and you’ll see why in this slideshow!

Note that battles may be won, but storage is a conservative market and change takes years. Businesses are still buying 15K hard drives, and nearline SATA drives won’t go away overnight!

(Image: Satakorn/Shutterstock)




Data Center Tech ‘Graduation’: What IT Pros Have Learned


As schools around the country hold graduation ceremonies, classic songs like Green Day’s “Good Riddance (Time of Your Life)” will be sung, played, or reminisced about by students everywhere as they reflect on fond memories and lessons learned in school. Graduation is a symbol of transition and change, a milestone that represents growth, progress, and transformation.

Just as education fosters growth in students, digital transformation drives progress in an organization and ultimately leads to innovations in the data center, but not without a few lessons learned from setbacks and failures.

In the spirit of graduation season, we asked our THWACK IT community to tell us what technology they “graduated” to in 2018. According to the SolarWinds 2018 IT Trends Report, 94% of surveyed IT professionals indicated that cloud and/or hybrid IT is the most important technology in their IT organization’s technology strategy today. But what else have organizations experimented with over the last year? Check out some of the most popular technologies that THWACK community members tell us they have implemented this past year, in their words.

(Image: Nirat.pix/Shutterstock)




6 Reasons SSDs Will Take Over the Data Center


The first samples of flash-based SSDs surfaced 12 years ago, but only now does the technology appear poised to supplant hard drives in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are as much as 1,000x faster than hard-disk drives for random I/O.

Partly, it has been a misunderstanding that focuses on storage elements and CPUs while overlooking whole systems. This led the industry to fixate on cost per terabyte, when the real focus should have been the total cost of a solution with or without flash. Simply put, most systems are I/O bound, and using flash inevitably means needing fewer systems for the same workload. That typically offsets the cost difference.
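The "fewer systems" argument is easy to miss in a dollars-per-terabyte comparison, so here is a back-of-the-envelope sketch. All prices and server counts are hypothetical, chosen only to show how a higher per-terabyte flash price can still yield a cheaper total solution for an I/O-bound workload:

```python
def solution_cost(servers, cost_per_server, tb_needed, cost_per_tb):
    """Total cost of a deployment: compute nodes plus raw capacity."""
    return servers * cost_per_server + tb_needed * cost_per_tb

# Hypothetical I/O-bound workload needing 100 TB of primary storage.
# HDDs force more nodes to reach the IOPS target; SSDs need fewer.
hdd_total = solution_cost(servers=12, cost_per_server=8000,
                          tb_needed=100, cost_per_tb=30)
ssd_total = solution_cost(servers=4, cost_per_server=8000,
                          tb_needed=100, cost_per_tb=120)
```

Even at four times the price per terabyte, the flash configuration wins on total cost because two-thirds of the servers disappear from the bill.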

The turning point in the storage industry came with all-flash arrays: simple drop-in devices that instantly and dramatically boosted SAN performance. This has evolved into a model of two-tier storage with SSDs as the primary tier and a slower, but cheaper, secondary tier of HDDs.

Applying the new flash model to servers provides much higher server performance, just as price points for SSDs are dropping below enterprise hard drive prices. With favorable economics and much better performance, SSDs are now the preferred choice for primary tier storage.

We are now seeing the rise of Non-Volatile Memory Express (NVMe), which aims to replace SAS and SATA as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far more than its predecessors. In the last year, NVMe pricing has come close to SAS drive prices, making the solution even more attractive. This year, we’ll see most server motherboards supporting NVMe ports, likely as SATA-Express, which also supports SATA drives.
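One way to see where NVMe's IOPS headroom comes from is its queueing model: AHCI/SATA exposes a single NCQ queue of 32 commands, while the NVMe specification allows up to 65,535 I/O queues with up to 65,536 entries each (real drives expose far fewer; these are the spec maxima). A quick bit of arithmetic:

```python
# Spec maxima for outstanding commands, per interface.
AHCI_QUEUES, AHCI_DEPTH = 1, 32           # SATA/AHCI: one NCQ queue, 32 commands
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536  # NVMe: up to ~64K queues x 64K entries

sata_outstanding = AHCI_QUEUES * AHCI_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH
```

The multi-queue design also lets each CPU core own its own submission queue, which is exactly the parallelism the BLK-MQ layer in the kernel was built to exploit.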

NVMe is internal to servers, but a new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from a server out to arrays of NVMe drives and to all-flash and other storage appliances, complementing, among other things, the new hyper-converged infrastructure (HCI) model for cluster design.

The story isn’t all about performance, though. Vendors have promised to produce SSDs with 32TB and 64TB capacities this year. That’s far larger than the biggest HDD, which currently tops out at 16TB and is stuck at a dead end at least until HAMR is worked out.

The brutal reality, however, is that solid state opens up form-factor options that hard-disk drives can’t achieve. Large HDDs will need the 3.5-inch form factor, while we already have 32TB SSDs in a 2.5-inch size, plus new form factors such as M.2 and the “ruler” (an elongated M.2) that pack a lot of capacity into a small appliance. Intel and Samsung are talking about petabyte-sized storage in 1U boxes.

The secondary storage market is slow and cheap, making for a stronger barrier to entry against SSDs. The rise of 3D NAND and new Quad-Level Cell (QLC) flash devices will close the price gap to a great extent, while the huge capacity per drive will offset the remaining price gap by reducing the number of appliances.

Solid-state drives have a secret weapon in the battle for the secondary tier: deduplication and compression become feasible because of the extra bandwidth in the storage structure, effectively multiplying capacity by factors of 5X to 10X. This brings the cost of QLC-flash solutions below that of HDDs in price per available terabyte.
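The effect of data reduction on price is simple division: if a reduction ratio r multiplies usable capacity, it divides the effective price per terabyte by r. A sketch with made-up figures (the $100/TB QLC and $25/TB HDD numbers are illustrative, not market quotes):

```python
def effective_cost_per_tb(raw_cost_per_tb, reduction_ratio):
    """Data reduction multiplies usable capacity,
    so it divides the effective price per terabyte."""
    return raw_cost_per_tb / reduction_ratio

# QLC flash at $100/TB raw with 5:1 inline dedup/compression,
# vs. HDDs at $25/TB, which rarely run inline reduction at the bulk tier.
qlc_effective = effective_cost_per_tb(100, 5)  # $20 per usable TB
hdd_effective = effective_cost_per_tb(25, 1)   # $25 per usable TB
```

At a 5:1 ratio the flash tier already undercuts raw HDD pricing, which is the arithmetic behind the claim above.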

In the end, perhaps in just three or four years flash and SSDs will take over the data center and kill hard drives off for all but the most conservative and stubborn users. On the next pages, I drill down into how SSDs will dominate data center storage.

(Image: Timofeev Vladimir/Shutterstock)




SNIA Releases Data Protection Guidance for Storage Pros


Data storage professionals may not be accustomed to dealing with data security and privacy issues like due diligence, but with the European Union’s General Data Protection Regulation about to take effect, many will need to learn some new concepts.

That’s what makes a new white paper from the Storage Networking Industry Association especially timely, Eric Hibbard, chair of SNIA’s Security Technical Work Group, told me in an interview. SNIA, a nonprofit focused on developing storage standards and best practices, put together a document that provides guidance on data protection, specifically as it relates to storage.

“The storage industry has for many years been insulated from having to worry about traditional security and, to a lesser degree, privacy issues,” Hibbard said. “With GDPR, the definition of a data breach moved from unauthorized access to include things like unauthorized data destruction or corruption. Why is that important to storage professionals? If you make an update to a storage system that causes corruption of data, and that’s the only copy of that data, it could constitute a data breach under GDPR. That’s the kind of thing we want to make sure the storage industry and consumers are aware of.”

The GDPR, which sets mandatory requirements for businesses, becomes enforceable May 25. It applies to any business storing data of EU citizens.

The white paper builds on the ISO/IEC 27040 storage security standard, which doesn’t directly address data protection, by providing specific guidance on topics such as data classification, retention and preservation, data authenticity and integrity, monitoring and auditing, and data disposition/sanitization.

For example, the issue of data preservation, retention, and archiving is barely touched on in the standard, so the paper expands on that and explains what the potential security issues are from a storage perspective, said Hibbard, who holds several certifications, including CISSP-ISSAP, and serves roles in other industry groups such as the Cloud Security Alliance.

The paper explains the importance of due diligence and due care – concepts that storage managers aren’t used to dealing with, Hibbard said.

“In many instances, the regulations associated with data protection of personal data or PII (privacy) do not include details on the specific security controls that must be used,” SNIA wrote in its paper. “Instead, organizations are required to implement appropriate technical and organizational measures that meet their obligations to mitigate risks based on the context of their operations. Put another way, organizations must exercise sufficient due care and due diligence to avoid running afoul of the regulations.”

Failure to take steps to understand and address data exposure risks can demonstrate lack of due care and due diligence, the paper warns, adding: “Storage systems and ecosystems are such integral parts of ICT infrastructure that these concepts frequently apply, but this situation may not be understood by storage managers and administrators who are responsible and accountable.”

One of the components of due diligence is data disposition and sanitization. “When you’re done with data, how do you make sure it actually goes away so that it doesn’t become a source of a data breach?” Hibbard said.
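As a toy illustration of why disposition is harder than it looks, the sketch below overwrites a file's contents before deleting it (the function name is my own). Note the caveat in the comments: on SSDs with wear leveling and on copy-on-write filesystems, overwriting in place does not guarantee the old blocks are gone, which is why guidance such as NIST SP 800-88 points to device-level sanitize commands for real media sanitization:

```python
import os

def shred_file(path, passes=3):
    """Overwrite a file's contents with random data, then unlink it.

    Caveat: on journaling or copy-on-write filesystems, and on SSDs
    that remap blocks for wear leveling, the original blocks may
    survive the overwrite. For true sanitization, use device-level
    commands (e.g., ATA Secure Erase or NVMe Format/Sanitize).
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device
    os.remove(path)
```

The gap between "the file is gone" and "the data is gone" is exactly the kind of storage-level detail the SNIA paper flags for due-diligence purposes.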

The SNIA paper spends some time defining data protection, noting that the term means different things depending on whether someone works in storage, privacy, or information security. SNIA defines data protection as “assurance that data is not corrupted, is accessible for authorized purposes only, and is in compliance with applicable requirements.”

The association’s Storage Security: Data Protection white paper is one of many it produces, all freely available. Other papers cover topics such as cloud storage, Ethernet storage, hyperscaler storage, and software-defined storage.


