
SNIA Releases Data Protection Guidance for Storage Pros


Data storage professionals may not be accustomed to dealing with data security and privacy issues like due diligence, but with the European Union’s General Data Protection Regulation about to take effect, many will need to learn some new concepts.

That’s what makes a new white paper from the Storage Networking Industry Association especially timely, Eric Hibbard, chair of SNIA’s Security Technical Work Group, told me in an interview. SNIA, a nonprofit focused on developing storage standards and best practices, put together a document that provides guidance on data protection, specifically as it relates to storage.

“The storage industry has for many years been insulated from having to worry about traditional security and, to a lesser degree, the privacy issues,” Hibbard said. “With GDPR, the definition of a data breach moved from unauthorized access to include things like unauthorized data destruction or corruption. Why is that important to storage professionals? If you make an update to a storage system that causes corruption of data, and if that’s the only copy of that data, it could constitute a data breach under GDPR. That’s the kind of thing we want to make sure the storage industry and consumers are aware of.”

The GDPR, which sets mandatory requirements for businesses, becomes enforceable May 25. It applies to any business storing data of EU citizens.

The white paper builds on the ISO/IEC 27040 storage security standard, which doesn’t directly address data protection, by providing specific guidance on topics such as data classification, retention and preservation, data authenticity and integrity, monitoring and auditing, and data disposition/sanitization.

For example, the issue of data preservation, retention, and archiving is barely touched on in the standard, so the paper expands on that and explains what the potential security issues are from a storage perspective, said Hibbard, who holds several certifications, including CISSP-ISSAP, and serves roles in other industry groups such as the Cloud Security Alliance.

The paper explains the importance of due diligence and due care – concepts that storage managers aren’t used to dealing with, Hibbard said.

“In many instances, the regulations associated with data protection of personal data or PII (privacy) do not include details on the specific security controls that must be used,” SNIA wrote in its paper. “Instead, organizations are required to implement appropriate technical and organizational measures that meet their obligations to mitigate risks based on the context of their operations. Put another way, organizations must exercise sufficient due care and due diligence to avoid running afoul of the regulations.”

Failure to take steps to understand and address data exposure risks can demonstrate lack of due care and due diligence, the paper warns, adding: “Storage systems and ecosystems are such integral parts of ICT infrastructure that these concepts frequently apply, but this situation may not be understood by storage managers and administrators who are responsible and accountable.”

One of the components of due diligence is data disposition and sanitization. “When you’re done with data, how do you make sure it actually goes away so that it doesn’t become a source of a data breach?” Hibbard said.

The SNIA paper spends some time defining data protection, noting that the term means different things depending on whether someone works in storage, privacy, or information security. SNIA defines data protection as “assurance that data is not corrupted, is accessible for authorized purposes only, and is in compliance with applicable requirements.”

The association’s Storage Security: Data Protection white paper is one of many it produces, all freely available. Other papers cover topics such as cloud storage, Ethernet storage, hyperscaler storage, and software-defined storage.




Flash Storage Adoption in the Enterprise


We’ve heard for a while that flash storage is going mainstream, but how are companies actually using it and what results are they getting? A new report by IT analyst firm Evaluator Group sheds light on enterprise adoption of solid-state storage and why the technology has become so popular.

The firm, which specializes in analysis of data storage and information management, surveyed larger enterprises with more than 1,000 employees that had already deployed all-flash systems. That kept the study focused on organizations with first-hand experience with solid-state storage, Randy Kerns, senior strategist and analyst at Evaluator Group, told me in an interview. After the survey, which was conducted across various vertical markets, analysts interviewed many of the participants to get deeper insight.

Evaluator Group found that most of those surveyed bought all-flash arrays with the goal of speeding database performance so that certain applications ran faster. “The majority of them justified paying extra based on getting the databases to run faster,” Kerns said.

Another top use case was accelerating virtual machine environments, which involves supporting more virtual machines per physical server due to the improved performance with solid-state technology, he said.

Enterprises reported strong results with their flash storage deployments, the study found.

“In all cases, they got what they expected and more, to the point that they added additional workloads that weren’t performance demanding…They had more capabilities than they planned on, so they added more workloads to their environment,” Kerns said. “And the future is adding more workloads or buying more all-flash systems for putting more workloads on.”

Organizations surveyed also reported improved reliability, with fewer interruptions due to device or system failures. “That was a big improvement for them,” he said. “It’s something they hadn’t counted on in their initial purchase.”

Survey participants said they valued the data protection capabilities of solid-state storage systems, such as snapshots. “The systems had the capabilities to do things differently so they could accelerate their data protection processes,” Kerns said.

Data reduction functionality wasn’t high on their list of solid-state features, as they considered it a basic capability of flash storage systems, according to Evaluator Group.

While solid-state storage has a reputation for being pricey, it wasn’t an issue for the survey participants, Kerns said. “These people already had them [all-flash systems], so the battle about cost is in the rear view mirror,” he said. “First-time buyers may have a sticker-shock issue, but for those who bought it, that’s history.”

When buying flash storage, enterprises tend to turn to their current storage systems vendor, the study found. “Incumbency wins,” Kerns said. A few bought from storage startups, but the majority preferred to stick with their existing vendor, enjoying new systems that operated in a similar fashion to what they already had.

As for going all-flash, enterprises expect that will be the case eventually, but it certainly won’t happen overnight. “They have a number of platforms that have a certain lifespan. They’ll just age those systems out, so it could be a number of years until they get to that point,” Kerns said.





Storage Management Software: Users Weigh In


Data storage never seems to stop evolving in ways that challenge IT departments. Aside from the need to deal with perpetual growth, data storage now requires management across cloud, on-premises, and hybrid environments. Different workloads also require varying service levels from storage solutions. Storage management tools have had to keep up with this rapid change.

Storage management tools give storage managers a way to stay on top of storage systems, enabling them to track utilization, monitor performance, and more. What do users actually think of the storage management tools on the market today?

The discussion about storage management software on IT Central Station reveals that storage is about more than just storing data. It’s about keeping businesses running optimally. When customers can’t see their data, that’s not a storage problem. It’s a business problem. For this reason, storage managers appreciate storage management solutions that offer real-time visibility into storage performance and the ability to compare relative performance across multiple storage systems. They like products that are responsive and efficient to use, with a “single pane of glass” and automated alerting.

The following reviews from IT Central Station users highlight the pros and cons of two top storage management software products: NetApp OnCommand Insight and Dell EMC ControlCenter.

NetApp OnCommand Insight

A storage administrator at a financial services company, who goes by the handle StorageA7774, cited the product’s comprehensive view:

“Since we have to monitor multiple systems, it gives us a single pane of glass to look at all of our environments. Also, to compare and contrast, if one environment is having some issues, we can judge it against the other environments to make sure everything is on par with one another. In the financial services industry, customer responsiveness is very important. Financial advisors cannot sit in front of a customer and say, ‘I can’t get your data.’ Thus, being up and running and constantly available is a very important area for our client.”

Carter B., a storage administrator at a manufacturing company, cited several ways OnCommand Insight helps his organization:

“The tracking of utilization of our storage systems; seeing the throughput—these are the most important metrics for having a working operating system and working storage system. It’s centralized. It’s got a lot of data in there. We can utilize the data that’s in there and the output to other systems to run scripts off of it. Therefore, it’s pretty versatile.”

However, a systems administrator at a real estate/law firm, who goes by the handle SystemsA3a53, noted a small drawback:

“There was a minor issue where we were receiving a notification that a cluster was not available, or communication to the cluster. OnCommand Manager could not reach a cluster, which is really much like a false positive. The minor issues were communications within the systems.”

And StorageA970f, a storage architect at a government agency, suggested an improvement to the tool’s interface:

“Maybe a little bit more graphical interface. Right now — and this is going to sound really weird — but whatever the biggest server is, the one that is utilizing the most storage space, instead of showing me that server and how much storage space, it just shows it to you in a big font. Literally in a big font. That’s it. So if your server is named Max and you’ve got another server named Sue, and Max is taking up most of your space, all it’s going to show is just Max is big, Sue is little. That is really weird, because I really want to see more than that. You can click on Max, drill down in and see the stuff. But I would rather, on my front interface, say, ‘Oh, gosh, Max is using 10 terabytes. Sue is only using one. She’s fixing to choke. Let me move some of this over.’”

Dell EMC ControlCenter

Gianfranco L., data manager at a tech services company, described how Dell EMC ControlCenter helps his organization:

“We use the SNMP gateway to aggregate hardware and performance events. The alerting feature is valuable because it completes the gap of storage monitoring. Often the storage comes with a tele-diagnostic service. For security purposes, it’s very important for us to be aware of every single failure in order to be more proactive and not only reactive.”
 

Bharath P., senior storage consultant at a financial services firm, described what he likes about the product:

“Centralized administration and management of SAN environment in the organization are valuable features. Improvements to my organization include ease of administration and that it fits in well with all the EMC SAN storage.”

However, Hari K., senior infrastructure analyst at a financial services firm, said there’s room for improvement with EMC ControlCenter:

“It needed improvement with its stability. Also, since it was agent-based communication, we always had to ensure that the agents were running on the servers all the time.”

Gianfranco L. also cited an area where the product could do better:

“The use of agents is not easy. The architectural design of using every single agent for every type of storage can be reviewed with the use of general proxies. The general proxies also discover other vendors’ storage. This can be done with custom made scripts.”




The Evolution of Object Storage


It’s a truism that the amount of data created every year continues to grow at exponential rates. Almost every business now depends on technology, and the information those businesses generate has arguably become their greatest asset. Unstructured data, the kind best kept in object stores, has seen the biggest growth. So, where are we with object storage technology and what can we expect in the future?

Object storage systems

Object storage evolved out of the need to store large volumes of unstructured data for long periods of time at high levels of resiliency. Look back 20 years and we had block storage (traditional arrays) and NAS appliances (typically used as file servers). NAS – the most practical platform for unstructured data at the time – didn’t really scale to the petabyte level and certainly didn’t offer the levels of resiliency expected for long-term data retention. Generally, businesses used tape for this kind of requirement, but of course tape is slow and inefficient.

Object storage developed to fill the gap by offering online access to content and over the years has developed into a mature technology. With new protection methods like erasure coding, the issue of securing data in a large-scale archive is generally solved.

Object stores use web-based protocols to store and retrieve data. Essentially, most offer four primitives, based on the CRUD acronym – Create, Read, Update, Delete. In many instances, Update is simply a Delete and Create pair of operations. This means interacting with an object store is relatively simple — issue a REST-based API call using HTTP that embeds the data and associated metadata.
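To make that concrete, here is a minimal sketch of the four primitives using Python’s requests library against a hypothetical S3-compatible endpoint; the URL, bucket, key, and metadata header are placeholders, and real deployments typically also require signed or otherwise authenticated requests.

```python
# Minimal CRUD sketch against a hypothetical S3-compatible object store.
# Endpoint, bucket, key, and metadata header are placeholders; real stores
# usually require request signing or tokens, omitted here for brevity.
import requests

ENDPOINT = "https://objectstore.example.com"   # hypothetical endpoint
URL = f"{ENDPOINT}/archive/reports/2018/q1.csv"

# Create: PUT the object body plus user-defined metadata in headers
requests.put(URL, data=b"col1,col2\n1,2\n",
             headers={"x-amz-meta-department": "finance"})

# Read: GET returns the object body and its metadata headers
resp = requests.get(URL)
print(resp.status_code, len(resp.content))

# Update: most object stores treat this as a full overwrite (delete + create)
requests.put(URL, data=b"col1,col2\n3,4\n")

# Delete: remove the object from the namespace
requests.delete(URL)
```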

This simplicity of operation highlights an issue for object storage: Applications need to be rewritten to use an object storage API. Thankfully vendors do offer SDKs to help in this process, but application changes are required. This problem points to the first evolution we’re seeing with object: multi-protocol access.

Multi-protocol

It’s fair to say that object stores have had multi-protocol access for some time, in the form of gateways or additional software that uses the object store back-end as a large pool of capacity. The problem with these kinds of implementations is whether they truly offer concurrent access to the same data from different protocol stacks. It’s fine to be storing and retrieving objects with NFS, but how about storing with NFS and accessing with a web-based protocol?

Why would a business want to have the ability to store with one protocol and access via another? Well, offering NFS means applications can use an object store with no modification. Providing concurrent web-based access allows analytics tools to access the data without introducing performance issues associated with the NFS protocol, like locking or multiple threads hitting the same object. The typical read-only profile of analytics software means data can be analyzed without affecting the main application.
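As an illustration, the sketch below assumes a hypothetical store that exposes a single namespace over both an NFS mount and an S3-style HTTP endpoint; the mount point, path, and URL are illustrative only.

```python
# Sketch of concurrent multi-protocol access: an unmodified application
# writes through a (hypothetical) NFS mount while an analytics job reads
# the same data as an object over HTTP. Paths and URLs are placeholders.
import requests

NFS_PATH = "/mnt/objstore/sensors/cam01/frame-0001.jpg"   # hypothetical NFS mount
HTTP_URL = "https://objectstore.example.com/sensors/cam01/frame-0001.jpg"

# File-based write: no application changes needed
with open(NFS_PATH, "wb") as f:
    f.write(b"\xff\xd8placeholder-jpeg-bytes")

# Web-based read of the same content, without touching the NFS path
resp = requests.get(HTTP_URL)
print(resp.status_code, resp.headers.get("Content-Length"))
```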

Many IoT devices, like video cameras, will only talk NFS, so ingesting this kind of content into an object store means file-based protocols are essential.

Scalability

One factor influencing the use of object stores is the ability to scale down, rather than just scale up. Many object storage solutions start at capacities of many hundreds of terabytes, which isn’t practical for smaller IT organizations. We’re starting to see vendors address this problem by producing products that can scale down to tens of terabytes of capacity.

Obviously, large-capacity hard drives and flash can be a problem here, but object stores could be implemented for the functional benefits, like storing data in a flat name space. So, vendors are offering solutions that are software-only and can be deployed either on dedicated hardware or as virtual instances on-premises or in the public cloud.

With IoT likely to be a big creator of data, and that data being generated across wide geographic distributions, larger numbers of smaller object stores will prove a benefit in meeting IoT’s ongoing needs.

Software-defined

Turning back to software-only solutions for a moment: delivering object storage as software means businesses can choose the right type of hardware for their environments. Where hardware supply contracts already exist, businesses can simply pay for the object storage software and deploy it on existing equipment. This includes testing on older hardware that might otherwise be disposed of.

Open source

The software-defined avenue leads on to another area in which object storage is growing: open source. Ceph was one of the original platforms developed under an open source model. OpenIO offers a similar experience, with advanced functionality, such as serverless, charged at a premium. Minio, another open source solution, recently received $20 million in funding to take its platform to a wider audience, including Docker containers.

Trial offerings

The focus on software means it’s easy for organizations to try out object stores. Almost all vendors, with the exception of IBM Cloud Storage and DDN, offer some sort of trial process, either by downloading the software or by using the company’s lab environment. Providing trials opens software to easier evaluation and adoption in the long run.

What’s ahead

Looking at the future for object storage, it’s fair to say that recent developments have been about making solutions more consumable. There’s a greater focus on software-only and vendors are working on ease of use and installation. Multi-protocol connects more applications, making it easier to get data into object stores in the first place. I’m sure in the coming years we will see object stores continue to be an important platform for persistent data storage.




6 Ways to Transform Legacy Data Storage Infrastructure


So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever — and generally inexpensive — ways to answer the question, keep your job, and even get a pat on the back.

SSD and flash are game-changers, so they need to be incorporated into your storage infrastructure. SSDs beat enterprise-class hard drives from a cost perspective because they speed up your workload and reduce the number of storage appliances and servers needed. It’s even better if your servers support NVMe, since the interface is becoming ubiquitous and will replace both SAS and (a bit later) SATA, simply because it’s much faster and has lower overhead.

As far as RAID arrays go, we have to face up to the harsh reality that RAID controllers can only keep up with a few SSDs. The answer is either an all-flash array, with the RAID arrays kept for cool or cold secondary storage, or a move to a new architecture based on either hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 storage in a system. They are designed to bolt onto an existing SAN and require minimal change in configuration files to function. Typically, all-flash boxes have smaller capacities than the RAID arrays, since they have enough I/O cycles to do near-real-time compression coupled with the ability to down-tier (compress) data to the old RAID arrays.

With an all-flash array, which isn’t outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combination of flash and secondary storage that usually has 5X effective capacity due to compression. Just tell the CEO how many RAID arrays and drives you didn’t buy. That’s worth a hero badge!
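The arithmetic behind that boast is straightforward. A rough sketch, using assumed rather than vendor-quoted figures:

```python
# Back-of-the-envelope math for effective capacity on an all-flash tier.
# The capacity, reduction ratio, and price are illustrative assumptions.
raw_flash_tb = 20                 # usable flash capacity purchased
data_reduction_ratio = 5.0        # assumed inline compression/dedupe ratio
price_per_raw_tb = 2000           # assumed $/TB for the all-flash array

effective_tb = raw_flash_tb * data_reduction_ratio
cost_per_effective_tb = (raw_flash_tb * price_per_raw_tb) / effective_tb

print(f"Effective capacity: {effective_tb:.0f} TB")             # 100 TB
print(f"Cost per effective TB: ${cost_per_effective_tb:,.0f}")  # $400
```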

The idea of a flash front end works for desktops, too. Use a small flash drive for the OS (C-drive) and store colder data on those 3.5” HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has fleshed out into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn’t that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality Ring software, the result is a decent object-storage solution, especially if compression and global deduplication are supported.
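Once such a cluster is up, standard S3 tooling should work against Ceph’s S3-compatible RADOS Gateway. A minimal sketch using the boto3 SDK, with a hypothetical endpoint and placeholder credentials:

```python
# Sketch: exercising a self-built Ceph object store through its
# S3-compatible RADOS Gateway with boto3. Endpoint and credentials are
# placeholders that a cluster administrator would provision.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.internal.example.com:7480",  # hypothetical RGW endpoint
    aws_access_key_id="DEMO_ACCESS_KEY",
    aws_secret_access_key="DEMO_SECRET_KEY",
)

s3.create_bucket(Bucket="cold-archive")
s3.put_object(Bucket="cold-archive", Key="2017/logs.tar.gz", Body=b"placeholder bytes")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```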

Likely by now, you are using public clouds for backup. Consider “perpetual” storage using a snapshot tool or continuous backup software to reduce your RPO and RTO. Use multi-zone operations in the public cloud to converge DR onto the perpetual storage setup, as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.
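As one possible building block, here is a hedged sketch of driving snapshots from a script, assuming AWS EBS and boto3; the region and volume ID are placeholders, and other clouds expose equivalent snapshot APIs through their own SDKs.

```python
# Sketch: trigger a cloud-side snapshot as part of a backup/DR routine.
# Assumes AWS EBS via boto3; the region and volume ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume
    Description="Nightly snapshot for the perpetual-storage tier",
)
print(snapshot["SnapshotId"], snapshot["State"])
```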

On the software front, the world of IT is migrating to a services-centric software-defined storage (SDS), which allows scaling and chaining of data services via a virtualized microservice concept. Even older SANs and server drives can be pulled into the methodology, with software making all legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If this prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, apply one of the “Zip” programs and choose the encrypted file option. This saves a lot of space and gives you encryption!
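For the space crunch, a rough sketch of the compress-then-encrypt idea appears below; it substitutes gzip plus the third-party cryptography package (Fernet) for a zip utility’s encrypted-file option, since Python’s standard zipfile module cannot write encrypted archives. File names are placeholders.

```python
# Sketch: shrink cold data, then encrypt it before parking it on cheap storage.
# Uses gzip + Fernet (from the third-party "cryptography" package) as a
# stand-in for a zip utility's encrypted-file option. Paths are placeholders.
import gzip
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep this key somewhere safe
fernet = Fernet(key)

with open("cold_data.csv", "rb") as f:
    plaintext = f.read()

compressed = gzip.compress(plaintext)  # compress first, then encrypt
ciphertext = fernet.encrypt(compressed)

with open("cold_data.csv.gz.enc", "wb") as f:
    f.write(ciphertext)
```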

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.



