
Software-Defined Storage Products: IT Pros Offer Insight


Find out what users have to say about products in the emerging SDS market.

Software-defined storage promises two very attractive benefits to the enterprise: flexibility and lower cost. But how can IT pros know which software-defined storage (SDS) product will best meet the needs of their business?

Peer reviews written by real users can ease that decision, offering firsthand feedback, insight, and product rankings that collectively indicate which products are in the lead.

Based on our real user reviews at IT Central Station, these products are some of the top choices for software-defined storage today.

Nutanix

A senior system engineer said, “The support we get from Nutanix is easily the best from all the vendors we work with. If you open a case, you speak directly to an engineer who can help quickly and efficiently. Our customers sometimes open support cases directly (not through us), and so far the feedback has been great.”

However, a CTO at an IT consulting firm said that while Nutanix has the ability to connect to Azure or AWS for storing backups, he would like the capability to spin up a backup on Azure or AWS for disaster-recovery purposes.

“Right now, you can only send a backup to either Azure or AWS. We would like to take a backup and spin it up to an actual server that could be connected to by users from the outside,” he added.

Here are more Nutanix reviews by IT Central Station users.

VMware vSAN

A senior systems administrator and storage specialist in the government sector said he finds that vSAN allows for very easy administration. “The fact that you don’t have LUNs to set up and assign is great. The ability to set up storage policies and assign them at the disk level is also a great part of this product,” he said. “You can allow for different setups for different workload requirements.”

A senior manager of IT infrastructure noted that “The vSAN Hardware Compatibility List Checker needs to improve, since currently it is a sore point for vSAN. You need to thoroughly check and re-check the HCL with multiple vendors like VMware, in the first instance, and manufacturers like Dell, IBM, HPE, etc., as the compatibility list is very narrow. I would definitely be happy if there is significant additional support for more models of servers from Dell, IBM, HPE, etc.”

Read more VMware vSAN reviews by IT Central Station members.

HPE StoreVirtual

A network engineer at a tech service firm reported that “Shelf-level redundancy is one of the big things that StoreVirtual has had before some other SAN manufacturers and brands, which is pretty nice. It can be rather expensive because you are much less efficient when you have that redundancy, but it’s definitely a benefit if you really need access to that data.”

But a solutions engineer at an insurance company said the product’s user interface needs to be updated. “It’s getting kind of long in the tooth, and the user interface makes it look a lot more complex than it actually is to manage, and I think that you can mask a lot of that with a refresh of the user interface. While HPE has created a new HTML5 UI for the HyperConverged 380, it is not available to the rest of the StoreVirtual population.”

Read more HPE StoreVirtual reviews.  

Dell EMC ScaleIO

An engineer at a tech vendor that is both a customer and partner of Dell EMC likes the ScaleIO user interface. “EMC has been working with storage for a long time. Therefore, they know how to clearly present any important data, including data flow and each drive’s IOPS/bandwidth, and allow the user to easily monitor bottlenecks and problems, especially the rebuild and rebalance status of child objects. It controls and maintains them well.”

He added that “If they could introduce a write cache feature, the product would be perfect overall.”

You can read more Dell EMC ScaleIO reviews here.




7 Ways to Secure Cloud Storage


Figuring out a good path to security in your cloud configurations can be quite a challenge. This is complicated by the different types of cloud we deploy – public or hybrid – and the class of data and computing we assign to those cloud segments. Generally, one can create a comprehensive and compliant cloud security solution, but the devil is in the details and a nuanced approach to different use cases is almost always required.

Let’s first dispel a few myths. The cloud is a very safe place for data, despite FUD from those who might want you to stay in-house. The large cloud service providers (CSPs) run a tight ship, simply because they would lose customers otherwise. Even so, we can assume their millions of tenants include some that are malevolent, whether hackers, government spies or commercial thieves.

At the same time, don’t assume that CSP-encrypted storage is safe. If the CSP relies on drive-based encryption, don’t count on it: security researchers in 2015 uncovered flaws in a particular hard drive product line that rendered the automatic encryption useless. That is the lazy man’s encryption. Do it right: encrypt in the server, with your own key set.
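
To make that advice concrete, here is a minimal sketch of encrypting on your own server, with a key you hold, before anything reaches the provider. It assumes Python with the boto3 and cryptography packages; the bucket, object, and file names are hypothetical.

```python
import boto3
from cryptography.fernet import Fernet

# Generate and hold the key yourself (in practice, in your own KMS/HSM),
# so the provider never sees plaintext or key material.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("payroll.db", "rb") as f:          # hypothetical local file
    ciphertext = cipher.encrypt(f.read())

# Upload only ciphertext; a broken drive-level encryption scheme at the
# CSP then exposes nothing useful.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-secure-bucket",
              Key="payroll.db.enc",
              Body=ciphertext)
```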

Part of the data security story is that data must maintain its integrity under attack. Replication alone isn’t sufficient; just think what would happen if all three replicas of a set of files in your S3 pool are “updated” by malware. If you don’t provide a protection mechanism against this, you are likely doomed!
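
One hedged example of such a protection mechanism, assuming an S3-style store and boto3 (bucket and prefix names are hypothetical): with versioning enabled, a malicious “update” creates a new version instead of destroying the only copy, and the prior versions remain recoverable.

```python
import boto3

s3 = boto3.client("s3")

# Keep every prior version of every object in the pool.
s3.put_bucket_versioning(
    Bucket="example-s3-pool",
    VersioningConfiguration={"Status": "Enabled"},
)

# After an attack, enumerate versions to find the pre-attack copies.
resp = s3.list_object_versions(Bucket="example-s3-pool", Prefix="critical/")
for version in resp.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])
```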

We are so pleased with the flexibility of all the storage services available to us that we give scant consideration to what happens to, for example, instance storage when we delete the instance. Does it get erased, or just re-issued? And if erasure is used on an SSD, how do we get around the internal block-reassignment mechanism that simply moves deleted blocks to the free pool? A tenant using the right software tool can read those blocks. Your CSP may have an elegant solution, but good governance requires you to ask and to understand the adequacy of the answer.

Governance is a still-evolving facet of the cloud. There are solutions for the data you store yourself, complete with automated analysis and event reporting, but the rise of SaaS and all the associated flavors of as-a-Service leaves open the question of where your data actually resides and whether it complies with your standards.

The ultimate challenge for cloud storage security is the human factor. Evil admins exist, or are created, within organizations; a robust and secure system needs to accept that fact and protect against it with access controls, multi-factor authentication, and a process that identifies every place where a single disgruntled employee could destroy valued data. Be paranoid; it’s a case of when, not if!
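
As one small illustration of that paranoia, here is a hedged audit sketch, assuming boto3 and S3-style buckets, that flags any bucket where a single credential could silently and permanently destroy data (no versioning, no MFA-delete):

```python
import boto3

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    vers = s3.get_bucket_versioning(Bucket=name)
    # Both keys are absent unless the feature has been enabled.
    if vers.get("Status") != "Enabled" or vers.get("MFADelete") != "Enabled":
        print(f"AT RISK: {name} "
              f"(versioning={vers.get('Status')}, mfa_delete={vers.get('MFADelete')})")
```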

Let’s dig deeper into the security challenges of cloud storage and ways you can protect data stored in the cloud.





Enterprise Data Storage Shopping Tips


Enterprise data storage used to be an easy field. Keeping up meant just buying more drives from your RAID vendor. With all the new hardware and software today, this strategy no longer works. In fact, the radical changes in storage products impact not only storage buys, but ripple through to server choices and networking design.

This is actually a good-news scenario. In data storage, we spent the better part of three decades with gradual drive-capacity increases as the only real excitement. The result was a stagnation of choice, which made storage predictable and boring.

Today, the cloud and solid-state storage have revolutionized thinking and are driving much of the change happening in the industry. The cloud brings low-cost storage-on-demand and simplified administration, while SSDs make server farms much faster and drastically reduce the number of servers required for a given job.

Storage software is changing rapidly, too. Ceph is the prime mover in open-source storage code, delivering a powerful object store with universal storage capability: all three mainstream storage modes (block, file, and object) from a single storage pool. Separately, there are storage management solutions that create a single storage address space spanning NVDIMMs to the cloud, compression packages that typically shrink raw capacity needs by 5X, virtualization packages that turn server storage into a shared clustered pool, and tools to solve the “hybrid cloud dilemma” of where to place data for efficient and agile operations.
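
As a taste of that single-pool model, here is a minimal sketch using Ceph’s python-rados bindings; the pool name and config path are assumptions, and a real cluster would expose the same pool through RBD (block) and CephFS (file) as well.

```python
import rados

# Connect to the cluster using a standard Ceph config file (path assumed).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # One RADOS pool underlies object, block, and file access alike.
    ioctx = cluster.open_ioctx("example-pool")
    ioctx.write_full("hello-object", b"stored once in the unified pool")
    print(ioctx.read("hello-object"))
    ioctx.close()
finally:
    cluster.shutdown()
```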

A single theme runs through all of this: storage is getting cheaper, and it’s time to reset our expectations. The traditional model of one-stop shopping at your neighborhood RAID vendor is giving way to a savvier COTS buying model, where the interchangeability of component elements is so good that integration risk is negligible. We are not all the way home on the software side, but hardware is now like Lego, with the parts always fitting together. The rapid uptake of all-flash arrays has demonstrated just how easily COTS-based solutions come together.

The future of storage is “more, better, cheaper!” SSDs will reach capacities of 100 TB in late 2018, blowing away any hard-drive alternative. Primary storage is transitioning to all-solid-state as we speak, and “enterprise” hard drives are becoming obsolete. The tremendous performance of SSDs has also replaced the RAID array with the compact storage appliance. We aren’t stopping there, though. NVDIMM is bridging the gap between storage and main memory, while NVMe-over-Fabrics solutions ensure that hyperconverged infrastructure will be a dominant approach in future data centers.

With all these changes, what storage technologies should you consider buying to meet your company’s needs? Here are some shopping tips.





HPE Snaps Up Flash Supplier Nimble Storage


Hewlett Packard Enterprise on Tuesday said it signed a deal to buy Nimble Storage, a maker of all-flash and hybrid-flash storage arrays, for $1 billion in cash.

HPE said Nimble’s predictive flash products complement its 3PAR and MSA products and advance its hybrid IT strategy. The company plans to integrate Nimble’s InfoSight Predictive Analytics platform across its storage product portfolio.

Founded in 2007, San Jose, Calif.-based Nimble has been a prominent player in the rise of flash storage in the enterprise data center. In a 2014 Network Computing blog post, storage expert Howard Marks described Nimble as successful not only with its technology, but also in selling the product. The company went public in 2013.

Krista Macomber, senior analyst at Technology Business Research, said Nimble has been one of the more successful flash storage vendors by combining innovation in flash components with its InfoSight analytics platform. Bringing analytics to the table for functions like performance monitoring “can help customers maximize their investment in flash technologies,” she told me in a phone interview.

While the cost of flash storage has come down in the past couple years, it still carries a premium, Macomber said. Being able to use analytics software to optimize flash is a valuable capability for an enterprise customer, she said.

She called HPE’s purchase of Nimble a sign of the times as the storage market moves away from traditional disk and standalone technologies and towards flash-based converged and hyperconverged models. “HPE has been very active in making acquisitions to accelerate the pace that it can evolve its portfolio towards these markets,” she said.

In January, HPE announced an agreement to buy hyperconverged startup SimpliVity for $650 million in cash.

“Nimble Storage’s portfolio complements and strengthens our current 3PAR products in the high-growth flash storage market and will help us deliver on our vision of making hybrid IT simple for our customers,” HPE president and CEO Meg Whitman said in a prepared statement.

Rohit Kshetrapal, CEO at Nimble competitor Tegile, said in an email statement that the deal will fill the gap in HPE’s product lines created by its “aging LeftHand and MSA lines.” He also said HPE will need to draw the line between the Nimble and 3PAR products, describing them as overlapping.

The Nimble deal is expected to close in April.




Swordfish Storage Management Falls Short


I think most storage admins today look wistfully at their server and network counterparts and wish they had automated management tools on par with what those counterparts use. Clearly, any real progress in agile data centers requires storage to catch up and do what public cloud vendors have been doing for years.

Automated orchestration is the goal for storage, just as it is for the other areas. This is driven partly by the emergence of software-defined storage (SDS), albeit without standards for APIs. Meanwhile, hyperconverged systems create their own management challenge with distributed virtual SANs.

The Storage Networking Industry Association (SNIA) has begun the development of a storage management tool aimed at large installations. Dubbed Swordfish, this project aims to simplify management and also cope with today’s scale-out requirements. So far, unfortunately, it falls short of addressing problems on the horizon.

Swordfish derives from a prior industry effort, the DMTF’s Redfish, which, by its backers’ own admission, is complicated to set up and use. Swordfish builds on Redfish’s fundamental object classes for devices and appliances, adding a service layer that defines data services and a systems layer that allows aggregated groups of objects to be called out.

Where’s the GUI?

The Swordfish specification is still under development. Release 1.0 has been available since January, but downloadable code is still some months away. Some mock-ups of how screens might look are available on the SNIA site.

Swordfish currently supports only block I/O and filers, though object storage, hyperconverged systems, and storage security are all on the roadmap. There are Swordfish APIs that applications can call for provisioning and monitoring functions, which underscores one of the shortfalls of the approach: unlike most management tools, Swordfish is a framework for standardizing API calls to low-level functions. It isn’t a high-order, single-pane GUI management tool, so it isn’t really automated.
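
To see what that framework amounts to in practice, here is a hedged sketch of walking the Swordfish REST model with plain HTTP. The host and credentials are hypothetical; the paths follow the published 1.0 resource model, but a given implementation may differ.

```python
import requests

BASE = "https://storage-mgmt.example.com"     # hypothetical management endpoint
AUTH = ("admin", "password")                  # hypothetical credentials

# Walk the StorageServices collection and count volumes per service.
services = requests.get(f"{BASE}/redfish/v1/StorageServices",
                        auth=AUTH, verify=False).json()
for member in services.get("Members", []):
    svc = requests.get(BASE + member["@odata.id"], auth=AUTH, verify=False).json()
    vols = requests.get(BASE + svc["Volumes"]["@odata.id"],
                        auth=AUTH, verify=False).json()
    print(svc.get("Id"), "volumes:", len(vols.get("Members", [])))
```

Everything here is GETs against JSON resources, and provisioning is more calls of the same shape, which is exactly why a GUI layer on top is needed.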

Top-layer functionality will require another layer of code, likely from platform vendors like Dell Technologies, Hewlett Packard Enterprise, or NetApp, or from mainstream storage software suppliers such as IBM or Veritas. These companies can create the drill-down dashboards that are all the rage today.

Without the GUI layer, storage admins using Swordfish are reduced to defining their storage environment in exhaustive detail in tabular form, and operations are CLI-based. It isn’t yet clear how far auto-discovery can go in building the system topology. Swordfish class definitions are very detailed, but much of the Swordfish management data in the system will likely go unused, while its presence will slow admin operations enormously. For the admin, operating with a naked Swordfish interface is probably not a good plan, but the bolt-on front ends in the pipeline may significantly ease management tasks in the block-I/O space, especially if policy-based control and operational templates are included.

Not focused on future

In a way, Swordfish seems to be solving yesterday’s SAN problems, with a priority on support for block and filer rather than cloud and object storage, a CLI-based user interface, and fine granularity on individual elements such as drives or adapters. In contrast, today’s server management tools are more “cloudy” in their approach, aiming to automate away most management interactions. We should aim to “orchestrate” storage in much the way servers and virtual networks are handled in public clouds.

To meet future requirements driven by SDS, the Swordfish working group will need to shift its focus to coordinating and controlling the various SDS elements in real time. This is already a major issue in hyperconverged systems design, and no good storage management solution exists for it yet. The desired solution will itself be virtualized, allowing management to scale on demand and the tools to coexist within a fully virtualized environment.

The industry needs Swordfish or something like it to make sense of software-defined storage. We need something to standardize the semantics of SDS and to converge the APIs into a solid working set. Otherwise, we’ll have to rewrite each app or script set every time a new data service or hardware node is added to the system.

This API convergence has to accommodate extended metadata mechanisms that support data-driven storage services. A tag that says “remove after three years” needs to be honored, for example. This tagging will be a major part of any SDS system and we are already seeing its use in backup systems such as Rubrik.
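
As a sketch of what honoring such a tag might look like, with a purely hypothetical tag name and in-memory store (a real SDS system would enforce this inside the data path):

```python
import datetime

# Hypothetical object store: each entry carries an "expires" retention tag.
object_store = {
    "report-2014.pdf": {"expires": "2017-01-01", "data": b"..."},
    "report-2017.pdf": {"expires": "2020-01-01", "data": b"..."},
}

def purge_expired(store, today=None):
    """Honor the retention tag: delete anything past its expiry date."""
    today = today or datetime.date.today()
    for name in list(store):                  # copy keys; we mutate the dict
        if datetime.date.fromisoformat(store[name]["expires"]) <= today:
            del store[name]
            print("purged", name)

purge_expired(object_store)
```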

Without a solid storage management solution for the next evolution of storage, we’ll have vendors hyping their SDS solutions like crazy, but the systems won’t be able to talk to each other. Clearly, today’s Swordfish doesn’t stretch far enough.


