
6 Ways to Transform Legacy Data Storage Infrastructure


So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other new innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever — and generally inexpensive — ways to answer the question, keep your job, and even get a pat on the back.

SSD and flash are game-changers, so they need to be incorporated into your storage infrastructure. On a total-cost basis, SSDs beat enterprise-class hard drives because they speed up workloads enough to reduce the number of storage appliances and servers you need. It's even better if your servers support NVMe: the interface is becoming ubiquitous and will replace SAS and, a bit later, SATA, simply because it's much faster and carries far less protocol overhead.
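If you want a quick inventory before making the NVMe case, a short script can show which servers already have NVMe controllers. This is a minimal sketch that assumes a Linux host and simply walks sysfs; the exact attribute files available vary slightly by kernel version.

```python
# Minimal sketch: list NVMe controllers visible to a Linux host by walking
# sysfs. Assumes the nvme driver is loaded; paths are the standard sysfs
# locations, but attribute availability varies by kernel version.
from pathlib import Path

def list_nvme_controllers():
    root = Path("/sys/class/nvme")
    if not root.exists():
        print("No NVMe controllers found (or not a Linux host).")
        return
    for ctrl in sorted(root.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        print(f"{ctrl.name}: {model}")

if __name__ == "__main__":
    list_nvme_controllers()
```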

As for RAID arrays, we have to face up to the harsh reality that a RAID controller can only keep up with a few SSDs. The answer is either to add an all-flash array and relegate the RAID arrays to cool or cold secondary storage, or to move to a new architecture based on hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 in a system. They are designed to bolt onto an existing SAN and require minimal configuration changes to function. Typically, all-flash boxes have smaller capacities than the RAID arrays, since they have enough spare I/O cycles for near-real-time compression and can down-tier (demote) compressed data to the old RAID arrays.

With an all-flash array, which isn't outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combined flash-plus-secondary-storage pool that typically delivers 5x the effective capacity thanks to compression. Just tell the CEO how many RAID arrays and drives you didn't buy. That's worth a hero badge!
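To make that capacity claim concrete, the back-of-the-envelope math looks like this. The raw capacities below are purely illustrative, not figures from any particular deployment.

```python
# Back-of-the-envelope effective-capacity math for a flash tier with inline
# compression in front of existing RAID arrays. All figures are illustrative.
raw_flash_tb = 20          # raw capacity of the new all-flash array
raw_hdd_tb = 100           # raw capacity of the legacy RAID arrays kept as tier 2
compression_ratio = 5.0    # 5:1 data reduction for a reasonably compressible workload

effective_tb = (raw_flash_tb + raw_hdd_tb) * compression_ratio
print(f"Raw capacity:       {raw_flash_tb + raw_hdd_tb} TB")
print(f"Effective capacity: {effective_tb:.0f} TB at {compression_ratio:.0f}:1 compression")
```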

The idea of a flash front end works for desktops, too. Use a small flash drive for the OS (C: drive) and store colder data on those 3.5" HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has evolved into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn't that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality Ring software, the result is a decent object-storage solution, especially if compression and global deduplication are supported.
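Once such a cluster is up, applications typically reach it through its S3-compatible front end. The sketch below shows that access pattern with boto3 against a Ceph RADOS Gateway; the endpoint, credentials, and bucket name are placeholders, and it assumes a cluster like the one just described is already deployed (a Scality setup would look much the same through its S3 connector).

```python
# Sketch: talking to a Ceph-based object store through its S3-compatible
# RADOS Gateway using boto3. Endpoint, credentials, and bucket name are
# placeholders; assumes the object-storage cluster is already running.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:7480",   # hypothetical RGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",             # placeholder credentials
    aws_secret_access_key="RGW_SECRET_KEY",
)

s3.create_bucket(Bucket="cold-archive")
with open("report.tar.gz", "rb") as f:
    s3.put_object(Bucket="cold-archive", Key="2018/q1/report.tar.gz", Body=f)

# List what landed in the bucket.
for obj in s3.list_objects_v2(Bucket="cold-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```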

Likely by now, you are using public clouds for backup. Consider "perpetual" storage using a snapshot tool or continuous backup software to reduce your RPO and RTO. Use multi-zone operations in the public cloud to converge DR onto the perpetual storage setup as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.
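As a rough illustration of the snapshot-driven approach, here is a sketch of an hourly snapshot job against AWS EBS using boto3. The volume ID, region, and retention window are placeholders, and other clouds expose equivalent snapshot APIs.

```python
# Sketch: a scheduled snapshot job against AWS EBS as one example of
# "perpetual" cloud-side protection. Volume ID and retention are placeholders.
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical volume

# Take a point-in-time snapshot; run this hourly to shrink RPO.
snap = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description=f"hourly-{datetime.datetime.utcnow():%Y-%m-%dT%H:%M}Z",
)
print("Created snapshot", snap["SnapshotId"])

# Expire snapshots older than 7 days to keep costs predictable.
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=7)
snaps = ec2.describe_snapshots(
    Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
    OwnerIds=["self"],
)["Snapshots"]
for s in snaps:
    if s["StartTime"] < cutoff:
        ec2.delete_snapshot(SnapshotId=s["SnapshotId"])
        print("Deleted", s["SnapshotId"])
```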

On the software front, the world of IT is migrating to services-centric software-defined storage (SDS), which allows data services to be scaled and chained via a virtualized microservices model. Even older SANs and server drives can be pulled into the methodology, with software making all the legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If this prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, apply one of the “Zip” programs and choose the encrypted file option. This saves a lot of space and gives you encryption!
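An encrypting zip utility is the simplest route. For admins who would rather script it, the sketch below does the same compress-then-encrypt step with Python's tarfile and the third-party cryptography package (Fernet) instead of a zip tool; the paths are placeholders and the key handling is deliberately simplistic.

```python
# Sketch of the compress-then-encrypt idea using tarfile plus the third-party
# "cryptography" package, as an alternative to an encrypting zip utility.
# Store the key somewhere far safer than alongside the archive.
import tarfile
from cryptography.fernet import Fernet

def archive_and_encrypt(src_dir: str, out_path: str, key: bytes) -> None:
    tar_path = out_path + ".tar.gz"
    with tarfile.open(tar_path, "w:gz") as tar:      # compress the cold data
        tar.add(src_dir, arcname=".")
    with open(tar_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())   # then encrypt the archive
    with open(out_path, "wb") as f:
        f.write(ciphertext)

key = Fernet.generate_key()   # keep this in a vault, not next to the data
archive_and_encrypt("/data/cold/2015-projects", "/archive/2015-projects.enc", key)
```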

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.





What NVMe over Fabrics Means for Data Storage


NVMe-oF will speed adoption of Non-Volatile Memory Express in the data center.

The last few years have seen Non-Volatile Memory Express (NVMe) completely revolutionize the storage industry. Its wide adoption has driven down flash memory prices. With lower prices and better performance, more enterprises and hyper-scale data centers are migrating to NVMe. The introduction of NVMe over Fabrics (NVMe-oF) promises to accelerate this trend.

The original base specification of NVMe is designed as a protocol for storage on flash memory that uses existing, unmodified PCIe as a local transport. This layered approach is very important. NVMe does not create a new electrical or frame layer; instead it takes advantage of what PCIe already offers. PCIe has a well-known history as a high-speed, interoperable bus technology. However, it's not well suited to building a large storage fabric or covering distances longer than a few meters. With that limitation, NVMe on its own would be confined to direct-attached storage (DAS), essentially connecting SSDs to the processor inside a server, or perhaps connecting all-flash arrays (AFAs) within a rack. NVMe-oF takes things much further.

Connecting storage nodes over a fabric is important because it allows multiple paths to a given storage resource. It also enables concurrent operations against distributed storage and provides a means to manage potential congestion. Further, it allows thousands of drives to be connected in a single pool of storage, since the pool is no longer limited by the reach of PCIe but can instead ride on a fabric technology like RoCE or Fibre Channel.

NVMe-oF describes a means of binding the regular NVMe protocol to a chosen fabric technology: a simple abstraction that lets native NVMe commands be transported over a fabric with minimal processing to map between the fabric transport and PCIe. Product demonstrations have shown that the latency penalty for accessing an NVMe SSD over a fabric, as opposed to a direct PCIe link, can be as low as 10 microseconds.

The layered approach means that a binding specification can be created for any fabric technology, although some fabrics may be better suited for certain applications. Today there are bindings for RDMA (RoCE, iWARP, Infiniband) and Fibre Channel. Work on a binding specification for TCP/IP has also begun.
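From the host side, those bindings are consumed through standard tooling. The sketch below uses nvme-cli on a Linux initiator to discover and connect to an RDMA (RoCE) target; the address and NQN are hypothetical, and it assumes the nvme-cli package plus an RDMA-capable NIC with the nvme-rdma module loaded.

```python
# Sketch: attaching an NVMe-oF namespace from a Linux initiator by shelling
# out to nvme-cli. Transport, address, and NQN are hypothetical.
import subprocess

TARGET_ADDR = "192.168.10.20"                      # hypothetical target portal
TARGET_NQN = "nqn.2018-01.io.example:flashpool"    # hypothetical subsystem NQN

# Ask the target what subsystems it exposes.
subprocess.run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420"],
               check=True)

# Connect; the namespaces then appear as local /dev/nvmeXnY block devices.
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_ADDR, "-s", "4420"], check=True)

# Confirm the fabric-attached namespaces are visible alongside local drives.
subprocess.run(["nvme", "list"], check=True)
```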

Different products will use this layered capability in different ways. A simple NVMe-oF target, consisting of an array of NVMe SSDs, may expose all of its drives individually to the NVMe-oF host across the fabric, allowing the host to access and manage each drive individually. Other solutions may take a more integrated approach, using the drives within the array to create one big pool of storage that is offered to the NVMe-oF initiator. With this approach, management of drives can be done locally within the array, without requiring the attention of the NVMe-oF initiator or any higher-layer software application. This also allows the NVMe-oF target to implement and offer NVMe protocol features that may not be supported by the drives within the array.
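For a sense of what the simple "expose each drive individually" style of target involves, here is a rough sketch using the Linux kernel's nvmet configfs interface. It assumes the nvmet and nvmet-rdma modules are loaded and root privileges; the NQN, device path, and addresses are illustrative, and a commercial array layers far more on top of this.

```python
# Rough sketch: exposing a single local NVMe drive over RDMA with the Linux
# kernel nvmet target via configfs. Assumes nvmet and nvmet-rdma are loaded
# and the script runs as root; NQN, device, and addresses are illustrative.
from pathlib import Path

CFG = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2018-01.io.example:jbod-drive0"

# Create the subsystem and attach one namespace backed by a local drive.
subsys = CFG / "subsystems" / NQN
(subsys / "namespaces" / "1").mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1\n")
(subsys / "namespaces" / "1" / "device_path").write_text("/dev/nvme0n1\n")
(subsys / "namespaces" / "1" / "enable").write_text("1\n")

# Define an RDMA port for initiators to connect to.
port = CFG / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("rdma\n")
(port / "addr_adrfam").write_text("ipv4\n")
(port / "addr_traddr").write_text("192.168.10.20\n")
(port / "addr_trsvcid").write_text("4420\n")

# Linking the subsystem into the port makes it visible on the fabric.
(port / "subsystems" / NQN).symlink_to(subsys)
```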

A good example of such a feature is secure erase. A lower-cost drive may not support the feature, but if that drive is put into an NVMe-oF AFA target, the AFA can implement secure erase itself and present it to the initiator. The NVMe-oF target will handle the operations to the lower-cost drive in order to properly support the feature from the perspective of the initiator. This gives implementers a great deal of flexibility to meet customer needs by varying hardware vs. software feature implementation, drive cost, and performance.

The recent plugfest at UNH-IOL focused on testing simple RoCE and Fibre Channel fabrics. In these tests, a single initiator and target pair were connected over a simple two-switch fabric. UNH-IOL performed NVMe protocol conformance testing, generating storage traffic to ensure data could be transferred error-free. Additionally, testing involved inducing network disruptions to ensure the fabric could recover properly and transactions could resume.

In the data center, storage is used to support many different types of applications with an unending variety of workloads. NVMe-oF has been designed to enable flexibility in deployment, offering choices for drive cost and features support, local or remote management, and fabric connectivity. This flexibility will enable wide adoption. No doubt, we’ll continue to see expansion of the NVMe ecosystem.




How Flash Storage Supports Broadway Video’s 4K Growth


All-flash system enables fast, cost-effective production for entertainment and media company.

It would be an understatement to say the digital media and entertainment business changes quickly. At Broadway Video, we're doing business every hour of every day, and one of our biggest challenges is maintaining headroom, bandwidth, and storage space. The number of shows we produce and the file sizes required to stay relevant have grown exponentially, making it challenging for production companies like ours to keep up on bandwidth and storage.

While 4K seems to be an industry standard now, we understand that broadcast technology is constantly evolving. In three years’ time or less, 8K 120p could be the new content resolution. This means that companies must maintain agility and relevance by offering a wide range of frame rates and sizes to deliver and distribute content in 4K and higher.

To enhance 4K offerings and beyond, we added an all-flash storage system into our infrastructure that allows for quick-turn, efficient and cost-effective production, post-production and delivery of TV shows and commercials. For Broadway Video, Hitachi’s Virtual Storage Platform (VSP) G Series was the obvious choice.

What all-flash means for Broadway Video:

Managing quick-turn, mission-critical data: In the media business, edits to a show are often being made 30 seconds before delivery to distribution channels. Overflow work, like recoloring jobs and editing an opening sequence, is quick-turn as well. Some shows are written on Wednesday, shot on Friday, and edited from Friday to Saturday before going on air that night. In these moments, being able to turn data around quickly is essential, and a flash-based storage system makes it possible.

Flash-based storage systems give companies the high-speed turnaround, efficiency, and data throughput that 4K, high-frame-rate, HDR content requires. A virtual storage lineup delivers performance, resiliency, and workload scalability for even the most challenging digital environments.

Providing quality, industry-leading service: On the cusp of 4K technology, media companies must find ways to save money and manage efficiently while running multiple workstations at high frame rates and densities in the 4K workspace and above. There are unique challenges to delivering the large files required for 4K 60 or 4K 59.94 content, and this is often where systems fall apart.

Companies that expand to support 4K content and technology must ensure they have robust and optimized storage solutions that are not only fast, but reliable and efficient. A flash storage system will improve performance of business-critical applications by eliminating storage bottlenecks and delivering immediate response rates while ensuring that no data is lost in the process.

In an ever-changing industry, partnering with a technology vendor is essential for digital transformation. We must predict upcoming trends and pivot to meet needs with solutions that are both cost-effective and efficient, placing a tremendous focus on digital distribution and data storage systems. For Broadway Video, our strategic partnership with Hitachi Vantara allowed us to transition to a complete digital workflow and create a foundation for future growth of our post-production and digital distribution services.

Stacey Foster, President and Managing Director, Broadway Video Digital and Production, has worked with Broadway Video since 1981. Stacey has served as Coordinating Producer for Saturday Night Live since 1999. Having joined SNL in 1985, he has overseen all technical aspects of production for the show and for numerous SNL and NBC specials, and has lent his expertise to The Tonight Show with Jay Leno, Late Night (working with hosts David Letterman, Conan O'Brien, and Jimmy Fallon), and Mark Burnett's Survivor. Stacey graduated from Montclair State University.




How Spectre and Meltdown Impact Data Center Storage


IT news over the last few weeks has been dominated by stories of vulnerabilities found in Intel x86 chips and almost all modern processors. The two exposures, Spectre and Meltdown, are a result of the speculative execution that all CPUs use to anticipate the flow of execution of code and ensure that internal instruction pipelines are filled as optimally as possible. It’s been reported that Spectre/Meltdown can have an impact on I/O and that means storage products could be affected. So, what are the impacts and what should data center operators and storage pros do?

Speculative execution

Speculative execution is a performance-improvement process used by modern processors where instructions are executed before the processor knows whether they will be needed. Imagine some code that branches as the result of a logic comparison. Without speculative execution, the processor needs to wait for the completion of that logic comparison before continuing to read ahead, resulting in a drop in performance. Speculative execution allows both (or all) branches of the logic to be followed; those that aren’t executed are simply discarded and the processor is kept active.

Both Spectre and Meltdown pose the risk of unauthorized access to data in this speculative execution process; the two papers describing the vulnerabilities give a more detailed breakdown of the problem. Vendors have released OS and BIOS workarounds for the exposures. Meltdown fixes have noticeably impacted performance on systems with high I/O activity due to the extra code needed to isolate user and system memory during context switches (syscalls). Reports range from 5% to 50% additional CPU overhead, depending on the specific platform and workload.
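On Linux hosts and Linux-based appliances where you have shell access, you can see what the kernel reports about its own mitigations. This is a minimal sketch; it assumes a kernel recent enough (4.15, or a vendor build with the fixes backported) to expose the sysfs vulnerabilities directory.

```python
# Sketch: report what the kernel says about Spectre/Meltdown mitigations.
# Assumes a Linux kernel that exposes /sys/devices/system/cpu/vulnerabilities.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if not vuln_dir.exists():
    print("Kernel does not expose vulnerability status; it likely predates the fixes.")
else:
    for entry in sorted(vuln_dir.iterdir()):
        # Each file is named after a vulnerability and states the mitigation.
        print(f"{entry.name:20} {entry.read_text().strip()}")
```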

Storage repercussions

How could this impact storage appliances and software? Over the last few years, almost all storage appliances and arrays have migrated to the Intel x86 architecture. Many are now built on Linux or Unix kernels, which means they are directly affected by the processor vulnerabilities and, once patched, will see increased system load and higher latency.

Software-defined storage products are also potentially impacted, as they run on generic operating systems like Linux and Windows. The same applies to virtual storage appliances running in VMs and to hyperconverged infrastructure, and of course to public cloud storage instances and I/O-intensive cloud applications. Quantifying the impact is difficult, as it depends on how many system calls the storage software makes. Some products may be more affected than others.

Vendor response

Storage vendors have had mixed responses to the CPU vulnerabilities. For appliances or arrays that are deemed "closed systems" unable to run user code, the vendors' stance is that these systems are unaffected and won't be patched.

Where appliances can run external code, as with Pure Storage's FlashArray, which can execute user code via a feature called Purity Run, there will be a need to patch. Similarly, end users running SDS solutions on generic operating systems will need to patch. HCI and hypervisor vendors have already started to make announcements about patching, although the results have been varied. VMware, for instance, released a set of patches only to recommend not installing them due to customer issues. Intel's advisory earlier this week warning of problems with its patches has added to the confusion.

Some vendors, such as Dell EMC, haven't made public statements about the impact of the vulnerabilities on all of their products. For example, Dell legacy storage product information is openly available, while information about Dell EMC products sits behind support firewalls. If you're a user of those platforms, you will have access; for wider market context, however, it would have been helpful to see a consolidated response in order to assess the risk.

Reliability

So far, the patches released don't seem to be very stable. Some have been withdrawn; others have crashed machines or made them unbootable. Vendors are in a difficult position, because the details of the vulnerabilities weren't widely circulated in the community before they were made public. Some storage vendors only found out about the issue when the news broke in the press. This means some of the patches may have been rushed to market without full testing of their impact once applied.

To patch or not?

What should end users do? First, it's worth evaluating the risk and impact of either applying or not applying patches. Computers that are regularly exposed to the internet, like desktops and public cloud instances (including virtual storage appliances running in a cloud instance), are likely to be most at risk, whereas storage appliances behind a corporate firewall on a dedicated storage management network are at lowest risk. Measure this risk against the impact of applying the patches and what could go wrong. Applying patches to a storage platform supporting hundreds or thousands of users, for example, is a process that needs thinking through.

Action plan

Start by talking to your storage vendors. Ask them why they believe their platforms are exposed or not. Ask what testing of patching has been performed, from both a stability and performance perspective. If you have a lab environment, do some before/after testing with standard workloads. If you don’t have a lab, ask your vendor for support.
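One way to structure that before/after testing is with a scripted fio run, so the workload is identical on each pass. The sketch below is illustrative: the target file, block size, and runtime are placeholders, and it should be pointed at a test LUN or scratch file, never at production data.

```python
# Sketch: a repeatable before/after patch benchmark using fio. Target path,
# block size, and runtime are illustrative; run against test storage only.
import json
import subprocess

def run_fio(label: str, target: str) -> float:
    """Run a 4K random-read test and return the measured IOPS."""
    result = subprocess.run(
        ["fio", "--name", label, "--filename", target,
         "--rw", "randread", "--bs", "4k", "--iodepth", "32",
         "--direct", "1", "--runtime", "60", "--time_based",
         "--size", "10G", "--output-format", "json"],
        capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]
    return job["read"]["iops"]

baseline = run_fio("pre-patch", "/mnt/testlun/fio.dat")
print(f"Pre-patch 4K random read: {baseline:.0f} IOPS")
# Re-run the same function after patching and compare the two numbers.
```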

As there are no known exploits in the wild for Spectre/Meltdown, a wise approach is probably to wait a little before applying patches. Let the version 1 fixes be tested in the wild by other folks first. Invariably issues are found that then get corrected by another point release. Waiting a little also gives time for vendors to develop more efficient patches, rather than ones that simply act as a workaround. In any event, your approach will depend on your particular set of circumstances.




How IT Storage Professionals Can Thrive In 2018


Just a few years ago, it took a much larger employee base to administer enterprise-level IT. Each staffer operated in a silo, managing a variety of areas that included storage. Like a Russian doll, these silos were broken down further into still more specialties. All told, the storage team of a large, global enterprise could number as many as 100 people.

Today, the idea of 100 staffers just to administer storage seems fantastical, as IT departments have focused more and more on their software and dev environments than on their infrastructure. That old staff size wasn't bloat, however: each member was considered vital, because the complexity of an enterprise's storage estate was a major issue; everything was complex, and nothing was intuitive.

But then a revolution happened. It arrived in the form of the extensive worldwide economic downturn of 2008-2009. Driven by the collapse of an unstable housing market, every sector of the economy stumbled, and businesses were forced to focus on leveraging technology for IT innovation. This disruption was followed by the AI Big Bang and, over time, a dissolution of traditional roles.

IT professionals suffered, especially within the storage industry. In many enterprises, as much as 50% of the storage workforce was pink-slipped. Despite this, the amount of data we're administering has skyrocketed. IDC forecasts that by 2025, the global datasphere will grow to 163 ZB (a zettabyte is a trillion gigabytes).

IT employment levels eventually stabilized, but according to Computer Economics, organizations are experiencing productivity gains without accompanying significant increases in spending. In other words, IT organizations are doing more with less. Virtualization and automation have been speeding up tasks, and the servers themselves are much faster than they once were.

The Bureau of Labor Statistics projects that employment in computer and information technology occupations will grow 13% from 2016 to 2026. IT staffers will nonetheless perform an extensive range of activities, says Gartner. In the next year, beyond managing software and hardware across applications, databases, servers, storage, and networking, IT teams will also be expected to evangelize, consult, broker, coach, and deliver solutions to their organizations.

Hiring managers will therefore increasingly focus on cultivating teams with more versatile skills, including non-IT functions. IT professionals must also be prepared to embrace education and certification initiatives to hone specialized skills that are broad enough to transfer to other platforms and verticals. Training will be the new normal.

The right tool for the job

Storage specialists will need a clear understanding of how systems can meet the needs of their enterprises. As with any hardware, IT admins require the right tool for the right job. They need to remember that a one-size-fits-all option is not a valid solution. Just as an expensive supercar can't replace a city bus, some systems fit specific needs better than others.

That means teams shouldn’t just throw money at a problem, but consider variables such as proximity to compute resources, diversity of performance, capital expenditure versus operating expenses and more. In general, storage professionals will need to right-size their solutions so they can scale to their changing needs. As with any purchase, no one wants to waste money on what they don’t need. But they also shouldn’t underestimate their long-term requirements in a manner that eventually hobbles their business. We’ve all heard the stories of enterprises held back by their storage systems.

Fundamentally, however, faster is usually better. Faster systems can provide more in-depth insights while responding to customers almost instantaneously. A system suited to your needs can also boost the performance of your existing applications. IT staffers will need to look for solutions that come with a portfolio of management tools. To improve storage efficiency, look for a solution with data reduction technologies like pattern removal, deduplication, and compression. And faster storage offerings leveraging flash technology have an impact that extends beyond the storage environment and its associated applications to entire clouds and data centers.
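Before paying for data reduction features, it can help to estimate what they would buy you on a representative sample of your own data. The sketch below is a crude estimator: fixed 4 KB chunking and zlib stand in for the smarter variable chunking and hardware-assisted compression that real arrays use, and the sample path is a placeholder.

```python
# Sketch: crude estimate of combined dedupe + compression savings on a sample
# data set. Fixed 4 KB chunks and zlib are simplifications of what arrays do.
import hashlib
import zlib
from pathlib import Path

CHUNK = 4096

def reduction_estimate(path: str):
    seen, raw, reduced = set(), 0, 0
    for f in Path(path).rglob("*"):
        if not f.is_file():
            continue
        with open(f, "rb") as fh:
            while chunk := fh.read(CHUNK):
                raw += len(chunk)
                digest = hashlib.sha256(chunk).digest()
                if digest not in seen:             # only unique chunks get stored
                    seen.add(digest)
                    reduced += len(zlib.compress(chunk))
    return raw, reduced

raw, reduced = reduction_estimate("/data/sample")
print(f"Raw: {raw/1e9:.1f} GB, after dedupe+compress: {reduced/1e9:.1f} GB "
      f"({raw/max(reduced, 1):.1f}:1)")
```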

With such tools, enterprise operations can maximize their resources for optimal speed while also reducing infrastructure costs across their compute and storage environments.

Get in tune with modernization

Storage professionals will need to embrace automation. Each storage pro will need to learn it, leverage it, and understand its various use cases. In fact, teams should seek out as much automation as their vendor can provide, because their jobs will only continue to shift toward managing more capacity with smaller staffs.

Additionally, IT pros will move to converged infrastructure, which simplifies IT by combining resources into a single, integrated solution. This approach reduces costs while also minimizing compatibility issues among servers, storage systems, and network devices. Converged infrastructure can boost productivity by eliminating large portions of design, deployment, and management work. Teams will be up and running faster so they can put their focus elsewhere.

Storage professionals should embrace their new hybrid job descriptions. They'll likely need to reach beyond their domain skills, certifications, and comfort zones. As their jobs continue to evolve, storage professionals will become hybrid specialists as the old silos continue to collapse.

Some desired job skills are already evident, such as a working knowledge of cloud. Others may be less so: those with an understanding of the basics of marketing are more likely to thrive as they argue for their fair slice of the budgeting pie.

All told, it’s best to get in tune with modernization. After all, it’s unavoidable and fundamental to the IT workplace.

Eric Herzog is Chief Marketing Officer and Vice President, Worldwide Storage Channels for IBM Storage Systems and Software-Defined Infrastructure. Herzog has over 30 years of product management, marketing, business development, alliances, sales, and channels experience in the storage software, storage hardware, and storage solutions markets, managing all aspects of marketing, product management, sales, alliances, channels, and business development in both Fortune 500 and start-up storage companies.


