Monthly Archives: January 2017

8 Vendors Poised To Make Strides In Storage In 2017


2017 is shaping up to be a year of great change in storage. The hard disk is dying, and flash-based products are surging on all fronts, though Intel and Micron have a flash alternative, 3D XPoint, in the pipeline. Object storage is gaining ground over traditional arrays. Software-defined storage is going to be hot, especially in hyperconverged architectures. Above all, 2017 will see the cloud, both public and hybrid, stealing growth away from the data center and its vendors.

Traditional storage array and filer vendors are looking for a share of the software and support revenue as hardware platforms enter a race to the bottom driven by COTS gear and low-priced drives. Open source projects are competing with them as well, suggesting that we will have plenty of storage software choices for 2017 and beyond.

Storage-as-a-Service is a new alternative to data center disk farms, and vendors are touting cloud-based variants as a solution to the hybrid cloud dilemma of where to place data. On top of all of this, old faithful storage interconnects such as SAS, SATA, and Fibre Channel are now competing against upstarts like NVMe and NVMe over Fabrics (NVMe-oF), while Ethernet has surged ahead in cost performance to become the cloud interconnect of choice.

Let’s look at some storage vendors that will be worth keeping an eye on in 2017. These are all companies with good plans, which, if fully executed, should give us leading products this year.


Linux Foundation LFCS and LFCE: Jorluis Perales | Linux.com


The Linux Foundation offers many resources for developers, users, and administrators of Linux systems. One of the most important offerings is its Linux Certification Program. The program is designed to help you differentiate yourself in a job market that’s hungry for your skills.

How well does the certification prepare you for the real world? To illustrate that, the Linux Foundation is featuring some of those who have recently passed the certification examinations. These testimonials should serve to help you decide if either the Linux Foundation Certified System Administrator (LFCS) or the Linux Foundation Certified Engineer (LFCE) certification is right for you. This time, we talk with recently certified Jorluis Perales.

How did you become interested in Linux and open source?

I became interested in Linux a few years ago, when I realized how easy it was to use a different OS that required fewer resources and, best of all, was available for free. I downloaded a CentOS image and created my very first machine. Staring at the terminal, I thought, "Now what?" It was hard at the beginning, coming from a polished GUI and then facing a terminal. I started to read a lot of Linux books, followed step-by-step tutorials, and, of course, never gave up. Within a month, I had built multiple Linux servers (Apache, NFS, mail, and proxy), started to participate in several local Linux conferences, and made good connections that helped me understand the benefits of using Linux for everything.

What Linux Foundation course did you achieve certification in? Why did you select that particular course?

I achieved both the LFCS and recently the LFCE. Both exams make it very apparent how well you understand Linux and its functionality. I created multiple virtual labs and practiced a lot… and I mean a lot. Both exams were challenging, but it feels so good when you study and know exactly how things work in Linux.

What are your career goals? How do you see Linux Foundation certification helping you achieve those goals and benefiting your career?

I would like to become a Linux Advisor. I love resolving Linux issues and educating people to leave their fear of Linux behind. The Linux Foundation certifications have helped me a lot: the knowledge you gain while preparing for them covers real-world scenarios and prepares you to solve Linux problems and deliver better solutions.

What other hobbies or projects are you involved in? Do you participate in any open source projects at this time?

I have a Raspberry Pi, which I use for every single project that crosses my mind. Having Linux as its OS makes things easy to work with and also keeps me in a constant state of learning.

Do you plan to take future Linux Foundation courses? If so, which ones?

OpenStack caught my attention, so I am going to keep an eye on that certification.

In what ways do you think the certification will help you as a systems administrator in today’s market?

In a world where Linux is enjoying more and more market share within IT, having the knowledge that the certifications offer will make you stand out from the crowd.

What Linux distribution do you prefer and why?

CentOS, definitely. There is nothing you cannot do with this distribution. CentOS is easy to deploy and manage, and very easy to learn. Since it is backed by Red Hat, you'll be more than prepared to take on any Red Hat issue and solve it like a champion.

Are you currently working as a Linux systems administrator? If so, what role does Linux play?

Sadly, I am not. I work as a Storage Engineer. There are a few Linux scenarios where I can put into practice what I learned, but that does not stop me from learning Linux and accomplishing my goal of becoming a Linux Expert.

Where do you see the Linux job market growing the most in the coming years?

In everything. People can no longer ignore Linux; more and more companies are changing their infrastructure to make way for open source tools that offer better solutions for their needs. Who will be there to administer their data center? We will.

What advice would you give those considering certification for their preparation?

Practice! And by practice I mean a LOT. The key is doing everything over and over again in the virtual labs you prepare. There are a lot of videos, books, and courses that will give you the knowledge needed to get certified; do not rely on only one source.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Read more:

Linux Foundation Certified Engineer: Alexandre Krispin

Linux Foundation Certified Engineer: Karthikeyan Ramaswamy

Linux Foundation Certified System Administrator: Muneeb Kalathil

Linux Foundation Certified System Administrator: Theary Sorn

Linux Foundation Certified Engineer: Ronni Jensen

Linux Foundation Certified System Administrator: Elyasin Shaladi

Linux Foundation Certified System Administrator: Lorenzo Paglia

Linux Foundation Certified System Administrator: William Brawner

Linux Foundation Certified Engineer: Ansil Hameed

Linux Foundation Certified System Administrator: Adedayo Samuel

Linux Foundation Certified System Administrator: Munzali Garba

4 Software-Defined Storage Trends


As enterprises move towards the software-defined data center (SDDC), many of them are deploying software-defined storage (SDS). According to Markets and Markets, the software-defined storage market was worth $4.72 billion in 2016, and it could increase to $22.56 billion by 2021. That’s a 36.7% compound annual growth rate.

Enterprises are attracted to SDS for two key reasons: flexibility and cost. SDS abstracts the storage software away from the hardware on which it runs. That gives organizations a lot more options, including the freedom to change vendors as they see fit and the ability to choose low-cost hardware. SDS solutions also offer management advantages that help enterprises reduce their total cost of ownership (TCO).

Enterprises appear eager to reap the benefits of SDS. Camberley Bates, managing partner and analyst at Evaluator Group, said in an interview, “Adoption is increasing as IT end users get more familiar with the options and issues with SDS.”

She highlighted four trends that are currently affecting the software-defined storage market.

1. Appliances dominate

By definition, software-defined storage runs on industry-standard hardware, so you might think that most organizations buy their SDS software and hardware separately and build their own arrays. However, that isn’t the case.

“Much of the [current SDS] adoption is in the form of an appliance from the vendor, and these include categories such as server-based storage, hyperconverged and converged infrastructure systems,” Bates said.

Although the market is embracing SDS, enterprises still don't want to give up the benefits of buying a pre-built appliance in which the hardware and software have been tested to work together.

2. NVMe improves performance

Designed to take advantage of the unique characteristics of SSDs, NVMe provides faster performance and lower latency than SAS or SATA. As a result, many different types of storage solutions have begun using NVMe technology, but Bates said that SDS solutions are adopting NVMe more quickly.

She added that, based on work with Intel in her firm's labs last summer, NVMe delivered significantly better price/performance than other types of storage.
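To make the latency gap concrete, here is a rough, hypothetical sketch of how one might time random 4 KiB reads against two block devices. It is not a rigorous benchmark (a dedicated tool such as fio is the usual choice); the device paths are placeholders, and it assumes Linux, Python 3, and root access.

```python
# Rough, illustrative latency probe: random 4 KiB reads with O_DIRECT so the
# page cache does not mask device latency. Linux-only; run as root; device
# paths below are placeholders for this example.
import mmap
import os
import random
import time

BLOCK = 4096  # O_DIRECT requires block-aligned offsets, sizes, and buffers


def mean_read_latency_us(device: str, iters: int = 2000) -> float:
    fd = os.open(device, os.O_RDONLY | os.O_DIRECT)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        buf = mmap.mmap(-1, BLOCK)  # anonymous mapping is page-aligned
        total = 0.0
        for _ in range(iters):
            os.lseek(fd, random.randrange(size // BLOCK) * BLOCK, os.SEEK_SET)
            start = time.perf_counter()
            os.readv(fd, [buf])
            total += time.perf_counter() - start
        return total / iters * 1e6
    finally:
        os.close(fd)


if __name__ == "__main__":
    for dev in ("/dev/nvme0n1", "/dev/sda"):  # placeholder devices
        print(f"{dev}: {mean_read_latency_us(dev):.1f} µs average read latency")
```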

3. Enterprises want single-vendor support

One of the most common problems organizations run into when deploying do-it-yourself SDS solutions is the support runaround. When they experience an issue, they call their SDS software vendor for help, only to be told that the problem lies with the hardware. And, of course, the hardware vendor then blames the software vendor.

“There is a distinct need to have a single entity responsible for the service and support of the system,” Bates said.

She also noted that the potential risk of data loss makes this support issue more significant for SDS than for other types of software-defined infrastructure.

4. Scale-out remains challenging

The other big issue that organizations face with SDS is scalability. “Scale-out designs are not easy,” Bates said. “They may do well for the first two to four nodes, but if I am creating a large-scale hybrid cloud, then the environment needs to scale efficiently and resiliently. We have seen environments that fail on both counts.”

As organizations increasingly deploy hybrid clouds, they’ll need to look for SDS solutions that help them solve this scalability issue.

Camberley Bates will discuss SDS in more depth and offer tips on what enterprises should look for in SDS solutions at her Interop ITX session, “Software-Defined Storage: What It Is and Why It’s Making the Rounds in Enterprise IT.” Register now for Interop ITX, May 15-19 in Las Vegas.




HPE Inks Deal For SimpliVity


Hewlett Packard Enterprise on Tuesday announced an agreement to buy hyperconverged startup SimpliVity for $650 million in cash to bolster its hybrid IT strategy.

Founded in 2009, SimpliVity was an early player in the fast-growing hyperconverged infrastructure market. The startup came out of stealth in 2012 with its OmniStack platform that combines compute, storage services, and network switching. The platform, which is composed of SimpliVity’s Data Virtualization Platform software and purpose-built Accelerator Card, includes data compression, deduplication, and built-in backup.

Gartner labeled SimpliVity a leader in hyperconvergence, along with Cisco, EMC, Nutanix, and NetApp, in its Magic Quadrant for Integrated Systems last fall. In addition to offering an OmniCube appliance, SimpliVity teams with Cisco, Dell, Huawei, and Lenovo to integrate OmniStack into their servers.

“This transaction expands HPE’s software-defined capability and fits squarely within our strategy to make hybrid IT simple for customers,” Meg Whitman, HPE president and CEO, said in a statement.

HPE said it will continue to offer its own hyperconverged products, the HC 380 and HC 250, for existing customers and partners. The company jumped into the hyperconverged market nearly a year ago with the HC 380. SimpliVity customers and partners shouldn't expect any immediate changes in the product roadmap, according to HPE, which said it will continue to support them.

Within 60 days of the deal closing — which HPE expects in the second quarter of its fiscal year 2017 — the company plans to offer SimpliVity’s software qualified for its ProLiant DL380 servers. By the second half, it expects to offer a range of integrated HPE SimpliVity systems on ProLiant servers.

Dan Conde, an analyst at Enterprise Strategy Group and Interop ITX Review Board member, told me in an email that SimpliVity provides HPE with better differentiation in the hyperconverged infrastructure market. HPE’s own products aren’t built from the ground-up for hyperconvergence to the same extent as SimpliVity’s, he said.

“I think they [HPE] wanted some ‘secret sauce’,” Conde said.

Technology Business Research recently estimated that the market for hyperconverged platforms will reach $7.2 billion by 2020.

SimpliVity’s OmniCube made its way to Hollywood last year, when it was disguised as the Pied Piper box in HBO’s “Silicon Valley” television show.




Using Containers For Persistent Storage


Learn how container-based storage is implemented and how it blurs the lines between data, storage, and applications.

In my previous blog, I discussed how persistent storage is needed to ensure that data continues to exist after a container terminates. This persistent storage is expected to sit on a traditional storage array or perhaps a software-defined storage (SDS) solution running across commodity hardware. So in an SDS world, couldn’t containers simply be used to deliver storage itself?

The idea of using containers to deliver persistent storage seems counterintuitive to the very nature of how containers are expected to work. A container is typically seen as ephemeral, or short-lived, with no persistent resources attached to it. Conversely, storage is expected to be resilient, persistent, and the single source of truth for all of our data. However, vendors are starting to bring products to market that use containers as the implementation of the storage platform or layer.

Container concepts and storage design

The idea of using containers for persistent storage delivery brings together two concepts that optimize the speed of applications: make application code as lightweight as possible, and put the data as close to the application as possible. The idea of lightweight code is pretty simple to grasp; putting data closer to the application requires a bit more thought.

Historically, or at least over the last 15 years, data has been stored on external storage arrays to gain the benefits of scale, performance, efficiency, and resiliency. The trade-off in this design has been the time taken to access the data over the storage area network. With disk-based systems, the SAN overhead wasn't really noticeable. As we move into the flash era, the time taken to traverse the network, plus the time spent executing the "storage stack" code, is becoming increasingly obvious in application response times.

The answer has been to cache data locally with the application, either in the hypervisor (for virtual machines) or within the host. Now imagine a container environment: if storage were implemented within a container running on the same host as an application, the time taken to access that data could be minimal. Remember that any container is just a collection of Linux processes, so container-to-container communication is fast.
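As a hypothetical illustration of that co-location idea, the sketch below uses the Docker SDK for Python to start a storage container and an application container on the same host and the same local bridge network, so traffic between them never leaves the node. The image names, container names, device path, and port are made up for the example.

```python
# Hypothetical sketch: co-locating a storage container and an app container
# on one host, assuming the Docker SDK for Python (pip install docker) and a
# running local Docker daemon. Images, names, device, and port are placeholders.
import docker

client = docker.from_env()

# A user-defined bridge network keeps container-to-container traffic on the
# local host, so the application never crosses a SAN to reach its data.
client.networks.create("local-storage-net", driver="bridge")

# Storage container: owns a piece of local media and exposes a data service.
client.containers.run(
    "example/storage-node:latest",                 # placeholder image
    name="storage-node",
    network="local-storage-net",
    devices=["/dev/nvme0n1:/dev/nvme0n1:rwm"],     # placeholder device passthrough
    detach=True,
)

# Application container: reaches the storage container by name over the same
# bridge network, i.e., a fast local hop between Linux processes.
client.containers.run(
    "example/app:latest",                          # placeholder image
    name="app",
    network="local-storage-net",
    environment={"STORAGE_URL": "http://storage-node:9000"},
    detach=True,
)
```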

Implementation

The rationale makes sense, but how can container-based storage be implemented? The first consideration is to think of containerization as the opportunity to virtualize and abstract pieces of the storage stack. For example, we can create a container that simply manages an individual piece of storage media like a disk drive or SSD. Interaction with that container allows us to store and retrieve data on the device. The container can manage how data is distributed, handle metadata, and communicate with other containers to handle data protection and replication. If the container fails, we simply restart it; as long as there is enough metadata on the physical device to help the container restart, then no other persistence is required.
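A toy sketch of that idea (not any vendor's actual implementation) might look like the following: a small Python class, meant to run inside the "disk handling" container, that stores blobs on its device and keeps just enough metadata alongside them for a restarted container to rebuild its state from the media alone. The mount point and the JSON metadata format are assumptions.

```python
# Toy illustration of a "disk handling" container's logic: all state needed
# to restart lives on the device itself. The mount point and JSON metadata
# format are assumptions made for this example.
import json
import os
import uuid


class DeviceStore:
    """Manages a single piece of storage media (modeled here as a mounted directory)."""

    def __init__(self, mount_point: str = "/mnt/device0"):
        self.root = mount_point
        self.meta_path = os.path.join(mount_point, "metadata.json")
        # On (re)start, rebuild the index purely from what is on the device;
        # this is why the container itself needs no other persistence.
        if os.path.exists(self.meta_path):
            with open(self.meta_path) as f:
                self.index = json.load(f)
        else:
            self.index = {}

    def put(self, key: str, data: bytes) -> None:
        blob_path = os.path.join(self.root, f"{uuid.uuid4().hex}.blob")
        with open(blob_path, "wb") as f:
            f.write(data)
        self.index[key] = blob_path
        self._flush()  # persist metadata next to the data

    def get(self, key: str) -> bytes:
        with open(self.index[key], "rb") as f:
            return f.read()

    def _flush(self) -> None:
        with open(self.meta_path, "w") as f:
            json.dump(self.index, f)
```

Data protection, replication, and communication with peer containers, which the article mentions, would sit on top of a primitive like this.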

This kind of design is nice because it encapsulates processes within individual microservices. If the "disk handling" container needs to be amended, it can be changed without recompiling the entire storage platform. In addition, this kind of abstraction means storage could be developed for any platform capable of running containers (currently Linux and Windows) and the pieces could interoperate very easily.

Vendor solutions

Now, who’s using this kind of deployment model today? StorageOS, a UK-based startup, has developed a lightweight container application that runs across multiple hosts/nodes and has a mere 40 MB footprint. Portworx has developed a similar solution that runs services across multiple containers to deliver a scale-out architecture based on commodity hardware. Scality has started to introduce container-based microservices functions into its RING software.

It’s easy to assume that container-based storage is used only by storage startups; however, that’s not the case. EMC’s Unity storage array (the evolution of CLARiiON and VNX) uses containers to run data movers, providing a much more scalable solution for serving front-end I/O. Although this doesn’t strictly adhere to the design principles discussed above, it shows that the use of containers for delivering storage is starting to become widespread.

Moving apps to storage

So where do we go next? Well, some storage vendors have already started providing the capability of moving the application to the storage. Coho Data began offering the ability to run application code in a container on the storage platform in 2015. Zadara Storage provides the same capability in its platform. Both of these implementations were initially seen as a way to run data-intensive work such as virus scanning or compliance checking, but could equally be used to run persistent database applications.

What’s clear is that the line between data, storage, and the application is being blurred to the point that they can coexist on the same infrastructure. This was one of the key benefits of hyperconvergence, which started the trend toward eliminating dedicated storage hardware. Delivering storage with containers takes us one step further by eliminating the boundary of running apps and storage in separate VMs. We are inexorably moving closer to the goal of a truly software-defined data center.


