
Software-Defined Storage: Getting Started


Drawn by the combined lures of automation, flexibility, increased storage capacity, and improved staff efficiency, a growing number of enterprises are pondering a switch to software-defined storage (SDS).

SDS lets adopters separate storage resources from the underlying hardware platform. The approach enables storage to become an integral part of a larger software-defined data center (SDDC) architecture in which resources can be more easily automated and orchestrated.

SDS has moved from the early adoption stage into the mainstream, with enterprises in banking, manufacturing, pharmaceuticals, healthcare, media and government rapidly transitioning to the technology. “These customers have adopted SDS for a variety of use cases, including long-term archives, backup storage, media content distribution, big data lakes and healthcare image archives,” explained Jerome Lecat, CEO of Scality, a cloud and object storage technology provider.

Greg Schulz, founder and senior advisor of storage consulting firm Server StorageIO, said enterprises of all types and sizes are now poised to make the move to SDS. “Across the board, big and small, from government sector to private sector,” he said. “Likewise, across different types of applications.”

Getting started

Successful SDS adopters typically began by selecting a discrete use case as a starting point. “Within the enterprise, we see Tier 2 applications, such as backup and archive, as an optimal way to store mission-critical data that is large-scale and a perfect way to demonstrate the scalability, availability and cost-advantages of SDS,” Lecat said. “Over time, more use cases, including big data and deep learning, can be brought online to further improve the economic advantages of SDS.”

Enterprises that recently moved to a hyperconverged infrastructure (HCI) are already working with SDS, noted Sascha Giese, a senior sales engineer at IT infrastructure monitoring and management technology provider SolarWinds. “A good starting point for such organizations would be to evaluate whether HCI has benefitted your organization and, if so, consider whether to expand the SDS footprint in your data center.”

Even organizations that haven’t embraced HCI usually already have some type of virtualization in their environments, observed Matt Sirbu, director of data management and data center infrastructure at Softchoice, an IT infrastructure solutions provider.

“VMware and Hyper-V are really software-defined compute solutions,” he said. Software-defined storage products extend virtualization benefits to the data layer, but adopters also need to closely examine the supporting infrastructure. “Any business, when they come up to their next infrastructure refresh cycle, should start to evaluate newer technologies to see what the benefits will be to their organization by leveraging software-defined across all layers, compute and storage,” he said.

Jonathan Halstuch, co-founder and chief technology officer of RackTop Systems, a data management technology supplier, noted that it’s important to find an SDS product that can meet both current and future storage requirements, particularly in critical areas like compliance and security. “Be discriminating and find a solution that will reduce complexity and tasks for the IT department,” he advised. “Then begin to migrate workloads that are the easiest to migrate or are datasets that have special requirements that are currently being unmet, such as encryption, performance or accessibility.”

The end of a refresh cycle is a logical time to begin exploring SDS. “An organization should assess their technology roadmap for the next few years and consider making the switch to an SDS solution,” said Maghen Hannigan, director of converged and integrated solutions at technology products and services distributor Tech Data. “If an existing environment is in need of a new storage administrator, it may be worth considering (hiring) a new systems administrator proficient in software-defined storage.”

A refresh cycle-motivated commitment to SDS can be either large or small. “It may be as simple as dropping in an SDS solution in place of legacy storage,” Halstuch explained. “However, it may make more sense to rethink the current architecture, review a hybrid cloud strategy and review the current staffing profile to determine what is the best SDS solution to adopt and how it fits into the long-term vision of the organization.”

Potential pitfalls

One mistake organizations often make when planning an SDS transition is to view the technology as a “point product” decision. “Software-defined solutions are ideally part of a larger stack that offers a common operational model for compute, storage, network and cloud,” said Lee Caswell, VP of products, storage and availability at VMware. “The software-defined solution offers a digital foundation with investment protection for any hardware, any application, and any cloud.”

“In general, we see organizations regret their decisions to move to SDS either too abruptly or without proper planning,” said Daniel Gilfix, marketing manager of Red Hat’s storage division. “We witness the frustration of those who venture into the area without the proper skill sets, as if any storage administrator or cloud practitioner can pick up the knowledge and training overnight.”

Perhaps the biggest mistake SDS newcomers make is believing that the technology is a “silver bullet” for all workloads. “It’s important to look at the workload demands,” Sirbu stated. “All organizations can benefit from (SDS) for a large portion of their workloads, but it really comes down to analyzing business requirements with available IT resources to come up with the optimal solution to run their operations.”




Software-Defined Storage Products: IT Pro Perspective


Software-defined storage describes storage products in which storage virtualization separates the storage management software from the underlying hardware. In some cases, SDS products may offer storage resource pooling, abstraction, management workflow automation, and artificial intelligence (AI)-based resource allocation. SDS may also enable the use of commodity hardware.

This article offers insight into some of the top software-defined storage products, according to online reviews by enterprise users in the IT Central Station community. The products reviewed include Dell EMC ScaleIO, HPE StoreVirtual, IBM Spectrum Virtualize, Red Hat Ceph, and StorPool.

What do enterprise IT pros actually think about these products? Here, users offer a balanced view of their benefits and shortcomings.

Dell EMC ScaleIO

Vladimir G., infrastructure services system administrator, wrote about the improvements he would like to see in Dell EMC ScaleIO:

“There is no built-in system for viewing history data, such as volume IOPS. We have to provide graphing by Prometheus and Grafana, which would be a good new feature in ScaleIO. The next good new feature would be moving volumes between different storage pools, e.g., from a SAS pool to a SSD pool. It would be nice to set minimum IOPS per volume, besides just the maximum, to be able to satisfy this demand from customers out of the box, not by calculating number of disks, etc. It would be nice to have better integration with monitoring and other vendor provisioning and orchestration tools. I am aware that this is a hard-to-achieve task, where it is necessary for product not to be proprietary and to become industry standard.”
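To illustrate the kind of workaround Vladimir describes, here is a minimal sketch of a custom exporter that publishes per-volume IOPS for Prometheus to scrape and Grafana to graph. It is only a sketch: the get_volume_iops() helper, the metric name and the port are hypothetical placeholders, and the real values would have to come from the ScaleIO REST API in a given environment.

    # Minimal sketch of a custom Prometheus exporter for per-volume IOPS.
    # get_volume_iops() is a hypothetical placeholder for a real call to
    # the ScaleIO REST API; the metric name and port are also placeholders.
    import time
    from prometheus_client import Gauge, start_http_server

    VOLUME_IOPS = Gauge("scaleio_volume_iops", "Current IOPS per volume", ["volume"])

    def get_volume_iops():
        # Replace with a real query against the ScaleIO REST API.
        return {"vol01": 1200.0, "vol02": 340.0}

    if __name__ == "__main__":
        start_http_server(9123)              # Prometheus scrapes this port
        while True:
            for volume, iops in get_volume_iops().items():
                VOLUME_IOPS.labels(volume=volume).set(iops)
            time.sleep(15)                   # align with the scrape interval

A Grafana dashboard can then chart the scaleio_volume_iops series per volume, which approximates the history view Vladimir says is missing from the product itself.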

Joe H., R&D engineer at a tech company, highlighted the product’s benefits:

“EMC has been working with storage for a long time. Therefore, they know how to clearly present any important data, including data flow and each drive’s IOPS/bandwidth, and allow the user to easily monitor bottlenecks and problems, especially the rebuild and rebalance status of child objects. It controls them, as well as maintaining them well.”

He also said if ScaleIO “could introduce a write cache feature, the product would be perfect overall.”

HPE StoreVirtual

Matthew A., system and network administrator at a non-tech company, described what he sees as HPE StoreVirtual’s valuable features: “Ease of carving out storage and the seamlessness behind the scenes of block management. I just let it do its thing. I don’t worry too much about it.”

An IT manager for infrastructure at a government agency who goes by the handle InfraITMgr243 said the product has benefitted his organization:

“StoreVirtual has been real good for us. We started with the original P4300 LeftHand SANs before they became StoreVirtual. What I love about those is the two nodes and the mirroring back and forth, and you can’t lose anything. It’s very solid, and we haven’t really had any trouble with those either. We have a newer StoreVirtual that we’ve connected to one of the C3000 Blade Enclosures and it runs well. We lost a system board once and we lost a couple of servers, but we were able to bring everything back. Equipment-wise, it allows us to do all our work. We’re real happy with that.”

Benoit H., WIS system engineer at a paper and forest products company, offered thoughts on how HPE StoreVirtual could improve: “Features like data deduplication would be great because in the end, this solution requires a lot of raw disk space because of the use of RAID5 on the hardware and RAID1 on the network.”

Philip S., solutions engineer at an insurance company, would like to see a new user interface:

“The user interface needs to be updated. It’s getting kind of long in the tooth, and the user interface makes it look a lot more complex than it actually is to manage, and I think that you can mask a lot of that with a refresh of the user interface. While HPE has created a new HTML5 UI for the HyperConverged 380, it is not available to the rest of the StoreVirtual population.”

IBM Spectrum Virtualize

Craig J., storage administrator at a retailer, described the benefits of IBM Spectrum Virtualize for his company:

“The product helps us to manage our storage in a way that allows us to put different frames inside or out of our storage infrastructure and migrate. The benefits are that it speeds up provisioning of the storage across different tiers and allows a small team to manage that function, for many petabytes of data.”

A storage engineer for a healthcare company who goes by the handle StorageEc5c3 also likes the software:

“It gives us a lot of flexibility and ease of management. We have all the tools in one place. We pretty much do all our storage using the Spectrum Virtualize. It makes it really easy for us to manage all our storage. It gives us the flexibility to move things in between these. I think a lot of the benefit is just the ease of use of the tool itself.”

But a storage admin at a financial services firm who uses the handle StorageA62f0 cited drawbacks with the product:

“There is no third-site replication. Right now, we’re limited in our ability to migrate data between clusters. Like I said, we had to scale wide rather than tall and continue to protect our data while we migrate. Additionally, if we wanted to set up a third site for additional DR, we don’t really have a good option for that.”

Joshua M., technical analyst III at a healthcare company, also cited some shortcomings with Spectrum Virtualize:

“The feature that’s kind of missing is getting us up to the point where we can help the application owners see where their data is at, understand it, and potentially help us break out. We’ve used Easy Tier functions in the pools, so we’re trying to help step that storage down. If they can get visibility somehow into that data, help us further break that down, or better tier and separate out their data, that would be helpful.”

Red Hat Ceph

Anthony D., a senior software engineer, praised the community aspect of Ceph:

“By being open source, Ceph is not tied to the whim or fortunes of any one vendor. The community of Ceph code contributors and admins is large and active. Ceph’s ability to adapt to varying types of commodity hardware affords us substantial flexibility and future-proofing.”

Diego W., founding partner, tech lead and DevOps consultant at a tech services company, values Ceph for its reliability. “I have experienced failures and human mistakes. However, Ceph was able to recover the data automatically with a special procedure,” he wrote.

However, Flavio C., senior information technology specialist at a tech consulting company, said he sees room for improvement:

“In the deployment step, we need to create some config files to add Ceph functions in OpenStack modules (Nova, Cinder, Glance). It would be useful to have a tool that validates the format of the data in those files, before generating a deploy with failures.”
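The validation tool Flavio wishes for can be approximated with a short pre-deploy check that parses those config files and confirms the Ceph/RBD-related options are present before the deployment is generated. The sketch below is illustrative only: the file paths and the specific options to require vary by OpenStack release and by how the Ceph backends are named, so treat them as assumptions to adjust.

    # Rough pre-deploy validator for Ceph settings in OpenStack config files.
    # Paths and required options are examples and vary by release/deployment.
    import configparser
    import sys

    CHECKS = {
        "/etc/cinder/cinder.conf": [("ceph", "volume_driver"),
                                    ("ceph", "rbd_pool"),
                                    ("ceph", "rbd_user")],
        "/etc/glance/glance-api.conf": [("glance_store", "stores"),
                                        ("glance_store", "rbd_store_pool")],
        "/etc/nova/nova.conf": [("libvirt", "images_type"),
                                ("libvirt", "images_rbd_pool")],
    }

    def validate(path, required):
        cfg = configparser.ConfigParser()
        if not cfg.read(path):
            return [path + ": file missing or unreadable"]
        return [path + ": [" + section + "] " + option + " is not set"
                for section, option in required
                if not cfg.has_option(section, option)]

    if __name__ == "__main__":
        issues = [msg for path, req in CHECKS.items() for msg in validate(path, req)]
        for msg in issues:
            print(msg)
        sys.exit(1 if issues else 0)

Running a check like this before a deploy surfaces missing options as plain errors and a non-zero exit code, which is easy to wire into whatever automation drives the OpenStack rollout.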

George P., systems engineer at a marketing services firm, highlighted a challenge for Ceph:

“Ceph lacks a little bit only in performance. It needs to scale a lot and needs very fast and well-orchestrated/configured hardware for best performance. This is not a downside though, it is a challenge. Ceph only improves the given hardware.”

StorPool

Suha O., CEO at a tech company, gave high marks to StorPool:

“StorPool is a software-only solution with practically unlimited expansion capabilities. Its performance is very high. We were able to replace our SSD-only local storage systems without any performance penalty. Its price/performance is very high!”

He also suggested an improvement: “It would be good if, with next releases, StorPool provided a better GUI for monitoring and statistics. This would make our experience even better and complete.”

Richard L., a company president, likes StorPool’s manageability: “Managing StorPool is much simpler than our previous storage system, especially having a CLI option which our previous storage system was lacking.”

Maria R., head of IT services operations center at a communications service provider, said a better interface would help. “At times we need to check the disks and do some minor operations. A friendlier user interface would be useful in such cases.”

To learn more about SDS solutions, download IT Central Station’s SDS Buyer’s Guide based on real user reviews.

 




Software-Defined Data Centers: VMware Designs


These are best practices and proven practices for what a design for all components in the SDDC might look like. It highlights a possible cluster layout, including a detailed description of what needs to be put where, and why a certain configuration needs to be made.

Typically, every design should have an overview to quickly convey what the solution is going to look like and how the major components are related. In the SDDC, one could start by drawing the vSphere clusters, including their functions.

Logical overview of the SDDC clusters

The following image describes an SDDC that is going to be run on the three-cluster approach:

 

The three clusters are as follows:

  • The management cluster for all SDDC management services
  • The NSX edge cluster where all the north-south network traffic is flowing through
  • The actual payload cluster where the production VMs get deployed

Tip: Newer best practices from VMware, as described in the VMware validated designs (VVD) version 3.0, also propose a two-cluster approach. In this case, the edge cluster is not needed anymore and all edge VMs are deployed directly onto the payload cluster. This can be a better choice from a cost and scalability perspective. However, it is important to choose the model according to the requirements and constraints found in the design.

The overview should be only as complex as necessary since its purpose is to give a quick impression over the solution and its configuration. Typically, there are a few of these overviews for each section.

This forms a basic SDDC design where the edge and the management cluster are separated. According to the latest VMware best practices, payload and edge VMs can also run on the same cluster. This basically is a decision based on scale and size of the entire environment. Often it is also a decision based on a limit or a requirement — for example, edge hosts need to be physically separated from management hosts.

Logical overview of solution components

This is as important as the cluster overview and should describe the basic structure of the SDDC components, including some possible connections to third-party integration like IPAM.

Also, it should provide a basic understanding for the relationship between the different solutions.

 

It is important to understand these components and how they work together. This will become important during the deployment of the SDDC, since none of these components should be left out or configured incorrectly. This is especially true for the vRealize Log Insight connections.

Note: If not all components are configured to send their logs to vRealize Log Insight, there will be gaps, which can make troubleshooting very difficult or even impossible. A plan that describes these relationships can be very helpful during this step of the SDDC configuration.

These connections should also be reflected in a table to show the relationship and confirm that everything has been set up correctly. The better the detail is in the design, the lower the chance that something gets configured wrong or is forgotten during the installation.

The vRealize Automation design

Based on the use case, there are two setup methods/designs that vRealize Automation 7 supports for installation.

Small: Small stands for a very dense and easy-to-deploy design. It is not recommended for enterprise workloads or even for production, but it is ideal for a proof of concept (PoC) environment, or for a small dev/test environment to play around with SDDC principles and functions.

The key to the small deployment is that all the IaaS components can reside on one single Windows VM. Optionally, additional DEMs can be attached, which eases future scaling. However, this setup has one fundamental disadvantage: There is no built-in resilience or HA for the portal or DEM layer. This means that every glitch in one of these components will always affect the entire SDDC.

Enterprise: Although this is a more complex way to install vRealize Automation, this option will be ready for production use cases and is meant to serve big environments. All the components in this design will be distributed across multiple VMs to enable resiliency and high availability.

 

In this design, the vRealize Automation OVA (vApp) is running twice. To enable true resilience, a load balancer needs to be configured. The users access the load balancer and get forwarded to one of the portals. VMware has good documentation on configuring NSX as a load balancer for this purpose, as well as the F5 load balancer. Basically, any load balancer can be used, as long as it supports HTTP(S) health checks.

Note: DNS aliases or MS load balancing should not be used for this, since these methods cannot verify whether the target server is still alive. According to VMware, the load balancer requires health checks to determine whether each of the vRA apps is still available. If these checks are not implemented, the user will get an error while trying to access the broken vRA appliance.
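The difference between a name-based alias and a real protocol check can be illustrated with a trivial probe: the load balancer has to issue an application-level request to each node and judge the response, not merely resolve or ping the name. The following sketch is illustrative only; the health URL path and host names are placeholders rather than official vRA endpoints, and a production load balancer would perform the equivalent check natively.

    # Illustrative application-level health probe for vRA portal nodes,
    # the kind of check a load balancer must run before forwarding users.
    # The URL path and host names are placeholders, not official endpoints.
    import requests

    def vra_node_is_healthy(host, path="/vcac/services/api/status", timeout=5):
        try:
            resp = requests.get("https://" + host + path, timeout=timeout, verify=False)
        except requests.RequestException:
            return False                 # unreachable node: remove it from the pool
        return resp.status_code == 200   # anything else means "do not forward users"

    if __name__ == "__main__":
        for node in ("vra-app-01.example.local", "vra-app-02.example.local"):
            print(node, "up" if vra_node_is_healthy(node) else "down")

A DNS alias or Microsoft NLB cannot make this distinction: the name still resolves and the node may still answer pings even when the vRA services behind it have failed.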

In addition to the vRealize Automation portal, there has to be a load balancer for the web server components. Also, these components will be installed on a separate Windows VM. The load balancer for these components has the same requirements as the one for the vRealize Automation instances.

The active web server must contain only one web component of vRA, while the second (passive) web server can contain components 2, 3, and more.

Finally, the DEM workers have to be doubled and put behind a load balancer to ensure that the whole solution is resilient and can survive an outage of any one of the components.

Tip: If this design is used, the VMs for the different solutions need to run on different ESXi hosts in order to guarantee full resiliency and high availability. Therefore, VM anti-affinity rules must be used to ensure that the DEMs, web servers or vRA appliances never run on the same ESXi host. It is very important to set this rule; otherwise, a single ESXi outage might affect the entire SDDC.
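For reference, such a rule can also be created programmatically rather than through the vSphere Client. The pyVmomi sketch below is a minimal example under stated assumptions: the cluster and VM objects are assumed to have been retrieved already (for example via a container view), and the rule name is a placeholder.

    # Sketch: add a mandatory VM anti-affinity rule for the two vRA appliances
    # with pyVmomi. 'cluster' is an already-retrieved vim.ClusterComputeResource
    # and 'vms' a list of vim.VirtualMachine objects; the rule name is made up.
    from pyVmomi import vim

    def add_vra_anti_affinity(cluster, vms, rule_name="vra-appliance-separation"):
        rule = vim.cluster.AntiAffinityRuleSpec(
            name=rule_name,
            enabled=True,
            mandatory=True,      # hard rule: never place these VMs on one host
            vm=list(vms),
        )
        rule_spec = vim.cluster.RuleSpec(info=rule, operation="add")
        config_spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])
        # Returns a vCenter task; wait on it with your usual task helper.
        return cluster.ReconfigureComputeResource_Task(spec=config_spec, modify=True)

The same pattern applies to the DEM workers and the IaaS web servers; one rule per group keeps each set of redundant VMs spread across hosts.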

This is one of VMware’s suggested reference designs to ensure vRA availability for users requesting services. Although it is only a suggestion, it is highly recommended for a production environment. Despite all the complexity, it offers the highest grade of availability and ensures that the SDDC can stay operative even if parts of the management stack have trouble.

Tip: vSphere HA cannot deliver this grade of availability, since the VM would power off and on again. This can be harmful in an SDDC environment. Also, when bringing operations back up, the startup order is important. Since HA can’t really take care of that, it might power the VM back on at a surviving host, but the SDDC might still be unusable due to connection errors (wrong order, stalled communication, and so on).

Once the decision is made for one of these designs, it should be documented as well in the setup section. Also, take care that none of the limits, assumptions, or requirements are violated with that decision.

Another mechanism of resiliency is to ensure that the required vRA SQL database is configured as an SQL cluster. This ensures that no single point of failure can affect this component. Typically, big organizations already have some form of SQL cluster running where the vRA database could be installed. If this isn’t a possibility, it is strongly recommended to set up such a cluster in order to protect the database as well. This fact should be documented in the design as a requirement when it comes to the vRA installation.

This tutorial is a chapter excerpt from “Building VMware Software-Defined Data Centers” by Valentin Hamburger. Use the code ORSCP50 at checkout to save 50% on the recommended retail price until Dec. 15.




Software-Defined Storage: 4 Factors Fueling Demand


As organizations look for cost-effective ways to house their ever-growing stores of data, many of them are turning to software-defined storage. According to market researchers at ESG, 52% of organizations are committed to software-defined storage (SDS) as a long-term strategy.

Some vendor-sponsored studies have found even higher rates of SDS adoption; while the findings are self-serving, they’re still noteworthy. For example, a SUSE report published in 2017 found that 63% of enterprises surveyed planned to adopt SDS within 12 months, and in DataCore Software’s sixth annual State of Software-Defined Storage, Hyperconverged and Cloud Storage survey, only 6% of respondents said they were not considering SDS.

What’s driving this interest in SDS? Let’s look at four important reasons why enterprises are considering the technology.

1. Avoid vendor lock-in

In an interview, Camberley Bates, managing director and analyst at Evaluator Group, who spoke about SDS at Interop ITX, said, “The primary driver of SDS is the belief that it delivers independence, and the cost benefit of not being tied to the hardware vendor.”

In fact, when DataCore asked IT professionals about the business drivers for SDS, 52% said that they wanted to avoid hardware lock-in from storage manufacturers.

However, Bates cautioned that organizations need to consider the costs and risk associated with integrating storage hardware and software on their own. She said that many organizations do not want the hassle of integration, which is driving up sales of pre-integrated appliances based on SDS technology.

2. Cost savings

Of course, SDS can also have financial benefits beyond avoiding lock-in. In the SUSE study, 72% of respondents said they evaluate their storage purchases based on total cost of ownership (TCO) over time, and 81% of those surveyed said the business case for SDS is compelling.

Part of the reason SDS can deliver low TCO is its ability to simplify storage management. The DataCore study found that the top business driver for SDS, cited by 55% of respondents, was “to simplify management of different models of storage.”

3. Support IT initiatives

Another key reason why organizations are investigating SDS is because they need to support other IT initiatives. In the SUSE survey, IT pros said that key technologies influencing their storage decisions included cloud computing (54%), big-data analytics (50%), mobility (47%) and the internet of things (46%).

Organizations are looking ahead to how these trends might change their future infrastructure needs. Not surprisingly, in the DataCore report, 53% of organizations said a desire to help future-proof their data centers was driving their SDS move.

4. Scalability

Many of those key trends that are spurring the SDS transition are dramatically increasing the amount of data organizations need to store. Because it offers excellent scalability, SDS appeals to enterprises experiencing fast data growth.

In the SUSE study, 96% of companies surveyed said they like the business scalability offered by SDS. In addition, 95% found scalable performance and capacity appealing.

As data storage demands continue to grow, this need to increase capacity while keeping overall costs down may be the critical factor in determining whether businesses choose to invest in SDS.

 




Software-Defined Storage Products: IT Pros Offer Insight


Find out what users have to say about products in the emerging SDS market.

Software-defined storage promises two very attractive benefits to the enterprise: flexibility and lower cost. But how can IT pros know which software-defined storage (SDS) product will best meet the needs of their business?

Peer reviews published by real users can facilitate their decision-making with user feedback, insight, and product rankings that collectively indicate which products are in the lead.

Based on our real user reviews at IT Central Station, these products are some of the top choices for software-defined storage today.

Nutanix

A senior system engineer said, “The support we get from Nutanix is easily the best from all vendors we work with. If you open a case, you directly speak to an engineer who can help quickly and efficiently. Our customers sometimes open support cases directly (not through us) and so far the feedback was great.”

However, a CTO at an IT consulting firm said while Nutanix has the ability to connect to Azure or AWS for storing backups, he would like to have the capability to spin up a backup on Azure or AWS for disaster-recovery purposes.

“Right now, you can only send a backup to either Azure or AWS. We would like to take a backup and spin it up to an actual server that could be connected to by users from the outside,” he added.

Here are more Nutanix reviews by IT Central Station users.

VMware vSAN

A senior systems administrator and storage specialist in the government sector said he finds that vSAN allows for very easy administration. “The fact that you don’t have LUNs to set up and assign is great. The ability to set up storage policies and assign them at the disk level is also a great part of this product,” he said. “You can allow for different setups for different workload requirements.”

A senior manager of IT infrastructure noted that “The vSAN Hardware Compatibility List Checker needs to improve, since currently it is a sore point for vSAN. You need to thoroughly check and re-check the HCL with multiple vendors like VMware, in the first instance, and manufacturers like Dell, IBM, HPE, etc., as the compatibility list is very narrow. I would definitely be happy if there is significant additional support for more models of servers from Dell, IBM, HPE, etc.”

Read more VMware vSAN reviews by IT Central Station members.

HPE StoreVirtual

A network engineer at a tech service firm reported that “Shelf-level redundancy is one of the big things that StoreVirtual has had before some other SAN manufacturer or SAN model brands, which is pretty nice. It can be rather expensive because you are much less efficient when you have that redundancy, but it’s definitely a benefit if you really need access to that data.”

But a solutions engineer at an insurance company said the product’s user interface needs to be updated. “It’s getting kind of long in the tooth, and the user interface makes it look a lot more complex than it actually is to manage, and I think that you can mask a lot of that with a refresh of the user interface. While HPE has created a new HTML5 UI for the HyperConverged 380, it is not available to the rest of the StoreVirtual population.”

Read more HPE StoreVirtual reviews.  

Dell EMC ScaleIO

An engineer at a tech vendor that is both a customer and partner with Dell EMC likes the ScaleIO user interface. “EMC has been working with storage for a long time. Therefore, they know how to clearly present any important data, including data flow and each drive’s IOPS/bandwidth; and allow the user to easily monitor bottlenecks and problems, especially the rebuild and rebalance status of child objects. It controls them, as well as maintaining them well.”

He added that “If they could introduce a write cache feature, the product would be perfect overall.”

You can read more Dell EMC ScaleIO reviews here.


