
Site Reliability Engineering: 4 Things to Know


In 2016, Google published a book called “Site Reliability Engineering: How Google Runs Production Systems” that extolled a new approach to managing IT infrastructure. In Google’s words, site reliability engineering, or SRE for short, is “what you get when you treat operations as if it’s a software problem.”

That definition seems to align very closely with the DevOps movement, which aims, in part, to bring agile software development approaches to infrastructure management. People involved in DevOps teams have become increasingly interested in SRE and how it might help them become more collaborative and agile.

To find out more about site reliability engineering, Network Computing spoke with Rob Hirschfeld, who has been in the cloud and infrastructure space for nearly 15 years, including work with early ESX betas and serving on the OpenStack Foundation Board. Hirschfeld, cofounder and CEO of RackN, will present “DevOps vs SRE vs Cloud Native” at Interop ITX 2018.

We asked him to explain some of the basics of SRE, and what infrastructure pros need to know about this new concept. He highlighted four key facts:

1. SRE is a job function that started at Google

“Site reliability engineering is a term that was coined by Google to describe their engineering operations group,” Hirschfeld said. “It’s basically a job function that spans multiple disciplines on the operations side of Google. They are responsible not only for data center operations, but going up all the way to interacting with application developers and some of their key internet properties to analyze them, do performance management — basically take a sustained application into an ongoing full lifecycle deployment.”

2. SRE complements DevOps approaches and cloud-native architecture

Hirschfeld explained that DevOps, SRE, and cloud-native apply similar philosophies to different aspects of IT. DevOps is “about people and culture and process,” SRE is “a job function,” and cloud-native is “an architectural pattern that describes how applications are built in a way that makes them more sustainable and runnable in the cloud,” he said.

He added, “It fits very cleanly together where we have an architectural pattern, a job function, a process management description — all three tie together to really create the way modern application development works.”

3. SRE offers greater reliability and performance

In Hirschfeld’s words, SRE “supercharges a company’s operational experiences.”

He said that by embracing SRE, companies are “placing a high priority on sustaining engineering and making sure their site is up and running and performing well, and that they are not so focused on adding a feature that might hurt the customer experience in the end by being unreliable or slow.”

He also noted that while many organizations have very high regard for their developers, that hasn’t always been true for IT operations personnel. SRE can equalize the influence and respect afforded to development and operations staff.

4. SRE requires commitment

The one big downside of SRE is that it “takes a bit of commitment,” Hirschfeld said. “If the company is used to letting the operations team fight fires all the time and move from crisis to crisis, the SRE team is going to slow down those processes while it cleans house, while it fixes the backlog of problems and builds a more repeatable process.” That transition can be discouraging, but he encourages organizations not to give up.

He also noted, “If you just throw SRE onto a team that’s not empowered as an SRE team, you will not be that successful at all. It’s not something you should do halfway.”

In conclusion, he re-emphasized the connections among DevOps, SRE, and cloud-native. “You can’t succeed at SRE without thinking about DevOps, without thinking about cloud-native architecture as well,” he said. “They all go hand-in-hand.”

Get live advice on networking, storage, and data center technologies to build the foundation to support software-driven IT and the cloud. Attend the Infrastructure Track at Interop ITX, April 30-May 4, 2018. Register now!

 




Hybrid Cloud: 4 Top Use Cases


In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that costs are usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, “I have a flexible environment where I’m not paying for all of that infrastructure all the time constantly,” she said. “I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective.”
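The failover pattern described above can be sketched in a few lines. This is a deliberately minimal illustration of the routing decision only; the site names and the health-check interface are hypothetical, not drawn from any particular DR product.

```python
# Minimal sketch of the hybrid-cloud DR pattern: serve traffic from
# the on-premises site while it is healthy, and route to a public-cloud
# standby only when it is not. Both endpoints are illustrative names.

PRIMARY = "https://dc.example.internal"   # on-premises data center
STANDBY = "https://dr.example-cloud.com"  # public-cloud DR site

def pick_active_site(primary_healthy: bool) -> str:
    """Return the endpoint that should receive traffic right now.

    The standby is used only during an outage, which is what keeps
    its steady-state cost low compared with a second full data center.
    """
    return PRIMARY if primary_healthy else STANDBY
```

In a real deployment, `primary_healthy` would come from continuous health checks, and failover would also involve DNS or load-balancer updates and data replication, which this sketch omits.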

2. Archive

Using a hybrid cloud for archive data offers benefits very similar to those of disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they would fail over to a public cloud service.
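The bursting decision described above amounts to a threshold check on private-cloud utilization. The sketch below is purely illustrative; the 80% threshold and the function interface are assumptions for the example, not part of any real platform.

```python
# Hypothetical sketch of a cloud-bursting placement decision:
# keep workloads in the private cloud until utilization crosses a
# threshold, then overflow ("burst") new work to a public cloud.

BURST_THRESHOLD = 0.80  # illustrative: burst above 80% private utilization

def place_workload(private_used_cores: int, private_total_cores: int) -> str:
    """Return which environment should run the next workload."""
    utilization = private_used_cores / private_total_cores
    if utilization >= BURST_THRESHOLD:
        return "public"   # private capacity is saturated: burst out
    return "private"      # headroom remains: stay on-premises

# A 100-core private cloud with 85 cores busy would burst new work out.
print(place_workload(85, 100))  # -> public
print(place_workload(40, 100))  # -> private
```

The hard part in practice is everything this sketch leaves out: moving application state, data, and networking across clouds fast enough to matter, which is why Bates’ research found the pattern difficult to set up.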

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, be successful with hybrid cloud setups, but this particular use case continues to be very challenging to put into practice.

Learn more about why enterprises are adopting hybrid cloud and best practices that lead to favorable outcomes at Camberley Bates’ Interop ITX session, “Hybrid Cloud Success & Failure: Use Cases & Technology Options.” 


Software-Defined Storage: 4 Factors Fueling Demand


As organizations look for cost-effective ways to house their ever-growing stores of data, many of them are turning to software-defined storage. According to market researchers at ESG, 52% of organizations are committed to software-defined storage (SDS) as a long-term strategy.

Some vendor-sponsored studies have found even higher rates of SDS adoption; while the findings are self-serving, they’re still noteworthy. For example, a SUSE report published in 2017 found that 63% of enterprises surveyed planned to adopt SDS within 12 months, and in DataCore Software’s sixth annual State of Software-Defined Storage, Hyperconverged and Cloud Storage survey, only 6% of respondents said they were not considering SDS.

What’s driving this interest in SDS? Let’s look at four important reasons why enterprises are considering the technology.

1. Avoid vendor lock-in

In an interview, Camberley Bates, managing director and analyst at Evaluator Group, who spoke about SDS at Interop ITX, said, “The primary driver of SDS is the belief that it delivers independence, and the cost benefit of not being tied to the hardware vendor.”

In fact, when DataCore asked IT professionals about the business drivers for SDS, 52% said that they wanted to avoid hardware lock-in from storage manufacturers.

However, Bates cautioned that organizations need to consider the costs and risk associated with integrating storage hardware and software on their own. She said that many organizations do not want the hassle of integration, which is driving up sales of pre-integrated appliances based on SDS technology.

2. Cost savings

Of course, SDS can also have financial benefits beyond avoiding lock-in. In the SUSE study, 72% of respondents said they evaluate their storage purchases based on total cost of ownership (TCO) over time, and 81% of those surveyed said the business case for SDS is compelling.

Part of the reason SDS can deliver low TCO is its ability to simplify storage management. The DataCore study found that the top business driver for SDS, cited by 55% of respondents, was “to simplify management of different models of storage.”
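To make the TCO framing concrete, here is a deliberately simplified model. Every figure below is a made-up assumption for illustration only, not survey data or vendor pricing, and real TCO analyses include many more cost categories (power, support contracts, migration, downtime).

```python
# Illustrative total-cost-of-ownership model over a multi-year horizon.
# All inputs are hypothetical numbers chosen for the example.

def tco(hardware: float, software_per_year: float,
        admin_hours_per_year: float, hourly_rate: float,
        years: int = 5) -> float:
    """TCO = up-front hardware cost plus recurring software licensing
    and administration labor over the ownership period."""
    recurring = (software_per_year + admin_hours_per_year * hourly_rate) * years
    return hardware + recurring

# Hypothetical comparison: SDS on commodity servers vs. a traditional array.
sds_cost = tco(hardware=60_000, software_per_year=10_000,
               admin_hours_per_year=120, hourly_rate=75)
array_cost = tco(hardware=150_000, software_per_year=5_000,
                 admin_hours_per_year=200, hourly_rate=75)
print(sds_cost)    # -> 155000
print(array_cost)  # -> 250000
```

The point of the sketch is the structure of the comparison, not the numbers: cheaper hardware and simpler management are the levers through which SDS can lower TCO, and whether it actually does depends on the organization’s own inputs.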

3. Support IT initiatives

Another key reason why organizations are investigating SDS is because they need to support other IT initiatives. In the SUSE survey, IT pros said that key technologies influencing their storage decisions included cloud computing (54%), big-data analytics (50%), mobility (47%) and the internet of things (46%).

Organizations are looking ahead to how these trends might change their future infrastructure needs. Not surprisingly, in the DataCore report, 53% of organizations said a desire to help future-proof their data centers was driving their SDS move.

4. Scalability

Many of those key trends that are spurring the SDS transition are dramatically increasing the amount of data organizations need to store. Because it offers excellent scalability, SDS appeals to enterprises experiencing fast data growth.

In the SUSE study, 96% of companies surveyed said they like the business scalability offered by SDS. In addition, 95% found scalable performance and capacity appealing.

As data storage demands continue to grow, this need to increase capacity while keeping overall costs down may be the critical factor in determining whether businesses choose to invest in SDS.

 




4 Software-Defined Storage Trends


As enterprises move towards the software-defined data center (SDDC), many of them are deploying software-defined storage (SDS). According to MarketsandMarkets, the software-defined storage market was worth $4.72 billion in 2016, and it could increase to $22.56 billion by 2021. That’s a 36.7% compound annual growth rate.

Enterprises are attracted to SDS for two key reasons: flexibility and cost. SDS abstracts the storage software away from the hardware on which it runs. That gives organizations a lot more options, including the freedom to change vendors as they see fit and the ability to choose low-cost hardware. SDS solutions also offer management advantages that help enterprises reduce their total cost of ownership (TCO).

Enterprises appear eager to reap the benefits of SDS. Camberley Bates, managing director and analyst at Evaluator Group, said in an interview, “Adoption is increasing as IT end users get more familiar with the options and issues with SDS.”

She highlighted four trends that are currently affecting the software-defined storage market.

1. Appliances dominate

By definition, software-defined storage runs on industry-standard hardware, so you might think that most organizations buy their SDS software and hardware separately and build their own arrays. However, that isn’t the case.

“Much of the [current SDS] adoption is in the form of an appliance from the vendor, and these include categories such as server-based storage, hyperconverged and converged infrastructure systems,” Bates said.

Although the market is embracing SDS, enterprises are reluctant to give up the benefits of buying a pre-built appliance in which the hardware and software have been tested to work together.

2. NVMe improves performance

Designed to take advantage of the unique characteristics of SSDs, NVMe provides faster performance and lower latency than SAS or SATA. As a result, many different types of storage solutions have begun using NVMe technology, but Bates said that SDS solutions are adopting NVMe more quickly.

She added that in her firm’s labs, NVMe proved to have a significantly lower price for performance than other types of storage, based on work with Intel last summer.

3. Enterprises want single-vendor support

One of the most common problems organizations run into when deploying do-it-yourself SDS solutions is the support runaround. When they experience an issue, they call their SDS software vendor for help, only to be told that the problem lies with the hardware. And, of course, the hardware vendor then blames the software vendor.

“There is a distinct need to have a single entity responsible for the service and support of the system,” Bates said.

She also noted that the potential risk of data loss makes this support issue more significant for SDS than for other types of software-defined infrastructure.

4. Scale-out remains challenging

The other big issue that organizations face with SDS is scalability. “Scale-out designs are not easy,” Bates said. “They may do well for the first two to four nodes, but if I am creating a large-scale hybrid cloud, then the environment needs to scale efficiently and resiliently. We have seen environments that fail on both counts.”

As organizations increasingly deploy hybrid clouds, they’ll need to look for SDS solutions that help them solve this scalability issue.

Camberley Bates will discuss SDS in more depth and offer tips on what enterprises should look for in SDS solutions at her Interop ITX session, “Software-Defined Storage: What It Is and Why It’s Making the Rounds in Enterprise IT.” Register now for Interop ITX, May 15-19 in Las Vegas.


