Top 3 Disaster Recovery Mistakes


Considering the high cost of IT downtime, disaster recovery planning is critical for every enterprise. According to a 2016 IHS report, downtime costs North American companies $700 billion a year. For a typical mid-size company, the average cost was around $1 million, while a large enterprise lost more than $60 million on average, IHS found.

Yet even with the stakes so high, companies often fall into common pitfalls when planning for disaster recovery to mitigate the impact of service outages. GS Khalsa, senior technical marketing manager at VMware, said that he sees organizations making the same three mistakes over and over again.

1. Not having a DR plan

In Khalsa’s opinion, by far the biggest mistake that companies make — and one of the most common — is failing to put together any sort of disaster recovery plan at all. He said that industry statistics indicate that up to 50% of organizations haven’t done any DR planning.

That’s unfortunate because preparing for a disaster doesn’t have to be as complicated or as costly as most organizations assume. “It doesn’t have to involve any purchases,” Khalsa said in an interview. “It doesn’t have to involve anything more than a discussion with the business that this is what our DR plan is.”

Even if companies decide to do nothing more than restore from their latest nightly backup, they should at least write that plan down so that they know what to expect and what to do in case of an emergency, he added.

2. Not testing the DR plan

Coming up with a plan is just the first step. Organizations also need a way to test the plan. Unfortunately, in a traditional, non-virtualized data center, there isn’t an easy, non-disruptive way to conduct a recovery test. As a result, most companies test “infrequently, if at all,” Khalsa said.

He pointed out that having a virtualized environment eases testing. Organizations can copy their VMs and test their recovery processes on an isolated network. That way they can see how long recovery will take and find potential problems without interrupting ongoing operations.
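
For VMware shops, that copy-and-test approach can be scripted. Below is a minimal sketch, assuming the open-source pyVmomi SDK and vCenter access; the vCenter address, credentials, VM name, and resource pool name are placeholders, and a real test harness would also move the clone’s NIC onto the isolated port group before powering it on.

```python
# Minimal sketch (pyVmomi assumed; all names below are placeholders) of
# cloning a VM so recovery steps can be rehearsed on an isolated network.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certificates in production
si = SmartConnect(host="vcenter.example.com", user="dr-test", pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim_type], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, "app-server-01")   # placeholder VM
pool = find_by_name(vim.ResourcePool, "DR-Test")         # placeholder test pool

# Clone powered off so the NIC can be re-wired to an isolated port group
# before the recovery procedure is rehearsed against the copy.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=pool), powerOn=False)
task = vm.CloneVM_Task(folder=vm.parent, name="app-server-01-drtest", spec=spec)
print("Clone task started:", task.info.key)
Disconnect(si)
```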

3. Not understanding the complexity of DR

Organizations also sometimes underestimate how much work it takes to recover from a backup. Khalsa explained that some organizations expect to be able to do their restores manually, which really isn’t feasible once you have more than about 10 or 20 VMs.

He noted that sometimes IT staff will write their own scripts to automate the recovery process, but even that can be problematic. “People forget that disasters don’t just impact systems, they also potentially impact people,” Khalsa said. The person who wrote the script may not be available to come into work following a disaster, which could hamper the recovery process.
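
One way to blunt that person-dependence is to capture the recovery order as readable data in a version-controlled script rather than as knowledge in one person’s head. The sketch below is hypothetical; the tier names, VM names, and restore step are placeholders for whatever backup tooling is actually in use.

```python
# Hypothetical sketch: the recovery order captured as data so anyone on the
# team can read and run the plan, not just the person who wrote it.

# Restore tiers in dependency order: core infrastructure first, apps last.
RECOVERY_PLAN = [
    ("tier-1-core", ["dc01", "dns01"]),            # identity and name resolution
    ("tier-2-data", ["sql01", "sql02"]),           # databases
    ("tier-3-apps", ["web01", "web02", "app01"]),  # application servers
]

def restore_vm(vm_name: str) -> None:
    # Placeholder: call the real restore command of your backup product here.
    print(f"restoring {vm_name} from the latest nightly backup")

for tier, vms in RECOVERY_PLAN:
    print(f"== {tier} ==")
    for vm in vms:
        restore_vm(vm)
```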

Khalsa’s No. 1 tip for organizations involved in DR planning is for IT to communicate clearly with the business. Management and executives need to understand the recovery point objective (RPO) and recovery time objective (RTO) options and make some decisions about the acceptable level of risk.

“More communication is better,” Khalsa said.
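
As a back-of-the-envelope illustration of that RPO/RTO conversation, consider nothing fancier than the nightly-backup plan mentioned earlier. The figures below are made-up examples, not recommendations:

```python
# Rough RPO/RTO arithmetic for a nightly-backup plan (illustrative numbers only).
backup_interval_hours = 24      # nightly full backup
restore_data_tb = 5.0           # data that must come back before work resumes
restore_rate_tb_per_hour = 0.5  # effective throughput of the restore process

worst_case_rpo_hours = backup_interval_hours          # up to a day of lost work
estimated_rto_hours = restore_data_tb / restore_rate_tb_per_hour

print(f"Worst-case RPO: {worst_case_rpo_hours} hours of lost data")
print(f"Estimated RTO:  {estimated_rto_hours:.0f} hours to restore service")
# If the business can't accept those numbers, that's the budget conversation.
```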

Hear more about disaster recovery planning from GS Khalsa live and in person at Interop ITX, where he will present, “Disaster Recovery In The Virtualized Data Center.” Register now for Interop ITX, May 15-19, in Las Vegas.




Packet Blast: Top Tech Blogs, March 3


We collect the top expert content in the infrastructure community and fire it along the priority queue.




Adapting IT Operations to Emerging Trends: 3 Tips


For infrastructure management professionals, keeping up with new trends is a constant challenge. IT must continually weigh the potential benefits and risks of adopting new technologies, as well as the pros and cons of continuing to maintain legacy hardware and applications.

Some experts say that right now is a particularly difficult time for enterprise IT given the massive changes that are occurring. When asked about the trends affecting enterprise IT operations today, Keith Townsend, principal at The CTO Advisor, told me, “Obviously the biggest one is the cloud and the need to integrate cloud.”

In its latest market research, IDC predicts that public cloud services and infrastructure spending will grow 24.4% this year, and Gartner forecasts that the public cloud services market will grow 18% in 2017. By either measure, enterprises are going to be running a lot more of their workloads in the cloud, which means IT operations will need to adapt to deal with this new situation.

Townsend, who also is SAP infrastructure architect at AbbVie, said that the growth in hybrid cloud computing and new advancements like serverless computing and containers pose challenges for IT operations, given “the resulting need for automation and orchestration throughout the enterprise IT infrastructure.” He added, “Ultimately, they need to transform their organizations from a people, process and technology perspective.”

For organizations seeking to accomplish that transformation, Townsend offered three key pieces of advice.

Put the strategy first

Townsend said the biggest mistake he sees enterprises making “is investing in tools before they really understand their strategy.” Organizations know that their approach to IT needs to change, but they don’t always clearly define their goals and objectives.

Instead, Townsend said, they often start by “going out to vendors and asking vendors to solve this problem for them in the form of some tool or dashboard or some framework without understanding what the drivers are internally.”

IT operations groups can save themselves a great deal of time, money and aggravation by focusing on their strategy first before they invest in new tools.

Self-fund your transformation

Attaining the level of agility and flexibility that allows organizations to take advantage of the latest advances in cloud computing isn’t easy or cheap. “That requires some investment, but it’s tough to get that investment,” Townsend acknowledged.

Instead of asking for budget increases, he believes the best way to make that investment is through self-funding.

Most IT teams spend about 80% of their budgets on maintaining existing systems, activities that are colloquially called “keeping the lights on.” That leaves only 20% of the budget for new projects and transformation. “That mix needs to be changed,” said Townsend.

He recommends that organizations look for ways to become more efficient. By carefully deploying automation and adopting new processes, teams can accomplish a “series of mini-transformations” that gradually decreases the amount of money that must be spent on maintenance and frees up more funds and staff resources for new projects.
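
To make the 80/20 point concrete, here is a simple illustrative model of how a few percent of maintenance savings per cycle compounds into project funding. The budget and efficiency figures are assumptions for the example, not Townsend’s numbers:

```python
# Illustrative model: shifting the 80/20 run-vs-transform split through
# repeated efficiency gains. All numbers are made up for the example.
total_budget = 10_000_000          # annual IT budget in dollars
maintenance_share = 0.80           # "keeping the lights on"
efficiency_gain_per_cycle = 0.05   # 5% of maintenance cost removed each cycle

for cycle in range(1, 6):
    maintenance_share *= (1 - efficiency_gain_per_cycle)
    project_share = 1 - maintenance_share
    print(f"Cycle {cycle}: maintenance {maintenance_share:.0%}, "
          f"new projects {project_share:.0%} "
          f"(${project_share * total_budget:,.0f})")
```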

Focus on agility, not services

In his work, Townsend has seen many IT teams make the same mistake when it comes to dealing with the business side of the organization: not paying enough attention to what is happening in the business and what it really wants.

When the business comes to IT with a request, IT typically responds with a list of limited options. Townsend said that these limited options are the equivalent of telling the business no. “What they are asking for is agility,” he said.

He told a story about a recent six-month infrastructure project where the business objectives completely changed between the beginning of the project and the end. An IT organization can only adapt to that sort of constant change by adopting a DevOps approach, he said. If IT wants to remain relevant and help organizations capitalize on the new opportunities that the cloud offers, it has to become much more agile and flexible.

You can see Keith Townsend live and in person at Interop ITX, where he will offer more insight about how enterprise IT needs to transform itself in his session, “Holistic IT Operations in the Application Age.” Register now for Interop ITX, May 15-19, in Las Vegas.




3 Drivers For Faster Connectivity


More powerful CPUs, faster storage, and software-defined architectures require faster networking.

In the never-ending game of leapfrog between processing, memory, and I/O, the network has become the new server bottleneck. Today’s 10 GbE server networks are simply unable to keep up with the processor’s insatiable demand for data. The result is a real problem: expensive servers equipped with the latest CPUs draw power at an astounding rate while their massively parallel cores, running at gigahertz speeds, sit busily doing nothing.

Giant multi-core processors with underfed network I/O are being starved of the data they need to keep processing. Architects at the hyperscale data centers have recognized the need for higher network bandwidth and have jumped to the new 25, 50, and even 100 GbE networks to keep their servers fed with data.

It’s important to understand why these leaders have chosen to adopt faster networks. There are three reasons why I/O has become the server bottleneck and faster networks are needed:

  1. CPUs with more cores need more data to feed them
  2. Faster storage needs faster networks
  3. Software-defined everything (SDX) uses networking to save money

Multi-core CPUs need fast data pipes

Despite the many predictions of the imminent demise of Moore’s Law, chip vendors have continued to rapidly advance processor and memory technologies. The latest x86, Power, and ARM CPUs offer dozens of cores and deliver hundreds of times the processing capability of the single-core processors available at the start of the century. For example, this summer IBM announced the Power9 architecture, slated to be available in the second half of 2017 with 24 cores.

Memory density and performance have advanced rapidly as well. So today’s advanced processor cores demand more data than ever to keep them fed, and you don’t want the CPU-memory subsystem, the most expensive component of the server, sitting idle waiting for data. It’s simple math that a faster network pays for itself by achieving improved server efficiency.
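
That simple math can be sketched roughly as follows; the server cost, idle fractions, and NIC price are assumptions for illustration, not vendor figures:

```python
# Illustrative payback calculation for a faster server NIC.
# All prices and percentages are assumptions, not quoted figures.
server_cost = 25_000          # fully loaded cost of a high-end server
idle_fraction_10gbe = 0.30    # assumed share of CPU time spent waiting on I/O at 10 GbE
idle_fraction_25gbe = 0.10    # assumed waiting share after moving to 25 GbE
nic_upgrade_cost = 400        # assumed incremental cost of a 25 GbE adapter

wasted_10gbe = server_cost * idle_fraction_10gbe   # value of stranded server capacity
wasted_25gbe = server_cost * idle_fraction_25gbe
recovered_value = wasted_10gbe - wasted_25gbe

print(f"Server capacity recovered: ${recovered_value:,.0f}")
print(f"NIC upgrade cost:          ${nic_upgrade_cost:,.0f}")
print(f"Net benefit per server:    ${recovered_value - nic_upgrade_cost:,.0f}")
```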

Faster storage needs faster networks

Just a few short years ago, the vast majority of storage was based on hard-disk drive technology — what I like to call “spinning rust” — with data access times of around 10 milliseconds and supporting only around 200 I/O operations per second (IOPS).

Today’s advanced flash-based solid-state disks access data at least 100 times faster, and a single NVMe flash drive can deliver around 30 Gbps of bandwidth and more than one million IOPS. With new technologies like 3D XPoint and ReRAM just around the corner, access times are set to drop by another factor of 100.

These lightning-quick access times mean that servers equipped with solid-state storage need at least 25 Gbps networks to take full advantage of the available performance. A 10 Gbps connection leaves two-thirds of the available bandwidth stranded. The same holds for accessing data from all-flash arrays from traditional storage vendors like EMC, NetApp, and Pure Storage. Here, an enormous amount of performance can be trapped in the centralized storage array, warranting 50 or even 100 Gbps networks.
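
The stranded-bandwidth arithmetic is easy to check, using the roughly 30 Gbps figure for a single NVMe drive cited above:

```python
# Share of a single NVMe drive's ~30 Gbps that different network links can carry.
drive_gbps = 30
for link_gbps in (10, 25, 50, 100):
    usable = min(drive_gbps, link_gbps)
    stranded = 1 - usable / drive_gbps
    print(f"{link_gbps:>3} GbE link: {stranded:.0%} of the drive's bandwidth stranded")
```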

Software-defined everything 

The last driver for higher-performance server connectivity is the trend toward software-defined everything (SDX). This was perhaps best explained by Albert Greenberg of Microsoft in his ONS keynote presentation, where he said, “To make storage cheaper we use lots more network!” He went on to explain that to make Azure Storage scale, they use RoCE (RDMA over Converged Ethernet) at 40 Gbps to achieve “massive COGS savings.”

The key realization here is that with software-defined storage, the network becomes a vital component of the solution and the key to achieving the cost savings available with industry-standard servers. Instead of buying purpose-built networking, storage, or database appliances engineered for five-nines reliability, an SDX architecture takes a fundamentally different approach.

It starts with off-the-shelf servers offering three-nines reliability and engineers a software-defined system that achieves five-nines reliability through mirroring, high availability, and erasure coding. Instead of a high-bandwidth backplane to stripe data across disks, you simply use the network to stripe data across multiple servers. Of course, you need a lot more network to do this, but the cost savings, scalability, and performance benefits are dramatic.
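
A toy sketch of that idea: instead of striping blocks across local disks behind a backplane, stripe and mirror chunks across servers over the network. This is illustrative only and ignores erasure coding, consistency, and failure handling:

```python
# Toy illustration of software-defined storage placement: split a block of
# data into chunks and mirror each chunk on two different servers, which in
# a real system would mean network writes instead of backplane writes.
from typing import Dict, List, Tuple

SERVERS = ["node-a", "node-b", "node-c", "node-d"]  # commodity, three-nines boxes
CHUNK_SIZE = 4                                      # bytes, tiny for the example
REPLICAS = 2                                        # simple mirroring for availability

def place(data: bytes) -> Dict[str, List[Tuple[int, bytes]]]:
    """Split data into chunks and assign each chunk to REPLICAS servers."""
    layout: Dict[str, List[Tuple[int, bytes]]] = {s: [] for s in SERVERS}
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for idx, chunk in enumerate(chunks):
        for r in range(REPLICAS):
            server = SERVERS[(idx + r) % len(SERVERS)]  # round-robin placement
            layout[server].append((idx, chunk))         # would be a network write
    return layout

for server, chunks in place(b"software-defined everything").items():
    print(server, [(i, c.decode()) for i, c in chunks])
```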

Buying the most powerful server possible has always been a wise investment, and that is more true than ever with powerful multicore processors, faster solid-state storage, and software-defined everything architectures. But taking advantage of all the power of these server and software components requires more I/O and network bandwidth.


