
5 Disaster Recovery Tips: Learning from Hurricanes

Hurricanes Irma and Harvey highlight the need for DR planning to ensure business continuity.


This has been an awful year for natural disasters, and we're not even midway through a hurricane season that has already been particularly devastating. Hurricanes Irma and Harvey, and the flooding that ensued, have resulted in loss of life, extensive property damage, and crippled infrastructure.

Naturally, businesses have also been impacted. When it comes to applications, data and data centers, this is a wake-up call. At the same time, these are situations that motivate companies and individuals to introduce much-needed change. With this in mind, I’ll offer five tips any IT organization can use to become more resilient against natural disaster, no matter the characteristics of their systems and data centers. This can lead to better availability of critical data and tools when disaster strikes, continuity in serving customers, as well as peace of mind knowing preparations have been made and work can continue as expected.

1. Keep your people safe

When a natural disaster is anticipated (if there is notice), IT staffers need to focus on personal and family safety issues. Having to work late to take one more backup off-site shouldn’t be part of the last-minute process. Simply put, no data is worth putting lives at risk. If the rest of these tips are followed, IT staff won’t have to scramble in the heavy push of preparation to tie up loose ends of what already should be a resilient IT strategy.

2. Follow the 3-2-1 rule

In my role, I’ve long advocated the 3-2-1 rule, and we need to keep reiterating it: Have three different copies of important data saved, on two different media, one of these being off-site. Embrace this rule if you haven’t already. There are two additional key benefits of the 3-2-1 rule: It doesn’t require any specific technology and can address nearly any failure scenario.
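Because the rule is so simple, it can even be checked mechanically. The sketch below is illustrative only — the `BackupCopy` type and its fields are hypothetical, not part of any backup product — but it shows how an inventory of copies can be audited against 3-2-1:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BackupCopy:
    location: str   # e.g. "primary-nas", "tape-vault", "s3-bucket" (hypothetical names)
    media: str      # e.g. "disk", "tape", "cloud"
    offsite: bool   # is this copy stored away from the primary site?

def satisfies_3_2_1(copies: List[BackupCopy]) -> bool:
    """Return True if the inventory meets the 3-2-1 rule:
    at least 3 copies, on at least 2 media types, with 1 off-site."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )
```

Running such a check against each critical dataset makes it obvious when, say, all three copies sit on the same media type, or every copy lives in the same building.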

3. 10 miles may not be enough

My third tip pertains to the off-site recommendation above. Many organizations believe the off-site copy or disaster recovery facility should be at least 10 miles away. This may no longer be sufficient; the path and fallout of a hurricane can be wide-reaching. Moreover, you want to avoid having personnel spend unnecessary time in a car traveling to complete the IT work. Cloud technologies can provide a more efficient and safer solution. This can involve using disaster recovery as a service (DRaaS) from a service provider or simply putting backups in the cloud.

4. Test your DR plan

Ensure that when a disaster plan is created, there is particular focus on anticipating and eliminating surprises. This should involve regular testing of backups to be certain they are completely recoverable, that the plan will function as expected, and that all data is where it needs to be (off-site, for example). The last thing you want during a disaster is to find that the plan hasn't been fully implemented or run in months, or worse, to discover workloads that are not recoverable.
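One way to keep surprises out of a DR plan is to record when each workload last passed a test restore and flag anything that has gone stale. The bookkeeping below is a hypothetical sketch (the workload names and the 30-day window are assumptions, not from any specific tool):

```python
from datetime import datetime, timedelta
from typing import Dict, List, Optional

def stale_restore_tests(last_tested: Dict[str, Optional[datetime]],
                        max_age_days: int = 30,
                        now: Optional[datetime] = None) -> List[str]:
    """Return workloads whose last successful test restore is older than
    max_age_days, or that have never been tested (recorded as None)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, tested in last_tested.items()
                  if tested is None or tested < cutoff)
```

A report like this, reviewed on a schedule, surfaces the untested workload before a disaster does.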

5. Communications planning

My final recommendation is to work backwards through all required systems and with providers of all types to ensure you don't have risks you can't fix. Pay close attention to geography in relation to your own facilities, as well as country locations for data sovereignty considerations. This applies to telecommunications providers, too. A critical component of any disaster response is the ability to communicate. Given what happened in some locations in the path of Hurricane Irma, even cellular communication can be unreliable. Consider developing a plan to maintain communications in the interim if key business systems are down.

The recent flood and hurricane damage has been significant. The truth is, when it comes to the data, IT services, and more, there is a significant risk a business may never recover if it’s not adequately prepared. We live in a digitally transformed world and many businesses can’t operate without the availability of systems and data. These simple tips can bring about the resiliency companies need to effectively handle disasters, and prove their reliability to the customers they serve.

Rick Vanover is director of technical product marketing for Veeam Software.


Backup and Recovery Software: IT Pros Weigh In

How can enterprise IT professionals know which data backup and recovery software to choose for their business? There are numerous products on the market for this critical data center function.

Peer reviews published by real users facilitate this software decision-making with user feedback, insight, and product rankings that collectively indicate which solutions are in the lead. With this knowledge, potential users are equipped to choose the product offering best-suited to their organizational needs.

Based on real user reviews at IT Central Station, these five products are some of the top data backup and recovery solutions on the market. The reviews from IT pros provide valuable insight into the products’ benefits and shortcomings.

Veeam Backup

Chris C., a systems engineer at a business law firm, shared this perspective: “With moving the Veeam server to a physical server and creating a proxy server on each of the hosts, we are able to leverage SAN-based backup, which is very fast. Jobs are completed overnight and never run into the business hours.”

Alberto Z., a senior IT engineer at a tech company, noted an area for improvement: “Determining the space for the WAN acceleration server sometimes is hard, especially if you have many source sites for the data. I would like to have a kind of storage space calculator that gives me an estimate for the size of the WAN accelerator server we are creating; give it a list of VMs to be backed up.”

Read more Veeam Backup reviews by IT Central Station users.

HPE Data Protector

Darren O., systems engineer at a biotech company, provided this review of HPE Data Protector: “The granularity of brick-level restore functionality is very valuable. We receive approximately 10 restore requests on a daily basis for your typical file/folder restore, with the odd Exchange mailbox restore request thrown in, just to keep me on my toes.”

A systems and data services leader at a financial services firm who goes by the handle HeadOfSy6f42 said he would like to have more capacity. “This can be done by having more deduplication and compression. If they can compress the data more and more, we will save more space,” he noted.

Read more HPE Data Protector reviews by IT Central Station users.


Asigra

Guy N., CEO at a tech services firm, cited two primary improvements in the Asigra platform with the recent version 13.1 SP1:

  • “Virtualization: a tremendous variety of data protection solutions for virtual machines.
  • Cloud-2-Cloud: Office 365, Google, Amazon, etc. This is a full package of data protection platform!”

He also provided insight about Asigra's cost and licensing features:

“It is important to be perfectly knowledgeable about Asigra’s pricing model. It is somewhat more complex than other backup vendors, but it includes a huge opportunity for savings, especially with their recovery license model (RLM).”

Read more Asigra reviews by IT Central Station users. 

Altaro VM Backup

IT support engineer Vasileios P. offered this view: “Simplicity and reliability. I had some difficulties activating the product, but after the activation phase all went smooth…I could create VM backups from live machines without any issues. The restore process also was very quick.”

However, Chaim K., CEO of a tech services company, said he needs “to be able to restore emails to Exchange Live not just to a PST. This is a major drawback as I want to be able to restore individual items or mailboxes directly into my live Exchange databases so the user can see the email right away.”

Read more Altaro VM Backup reviews by IT Central Station users.


Commvault

Dan G., senior system administrator for a healthcare organization, wrote that Commvault’s “most valuable feature is the ability to backup over the dedicated Fiber Channel directly from SAN. There’s no impact to the network or users…Backups happen overnight instead of three days. Storage for backups has been reduced by 60%.”

He added that the “bare-metal restore needs some work. It’s not intuitive and seems to have been an afterthought.”

Read more Commvault reviews by IT Central Station users.


Top 3 Disaster Recovery Mistakes

Considering the high cost of IT downtime, disaster recovery planning is critical for every enterprise. According to a 2016 IHS report, downtime costs North American companies $700 billion a year. For a typical mid-size company, the average cost was around $1 million, while a large enterprise lost more than $60 million on average, IHS found.

Yet even with the stakes so high, companies fall into common pitfalls when planning disaster recovery to mitigate the impact of service outages. GS Khalsa, senior technical marketing manager at VMware, said that he sees organizations making the same three mistakes over and over again.

1. Not having a DR plan

In Khalsa’s opinion, by far the biggest mistake that companies make — and one of the most common — is failing to put together any sort of disaster recovery plan at all. He said that industry statistics indicate that up to 50% of organizations haven’t done any DR planning.

That’s unfortunate because preparing for a disaster doesn’t have to be as complicated or as costly as most organizations assume. “It doesn’t have to involve any purchases,” Khalsa said in an interview. “It doesn’t have to involve anything more than a discussion with the business that this is what our DR plan is.”

Even if companies decide to do nothing more than restore from their latest nightly backup, they should at least write that plan down so that they know what to expect and what to do in case of an emergency, he added.

2. Not testing the DR plan

Coming up with a plan is just the first step. Organizations also need a way to test the plan. Unfortunately, in a traditional, non-virtualized data center, there isn’t an easy, non-disruptive way to conduct a recovery test. As a result, most companies test “infrequently, if at all,” Khalsa said.

He pointed out that having a virtualized environment eases testing. Organizations can copy their VMs and test their recovery processes on an isolated network. That way they can see how long recovery will take and find potential problems without interrupting ongoing operations.

3. Not understanding the complexity of DR

Organizations also sometimes underestimate how much work it takes to recover from a backup. Khalsa explained that some organizations expect to be able to do their restores manually, which really isn’t feasible once you have more than about 10 or 20 VMs.

He noted that sometimes IT staff will write their own scripts to automate the recovery process, but even that can be problematic. “People forget that disasters don’t just impact systems, they also potentially impact people,” Khalsa said. The person who wrote the script may not be available to come into work following a disaster, which could hamper the recovery process.

Khalsa’s No. 1 tip for organizations involved in DR planning is for IT to communicate clearly with the business. Management and executives need to understand the recovery point objective (RPO) and recovery time objective (RTO) options and make some decisions about the acceptable level of risk.
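Those RPO/RTO conversations become concrete when each workload's backup interval and measured restore time are compared against the agreed objectives. The sketch below is an illustrative check under assumed field names (not any vendor's API): worst-case data loss is roughly the backup interval, which must fit within the RPO, while the restore time must fit within the RTO.

```python
from typing import Dict, List

def dr_gap_report(workloads: Dict[str, Dict[str, float]]) -> Dict[str, List[str]]:
    """For each workload (hours for all fields), report which objectives
    its current setup misses. Worst-case data loss ~= the backup interval,
    so the interval must not exceed the RPO; the measured restore time
    must not exceed the RTO."""
    report = {}
    for name, w in workloads.items():
        misses = []
        if w["backup_interval_h"] > w["rpo_h"]:
            misses.append("RPO")
        if w["restore_time_h"] > w["rto_h"]:
            misses.append("RTO")
        if misses:
            report[name] = misses
    return report
```

A gap report like this gives the business a plain answer: nightly backups cannot deliver a four-hour RPO, so either the objective or the backup schedule has to change.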

“More communication is better,” Khalsa said.

Hear more about disaster recovery planning from GS Khalsa live and in person at Interop ITX, where he will present, “Disaster Recovery In The Virtualized Data Center.” Register now for Interop ITX, May 15-19, in Las Vegas.


Delta Outages Reveal Flawed Disaster Recovery Plans

Delta’s recent IT failures put spotlight on the faulty nature of enterprise disaster recovery planning.

Getting stranded in an airport due to a cancelled flight is about as pleasurable as going to the dentist to get your teeth drilled. Unfortunately, travel disruptions due to IT system outages have become all too common. Earlier this week, an estimated 280 flights were cancelled and many others delayed due to Delta computer problems. If this sounds familiar, you probably remember when thousands of travelers were stranded after an even bigger Delta systems outage resulted in more than 2,000 flight cancellations over a three-day period in 2016.

These computer failures are a prime example of what happens when mission-critical IT infrastructure fails and backup systems don’t kick in quickly enough, resulting in big consequences. For Delta, the preventable outages cost it $100 million in lost revenue, lost business, and damage to reputation.

These incidents also draw attention to the patchwork and often outdated nature of the IT systems that power many airlines and businesses in other industries, which will no doubt contribute to future failures. While this week’s outage wasn’t as severe as the one Delta experienced last August, it underscores that outdated IT systems and inadequate disaster recovery planning will lead to more failures if changes aren’t made.

Disaster recovery planning

In light of the recent outages at Delta — and similar ones at United and Southwest — how can organizations avoid disruptions to IT services others depend on? While occasional mishaps are unavoidable, a little planning and investment in infrastructure can help companies sidestep, or at least more quickly recover from similar IT challenges.

To prevent avoidable failures, companies should thoroughly evaluate their disaster recovery plans and build redundancy into their systems where possible. Taking time to walk through potential failure scenarios and auditing the effectiveness of existing systems can also help avoid disaster.

For example, in the August 2016 failure, a potentially outdated power control module in Delta’s Atlanta-based data center failed, causing a small fire that was quickly extinguished. The good news? Delta had a backup power system in place. The bad news? Approximately 300 out of 7,000 servers at Delta weren’t wired to backup power, a flaw in the company’s planning that caused the entire Delta computer system to grind to a halt, Delta CEO Ed Bastian told The Atlanta Journal-Constitution.

A wakeup call

If there’s a silver lining to the recent Delta incidents, it’s that high-profile outages have spurred many companies into action. In fact, a Spiceworks poll found that a majority of organizations are taking steps to prevent IT disasters in light of the recent IT outages at airlines.

As to what they plan to do, respondents said the No. 1 step they plan to take is to improve their disaster recovery policies and procedures. This might include preparing a more comprehensive DR plan that covers likely “what if” scenarios that can bring your IT systems down, such as power outages, hardware failure, human error, and natural disasters.

Many companies also plan to replace their older IT systems with newer infrastructure and to invest in making their IT infrastructure more redundant. This typically involves testing backup and failover systems to make sure they’ll actually work when it counts. By rehearsing disaster recovery scenarios regularly, you also help ensure a faster time to recovery in the event of an incident. If systems are redundant enough, a failure might not result in any downtime at all.

But while many companies are taking steps to prevent IT outages, we also found that nearly 30% of organizations aren’t planning to make any changes at all. 

I’m sure you’re familiar with Murphy’s law: “Anything that can go wrong, will go wrong.” The same goes for disaster recovery planning. It’s not a matter of if something will happen, it’s a matter of when. To be truly effective, disaster planning needs to be looked at closely as a primary part of IT strategy, not just an afterthought. While the Delta outages are still fresh on the minds of management and IT pros alike, there’s no better time for action than the present.
