Tag Archives: Cloud

Disaster Recovery in the Public Cloud


Find out about the options for building highly available environments using public cloud providers, along with the benefits and tradeoffs.

I’ve had the opportunity to speak with many users about their plans for public cloud adoption, and these discussions frequently turn to how to avoid being impacted by potential cloud outages. The questions come up because public cloud outages do occur, even if less frequently than in the past, and customers want to mitigate the risk of disruption.

Thankfully, every major public cloud vendor offers options for building highly available environments that can survive some type of outage. AWS, for example, suggests four options that leverage multiple geographic regions. These options, which are also available from the other public cloud vendors, come at different price points and deliver different recovery point objectives (RPOs) and recovery time objectives (RTOs).

 

Companies can choose the option that best meets their RPO/RTO requirements and budget. The key takeaway is that public cloud providers enable customers to build highly available solutions on their global infrastructure.

Let’s take a brief look at these options and review some basic principles for building highly available environments using the public cloud. I’ll use AWS for my examples, but the principles apply across all public cloud providers.

First, understand the recovery point objective (RPO) and recovery time objective (RTO) for each of your applications so you can design the right solution for each use case. Second, there’s no one-size-fits-all solution for leveraging multiple geographic regions. There are different approaches you can take depending on your RPO and RTO targets, the cost you are willing and able to incur, and the tradeoffs you are willing to make. Some of these approaches, using AWS as the example, include:

  • Recovering to another region from backups – Back up your environment to S3, including EBS snapshots, RDS snapshots, AMIs, and regular file backups. Since S3 by default replicates data only across availability zones within a single region, you’ll need to enable cross-region replication to your DR region (see the replication sketch after this list). You’ll incur the cost of transferring and storing data in a second region, but won’t incur compute, EBS, or database costs until you need to go live in your DR region. The trade-off is the time required to launch your applications.
  • Warm standby in another region – Replicate data to a second region where you run a scaled-down version of your production environment. The scaled-down environment is always live and sized to run the minimal capacity needed to resume business. Use Route 53 to switch over to your DR region as needed (see the DNS switch sketch after this list), then scale the environment up to full capacity. With this option, you get faster recovery, but incur higher costs.
  • Hot standby in another region – Replicate data to a second region where you run a full version of your production environment. The environment is always live, and invoking full DR involves switching traffic over using Route 53. You get even faster recovery, but also incur even higher costs.
  • Multi-region active/active solution – Data is synchronized between both regions and both regions are used to service requests. This is the most complex to set up and the most expensive. However, little or no downtime is suffered even when an entire region fails. While the approaches above are really DR solutions, this one is about building a true highly available solution.
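
For the backup-based option, the key configuration step is enabling cross-region replication on the S3 bucket that holds your backups. Here is a minimal boto3 sketch under stated assumptions: the bucket names, account ID, and IAM replication role are placeholders, and the destination bucket already exists in the DR region.

```python
import boto3

# Hypothetical names: replace with your own buckets and replication role.
SOURCE_BUCKET = "prod-backups"
DR_BUCKET = "prod-backups-dr"          # lives in the DR region
REPLICATION_ROLE = "arn:aws:iam::123456789012:role/s3-crr-role"

s3 = boto3.client("s3")

# Cross-region replication requires versioning on both buckets.
for bucket in (SOURCE_BUCKET, DR_BUCKET):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every new object in the source bucket to the DR bucket.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE,
        "Rules": [
            {
                "ID": "replicate-to-dr-region",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate all objects
                "Destination": {"Bucket": f"arn:aws:s3:::{DR_BUCKET}"},
            }
        ],
    },
)
```

Note that replication generally applies only to objects written after the rule is in place, so existing backups may need a one-time copy to the DR bucket.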
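For the warm and hot standby options, the switchover itself is a DNS change in Route 53. The sketch below repoints an application record at the DR region’s endpoint; the hosted zone ID, record name, and load balancer hostname are hypothetical, and in practice you would more likely configure failover routing policies with health checks so the switch happens automatically.

```python
import boto3

# Hypothetical values: substitute your hosted zone, record, and DR endpoint.
HOSTED_ZONE_ID = "Z3EXAMPLEZONE"
RECORD_NAME = "app.example.com."
DR_ENDPOINT = "dr-lb-123456.us-west-2.elb.amazonaws.com"

route53 = boto3.client("route53")

# Repoint the application CNAME at the DR region's load balancer.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Fail over to DR region",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "CNAME",
                    "TTL": 60,  # short TTL keeps failover delay small
                    "ResourceRecords": [{"Value": DR_ENDPOINT}],
                },
            }
        ],
    },
)
```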

One of the keys to a successful multi-region setup and DR process is to automate as much as possible. This includes backups, replication, and launching your applications. Leverage automation tools such as Ansible and Terraform to capture the state of your environment and to automate the launching of resources. Also, test repeatedly to ensure that you’re able to successfully recover from an availability zone or region failure. Test not only your tools, but your processes.
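
As a simple illustration of that automation, the sketch below launches instances in the DR region from AMIs that have already been copied there. It stands in for what you would normally express as a Terraform configuration or Ansible playbook; the region, subnet, AMI IDs, and instance type are placeholders.

```python
import boto3

# Hypothetical inventory of AMIs already copied to the DR region.
DR_REGION = "us-west-2"
DR_SUBNET_ID = "subnet-0123456789abcdef0"
AMIS_TO_LAUNCH = {
    "web": "ami-0aaaaaaaaaaaaaaaa",
    "app": "ami-0bbbbbbbbbbbbbbbb",
}

ec2 = boto3.client("ec2", region_name=DR_REGION)

for role, ami_id in AMIS_TO_LAUNCH.items():
    # Launch one instance per tier; a real playbook would also attach
    # security groups, IAM profiles, and load balancers.
    ec2.run_instances(
        ImageId=ami_id,
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId=DR_SUBNET_ID,
        TagSpecifications=[
            {
                "ResourceType": "instance",
                "Tags": [{"Key": "Role", "Value": role}],
            }
        ],
    )
```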

Obviously, much more can be said on this topic. If you are interested in learning more about disaster recovery in the cloud, you can see me in person at the upcoming Interop ITX 2018 in Las Vegas, where I will present, “Saving Your Bacon with the Cloud When Your Data Center Is on Fire.” 


 




Data Protection in the Public Cloud: 6 Steps


While cloud security remains a top concern in the enterprise, public clouds are likely to be more secure than your private computing setup. This might seem counter-intuitive, but cloud service providers operate at a scale that lets them spend far more on security tools than any large enterprise, while the cost of that security is spread across millions of users, amounting to fractions of a cent each.

That doesn’t mean enterprises can hand over all responsibility for data security to their cloud provider. There are still many basic security steps companies need to take, starting with authentication. While this applies to all users, it’s particularly critical for sysadmins: a password compromise on an admin’s mobile device could be the equivalent of handing over the corporate master keys. For admins, multi-factor authentication is critical for secure operations. Adding smartphone-based biometrics as the second or third authentication factor is the latest wave, and there are a lot of creative strategies.
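
As one concrete example of a second factor, here is a minimal sketch of verifying a time-based one-time password with the pyotp library; the per-admin secret and the password check are stand-ins for whatever your identity system actually provides.

```python
import pyotp

# Hypothetical per-admin secret, normally provisioned by your identity
# provider and stored server-side, never hard-coded like this.
ADMIN_TOTP_SECRET = pyotp.random_base32()

def password_ok(username: str, password: str) -> bool:
    # Stand-in for the first factor (directory / password check).
    return True

def admin_login(username: str, password: str, otp_code: str) -> bool:
    """Require both a valid password and a valid TOTP code."""
    if not password_ok(username, password):
        return False
    totp = pyotp.TOTP(ADMIN_TOTP_SECRET)
    # valid_window=1 tolerates a little clock drift between devices.
    return totp.verify(otp_code, valid_window=1)
```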

Beyond guarding access to cloud data, what about securing the data itself? We’ve heard of major data exposures occurring when a set of instances is deleted but the corresponding data isn’t. After a while, these orphaned files get loose and can lead to some interesting reading. This is pure carelessness on the part of the data owner.

There are two answers to this issue. For larger cloud setups, I recommend a cloud data manager that tracks all data and spots orphan files. That should stop the wandering buckets, but what about the case when a hacker gets in, by whatever means, and can reach useful, current data? The answer, simply, is good encryption.
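
A full cloud data manager does far more, but the basic idea of spotting orphaned storage can be sketched in a few lines of boto3: list EBS volumes that are no longer attached to any instance and snapshots whose source volume has disappeared. Pagination and tagging conventions are omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

# Volumes in the 'available' state are not attached to any instance and are
# prime candidates for orphaned data that should be reviewed or removed.
orphan_volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in orphan_volumes:
    print(f"Unattached volume: {vol['VolumeId']} ({vol['Size']} GiB)")

# Snapshots owned by this account whose source volume no longer exists.
existing_volume_ids = {v["VolumeId"] for v in ec2.describe_volumes()["Volumes"]}
for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if snap.get("VolumeId") not in existing_volume_ids:
        print(f"Snapshot with no source volume: {snap['SnapshotId']}")
```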

Encryption is a bit more involved than running PKZIP on a directory. AES-256 encryption or better is essential. Key management is crucial: having a single admin hold the key is a disaster waiting to happen, while writing it down on a sticky note goes to the opposite extreme. One option offered by cloud providers is drive-based encryption, but this fails on two counts. First, drive-based encryption usually has only a few keys to select from and, guess what, hackers can readily access a list of them on the internet. Second, the data has to be decrypted by the network storage device to which the drive is attached, then re-encrypted (or not) as it’s sent to the requesting server. There are lots of security holes in that process.

End-to-end encryption is far better, where encryption is done with a key kept in the server. This stops downstream security vulnerabilities from being an issue while also adding protection from packet sniffing.
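
A minimal sketch of that end-to-end approach, using the Python cryptography library, is shown below: the object is encrypted with AES-256-GCM in the application before it ever reaches the storage service, so only ciphertext crosses the wire and lands on disk. Key storage and rotation, ideally handled by a KMS or HSM, are deliberately left out.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In production the key would come from a KMS/HSM, not be generated inline.
key = AESGCM.generate_key(bit_length=256)

def encrypt_object(plaintext: bytes, associated_data: bytes = b"") -> bytes:
    """Encrypt data before uploading; returns nonce + ciphertext."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # standard GCM nonce size
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

def decrypt_object(blob: bytes, associated_data: bytes = b"") -> bytes:
    """Decrypt data after downloading; raises if the data was tampered with."""
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)
```

Because the provider only ever sees ciphertext, a leaked bucket or a compromised storage appliance exposes nothing readable.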

Data sprawl is easy to create in the cloud, and it opens up another security risk, especially if a great deal of cloud management is decentralized to departmental computing or even individual users. Cloud data management tools address this much better than written policies. It’s also worthwhile considering adding global deduplication to the storage management mix; this reduces the exposure footprint considerably.
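
Real deduplication engines work at the block level inside the storage product, but the principle can be illustrated with a toy object store that fingerprints content: identical data hashes to the same key, so only one physical copy is kept. This is purely a conceptual sketch.

```python
import hashlib

class DedupStore:
    """Toy object store that keeps one copy of each unique payload."""

    def __init__(self) -> None:
        self._blobs = {}   # content hash -> payload
        self._index = {}   # object name -> content hash

    def put(self, name: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self._blobs.setdefault(digest, data)  # store the payload only once
        self._index[name] = digest

    def get(self, name: str) -> bytes:
        return self._blobs[self._index[name]]

    def unique_bytes(self) -> int:
        """Physical footprint after deduplication."""
        return sum(len(b) for b in self._blobs.values())
```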

Finally, the whole question of how to back up data is in flux today. Traditional backup and disaster recovery have moved from in-house tape and disk methods to the cloud as the preferred storage medium. The question now is whether a formal backup process is the proper strategy, as opposed to snapshot or continuous backup systems. The snapshot approach is growing, due to the value of small recovery windows and limited data-loss exposure, but there may be risks in not having separate backup copies, perhaps stored in different clouds.
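
As a small example of the snapshot approach on AWS, the boto3 sketch below snapshots every EBS volume carrying a hypothetical Backup=true tag; how often it runs determines your recovery window, and retention and cross-cloud copies are left out.

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

# Hypothetical convention: volumes tagged Backup=true get snapshotted.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
)["Volumes"]

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%SZ")
for vol in volumes:
    ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"scheduled snapshot {stamp}",
        TagSpecifications=[
            {
                "ResourceType": "snapshot",
                "Tags": [{"Key": "CreatedBy", "Value": "snapshot-schedule"}],
            }
        ],
    )
```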

On the next pages, I take a closer look at ways companies can protect their data when using the public cloud.





Hybrid Cloud: 4 Top Use Cases


In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that cost is usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, “I have a flexible environment where I’m not paying for all of that infrastructure all the time constantly,” she said. “I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective.”

2. Archive

Using a hybrid cloud for archive data offers benefits very similar to those of disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they spill over to a public cloud service for the extra capacity.
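
Conceptually, cloud bursting is a control loop: measure load on the private environment and, above a threshold, start overflow capacity in the public cloud. The sketch below shows only that decision logic with boto3; the utilization metric, threshold, and launch template are placeholders, and it deliberately ignores the hard parts such as data locality and network integration.

```python
import boto3

# Hypothetical values: tune for your own environment.
BURST_THRESHOLD = 0.80                  # burst when >80% utilized on-prem
LAUNCH_TEMPLATE_ID = "lt-0123456789abcdef0"
BURST_REGION = "us-east-1"

def private_cluster_utilization() -> float:
    """Stand-in for your on-prem monitoring system."""
    return 0.85

def burst_if_needed(instances_to_add: int = 2) -> None:
    if private_cluster_utilization() <= BURST_THRESHOLD:
        return
    ec2 = boto3.client("ec2", region_name=BURST_REGION)
    # Launch overflow capacity in the public cloud from a prebuilt template.
    ec2.run_instances(
        LaunchTemplate={"LaunchTemplateId": LAUNCH_TEMPLATE_ID},
        MinCount=instances_to_add,
        MaxCount=instances_to_add,
    )
```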

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, be successful with hybrid cloud setups, but this particular use case continues to be very challenging to put into practice.

Learn more about why enterprises are adopting hybrid cloud and best practices that lead to favorable outcomes at Camberley Bates’ Interop ITX session, “Hybrid Cloud Success & Failure: Use Cases & Technology Options.” 


 




Choosing a Cloud Provider: 8 Storage Considerations


Amazon Web Services, Google Cloud, and Microsoft Azure dominate the cloud service provider space, but for some applications it may make sense to choose a smaller provider that specializes in your application class and can deliver a finer-tuned solution. No matter which cloud provider you choose, it pays to look closely at the wide variety of cloud storage services on offer to make sure they will meet your company’s requirements.

The big cloud providers offer two major classes of storage: local instance storage, available with selected instance types, and a selection of network storage options for persistent storage and for sharing data between instances.

As with any storage, performance is a factor in your decision-making process. There are many shared network storage alternatives, with tiers ranging from very hot to freezing cold. Within the top tiers there are differences depending on the replica count you choose, as well as variations in the price of copying data to other storage spaces.

The hottest tier is moving to SSDs, and even here there are differences between NVMe and SATA drives, which cloud tenants typically see as different IOPS levels. For large instances and GPU-based instances, the faster choice is probably better, though this depends on your use case.

At the other extreme, for cold and “freezing” storage, the choices are disk or tape, which affects data retrieval times: retrieving from tape can take as much as two hours, compared with just seconds for disk.
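
On AWS, for example, those tiering decisions are often expressed as S3 lifecycle rules that move objects into progressively colder (cheaper, but slower to retrieve) storage classes as they age. A minimal boto3 sketch follows; the bucket name and transition ages are placeholders, and other providers offer equivalent mechanisms.

```python
import boto3

BUCKET = "example-analytics-data"  # hypothetical bucket

s3 = boto3.client("s3")

# Age data through progressively colder (cheaper, slower) tiers.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-with-age",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```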

Data security and vendor reliability are two other key considerations when choosing a cloud provider that will store your enterprise data.  Continue on to get tips for your selection process.



