Tag Archives: Cloud

Cloud Migration Opportunities to Be Aware of in 2019


I’ve now been part of a lot of DevOps projects. I’ve seen entire DevOps practices migrated into the cloud and entire legacy ball-of-mud applications transformed from monolith to microservices. However, it’s the fringe projects that I’ve found most interesting. These include deep integration around data ingestion, IoT development, and massive mainframe and ERP systems.

Over the past year, I’ve seen several customers and industries explore, and even begin, moving some of their legacy environments into the cloud: migrating part or all of an SAP ecosystem onto Google Cloud Platform, moving an entire Oracle ecosystem into the cloud, or taking massive IBM AS/400 architectures and moving (or refactoring) those components into the cloud.

There are several reasons why so many organizations have hesitated to move or modernize these systems, including:

  • Fear, uncertainty, and doubt. Let’s be honest: not a lot of people have done this, and it can be scary to move your entire business engine into the cloud. There aren’t a lot of use cases, and moving these platforms can be really risky. Many organizations simply can’t afford this multi-million-dollar experiment.
  • Complexity. Some AS/400 systems have been in place for years, even decades. Know what? They work. The same goes for massive ERP systems. Many organizations are ingrained in working with their SAP solution, but only on premises.
  • Cost and investment. Some enterprises have invested millions, even tens of millions, of dollars in their ERP and mainframe systems. They have put in years of development, investment in infrastructure, contracts, personnel, and so much more. You’d need to show some real ROI to even consider moving these systems to the cloud.

Now for some good news. Organizations are realizing that to disrupt a digital-driven market, they must themselves be disrupted. This means breaking traditional deployment paradigms and looking at ways to migrate mainframes and even ERP systems into the cloud, or at ways powerful hybrid solutions can help.

Read the rest of this article at InformationWeek.




When it Comes to Cloud, It’s a Hybrid World


The world has reached the cloud tipping point, in which more than half of all enterprises have become cloud first. Clearly, we’ve arrived at the cloud computing era, but what does this mean? Will the future be an Amazon Web Services (AWS)-centric world, or is there room for other public cloud vendors, such as Microsoft Azure or Google Cloud, to dominate? What about private cloud? And will on-premises apps eventually go the way of the analog telephone?

Curious, we surveyed 135 cloud professionals at the latest AWS re:Invent show in Las Vegas to find out more. The typical cloud professional we surveyed worked at an organization with 1,500 employees within a wide range of industries, and respondents were split between staff (58 percent) and management (42 percent) roles.

What we discovered is that the future looks a bit chaotic. The idea of a neat, homogenous public cloud future is not likely to happen. The future will be decidedly hybrid.

First, we found that enterprises today have a roughly equal number of on-premises and cloud workloads; by 2020, they expect that to tilt much more heavily toward cloud. No surprise there, but we also learned that fully 11 percent of workloads today are hybrid and that figure will remain about the same at 12 percent in 2020. 

What drives enterprises toward a particular app deployment model? The respondents told us that they host apps on-premises for security, cost and compliance reasons, whereas cloud delivers reliability, performance, and flexibility. In other words, the answer to the question of where workloads are hosted is a clear “it depends.”

In terms of which public cloud service is preferred by the professionals we surveyed, AWS is the clear leader: 81 percent are currently using AWS in production, with 16 percent trialing (note: this was an AWS show).  But don’t give up on other platforms. Nearly half were using or trialing Azure, and 39 percent were involved with Google Cloud. In fact, a survey by the Cloud Native Computing Foundation shows that Google Cloud is a close second to AWS for container and Kubernetes use cases.

So, the race is not over. In fact, most cloud professionals are using multiple cloud platforms. Why? The top three reasons cited were cost-effectiveness, redundancy, and security. Furthermore, 22 to 24 percent were at least experimenting with Azure Stack or VMware Cloud on AWS, two local cloud offerings.

What about on-premises apps? We asked how long on-premises apps will last; a third of the respondents told us six more years, and one in five said on-premises apps will continue for 10 years or more.

Prepare for the future

So, we are clearly in the era of cloud computing, but this cloudy future will include multiple public cloud platforms, hybrid clouds, some local cloud options, and, at least for the foreseeable future, a mix of on-premises apps. In short, the future looks messy.  So, what can enterprises do to better prepare for this hybrid world?

First, IT should perform a tool and ecosystem assessment of all existing solutions for compatibility and ensure that the solutions and tools chosen in the future support hybrid environments.

Second, change your hiring bias to look more for generalists than experts. In the old world, people had tighter boundaries and vertical expertise. For example, a storage admin was an expert in storage, but not so much in networking. In the new world, people act much more as generalists and can cover more IT domains with less required depth. 

Third, transition from a CapEx footing to an OpEx footing, and make sure you have the appropriate visibility and control over the processes that drive costs. In a hybrid environment, equipment is no longer a sunk cost: the OpEx budget is consumed every month. You need a solution that provides the right visibility across on-premises and cloud so you can track the cost implications of every engineering decision, not only at the end of the month but on a daily basis, with alerting for exceptionally high consumption if needed, to keep budgets tightly managed.
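
As an illustration of that kind of daily visibility, here is a minimal sketch, assuming an AWS environment where Cost Explorer is enabled and boto3 credentials are configured; the daily budget figure and the SNS topic ARN are hypothetical placeholders.

    # A minimal sketch of daily cost visibility with alerting, assuming AWS
    # Cost Explorer is enabled and boto3 credentials are configured.
    # The daily budget and the SNS topic ARN below are hypothetical.
    import datetime
    import boto3

    DAILY_BUDGET_USD = 500.0  # hypothetical per-day budget

    # The Cost Explorer API is served from us-east-1.
    ce = boto3.client("ce", region_name="us-east-1")
    sns = boto3.client("sns")

    today = datetime.date.today()
    yesterday = today - datetime.timedelta(days=1)

    # Pull yesterday's unblended cost (the End date is exclusive).
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": yesterday.isoformat(), "End": today.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
    )
    spend = float(result["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])
    print(f"Cloud spend for {yesterday}: ${spend:,.2f}")

    # Alert on exceptionally high consumption, as suggested above.
    if spend > DAILY_BUDGET_USD:
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:cost-alerts",  # placeholder
            Subject="Daily cloud spend above budget",
            Message=f"Spend for {yesterday} was ${spend:,.2f} against a ${DAILY_BUDGET_USD:,.2f} budget.",
        )

In practice you would schedule a check like this as a daily job and pair it with whatever reporting covers the on-premises side of the environment.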

And, finally, understand the dependencies and topology between components and data. Before making the transition to hybrid, make sure you understand how the application’s components, its data, and its users connect to one another, and that each component and data store is placed deliberately; projects can fail if planning does not take this into consideration. Having data and other components in different locations (on-prem vs. cloud) may be fine, depending on the type and size of the data and the security requirements.
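
As a small illustration of that topology exercise, the sketch below walks a made-up component inventory and flags every dependency that crosses the on-prem/cloud boundary; all component names, placements, and dependencies are invented.

    # A minimal sketch of a pre-migration topology check. All component names,
    # placements, and dependencies below are invented for illustration.
    components = {
        "web-frontend": "cloud",
        "order-service": "cloud",
        "orders-db": "on-prem",
        "reporting": "on-prem",
    }

    dependencies = [
        ("web-frontend", "order-service"),
        ("order-service", "orders-db"),
        ("reporting", "orders-db"),
    ]

    # Every dependency that crosses the on-prem/cloud boundary is a latency,
    # egress-cost, and security question to answer before the move.
    for src, dst in dependencies:
        if components[src] != components[dst]:
            print(f"Cross-location dependency: {src} ({components[src]}) "
                  f"-> {dst} ({components[dst]})")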

This is an exciting time as we transition to our cloud-computing future. Opportunities abound. But now is also the time to prepare for what is sure to be a hybrid future.

 




Cloud Storage Adoption Soars in the Workplace


Despite lingering cloud security fears, businesses are adopting cloud storage and file-sharing services at a rapid clip, according to new research from Spiceworks.

Eighty percent of the 544 organizations Spiceworks polled reported using cloud storage services, and another 16% say they plan to deploy them within the next two years. A similar study by the company in 2016 found that 53% of businesses were using these services.

Even though cloud storage and file-sharing services are becoming pervasive in the workplace, 25% of survey respondents believe their data in the cloud is not at all secure, or only somewhat secure. Sixteen percent of those polled said their organization has experienced one or more security incidents, including stolen credentials or data theft, via their cloud storage service in the last 12 months.

To mitigate the risks, many organizations have implemented various security measures, Spiceworks found. Fifty-seven percent of survey participants said their organizations only allow employees to use IT-approved cloud storage services. Fifty-five percent enforce user access controls and 48% conduct employee security training.

Less common security controls for cloud storage services include multi-factor authentication (28%) and encrypting data in transit (26%), according to the research.

Microsoft OneDrive takes the lead

The Spiceworks study also polled IT pros on their choice of cloud storage services vendor and found that Microsoft OneDrive has vaulted ahead of the competition in both the enterprise and the SMB markets. Among businesses with more than 1,000 employees, OneDrive’s adoption rate was 59%, much higher than Google Drive (29%) and Dropbox (25%). Among small and midsize businesses with 100 to 999 employees, the adoption rate for OneDrive was 54% compared to 35% using Dropbox and 33% using Google Drive.

“It’s evident that in a matter of two years, OneDrive has stolen the top spot from Dropbox as the most commonly used cloud storage service in the business environment,” Peter Tsai, senior technology analyst at Spiceworks, wrote in a blog post.

In 2016, Spiceworks research found that 33% of organizations were using Dropbox, 31% were using OneDrive, and 27% were using Google Drive. An additional 18% of businesses planned to adopt OneDrive.

Tsai surmised that OneDrive’s popularity is connected to the fact that it’s bundled with an Office 365 subscription, which many organizations have. A separate Spiceworks study found that more than 50% of companies subscribe to Office 365.

Security edges reliability

When buying cloud storage services, IT buyers put a priority on security, according to Spiceworks. Ninety-seven percent of survey respondents ranked it as an important or extremely important factor, followed closely by reliability at 96%.

Interestingly, 39% of those polled said security is the attribute they most closely associate with OneDrive, compared to Google Drive (28%) and Dropbox (19%). In terms of reliability and cost effectiveness, Google Drive led the pack. Dropbox got the highest ranking for ease of use.

According to the Spiceworks report, Dropbox also wins out when it comes to cloud storage services unsanctioned by the IT department.

The Spiceworks survey polled IT pros in the company’s network; they represent a variety of company sizes and industries.




Disaster Recovery in the Public Cloud


Find out about the options for building highly available environments using public cloud providers, along with the benefits and tradeoffs.

I’ve had the opportunity to speak with many users about their plans for public cloud adoption; these discussions frequently revolve around how to avoid being impacted by potential cloud outages. Questions come up because public cloud outages do occur, even though they happen less frequently now than they may have in the past, and customers are concerned about mitigating the risk of disruption.

Thankfully, every major public cloud vendor offers options for building highly available environments that can survive some type of outage. AWS, for example, suggests four options that leverage multiple geographic regions. These options, which are also available with the other public cloud vendors, come with different price points and deliver different recovery point objectives (RPO) and different recovery time objectives (RTO).

 

Companies can choose the option that best meets their RPO/RTO requirements and budget. The key takeaway is that public cloud providers enable customers to build highly available solutions on their global infrastructure.

Let’s take a brief look at these options and review some basic principles for building highly available environments using the public cloud. I’ll use AWS for my examples, but the principles apply across all public cloud providers.

First, understand the recovery point objective (RPO) and recovery time objective (RTO) for each of your applications so you can design the right solution for each use case. Second, there’s no one-size-fits-all solution for leveraging multiple geographic regions. There are different approaches you can take depending on your RPO, your RTO, the cost you are willing and able to incur, and the tradeoffs you are willing to make. Some of these approaches, using AWS as the example, include:

  • Recovering to another region from backups – Back up your environment to S3, including EBS snapshots, RDS snapshots, AMIs, and regular file backups. Since S3 only replicates data, by default, to availability zones within a single region, you’ll need to enable cross-region replication to your DR region (a minimal configuration sketch follows this list). You’ll incur the cost of transferring and storing data in a second region but won’t incur compute, EBS, or database costs until you need to go live in your DR region. The trade-off is the time required to launch your applications.
  • Warm standby in another region – Replicate data to a second region where you’ll run a scaled-down version of your production environment. The scaled-down environment is always live and sized to run the minimal capacity needed to resume business. Use Route 53 to switch over to your DR region as needed. Scale up the environment to full capacity as needed. With this option, you get faster recovery, but incur higher costs.
  • Hot standby in another region – Replicate data to a second region where you run a full version of your production environment. The environment is always live, and invoking full DR involves switching traffic over using Route 53. You get even faster recovery, but also incur even higher costs.
  • Multi-region active/active solution – Data is synchronized between both regions and both regions are used to service requests. This is the most complex to set up and the most expensive. However, little or no downtime is suffered even when an entire region fails. While the approaches above are really DR solutions, this one is about building a truly highly available solution.
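
As referenced in the first option above, here is a minimal sketch of turning on S3 cross-region replication with boto3. The bucket names, account ID, and IAM role are hypothetical placeholders, and both buckets are assumed to already exist in different regions.

    # A minimal sketch of enabling S3 cross-region replication for the
    # backup-based approach. Bucket names, the account ID, and the IAM role
    # are placeholders; the destination bucket must live in the DR region and
    # already have versioning enabled.
    import boto3

    SOURCE_BUCKET = "prod-backups"                                   # placeholder
    DEST_BUCKET_ARN = "arn:aws:s3:::prod-backups-dr"                 # placeholder
    REPLICATION_ROLE = "arn:aws:iam::123456789012:role/s3-crr-role"  # placeholder

    s3 = boto3.client("s3")

    # Replication requires versioning on both buckets (shown for the source).
    s3.put_bucket_versioning(
        Bucket=SOURCE_BUCKET,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Replicate every new object in the source bucket to the DR bucket.
    s3.put_bucket_replication(
        Bucket=SOURCE_BUCKET,
        ReplicationConfiguration={
            "Role": REPLICATION_ROLE,
            "Rules": [
                {
                    "ID": "replicate-to-dr",
                    "Status": "Enabled",
                    "Prefix": "",
                    "Destination": {"Bucket": DEST_BUCKET_ARN},
                }
            ],
        },
    )

Note that replication applies to objects written after the rule is in place, so existing backups still need a one-time copy to the DR bucket.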

One of the keys to a successful multi-region setup and DR process is to automate as much as possible. This includes backups, replication, and launching your applications. Leverage automation tools such as Ansible and Terraform to capture the state of your environment and to automate the launching of resources. Also, test repeatedly to ensure that you’re able to successfully recover from an availability zone or region failure. Test not only your tools, but your processes.
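
As one small example of automating the "launching your applications" step, the sketch below copies a golden AMI into the DR region and launches an instance from it; the AMI ID, regions, and instance type are placeholders, and in practice this would more likely be captured in Terraform or Ansible as suggested above.

    # A minimal sketch of automating one recovery step with boto3: copy a
    # "golden" AMI into the DR region and launch an instance from it. The AMI
    # ID, regions, and instance type are placeholders; a default VPC is assumed.
    import boto3

    PRIMARY_REGION = "us-east-1"                 # placeholder
    DR_REGION = "us-west-2"                      # placeholder
    SOURCE_AMI_ID = "ami-0123456789abcdef0"      # placeholder golden image

    ec2_dr = boto3.client("ec2", region_name=DR_REGION)

    # Copy the image into the DR region and wait until it is usable.
    copy = ec2_dr.copy_image(
        Name="app-server-dr-copy",
        SourceImageId=SOURCE_AMI_ID,
        SourceRegion=PRIMARY_REGION,
    )
    dr_ami_id = copy["ImageId"]
    ec2_dr.get_waiter("image_available").wait(ImageIds=[dr_ami_id])

    # Launch a single instance from the copied image in the DR region.
    ec2_dr.run_instances(
        ImageId=dr_ami_id,
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
    )
    print(f"Launched DR instance from {dr_ami_id} in {DR_REGION}")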

Obviously, much more can be said on this topic. If you are interested in learning more about disaster recovery in the cloud, you can see me in person at the upcoming Interop ITX 2018 in Las Vegas, where I will present, “Saving Your Bacon with the Cloud When Your Data Center Is on Fire.” 

Get live advice on networking, storage, and data center technologies to build the foundation to support software-driven IT and the cloud. Attend the Infrastructure Track at Interop ITX, April 30-May 4, 2018. Register now!

 


