
3 Hidden Public Cloud Costs and How to Avoid Them


According to Gartner, worldwide public cloud revenue is expected to grow 17.3 percent this year, representing a whopping $206.2 billion. That’s up from just over $175 billion last year.

Clearly, IT organizations are ready to fire up their purchase orders, but before you commit, remember the old saying: “there’s no free lunch.” Hidden costs are an unfortunate byproduct of the public cloud life. Understand what you’re getting into upfront so you can decide when using a public cloud provider is cost effective and appropriate, or when it might be better to go a different route, such as a hybrid or multi-cloud approach.

Ingress costs

Often, public cloud providers’ ingress costs–the upfront price you pay to get your data into the service–are either fairly low or non-existent. In some cases, the cloud provider will even help you transport your data for free.

The issue here is not so much cost as it is time. Transporting petabytes of data into a public cloud service can take weeks, if not months, during which critical data might be unavailable. You could send it over a private network, but there’s a time cost to that, too.

Transactional costs

Most public cloud providers charge a nominal fee every time you access your data. Each fee seems negligible, sometimes averaging pennies per hour, which cloud providers hope to make up in high volume.

Things can get pretty pricey when you’re running thousands of analytics jobs. It’s easy for a CIO looking for cost savings to simply say “let’s put everything we have in the public cloud” when everything you have is fairly minimal, but as data use rises, so do transactional costs. In that case, using the public cloud exclusively for everything might not be the wisest long-term investment.
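To see how those per-request fees scale with data use, here is a minimal back-of-the-envelope sketch in Python; the per-request rate is a placeholder for illustration, not a current list price from any provider.

```python
def monthly_request_cost(requests_per_day: int, price_per_1k_requests: float) -> float:
    """Estimate monthly transactional cost from daily request volume."""
    return requests_per_day * 30 / 1000 * price_per_1k_requests

# Placeholder rate: $0.0004 per 1,000 read requests (purely illustrative).
RATE = 0.0004
small_workload = monthly_request_cost(requests_per_day=50_000, price_per_1k_requests=RATE)
analytics_farm = monthly_request_cost(requests_per_day=200_000_000, price_per_1k_requests=RATE)

print(f"Small workload:     ${small_workload:.2f}/month")    # well under a dollar
print(f"Analytics workload: ${analytics_farm:,.2f}/month")   # thousands per month
```

The arithmetic is trivial, but it makes the point: the same pricing that looks like a rounding error for a small data footprint becomes a real line item once analytics jobs start hammering the storage.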

Egress costs

Stop me if you’ve heard this one before: “Our boss asked us to move all of our data to one public cloud provider. Now, we’re trying to move it to another, but we have to rewrite all of our scripts. It’s a huge pain.”

Moving your data from one provider to another can indeed be a huge pain. This act of egress can result in significant costs, creating a form of cloud provider lock-in that can be difficult to break. Teams need to rewrite their scripts, which translates to additional time, money, and lost productivity. You’re reinventing not just the wheel but a car’s entire engine and chassis.

A hybrid solution

You might be wondering if the public cloud is worth the cost. In many cases, the answer is “yes,” but it depends on your goals.

For better agility, investing in the public cloud is a wise move. Likewise, if you’re a smaller business, you will probably incur fewer transactional costs because you will likely have less data than a larger corporation.

But the answer might be “yes…and no.” You may choose to adopt hybrid and multi-cloud strategies, keeping some data on-premises or splitting it across different clouds.

A hybrid and multi-cloud strategy provides options. Companies can enjoy the extra tools and capabilities offered by public clouds while keeping costs under control. They don’t have to worry about ingress costs, and transactional costs can be minimized. They can also greatly reduce or even eliminate egress costs, since they likely do not have to perform wholesale data migrations between different providers and can simply delete their public cloud data if they have an on-premises backup.

Moving data within a hybrid environment

Moving applications between clouds can present its own challenges. Every public cloud provider uses its own cloud storage protocols. Migrating data between these disparate and disconnected protocols can result in egress costs–just what you’re trying to avoid.

You need to be able to federate your data so that it can be used across distinct protocols with minimal effort and cost. This can be accomplished by aggregating native storage from different cloud providers into a storage repository that uses a single endpoint to manage all of your organization’s clouds. Instead of manually pulling data out of one and migrating it to another, you can automatically migrate data and applications to and from the appropriate clouds.
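As a rough illustration of what such federation can look like from the application’s point of view, here is a minimal Python sketch that hides two providers behind one interface. The adapter classes and the bucket and container names are hypothetical, and a real deployment would rely on a dedicated federation layer rather than hand-rolled wrappers.

```python
from abc import ABC, abstractmethod

import boto3                                        # AWS SDK for Python
from azure.storage.blob import BlobServiceClient    # Azure Blob SDK

class ObjectStore(ABC):
    """Single interface the rest of the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        self.s3 = boto3.client("s3")
        self.bucket = bucket
    def put(self, key, data):
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key):
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

class AzureBlobStore(ObjectStore):
    def __init__(self, conn_str: str, container: str):
        self.container = BlobServiceClient.from_connection_string(conn_str).get_container_client(container)
    def put(self, key, data):
        self.container.upload_blob(name=key, data=data, overwrite=True)
    def get(self, key):
        return self.container.download_blob(key).readall()

# Application code talks to one "endpoint" regardless of where the data lives.
def archive_report(store: ObjectStore, report_id: str, payload: bytes) -> None:
    store.put(f"reports/{report_id}.json", payload)
```

The point of the design is that application code depends only on the ObjectStore interface, so moving data between providers does not mean rewriting every script that touches it.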

When combined with container-native storage–highly portable object storage for containerized applications–you can easily transport all of your applications and their associated data between different providers. Furthermore, developers can provision this storage automatically without having to bother their data managers, saving everyone time and letting teams move faster.
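For instance, if container-native storage is exposed through a Kubernetes storage class, a developer could provision a volume for an application directly from code. This is a minimal sketch using the official Kubernetes Python client; the storage class name is a hypothetical placeholder.

```python
from kubernetes import client, config

config.load_kube_config()  # use the developer's local kubeconfig

# Request a 20 GiB volume from a (hypothetical) container-native storage class.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="portable-object-storage",   # placeholder class name
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```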

Call it virtualization of object storage, or protocol translation. Whatever the name, it can all be done without breaking a sweat (or the bank). The result is the optimization of your hybrid or multi-cloud environments and the elimination of the hidden time and costs associated with public cloud storage. 

 




It’s Time for Enterprise Networking to Embrace Cloud Architectures


I’ll start at the end. Cloud computing is now the vernacular for computing. Cloud networking will, within the next 24 months, be the vernacular for networking. The same paradigms that have revolutionized computing will do so for networking.

Monolithic architecture moved into client/server architectures, which then evolved into service-oriented architectures, which have in turn given way to the now ubiquitous microservices/container model. This microservices architecture is the mainstay of cloud and public cloud computing, as well as serverless/utility computing models. Cloud software architectures bring numerous benefits to applications, including:

  • Horizontal scale

  • Use of resource pools for near unlimited capacity

  • Distributed services and databases

  • Fault tolerance and containerization for hitless “restartability”

  • In-service upgrades

  • Programmability, both northbound and southbound, for flexible integration across services

  • Programming language independence

It is these attributes that we see (for the most part) in large, global SaaS applications such as Amazon’s e-commerce site, Netflix’s streaming service, and the Facebook and Twitter social networks. The same capabilities – with the same global, highly available, horizontal scale – can be applied to enterprise networking.

The heart of networking is routing. Routing algorithms have maintained the same architecture for the past 30 years. Border Gateway Protocol (BGP4), the routing protocol of the Internet, has been in use since 1994. Routing protocols are designed for resiliency and autonomous operation. Each router or autonomous system can be an island unto itself, needing only visibility and connectivity to its directly attached neighbors. This architecture has allowed for the completely decentralized and highly resilient operation of BGP routing, yet it has also introduced challenges. Scaling and convergence problems continually plague BGP operations and Internet performance. There have been proposals to replace BGP, but its installed base makes that nearly impossible. The next best option is to augment it.

The most common mechanism for augmentation is to build an overlay network. An overlay network uses the BGP4-powered Internet as a foundation and bypasses BGP routing using alternative routing protocols. This approach combines the best of BGP routing – resiliency and global availability – with the performance and scale improvements of new and innovative routing protocols. The overlay model and these new routing protocols open the door to routing based on performance metrics and application awareness, and the potential to bring LAN-like performance to the Internet-powered WAN. This is at the heart of the cloud networking evolution and software-defined networking moving forward.

Building atop BGP4’s flat, decentralized architecture, new routing protocols are leveraging cloud software architectures to deliver fast, scalable, performance-driven routing, embracing both the centralized and the distributed nature of cloud computing. The Internet, acting as the underlying network, provides basic connectivity. A broad network of sensors–small microservices deployed across major points of presence globally–runs simple performance tests at set intervals and feeds the results to a centralized, hierarchical routing engine. The basic tests provide insight into throughput, loss, and latency at key points of presence worldwide. The centralized routing engine then applies deep learning to the performance data, both current and historical, to create routes. The routing updates are pushed to overlay network routers, which then update their forwarding tables. Route hierarchy brings scale and resiliency: should connectivity to the centralized routing engine be lost, routing persists via router-to-router updates and, in the case of a prolonged outage, by falling back to the underlying network.
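To make the pattern concrete, here is a highly simplified Python sketch of the loop described above: sensors report per-link metrics, a central engine computes performance-weighted routes, and the resulting paths would be pushed to overlay routers. The site names, sample numbers, and cost formula are illustrative assumptions, not a real protocol.

```python
import heapq
from collections import defaultdict

# Each sensor reports (src, dst, latency_ms, loss_pct) for a probed link.
measurements = [
    ("nyc", "lon", 75.0, 0.1),
    ("nyc", "fra", 95.0, 0.0),
    ("lon", "fra", 20.0, 0.5),
    ("lon", "sin", 180.0, 0.2),
    ("fra", "sin", 160.0, 0.1),
]

def link_cost(latency_ms: float, loss_pct: float) -> float:
    """Illustrative cost function: penalize loss heavily on top of latency."""
    return latency_ms + 100.0 * loss_pct

def build_graph(samples):
    graph = defaultdict(list)
    for src, dst, lat, loss in samples:
        cost = link_cost(lat, loss)
        graph[src].append((dst, cost))
        graph[dst].append((src, cost))   # treat probed links as bidirectional
    return graph

def best_route(graph, src, dst):
    """Plain Dijkstra over the performance-weighted overlay graph."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

graph = build_graph(measurements)
cost, path = best_route(graph, "nyc", "sin")
print(path, cost)   # the route the central engine would push to overlay routers
```

In a production system, learning over historical telemetry would replace this static cost function, but the flow is the same: measure, weight, compute, push.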

Key elements deliver benefits

There are a few key elements of centralized overlay routing that are really novel:

Performance as a metric: BGP does not factor performance into route calculations, so it is possible (if not probable) that one or more poorly performing links will be used, hurting application performance. This manifests itself in poor TCP performance (which leads to degraded throughput) as well as high loss, which impacts real-time applications. The use of performance data in centralized overlay routing introduces the capability to route not just by hop count or least cost, but also by best performance.

Application-specific routing: Using performance telemetry for routing enables routes with an application bias. High-throughput routes can be used for file transfers, while low-loss, low-latency routes can be used for real-time applications such as voice or video.
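Continuing the illustrative sketch above, application-specific routing amounts to choosing a different cost function per traffic class before running the same route computation; the class names and weightings below are assumptions for illustration.

```python
# Per-application cost functions applied to the same telemetry
# (latency in ms, loss in percent, throughput in Mbps).
def realtime_cost(latency_ms, loss_pct, throughput_mbps):
    return latency_ms + 500.0 * loss_pct           # voice/video: delay and loss dominate

def bulk_transfer_cost(latency_ms, loss_pct, throughput_mbps):
    return 1_000.0 / max(throughput_mbps, 1.0)     # file transfer: favor raw throughput

COST_BY_APP_CLASS = {
    "voip": realtime_cost,
    "backup": bulk_transfer_cost,
}
# The routing engine picks COST_BY_APP_CLASS[app_class] when weighting links,
# then runs the same shortest-path computation shown earlier.
```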

High availability:  The use of proven, battle-tested, cloud software architecture for cloud networking ensures that centralized routing is not only resilient but is also highly available on a number of levels. Use of distributed microservices and the capability to “restart” individual services on the fly without service outage – a key element of cloud software architecture – combined with the safety net of reverting to underlay BGP4 routing, ensures packets continue to flow even in the event of something catastrophic.

Native integration into SD-WAN and SDN: As SD-WAN continues to overtake the WAN edge, support for centralized routing will continue to grow. Progressive SD-WAN vendors are already starting to use overlay networks and centralized routing, demonstrating the approach’s viability.

Networking is evolving, embracing cloud software architectures and techniques. It is pushing into the enterprise from two sides: from the data center and from the WAN edge. This push is accelerated by the approach of augmenting Internet technologies rather than replacing them outright, enabling enterprises to deploy these new technologies across their networks quickly. The effects are immediate and noticeable, as the performance of critical business applications improves within the enterprise, across the enterprise WAN, and out to enterprise SaaS applications and cloud workloads.

 




Cloud Storage and Policies: How Can You Find Your Way?


Cloud storage is one of the hottest topics today, and rightfully so: new services seem to be added daily. Storage is one of the most attractive categories of cloud service, so it is only natural to look for business problems it can solve.

The reality is that storage in the cloud is a whole new discipline. Completely different. Forget everything you know and start from the beginning. Both Amazon Web Services and Microsoft Azure have many different storage services. Some are like what we have used on-premises, such as Azure File Storage and AWS Elastic Block Store; these resemble traditional file shares and block storage, yet how they are used can make a very big difference in your experience in the cloud. There are more storage services in the cloud (such as object storage, gateways, and more), and they are different from what has traditionally been used on-premises. That is where it gets interesting.

Let’s first identify why organizations want to leverage the cloud for storage. This may seem a needless step, but it is more critical than ever. The why is very important. The fundamental reason why should be that the cloud is the right platform for the storage need. Supporting reasons will also include cloud benefits such as these:

No upfront purchase: This is different from the on-premises practice of purchasing storage for future capacity needs (best guesses, overspending, and badly missed targets are common with that approach!).

Effectively unlimited capacity: Ask any mathematician and they will quickly point out that the cloud is not literally unlimited, but from most customers’ perspective the cloud provides effectively unlimited storage.

Predictable pricing: While not exactly linear, it is pretty clear what consumption pricing will be with cloud storage.

These are good reasons to embrace cloud storage, but beyond the reasons to go to the cloud, the strong advice is to look at storage policies and usage so there are no surprises later. Part of that means looking at the economics across the complete scope of use; too often, pricing is seen simply as consumption per month. Take AWS S3, for example: S3 Standard storage prices the first 50 TB per month at $0.023 per GB (pricing as of March 2019, US East (Ohio) region). But other aspects of using the storage should absolutely be considered, such as the following (a rough cost sketch follows the list):

Getting data into the cloud is often overlooked, but there is a cost to that as well. This makes how data is written to the cloud important. Is data sent in small increments (more write operations or put tasks) or in relatively fewer larger increments? This can change the cost profile.

Egress is when data is read from a cloud storage location, and that has a cost. One practical way to control it is to use solutions that retrieve only the pieces of data you need, rather than entire datasets.

Deleting data: Interesting to think about, not for cost per se, but deleting data should be considered. Data in the cloud will live as long as you pay for it, so give thought to ensuring no dead data lingers there.
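As a rough illustration of pricing across the complete scope of use, here is a back-of-the-envelope Python sketch that combines storage, request, and egress charges. The storage rate comes from the S3 figure quoted above; the request and egress rates are placeholders, so check your provider’s current price list.

```python
def monthly_bill(stored_gb, put_requests, get_requests, egress_gb):
    """Back-of-the-envelope estimate; rates below are illustrative only."""
    STORAGE_PER_GB = 0.023      # S3 Standard, first 50 TB (March 2019, US East (Ohio))
    PUT_PER_1K     = 0.005      # placeholder per 1,000 write/PUT requests
    GET_PER_1K     = 0.0004     # placeholder per 1,000 read/GET requests
    EGRESS_PER_GB  = 0.09       # placeholder per GB transferred out

    return (stored_gb * STORAGE_PER_GB
            + put_requests / 1000 * PUT_PER_1K
            + get_requests / 1000 * GET_PER_1K
            + egress_gb * EGRESS_PER_GB)

# 10 TB stored, heavy read traffic, 500 GB pulled back out each month.
print(f"${monthly_bill(10_000, 2_000_000, 50_000_000, 500):,.2f}")
```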

But what can organizations do to manage cloud storage from a policy perspective? In a way, some of the same practices as before can be applied, but organizations should also leverage frameworks from the cloud platforms to help manage usage and consumption. AWS Organizations is a good example, providing policy-based management of multiple AWS accounts; it streamlines account management, billing, and control over cloud services. Similar capabilities exist in Azure with Subscription and Service Management along with Azure RBAC.
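For instance, a service control policy attached through AWS Organizations can constrain how member accounts use storage. The following is a minimal sketch using boto3; the policy content (restricting S3 actions to one region) is an illustrative assumption, not a recommended baseline.

```python
import json
import boto3

org = boto3.client("organizations")

# Illustrative policy: deny S3 actions outside us-east-2 for member accounts.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-2"}},
    }],
}

response = org.create_policy(
    Name="restrict-s3-region",
    Description="Keep S3 usage in the approved region",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(policy_doc),
)
print(response["Policy"]["PolicySummary"]["Id"])
```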

By taking a responsible look at new cloud services in light of what we have learned in the past, and by using the new frameworks available in the cloud, organizations can confidently embrace cloud storage services: not only answering the right-platform question, but also managing storage in a way that lets CIOs and decision makers sleep at night.




Troubleshooting Network Performance in Cloud Architectures


Troubleshooting within public or hybrid clouds can be a challenge when end users begin complaining of network and application performance problems. The loss of visibility of the underlying cloud network renders some traditional troubleshooting methods and tools ineffective. Thus, we must come up with alternative ways to regain that visibility. Let’s look at five tips on how to better troubleshoot application performance in public cloud or hybrid cloud environments.

Tip 1: Verify the application and all services are operational end-to-end

The first step in the troubleshooting process should be to verify that the cloud provider is not having an issue on their end. Depending on whether your service uses a SaaS, PaaS, or IaaS model, the verification process will change. For example, the Salesforce SaaS platform has a status page where you can see whether any incidents, outages, or maintenance windows may be impacting your users.

Also, don’t forget to check other dependent services that can impact access or performance to cloud services. Services such as DHCP and internal/external DNS are common dependencies that can cause problems, making it look like there is something wrong with the network. Depending on where the end user connects from in relation to the cloud application they are trying to access, the DHCP and DNS servers used will vary greatly. Verifying that end users are receiving proper IPs and can resolve domains correctly can save a great deal of time and headaches.
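A quick way to rule out name resolution problems from a given client location is a small script like this Python sketch; the hostname is a placeholder for whatever cloud service your users are trying to reach.

```python
import socket
import time

def check_resolution(hostname: str) -> None:
    """Resolve a hostname the same way most clients would and time it."""
    start = time.perf_counter()
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        print(f"{hostname}: resolution FAILED ({exc})")
        return
    elapsed_ms = (time.perf_counter() - start) * 1000
    addresses = sorted({info[4][0] for info in infos})
    print(f"{hostname}: {addresses} in {elapsed_ms:.1f} ms")

# Placeholder hostname; substitute the SaaS/IaaS endpoint your users access.
check_resolution("app.example-cloud-service.com")
```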

Tip 2: Review recent network configuration changes

If a performance problem with a cloud app seemingly crops up out of nowhere, it’s likely a recent network change is to blame. On the corporate LAN, review any firewall, NAT, or VLAN adds/changes to ensure they didn’t inadvertently cause an outage for a portion of your users. The same types of network changes should also be verified within IaaS clouds.

QoS or other traffic shaping changes can also accidentally degrade performance between the corporate LAN and remote cloud services. Automated tools can be used to verify that applications are being properly marked — and those markings are being adhered to on a hop-by-hop basis between the end user and as far out to the cloud application or service as possible.

Tip 3: Use traditional network monitoring and troubleshooting tools

Depending on the cloud architecture model you’re using, traditional network troubleshooting tools can be more or less effective when troubleshooting performance degradation. For instance, if you use IaaS such as AWS EC2 or Microsoft Azure, you have enough visibility to use most network troubleshooting and support tools such as ping, traceroute, and SNMP. You can even get NetFlow/IPFIX data streamed to a collector, or run packet captures in a limited fashion. However, when troubleshooting PaaS or SaaS cloud models, these tools become far less useful; you end up having to trust your service provider that everything is operating as it should on their end.
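Where ICMP is permitted, you can script ping and traceroute directly; where it isn’t, a simple TCP connect-time probe like the following Python sketch can approximate reachability and latency to an IaaS endpoint (the address below is a placeholder).

```python
import socket
import statistics
import time

def tcp_connect_times(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Measure TCP connect latency (ms) as a rough, ICMP-free reachability probe."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            times.append((time.perf_counter() - start) * 1000)
    return times

# Placeholder endpoint; point this at your EC2/Azure VM or load balancer.
rtts = tcp_connect_times("203.0.113.10")
print(f"min/median/max: {min(rtts):.1f}/{statistics.median(rtts):.1f}/{max(rtts):.1f} ms")
```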

Tip 4: Use built-in application diagnostics and assessment tools

Many enterprise applications have built-in or supplemental diagnostic tools that IT departments can use for troubleshooting purposes. These tools often provide detailed information that helps you determine whether performance is an application-related issue or a problem with the network or infrastructure. For example, if you’re having issues with Microsoft Teams through Office 365, you can test and verify end-to-end network performance using the Skype for Business Network Assessment Tool. Although this tool is most commonly used pre-deployment to verify whether Teams is a viable option, it can also be used post-deployment for troubleshooting.

Tip 5: Consider SD-WAN built-in analytics or pure-play network analytics tools

Network analytics tools and platforms are the latest way for administrators to troubleshoot network and application performance problems. Network analytics platforms collect streaming telemetry and network health information using several methods and protocols. All data is then combined and analyzed using artificial intelligence (AI). The results of the analysis help pinpoint areas on the corporate network or cloud where network performance problems are occurring.

If you have extended your SD-WAN architecture to the public cloud, you can leverage the many analytics components that are commonly included in these platforms. Alternatively, there is a growing number of pure-play vendors selling multi-vendor network analytics tools that can be deployed across entire corporate LANs and into public clouds. While these two methods can be expensive and more complicated to deploy initially, they have been shown to speed up performance troubleshooting and root-cause analysis dramatically.




As Cloud Services Evolve, What’s Next?


It’s no exaggeration to say that, since its inception, cloud computing has become one of the pillars on which modern society is built. Yet while the concept of the cloud has fully entered the popular imagination (most people associate it with digital storage services like Google Drive or Dropbox), in truth we have only scratched the surface of cloud computing’s potential.

But simply storing documents for simultaneous access is only one facet of the cloud, and arguably not even the most important one. In fact, just as cryptocurrency combined several existing technologies to create a new, profitable whole, so too will cloud computing form the backbone of something new.

What’s next for cloud computing?

It seems clear that the next milestone for the cloud will be mixed reality (MR), virtual reality (VR), and augmented reality (AR). One possibility is virtual conferencing: in contrast to video conferences, where several participants are splashed across a screen, a VR (or AR) meeting allows people to sit together in a virtual conference room. Rather than talking over each other or misreading social cues, attendees can carry on a meeting as if they were physically present in the same room, allowing for more productive (and less tense) gatherings.

Another possibility is a blockchain-based cloud. Combining the two is a logical step: the system would feature the security of blockchain’s tamper-resistant record, as well as the ease and convenience of cloud computing. In many ways, the two are a perfect match. Like the cloud, blockchain is decentralized, relying on a network of computers to verify transactions and continually update the record. Deploying cloud-based blockchain technologies could lead to more secure record-keeping in vital areas such as global finance and manufacturing, where transparency is difficult to come by.

Smart cities are also likely to see significant boosts from cloud computing in the near future. Cloud computing would connect with Internet of Things (IoT) devices to allow for improvements like intelligent traffic and parking management, lower-cost regulation of power and water, and optimization of other automated devices. Smart cities can lead to greater scalability of cloud-based computing, which can, in turn, make it easier to create common smart city services that can be reused and implemented across other cities.

The edge and the cloud: rivals or friends?

While cloud computing is still considered a relatively new technology, many experts also believe that it will give way to edge computing, which looks to reduce latency and connectivity costs by keeping relevant data as close to its source as possible. While it might seem that the newer technology will trump cloud computing altogether, edge computing is preferred for systems with specialized needs that require lower latency and faster data analysis, such as in finance and manufacturing. Cloud computing, by contrast, works well as part of a general platform or service, like Amazon Web Services, Microsoft Azure, and Google Drive.

Ultimately, we will see edge computing as a tool to work alongside cloud computing in furthering our technological capabilities. Modern cloud computing hasn’t been around for very long and still has much room for growth. Instead of one form of computing replacing another in order to handle data and the Internet of Things (IoT), they work together to optimize computing and processing performance. As we continue to develop new technologies, both cloud and edge computing will become just two of the many ways we will be able to optimize and effectively navigate our highly interconnected world.

From its conception as an amorphous database of information accessible from any computer on a certain network, to its future incarnations as mediums for mixed realities and blockchain, to the addition of new technologies that work with the cloud like edge computing, the cloud has certainly come a long way in a short time. It’s easy to see that the future of the cloud is bright, and cloud computing is only going to become more capable as we move forward.

 


