Tag Archives: Cloud

What AT&T’s Deals with IBM and Microsoft Mean for the Cloud | IT Infrastructure Advice, Discussion, Community


AT&T’s recent deals to use technology and services from IBM and Microsoft show how multivendor agreements could develop for large organizations. Last week came word of the alliance that will bring AT&T Business solutions to IBM Cloud. One day after that news, AT&T and Microsoft announced a separate partnership in which Microsoft Azure will serve as the preferred cloud provider for AT&T’s non-network infrastructure applications. AT&T also said it will get Microsoft 365 into the hands of much of its workforce. The size of telecom company AT&T makes each deal significant, and the fact that these partnerships were split across vendors speaks to the dynamics at play as organizations enact transformation plans.

The companies declined requests to comment further on these deals, but a pair of industry watchers from Gartner shared their perspectives on what it all could mean in the long run.

The first deal is a multi-year strategic alliance in which AT&T will use IBM’s expertise to update and modernize internal applications for AT&T Business Solutions as part of a migration to IBM Cloud. The deal also gives AT&T Business access to Red Hat’s platform for managing applications and workloads. The companies expect this will help AT&T Business improve service to enterprise clients. On the flip side, IBM will tap AT&T Business as its primary provider of software-defined networking. The organizations already had a partnership in place, with IBM using AT&T Business as its global networking provider.

The expanded relationship between AT&T and IBM raised some questions for Sid Nag, vice president of cloud services and technologies at Gartner, about what the long-term gains might be. “IBM has been struggling with their cloud initiative,” he says. “They haven’t made much traction in terms of competing with Amazon, Azure, and Google.”

Read the rest of this article on InformationWeek.




Database Deployments Moving to the Cloud | IT Infrastructure Advice, Discussion, Community


The days of deploying on-premises databases appear to be in the rearview mirror, or rapidly heading there. Microsoft and Amazon Web Services account for 75.5% of the market growth.

If you are a startup company looking to implement a new database, chances are you aren’t going to license the software and install it on a server in your office. No, you will look at what AWS or Microsoft Azure has to offer, or perhaps at Salesforce.com if you are looking for a CRM platform.

That’s what seems to have happened to companies like Cloudera and MapR that pioneered Hadoop implementations. A lot of that big data went into the cloud instead.

But it’s no longer just startups that are looking to move their data to the cloud.

While small and midsized organizations will move to the cloud more quickly, enterprises will get there too, though over a number of years, according to a new report from Gartner, The Future of the DBMS Market is Cloud.

Read the rest of this article on InformationWeek.




Why CIOs Are Betting on Cloud for Their Modern Data Programs | IT Infrastructure Advice, Discussion, Community


Enterprise infrastructures are changing rapidly as the management and visibility requirements of modern, data-driven applications are outpacing legacy data storage functionality. Gartner confirms that, with artificial intelligence and machine learning driving an explosion in data volume and variety, IT operations are outgrowing existing frameworks. Although insights from today’s vast amounts of structured, semi-structured, and unstructured data can deliver superior value, organizations are currently unable to adequately monitor or analyze this information (and between 60 percent and 73 percent of all data within an enterprise goes unused).

Cloud has been the buzz for more than a decade, and it is now seeing mass adoption among enterprises. Similarly, over the past several years, the size and scope of data pipelines have grown significantly. Just a few years ago, Fortune 500 companies were still experimenting with and testing the efficacy of ‘big data’ as they moved toward digital transformation. Yet today, the majority of those organizations have moved from big data pilots to large-scale, full production workloads with enterprise-level SLAs. Now, these organizations are most interested in maximizing the return on their big data investments and developing new use cases that create new revenue streams.

Data is staying put: Why Big Data needs the cloud

According to recent research from Sapio Research, which surveyed more than 300 IT decision makers ranging from directors to the C-suite, enterprises are overwhelmingly embracing the cloud to host their big data programs. As of January of this year, 79% of the respondents have data workloads currently running in the cloud, and 83% have a strategy to move existing data applications into the cloud. Why?

Modern data applications create processing workloads that require elastic scaling, meaning compute and storage needs change frequently and independently of each other. The cloud provides the flexibility to accommodate this type of elasticity, making sure compute and storage resources are available to keep data pipelines performing optimally under any circumstances. Many new-generation data applications require data workflows to process increased traffic loads at certain times, yet have little need to process data at other times – think of social media, video streaming, or dating sites. For the many organizations that encounter this kind of fluctuation monthly, weekly, or even daily, the cloud provides an agile, scalable environment that helps future-proof against unpredictable increases in data volume, velocity, and variety.

As an example, e-commerce retailers use data processing and analytics tools to provide targeted, real-time shopping suggestions for customers as well as to analyze their actions and experiences. Every year, these organizations experience spiking website traffic on major shopping days like Cyber Monday – and in a traditional big data infrastructure, a company would need to deploy physical servers to support this activity. These servers would likely not be required the other 364 days of the year, resulting in wasted expenditures. With the cloud, however, online retailers have instant access to additional compute and storage resources to accommodate traffic surges and can scale back down during quieter times. In short, cloud computing avoids the headaches of manual configuration and troubleshooting that come with on-premises infrastructure, and it saves money by eliminating the need to physically grow infrastructure.
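To make the elasticity point concrete, below is a minimal sketch of the kind of reactive scaling decision a cloud platform automates; the target utilization and node bounds are illustrative assumptions, not any provider's defaults.

```python
# Minimal sketch of a reactive scaling decision. The target utilization and
# node bounds are illustrative assumptions, not any provider's defaults.

TARGET_UTILIZATION = 0.6        # aim to keep the fleet around 60% busy
MIN_NODES, MAX_NODES = 2, 200   # illustrative lower and upper bounds

def desired_nodes(current_nodes: int, avg_utilization: float) -> int:
    """Return the node count that brings average utilization back to target."""
    estimate = round(current_nodes * avg_utilization / TARGET_UTILIZATION)
    return max(MIN_NODES, min(MAX_NODES, estimate))

# Cyber Monday: 20 nodes running at 95% load -> scale out to roughly 32 nodes.
print(desired_nodes(20, 0.95))
# A quiet weekday: 20 nodes at 15% load -> scale back in to 5 nodes.
print(desired_nodes(20, 0.15))
```

The point is not the arithmetic itself but that the cloud provider, rather than the retailer, absorbs the work of provisioning and releasing that capacity.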

Lastly, for organizations that handle hyper-secure, personal information (think social security numbers, health records, financial details, etc.) and worry about cloud-based data protection, adopting a hybrid cloud model allows enterprises to keep sensitive workloads on-premises while moving additional workloads to the cloud. Organizations realize they don’t have to be all in or all out of the cloud. Sapio’s survey revealed that most respondents (56 percent) are embracing a hybrid cloud strategy for this reason.

The rapid increase in data volume and variety is driving organizations to rethink enterprise infrastructures, particularly cloud strategies, and to focus on longer-term data growth, flexibility, and cost savings. Over the next year, we will see an increase in modernized data processing systems, run partially or entirely in the cloud, to support advanced data-driven applications and their emerging use cases.




3 Hidden Public Cloud Costs and How to Avoid Them | IT Infrastructure Advice, Discussion, Community


According to Gartner, worldwide public cloud revenue is expected to grow 17.3 percent this year, representing a whopping $206.2 billion. That’s up from just over $175 billion last year.

Clearly, IT organizations are ready to fire up their purchase orders, but before you commit, remember the old saying: “there’s no free lunch.” Hidden costs are an unfortunate byproduct of the public cloud life. Understand what you’re getting into upfront so you can decide when using a public cloud provider is cost effective and appropriate, or when it might be better to go a different route, such as a hybrid or multi-cloud approach.

Ingress costs

Often, public cloud providers’ ingress costs (the initial price you pay to sign up and bring your data in) are either fairly low or non-existent. In some cases, the cloud provider will even help you transport your data for nothing.

The issue here is not so much cost as it is time. Transporting petabytes of data into a public cloud service can take weeks, if not months, during which time critical data might be unavailable. You could send it over a private network, but there’s a time cost to that, too.
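A rough back-of-the-envelope calculation shows why; the link speed and utilization figures below are assumptions chosen only to illustrate the math.

```python
# Back-of-the-envelope estimate of bulk ingress transfer time.
# The link speed and utilization figures are illustrative assumptions.

def transfer_days(data_tb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Days needed to move data_tb terabytes over a link_gbps link that
    sustains `utilization` of its nominal throughput."""
    data_bits = data_tb * 1e12 * 8               # terabytes -> bits
    effective_bps = link_gbps * 1e9 * utilization
    return data_bits / effective_bps / 86_400    # seconds -> days

# Moving 1 PB (1,000 TB) over a 10 Gbps link at 70% sustained utilization:
print(f"{transfer_days(1_000, 10):.1f} days")    # roughly 13 days
```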

Transactional costs

Most public cloud providers will charge a nominal fee every time you attempt to access your data. These fees are almost infinitesimal, sometimes averaging pennies per hour, which cloud providers hope to make up in high volume.

Things can get pretty pricey when you’re running thousands of analytics jobs. It’s easy for a CIO looking for cost savings to simply say “let’s put everything we have in the public cloud” when everything you have is fairly minimal, but as data use rises, so do transactional costs. In that case, using the public cloud exclusively for everything might not be the wisest long-term investment.
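To illustrate how those pennies add up, here is a hedged sketch; the per-request price and workload profile are assumptions for illustration, not any provider's published rates.

```python
# Illustrative estimate of how per-request fees accumulate. The price and
# workload profile are assumptions, not any provider's published rates.

PRICE_PER_1K_REQUESTS = 0.0004   # assumed $0.0004 per 1,000 read requests

def monthly_request_cost(jobs_per_day: int, requests_per_job: int) -> float:
    """Approximate monthly spend on request fees alone."""
    requests_per_month = jobs_per_day * requests_per_job * 30
    return requests_per_month / 1_000 * PRICE_PER_1K_REQUESTS

# A modest workload barely registers...
print(f"${monthly_request_cost(10, 50_000):,.2f}")        # about $6 per month
# ...but thousands of analytics jobs scanning millions of objects adds up.
print(f"${monthly_request_cost(5_000, 2_000_000):,.2f}")  # about $120,000 per month
```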

Egress costs

Stop me if you’ve heard this one before: “Our boss asked us to move all of our data to one public cloud provider. Now, we’re trying to move it to another, but we have to rewrite all of our scripts. It’s a huge pain.”

Moving your data from one provider to another can be a huge pain. This act of egress can result in significant costs, creating a form of cloud provider lock-in that can be difficult to break. Teams need to rewrite their scripts, which translates to additional time, money, and lost productivity. You’re recreating not just the wheel but a car’s entire engine and chassis.

A hybrid solution

You might be wondering if the public cloud is worth the cost. In many cases, the answer is “yes,” but it depends on your goals.

For better agility, investing in the public cloud is a wise move. Likewise, if you’re a smaller business, you will probably incur fewer transactional costs because you will likely have less data than a larger corporation.

But the answer might be “yes…and no.” You may choose to adopt hybrid and multi-cloud strategies, keeping some data on-premises or split up in different clouds.

A hybrid and multi-cloud strategy provides options. Companies can enjoy the extra tools and capabilities offered by public clouds while keeping costs under control. They don’t have to worry about ingress costs, and transactional costs can be minimized. They can also greatly reduce or even eliminate egress costs, since they likely do not have to perform wholesale data migrations between different providers and can simply delete their public cloud data if they have an on-premises backup.

Moving data within a hybrid environment

Moving applications between clouds can present its own challenges. Every public cloud provider uses its own storage protocols. Migrating data between these disparate and disconnected protocols can result in egress costs, which is exactly what you’re trying to avoid.

You need to be able to federate your data so that it can be used across distinct protocols with minimal effort and cost. This can be accomplished by aggregating native storage from different cloud providers into a storage repository that uses a single endpoint to manage all of your organization’s clouds. Instead of manually pulling data out of one and migrating it to another, you can automatically migrate data and applications to and from the appropriate clouds.
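As a hypothetical sketch of what that single-endpoint federation could look like, the snippet below routes object keys to whichever backend owns them; the FederatedStorage and Backend names are placeholders for illustration, not an existing product or library.

```python
# Hypothetical sketch of a single endpoint fronting several storage backends.
# FederatedStorage and Backend are illustrative names, not a real library.

from abc import ABC, abstractmethod

class Backend(ABC):
    """One cloud (or on-premises) object store behind the federation layer."""
    @abstractmethod
    def get(self, key: str) -> bytes: ...
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class FederatedStorage:
    """Single endpoint that routes each object key to the backend owning it."""
    def __init__(self, backends: dict[str, Backend]):
        self.backends = backends  # e.g. {"aws": ..., "azure": ..., "onprem": ...}

    def _backend_for(self, key: str) -> Backend:
        namespace, _, _ = key.partition("/")
        return self.backends[namespace]

    def get(self, key: str) -> bytes:
        return self._backend_for(key).get(key)

    def migrate(self, key: str, target: str) -> None:
        """Copy an object to another backend without changing application code."""
        data = self._backend_for(key).get(key)
        self.backends[target].put(f"{target}/{key.partition('/')[2]}", data)
```

Applications talk only to the federation layer, so changing providers becomes a data-movement decision rather than a rewrite of every script.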

When this is combined with container-native storage (highly portable object storage for containerized applications), you can easily transport all of your applications and their associated data between different providers. Furthermore, developers can automatically provision this storage without having to bother their data managers, saving everyone a lot of time and headaches and boosting the performance of their teams.

Call it virtualization of object storage, or protocol translation. Whatever the name, it can all be done without breaking a sweat (or the bank). The result is the optimization of your hybrid or multi-cloud environments and the elimination of the hidden time and costs associated with public cloud storage. 

 




It’s Time for Enterprise Networking to Embrace Cloud Architectures | IT Infrastructure Advice, Discussion, Community


I’ll start at the end. Cloud computing is now the vernacular for computing. Cloud networking will, within the next 24 months, be the vernacular for networking. The same paradigms that have revolutionized computing will do so for networking.

Monolithic architectures gave way to client/server architectures, which then evolved into service-oriented architectures, which have in turn given way to the now-ubiquitous microservices/container model. This microservices architecture is the mainstay of cloud and public cloud computing, as well as serverless/utility computing models. Cloud software architectures bring numerous benefits to applications, including:

  • Horizontal scale

  • Use of resource pools for near unlimited capacity

  • Distributed services and databases

  • Fault tolerance and containerization for hitless “restartability”

  • In-service upgrades

  • Programmability, both northbound and southbound, for flexible integration across services

  • Programming language independence

It is these attributes that we see (for the most part) in the large, global SaaS applications such as Amazon’s e-commerce website, Netflix’s streaming service, and Facebook’s and Twitter’s social networks. The same capabilities – with the same global, highly available, horizontal scale – can be applied to enterprise networking.

The heart of networking is routing. Routing algorithms have maintained the same architecture for the past 30 years. Border Gateway Protocol (BGP4), the routing protocol of the Internet, has been in use since 1994. Routing protocols are designed for resiliency and autonomous operation. Each router or autonomous system can be an island unto itself, needing only visibility and connectivity to its directly attached neighbors. This architecture has allowed for the completely decentralized and highly resilient operation of BGP routing, yet it has also introduced challenges. Scaling and convergence problems continually plague BGP operations and Internet performance. There have been proposals to replace BGP, but its installed base makes that nearly impossible. The next best option is to augment it.

The most common mechanism for augmentation is to build an overlay network. An overlay network uses the BGP4-powered Internet as a foundation and bypasses BGP routing using alternative routing protocols. This approach combines the best of BGP routing – resiliency and global availability – with the performance and scale improvements of new and innovative routing protocols. The overlay model and these new routing protocols open the door to routing based on performance metrics and application awareness, and the potential to bring LAN-like performance to the Internet-powered WAN. This is at the heart of the cloud networking evolution and software-defined networking moving forward.

Building atop BGP4’s flat, decentralized architecture, vendors are leveraging cloud software architectures to develop fast, scalable, and performance-driven routing protocols, embracing both the centralized and the distributed nature of cloud computing. The Internet, acting as the underlying network, provides basic connectivity. A broad network of sensors, small microservices deployed across major points of presence globally, runs simple performance tests at set intervals and feeds the results to a centralized, hierarchical routing engine. The basic tests provide insights into throughput, loss, and latency at key points of presence globally. The centralized routing engine then applies deep learning to the performance data, both current and historical, to create routes. The routing updates can be pushed to overlay network routers, and these routers then update their forwarding tables. Route hierarchy brings scale and resiliency. For example, should connectivity to the centralized routing engine be lost, routing persists and survives via router-to-router updates and, in the case of a prolonged outage, by falling back to the underlying network.
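As a simplified sketch of the path selection such an engine might perform, the snippet below scores candidate overlay paths on sensor-reported metrics; the weighting is an illustrative assumption, not a published algorithm.

```python
# Simplified sketch of performance-based path selection by a centralized
# routing engine. The metric weighting is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class PathMetrics:
    hops: list[str]         # points of presence the overlay path traverses
    latency_ms: float       # round-trip latency reported by the sensors
    loss_pct: float         # packet loss percentage
    throughput_mbps: float  # measured achievable throughput

def score(m: PathMetrics) -> float:
    """Lower is better: penalize latency and loss, reward throughput."""
    return m.latency_ms + 50 * m.loss_pct - 0.05 * m.throughput_mbps

def best_path(candidates: list[PathMetrics]) -> PathMetrics:
    return min(candidates, key=score)

paths = [
    PathMetrics(["nyc", "lon"], latency_ms=78, loss_pct=0.1, throughput_mbps=900),
    PathMetrics(["nyc", "fra", "lon"], latency_ms=95, loss_pct=0.0, throughput_mbps=1400),
]
# The longer path wins on loss and throughput; the engine would push it to
# the overlay routers, which then update their forwarding tables.
print(best_path(paths).hops)
```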

Key elements deliver benefits

There are a few key elements of centralized overlay routing that are really novel:

Performance as a metric: BGP does not factor performance into route calculations, so it is possible (if not probable) that a poorly performing link, or several of them, will be used, impacting application performance. This manifests itself in poor TCP performance (which leads to degraded throughput), as well as high loss, which impacts real-time applications. The use of performance data in centralized overlay routing introduces the capability to route not just by hop count or least cost, but also by best performance.

Application-specific routing: Using performance telemetry for routing enables routes with an application bias. High-throughput routes can be used for file transfers, and low-loss, low-latency routes can be used for real-time applications such as voice or video; a simplified sketch of this bias follows these key elements below.

High availability: The use of proven, battle-tested cloud software architecture for cloud networking ensures that centralized routing is not only resilient but also highly available on a number of levels. The use of distributed microservices and the capability to “restart” individual services on the fly without a service outage – a key element of cloud software architecture – combined with the safety net of reverting to underlay BGP4 routing, ensures packets continue to flow even in the event of something catastrophic.

Native integration into SD-WAN and SDN: As SD-WAN continues to overtake the WAN edge, support for centralized routing will continue to grow. Progressive SD-WAN vendors are already starting to use overlay networks and centralized routing, demonstrating their viability.
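Picking up the application-specific routing point above, a hedged extension of the earlier sketch lets an application profile set the metric weights; the profiles and weights are assumptions for illustration only.

```python
# Illustrative extension of the path-scoring sketch: an application profile
# biases the route choice. The profiles and weights are assumptions.

APP_PROFILES = {
    # (latency weight, loss weight, throughput weight)
    "voice":         (1.0, 100.0, 0.0),  # real time: punish latency and loss
    "file_transfer": (0.1, 10.0, 0.1),   # bulk: reward throughput above all
}

def app_score(latency_ms: float, loss_pct: float, throughput_mbps: float, app: str) -> float:
    """Lower is better; the application profile decides what matters most."""
    w_lat, w_loss, w_tp = APP_PROFILES[app]
    return w_lat * latency_ms + w_loss * loss_pct - w_tp * throughput_mbps

# The same two candidate paths rank differently per application:
# voice prefers the low-latency path, file transfer the high-throughput one.
print(app_score(78, 0.1, 900, "voice"), app_score(95, 0.0, 1400, "voice"))
print(app_score(78, 0.1, 900, "file_transfer"), app_score(95, 0.0, 1400, "file_transfer"))
```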

Networking is evolving, embracing cloud software architectures and techniques. It is pushing into the enterprise from two sides – from the data center and from the WAN edge. This push is accelerated by the approach of augmenting Internet technologies rather than replacing them outright, enabling enterprises to deploy these new technologies across their networks quickly. The effects are immediate and noticeable, as the performance of critical business applications is positively impacted within the enterprise, across the enterprise WAN, and out to enterprise SaaS applications and cloud workloads.

 


