
Overcoming Networking Challenges in the Transition to SD-WAN


Your definition of SD-WAN may depend on when you joined the conversation. For some, SD-WAN is about hybrid WAN using diverse uplinks intelligently based on application requirements. For others, it’s about securely connecting sites over the WAN. And for others still, it’s about introducing software-defined control in a bid to become more efficient. 

Whatever the starting point, this much is clear: SD-WAN is an operational transformation in the making for enterprises of all shapes and sizes. 

The thing about operational changes

Indeed, some of SD-WAN’s value can be unlocked by merely deploying an SD-WAN device at the edge. With a secure router in tow, enterprises can unlock secure connectivity over the WAN and execute workflows from a cloud-managed controller. Operators can use that same software-defined model to gain visibility over a set of distributed devices. In many ways, the barrier to SD-WAN is fairly low, requiring only the will to move beyond current practices.

But the thing about operational changes is that they occur not only in the devices, but also in the people.

Moving to controller-based management means elevating operations above the current device-by-device, command-by-command slog that has typified enterprise networking for decades. Operationally, this will ultimately prove easier. But when a workforce self-identifies by their certification numbers, such changes can be imposing. 

Additionally, SD-WAN represents not only a change in the point of interaction (the controller over the CLI) but also a change in the altitude of engagement. Using declarative, intent-based models, operators are meant to specify requirements in abstract terms rather than behavior in explicit device primitives. Of course, to operate at the application level means networking teams cannot merely coexist with their application brethren. There must be a degree of collaboration that is frequently absent in the current siloed model of legacy IT operations.
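To make that change of altitude concrete, here is a minimal Python sketch of declarative intent sitting above the device primitives it replaces. The intent model and the pseudo-CLI it compiles to are invented for illustration; no real vendor syntax is implied:

```python
# Hypothetical intent: the operator declares what the application needs,
# not how each device should behave. All names here are illustrative.
intent = {
    "application": "voip",
    "sla": {"latency_ms": 150, "loss_pct": 1.0},
    "transport_preference": ["mpls", "broadband"],
}

def compile_intent(intent: dict) -> list:
    """Translate abstract intent into imperative device primitives
    (a made-up pseudo-CLI, not any real vendor's syntax)."""
    app = intent["application"]
    sla = intent["sla"]
    return [
        f"policy-map {app}-sla",
        f"  match application {app}",
        f"  set path-preference {' '.join(intent['transport_preference'])}",
        f"  sla latency {sla['latency_ms']} loss {sla['loss_pct']}",
    ]

for line in compile_intent(intent):
    print(line)
```

The division of labor is the point: the operator owns the top-level declaration, and the controller owns everything beneath it.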

Oh, what a tangled web we weave

Of course, these operational changes, if confined to a single set of devices at the WAN edge, are straightforward. But if software-defined principles are transformative, why would they stop at the edge?

It seems obvious that the SD-WAN movement will be but a waypoint on the path to a broader enterprise play. The architectural constructs and networking practices that are currently being applied to the WAN will extend over the wired LAN as well. And from there, they should logically continue down to the access layer, which is largely a wireless play. 

When this happens, the practice of networking becomes more complex. The interdependencies between different systems across different parts of the network are not always known, and even when they are, the tools required to manage them have largely not existed. When networking teams purchase and deploy in silos, the suppliers tend to follow suit. 

For example, in an actual SD-WAN deployment, an enterprise was making changes to its branch gateway configuration that were to be pushed out to thousands of sites. Among them was a hidden change to the MTU size that created an MTU mismatch over the WAN. Such mismatches can lead to intermittent connectivity issues between source and destination hosts, and merely checking for connectivity in the absence of actual traffic doesn’t necessarily guarantee success. If the network elements are treated in isolation, a change like this goes live the next day.
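An MTU mismatch like this is detectable before the change window with an active probe rather than a bare reachability check. Below is a minimal sketch that binary-searches the path MTU by sending pings with the don't-fragment bit set; it assumes a Linux host with iputils ping, and the branch-gateway hostname is a placeholder:

```python
import subprocess

def path_mtu(host: str, lo: int = 68, hi: int = 1500) -> int:
    """Binary-search the largest ICMP payload that crosses the path
    with the don't-fragment (DF) bit set, using Linux iputils ping."""
    def fits(size: int) -> bool:
        # -M do: set DF, -s: payload bytes, -c 1: one probe, -W 1: 1s timeout
        probe = subprocess.run(
            ["ping", "-M", "do", "-c", "1", "-W", "1", "-s", str(size), host],
            capture_output=True,
        )
        return probe.returncode == 0

    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo + 28  # payload + 8-byte ICMP header + 20-byte IP header

if __name__ == "__main__":
    # Hypothetical branch gateway; run from a few sites before pushing changes.
    print(path_mtu("branch-gw.example.com"))
```

Running a probe like this from a handful of branches would have flagged the mismatch that a plain connectivity ping missed.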

In this instance, the enterprise had extended its management domain from the WAN into the wireless LAN. By monitoring wireless experience, they identified intermittent issues down to the endpoint. This flagged that there was a problem, which was subsequently resolved in the SD-WAN gateway.

The key here is that operations is not a siloed function. If end-user experience is the new uptime, then networking teams must expand their purview beyond basic connectivity checks. 

Diversity of infrastructure

One of the other driving forces behind SD-WAN is leveraging diverse underlying transport. An early premise behind hybrid WAN was the idea that not all applications require the same treatment. Enterprises should be able to make intelligent decisions about which uplinks are used. In this case, SD-WAN endeavors to make use of a diverse set of underlying transport options. 

This is, of course, only possible if SD-WAN is implemented as a bridge from traditional to modern. When technology is treated as a greenfield adjunct to existing deployments, it can be difficult to marry the past to the future. As enterprises grapple with their SD-WAN plans, striking a balance between leveraging and retiring existing infrastructure will be important. When budgets and timelines don’t allow for a clean start, finding solutions that provide a path forward without stranding assets can be a cost-effective way of evolving. 

The key in these bridging scenarios is unifying operations over diverging infrastructure. Indeed, one of the transformative aspects of SD-WAN is that abstracted control ought to create separation between management and devices.

Preparing for the future

Ultimately, the challenge in evolving to SD-WAN will not be in the deployment of gateway devices, but rather in exploiting the operational tenets beyond just the SD-WAN devices themselves. If enterprises believe this is a one-and-done purchase opportunity, there might be very little to consider. But for those that see the software-defined movement as a precursor to what is next, it makes sense to think through how the rest of the enterprise will be ushered into this era. 

Specifically, how will the WAN and LAN come together? Will the network evolve as a single solution or as a collection of multivendor components stitched together through operational abstraction? How will workflows be treated? And how can control models take advantage of already emerging advances in areas like artificial intelligence?

While enterprise attitudes towards the future will undoubtedly vary, one thing seems clear: we are just getting started.




Social Media, Crafters, Gamers and the Online Censorship Debate


Ravelry, an online knitting community that has more than
8 million members, last month announced that it would ban forum posts,
projects, patterns and even profiles from users who supported President
Trump or his administration.

“We cannot provide a space that is inclusive of all and also allow
support for open white supremacy,” the administrators of Ravelry
posted on the site on June 23.

“Support of the Trump administration is undeniably support for white
supremacy,” the post added.

The administrators have maintained that they aren’t endorsing
Democrats or banning Republicans. Users who do support the
administration have been told they can still participate — they just can’t voice their support on Ravelry.

Ravelry’s move was met with both an outpouring
of support from those who opposed the administration’s policies and condemnation from those who support the president.

Ravelry is not the first online community to issue such an ultimatum
to users. The roleplaying game portal RPGnet last fall issued a
decree that support for President Trump would be banned on its forums.

“Support for elected hate groups aren’t welcome here,” the
administrators posted. “We can’t save the world, but we can protect
and care for the small patch that is this board.”

Is It Censorship?

The banning of conservative groups hasn’t been limited to Ravelry or
RPGnet. Facebook last fall announced that it had
purged more than 800 U.S. accounts that it identified as flooding users with politically oriented spam.

However, some conservatives — including Sen. Ted Cruz, R-Texas — have argued that Facebook has unfairly targeted those expressing conservative opinions. Cruz this spring raised his concerns with representatives from Facebook and Twitter during the Senate Judiciary Subcommittee on the Constitution’s hearing, “Stifling Free Speech:
Technological Censorship and the Public Discourse.”

The threat of political censorship could be problematic due to the
lack of transparency, Cruz noted during the April hearing.

“If Big Tech wants to be partisan political speakers it has that
right,” he said, “but it has no entitlement to a special immunity
from liability under Section 230 that The New York Times doesn’t enjoy, that The Washington Post doesn’t enjoy — that nobody else enjoys other than Big Tech.”

Understanding Section 230

Much of the debate revolves around Section 230 of the Communications
Decency Act of 1996, the common name for Title V of the
Telecommunications Act of 1996. As part of a landmark piece of
Internet legislation in the United States, it provides immunity from
liability for providers and users of an “interactive computer service”
that publishes information provided by third-party users.

The law basically says that those who host or republish speech are not
legally responsible for what others say and do. That includes not only
Internet service providers (ISPs) such as Comcast or Verizon, but also
any services that publish third-party content, which would include
the likes of Facebook and Ravelry.

One of Section 230’s authors, Sen. Ron Wyden, D-Ore., has
countered that the law was intended to make sure that companies could moderate their respective websites without fear of lawsuits.

Striking a Balance

The divide online is of course just a mirror of the deep political
divide in the U.S., and it is unlikely that legal wording
will do much to heal it. Battle lines have been drawn, and both sides
continue to dig in. The question is whether banning those with
differing opinions actually helps or hurts matters.

There is an argument that this is simply defusing controversy and
silencing the most extreme voices.

“It was once shared with me that intolerance of intolerance is
intolerance,” said Nathaniel Ivers, associate professor and
chairman of the department of counseling at
Wake Forest University.

“We often think of intolerance as an inherently negative thing;
however, there are instances in which communities, groups and
organizations are justified in establishing zero tolerance clauses for
certain behaviors and ideologies,” he told TechNewsWorld.

“The challenge, however, is that these clauses are innately rigid and
may at times exclude ideas, attitudes and behaviors that are
benign,” Ivers added.

Then there is the concern of whether this is an issue of censorship.
However, those who understand media law know that censorship, in the legal sense, applies to restrictions imposed by the government. Private companies actually are within their rights to determine what is appropriate for their audiences.

In most cases users also agree to terms of use, and
violating those terms — which can include the posting of what is considered inappropriate content — can result in removal of content or
termination of membership to a group or site.

“Does Ravelry have the right to censor?” pondered social media
consultant Lon Safko.

“Sure they do. It’s their site, and they can do anything they want
short of child pornography,” he told TechNewsWorld.

Extreme Decisions

Facebook’s and Ravelry’s decisions to ban some content have been based
on what each views as “extremist” in nature. This may reflect the deep divide in the nation, but is the action inappropriate?

“Generally speaking, social media companies — like other companies —
have significant leeway in running their business in the manner of
their choosing, as long as they do not violate applicable laws,” said
Robert Foehl, executive in residence for the business law and ethics
department at the
Ohio University Online Master of Business
Administration
program.

“When making all kinds of business decisions, companies are
increasingly considering the impacts, both positive and negative, on
the various stakeholders of the company —
owners/investors/shareholders, for sure, but also other important
stakeholders, such as customers, employees, and communities/society,”
he told TechNewsWorld.

This is why Facebook last year began following the lead of eBay and
other online sites that have banned the sale of items of a
questionable nature, among them memorabilia of the Third Reich. While some legitimate collectors who see the historical value in such items have
voiced concern, Facebook’s decision was based in part on how those items could be linked to extremist groups.

“Social media companies have long maintained content policies that
govern what is deemed acceptable content for their product and users
can freely choose whether they want to agree with those policies and
use the product, or disagree and not become a customer of the
company,” Foehl suggested.

“This trend signals a tipping point in the Internet’s 30-year history,
and in particular the data ecosystem it has brought forth with its
growing encroachment into people’s private lives,” added Chris Olson,
CEO of
The Media Trust.

“There is a fine line between hate speech and the right to free
speech,” Olson told TechNewsWorld. “As with all rights, there comes responsibility — like the
responsibility not to ruin people’s lives nor start a riot.”

Political Debate

A greater concern than whether personal opinions — even those that some
may find distasteful — are being silenced is what this means for the
political debate. Is one side, notably the conservative voice, being
cut out of the debate?

“Over the last few years, concerns about freedom of speech,
censorship, and social media’s role in political communication have
come to the forefront,” noted Ohio University’s Foehl.

“Given the pervasiveness and importance of social media platforms as a
means of communication and connection in today’s societies, these
concerns are timely and legitimate,” he added.

It is important not to inadvertently conflate issues, Foehl noted.

“It is important to remember that the constitutional right to freedom
of speech in the United States protects against inappropriate
restrictions on speech by state actors — in essence, the government
and related institutions,” he explained. “So, citizens are free to
express ideas through speech in the town square without governmental
interference. In the United States, social media companies are not
state actors; thus the freedom of speech protections afforded by the
Constitution do not apply to speech contained in their platforms.”

The Ethics Question

One question then is whether Facebook or other social media
companies — as well as firms like Ravelry and RPGnet — actually have
acted unethically. That could depend in large part on their intentions
in disallowing certain content.

“If the intent is to remove content that could reasonably be seen to
cause harm to, or if the content does not respect the dignity or
autonomy of, individuals or a group of people, then it is very likely
that the company acted ethically when removing the content,” said
Foehl.

“On the other hand, if content was removed in order to attempt to
impose the ideology of the company’s executives, political or
otherwise, on others, then the content removal would be ethically
suspect,” he added.

“Of course, this is the rub — those whose content has been removed
many times feel it is because of ideological conflicts with the
content decision makers; and President Trump has been especially vocal
about his view that political bias is the basis for many content
decisions,” Foehl noted.

However, social media companies may not have overstepped any
existing authority, given their role in society today.

“Companies are not state actors, and they have the authority to develop
their products as they see fit, as long as they comply with applicable
laws,” emphasized Foehl.

“The development, implementation, and
enforcement of clearly communicated content guidelines are a [requirement]
of customer trust. Customers have the autonomy to decide whether they
want to do business with the company,” he added.

Equal Time and Fairness

Some conservatives could argue that they are being shut out of the
dialogue online, but there are precedents to consider. The first is the
equal time rule, which is specific to elections. It requires that U.S. radio and
television broadcast stations provide an equivalent opportunity
to any opposing political candidates who request it.

However, that applies to elections and to the broadcast
medium, so those who suggest that Ravelry’s ban is a violation
misunderstand the law.

The other precedent is the FCC’s fairness doctrine, a policy introduced in
1949 that required the holders of broadcast licenses to present both
sides of controversial issues. The FCC eliminated the policy in 1987 —
and that move may have been instrumental in leading to the proliferation of conservative
talk radio.

As the Internet is now maturing, this issue may need to be reconsidered.

“The government, industry and Internet public will have to agree to a
set of standards — all of which are still being hammered out and
tried,” said The Media Trust’s Olson.

“This industry attempt is merely the result of a larger set of
problems, like pointing fingers at outdated laws and regulations,
social media platforms, or people’s uncontrolled impulses,” he added.

“The solution lies in everyone working together in crafting better
governance policies that can be applied as a minimum around the
world,” said Olson. “With technology outpacing laws and norms, the
path forward is a rocky one until the base standards are hammered
out.”

Consequences of the Discourse

In the end this banning of conservatives — whether for
legitimate concerns or petty grievances — could fracture communities and ultimately be bad for business.

“Censorship is bad for Ravelry’s business,” said Safko.
“If they don’t allow pro-Trump, then as a business site, they should not allow anti-Trump or any political postings.”

Failure to do so could result in legislation and strict rules —
something that isn’t good for a free and open discussion of issues and
civil debate.

“A potential issue with rigid laws, policies, or regulations, is that
they can, over time, create a very homogeneous community,” said Wake
Forest University’s Ivers.

“In such communities, people may, for a time, feel more comfortable;
however, these groups also may become fertile ground for stereotypes
and xenophobia,” he warned.

“It is important to clarify that the social media sites like Ravelry
and RPGnet that have banned content related specifically to President
Trump have made the decision that he is inextricably linked to hate
speech — speech that attacks a person or group based on protected
characteristics such as race, religion, disability and sexual
orientation,” explained Foehl.

“They have not banned content based on where such content falls on the
political spectrum,” he added.

As a result, social media companies find themselves in a very difficult
situation when it comes to removing content.

“This situation is exacerbated by social media’s prevalence and
importance in the exchange of ideas in the modern world,” said
Foehl.

“The decision to remove content should not be taken lightly and must
pass ethical scrutiny,” he added. “Employing a sound and formal
governance structure that allows content removal decisions to be made
quickly — but not hastily — and independently [from company
executives] is advisable. The criteria for content removal should be
developed with a mind toward ensuring doing no harm and treating
others with respect and dignity, while allowing for the exercise of
personal autonomy.”


Peter Suciu has been an ECT News Network reporter since 2012. His areas of focus include cybersecurity, mobile phones, displays, streaming media, pay TV and autonomous vehicles. He has written and edited for numerous publications and websites, including Newsweek, Wired and FoxNews.com.






It’s Time for Enterprise Networking to Embrace Cloud Architectures


I’ll start at the end. Cloud computing is now the vernacular for computing. Cloud networking will, within the next 24 months, be the vernacular for networking. The same paradigms that have revolutionized computing will do so for networking.

Monolithic architectures gave way to client/server architectures, which then evolved into service-oriented architectures, which in turn have given way to the now-ubiquitous microservices/container model. This microservices architecture is the mainstay of cloud and public cloud computing, as well as serverless/utility computing models. Cloud software architectures bring numerous benefits to applications, including:

  • Horizontal scale

  • Use of resource pools for near unlimited capacity

  • Distributed services and databases

  • Fault tolerance and containerization for hitless “restartability”

  • In-service upgrades

  • Programmability, both northbound and southbound, for flexible integration across services

  • Programming language independence

It is these attributes that we see (for the most part) in large, global SaaS applications such as Amazon’s e-commerce website, Netflix’s streaming service, and Facebook’s and Twitter’s social networks. The same capabilities – with the same global, highly available, horizontal scale – can be applied to enterprise networking.

The heart of networking is routing. Routing algorithms have maintained the same architecture for the past 30 years. Border Gateway Protocol (BGP4), the routing protocol of the Internet, has been in use since 1994. Routing protocols are designed for resiliency and autonomous operation. Each router or autonomous system can be an island unto itself, needing only visibility and connectivity to its directly attached neighbors. This architecture has allowed for the completely decentralized and highly resilient operation of BGP routing, yet it has also introduced challenges. Scaling and convergence problems continually plague BGP operations and Internet performance. There have been proposals to replace BGP, but its installed base makes that nearly impossible. The next best option is to augment it.

The most common mechanism for augmentation is to build an overlay network. An overlay network uses the BGP4-powered Internet as a foundation and bypasses BGP routing using alternative routing protocols. This approach combines the best of BGP routing – resiliency and global availability – with the performance and scale improvements of new and innovative routing protocols. The overlay model and these new routing protocols open the door to routing based on performance metrics and application awareness, and the potential to bring LAN-like performance to the Internet-powered WAN. This is at the heart of the cloud networking evolution and software-defined networking moving forward.

Building atop BGP4’s flat, decentralized architecture, new routing protocols are leveraging cloud software architectures to deliver fast, scalable, and performance-driven routing, embracing both the centralized and the distributed nature of cloud computing. The Internet, acting as the underlying network, provides basic connectivity. A broad network of sensors, small microservices deployed across major points of presence globally, runs simple performance tests at set intervals and feeds the results to a centralized, hierarchical routing engine. The basic tests provide insights into throughput, loss, and latency at key points of presence. A centralized routing engine then leverages deep learning over the performance data, both current and historical, to create routes. The routing updates can be pushed to overlay network routers, which then update their forwarding tables. Route hierarchy brings scale and resiliency: should connectivity to the centralized routing engine be lost, routing persists and survives via router-to-router updates and, in the case of a prolonged outage, by falling back to the underlying network.
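As a sketch of the sensor half of that design, the following Python toy measures latency and loss toward a couple of points of presence and reports the results to a central engine. Every URL is hypothetical, and a real sensor would also measure throughput and run on a fixed schedule:

```python
import json
import statistics
import time
import urllib.request

# Hypothetical endpoints: the overlay's points of presence and the
# central routing engine's ingest API. None of these URLs are real.
POPS = ["https://pop-nyc.example.net/health",
        "https://pop-lon.example.net/health"]
ENGINE = "https://routing-engine.example.net/metrics"

def probe(url: str, samples: int = 5) -> dict:
    """Crude stand-in for the interval-based latency/loss tests the
    sensors run against each point of presence."""
    latencies = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=2).read()
            latencies.append((time.monotonic() - start) * 1000.0)
        except OSError:
            pass  # a failed sample counts toward loss
    return {
        "target": url,
        "latency_ms": statistics.median(latencies) if latencies else None,
        "loss_pct": 100.0 * (samples - len(latencies)) / samples,
    }

def report(results: list) -> None:
    """Feed the measurements to the centralized routing engine,
    which computes routes from current and historical data."""
    body = json.dumps(results).encode()
    req = urllib.request.Request(
        ENGINE, data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=2)

if __name__ == "__main__":
    report([probe(p) for p in POPS])
```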

Key elements deliver benefits

There are a few key elements of centralized overlay routing that are really novel:

Performance as a metric: BGP does not factor performance into route calculations, so it is possible (if not probable) that one or more poorly performing links will be used, impacting application performance. This manifests as poor TCP performance (and thus degraded throughput) as well as high loss, which hurts real-time applications. The use of performance data in centralized overlay routing introduces the capability to route not just by hop count or least cost, but by best performance.

Application specific routing: Using performance telemetry for routing enables routes with an application bias. High throughput routes can be used for file transfers, and low loss and latency routes can be used for real-time applications such as voice or video.
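A toy version of that application bias, with invented telemetry numbers, might look like this; a production routing engine would weigh many more inputs, both current and historical:

```python
# Illustrative path table with per-path performance telemetry
# (hypothetical numbers, not from any real deployment).
paths = [
    {"id": "overlay-1", "throughput_mbps": 400, "latency_ms": 35, "loss_pct": 0.1},
    {"id": "overlay-2", "throughput_mbps": 900, "latency_ms": 80, "loss_pct": 0.5},
    {"id": "underlay",  "throughput_mbps": 200, "latency_ms": 60, "loss_pct": 1.0},
]

def best_path(app_class: str) -> dict:
    """Pick a path with an application bias: bulk transfers favor
    throughput; real-time traffic favors low loss and latency."""
    if app_class == "bulk":
        return max(paths, key=lambda p: p["throughput_mbps"])
    if app_class == "realtime":
        return min(paths, key=lambda p: (p["loss_pct"], p["latency_ms"]))
    return paths[-1]  # default: fall back to the underlay

print(best_path("bulk")["id"])      # -> overlay-2
print(best_path("realtime")["id"])  # -> overlay-1
```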

High availability:  The use of proven, battle-tested, cloud software architecture for cloud networking ensures that centralized routing is not only resilient but is also highly available on a number of levels. Use of distributed microservices and the capability to “restart” individual services on the fly without service outage – a key element of cloud software architecture – combined with the safety net of reverting to underlay BGP4 routing, ensures packets continue to flow even in the event of something catastrophic.

Native integration into SD-WAN and SDN: As SD-WAN continues to overtake the WAN edge, support for centralized routing will continue to grow. Progressive SD-WAN vendors are today starting to utilize overlay networks and centralized routing, demonstrating its viability.

Networking is evolving, embracing cloud software architectures and techniques. It is pushing into the enterprise from two sides: from the data center and from the WAN edge. This push is accelerated by the approach of augmenting Internet technologies rather than replacing them outright, enabling enterprises to deploy these new technologies across their networks quickly. The effects are immediate and noticeable, as the performance of critical business applications improves within the enterprise, across the enterprise WAN, and with enterprise SaaS applications and cloud workloads.

 




Stream On? Networking Issues for Content Owners and Cord Cutters


Roughly 20 years ago, when the multi-week March Madness men’s college basketball tournament was streamed for the first time (by CBS), best effort streaming was hailed as impressive at a time when the internet was in its early years.

Since then, the focus has shifted to Quality of Experience (QoE) and to delivering ads for sponsors. The same holds true for the growing list of OTT service providers pitching to those leaving cable TV.

Streaming: By the numbers

Households using streaming services separate from traditional pay “cable” TV offerings need to know what Internet connection speeds are required to view video in different resolutions and formats, including 4K.

For Netflix, the speed you need to handle a 1080p high-definition (HD) stream is roughly 5 Mbit/sec. To handle a 4K ultra-high-definition (UHD) stream, a higher-resolution format with four times as many pixels, you’ll need a 25 Mbit/sec connection.

Many moviemakers now shoot their films in the higher 4K format so that consumers with 4K UHD TV sets can enjoy a crisper, more immersive viewing experience. In a side-by-side HD and 4K comparison, the naked eye starts to detect the difference at screen sizes of roughly 50 to 55 inches.

Data usage

When watching TV shows or movies on Netflix, consumers use roughly 1 Gigabyte of data per hour per device for standard-definition content and almost 3 Gigabytes of data per hour per stream for HD content.

For 4K viewing, Netflix allows users to set their data usage at 7 Gigabytes per hour, per stream, per device. Netflix also offers an Auto setting that the company says adjusts automatically to deliver the highest possible quality, based on the consumer’s current internet connection speed.
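Those speed recommendations and usage figures line up in rough magnitude, since data consumed is simply bitrate sustained over time. As a back-of-the-envelope check (actual usage is usually below the recommended speed’s ceiling, because encoders rarely sustain the peak bitrate):

```python
def gb_per_hour(mbps: float) -> float:
    """Convert a sustained stream bitrate in megabits per second
    to gigabytes of data consumed per hour (decimal units)."""
    return mbps * 3600 / 8 / 1000  # Mbit/s -> Mbit/hr -> MB/hr -> GB/hr

print(gb_per_hour(5))   # 2.25 GB/hr if a 1080p stream held the full 5 Mbit/s
print(gb_per_hour(25))  # 11.25 GB/hr ceiling for a 25 Mbit/s 4K connection
```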

Net neutrality nixed

The plan for net neutrality was, as the words suggest, an even playing field for the handling of traffic over the open internet. But with the rules dumped last year, content owners and consumers alike have wondered whether ISPs would give top priority to their own traffic and a lower priority to traffic from competing services. Why? Because network kingpins such as Comcast also own content. (Comcast owns NBCUniversal.)

The chief concern is that video traffic from a competing service, say Amazon, could be relegated to a lower priority, likely degrading its performance.

And with the increasingly crowded over-the-top (OTT) service space flush with contenders putting a full-court press on cable TV customers, would an ISP outright block traffic from a rival?

So, what happens in the absence of net neutrality? Under the replacement rules, ISPs must themselves disclose any instances of blocking, throttling or paid prioritization. Any such occurrence would be examined by the Federal Trade Commission, not the FCC, to determine whether it was anti-competitive.

ISPs had originally explained – roughly a decade ago – that packet inspection was part of their ongoing network management efforts. Stay tuned.

Now what?

Whether they own network infrastructure or not, content owners and licensees (which includes OTT service providers) need to focus tightly on measuring, achieving and improving the consumer’s QoE.

There’s little margin for error in this undertaking, given the fast-lengthening list of streaming services. Also, unlike traditional cable TV, almost all streamers offer free trials and don’t require term commitments.

So, if consumers don’t like what they see (or can’t see), it’s easy to change the channel.

Stay tuned – and enjoy the madness of March college hoops.




Why You Should Pay Attention to Intent-Based Networking


If you’re not yet familiar with intent-based networking (IBN), you soon will be.

Although software-defined networks (SDNs) now automate most network management processes, a growing number of organizations are coming to the realization that they need even more capabilities to ensure that their networks are operating as intended. IBN technology helps users quickly identify lurking network problems via a series of rich insights, then troubleshoot and remediate the issues. Meanwhile, IBN’s powerful security and policy capabilities provide simplified segmentation, consistent policy deployment and an ability to detect threats hidden in encrypted traffic.

Traditional methods for defining, provisioning, monitoring and securing networks to meet service level agreements are failing to keep pace with ongoing digital disruption, noted Andrew Wertkin, CTO of BlueCat, a network technology firm. “The change rate on networks continues to accelerate, and the landscape continues to grow in complexity with network virtualization, hyper-converged infrastructures and the continued growth of devices demanding network resources,” he said.

As enterprise networks grow larger and more diverse, network operation is becoming increasingly challenging. Managers need networking software that helps them plan, design and operate their networks more efficiently and with less downtime, while maintaining security, explained Natale Ruello, director of product at Forward Networks, a network verification platform provider. “Intent-based networking helps businesses mitigate the massive risks inherent in running complex, multi-vendor networks,” he added.

IBN simplifies complex networks by allowing operations personnel to work at a higher level and specify exactly what needs to be done, said Kireeti Kompella, Juniper Networks’ CTO, engineering. “IBN and automation are the only feasible approaches to scaling for future network technologies, in particular, IoT and multicloud.”

Multiple benefits

Intent-based networking allows network administrators to set policies for the desired state of the network and then use automation to ensure those policies are implemented, explained John Smith, CTO of LiveAction, a network performance software provider. “This makes it easier to translate business requirements into what you want the network to do, rather than having to use network-level commands, for example.”

IBN provides more direct mapping of what you want done by setting and implementing policies, Smith noted. “It also allows network administrators to compare the results of how the network performs against intent,” he said. “IBN implies automation, so all the benefits of automation to program the network also apply to IBN, such as easier network management and configuration.”
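The “compare against intent” half lends itself to a small sketch. The following Python toy declares a desired state, checks an observed state against it, and reports drift; the VLAN names and attributes are hypothetical stand-ins for the far richer models real IBN systems verify:

```python
# Declare the desired state, collect the observed state, flag drift.
# All names and attributes here are invented for illustration.
desired = {"vlan10": {"segment": "guest", "isolated": True},
           "vlan20": {"segment": "corp",  "isolated": False}}

observed = {"vlan10": {"segment": "guest", "isolated": False},  # drift!
            "vlan20": {"segment": "corp",  "isolated": False}}

def verify(desired: dict, observed: dict) -> list:
    """Return human-readable violations of intent."""
    violations = []
    for name, want in desired.items():
        have = observed.get(name)
        if have is None:
            violations.append(f"{name}: missing from network")
        elif have != want:
            violations.append(f"{name}: expected {want}, found {have}")
    return violations

for v in verify(desired, observed):
    print(v)
```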

IBN improves network reliability and helps network operators sleep better at night, Ruello observed. “Organizations have a better understanding of their networks—whether or not they are behaving as intended—and how to remediate any issues,” he said. “Without the right tools, many questions are simply not answered, leaving operators blind to potential problems in their network or working long weekends hunting down the needles in their haystacks.”

Enterprises and telecommunications service providers that use IBN generally save time, money and resources. “It costs more money to maintain, monitor and operate an existing network than it does to initially buy and install the networking equipment,” Kompella reported. “By implementing a network that encompasses all of the advantages provided by automation, companies can redeploy their resources to focus on other tasks that provide a higher ROI.”

Moving forward

The first step toward IBN adoption should be to deploy a system that can verify current network behavior and ensure that’s aligned with intent, Ruello advised. “One key thing to note [is that] IBN adoption does not mean greenfield all the time,” he said. “Existing networks can benefit from IBN by deploying, for instance, a network verification platform to make sure network behavior and intent are correct.”

IBN adoption represents something far more than a technology transformation, Wertkin noted. “It also affects [the] organization, skillsets, operations, compliance/governance, and existing service level agreements,” he observed. “One recommendation we have is for organizations to assess their readiness in these areas.”

As with all transformative technologies—especially those impacting critical infrastructure, such as core networking—Wertkin suggested starting in the lab, working with key vendors and then transitioning IBN to less critical networks in the user domain, such as guest networks. “Vendors can allow the market to get ahead of product and it is key to understand the actual capabilities as opposed to the PowerPoint promises,” he said.

Final thoughts

“Often, we technologists forget that architectures are only meaningful if they fulfill requirements,” Wertkin observed. He added that it’s becoming increasingly difficult to understand network requirements, given how rapidly business needs change. “We should design and architect infrastructure to enable rapid change, and there is a great deal of promise with IBN.”

“We expect this space to heat up as more people learn about and experience the benefits,” Ruello predicted.

 


