
Automating the Enterprise Network: Why Scripting is No Longer the Answer


Numerous open-source scripting approaches to network management are available today. While promising, they can prove a high-risk trap for enterprises looking to automate, remove complexity, and make changes to their networks quickly. While instances of expensive network outages are usually kept under wraps, enterprises must be aware of these hidden issues and move away from traditional scripting to achieve an automated network foundation that ensures business continuity and innovation.

Reducing complexity and improving agility: Drivers for automation

Enterprises are trying to reduce complexity in their networks – including lengthy lab testing and implementation cycles – to improve agility. The end goal is a platform for competitive business innovation built on policy-driven, intent-based principles. In addition, network virtualization, SD-WAN, and other shifts in networking mean the network as a service is no longer predictable.

These dynamics are beginning to render scripting and home-grown coding obsolete, because both remain locked into a static model of the network rather than maintaining the stability of the core business while the network evolves as new initiatives are added dynamically. It's the network itself that represents the living, evolving business – not the static, scripted, or manually configured model. Months of learning, customizing, and testing not only can't keep pace but are no longer needed. Rather, enterprises need a dynamic knowledge base of the network that can deliver automated remedies, updates, and alerts for configuration and for ongoing maintenance and management. This is why intent-based networking is resonating in the industry; validation of business intent, automated implementation, awareness of network state, assurance, optimization, and remediation are all required for the modern network. The question is how to get there fast and efficiently.

Why scripting isn’t the answer

There are several reasons why writing scripts is not the answer for enterprises looking to automate their networks:

  • While Python scripting is a compelling upgrade to slow, manual processes, scripts – unlike telecom protocols – are not standardized, typically don't follow best practices, and don't scale in a multi-vendor network (see the sketch after this list). As business intent evolves through new initiatives or acquisitions, scaling the network becomes critical. Scripts are notoriously difficult to adapt to new vendor systems and may inhibit cost savings.

  • Home-grown scripting, unlike purpose-built software, cannot self-adapt to new environments, be programmed to interact with network state, or operate as a machine-learning platform. At best, home-grown scripts provide a one-off, static network configurator for a fixed point in time. As the network changes, the scripts must be updated and re-tested to maintain any underlying knowledge base while polling the changing state of network resources. Even setting aside the training, script testing, bug fixing, and maintenance, the user is left with an approach that is static and must be re-scripted manually and continually. A user who wants to make the network policy-driven must hire or contract scarce additional resources to write, test, and maintain custom software.

  • DIY scripting from generic templates or playbooks is another approach that seems promising. However, it requires customizing integrity tests, introduces the same high-risk maintenance issues and testing delays, is unresponsive to policy change, and still requires trained skills and customization. Unlike open-source web platforms, these templates are not backed by massive communities and have the potential to damage enterprise operations.

  • With scripting, the enterprise user is left to build compliance-testing software to minimize enterprise risk. Compliance automation requires ongoing audit and action to validate the actual network state, ensuring compliance with policy. Even after updating and re-testing scripts, there is no guarantee that problems have been fixed – or that new problems haven't been introduced.

  • Scripting can be problematic when there are staffing changes in the enterprise. As staff change, the cost of either repeated training or poorly documented scripts creates a cycle of re-creation. Scripts not well understood by new staff tend to be disposable and are replaced, introducing additional testing and ultimately, more risk.

  • Interpreted scripts are slow and inefficient compared to compiled, optimized code. In large configurations, this can impact availability and maintenance windows as the scripts update networks and are subsequently tested. Enterprises are looking to speed up operations, and dynamic, automated changes may make the concept of large-scale network maintenance all but disappear.
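A minimal sketch in Python of the multi-vendor brittleness described above, using hypothetical vendor names and CLI syntax: the same simple intent (set an interface MTU) needs hand-maintained logic for every vendor, and each new platform forces another round of writing and re-testing.

```python
# Hypothetical vendor CLI templates; each new vendor or OS version means
# another template plus its own validation, rollback, and re-testing work.
INTENT = {"interface": "uplink1", "mtu": 9000}

VENDOR_TEMPLATES = {
    "vendor_a": "interface {interface}\n mtu {mtu}",
    "vendor_b": "set interfaces {interface} mtu {mtu}",
}

def render_config(vendor: str, intent: dict) -> str:
    try:
        return VENDOR_TEMPLATES[vendor].format(**intent)
    except KeyError:
        raise ValueError(f"no template for {vendor}: the script must be extended and re-tested")

print(render_config("vendor_a", INTENT))
```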

To operate a responsive, automated network, the existing model of static scripting, monolithic testing, and ongoing training and maintenance does not support the kind of fast-moving, intent-based networking that is becoming the goal of the modern enterprise. To build a foundation that keeps the business evolving and competitive, enterprises need to move away from traditional scripting and towards intent-based automation.




Securing Today's New and Varied Network Edges


The network perimeter has been replaced with a series of edge network environments and devices that the organization either doesn't own (in the case of cloud infrastructures, SaaS applications, or user-owned mobile devices) or that no longer rely on a hub-and-spoke connection model that backhauls traffic for inspection. From an IT perspective, the challenge is ensuring consistency between these environments – especially when DevOps and web teams may not even report to the same line of business.

Security at all the edges

To start, organizations need to deploy security solutions built around open standards so they can openly see other devices, share and correlate threat intelligence, and participate in a coordinated response—regardless of their form factor or where they have been deployed in the distributed network.

Next, these solutions also need to be adapted to the unique requirements of today’s new edge environments:

The multi-cloud edge: Each cloud platform has unique controls and management interfaces that require security solutions to be specially configured in order to operate natively. However, security tools that function natively in a cloud environment may have challenges interoperating with versions running natively on other platforms. And security devices that are deployed as an overlay solution can lose functionality, making consistent policy enforcement difficult.

To address this challenge, IT teams need to select security solutions that operate natively across a wide range of cloud platforms and include connectors that ensure consistent policy orchestration and enforcement across and between network environments.

The SaaS and shadow IT edge: Users often have 15 times more applications deployed in the network than IT knows about.

Security solutions need to be able to identify these Shadow IT applications; ensure that critical workflows, data, and applications being directed to those sites are being adequately secured and monitored; and ensure that malicious data or applications are blocked from entering the network from these uncontrolled sites.
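A minimal sketch of the first of these requirements, assuming destinations observed in flow or proxy logs can be compared against a sanctioned-application list; the domain names are hypothetical.

```python
# Hypothetical sanctioned-SaaS list and observed destinations from flow/proxy logs.
SANCTIONED = {"office365.example.com", "crm.example.com"}

observed_destinations = {
    "office365.example.com",
    "filesharing-freemium.example.net",  # shadow IT candidate
    "personal-notes.example.org",        # shadow IT candidate
}

# Anything in use that isn't sanctioned gets flagged for monitoring or blocking.
for app in sorted(observed_destinations - SANCTIONED):
    print(f"unsanctioned application in use: {app}")
```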

The IoT edge: An alarming majority of IoT devices are not only inherently insecure, but they can't even be updated or patched, which is why they are a preferred target of cybercriminals.

Security solutions need the ability to dynamically identify devices at the moment of access, apply policies and segmentation rules, and share those policies across the distributed network.
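A minimal sketch of mapping a device identified at access time to a segmentation policy, with unknown devices quarantined by default; the device types, VLANs, and policy fields are hypothetical.

```python
# Hypothetical device-type -> segmentation policy mapping.
DEVICE_POLICIES = {
    "ip-camera":   {"vlan": 210, "allow": ["video-recorder"], "internet": False},
    "hvac-sensor": {"vlan": 220, "allow": ["building-mgmt"],  "internet": False},
}

def policy_for(device_type: str) -> dict:
    # Unknown or unidentifiable devices land in a quarantine segment by default.
    return DEVICE_POLICIES.get(device_type, {"vlan": 999, "allow": [], "internet": False})

print(policy_for("ip-camera"))
print(policy_for("unknown-gadget"))  # quarantined
```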

The mobile workforce edge: It is not unusual for a single user to have multiple devices connected to the network simultaneously. These users also often blend personal and professional data, applications, and profiles onto a single device, exposing organizations to risk.

A comprehensive security strategy for endpoint devices needs to include VPN, network access control and segmentation, endpoint security tied to network policies, and a mobile device management (MDM) solution that can automatically secure connections and remotely wipe device drives.

The OT edge: As IT and OT networks converge, the attack surface not only expands, but each environment is exposed to new risks from the other. On the OT side, newly deployed IT solutions connect devices and resources that have traditionally been isolated, exposing them to threats. In the other direction, delicate and aging OT systems often have vulnerabilities that can be exploited, creating a new platform from which to launch attacks.

Securing OT requires adopting a Zero Trust model, establishing secure controls between OT and IT, and deploying access control and segmentation to secure delicate or at-risk applications, devices, and control systems.

The WAN edge: The hub-and-spoke model for branch offices is gone. Instead, the new SD-Branch allows remote locations to operate as fully integrated components of the extended WAN. And because many branches also include their own LAN, composed of fixed and mobile devices, IoT, cloud connections, and multiple public internet links, solutions need to support a complex mix of LAN-WAN-LAN environments.

Protecting the WAN edge requires a security solution that can easily move into and across all of these environments using a zero-touch deployment model. A secure SD-WAN solution needs a fully integrated suite of security tools that extends consistent security functionality, performance, and enforcement to the remote location and then seamlessly interoperates with the local branch LAN.

The emerging 5G edge: 5G promises to deliver on the potential of things like connected cars, smart cities, and edge networking, where devices can share critical information, receive rich media streams, run data-heavy applications and make real-time decisions.

This will require security to move to the edge, where it needs to be embedded in edge networking and IoT devices to avoid the need for round trips for data inspection and policy decisions.

It’s time for a new generation of security

Second-generation security solutions can't take us any further. Organizations need third-generation security designed for today's digital marketplace, built around high performance, adaptability, cross-device and cross-platform interoperability, and self-learning capabilities that not only see and respond to threats in real time but actually anticipate threats before they happen. This will allow security to be self-provisioning, self-operating, self-learning, self-adjusting, and self-correcting, enabling organizations to successfully defend themselves against the expanding attack surface.

 




Mitigating Network Security Vulnerabilities with Cloud-Native Approaches


In the fast-paced and highly competitive application-driven economy, a business has a direct dependence on the security and availability of the cloud infrastructure it runs on. Whether it's running in a private cloud, hybrid cloud, multicloud, or even as a new distributed workload at the Intelligent Edge, the same business questions prevail:

  • Is my service secure, and are my customers and their data protected?

  • Is it available to the users who depend on it? 

  • Can my operational model cope with any unforeseen needs or circumstances?

As security attacks grow more complex and span multiple technologies, attackers are becoming cleverer, more tech-savvy, and increasingly state-sponsored. Business-critical applications and infrastructure are continuously being probed for vulnerabilities by both the good guys and the bad ones. As network operators, we are constantly told to keep our pulse on the latest security vulnerabilities in order to fix them quickly. However, we know the all-too-true reality: fixing security vulnerabilities quickly and across a fleet of network devices is rarely possible. That is why it is all the more frightening that some of the 31 security vulnerabilities Cisco announced in April were quickly exploited in the wild.

Unfortunately, these latest vulnerability disclosures prove that the answer to those three critical questions is a resounding "No!" Given Cisco's dominance in networking and the fact that every other network operating system (NOS) from every vendor out there has the same fundamental architectural problem, this should be the subject of a national debate on how we are still living in a world where the networking industry gets by on the hope that nobody notices or tries to exploit vulnerabilities.

Today’s failing network

For a very long time, devices from network vendors have been given a hall pass when it comes to meeting internal security policies. The reality is that updating NOS code is hard, takes time, and is disruptive to business services. These closed, tightly integrated pieces of equipment were treated as synonymous with "hardened" – but this is far from the case.

When security vulnerabilities are found, the fixes require a new monolithic image to be delivered from vendors. At best, it takes months to test, verify, and roll out these fixes. At worst, it never happens at all. In the meantime, a gaping security hole is left in the infrastructure. All application data moves across the network infrastructure; if this infrastructure is compromised, an attacker has the ability to redirect, block, or capture that information.

In the recent exploit dubbed "Sea Turtle," DNS was hijacked and used by attackers to mount man-in-the-middle attacks against critical infrastructure components. They leveraged exploits in Cisco's IOS and IOS-XE to gain unauthenticated access and were able to reload the affected devices and remotely execute code with elevated privileges. The fix for these issues? A new monolithic image, which again needs months (or longer) to be tested, verified, and manually rolled out, bundling the fix with other changes that could impact how the device functions in the environment. This is simply not good enough for business applications that run in a highly dynamic and fast-paced environment.

The need for cloud-native networking

The monolithic approach to networking is flawed, and a new architecture is needed. The ability to update infrastructure and resolve security vulnerabilities is a modern fact of running infrastructure, yet it is not possible with legacy networks. We have viewed this infrastructure as static, siloed, and brittle. This starts with a failure in how the NOS has been architected. The only way we are going to solve the operational issues faced by today's operators is with an entirely new approach built on the principles of microservices and containerization, leveraging the latest advancements in DevOps practices and cloud-native tools. With a cloud-native approach, you can upgrade or immutably replace network applications with no or minimal impact, in seconds rather than months. All of this allows DevOps and NetOps teams to collaborate, enabling companies to embrace a mindset of speed and constant change.

By employing cloud-native methods and tools, you create an open ecosystem that empowers operators to use the same toolsets, practices, and language across the infrastructure spectrum. The network becomes an extension of the application deployment cycle rather than a separate step outside of it. By utilizing the same familiar cloud-native framework adopted by DevOps teams, the network can now be pulled into the CI/CD (continuous integration, continuous delivery) pipeline for greater automation, control, and reliability, delivering application time-to-service more quickly and improving operational efficiency by automating repeatable processes.
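A minimal sketch of the kind of pre-deployment check that could run as a step in such a CI/CD pipeline; the candidate configuration and required settings are hypothetical.

```python
# Hypothetical candidate configuration produced by the change under review.
CANDIDATE_CONFIG = """
interface uplink1
 mtu 9000
 description core uplink
"""

# Settings the change must include before it may be merged and rolled out.
REQUIRED_LINES = ["mtu 9000", "description core uplink"]

missing = [line for line in REQUIRED_LINES if line not in CANDIDATE_CONFIG]
if missing:
    raise SystemExit(f"CI gate failed, missing required settings: {missing}")
print("candidate config passed pre-deployment checks")
```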

A cloud-native approach to networking encourages a culture of "Yes" when it comes to needed change because its containerized microservices architecture creates a more resilient and flexible network. Network operators can limit features to just those they need and can replace the single microservice that is affected. Since DevOps and NetOps now speak the same language, this is accomplished in a coordinated and distributed fashion that results in a lower risk of service impact. Fewer features translate into a simpler environment that is easier to troubleshoot, with fewer things to go wrong. Running only the features you need results in fewer security vulnerabilities, and deploying network features as containerized microservices shortens the test cycle. All of this translates into a more resilient network. A network that reduces risk and increases predictability encourages NetOps engineers to approach changes in a production network with confidence.

Networking is living in the dark ages compared to how applications are managed and deployed. We need to break up the network monolith and treat the network like the distributed application it has always been. Network operators can't continue to rely on NOS architectures that were designed 30 years ago, when applications were centrally located and did not move. The Cisco security vulnerabilities continue to highlight the Trojan horse the network has become for the infrastructure and applications running across it – it's time for a change.




Intent-Based Verification Leading a New Wave of Network Automation


Intent-Based Networking (IBN) is one of the most significant IT trends in recent years and is widely considered the “next big thing” in networking. The IBN vision comes as a natural successor to Software-Defined Networking (SDN), with the goal of automating networking operations and better aligning networks with business goals or intent.

Gartner first coined the term IBN in 2017. As defined, IBN comprises networking software that helps to plan, design, implement, and operate networks in ways that improve network availability and agility. In practice, it boils down to two key capabilities: (1) configuration: the ability to translate high-level policy or intent into network configuration, and (2) verification: the ability to verify that the actual behavior matches the high-level intent.

The biggest challenge in delivering an IBN solution is automated intelligence – the ability for software to reason about network behavior and map back and forth between high-level intent and actual configuration. It effectively means replicating and automating the knowledge and experience that seasoned network operators have built up over years of operating networks and diagnosing and troubleshooting issues. Additionally, there are organizational barriers to adopting IBN within existing networks and workflows. How can enterprises, whose business depends on the network, trust software to run their network?

Fortunately, there is a practical, easily deployable aspect of IBN that delivers automation benefits today: network verification.

Verifying network behavior is a key IT process to automate

So, what is network verification? Network verification is the ability to validate that the end-to-end behavior of the network, as determined by its configuration and state, matches the higher-level intent. More specifically, network verification systems can reason about every possible behavior that the network can exhibit, given its current configuration and state. They can mathematically analyze all possible end-to-end paths in the network for all possible packets that can enter the network. This end-to-end behavior analysis can then be compared against the high-level intent. Some examples of end-to-end behavior that network verification can easily verify are:

  • Are there at least 3 redundant paths from a particular access-layer router to another site through an MPLS core? (This check is sketched in code after this list.)

  • Are there any single points of failure along an entire network path?

  • Have we ensured logical traffic isolation between two tenants or applications for all non-management IP protocols?

  • Is traffic coming in from the external internet properly restricted to only specific destinations and services?

  • Are only specific services running in our Amazon cloud available from various internal sites, systems and users? If so, which ones?
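A minimal sketch of the first check in the list above, assuming the network topology is modeled as a graph with the networkx library; the device names and links are hypothetical.

```python
# Hypothetical topology: an access router reaching a remote site via three core nodes.
import networkx as nx

topology = nx.Graph()
topology.add_edges_from([
    ("access-r1", "core-1"), ("access-r1", "core-2"), ("access-r1", "core-3"),
    ("core-1", "site-b"), ("core-2", "site-b"), ("core-3", "site-b"),
])

# Intent: at least 3 redundant (node-disjoint) paths from the access router to site B.
disjoint_paths = list(nx.node_disjoint_paths(topology, "access-r1", "site-b"))
assert len(disjoint_paths) >= 3, "intent violated: fewer than 3 redundant paths"

# Intent: no single point of failure between the two endpoints.
cut = nx.minimum_node_cut(topology, "access-r1", "site-b")
assert len(cut) >= 2, f"single point of failure: {cut}"

print("intent verified for all modeled paths")
```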

IBN verification systems have the capability of understanding such high-level, generalized requirements and verifying them in the context of the current network state. IBN effectively bridges the intent with the individual device configurations to reason through and automate the verification process. From an IT perspective, this can proactively identify latent errors in the network that could eventually lead to outages, while avoiding tedious manual searches to isolate issues or perform root-cause analysis. For example, if a set of configuration changes is proposed or a new service is deployed, IBN can help verify the impact on existing policies before deploying to the live network, averting possible rollbacks and helping to accelerate change windows.

Verification is a distinctly different methodology from traditional testing; it reasons from an analysis of the network design, configurations, and current network state. It does not look at live traffic flows or test scenarios to determine network activity. Verification can thus do something traditional testing can rarely do: "prove a negative" by confirming that something can't happen, such as confirming that two networks cannot reach each other through any path. IBN verification can also identify configuration errors like MTU mismatches, forwarding loops, or IP address duplication anywhere in the network, which may not show up in any specific test, and without reviewing devices one by one.
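A minimal sketch of one such check, flagging MTU mismatches between the two ends of each link from state collected read-only; the devices, interfaces, and values are hypothetical.

```python
from collections import defaultdict

# Hypothetical interface state pulled from each device: (device, interface, link, mtu).
interfaces = [
    ("core-1", "et-0/0/1", "link-42", 9100),
    ("core-2", "et-0/0/7", "link-42", 1500),  # mismatch on the same link
    ("edge-1", "ge-0/0/3", "link-17", 1500),
    ("edge-2", "ge-0/0/9", "link-17", 1500),
]

mtus_per_link = defaultdict(set)
for _device, _ifname, link, mtu in interfaces:
    mtus_per_link[link].add(mtu)

mismatched = {link: mtus for link, mtus in mtus_per_link.items() if len(mtus) > 1}
print("MTU mismatches:", mismatched or "none")  # flags link-42 without checking devices one by one
```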

How does network verification work in practice today?

IBN verification systems create a model of the network that can reason about all possible behaviors and use that to verify compliance with the intended policies and service descriptions. For an IBN verification system to work on an existing network, the only conditions that need to be met are: 1) Read-only access must be available to each device to pull configuration files and state information, 2) the IBN software must accurately model the behavior of each network device (switch, router, firewall and load balancer) for all possible packet flows, and 3) the IBN model must accommodate all protocols and services such as EVPN, BGP, MPLS, virtual networking, etc.

IBN in general, and verification in particular, is shifting the network IT model from a reactive approach to problems, to a proactive approach where an automated analysis of current network designs can virtually eliminate human errors and misconfigurations to avoid issues in the first place. The automation that verification enables is helping to replicate and augment the rare expertise of the critical IT engineers in diagnosing outages, documenting network requirements and verifying fixes.

A prudent approach to IBN

While we are still some ways away from enterprises being ready to let software completely take over their networks, most IBN deployments today are succeeding by focusing on the network verification process. Not only is it safe (it requires read-only access to devices), it can also easily integrate into existing networks and workflows without requiring a hardware refresh. Enterprises are able to realize immediate, tangible benefits, because verification accelerates change management processes, increases reliability, and improves agility.

At a higher level, verification is a prudent first step to deploying IBN. Verification enables enterprises to build trust in their network, and in the processes that operate the network, allowing them to evolve from manual human-driven, to software-assisted, to ultimately software-driven network operations.




3 Imperatives for Network Management Success in the Hybrid World


Networks today are a mixed bag, comprising what can be a tangled mess of physical, virtualized, and cloud infrastructure. To compete, businesses are pursuing digital transformation initiatives such as SD-WAN, Network Function Virtualization, and edge computing. While these technologies offer great benefits, they also add great complexity. The race for a competitive edge inevitably creates interoperability hurdles among IT systems. Today, businesses must wade through wired and wireless networks that are multi-platform, multi-vendor, and multi-cloud – each with its own set of complexities. Performance issues inevitably arise, which can cause downtime and cost a business anywhere from tens of thousands to millions of dollars.

One major challenge faced by many network operations (NetOps) teams is the use of too many monitoring tools. The issue of monitoring tool sprawl is far worse than most realize. According to a bi-annual network management study from Enterprise Management Associates, nearly half of all networking pros are using between four and ten tools to monitor and troubleshoot their networks. And nearly one-third of IT teams are juggling 11 or more tools!

Today’s hybrid networks simply demand more. Organizations must anticipate, identify, troubleshoot and resolve a wide array of network issues. An important key to network management is comprehensive visibility, with advanced performance analytics, all through a single pane of glass.

Here are three imperatives for network visibility and management across hybrid networks:

The ability to collect various data sources across all network domains: Whether a team is conducting capacity planning, troubleshooting a critical performance issue, or analyzing an anomaly, achieving true end-to-end visibility across the entire network requires insight into a broad range of data sources. From Flow (IPFIX, NetFlow, sFlow, Cflowd, etc.) and SNMP, to packet data (full capture and analytics) and API integrations (REST, Bulk, Stream, etc.), each data source plays a unique and critical role in the overall process of managing the network. Without the ability to consume these different data sources, NetOps can be left with insufficient data that hinders their ability to manage and troubleshoot the network.
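A minimal sketch of folding these different data sources into one common record format so they can be correlated and queried together; the field names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    source: str   # "netflow", "snmp", "rest-api", ...
    device: str
    name: str
    value: float

# Hypothetical raw records as each collector delivers them.
raw_flow = {"exporter": "edge-1", "bytes": 120_000}
raw_snmp = {"device": "core-2", "ifOutOctets": 98_765}

normalized = [
    Metric("netflow", raw_flow["exporter"], "bytes", float(raw_flow["bytes"])),
    Metric("snmp", raw_snmp["device"], "ifOutOctets", float(raw_snmp["ifOutOctets"])),
]

for m in normalized:
    print(f"{m.source}:{m.device} {m.name}={m.value}")
```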

The ability to visualize and interpret that data intuitively in order to take action: It's not enough to simply have access to every network data type. NetOps teams need solutions that translate data into simple management and troubleshooting workflows. For instance, flow data from virtual, physical, and cloud devices is especially critical to managing and troubleshooting application performance. But if a network management platform doesn't allow the team to visualize an application's flow across the entire network – from source IP address to destination IP address – it will be difficult to preserve a positive end-user experience. Packet-level data is critical for troubleshooting complex application issues like slow database performance. Visualizing the network path and reviewing the packet data creates performance views that allow NetOps to resolve issues faster. Whether troubleshooting a VoIP issue or optimizing a new SD-WAN deployment, having granular visibility into all types of network data is imperative for comprehensive network management and control.

The ability to present top-level status updates and reports to executive stakeholders: What good is all this if NetOps can't clearly communicate its value and progress to executives? Higher-ups typically only care about a few key reports and don't want to be bogged down trying to decipher in-depth networking analytics. How are we doing on uptime? What's the availability of a particular set of devices, circuits, or sites? What caused the minor downtime incident last week? How is the bottom line impacted? There's a reason they call it an executive summary. If you can't arm executives with this type of critical information, they won't be able to make sound budgetary, personnel, or business decisions. Teams need management solutions that enable them to generate reports conveying easily digestible network performance metrics, SLA status, application conditions, and ultimately the merits of their work.
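A minimal sketch of turning raw outage minutes into the availability and SLA figures an executive summary needs; the services, downtime, and SLA target are hypothetical.

```python
# Hypothetical downtime per service over a 30-day month, against a 99.9% SLA.
MINUTES_IN_MONTH = 30 * 24 * 60
SLA_TARGET = 99.9  # percent

outage_minutes = {"branch-wan": 60, "data-center-core": 3, "voip-gateway": 0}

for service, downtime in outage_minutes.items():
    availability = 100 * (MINUTES_IN_MONTH - downtime) / MINUTES_IN_MONTH
    status = "met" if availability >= SLA_TARGET else "missed"
    print(f"{service}: {availability:.3f}% availability, SLA {SLA_TARGET}% {status}")
```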

The complexity challenges presented by multi-vendor, multi-platform, and multi-cloud IT environments, coupled with the ever-present issue of tool sprawl, make managing today's hybrid networks an uphill battle. NetOps teams need access to a wide range of network data sources, the ability to visualize that information coherently, and the ability to act quickly. Effective reporting on business-critical metrics is equally imperative in order to successfully manage these complex modern network topologies.

 


