
The Power of the Packet in Today’s Hybrid IT Environment | IT Infrastructure Advice, Discussion, Community

Networks have never been more complex than they are today. New technology deployments and IT initiatives come into play each year, constantly adding to the ever-growing mix of wired, wireless, multi-vendor, and multi-cloud environments. Unfortunately, despite the business advantages that come with new cloud deployments, updated wireless technologies, and other advances, the hybrid nature of modern networks creates visibility challenges for network operations (NetOps) teams, including time-consuming troubleshooting, downtime incidents, and other costly issues. According to a recent survey, 35 percent of networking professionals struggle with poor visibility across all fabrics of the network, and 42 percent of network teams spend too much time troubleshooting across the entire network. So, what’s the solution?

One “80-20” rule in networking states that 80 percent of network issues can be resolved solely using flow data. However, as complex, hybrid networks become the norm, the remaining 20 percent of issues require even more granular insight and visibility to troubleshoot quickly and correctly. This means that NetOps teams must look beyond flow data alone to better manage and optimize these increasingly hybrid networks. Today, let’s explore how packet data can solve many of the issues we commonly experience in network environments.

The Power of the Packet

Packet data is the most granular data type network administrators can collect, helping NetOps teams troubleshoot more complicated issues they wouldn’t be able to address using flow data alone. Packets can provide a wide breadth of useful information network teams can use to quickly isolate the root cause of network issues. Faster troubleshooting leads to quicker resolution, less downtime, increased productivity, better user experience, and ultimately, it allows NetOps teams to focus more on strategic initiatives like network transformation projects.

Here are three prime examples of how packets can empower NetOps teams to manage, troubleshoot and optimize today’s hybrid networks:

Isolating the Root Cause of Latency – One very common example is when users are experiencing latency, but the network team doesn’t know what’s causing it. As we know, a flow with high latency could have several root causes. However, NetOps teams don’t have time to blindly trial and error each possibility, especially when subpar network performance can derail business operations.

With access to packet data, IT teams can drill down to isolate the exact cause of the issue with confidence. Packets can quickly identify whether latency is caused by the network or an application and can help pinpoint the exact transaction within an application that is causing latency to occur, providing specific and actionable troubleshooting data to application engineers to quickly address the issue. Packets can also show network teams exactly where latency is occurring in a network path, as quite often the latency is being introduced by a specific network asset. This saves time, effort, and allows NetOps to spend their time focusing on more important things instead of tedious troubleshooting.
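To illustrate the idea, here is a minimal sketch of how packet-level timing can split latency between the network and an application. The function and timestamp values are hypothetical, standing in for what you might extract from a capture with a tool like Wireshark or tshark:

```python
# Sketch: separating network latency from application latency using
# packet timestamps (hypothetical values from a TCP capture).

def split_latency(syn_ts, synack_ts, request_ts, first_byte_ts):
    """Estimate where the delay lives for one TCP transaction.

    Network round-trip time is approximated by the TCP handshake
    (SYN -> SYN/ACK); anything the server spends beyond one RTT
    between the request and its first response byte is attributed
    to the application.
    """
    network_rtt = synack_ts - syn_ts
    total_response = first_byte_ts - request_ts
    app_time = max(0.0, total_response - network_rtt)
    return network_rtt, app_time

# Hypothetical capture: the handshake took 2 ms, but the first
# response byte arrived 480 ms after the request -- the application,
# not the network, is the bottleneck.
rtt, app = split_latency(0.000, 0.002, 0.010, 0.490)
print(f"network RTT: {rtt*1000:.0f} ms, application time: {app*1000:.0f} ms")
```

In a real investigation the same comparison would be made per hop along the network path, which is how a specific network asset introducing latency gets identified.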

Troubleshooting Pesky VoIP Issues – Imagine that a customer is experiencing poor VoIP performance (dropped calls, poor call quality, etc.) and they voice their frustration to IT, hoping to get the issue resolved as soon as possible. Typically, customers know their phone numbers but not their IP address, and since flow data, even IPFIX, does not typically include phone numbers in the flow record, it is difficult to quickly isolate the flows in question. NetOps teams then need to pull in other information, tools, or resources to identify those flows, which significantly reduces the chances of fixing the problem quickly. Luckily, packets carry the signaling that ties phone numbers to sender and receiver IP addresses – everything the team needs to get to the bottom of the issue and quickly resolve it – with one tool. In this scenario, packet data is instrumental in helping network teams deliver better end-user experiences and prevent similar issues from occurring in the future.
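As a concrete sketch of that lookup, the snippet below scans hypothetical decoded SIP payloads for a caller's phone number and returns the IP endpoints of the matching packets. The packet records and numbers are invented for illustration; in practice they would come from a capture of SIP signaling traffic:

```python
# Sketch: mapping a phone number to the IPs of a VoIP call by scanning
# SIP signaling payloads. Packet records below are hypothetical.
import re

def find_call_endpoints(packets, phone_number):
    """Return (src_ip, dst_ip) pairs for SIP packets whose headers
    mention the given phone number."""
    pattern = re.compile(r"sip:\+?" + re.escape(phone_number) + r"@")
    return [(p["src"], p["dst"]) for p in packets
            if pattern.search(p["payload"])]

packets = [  # hypothetical decoded SIP packets
    {"src": "10.1.1.20", "dst": "10.2.2.5",
     "payload": "INVITE sip:5125550199@pbx.example.com SIP/2.0\r\n"
                "From: <sip:5125550142@10.1.1.20>\r\n"},
    {"src": "10.3.3.7", "dst": "10.2.2.5",
     "payload": "REGISTER sip:pbx.example.com SIP/2.0\r\n"
                "From: <sip:5125550175@10.3.3.7>\r\n"},
]

print(find_call_endpoints(packets, "5125550142"))
# -> [('10.1.1.20', '10.2.2.5')]
```

Once the IP pair is known, the RTP media flows for the call can be isolated and analyzed for loss and jitter.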

Conducting Thorough Forensic Analyses – Unfortunately, most network issues are discovered only after they’ve already had a chance to disrupt the business in one way or another. The damage has already been done, leaving network teams scrambling reactively to fix the issue (with a tremendous amount of pressure to do so quickly). In the case of a network breach or downtime incident that has already occurred, network teams need to act fast to prevent further damage.

Packet data can allow NetOps teams to go back and piece together where things went wrong and what caused the incident. It can be used to reconstruct web sessions so IT can analyze users’ past network activities, protocol data, application activity, and more. Packet data also shows network teams a real-time view for performance analysis and troubleshooting. Obviously in these situations, there’s no way to go back in time and undo the breach or network failure that happened in the first place, but these insights can help NetOps to quickly resolve the issue, re-establish expected network performance and prevent future issues.
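The reconstruction step can be sketched very simply: order captured TCP segments by sequence number and discard retransmitted duplicates to recover one direction of a past session. The segment tuples below are hypothetical decoder output:

```python
# Sketch: reassembling one direction of a TCP stream from captured
# segments so past activity can be reviewed. Segments are hypothetical
# (seq, payload) tuples, possibly out of order, with one retransmission.

def reassemble(segments):
    """Order segments by sequence number and drop exact duplicates."""
    stream, seen = b"", set()
    for seq, payload in sorted(segments):
        if seq not in seen:
            stream += payload
            seen.add(seq)
    return stream

segments = [
    (1001, b"GET /login HTTP/1.1\r\n"),
    (1022, b"Host: intranet.example\r\n"),   # arrived out of order
    (1001, b"GET /login HTTP/1.1\r\n"),      # retransmission
]
print(reassemble(segments).decode())
```

Real forensic tools handle overlapping and partially retransmitted segments as well; this simplified version only deduplicates identical sequence numbers.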

We know two things for sure in today’s complex IT landscape. The first is that networks will continue to become more “hybrid” as time goes on, and the second is that company executives, customers and end-users don’t care about what challenges this brings about for NetOps – they still expect high performance and quality experiences. As such, IT departments must be able to troubleshoot issues quickly and with confidence, regardless of where in a hybrid network they originate. This means that access to packet data for streamlined troubleshooting and network optimization is now imperative for every NetOps team.   





4 Visibility Requirements to Ensure App Performance Across Hybrid Nets

In a recent survey of enterprise networking and IT professionals from Sirkin Research, 35% of respondents struggled with poor visibility into performance across all fabrics of the network. But as network transformation initiatives like SD-WAN, SDN, and public/private clouds become more widespread, hybrid networks are quickly becoming a fact of life for IT and NetOps professionals. Without visibility into these networks, IT can’t troubleshoot the business-critical applications that organizations rely on.

Monitoring hybrid networks can be challenging, but here are four techniques IT and NetOps can use to gain visibility into today’s complex networks:

1) Ad-hoc wireless sniffing: In my opinion, monitoring all wireless traffic isn’t realistic for most organizations – it requires too many capture points spread throughout the wireless network. A better solution is to supplement flow data and packet data from wired network segments with ad-hoc wireless packet capture for issues that can’t be resolved based on the flow data alone. Sending a network engineer on-site to conduct a packet capture is one option, but it’s extremely expensive. It’s possible, with the right setup, to use a nearby AP as a sensor to sniff wireless traffic between a client and an access point for a short time. This isn’t a common capability today, but I believe organizations need to start designing this into their networks.

As personal devices and IoT become more common in the workplace, wireless issues are only going to increase. If you can’t track performance across the entire end-to-end network, then you can’t truly ensure end-user performance. Therefore, having visibility into the wireless network is key to understanding hybrid networks and meeting service levels.
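As an illustration of the ad-hoc approach, the sketch below filters decoded 802.11 frame metadata down to the traffic between one client and one access point for a short diagnosis window. The frame records and MAC addresses are hypothetical:

```python
# Sketch: an ad-hoc filter over decoded 802.11 frame metadata, keeping
# only frames exchanged between one client and one AP. Frame records
# are hypothetical decoder output.

def client_ap_frames(frames, client_mac, ap_mac):
    """Return frames whose endpoints are exactly this client/AP pair."""
    pair = {client_mac.lower(), ap_mac.lower()}
    return [f for f in frames
            if {f["src"].lower(), f["dst"].lower()} == pair]

frames = [
    {"src": "AA:BB:CC:00:00:01", "dst": "DE:AD:BE:EF:00:01", "retry": True},
    {"src": "AA:BB:CC:00:00:02", "dst": "DE:AD:BE:EF:00:01", "retry": False},
]
matches = client_ap_frames(frames, "aa:bb:cc:00:00:01", "de:ad:be:ef:00:01")
print(len(matches), matches[0]["retry"])
```

A high proportion of frames with the retry flag set, for example, would point to RF interference or a weak signal between that client and AP.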

2) Go to the packet data when needed: There’s an “80/20 rule” in networking that says 80% of issues can be resolved using flow data. But for the 20% that can’t, organizations will need to dig into packet data, since these problems could have many different causes. For example, an end user complains that an application is running slowly. Maybe it’s the network, but the application could also be at fault. Perhaps it wasn’t perfectly designed, and it’s letting multiple users try to change an element of its database simultaneously, resulting in longer processing times. Without quick access to packet data, these difficult application issues can’t be resolved successfully.  

There are several free packet capture and analysis tools like Wireshark, Tcpdump, and Kismet, but larger organizations with complex networks may need to invest in a packet capture and analysis product that offers features like network mapping, customizable reports, and visualizations to speed up troubleshooting.

3) Supplement flow data with deep packet inspection: NetFlow and similar types of network telemetry all have limits. For example, when using NetFlow or IPFIX to troubleshoot VoIP calls, this data includes IP addresses, but not phone numbers. Customers calling to complain about VoIP will know their number, but probably not their IP address, so IT has no way to look up the flows they need to hunt down the problem! Network monitoring solutions that are integrated with deep packet inspection (DPI) provide the flexibility to “add” new data elements into flow data, such as the phone number of a VoIP call, and this can significantly reduce troubleshooting time. The TCP retry count is another useful data point that could be added to quickly identify network problems before they become obvious to end users. By adding selective data points to NetFlow, flow-based monitoring tools become much more useful for the new situations that hybrid networks create.
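The enrichment step can be sketched as a simple merge: a NetFlow-style record keyed by its 5-tuple picks up extra DPI-derived fields from a lookup table. Both the flow record and the DPI index below are hypothetical stand-ins for what an integrated monitoring product would populate:

```python
# Sketch: enriching a NetFlow-style record with DPI-derived fields,
# keyed by the flow's 5-tuple. All values are hypothetical.

def enrich_flow(flow, dpi_index):
    """Attach DPI-derived fields (e.g. a VoIP phone number) to a flow."""
    key = (flow["src"], flow["dst"], flow["proto"],
           flow["sport"], flow["dport"])
    return {**flow, **dpi_index.get(key, {})}

flow = {"src": "10.1.1.20", "dst": "10.2.2.5", "proto": "udp",
        "sport": 5060, "dport": 5060, "bytes": 84210}
dpi_index = {
    ("10.1.1.20", "10.2.2.5", "udp", 5060, 5060):
        {"phone_number": "5125550142", "codec": "G.711"},
}
print(enrich_flow(flow, dpi_index)["phone_number"])
```

With the phone number present in the flow record itself, the help-desk scenario above becomes a single query instead of a multi-tool hunt.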

4) Gather data to plan, verify and optimize SD-WAN rollouts: To ensure successful application performance during a transition to SD-WAN, enterprises need visibility into their existing network devices to determine the baseline of current application performance and decide which sites and application policies need to be developed. Planning should also include how the SD-WAN edge device(s) will interface to the existing infrastructure, especially in the case of a hybrid WAN, where some traffic will remain on the existing WAN infrastructure. Real-time visibility is also required into the new SD-WAN once it’s running to verify that it’s performing as expected. Although the SD-WAN itself can provide performance data, integrated flow/packet-based monitoring will provide more granular visibility into the complete, end-to-end application path, allowing network engineers to determine if a problem is in the SD-WAN, with the carrier or in another portion of the network. By monitoring the entire network through all three of these phases, IT can ensure a new SD-WAN project doesn’t negatively affect business-critical applications. 
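The baseline-then-verify step can be sketched as a percentile comparison: measure application latency before the rollout, then check the new SD-WAN path against it. The latency samples below are hypothetical:

```python
# Sketch: baselining application latency before an SD-WAN rollout and
# comparing the new path against it. Sample values are hypothetical.

def p95(samples):
    """Rough 95th-percentile via nearest-rank on sorted samples."""
    return sorted(samples)[int(0.95 * (len(samples) - 1))]

baseline_ms = [42, 45, 41, 44, 48, 43, 46, 44, 47, 45]  # pre-rollout
sdwan_ms    = [39, 41, 38, 40, 44, 39, 42, 40, 43, 41]  # post-rollout

print(f"baseline p95: {p95(baseline_ms)} ms, SD-WAN p95: {p95(sdwan_ms)} ms")
```

A regression against the baseline at this point would prompt a closer look at the SD-WAN edge, the carrier, or another segment of the end-to-end path, as described above.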

Troubleshooting on hybrid networks isn’t easy, but it’s essential for IT and NetOps to have these capabilities to support network transformation projects. With the techniques outlined above, IT will be well-positioned to respond to application issues quickly and effectively, no matter what fabric of the network they come from.


3 Imperatives for Network Management Success in the Hybrid World

Networks today are a mixed bag, composed of what can be a tangled mess of physical, virtualized, and cloud infrastructure. To stay competitive, businesses are pursuing digital transformation initiatives such as SD-WAN, Network Function Virtualization, and edge computing. While these technologies offer great benefits, they also add great complexity. The race for a competitive edge inevitably creates interoperability hurdles among IT systems. Businesses today must wade through wired and wireless networks that are multi-platform, multi-vendor, and multi-cloud – each with its own set of complexities. Performance issues inevitably arise, which can cause downtime and cost a business anywhere from tens of thousands to millions of dollars.

One major challenge faced by many network operations (NetOps) teams is the use of too many monitoring tools. The issue of monitoring tool sprawl is far worse than most realize. According to a bi-annual network management study from Enterprise Management Associates, nearly half of all networking pros are using between four and ten tools to monitor and troubleshoot their networks. And nearly one-third of IT teams are juggling 11 or more tools!

Today’s hybrid networks simply demand more. Organizations must anticipate, identify, troubleshoot and resolve a wide array of network issues. An important key to network management is comprehensive visibility, with advanced performance analytics, all through a single pane of glass.

Here are three imperatives for network visibility and management across hybrid networks:

The ability to collect various data sources across all network domains: Whether a team is conducting capacity planning, troubleshooting a critical performance issue, or analyzing an anomaly to achieve true end-to-end visibility across the entire network, teams need insight into a broad range of data sources. From Flow (IPFIX, NetFlow, sFlow, Cflowd, etc.) and SNMP, to packet data (full capture and analytics) and API integrations (REST, Bulk, Stream, etc.), each data source plays a unique and critical role in the overall process of managing the network. Without the ability to consume these different data sources, NetOps can be left with insufficient data that can hinder their ability to manage and troubleshoot the network.

The ability to visualize and interpret that data intuitively in order to take action: It’s not enough to simply have access to every network data type. NetOps teams need solutions that translate data into streamlined management and troubleshooting workflows. For instance, flow data from virtual, physical, and cloud devices is especially critical to managing and troubleshooting application performance. But if a network management platform doesn’t allow the team to visualize an application’s flow across the entire network – from source IP address to destination IP address – it will be difficult to preserve a positive end user experience. Packet-level data is critical for troubleshooting complex application issues like slow database performance. Visualizing the network path and reviewing the packet data creates performance visualizations that allow NetOps to resolve issues faster. Whether troubleshooting a VoIP issue or optimizing a new SD-WAN deployment, having granular visibility into all types of network data is imperative to comprehensive network management and control.
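The end-to-end view can be sketched as a minimal text rendering of an application's flow path from source to destination, of the kind a management platform would draw graphically. The hop names and addresses below are hypothetical:

```python
# Sketch: a minimal text rendering of an application's end-to-end flow
# path, source IP to destination IP. Hop data is hypothetical.

def render_path(flow):
    """Join source, intermediate hops, and destination into one line."""
    hops = [flow["src"]] + flow["path"] + [flow["dst"]]
    return " -> ".join(hops)

flow = {"src": "10.1.1.20",
        "path": ["core-sw1", "fw-edge", "cloud-gw"],
        "dst": "172.16.9.40"}
print(render_path(flow))
# -> 10.1.1.20 -> core-sw1 -> fw-edge -> cloud-gw -> 172.16.9.40
```

Annotating each hop with per-segment latency or loss is what turns this kind of view into a troubleshooting workflow.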

The ability to present top-level status updates and reports to executive stakeholders: What good is all this if NetOps can’t clearly communicate its value and progress to executives? Higher-ups typically only care about a few key reports and don’t want to be bogged down trying to decipher in-depth networking analytics. How are we doing on uptime? What’s the availability of a particular set of devices, circuits, or sites? What caused the minor downtime incident last week? How is the bottom line impacted? There’s a reason they call it an executive summary. If you can’t arm executives with this type of critical information, they won’t be able to make sound budgetary, personnel, or business decisions. Teams need management solutions that enable them to generate reports that convey easily digestible network performance metrics, SLA status, application conditions, and ultimately the merits of their work.

The complexity challenges presented by multi-vendor, multi-platform, and multi-cloud IT environments, coupled with the ever-present issue of tool sprawl, make managing today’s hybrid networks an uphill battle. NetOps teams need access to a wide range of network data sources, the ability to visualize that information coherently, and the means to act quickly. Effective reporting on business-critical metrics is equally imperative for successfully managing these complex modern network topologies.



Achieving QoS in a Hybrid Cloud Implementation

Quality of service, or QoS, is important when mixing real-time and bulk traffic. Add big data applications and the challenge grows. Let’s look at strategies that we can use to protect real-time traffic in a hybrid cloud environment where end-to-end QoS may not be possible.

I define a hybrid cloud as a combination of an enterprise on-premises cloud system and a remote, vendor-provided cloud system. The on-premises systems typically support either infrastructure or platform delivered in the as-a-service model, while the vendor systems could provide a variety of services (infrastructure, platform, data center, or software). In a hybrid cloud, applications might have components located on premises or externally. An application that has real-time communications requirements between sites should be prioritized over non-real-time traffic.

You may also have a software service, such as VoIP, that has real-time components. Somehow, you must connect your voice endpoints within the enterprise to the voice control system service. Call control services typically have less critical timing constraints than real-time streams going to conference calling services located in a cloud provider’s infrastructure.

No QoS over the Internet

QoS is normally used to prioritize different types of traffic, relative to each other. The process involves classifying traffic by marking packets with either a class-of-service (CoS) or Differentiated Services Code Point (DSCP) identifier. Once packets are marked, the network uses the embedded CoS/DSCP identifier to perform rate limiting and prioritization for forwarding. Time-sensitive packets get transmitted before less-time-sensitive packets. A QoS design typically has four, eight, or 12 different classes.
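A minimal sketch of that classification step is shown below: DSCP markings map to traffic classes, and queued packets are transmitted in class-priority order. The four-class mapping is a simplification (using the standard EF, AF41, and AF31 code points for voice, video, and business traffic) and the packet records are hypothetical:

```python
# Sketch: how DSCP markings translate into forwarding priority.
# Four-class design using standard code points: EF=46 (voice),
# AF41=34 (video), AF31=26 (business), 0 (best effort).

DSCP_CLASS = {46: "voice", 34: "video", 26: "business", 0: "best-effort"}
CLASS_PRIORITY = {"voice": 0, "video": 1, "business": 2, "best-effort": 3}

def transmit_order(packets):
    """Sort queued packets so the most time-sensitive class goes first."""
    return sorted(
        packets,
        key=lambda p: CLASS_PRIORITY[DSCP_CLASS.get(p["dscp"], "best-effort")],
    )

queue = [{"id": "a", "dscp": 0}, {"id": "b", "dscp": 46}, {"id": "c", "dscp": 34}]
print([p["id"] for p in transmit_order(queue)])
# -> ['b', 'c', 'a']
```

Real schedulers also apply rate limits per class so that high-priority traffic cannot starve everything else; strict priority alone, as sketched here, would.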

Read the rest of the article on NoJitter.



Now It’s Time for the Hybrid IT Worker

“Hybrid IT” is a well-worn term, referring to the blend of in-house and cloud-based IT resources that enterprises routinely use. But little has been said about the emerging need for hybrid IT workers who can man the trenches in central IT, or in end-user departments equally well—and must in all cases find ways to bridge the communication and political gaps between business users and IT.

The need to be a bridge builder between IT and end users couldn’t be greater:

  • Shadow IT is expanding.
  • Citizen development is growing.
  • Vendors are knocking at the doors of user departments that run their own mini IT shops.

Meanwhile, business users continue to view IT as unresponsive and insensitive, as evidenced by a physician acquaintance of mine who recently shared her frustration with IT after she called an IT help desk to assist her with a new system.

“I was told to just read the FAQ notes and that I should be able to figure it out myself,” she said. “The response made me feel insignificant and angry. I thought to myself, ‘So if you think you are so good, let’s see how you do if you’re called upon to make a cancer diagnosis,’ but I stopped short of doing that because it wasn’t a reaction I wanted to show.”

Maybe not. But there is a lot of user anxiety about IT response times and insensitivity to the business. This has been a major reason why shadow IT and citizen development have grown. 

These trends and frustrations are also causing some CIOs to rethink IT deployment.

Read the rest of this article on InformationWeek
