
4 Visibility Requirements to Ensure App Performance Across Hybrid Nets


In a recent Sirkin Research survey of enterprise networking and IT professionals, 35% of respondents reported struggling with poor visibility into performance across all fabrics of the network. As network transformation initiatives like SD-WAN, SDN, and public/private clouds become more widespread, hybrid networks are quickly becoming a fact of life for IT and NetOps professionals. Without visibility into these networks, IT can't troubleshoot the business-critical applications that organizations rely on.

Monitoring hybrid networks can be challenging, but here are four techniques IT and NetOps can use to gain visibility into today's complex networks:

1) Ad-hoc wireless sniffing: In my opinion, monitoring all wireless traffic isn’t realistic for most organizations – it requires too many capture points spread throughout the wireless network. A better solution is to supplement flow data and packet data from wired network segments with ad-hoc wireless packet capture for issues that can’t be resolved based on the flow data alone. Sending a network engineer on-site to conduct a packet capture is one option, but it’s extremely expensive. It’s possible, with the right setup, to use a nearby AP as a sensor to sniff wireless traffic between a client and an access point for a short time. This isn’t a common capability today, but I believe organizations need to start designing this into their networks.

As personal devices and IoT become more common in the workplace, wireless issues are only going to increase. If you can't track performance across the entire end-to-end network, then you can't truly ensure end-user performance. That makes visibility into the wireless network key to understanding hybrid networks and meeting service levels.

2) Go to the packet data when needed: There’s an “80/20 rule” in networking that says 80% of issues can be resolved using flow data. But for the 20% that can’t, organizations will need to dig into packet data, since these problems could have many different causes. For example, an end user complains that an application is running slowly. Maybe it’s the network, but the application could also be at fault. Perhaps it wasn’t perfectly designed, and it’s letting multiple users try to change an element of its database simultaneously, resulting in longer processing times. Without quick access to packet data, these difficult application issues can’t be resolved successfully.  

There are several free packet capture and analysis tools like Wireshark, Tcpdump, and Kismet, but larger organizations with complex networks may need to invest in a packet capture and analysis product that offers features like network mapping, customizable reports, and visualizations to speed up troubleshooting.
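
Before any of those tools can help, the traffic has to reach them. As a minimal sketch on Cisco IOS, assuming a Catalyst-style switch (interface numbers are placeholders), a SPAN session can mirror a server port's traffic to the port where a Wireshark or tcpdump station is plugged in:

monitor session 1 source interface GigabitEthernet1/0/10 both
monitor session 1 destination interface GigabitEthernet1/0/24

The "both" keyword copies traffic in each direction; on a busy source port, keep in mind that the destination link can become oversubscribed.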

3) Supplement flow data with deep packet inspection: NetFlow and similar types of network telemetry all have limits. For example, when using NetFlow or IPFIX to troubleshoot VoIP calls, the data includes IP addresses, but not phone numbers. Customers calling to complain about VoIP quality will know their number, but probably not their IP address, so IT has no way to look up the flows they need to hunt down the problem! Network monitoring solutions that are integrated with deep packet inspection (DPI) provide the flexibility to "add" new data elements into flow data, such as the phone number of a VoIP call, and this can significantly reduce troubleshooting time. TCP retries are another useful data point that could be added to quickly identify network problems before they become obvious to end users. By adding selective data points to NetFlow, flow-based monitoring tools become much more useful for the new situations that hybrid networks create.
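
Vendor DPI integrations differ, but most network operating systems can export at least some enriched flow fields. As one hedged illustration, Cisco's Flexible NetFlow can export application names classified by NBAR2 alongside the standard flow keys; the record, exporter, and monitor names and the addresses below are placeholders, and exact commands vary by platform:

flow record APP-REC
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match application name
 collect counter bytes long
 collect counter packets long
!
flow exporter COLLECTOR
 destination 192.0.2.50
 transport udp 2055
!
flow monitor APP-MON
 record APP-REC
 exporter COLLECTOR
!
interface GigabitEthernet0/0/1
 ip flow monitor APP-MON input

Exporting richer elements such as VoIP phone numbers generally requires a DPI-capable monitoring product rather than the switch or router itself.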

4) Gather data to plan, verify, and optimize SD-WAN rollouts: To ensure successful application performance during a transition to SD-WAN, enterprises need visibility into their existing network devices to establish a baseline of current application performance and decide which sites to migrate and which application policies to develop. Planning should also cover how the SD-WAN edge device(s) will interface with the existing infrastructure, especially in the case of a hybrid WAN, where some traffic will remain on the existing WAN infrastructure. Once the new SD-WAN is running, real-time visibility is required to verify that it's performing as expected. Although the SD-WAN itself can provide performance data, integrated flow/packet-based monitoring will provide more granular visibility into the complete, end-to-end application path, allowing network engineers to determine whether a problem is in the SD-WAN, with the carrier, or in another portion of the network. By monitoring the entire network through all three of these phases, IT can ensure a new SD-WAN project doesn't negatively affect business-critical applications.

Troubleshooting on hybrid networks isn’t easy, but it’s essential for IT and NetOps to have these capabilities to support network transformation projects. With the techniques outlined above, IT will be well-positioned to respond to application issues quickly and effectively, no matter what fabric of the network they come from.




The Looming Skills Crisis in the Epicenter of Your Enterprise


We often hear about skills shortages in "hot" fields like security, cloud, or artificial intelligence: the roles that make flashy headlines. But there is another massive skills gap being largely overlooked that, if not addressed, could have extraordinary consequences for the success of businesses. That skills gap lies in the very heart of your enterprise: the data center.

Every digital transformation effort runs through the data center. Modern enterprises need a modern data center. But despite being the lifeblood of the business, the data center hasn’t evolved at the same pace as the rest of the enterprise. Technology alone won’t modernize the data center though – it takes people.   

According to a report from the Uptime Institute, many data center staff simply don't have the skills needed to modernize the data center. They lack experience in hybrid environments, software, and automation. Data center staff are also getting older, and businesses are struggling to fill open positions. Meanwhile, the people who do have those "newer" skills aren't joining data center teams. See above: they're probably being recruited to security, cloud, or AI teams!

This has left enterprises vulnerable in one of the most important technical functions in the business. To mitigate this skills gap, enterprises need a two-pronged approach: invest in automation and double down on training and retaining data center staff.

Automation is not a four-letter word

Embracing more automation in the enterprise may change jobs and roles, but it won't replace the need for IT staff. Rather, it will augment and assist humans. And ultimately, automation could be the thing that makes the data center "cool" again, because the job will no longer be about memorizing CLI commands or IP addresses, which feels old and archaic. Instead, automation takes the mundanity out of the equation, and the work becomes streamlining the provisioning and management of the data center. With automation, data center professionals could potentially run the data center from an app on their phone, or literally use their voice to tell Slack to provision a new server and alert them when it's done. Automation also removes the time-consuming bottleneck in the change control process that occurs whenever someone requests a new application or a change to something existing. These requests often turn into laborious processes involving multiple steps, documentation, and approvals, but automation can eliminate the manual work and shorten the time it takes to make the necessary change.

Most importantly, automation empowers data center professionals to be proactive and build skills by focusing on more strategic initiatives. It gives them the tools to transform what’s often seen as a cost center into a powerful asset that drives business outcomes. And beyond the satisfaction and day-to-day output of data center professionals, automation will allow organizations to be more agile and forward-looking.

Prioritize training and broadening skillsets

As much as automation will mitigate the skills gap in the data center, it's not a silver bullet. The success of digital transformation and data center modernization depends entirely on the strength and intellect of the people within the walls of these enterprises, which is exactly why organizations, large and small, need to up-level and broaden the range of training in the data center.

Training needs to focus on skills development for existing professionals – they need to learn new tools (e.g., software, automation, performance management, analytics) that enrich their knowledge and extend their capabilities across functions. Data center professionals don't need to become programmers (most won't and don't want to). But the vertical silos within the data center are shifting to a horizontal focus, with greater attention to how all the pieces tie together. Think of it as a college major in networking, with minors in software, servers, security, virtualization, and storage.

In addition to providing more in-depth training to existing staff, organizations should also aim to recruit IT professionals with specialized knowledge of software and automation. Those workers may not automatically consider data center jobs, but if businesses can create additional incentives, those skills could greatly augment current teams.

Solving the skills crisis requires both technology and people

Digital transformation is a blessing and a curse. As many doors as it has opened, it has created legitimate challenges for organizations bold enough to take these projects on. Today, one of the greatest factors limiting technology-driven initiatives is skills. A data center managed by teams with traditional skills will remain traditional, a legacy. A data center managed by teams with modern skills will become a more strategic asset, automating and empowering a modern business and providing a critical foundation for an autonomous enterprise.

Through a combination of smart use of automation and a focus on people, organizations can begin to address the skills shortage and drive their businesses into the future.





Best Practices, Tips, and Tricks to Switch Configuration


I’m working on a new network design for a remote location and thought I would share some of my best practices, tips and tricks.

In this article, I will assume the general design has been sorted out and move on to the configuration phase.

In some large companies, this step can be very simple: an IP address and password are configured, and after the switch is installed and powered on, the network staff can remote in and 'push' the final configuration to the switch. In this case, I do not have that option.

My checklist of items to configure will be based on the client design documentation. Here's a quick list of items to cover: DHCP, routing, VLANs, spanning tree, passwords, backups/upgrades, access lists, interface descriptions, time servers, authentication details, telnet/SSH, web interface, syslog, and SNMP. Let's look at these items in more detail.

DHCP

(Cisco configuration example)

If your switch supports it, I always enable DHCP for the installation, since the network connection to the production DHCP server may not be available. In some cases, I create a vendor VLAN with DHCP that only allows access to specific networks or devices. That way the vendor isn't always asking for a static IP when on site, or guessing and causing a duplicate IP address situation. I'm sure we've all seen people swap the host IP address and the default gateway.
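
A minimal sketch of what that vendor-VLAN scope might look like on a Cisco IOS switch (all names and addresses are placeholders):

ip dhcp excluded-address 192.168.50.1 192.168.50.10
!
ip dhcp pool VENDOR
 network 192.168.50.0 255.255.255.0
 default-router 192.168.50.1
 dns-server 192.168.50.2

Restricting what the vendor VLAN can actually reach would then be handled separately with an access list.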

Routing

If the switch has routing capabilities, it is important to configure the proper default gateway or whichever specific routing protocols need to be supported. Pay attention to scenarios where you may have two or more default routes, since every vendor treats this differently: some round robin between destination IP addresses, some treat it as a failover, and others load balance based on all sorts of options. In this case, the client specified a static route to a single destination; easy.
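
In Cisco IOS terms, that single static default route is one line (the next-hop address is a placeholder):

ip route 0.0.0.0 0.0.0.0 192.0.2.1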

VLANs

(Cisco configuration example)

Typically you will have two VLANs, admin and clients, or in some cases three: admin, clients, and VoIP. It is very important to figure out as much of this in advance for your IP subnetting design. In most cases, contiguous IP subnets are preferred. Don't forget to put descriptions on your VLAN interfaces, if your device supports it. Deciding on your VLAN tagging configuration also falls into this category.
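
A minimal Cisco IOS sketch of that three-VLAN layout, including an uplink trunk carrying the tagged VLANs (VLAN numbers, addresses, and interface names are placeholders):

vlan 10
 name ADMIN
vlan 20
 name CLIENTS
vlan 30
 name VOIP
!
interface Vlan10
 description Admin VLAN gateway
 ip address 10.0.10.1 255.255.255.0
!
interface GigabitEthernet1/0/48
 description Trunk to core
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30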

Spanning tree

Spanning tree, rapid spanning tree, or the many other names that cover this same protocol family is always significant. This also includes specific items such as BPDU blocking and manually configuring priority values. In some specific cases I have disabled spanning tree, but refer to your design document.
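
A hedged Cisco IOS example of the usual starting point: rapid per-VLAN spanning tree, an explicit priority, and BPDU guard on an edge port (VLAN and interface numbers are placeholders):

spanning-tree mode rapid-pvst
spanning-tree vlan 10,20,30 priority 4096
!
interface GigabitEthernet1/0/5
 spanning-tree portfast
 spanning-tree bpduguard enable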

Passwords

Figure out your password naming convention, how often passwords will change, and whether you must include any authentication servers like RADIUS or TACACS+. You should check your equipment manual to see if your device supports advanced features like incorrect-login lockouts, accounting, and alerts.
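
As a sketch, assuming a reasonably recent Cisco IOS image (server names, addresses, and secrets are placeholders), a local fallback account, TACACS+ authentication, and a lockout for repeated bad logins might look like:

username netadmin secret Str0ngLocalPass!
!
aaa new-model
aaa authentication login default group tacacs+ local
!
tacacs server ACS-1
 address ipv4 192.0.2.20
 key SharedSecret123
!
login block-for 300 attempts 3 within 60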

Backups/updates

I always keep the base configuration on the device and on a USB key while installing, in case I need to revert to the original configuration. You also need to consider how often you will back up device configurations. There are many options, from manually backing up configurations, to scripts, and finally applications that back up whenever changes are made. I have written quite a few scripts for clients that did not have a solution in place to perform a weekly backup. Don't forget about backing up your firmware, IOS, and equipment software. It is quite common to discover the device needs updates even though you just received it.
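
On Cisco IOS, a quick manual copy plus the built-in archive feature can cover both habits; a sketch, assuming a switch with a USB device named usbflash0: (paths are placeholders, and 10080 minutes equals a weekly backup):

copy running-config usbflash0:base-config.cfg
!
archive
 path usbflash0:config-backup
 write-memory
 time-period 10080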

Access lists or filters

This covers both device and protocol access. Device access is how you connect to the device through physical ports like Ethernet, serial, USB, and others. I am not a fan of leaving physical ports without passwords unless the client specifically requests it. If your device has various 'levels of access,' avoid using the same password for each. If you are going to create multiple user accounts, try to do it by job function or department, like WAN, Wi-Fi, voice, and others.

Then there are other forms of access like HTTP/HTTPS, telnet/SSH, APIs, and vendor-specific applications/protocols. Protocol access involves allowing access to specific protocols, IP addresses, or IP subnets. Depending on your product, this might cover items such as telnet, SNMP, RMON, NetFlow, HTTP/HTTPS, and others. During the installation, I believe it is critical to monitor new equipment and ensure all is well. In some cases we might enable SNMP for a while until the equipment is added to the corporate monitoring system.
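
For example, a hedged Cisco IOS sketch restricting management access to SSH from the admin subnet (the ACL number and addresses are placeholders, and an RSA key is assumed to already be generated for SSH):

access-list 10 permit 10.0.10.0 0.0.0.255
!
ip ssh version 2
!
line vty 0 4
 login local
 transport input ssh
 access-class 10 in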

Interface descriptions

(Cisco configuration example)

I can't stress enough how important descriptions are for ALL devices when possible. Devices such as switches, routers, and firewalls may be in secured locations or offsite, so knowing what is connected where speeds up troubleshooting. Do not rely solely on vendor discovery protocols, since they may not be compatible with all equipment and you never know which devices will send them out. In specific scenarios, I actually disable discovery protocols on untrusted or public ports or networks, since a lot of important information is sent out of all ports in clear text.
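
A short Cisco IOS sketch combining both habits, a meaningful port description plus CDP/LLDP disabled on an untrusted port (the interface and description text are placeholders):

interface GigabitEthernet1/0/2
 description CLIENT - 2nd floor, patch panel B14
 no cdp enable
 no lldp transmit
 no lldp receive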

Syslog, time servers, and SNMP

This also covers other monitoring protocols such as NetFlow, RMON, and more. The point here is to decide on the addresses and credentials of these services in your environment and to ensure the relevant protocols work before walking away.
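
A minimal Cisco IOS sketch wiring up all three (server addresses and the community string are placeholders; SNMPv3 would be preferable where supported):

service timestamps log datetime msec
logging host 10.0.10.50
ntp server 10.0.10.60
snmp-server community N0tPublic RO
snmp-server host 10.0.10.50 version 2c N0tPublic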

All these points should be confirmed and reviewed during support and configuration changes.




Network and Security: The Janus Effect


The cybersecurity industry has grown exponentially over the past decade, with an expectation that the global market will reach $300 billion by 2024. Yet as the industry protecting networks has grown, the industry attacking them has grown in parallel. Phishing campaigns are still gaining access to data at an unprecedented rate, costing American organizations up to half a billion dollars a year and showing no signs of slowing down. Data theft is rampant across the globe, and that information is being sold to the highest bidder, creating financial incentive and perpetuating the cycle of cybercrime. Yet where the corporate network is concerned, that is only half the story, with IT teams facing pressure to increase speed and productivity while adding the latest technologies to stay competitive in the marketplace.

Head in the clouds

Adding to the load is companies' voyage into a digital environment and their transition to the cloud. While migration to the cloud is more cost efficient, cybercriminals are coming along for the ride, with sensitive data as the ultimate prize. Using stolen credentials, cybercriminals are hacking cloud-based email services, starting with phishing attacks or taking advantage of configuration errors, and web applications are targeted to harvest credentials for access to cloud-based email accounts. Web applications, privilege misuse, and cyber-espionage represent 71 percent of breaches, according to the Verizon 2019 Data Breach Investigations Report.

Companies have failed to stem the growing velocity of data theft because, in many cases, the security and network teams have not been able to collaborate and communicate effectively. This challenge is compounded by the size of the organization: in large, complex, multivendor environments, network and security teams often operate separately and view each other as obstacles to getting their jobs done. It is this schism that leads to high-profile data thefts despite more people, more budget, and more deployed security tools.

Budgeting for the great divide

In fact, 89 percent of companies expect their IT budgets to grow or stay steady in 2019, according to the Spiceworks 2019 State of IT report. A significant portion of that is allocated to updating IT infrastructure. Yet the network, one of the greatest of IT investments, is also the most underutilized.

Network traffic alone accounts for 37 percent of data thefts, according to a McAfee report, with database leaks just ahead at 38 percent. Data exfiltration is one example: key employees are targeted with phishing emails, and someone makes that fateful click. With valid credentials in hand, bad actors gain a foothold, move laterally, and eventually find data worth stealing. They then begin to transfer that data out of the network using a 'low and slow' method. If this data transfer goes unnoticed over time, the bad actor succeeds in the theft. If the security team does not have access to network-related visibility, they will never see this data movement. The networking team may see the data movement, but since they are not tasked with catching data exfiltration, they are not looking for it.

A significant cause of these siloed environments is the separate allocation of budget across network and security teams. Both teams require visibility into the network to do their jobs, and each team uses its budget to purchase products to deliver that visibility. Most vendors build products that target either the network or the security team, which leads to duplicative technologies being purchased at twice the cost. This scenario exacerbates the problem of poor communication and lack of collaboration, leaving network and security teams stuck in a loop that puts the organization at even greater risk.

Given how much data is compromised via network traffic, without a platform to analyze that traffic, organizations will continue to be unaware of breaches and data theft, and their data will be further compromised.

Investing in a united view

The future of security is going to rely on improved collaboration between the network and security teams. The network can serve as the greatest source of truth for both operations and security teams if leveraged to its full potential, while also delivering significant savings in operational expenditures.

Network infrastructure can natively gather metadata about every business transaction that crosses it and export it to a central platform for collection, monitoring, and analysis. When NetOps and SecOps collaborate, they can extract the true value of a single, shared platform and discover important insights that lead to smarter decisions, enabling a more secure and efficient organization.
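
That native export capability is typically just flow telemetry switched on. As a minimal sketch, assuming an older Cisco IOS router exporting to a shared collector (the address, port, and interface are placeholders):

ip flow-export version 9
ip flow-export destination 192.0.2.100 2055
!
interface GigabitEthernet0/1
 ip flow ingress

Both teams can then analyze the same flow records from one platform rather than buying duplicate visibility tools.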




How to Hire Reliable Remote Tech Talent


If you’re ready to hire more tech talent at your company, the path to success may involve looking for remote workers. Advancements in technology help employees get things done anywhere with an internet connection.

Buffer’s State of Remote Work 2019 report was based on a survey of nearly 2,500 remote workers and found that tech roles are among the most common jobs they hold. More specifically, 38% of participants said they worked in software, and 18% fell into a category called IT and services.

Moreover, 69% of overall survey respondents said they work as remote employees, and 22% classified themselves as freelancers or self-employed. The information from the Buffer survey should reinforce that many tech professionals work remotely and that it’s not far-fetched to hire some of them at your company.

Hiring a team member you may never meet face to face undoubtedly has challenges. Here are some things you should do to increase favorable outcomes when interviewing remote workers or deciding to bring them on board.

Read the rest of this article on InformationWeek.


