
3 Net Monitoring Metrics to Deal with Performance Degradation | IT Infrastructure Advice, Discussion, Community


Network performance monitoring using flow data (NetFlow) is an approach to isolating the root cause of performance issues related to network traffic by measuring a set of characteristics across the L2-L7 layers.

There are three basic metrics for pinpointing performance issues: round trip time, server response time, and jitter. Degradation in any of them can contribute to poor performance and downtime. Let's examine each one.

1. Round trip time

Also called network delay, round trip time (RTT) represents the time a packet takes to travel from client to server and back. It is a single value that models the performance of the network itself, calculated by observing the time needed to establish a TCP session. A typical value in an enterprise network at a single location is under 1 ms (even tens of microseconds on the local network). The application has no impact on the TCP handshake, as the handshake is handled by the TCP/IP stack in the operating system itself; it would take an operating system malfunction to skew this metric, which doesn't happen in practice.
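The article describes the principle without code, so here is a minimal sketch of deriving RTT from the TCP three-way handshake in a packet capture. The capture file name (traffic.pcap) is a placeholder and the scapy-based pairing is an assumption about how a probe might do it; flow exporters perform the equivalent pairing at much larger scale.

```python
# Minimal sketch: estimate RTT by pairing each TCP SYN with its SYN-ACK.
# Assumptions: capture taken near the client (so SYN-ACK minus SYN spans
# the full round trip) and a placeholder file name "traffic.pcap".
from scapy.all import rdpcap, IP, TCP

SYN, ACK = 0x02, 0x10
syn_times = {}  # (client, server, sport, dport) -> SYN timestamp

for pkt in rdpcap("traffic.pcap"):
    if IP not in pkt or TCP not in pkt:
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    flags = int(tcp.flags)
    if flags & SYN and not flags & ACK:        # client SYN: start the clock
        syn_times[(ip.src, ip.dst, tcp.sport, tcp.dport)] = float(pkt.time)
    elif flags & SYN and flags & ACK:          # server SYN-ACK: stop the clock
        key = (ip.dst, ip.src, tcp.dport, tcp.sport)   # reverse direction
        if key in syn_times:
            rtt_ms = (float(pkt.time) - syn_times.pop(key)) * 1000
            print(f"{key[0]} -> {key[1]}: RTT {rtt_ms:.2f} ms")
```

Here are some typical root causes of network delay.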

Overload of network devices: High packet rates fill the buffers in network devices, where packets must wait to be dispatched. QoS can help prioritise critical services to a certain extent, but a DDoS attack, for example, may lead to network congestion and increased RTT values.

Clients working from remote locations: Complaints about slow application responses may have nothing to do with the application itself. With an RTT of 500 ms when connecting from home through a VPN to a company data centre, merely transmitting a packet takes half a second, and any application will look slow from the user's perspective.

Cloud applications: To lower the delay, SaaS providers use CDNs and proxy servers to host applications as close to customers as possible. For the same reason, large companies purchase dedicated lines to connect their infrastructure directly to cloud providers.

Ethernet vs. Wi-Fi: In my practical experience, the usual latency difference between a wired Ethernet connection and Wi-Fi is around 10 ms. That is the average penalty for going wireless, and even that assumes ideal conditions.

Performance bottleneck caused by heterogeneous port speeds: Imagine a 10G backbone while servers are connected through 1G links, especially when multiple servers share a single 1G uplink. Numerous clients can easily generate traffic that spikes above the 1G port capacity, saturating switch buffers, which leads to packet drops. Dropped packets must be retransmitted, and consequently users experience network delay; the arithmetic is sketched below.
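To get a feel for the oversubscription arithmetic, here is a back-of-the-envelope sketch; the client count and per-client demand are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope sketch of uplink oversubscription. All figures are
# assumed for illustration: 40 clients averaging 50 Mbps each against a
# shared 1 Gbps server uplink.
uplink_gbps = 1.0
clients = 40
avg_demand_mbps = 50

aggregate_gbps = clients * avg_demand_mbps / 1000
ratio = aggregate_gbps / uplink_gbps
print(f"Aggregate demand: {aggregate_gbps:.1f} Gbps ({ratio:.1f}x the uplink)")
# Anything above 1.0x fills the switch buffers first, then forces drops
# and TCP retransmissions, which users perceive as network delay.
```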

2. Server response time

This metric represents the request processing time on the server side, and so captures the delay caused by the application itself. The measured server response time (SRT) is the difference between the predicted observation time of the server's ACK packet (predicted from the observation time of the client request plus the previously measured RTT) and the actual observation time of the server's response. The measurement can't rely on observing an ACK packet from the server, since the ACK may be merged with the server's response.
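A minimal sketch of that calculation follows; the function name and the example timestamps are hypothetical, and in practice the inputs would come from a probe's flow records.

```python
# Minimal sketch of the SRT calculation described above: the server's
# transport-level ACK is expected one RTT after the client's request, so
# anything observed beyond that point is application processing time.
def server_response_time(t_request: float, t_response: float, rtt: float) -> float:
    """All values in seconds; returns SRT in seconds."""
    predicted_ack = t_request + rtt   # when a bare ACK would be observed
    return t_response - predicted_ack

# Hypothetical example: request seen at t=10.000 s, response at t=10.180 s,
# previously measured RTT of 30 ms -> SRT of 150 ms.
print(f"SRT: {server_response_time(10.000, 10.180, 0.030) * 1000:.0f} ms")
```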

SRT enables performance measurement of the whole application, per application server, per client network range, or even per individual client. This makes it possible to correlate application performance with the number of clients or with a specific time of day. Used together with RTT, this metric answers the ultimate question: is it a network issue or an application issue?

3. Jitter – variance of delay between packets

Jitter reveals irregularities in packet flow by calculating the variance of the individual delays between packets. In an ideal case, the delay between individual packets is a constant value, which means that jitter is 0. In reality, a jitter of exactly 0 doesn't occur, as a variety of parameters influence the data stream. Why measure jitter at all? Jitter matters most for assessing the quality of real-time applications such as conference calls and video streaming. But even when downloading, say, a Linux distribution ISO file from a mirror, jitter may indicate an unstable network connection.
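One simple way to put a number on this is sketched below, using the spread of the inter-packet gaps; the timestamp lists are made-up examples, and production tools typically use smoothed estimators such as the interarrival jitter defined in RFC 3550.

```python
# Minimal sketch of jitter as the variation of inter-packet gaps.
# Arrival timestamps (seconds) are assumed inputs from a capture or probe.
from statistics import pstdev

def jitter_ms(arrival_times: list[float]) -> float:
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return pstdev(gaps) * 1000  # standard deviation of the gaps, in ms

# A perfectly paced 20 ms stream has zero jitter; a disturbed one does not.
steady = [0.000, 0.020, 0.040, 0.060, 0.080]
bursty = [0.000, 0.018, 0.047, 0.060, 0.095]
print(f"steady: {jitter_ms(steady):.2f} ms, bursty: {jitter_ms(bursty):.2f} ms")
```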

Summary

Continuous monitoring and baselining of network performance metrics using flow data helps network administrators identify issues in the network itself, in specific connections, or in applications. It's valuable to reveal problems before users do and to prevent complaints about performance degradation. Long-term monitoring of network performance metrics (RTT, SRT, jitter) can also help predict future needs (capacity planning) and incidents.
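As an illustration of what such baselining might look like, here is a minimal sketch that flags a new metric sample deviating from a rolling baseline; the window size and threshold are arbitrary assumptions.

```python
# Minimal baselining sketch: flag a sample of any metric (RTT, SRT, jitter)
# that exceeds the rolling mean plus k standard deviations. Window and k
# are assumptions; tune them to your sampling interval and tolerance.
from collections import deque
from statistics import mean, pstdev

class Baseline:
    def __init__(self, window: int = 288, k: float = 3.0):
        self.samples = deque(maxlen=window)  # e.g. one day of 5-minute samples
        self.k = k

    def check(self, value: float) -> bool:
        """Return True if value deviates from the learned baseline."""
        anomalous = (
            len(self.samples) == self.samples.maxlen
            and value > mean(self.samples) + self.k * pstdev(self.samples)
        )
        self.samples.append(value)
        return anomalous

rtt_baseline = Baseline()
# Feed it values as the flow collector exports them, e.g.:
# if rtt_baseline.check(rtt_ms): alert("RTT above baseline")
```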

Network performance monitoring metrics can considerably improve the performance of the network, as well as contribute to improvements on the application side.


Let’s Make a Deal: Negotiating with IT Vendors | IT Infrastructure Advice, Discussion, Community


If you work in an enterprise IT organization that's been around for more than a decade or two, it's a pretty good bet that you do business with one of the so-called megavendors: IBM, Microsoft, Oracle, and/or SAP. These established vendors have had deep roots in enterprise businesses for many years with ERP systems, databases, and more.

And if you think that your business is under pressure from market and industry disruptors, you should also realize that the same market forces are in play for these megavendors. The cloud has changed their licensing and business models, and as those models have shifted, their sales tactics have shifted too, with a focus on driving upsell revenue. These companies want to increase your spending year over year in the cloud with subscription licensing. They have a strategic product set that promotes the sale of other products, too.

Before you head into your contract negotiations with these megavendors, you need to prepare your own tactics and strategies. That’s according to Melanie Alexander, a director analyst at Gartner specializing in vendor contract negotiations. She provided some perspective on the best ways to prepare for your negotiations with these vendors during a session at the recent Gartner Data and Analytics Summit in Orlando, Florida.

“Their main purpose in life is for you to spend more money with them,” Alexander said. They will want to upsell you to use the full platform — to get you on the hardware and middleware and application stack, she said. They have a strategic product set and those products help them promote the sales of their other products. They want to get you into their cloud and lock you into subscription pricing. They want you to increase your spending with them year over year.

Read the rest of this article on InformationWeek.




HPE Inks Deal For SimpliVity


Hewlett-Packard Enterprise on Tuesday announced an agreement to buy hyperconverged startup SimpliVity for $650 million in cash to bolster its hybrid IT strategy.

Founded in 2009, SimpliVity was an early player in the fast-growing hyperconverged infrastructure market. The startup came out of stealth in 2012 with its OmniStack platform that combines compute, storage services, and network switching. The platform, which is composed of SimpliVity’s Data Virtualization Platform software and purpose-built Accelerator Card, includes data compression, deduplication, and built-in backup.

Gartner labeled SimpliVity a leader in hyperconvergence, along with Cisco, EMC, Nutanix, and NetApp, in its Magic Quadrant for Integrated Systems last fall. In addition to offering an OmniCube appliance, SimpliVity teams with Cisco, Dell, Huawei, and Lenovo to integrate OmniStack into their servers.

“This transaction expands HPE’s software-defined capability and fits squarely within our strategy to make hybrid IT simple for customers,” Meg Whitman, HPE president and CEO, said in a statement.


HPE said it will continue to offer its own hyperconverged products, the HC 380 and HC 250, for existing customers and partners. The company jumped into the hyperconverged market nearly a year ago with the HC 380. SimpliVity customers and partners shouldn't expect any immediate changes in the product roadmap, according to HPE, which said it will continue to support them.

Within 60 days of the deal closing — which HPE expects in the second quarter of its fiscal year 2017 — the company plans to offer SimpliVity’s software qualified for its ProLiant DL380 servers. By the second half, it expects to offer a range of integrated HPE SimpliVity systems on ProLiant servers.

Dan Conde, an analyst at Enterprise Strategy Group and Interop ITX Review Board member, told me in an email that SimpliVity provides HPE with better differentiation in the hyperconverged infrastructure market. HPE’s own products aren’t built from the ground-up for hyperconvergence to the same extent as SimpliVity’s, he said.

“I think they [HPE] wanted some ‘secret sauce’,” Conde said.

Technology Business Research recently estimated that the market for hyperconverged platforms will reach $7.2 billion by 2020.

SimpliVity’s OmniCube made its way to Hollywood last year, when it was disguised as the Pied Piper box in HBO’s “Silicon Valley” television show.



