
Why Settle for Just “OK” Network Operations?


More often than not, just “OK” is not an option. After all, OK expectations can only lead to OK outcomes. This is showcased in a recent popular advertising campaign from AT&T, which depicts scenarios where just OK is not acceptable, portraying an “OK surgeon,” an “OK babysitter,” and an “OK tattoo artist.” While the commercials are comical, they bring to light some of the very real and not-so-funny problems that many businesses, and specifically IT teams, are dealing with.

With the explosion of the internet of things (IoT) and advancements in automation, artificial intelligence (AI), software-defined networking (SDN), and DevOps, many IT professionals are realizing that the processes they once relied on to manage critical areas of the network have become just OK. And when it comes to network operations, just OK is not OK. Networks today are mission critical, often relied upon to keep the entire business up and running. In fact, according to Gartner, the average cost of network downtime is around $5,600 per minute – a massive expense for any organization, especially when you factor in the amount of time it typically takes a network team to troubleshoot an issue using OK, aka manual, methods.

As our IT environments continue to transform, our processes must as well. The role of the network engineer has already evolved to include far more responsibility than ever before, and many are currently struggling to juggle everything on their plates. As a result, there are several areas where IT teams have accepted an OK standard, but it's not too late to transform OK into genuinely effective and efficient operations.

An OK approach for complex dynamic networks

SDN is beginning to show real benefits to organizations implementing the technology to create efficient, centralized network management, roll out new applications and services with greater agility, enhance security, and reduce operational costs. On the flip side, however, SDN also brings new operational challenges, creating hybrid network environments where SDN architecture is merged with traditional data center and MPLS networks. These hybrid environments are incredibly complex, consisting of hundreds or thousands of components and undergoing constant change. As networks grow more complex and dynamic, they create significant visibility gaps for network teams.

Ideally, network engineers should be able to see SDN and non-SDN networks side by side so they can visualize the physical and logical interconnections and correlate the layers of abstraction at any moment. This visibility becomes especially critical during troubleshooting, when speed is of the essence. Remember, downtime can cost an organization $5,600 per minute, directly impacting the bottom line. Unfortunately, existing troubleshooting and mapping strategies like CLI and network diagramming are less effective in complex hybrid networks, forcing IT teams to race against the clock to identify an issue and increasing MTTR (mean time to repair). End-to-end visibility across hybrid networks is essential for identifying and mitigating potential issues quickly. Without it, existing processes are just OK.
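To put that per-minute figure in context, here is a back-of-the-envelope sketch of annual downtime cost; the incident count and MTTR values are illustrative assumptions, not figures from the article:

```python
# A back-of-the-envelope downtime cost model using the Gartner average
# cited above. Incident count and MTTR values are illustrative assumptions.
COST_PER_MINUTE = 5_600  # USD, Gartner's average cost of network downtime

def annual_downtime_cost(mttr_minutes: float, incidents_per_year: int) -> float:
    """Estimate yearly downtime cost from MTTR and incident frequency."""
    return mttr_minutes * incidents_per_year * COST_PER_MINUTE

# Example: 12 outages a year at 45 minutes each with manual troubleshooting
print(f"${annual_downtime_cost(45, 12):,.0f}")    # $3,024,000
# Halving MTTR (e.g., through automation) halves the exposure:
print(f"${annual_downtime_cost(22.5, 12):,.0f}")  # $1,512,000
```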

Automation takes things up a notch, far beyond just OK, allowing teams to view both traditional and application-centric infrastructure as well as data integration with the SDN console in a single view. This enables enterprises to acclimate to an application-centric infrastructure and understand how application dependencies map to the underlying fabric. In hybrid environments, where abstraction can lead to a cloudy view of the network, automated processes and the right data integration can give engineers the dynamic visibility they need.

OK collaboration between network and application teams

As networks become more software-defined and application-centric, the line between the application and network teams starts to blur. The two often spend time blaming each other for an issue and rarely take a collaborative approach to troubleshooting. As long as applications depend on the network to function and companies depend on applications to conduct business, the blame game over slow performance, downtime, and the like will continue, that is, if just-OK network processes are in place.

Not only is there tension between application and network teams, but there's also a big knowledge and skills gap between the two, which brings new challenges as network projects start crossing over into application territory and vice versa. This is where automation and visibility come into play. Automation can help network engineers apply existing knowledge to these new environments and allows IT teams to share their critical knowledge effectively, whether that be design information, troubleshooting steps, or network change history. By providing a common visibility framework during troubleshooting and security investigations, and by enabling teams to codify and share best practices, automation transforms OK IT communications living in silos into effective collaboration for better results.

As organizations continue to invest in the latest technology and, as a result, networks continue to grow in size and complexity, it's become clear that automation is no longer a luxury; it's a necessity. Traditional methods of network management simply don't cut it in the hybrid environments of today. Stop settling for OK outcomes from your IT operations when automation can ensure the network is performing at its best.




Mobile Video and 5G – A 2020 Vision


The quote from the Super Bowl-winning leader of numerous NFL teams holds true for video use over 5G networks to date. U.S. operators are busy this year deploying networks that enable super-fast wireless technology with the potential to change the way businesses and consumers use mobile devices.

Many see 2020 as a primetime opportunity for the delivery of video over 5G to mobile devices, thanks to huge, far-flung viewing events including the U.S. presidential election and the 2020 Summer Olympics in Tokyo, for starters.

Envision smartphone owners folding open their devices to create an iPad-sized screen to view any of a myriad of streaming video content sources, and staying engaged longer in the absence of the delays in streaming TV shows and movies that are commonplace with sub-5G wireless links today.

Potential Use Cases

Global carrier networking provider Ericsson has defined five key industries that could benefit from 5G usage. They are TV and media; manufacturing; healthcare; telecommunications; and transportation/infrastructure.

The TV and media industry is loaded with those looking for ways to get content to the wireless masses. At the Consumer Electronics Show in January, entertainment conglomerate Disney, whose family includes ABC, ESPN, Fox Entertainment, Lucasfilm, Marvel, and Pixar, announced a project to explore 5G media opportunities with Verizon.

Disney is testing 5G applications through its new StudioLab facility.

“We see 5G changing everything about how media is produced and consumed,” Disney Studios CTO Jamie Voris was quoted as saying.

Reality Check

At this early stage in the evolution of 5G wireless, plenty of IT executives are interested skeptics. There's already a mobile video ecosystem. Adding 5G technology to enable a new one sounds great, but some warn that unless the result adds value to mobile video, it'll be a tough sell. Or worse, it will be a painful lesson like the one learned with 3D TV. Also high on the must-have list is a high-quality viewing experience. Minus those two items, interested skeptics will be sideline sitters, not early implementers.

Think of the cycle as a game of leapfrog. As data delivery improves (it should, with super-fast 5G services), hardware needs to get better. And if 5G is everything it's purported to be, it makes sense for sales of phones with foldable screens to start picking up. This would provide a larger viewing area for rich content.

Business case: When data delivery improves, operators and enterprises will need to construct a business case for 5G uses. Key questions include: How will operators price super-fast 5G services? Will there be tiered and unlimited use plans, and what will they cost? What about affordability?

Content: There’s plenty of content from the media and entertainment industries that could be live streamed to mobile 5G devices, with sports matches at the top of the list. On-demand content such as movies and TV shows could also find their way to mobile devices with super-fast 5G wireless connections.

4K Support: 4K is an increasingly popular format that offers a richer, more immersive viewing experience than HD content; it specifies four times as many pixels. 4K TVs have been available for years, with prices continuing to fall. Delivering native 4K content to these TVs and to mobile devices requires a recommended 12 Mbit/sec to 25 Mbit/sec of bandwidth. That's achievable for many homes with wired links (though it still leaves many out), but a deterrent for mobile devices on current wireless connections. (The pixel and bandwidth arithmetic is checked in the sketch after this list.)

5G Devices: Bigger is often better here, as viewing content on many smartphones is problematic thanks to smallish screens, which limits average viewing times for mobile video. This could change if we see more, and cheaper, foldable smartphones like the Samsung Galaxy Fold, with its 7.3-inch screen, due out in April. The units carry a price tag of roughly $2,000, which will hamper sales, according to a recent report by Juniper Research. Competition could drive the price downward. For comparison, the latest iPad has a 9.7-inch screen.

OTT services: Internet-streamed TV services stand to gain even more ground on traditional wired cable TV offerings, which is great news for current and prospective cord cutters. Why? Subscription TV services such as Netflix, Amazon, Hulu, Sling and DirecTV Now all want to expand their mobile video business, which has been limited by available wireless bandwidth.

Mobile gaming: Look for 5G service to forever change gaming by providing the super-fast speeds needed to crush the latency and delay still experienced in online gaming at home. 5G should also be a game changer by expanding use to those with mobile devices who, because of speed constraints on current wireless networks, waited until they got home to get in the game. Expect gamers to continue to prefer big-screen TV monitors, but also to embrace the mobile option when away from home.
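Circling back to the 4K Support item above: the pixel arithmetic and bandwidth threshold are easy to check. A minimal sketch, where the link speeds in the usage examples are illustrative assumptions rather than measured figures:

```python
# Verify the "four times as many pixels" claim and the bandwidth threshold.
UHD_4K = (3840, 2160)    # consumer "4K" (UHD) resolution
FULL_HD = (1920, 1080)   # full HD (1080p)

pixels_4k = UHD_4K[0] * UHD_4K[1]      # 8,294,400 pixels
pixels_hd = FULL_HD[0] * FULL_HD[1]    # 2,073,600 pixels
print(pixels_4k / pixels_hd)           # 4.0

# Can a given link carry native 4K, per the 12-25 Mbit/sec recommendation?
def can_stream_4k(link_mbps: float, recommended_mbps: float = 25.0) -> bool:
    return link_mbps >= recommended_mbps

print(can_stream_4k(8.0))    # a congested pre-5G cell (assumed figure): False
print(can_stream_4k(100.0))  # a modest 5G link (assumed figure): True
```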

The Road Ahead

If larger-screen mobile devices using content delivered over 5G networks provide a high-quality viewing experience for their owners and IT departments, the case for mobile video over these super-fast networks will be close to fulfilling the potential of the underlying cellular technology. That's doing something.


Cloud Storage and Policies: How Can You Find Your Way?


Cloud storage is one of the hottest topics today. Rightfully so: new services seem to be added daily. Storage makes up one of the most attractive categories of cloud services, so it is only natural to find business problems for it to solve.

The reality is that storage in the cloud is a whole new discipline. Completely different. Forget everything you know and start from the beginning. Both Amazon Web Services and Microsoft Azure offer many different storage services. Some are like what we have used on-premises, such as Azure File Storage and AWS Elastic Block Store; these resemble traditional file shares and block storage, yet how they are used can make a very big difference in your experience in the cloud. There are more storage services in the cloud (such as object storage, gateways, and more) that differ from what has traditionally been used on-premises, and that is where it gets interesting.

Let’s first identify why organizations want to leverage the cloud for storage. This may seem a needless step, but it is more critical than ever: the why is very important. The fundamental reason should be that the cloud is the right platform for the storage need. Supporting reasons will also include cloud benefits such as these:

No upfront purchase: This is different from the on-premises practice of purchasing for future capacity needs (best guesses, overspending, and badly missed targets are common with that practice!).

Effectively unlimited capacity: Ask any mathematician and they will quickly point out that the cloud is not truly unlimited, but from most customers' perspective the cloud will provide effectively unlimited storage options.

Predictable pricing: While not exactly linear, it is pretty clear what consumption pricing will be with cloud storage.

These are some of the good reasons to embrace cloud storage, but beyond the reasons to go to the cloud, the strong advice is to look at storage policies and usage so there are no surprises in the future. Some of this includes looking at the economics across the complete scope of use. Too often, pricing is seen simply as consumption per month. Take AWS S3, for example: S3 Standard storage prices the first 50 TB per month at $0.023 per GB (pricing as of March 2019, US East (Ohio) region). But other aspects of using the storage should absolutely be considered, for example the following (a rough cost sketch follows the list):

Getting data in: Moving data into the cloud is often overlooked, but it carries a cost as well, which makes how data is written to the cloud important. Is data sent in small increments (more write operations, or PUT requests) or in relatively fewer, larger increments? This can change the cost profile.

Egress: Data read from a cloud storage location also has a cost. One practical cost control is to leverage solutions that retrieve just the right pieces from cloud storage rather than entire datasets.

Deleting data: Interesting to think about, not for cost per se, but deleting data should be considered. Data in the cloud will live as long as you pay for it, so give thought to ensuring no dead data is living in the cloud.
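Putting those pieces together, a fuller monthly estimate looks something like the sketch below. The storage rate is the March 2019 figure cited above; the per-request and egress rates are illustrative assumptions, so check current AWS pricing before relying on them:

```python
# Rough monthly S3 cost estimate covering storage, writes, and egress,
# not just the headline per-GB storage price.
STORAGE_PER_GB = 0.023   # S3 Standard, first 50 TB/month (figure cited above)
PUT_PER_1000 = 0.005     # assumed per-1,000 PUT request rate (illustrative)
EGRESS_PER_GB = 0.09     # assumed internet data-transfer-out rate (illustrative)

def monthly_cost(stored_gb: float, put_requests: int, egress_gb: float):
    storage = stored_gb * STORAGE_PER_GB
    writes = (put_requests / 1000) * PUT_PER_1000
    egress = egress_gb * EGRESS_PER_GB
    return storage, writes, egress

# Example: 10 TB stored, written as many small objects, 1 TB read back out.
s, w, e = monthly_cost(10_000, 5_000_000, 1_000)
print(f"storage ${s:,.2f} + writes ${w:,.2f} + egress ${e:,.2f}")
# storage $230.00 + writes $25.00 + egress $90.00 -- under these assumptions
# the "hidden" items add half again on top of the headline storage price.
```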

But what can organizations do to manage cloud storage from a policy perspective? In a way, some of the same practices as before can be applied, but also leverage frameworks from the cloud platforms to help manage usage and consumption. AWS Organizations is a good example, providing policy-based management of multiple AWS accounts; it streamlines account management, billing, and control of cloud services. Similar capabilities exist in Azure with Subscription and Service Management along with Azure RBAC.
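Policy can also be pushed down to the storage itself. As one hedged example of the “no dead data” idea above, this sketch applies an S3 lifecycle expiration rule via boto3; the bucket name, prefix, and 365-day retention period are hypothetical placeholders:

```python
# Encode a "no dead data" policy as an S3 lifecycle rule that expires
# objects automatically. Bucket, prefix, and retention are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-stale-exports",
                "Filter": {"Prefix": "exports/"},  # scope to one prefix
                "Status": "Enabled",
                "Expiration": {"Days": 365},       # delete after a year
            }
        ]
    },
)
```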

Between taking a responsible look at new cloud services, informed by what we have learned in the past, and leveraging the new frameworks available in the cloud, organizations can easily and confidently embrace cloud storage services, not only solving the right-platform question but also managing storage in a way that lets CIOs and decision makers sleep at night.




8 Challenges DevOps Faces Today


If you’re an enterprise IT leader, chances are good that you’ve at least heard of DevOps. In fact, your organization has likely experimented with at least some DevOps techniques. In the 2018 Interop State of DevOps Report, only 3% of the 150 business technology decision makers surveyed said they weren’t at all familiar with the approach, while a full 84% said that they were “familiar” or “very familiar” with key DevOps concepts. In addition, two-thirds of those who took part in the study said they had either implemented DevOps in their organizations or planned to do so within twelve months. Only 9% said their organization had no DevOps plans.

Now, a new Harvard Business Review Report sponsored by Google Cloud finds that while organizations implementing DevOps have experienced a lot of benefits, these teams continue to face significant challenges. The survey asked 654 HBR readers about their companies’ experiences with DevOps. The majority of respondents (89%) came from enterprises with 1,000 or more employees, and they represented a range of industries and different parts of the world.

On the plus side, the majority of respondents said that DevOps was having a positive impact on their speed to market (70%), productivity (67%), customer relevance (67%), innovation (66%), product or service quality (64%), employee satisfaction (57%) and costs (54%).

However, the report also surfaced some frustrations that IT leaders are experiencing as they progress with their DevOps adoption. Companies aren’t always realizing the benefits that they expected to see after adopting the approach, and some companies are far more successful with the approach than others. Some of the issues plaguing DevOps teams are the same sorts of cultural obstacles that DevOps has always faced, but others are new problems that are only emerging as the approach becomes more widespread.

Read the rest of this article on InformationWeek.




Four Tips to Worsen Your Network Security


If you want to keep your network infrastructure secure, you need to monitor what's going on with routers, switches, and other network devices. Such visibility enables you to quickly detect and investigate threats to perimeter security, such as unauthorized configuration changes, suspicious logon attempts, and scanning threats. For example, improper changes to network device configurations leave your network open to hackers. If you want to strengthen your network security, never follow these four tips.

Tip # 1: Don’t care about unauthorized logons

Most attempts to log on to a network device are valid actions by network administrators, but some are not. An inability to promptly detect suspicious logon attempts leaves your organization vulnerable to attackers. Unusual events include access by an admin outside of business hours or during holidays, failed logon attempts, and the modification of access rights. An immediate alert about suspicious events enables IT personnel to take action before security is compromised. This practice is also helpful for compliance audits, as it provides evidence that privileged users and their activities on your devices are closely watched (e.g., who is logging in and how often).
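As a minimal sketch of that idea, the snippet below flags device logons that failed or that occurred outside business hours. The event format, business-hours window, and sample data are assumptions for illustration:

```python
# Flag device logons that are failures or fall outside business hours.
# Event shape and the hours window are illustrative assumptions.
from datetime import datetime

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time; adjust to your org

def is_suspicious(event: dict) -> bool:
    ts = datetime.fromisoformat(event["timestamp"])
    off_hours = ts.hour not in BUSINESS_HOURS or ts.weekday() >= 5  # weekend
    return off_hours or event.get("result") == "failure"

event = {"timestamp": "2019-04-14T02:13:00", "user": "admin", "result": "success"}
if is_suspicious(event):  # 02:13 on a Sunday: fires
    print(f"ALERT: off-hours logon by {event['user']}")  # feed into alerting
```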

Tip # 2: Configure your devices at random

The key threat associated with network devices is improper configuration. A single incorrect change can weaken your perimeter security, raise concerns during regulatory audits, and even cause costly system outages that bring your business down. For example, a firewall misconfiguration can give attackers easy access to your network, which could lead to lasting damage. Visibility into who changed what gives you insight into and control over your network devices. Continuous auditing improves user accountability and helps you detect potential security incidents before they cause real trouble.
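One simple way to get “who changed what” visibility is to diff each device's running configuration against its last approved baseline and alert on drift. A minimal sketch, with hypothetical file paths:

```python
# Compare a device's running config against an approved baseline.
# File paths and device name are hypothetical placeholders.
import difflib
from pathlib import Path

baseline = Path("configs/core-sw1.baseline.cfg").read_text().splitlines()
running = Path("configs/core-sw1.running.cfg").read_text().splitlines()

drift = list(difflib.unified_diff(baseline, running,
                                  fromfile="baseline", tofile="running",
                                  lineterm=""))
if drift:
    print("ALERT: unapproved config drift on core-sw1")
    print("\n".join(drift[:20]))  # first hunk is usually enough for triage
```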

Tip # 3: Ignore scanning threats

Hackers often use network scanning to learn about a network's structure and behavior before executing an attack. If you don't monitor your network devices for scanning threats, you might miss malicious activity until your sensitive data is compromised. To strengthen your protection against scanning threats and minimize the risk of data breaches, ensure continuous monitoring of network devices. Such visibility enables you to understand which hosts and subnets were scanned, which IP address the scan was initiated from, and how many scanning attempts were made.
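A classic scan heuristic is a single source address touching an unusually large number of distinct destination hosts and ports in a short window. The sketch below illustrates the idea; the flow-record shape and the threshold are assumptions:

```python
# Flag sources that probe many distinct (host, port) targets -- a simple
# scan heuristic. Flow-record format and threshold are illustrative.
from collections import defaultdict

SCAN_TARGET_THRESHOLD = 20  # distinct (host, port) targets before flagging

def find_scanners(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    targets_by_src = defaultdict(set)
    for src, dst, port in flows:
        targets_by_src[src].add((dst, port))
    return {src for src, seen in targets_by_src.items()
            if len(seen) >= SCAN_TARGET_THRESHOLD}

flows = [("10.0.0.9", "10.0.1.5", p) for p in range(25)]  # one host, 25 ports
print(find_scanners(flows))  # {'10.0.0.9'}
```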

Tip # 4: Ease control of VPN logons

Virtual private network (VPN) access is a popular way to improve the security of remote connections, but it carries its own security risks. In reality, VPN connections are often used by anyone in the organization without any approvals. Best practice recommends providing access to network resources via VPN only after proper approvals and only to users who need the access for their business role. However, practice shows there is no 100 percent secure VPN, and any VPN connection is a risk. The major risk scenarios include a user connecting via public Wi-Fi (since someone might steal their credentials) or a user who doesn't usually work with VPN suddenly beginning to use it (which can be a sign that the user has lost their device and someone else is trying to log in with it). Visibility into network devices enables you to keep track of each VPN logon attempt, including who tried to access your network devices, the IP address each authentication attempt was made from, and the cause of each failed VPN logon.
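The “user who never touches VPN suddenly logs on” signal is easy to prototype. A minimal sketch, where the known-user set (assumed to be built from historical VPN logs) and the sample events are illustrative:

```python
# Flag VPN logons by users with no prior VPN history -- a possible sign of a
# lost device or stolen credentials. User history and events are assumptions.
known_vpn_users = {"alice", "bob"}  # assumed: built from historical VPN logs

def check_vpn_logon(user: str, src_ip: str) -> None:
    if user not in known_vpn_users:
        # First-ever VPN logon for this account: verify before trusting it.
        print(f"ALERT: first VPN logon for {user!r} from {src_ip}")
    known_vpn_users.add(user)  # remember the user for future checks

check_vpn_logon("carol", "203.0.113.7")   # fires: carol has no VPN history
check_vpn_logon("alice", "198.51.100.4")  # silent: alice uses VPN regularly
```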


