Tag Archives: Infrastructure

Interop ITX Spotlights IT Infrastructure Evolution


As enterprises look to leverage technology for new products and services that give them an edge over the competition, IT infrastructure is changing faster than ever. Legacy architectures are giving way to software-defined technologies, cloud, open source and automation as IT organizations adapt to support these enterprise digital transformation initiatives and focus on speed and new capabilities.

With all these changes, infrastructure pros are under intense pressure. How do you keep up with all the emerging trends? How do you evaluate new technologies and figure out which ones might be right for your business? At the same time, you still need to maintain existing infrastructure, so efficiency is critical.

At Interop ITX, infrastructure pros can get a wealth of education on all the hot technologies and emerging IT trends in just five days. The conference features more than two dozen full- and half-day workshops, summits, and hour-long sessions focused on infrastructure, including networking, containers, automation, and hyperconvergence.

Attendees can get up to speed on software-defined networking, software-defined storage, next-generation WANs, and wireless networking design. For those who want some direct experience with new technologies, workshops on network automation and open source are some of the sessions that include hands-on instruction.

At the same time, attendees can also get practical tips for managing existing infrastructure with sessions on network troubleshooting, wireless security, and disaster recovery.

Leading these workshops and sessions are some of the brightest minds in the IT community. These infrastructure experts, such as Greg Ferro and Ethan Banks, are among the most respected in the industry for their deep knowledge in their respective domains. The speaker roster at Interop ITX includes analysts and consultants as well as practitioners from Mastercard and Shutterstock, who will provide first-hand accounts of how they’re transforming their infrastructure.

Here’s a sample of what you can look forward to in the Interop ITX infrastructure track:

Packet Pushers Future of Networking Summit – Greg Ferro and Ethan Banks of Packet Pushers will reprise their popular two-day summit, which will look at the technologies and trends impacting networking in the next five to ten years. This year’s summit will cover automation and orchestration, visibility and analytics, cloud networking, and next-gen WAN.

Container Crash Course – Containers are one of the hottest technologies in IT today and a hot topic at this year’s Interop. This all-day event is designed to equip attendees with core knowledge of containers and an understanding of how the technology can apply to their business. The summit features a panel of experts in container technologies, microservices, and DevOps from companies such as Docker, Red Hat, and Amazon Web Services.

Later in the week, Stephen Foskett, organizer of the popular Tech Field Day events, will present “The Case for Containers: What, When and Why?” and Brian Gracely, director of product strategy at Red Hat and well-known cloud expert, will present “Managing Containers in Production: What You Need to Think About.”

“Hands-on Practical Network Automation” – Interop ITX attendees have a couple of opportunities to learn about network automation. This half-day workshop will cover how to get started with network automation and includes an introduction to Python. Two of the workshop’s speakers — Jere Julian, extensibility engineer at Arista Networks, and Scott Lowe, engineering architect at VMware — recently wrote a Network Computing blog outlining the benefits of automation. Twin Bridges founder Kirk Byers, who teaches Python to network pros, and Matt Oswalt, software engineer at StackStorm, are co-presenters.
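The workshop itself covers the details, but to give a sense of what Python-based network automation looks like, here is a minimal, self-contained sketch of config generation — rendering device configuration from structured data instead of typing it by hand. The VLAN IDs and names are hypothetical examples, not workshop material:

```python
# Minimal sketch of config generation, a common first step in network
# automation: render configuration stanzas from structured data rather
# than hand-typing them. VLAN IDs and names below are illustrative only.

VLAN_TEMPLATE = "vlan {vlan_id}\n name {name}\n"

def render_vlan_config(vlans):
    """Render VLAN configuration stanzas from a list of dicts."""
    return "".join(
        VLAN_TEMPLATE.format(vlan_id=v["id"], name=v["name"])
        for v in vlans
    )

vlans = [
    {"id": 10, "name": "users"},
    {"id": 20, "name": "voice"},
]

config = render_vlan_config(vlans)
print(config)
```

From here, a tool such as a Python SSH library can push the rendered config to devices; the win is that the source of truth becomes the data structure, not the device CLI.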


Oswalt also is scheduled to present “Fundamental Principles of Automation,” which is designed to help IT pros understand automation basics.

Cloud expert Lori MacVittie, principal technical evangelist at F5 Networks, will discuss IT automation more broadly in her session, “Operationalizing IT with Automation and APIs.”

“SDN: What Is It Good For?” — This session will feature a panel of infrastructure experts, including Robin Perez, deputy director of infrastructure for the City of New York, and Thomas Edwards, VP of engineering and development at FOX, who will provide first-hand accounts of how their organizations have implemented SDN. The panel is designed to provide practical guidance for defining and scoping an SDN project. Lisa Caywood, director of ecosystem development at the Linux Foundation’s OpenDaylight Project, is the panel moderator.

“Wireless Network Design That Scales to Business Demands” — This session is a recent addition to the Interop ITX schedule featuring top-rated Interop speaker George Stefanick, wireless network architect at Houston Methodist Hospital. His session at last year’s Interop on wireless network design, which covered site surveys and issues like co-channel interference, received high marks from attendees.

“The Killer Troubleshooting Toolset Revisited” – Networking pros can spend a lot of time trying to track down the root cause of network and application performance issues. In this half-day workshop, Mike Pennacchi, owner and lead network analyst for Network Protocol Specialists, will cover a number of powerful network troubleshooting tools that help streamline the process. Pennacchi is a longtime Interop instructor whose sessions consistently receive high marks.

“Converged and Hyperconverged Infrastructure: Myths and Truths” — The buzz around converged/hyperconverged infrastructure is inescapable. Interop ITX features a couple of sessions to help cut through the hype. This session, presented by TBR senior analyst Krista Macomber, will cover the pros and cons and adoption trends, and offer recommendations for enterprises considering the technology. Another session, “Things To Know Before You (Hyper) Converge Your Infrastructure,” will cover key considerations and evaluation criteria. Enterprise Strategy Group analysts Dan Conde and Jack Poller are the presenters.

“Building the Infrastructure Future at Mastercard” – Len Sanker, senior VP of enterprise architecture and data engineering at Mastercard, will discuss challenges in aligning technology capabilities with business goals. Another practitioner, Shutterstock CIO David Giambruno, will share best practices and lessons learned while leading a major data center transformation in “Building a Next-Generation API-Driven Infrastructure for Scaling Growth.”




15 Infrastructure Experts to See at Interop ITX


Each year, Interop brings together some of the best and brightest minds in the IT community to share their expertise, and this year is no different. While the name has changed somewhat – it’s now Interop ITX – this year’s conference will provide the same high-caliber roster of speakers. These technology experts, some of the most respected in the industry, will provide in-depth sessions and workshops across six tracks: infrastructure, data and analytics, cloud, security, DevOps, and leadership/professional development.

Since our focus here at Network Computing is infrastructure, we thought we’d put the spotlight on some of the infrastructure pros you can expect to see at Interop ITX May 15-19 in Las Vegas. These experienced and innovative IT practitioners and analysts will speak on topics like network automation, wireless networking, storage, hyperconvergence, and containers.

You’re probably familiar with many of these IT experts – they’re some of the more well-known names in infrastructure and some, such as Ethan Banks and Greg Ferro of Packet Pushers, are top-rated speakers at past Interop conferences. This year’s event also features some names you may not be as familiar with, but who have deep knowledge in their domains, like Shawn Zandi, a principal network architect at LinkedIn.

Interop ITX prides itself on its independence, so you can expect these experts to provide objective insight on issues that are critical in today’s fast-changing IT environment.

The following pages are a sample of the infrastructure experts scheduled to speak at Interop ITX. Check out the Interop ITX schedule to see the full roster of infrastructure speakers, as well as presenters in the other tracks.




Converged Vs. Hyperconverged Infrastructure: What’s The Difference?


Traditionally, the responsibility of assembling IT infrastructure falls to the IT team. Vendors provide some guidelines, but the IT staff ultimately does the hard work of integration. The ability to pick and choose components is a benefit, but it requires effort in vendor qualification, regulatory compliance validation, procurement, and deployment.

Converged and hyperconverged infrastructure provides an alternative. In this blog, I’ll examine how they evolved from the traditional infrastructure model and compare their different features and capabilities.

Reference architectures

Reference architectures, which provide blueprints of compatible configurations, help to alleviate some of the burden of IT infrastructure integration. Hardware or software vendors provide defined behavior and performance given selected choices of hardware devices and software, along with configuration parameters. However, since reference architectures may involve different vendors, they can present problems in determining who IT groups need to call for support.

Furthermore, since these systems combine components from multiple vendors, systems management remains difficult. For example, visibility into all levels of the hardware and software stack is not possible because management tools can’t assume how the infrastructure was set up. Even with systems management standards and APIs, tools aren’t comprehensive enough to understand device-specific information.

Converged infrastructure: ready-made

Converged infrastructure takes the idea of a reference architecture and integrates the system before it ships to customers; systems are pre-tested and pre-configured. One unpacks the box, plugs it into the network and power, and the system is ready to use.

IT organizations choose converged systems for ease of deployment and management rather than for the benefits of an open, interoperable system with a choice of components. Simplicity wins over choice.

Hyperconverged: The building-block approach

Hyperconverged systems take the convergence concept one step further. These systems are preconfigured, but provide integration via software-defined capabilities and interfaces. Software interfaces act as a glue that supplements the pre-integrated hardware components.

In hyperconverged systems, functions such as storage are integrated through software interfaces, as opposed to the traditional physical cabling, configuration and connections. This type of capability is typically done using virtualization and can exploit commodity hardware and servers.
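To make the "software as glue" idea concrete, here is a toy sketch of storage pooling, where software presents heterogeneous devices as one logical capacity view. All class and device names are hypothetical illustrations, not any vendor's API; real hyperconverged platforms do this inside the hypervisor or storage software layer:

```python
# Toy sketch of software-defined storage pooling: heterogeneous devices
# (rotating disk or flash) are aggregated behind a single logical pool.
# Names and capacities are hypothetical, for illustration only.

class StorageDevice:
    def __init__(self, name, capacity_gb, media):
        self.name = name
        self.capacity_gb = capacity_gb
        self.media = media  # "disk" or "flash"

class StoragePool:
    """Presents many devices as one unified capacity view."""
    def __init__(self):
        self.devices = []

    def add(self, device):
        # Integration happens in software -- no physical SAN cabling.
        self.devices.append(device)

    @property
    def capacity_gb(self):
        return sum(d.capacity_gb for d in self.devices)

pool = StoragePool()
pool.add(StorageDevice("node1-ssd", 800, "flash"))
pool.add(StorageDevice("node1-hdd", 4000, "disk"))
print(pool.capacity_gb)  # 4800
```

The point of the sketch is the interface: consumers see one pool and never deal with the individual devices, which is what lets commodity hardware substitute for dedicated storage arrays.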

Local storage not a key differentiator

While converged systems may include traditional storage delivered using discrete NAS or Fibre Channel SAN, hyperconverged systems can take different forms of storage (rotating disk or flash) and present it via software in a unified way.  

A hyperconverged system may use local storage, but it can also use an external system with software interfaces to present a unified storage pool. Some vendors get caught up in whether the storage is implemented locally (as a disk within the server) or as a separate storage system. I think that misses the bigger picture. What matters more is the ability of the systems to scale.

Scale-out is key

Software enables hyperconverged systems to be used as scale-out building blocks. In the enterprise, storage is often an area of interest, since it has been difficult to scale out storage in the same way compute capacity expands by incrementally adding servers.

Hyperconverged building blocks enable graceful scale-out, as capacity can increase without re-architecting the hardware infrastructure. The goal is to unify as many services as possible using software that acts as a layer separating the hardware infrastructure from the workload. That extra layer may carry some performance tradeoff, but some vendors believe the systems are fast enough for most non-critical workloads.
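The building-block model can be sketched in a few lines: each node contributes a fixed slice of compute and storage, so a cluster grows both dimensions simply by adding nodes, with no re-architecture step. The per-node numbers below are illustrative assumptions, not any product's specification:

```python
# Toy sketch of scale-out building blocks: each hyperconverged node
# contributes both compute and storage, so cluster capacity grows by
# adding identical nodes. Per-node figures are illustrative only.

class Node:
    CORES = 32        # compute per building block (hypothetical)
    STORAGE_TB = 10   # storage per building block (hypothetical)

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self):
        # Scaling out is just adding a node -- no re-architecting.
        self.nodes.append(Node())

    @property
    def total_cores(self):
        return sum(n.CORES for n in self.nodes)

    @property
    def total_storage_tb(self):
        return sum(n.STORAGE_TB for n in self.nodes)

cluster = Cluster()
for _ in range(4):
    cluster.add_node()

print(cluster.total_cores, cluster.total_storage_tb)  # 128 40
```

Contrast this with a traditional SAN, where growing storage past the array's limits typically means a forklift upgrade rather than another interchangeable node.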

Making a choice

How do enterprises choose between converged and hyperconverged systems? ESG’s research shows that enterprises choose converged infrastructure for mission-critical workloads, citing better performance, reliability, and scalability. Enterprises choose hyperconverged systems for consolidating multiple functions into one platform, ease of use, and deploying tier-2 workloads.

Converged and hyperconverged systems continue to gain interest since they enable creation of on-premises clouds with elastic workloads and resource pooling. However, they can’t solve all problems for all customers. An ESG survey shows that, even five years out, over half the respondents plan to create an on-premises infrastructure strategy based on best-of-breed components as opposed to converged or hyperconverged infrastructure.

Thus, I recommend that IT organizations examine these technologies, but realize that they can’t solve every problem for every organization.

Hear more from Dan Conde live and in person at Interop ITX, where he will co-present “Things to Know Before You (Hyper) Converge Your Infrastructure,” with Jack Poller, senior lab analyst at Enterprise Strategy Group. Register now for Interop ITX, May 15-19 in Las Vegas.


