Tag Archives: Infrastructure

On-Prem IT Infrastructure Endures, Talent Needed


Despite steady adoption of public cloud services, organizations continue to invest in their on-premises IT infrastructure and the people who run it, according to a new report from 451 Research.

The firm’s latest “Voice of the Enterprise: Datacenter Transformation” study found that organizations are maintaining healthy capacity in their on-premises data centers and have no plans to cut back on the staff assigned to data center and facility operations. Almost 60% of the nearly 700 IT decision makers surveyed by the firm said they have enough data center floor space and power capacity to last at least five years.

Even though many companies expect their total IT headcount to decline over the next year, most expect the number of employees dedicated to data center and facilities operations to stay the same or increase, according to 451 Research.

The reason for the continued data center investment, cited by 63% of those polled, was fairly generic: business growth. Christian Perry, research manager and lead analyst of the report, said analysts dove a little deeper. As it turns out, companies are finding that keeping workloads long term on public cloud services isn’t all that cost effective.

Regardless of the type of workload in the cloud – ERP, communications, or CRM for example – or size of the company, when an organization expands a workload by adding new licenses, seats, or functions, the cost over time winds up close to what it would cost to keep the workload on-premises, Perry said. Costs include opex and capex for IT infrastructure – servers, storage and networking – as well as the facilities that contain it.

“It still is dirt cheap to go to the cloud, but to stay in the cloud, that’s a whole other story,” he told me in a phone interview.

While some companies manage their cloud costs well, unexpected growth, a massive new project or a new division coming online can make cloud costs unwieldy, Perry said.
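A back-of-the-envelope model makes that dynamic concrete. The numbers below are entirely hypothetical, not 451 Research's data; the point is only that a workload that keeps adding seats eventually racks up more in cumulative subscription fees than the up-front-plus-ongoing cost of running it on-premises:

```python
# Hypothetical cloud-vs-on-prem cost crossover; all figures are made up
# for illustration and are not from the 451 Research report.

CLOUD_MONTHLY_PER_SEAT = 50      # subscription fee per seat, per month
SEATS = 200                      # initial seats for the workload
SEAT_GROWTH_PER_YEAR = 0.20      # workload expansion: new seats/licenses

ONPREM_CAPEX = 300_000           # servers, storage, networking up front
ONPREM_MONTHLY_OPEX = 6_000      # power, cooling, floor space, admin time

cloud_total = 0.0
onprem_total = float(ONPREM_CAPEX)
seats = SEATS
for month in range(1, 61):       # five-year horizon
    cloud_total += seats * CLOUD_MONTHLY_PER_SEAT
    onprem_total += ONPREM_MONTHLY_OPEX
    if month % 12 == 0:          # annual seat growth
        seats = int(seats * (1 + SEAT_GROWTH_PER_YEAR))
    if cloud_total >= onprem_total:
        print(f"Cumulative cloud spend overtakes on-prem in month {month}")
        break
```

With these made-up inputs the crossover arrives just short of the four-year mark; change the growth rate or the capex and it moves, but the shape of the curves is what Perry is describing.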

Another factor that’s playing into the continued data center investment is the “cloudification” of on-premises IT infrastructure. Converged infrastructure has enabled companies to reach greater levels of agility, flexibility, and cost control, Perry said, adding that hyperconverged infrastructure boosts that trend.

Data center skills shortage

While organizations continue to invest in their on-premises IT infrastructure and facilities, they're running into staffing challenges, 451 Research found. Twenty-nine percent face a skills shortage when trying to find qualified data center and facilities personnel, Perry said.

As companies shift away from traditional IT architectures to converged and hyperconverged infrastructure, demand for IT generalists has grown, he said. “Specialists are still critical in on-prem environments, but we’ve definitely seen the rise of the generalist…There’s a lot of training going on internally in organizations to bring their specialists to a generalist level.”

Of the 29% facing staffing challenges, a majority (60%) are focused on training existing staff to fill the gaps. Those attending the training tend to be server and storage administrators, 451 Research found. “There’s a certain sense of fear that they’re going to become siloed and potentially irrelevant,” Perry said. “At the same time, there’s a lot of excitement about these newer architectures and software-defined technologies.”

Companies cited a big skills gap in virtualization and containers, technologies they view as transformative to their on-premises infrastructure, he said. They're also key technologies for facilitating the continued enterprise focus on data center consolidation.

“The jump in cloud has had an impact on IT staffing overall,” Perry said. “A lot of cloud service providers have scooped up a ton of good IT talent. That’s not just Tier 1 cloud providers, but also Tier 2…They’re pulling away skilled IT staff and leaving gaps for on-prem.”

A separate 451 Research report that looked into enterprise server and converged infrastructure trends found that VM administration was the top skill enterprises have trouble finding. A third of organizations reported a networking skills gap.


Hyperconverged Infrastructure: What Do Users Think?


Hyperconvergence burst onto the IT scene a few years ago and remains one of the hottest trends in IT today. Vendors promise greater efficiency and agility with hyperconverged infrastructure. But what do IT pros who use the technology have to say?

Members of IT Central Station, a community of more than 250,000 IT pros who contribute enterprise technology reviews based on their experience, provided insight into leading hyperconverged infrastructure products. They cited features they love in HPE SimpliVity, Nutanix, and VMware vSAN, along with product shortcomings.

With virtualized workloads becoming more prevalent, IT Central Station members have found that hyperconverged infrastructure lets organizations eliminate previously separate storage networks. Hyperconverged systems are also flexible: they can be expanded by adding nodes to the base unit.

HPE SimpliVity

Charlene H., senior systems administrator at a healthcare company, described her positive experience with HPE SimpliVity:  

“The ease of managing this system! Recently added the All Flash CN3400F and oh my goodness, are these nodes fast as lightning! I love having a private cloud for my organization. Public cloud will never care for my organization’s data more than I do.”

Tommy H., senior systems/storage engineer at Banc of California, described the value that HPE SimpliVity’s backup capabilities have added to his organization:

“Backups are all automatic and admins do not have to worry if the production VMs are being backed up. Easy backup policy with no LUN administration is also one less task to worry about. DR and DR replication are no longer an issue; no longer have to seed a SAN locally and ship it out to the DR site.”

However, a senior cloud data architect who uses HPE SimpliVity said improvements could be made to both its data storage and data replication capabilities:

“I would like to see replication to a cloud solution. I would like to replicate the data so that we have a backup copy off-site. I could then be comfortable getting rid of our existing backup solution….The other feature would be a single copy of the data storage as opposed to a dual copy. In that way, when I do things that automatically have dual copies, such as with our SQL server databases, I would not then be making four copies of the data.”

A senior systems administrator at a consultancy company would like to see other improvements:

“There are some maintenance features (replica copy load-balancing) that could stand to be automated and/or streamlined for customer execution.…Also, the ability to scale compute and storage independently of one another would be a way to add value to the entire product line.”

Nutanix

A cyber security engineer at a technology services company explained why he likes Nutanix:

“Hyperconvergence is the most valuable feature for me, as it allows me to scale the hardware accordingly to project requirements…It is now our single most powerful server that is easily scalable and has an HTML5 site that manages all aspects of the system.”

An enterprise systems and IT architect at a technology services company described the improvements that Nutanix has brought to his organization:

“There was a 30% reduction in CAPEX spending when we moved towards the Nutanix platform and we had a high ROI.”

A systems engineer at a university cited room for improvement:  

“The improvement needed is for elastic clusters, meaning the ability to depart and join nodes in an automatic way. We have a laboratory that needs to perform bare metal tests and therefore needs to unjoin the nodes from the cluster and later on join them back.”

Leandro L., system architect at a technology services company, suggested that Nutanix improve its asynchronous replication capabilities:

“I would like to see asynchronous replication in less than 60 minutes, or even in 15 minutes. I understand that they are working to lower replication times to 1 minute or less.”

VMware vSAN

Raymund R., a network and system administrator, values VMware vSAN’s minimal downtime:

“The minimal downtime alone is a winning blow for both management and IT. Unexpected downtime is inevitable. It’s been part of any organization. Addressing that pitfall really gives an edge from a business perspective.”

Harri W., ICT network administrator at a maritime company, praised vSAN’s scalability and upgrade capabilities:

“Scalability and future upgrades are a piece of cake. If you want more IOPs, then add disk groups and/or nodes on the fly. If you want to upgrade the hardware, then add new servers and retire the old ones. No service breaks at all.”

However, Javier G., engagement cloud solution architect at a communications service provider, would like to see improved hardware support with vSAN:

“The list of hardware supported should be increased in the future. I would improve these areas by increasing the number of partners to support as many partners as possible.”

Similarly, Pushkaraj D., senior manager of IT infrastructure at a tech services company, discussed the need for improved hardware compatibility:

“The vSAN Hardware Compatibility List Checker needs to improve, since currently it is a sore point for vSAN. …You need to thoroughly check and re-check the HCL with multiple vendors like VMware, in the first instance, and manufacturers like Dell, IBM, HPE, etc., as the compatibility list is very narrow.”


Interop ITX Spotlights IT Infrastructure Evolution


As enterprises look to leverage technology for new products and services that give them an edge over the competition, IT infrastructure is changing faster than ever. Legacy architectures are giving way to software-defined technologies, cloud, open source and automation as IT organizations adapt to support these enterprise digital transformation initiatives and focus on speed and new capabilities.

With all these changes, infrastructure pros are under intense pressure. How do you keep up with all the emerging trends? How do you evaluate new technologies and figure out which ones might be right for your business? At the same time, you still need to maintain existing infrastructure, so efficiency is critical.

At Interop ITX, infrastructure pros can get a wealth of education on all the hot technologies and emerging IT trends in just five days. The conference features more than two dozen full- and half-day workshops, summits, and hour-long sessions focused on infrastructure, including networking, containers, automation, and hyperconvergence.

Attendees can get up to speed on software-defined networking, software-defined storage, next-generation WANs, and wireless networking design. For those who want some direct experience with new technologies, workshops on network automation and open source are some of the sessions that include hands-on instruction.

At the same time, attendees can also get practical tips for managing existing infrastructure with sessions on network troubleshooting, wireless security, and disaster recovery.

Leading these workshops and sessions are some of the brightest minds in the IT community. These infrastructure experts, such as Greg Ferro and Ethan Banks, are among the most respected in the industry for their deep knowledge in their respective domains. The speaker roster at Interop ITX includes analysts and consultants as well as practitioners from Mastercard and Shutterstock, who will provide first-hand accounts of how they’re transforming their infrastructure.

Here’s a sample of what you can look forward to in the Interop ITX infrastructure track:

Packet Pushers Future of Networking Summit – Greg Ferro and Ethan Banks of Packet Pushers will reprise their popular two-day summit, which will look at the technologies and trends impacting networking in the next five to ten years. This year’s summit will cover automation and orchestration, visibility and analytics, cloud networking, and next-gen WAN.

Container Crash Course – Containers are one of the hottest technologies in IT today and a major topic at this year’s Interop. This all-day event is designed to equip attendees with core knowledge of containers and an understanding of how the technology can apply to their business. The summit features a panel of experts in container technologies, microservices, and DevOps from companies such as Docker, Red Hat, and Amazon Web Services.

Later in the week, Stephen Foskett, organizer of the popular Tech Field Day events, will present “The Case for Containers: What, When and Why?” and Brian Gracely, director of product strategy at Red Hat and well-known cloud expert, will present “Managing Containers in Production: What You Need to Think About.”

“Hands-on Practical Network Automation” – Interop ITX attendees have a couple of opportunities to learn about network automation. This half-day workshop will cover how to get started with network automation and includes an introduction to Python. Two of the workshop’s speakers — Jere Julian, extensibility engineer at Arista Networks, and Scott Lowe, engineering architect at VMware — recently wrote a Network Computing blog outlining the benefits of automation. Twin Bridges founder Kirk Byers, who teaches Python to network pros, and Matt Oswalt, software engineer at StackStorm, are co-presenters.
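For a flavor of what an introduction like this covers, here is a minimal sketch using Netmiko, the open-source SSH library Byers maintains. It is illustrative only, not the workshop's actual material, and the device address and credentials are placeholders:

```python
# Minimal network automation example with Netmiko (github.com/ktbyers/netmiko).
# The device details below are placeholders for illustration.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",   # Netmiko driver to use
    "host": "192.0.2.10",         # RFC 5737 documentation address
    "username": "admin",
    "password": "example-only",
}

conn = ConnectHandler(**switch)
# Run the same read-only command you'd type at the CLI, but from a script...
print(conn.send_command("show ip interface brief"))
# ...or push a small, repeatable configuration change.
conn.send_config_set(["interface Loopback0", "description managed-by-automation"])
conn.disconnect()
```

The point of exercises like this is repeatability: the same script can be looped over an inventory of hundreds of devices, which is where automation pays off.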

Oswalt also is scheduled to present “Fundamental Principles of Automation,” which is designed to help IT pros understand automation basics.

Cloud expert Lori MacVittie, principal technical evangelist at F5 Networks, will discuss IT automation more broadly in her session, “Operationalizing IT with Automation and APIs.”

“SDN: What Is It Good For?” — This session will feature a panel of infrastructure experts, including Robin Perez, deputy director of infrastructure for the City of New York, and Thomas Edwards, VP of engineering and development at FOX, who will provide first-hand accounts of how their organizations have implemented SDN. The panel is designed to provide practical guidance for defining and scoping an SDN project. Lisa Caywood, director of ecosystem development at the Linux Foundation’s OpenDaylight Project, is the panel moderator.

“Wireless Network Design That Scales to Business Demands” — This session is a recent addition to the Interop ITX schedule featuring top-rated Interop speaker George Stefanick, wireless network architect at Houston Methodist Hospital. His session at last year’s Interop on wireless network design, which covered site surveys and issues like co-channel interference, received high marks from attendees.

“The Killer Troubleshooting Toolset Revisited” – Networking pros can spend a lot of time trying to track down the root cause of network and application performance issues. In this half-day workshop, Mike Pennacchi, owner and lead network analyst for Network Protocol Specialists, will cover a number of powerful network troubleshooting tools that help streamline the process. Pennacchi is a longtime Interop instructor whose sessions consistently receive high marks.

“Converged and Hyperconverged Infrastructure: Myths and Truths” — The buzz around converged/hyperconverged infrastructure is inescapable. Interop ITX features a couple of sessions to help cut through the hype. This one, presented by TBR senior analyst Krista Macomber, will cover the pros and cons and adoption trends, and will provide recommendations for enterprises considering the technology. Another session, “Things To Know Before You (Hyper) Converge Your Infrastructure,” will cover key considerations and evaluation criteria. Enterprise Strategy Group analysts Dan Conde and Jack Poller are the presenters.

“Building the Infrastructure Future at Mastercard” – Len Sanker, senior VP of enterprise architecture and data engineering at Mastercard, will discuss challenges in aligning technology capabilities with business goals. Another practitioner, Shutterstock CIO David Giambruno, will share best practices and lessons learned while leading a major data center transformation in “Building a Next-Generation API-Driven Infrastructure for Scaling Growth.”


15 Infrastructure Experts to See at Interop ITX


Each year, Interop brings together some of the best and brightest minds in the IT community to share their expertise, and this year is no different. While the name has changed somewhat – it’s now Interop ITX – this year’s conference will provide the same high-caliber roster of speakers. These technology experts, some of the most respected in the industry, will provide in-depth sessions and workshops across six tracks: infrastructure, data and analytics, cloud, security, DevOps, and leadership/professional development.

Since our focus here at Network Computing is infrastructure, we thought we’d put the spotlight on some of the infrastructure pros you can expect to see at Interop ITX May 15-19 in Las Vegas. These experienced and innovative IT practitioners and analysts will speak on topics like network automation, wireless networking, storage, hyperconvergence, and containers.

You’re probably familiar with many of these IT experts – they’re some of the more well-known names in infrastructure and some, such as Ethan Banks and Greg Ferro of Packet Pushers, are top-rated speakers at past Interop conferences. This year’s event also features some names you may not be as familiar with, but who have deep knowledge in their domains, like Shawn Zandi, a principal network architect at LinkedIn.

Interop ITX prides itself on its independence, so you can expect these experts to provide objective insight on issues that are critical in today’s fast-changing IT environment.

The following pages are a sample of the infrastructure experts scheduled to speak at Interop ITX. Check out the Interop ITX schedule to see the full roster of infrastructure speakers, as well as presenters in the other tracks.


Converged Vs. Hyperconverged Infrastructure: What’s The Difference?


Traditionally, the responsibility of assembling IT infrastructure falls to the IT team. Vendors provide some guidelines, but the IT staff ultimately does the hard work of integrating the components. The ability to pick and choose components is a benefit, but it requires effort in vendor qualification, validation for regulatory compliance, procurement, and deployment.

Converged and hyperconverged infrastructure provides an alternative. In this blog, I’ll examine how they evolved from the traditional infrastructure model and compare their different features and capabilities.

Reference architectures

Reference architectures, which provide blueprints of compatible configurations, help alleviate some of the burden of IT infrastructure integration. Hardware or software vendors define the expected behavior and performance of selected combinations of hardware devices, software, and configuration parameters. However, since reference architectures may involve multiple vendors, it can be hard to determine whom IT groups need to call for support.

Furthermore, because these systems combine components from multiple vendors, systems management remains difficult. For example, visibility into all levels of the hardware and software stack isn't possible, since management tools can't assume how the infrastructure was set up. Even with systems management standards and APIs, tools aren't comprehensive enough to understand device-specific information.

Converged infrastructure: ready-made

Converged infrastructure takes the idea of a reference architecture and integrates the system before it ships to customers; systems arrive pre-tested and pre-configured. One unpacks the box, plugs it into the network and power, and the system is ready to use.

IT organizations choose converged systems for ease of deployment and management rather than for the benefits of an open, interoperable system with a choice of components. Simplicity wins out over choice.

Hyperconverged: The building-block approach

Hyperconverged systems take the convergence concept one step further. These systems are preconfigured, but provide integration via software-defined capabilities and interfaces. Software interfaces act as glue, supplementing the pre-integrated hardware components.

In hyperconverged systems, functions such as storage are integrated through software interfaces, as opposed to the traditional physical cabling, configuration and connections. This type of capability is typically done using virtualization and can exploit commodity hardware and servers.

Local storage not a key differentiator

While converged systems may include traditional storage delivered using discrete NAS or Fibre Channel SAN, hyperconverged systems can take different forms of storage (rotating disk or flash) and present them via software in a unified way.

A hyperconverged system may use local storage, but it can also use an external system with software interfaces to present a unified storage pool. Some vendors get caught up in whether the storage is implemented locally (as disks within the server) or as a separate storage system. I think that misses the bigger picture. What matters more is the systems' ability to scale.

Scale-out is key

Software enables hyperconverged systems to be used as scale-out building blocks. In the enterprise, storage is often the area of interest, since it has been difficult to scale out storage the way compute capacity expands: by incrementally adding servers.

Hyperconverged building blocks enable graceful scale-out, since capacity can increase without re-architecting the hardware infrastructure. The goal is to unify as many services as possible using software that acts as a layer separating the hardware infrastructure from the workload. That extra layer may carry some performance tradeoff, but some vendors believe the systems are fast enough for most non-critical workloads.
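As a toy illustration of that building-block idea (a sketch, not any vendor's implementation), the following models a pool in which each added node contributes its local disks to one unified namespace while the software layer keeps two copies of every block:

```python
# Toy model of a scale-out, software-defined storage pool; illustrative
# only, not modeled on any specific hyperconverged product.

class HyperconvergedPool:
    def __init__(self, replicas: int = 2):
        self.replicas = replicas        # copies kept of each block (e.g., RF2)
        self.node_tb: list[float] = []  # raw TB contributed by each node

    def add_node(self, raw_tb: float) -> None:
        """Scale out: a new node grows the pool without re-architecting."""
        self.node_tb.append(raw_tb)

    @property
    def usable_tb(self) -> float:
        # Replication spends raw capacity to buy fault tolerance.
        return sum(self.node_tb) / self.replicas

pool = HyperconvergedPool(replicas=2)
for _ in range(4):                      # four identical building blocks
    pool.add_node(raw_tb=20.0)
print(f"raw: {sum(pool.node_tb):.0f} TB, usable: {pool.usable_tb:.0f} TB")
# -> raw: 80 TB, usable: 40 TB
```

Adding a fifth node raises both numbers in proportion; the replica factor is the capacity price paid for surviving a node failure.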

Making a choice

How do enterprises choose between converged and hyperconverged systems? ESG's research shows that enterprises choose converged infrastructure for mission-critical workloads, citing better performance, reliability, and scalability. They choose hyperconverged systems to consolidate multiple functions into one platform, for ease of use, and for tier-2 workloads.

Converged and hyperconverged systems continue to gain interest because they enable the creation of on-premises clouds with elastic workloads and resource pooling. However, they can't solve all problems for all customers. An ESG survey shows that, even five years out, over half of respondents plan to base their on-premises infrastructure strategy on best-of-breed components rather than converged or hyperconverged infrastructure.

Thus, I recommend that IT organizations examine these technologies, but realize that they can’t solve every problem for every organization.

Hear more from Dan Conde live and in person at Interop ITX, where he will co-present “Things to Know Before You (Hyper) Converge Your Infrastructure,” with Jack Poller, senior lab analyst at Enterprise Strategy Group. Register now for Interop ITX, May 15-19 in Las Vegas.
