Tag Archives: Infrastructure

7 IT Infrastructure Certifications Gaining Value


IT professionals often wonder whether the time and expense involved in acquiring a certification is worth it. And if it is, which certification should they pursue?

Foote Partners recently released its 2018 IT Skills Demand and Pay Trends Report, which helps answer those questions. Based on data from 3,188 North American employers, the report found that the average market value for the 446 IT certifications it tracks climbed 0.3% in the first quarter of the year. While that doesn’t seem like much, the study also found that on average, having a single certification was worth 7.6% of an IT worker’s base pay.

Drilling down into the data, the report also found that a select group of certifications had gained 10% or more in market value during the six months ending April 1. Several of those certifications are related to IT infrastructure, and those are the certifications highlighted on the following slides.

However, the fact that a given certification has recently increased in value doesn’t necessarily mean that demand is increasing for a given skill or that the trend will continue. The report rightly points out that a lot of different factors can influence supply and demand for certifications, including factors people don’t often consider, such as vendors aggressively marketing certain certifications or overhauling their certification programs.

Foote Partners’ data also revealed that market value volatility for tech skills is leveling out.  The analysts attribute the decreasing fluctuation in pay for various skills to “something more urgent”: the arrival of “game-changing emerging technologies” like blockchain, the internet of things, artificial intelligence, automation, data analytics, and new cybersecurity advances. These areas could see some of the highest employment demand in coming months and years, and vendors haven’t yet created certifications related to some of this newer tech.

Still, for IT professionals who follow career trends, it’s worth noting which certifications are seeing sharp upticks in demand. And according to Foote Partners, the following certs, from the networking/communications and systems administration categories, gained significant value.

Note: Certifications are arranged in alphabetical order, not in order of relative value.




Composable Infrastructure: A Skeptic’s View


One of the buzzwords you hear in data centers these days is composable infrastructure. Hewlett Packard Enterprise, Cisco, Intel and others have touted the concept as a more efficient way to provision and manage on-premises data center infrastructure and dynamically support workloads. Dell also recently got into the act, introducing Kinetic.

But at Interop ITX, attendees heard a less than enthusiastic perspective on composable infrastructure. Rob Hirschfeld, CEO of RackN, who has spent nearly 15 years in the cloud and infrastructure space, including a stint on the OpenStack Foundation board, said IT infrastructure buyers should carefully consider whether the technology truly solves a problem for their business.

“I’m pretty skeptical about composable infrastructure,” he said, prefacing his talk. “I’m not a fan of bright and shiny for bright and shiny’s sake. It needs to solve a problem.”

Hirschfeld noted that while his focus is software, what his company does — develop software to automate bare metal servers — has a lot in common with composable hardware. Composable infrastructure is “about how you change your hardware’s form factor,” he said.

From his perspective, the important criteria when buying IT infrastructure are interchangeable commodity components, manageability at scale, and reduced implementation complexity. “If you’re not reducing the complexity of your environment, then you’re ultimately creating technical debt or other problems,” he said.


So what is composable infrastructure? Hirschfeld provided this definition: A server chassis that allows you to dynamically reallocate resources like RAM, storage, networking, and GPUs between CPUs to software-define a physical server.

“So it’s very much like a virtual machine, but with bare metal,” he added.

Today’s composable infrastructure solutions use high-speed interconnections — PCIe and NVMe — to extend the bus length of the components in a single computer, he said. The CPU remains the central identity of a system, but resources like RAM can be reassigned.
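
To make the “virtual machine, but with bare metal” analogy concrete, here is a minimal, purely illustrative Python sketch of the composition idea. The class and function names are hypothetical and do not correspond to any vendor’s API; they only model the resource-pooling concept described above.

```python
# Illustrative model of "compose a physical server from pooled resources".
# All names here (ResourcePool, compose_node, etc.) are hypothetical, not a vendor API.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Free units of one resource type (e.g. RAM in GB, GPUs, NVMe drives) in a chassis."""
    name: str
    free: int

    def allocate(self, amount: int) -> int:
        if amount > self.free:
            raise ValueError(f"not enough {self.name}: wanted {amount}, have {self.free}")
        self.free -= amount
        return amount

    def release(self, amount: int) -> None:
        self.free += amount

@dataclass
class ComposedNode:
    """A 'software-defined' physical server: a CPU plus resources borrowed from the pools."""
    cpu_id: str
    allocations: dict = field(default_factory=dict)

def compose_node(cpu_id, pools, request):
    node = ComposedNode(cpu_id)
    for resource, amount in request.items():
        node.allocations[resource] = pools[resource].allocate(amount)
    return node

def decompose_node(node, pools):
    for resource, amount in node.allocations.items():
        pools[resource].release(amount)
    node.allocations.clear()

if __name__ == "__main__":
    chassis = {
        "ram_gb": ResourcePool("ram_gb", 1024),
        "nvme_drives": ResourcePool("nvme_drives", 24),
        "gpus": ResourcePool("gpus", 8),
    }
    db_node = compose_node("cpu-0", chassis, {"ram_gb": 512, "nvme_drives": 8})
    print(db_node)                      # resources are now bound to cpu-0
    decompose_node(db_node, chassis)    # ...and returned to the pools for reuse
    print(chassis["ram_gb"].free)       # 1024 again
```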

Hirschfeld noted his distaste for the term “composable,” which can be confusing taken out of context. Moreover, composable infrastructure can be confused with converged infrastructure, which he described as creating infrastructure using common building blocks instead of having specialized compute/storage units. While converged infrastructure is often used to simplify implementation of virtualized infrastructure, practically speaking, composable infrastructure competes with virtualized infrastructure, he said.

Composable infrastructure is designed to enable the creation of “heterogeneous machine configurations without knowing in advance your target configuration needs,” according to Hirschfeld, who added that virtualized infrastructure can accomplish the same thing.

While composable infrastructure is cool technology, IT buyers need to consider it from a practical point of view, Hirschfeld said. “My concern with this model is that I have 10 chassis of composable infrastructure and each has 20% spare capacity. Now I have to figure out how to manage that,” Hirschfeld said.

Most people he knows don’t dynamically scale their capacity, which is why he’s a skeptic, he said.
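
A rough back-of-the-envelope illustration of that stranded-capacity concern, in Python. Only the 10 chassis and 20% spare figures come from the talk; the chassis size and workload numbers are assumed for the example.

```python
# Spare capacity split across many chassis is harder to use than one big pool.
CHASSIS_COUNT = 10
RAM_PER_CHASSIS_GB = 1024          # assumed chassis size, not from the talk
SPARE_FRACTION = 0.20              # "each has 20% spare capacity"

spare_per_chassis = RAM_PER_CHASSIS_GB * SPARE_FRACTION
total_spare = spare_per_chassis * CHASSIS_COUNT

print(f"Spare RAM per chassis: {spare_per_chassis:.0f} GB")
print(f"Total spare RAM:       {total_spare:.0f} GB")

# A workload needing 512 GB fits in the total spare (2048 GB) but not in any single
# chassis (204.8 GB each), because composed resources can't span chassis boundaries.
workload_gb = 512
print("Fits in pooled spare:", workload_gb <= total_spare)
print("Fits in one chassis: ", workload_gb <= spare_per_chassis)
```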

“I’m not saying don’t buy this hardware. There are legitimate vendors and it might solve your use case,” he said. “But understand what your use cases are and pressure test against other solutions on the market because this is a premium model.”

Hirschfeld isn’t sold on the benefits composable infrastructure vendors promise, such as reduced overprovisioning, improved time to service, and availability.

In his view, there are two types of IT infrastructure buyers: Those who want to buy an appliance and are willing to spend money on standard systems, and those who are focused on scale, are cost-sensitive, and have a multi-vendor deployment.

“In both cases, you’ll have a pretty predictable use of infrastructure,” Hirschfeld said. “If you don’t, you’re probably not buying infrastructure, but buying it from a cloud provider.”




Top Trends Impacting Enterprise Infrastructure


Enterprise infrastructure teams are under massive pressure as the cloud continues to upend traditional IT architectures and ways of providing service to the business. Companies are on a quest to reap the speed and agility benefits of cloud and automation, and infrastructure pros must keep up.

In this rapidly changing IT environment, new technologies are challenging the status quo. Traditional gear such as dedicated servers, storage arrays, and network hardware still have their place, but companies are increasingly looking to the cloud, automation, and software-defined technologies to pursue their digital initiatives.

According to IDC, by 2020, the heavy workload demands of next-generation applications and IT architectures will have forced 55% of enterprises to modernize their data center assets by updating their existing facilities or deploying new facilities.

Moreover, by the end of next year, the need for better agility and manageability will lead companies focused on digital transformation to migrate more than 50% of their IT infrastructure in their data center and edge locations to a software-defined model, IDC predicts. This shift will speed adoption of advanced architectures such as containers, analysts said.

Keith Townsend, founder of The CTO Advisor and Interop ITX Infrastructure Track Chair, keeps a close eye on the evolution of IT infrastructure. On the next pages, read his advice on what he sees as the top technologies and trends for infrastructure pros today: hyperconvergence, network disaggregation, cloud migration strategies, and new abstraction layers such as containers.

(Image: Timofeev Vladimir/Shutterstock)





6 Ways to Transform Legacy Data Storage Infrastructure


So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other new innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever — and generally inexpensive — ways to answer the question, keep your job, and even get a pat on the back.

SSD and flash are game-changers, so they need to be incorporated into your storage infrastructure. SSDs beat enterprise-class hard drives on overall cost because they speed up your workload and reduce the number of storage appliances and servers needed. It’s even better if your servers support NVMe, since the interface is becoming ubiquitous and will replace both SAS and (a bit later) SATA, simply because it’s much faster and has lower overhead.
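
If you are taking stock of existing servers, one quick way to see whether a box already exposes NVMe controllers is to look in sysfs. This is a Linux-only sketch; the paths are standard sysfs locations, but the output will obviously vary by machine.

```python
# Quick Linux-only check for NVMe controllers via sysfs.
import os

def list_nvme_devices():
    sys_path = "/sys/class/nvme"
    if not os.path.isdir(sys_path):
        return []
    devices = []
    for ctrl in sorted(os.listdir(sys_path)):
        model_file = os.path.join(sys_path, ctrl, "model")
        model = "unknown"
        if os.path.exists(model_file):
            with open(model_file) as f:
                model = f.read().strip()
        devices.append((ctrl, model))
    return devices

if __name__ == "__main__":
    nvme = list_nvme_devices()
    print(f"{len(nvme)} NVMe controller(s) found")
    for name, model in nvme:
        print(f"  {name}: {model}")
```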

As for RAID arrays, we have to face up to the harsh reality that RAID controllers can only keep up with a few SSDs. The answer is either to add an all-flash array and keep the RAID arrays for cool or cold secondary storage, or to move to a new architecture based on hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 storage in a system. They are designed to bolt onto an existing SAN and require minimal change in configuration files to function. Typically, all-flash boxes have smaller capacities than the RAID arrays, since they have enough I/O cycles to do near-real-time compression coupled with the ability to down-tier (demote) data to the old RAID arrays.

With an all-flash array, which isn’t outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combination of flash and secondary storage that usually has 5X effective capacity due to compression. Just tell the CEO how many RAID arrays and drives you didn’t buy. That’s worth a hero badge!
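
The arithmetic behind that boast is simple enough to show the CEO. In the sketch below, only the 5X compression figure comes from above; the raw flash capacity and per-array capacity are made-up example numbers, not vendor data.

```python
# Rough arithmetic behind the "5X effective capacity" claim.
raw_flash_tb = 50            # usable capacity of the all-flash tier (assumed)
compression_ratio = 5        # "usually has 5X effective capacity due to compression"
effective_tb = raw_flash_tb * compression_ratio
print(f"Effective capacity: {effective_tb} TB from {raw_flash_tb} TB of raw flash")

# Which translates into hardware you didn't have to buy:
hdd_tb_per_array = 60        # assumed capacity of one legacy RAID array
arrays_avoided = effective_tb / hdd_tb_per_array
print(f"Roughly {arrays_avoided:.1f} RAID arrays' worth of capacity avoided")
```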

The idea of a flash front-end works for desktops, too. Use a small flash drive for the OS (C-drive) and store colder data on those 3.5” HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has evolved into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn’t that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality Ring software, the result is a decent object-storage solution, especially if compression and global deduplication are supported.
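
Once Ceph’s RADOS Gateway (its S3-compatible front end) is running on those nodes, applications can use the repurposed JBODs with any standard S3 client. Here is a minimal Python/boto3 sketch; the endpoint, credentials, and bucket name are all placeholders you would swap for your own.

```python
# Minimal sketch: using a Ceph RADOS Gateway (S3-compatible) endpoint built on
# repurposed JBOD nodes. Endpoint, credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.example.local:7480",   # hypothetical RGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",
    aws_secret_access_key="RGW_SECRET_KEY",
)

s3.create_bucket(Bucket="cold-archive")
s3.put_object(Bucket="cold-archive", Key="2018/q1/report.tar.gz",
              Body=b"example payload")                    # normally a real file's bytes

# Verify the object landed on the repurposed storage
for obj in s3.list_objects_v2(Bucket="cold-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```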

By now, you are likely using public clouds for backup. Consider “perpetual” storage using a snapshot tool or continuous backup software to reduce your RPO and RTO. Use multi-zone operations in the public cloud to converge DR onto the perpetual storage setup, as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.
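
What a scheduled snapshot for a tighter RPO looks like varies by cloud and tool. As one hedged example, assuming AWS EBS volumes and boto3, with a placeholder volume ID:

```python
# One way to script frequent snapshots for a low RPO, assuming AWS EBS volumes;
# the region, volume ID, and tag values are placeholders.
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_volume(volume_id: str) -> str:
    """Create a point-in-time snapshot; run on a schedule (e.g. hourly) to cap RPO."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%MZ")
    resp = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"perpetual-backup {stamp}",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "backup-tier", "Value": "perpetual"}],
        }],
    )
    return resp["SnapshotId"]

if __name__ == "__main__":
    print(snapshot_volume("vol-0123456789abcdef0"))   # hypothetical volume ID
```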

On the software front, the world of IT is migrating to services-centric software-defined storage (SDS), which allows scaling and chaining of data services via a virtualized microservice concept. Even older SANs and server drives can be pulled into the methodology, with software making all legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If this prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, apply one of the “Zip” programs and choose the encrypted file option. This saves a lot of space and gives you encryption!
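
If you would rather script it than click through a Zip utility’s encrypted-file option, a similar compress-then-encrypt pass can be done in Python. This sketch uses gzip plus the cryptography package’s Fernet recipe; the key handling is deliberately simplified and the file name is hypothetical.

```python
# Scripted stand-in for "compress the cold data, then encrypt it": gzip + Fernet
# (symmetric encryption from the 'cryptography' package). In practice the key
# belongs in a secrets manager, not in your script.
import gzip
from cryptography.fernet import Fernet

def archive_cold_file(path: str, key: bytes) -> str:
    with open(path, "rb") as f:
        compressed = gzip.compress(f.read())      # save space first...
    encrypted = Fernet(key).encrypt(compressed)   # ...then encrypt the result
    out_path = path + ".gz.enc"
    with open(out_path, "wb") as f:
        f.write(encrypted)
    return out_path

if __name__ == "__main__":
    key = Fernet.generate_key()                   # store this somewhere safe
    print(archive_cold_file("old_project_data.csv", key))  # hypothetical file
```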

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.

(Image: Production Perig/Shutterstock)




Does Hyperconverged Infrastructure Save Money?




