Monthly Archives: October 2017

On-Prem IT Infrastructure Endures, Talent Needed


Despite steady adoption of public cloud services, organizations continue to invest in their on-premises IT infrastructure and the people who run it, according to a new report from 451 Research.

The firm’s latest “Voice of the Enterprise: Datacenter Transformation” study found that organizations are maintaining healthy capacity in their on-premises data centers and have no plans to cut back on the staff assigned to data center and facility operations. Almost 60% of the nearly 700 IT decision makers surveyed by the firm said they have enough data center floor space and power capacity to last at least five years.

Even though many companies expect the total number of IT staffers to decline over the next year, most expect the number of employees dedicated to data center and facilities operations to stay the same or increase, according to 451 Research.

The reason for the continued data center investment, cited by 63% of those polled, was fairly generic: business growth. Christian Perry, research manager and lead analyst of the report, said analysts dove a little deeper. As it turns out, companies are finding that keeping workloads long term on public cloud services isn’t all that cost effective.

Regardless of the type of workload in the cloud – ERP, communications, or CRM for example – or size of the company, when an organization expands a workload by adding new licenses, seats, or functions, the cost over time winds up close to what it would cost to keep the workload on-premises, Perry said. Costs include opex and capex for IT infrastructure – servers, storage and networking – as well as the facilities that contain it.

“It still is dirt cheap to go to the cloud, but to stay in the cloud, that’s a whole other story,” he told me in a phone interview.
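
To make that trade-off concrete, here’s a minimal back-of-the-envelope sketch in Python. The per-seat subscription price, capex, and opex figures are hypothetical placeholders, not numbers from the 451 Research report; the point is only to show how a recurring per-seat cost eventually overtakes a one-time investment plus steady operating costs.

# Hypothetical cost model: cumulative cloud subscription vs. on-premises
# capex + opex for the same workload. All figures are illustrative only.

def cloud_cost(months, seats, price_per_seat_month=50.0):
    """Cumulative cloud spend: a recurring per-seat subscription."""
    return months * seats * price_per_seat_month

def onprem_cost(months, capex=250_000.0, opex_per_month=4_000.0):
    """One-time hardware/licensing investment plus steady operating cost."""
    return capex + months * opex_per_month

if __name__ == "__main__":
    seats = 500
    for months in (6, 12, 24, 36, 48):
        cloud, onprem = cloud_cost(months, seats), onprem_cost(months)
        cheaper = "cloud" if cloud < onprem else "on-prem"
        print(f"{months:>2} months: cloud ${cloud:>12,.0f}  on-prem ${onprem:>12,.0f}  -> {cheaper}")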

While some companies manage their cloud costs well, unexpected growth, a massive new project or a new division coming online can make cloud costs unwieldy, Perry said.

Another factor that’s playing into the continued data center investment is the “cloudification” of on-premises IT infrastructure. Converged infrastructure has enabled companies to reach greater levels of agility, flexibility, and cost control, Perry said, adding that hyperconverged infrastructure boosts that trend.

Data center skills shortage

While organizations continue to invest in their on-premises IT infrastructure and facilities, they’re running into staffing challenges, 451 Research found. Twenty-nine percent face a skills shortage when trying to find qualified data center and facilities personnel, Perry said.

As companies shift away from traditional IT architectures to converged and hyperconverged infrastructure, demand for IT generalists has grown, he said. “Specialists are still critical in on-prem environments, but we’ve definitely seen the rise of the generalist…There’s a lot of training going on internally in organizations to bring their specialists to a generalist level.”

Of the 29% facing staffing challenges, a majority (60%) are focused on training existing staff to fill the gaps. Those attending the training tend to be server and storage administrators, 451 Research found. “There’s a certain sense of fear that they’re going to become siloed and potentially irrelevant,” Perry said. “At the same time, there’s a lot of excitement about these newer architectures and software-defined technologies.”

Companies cited a big skills gap in virtualization and containers, technologies they view as transformative to their on-premises infrastructure, he said. Both are also key technologies for the continued enterprise focus on data center consolidation.

“The jump in cloud has had an impact on IT staffing overall,” Perry said. “A lot of cloud service providers have scooped up a ton of good IT talent. That’s not just Tier 1 cloud providers, but also Tier 2…They’re pulling away skilled IT staff and leaving gaps for on-prem.”

A separate 451 Research report that looked into enterprise server and converged infrastructure trends found that VM administration was the top skill enterprises have trouble finding. A third of organizations reported a networking skills gap.


Calculating IPv6 Subnets in Linux | Linux.com


We’re going to look at some IPv6 calculators, sipcalc and subnetcalc, and some tricks for subnetting without breaking our brains. Let’s start by reviewing IPv6 address types. There are three types: unicast, multicast, and anycast.

IPv6 Unicast

The unicast address is a single address identifying a single interface. In other words, what we usually think of as our host address. There are three types of unicast addresses:

  • Global unicast addresses are unique, publicly routable addresses. They are controlled by the Internet Assigned Numbers Authority (IANA), just like IPv4 addresses, and are the address blocks you get from your Internet service provider. These are in the 2000::/3 range, minus a few exceptions listed in the table at the above link.
  • Link-local addresses use the fe80::/10 address block and are similar to the private address ranges in IPv4 (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16). A major difference is that link-local addresses are not routable; they are confined to a single network segment. They are automatically derived from the MAC address of the network interface (the classic rule is sketched in the Python example after this list); this does not guarantee that all of them are unique, but your odds are pretty good that they are. The IPv6 protocol requires that every network interface be automatically assigned a link-local address.
  • Special addresses are loopback addresses, IPv4-mapped address spaces, and 6to4 addresses for carrying IPv6 traffic across an IPv4 network.
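
Since link-local addresses are derived from the MAC address, here’s a minimal Python sketch of the classic modified EUI-64 rule: flip the universal/local bit, insert ff:fe in the middle, and prepend the fe80::/64 prefix. Many modern systems use randomized or stable-privacy interface identifiers instead, so treat this as an illustration of the traditional rule rather than what every OS does today.

# Sketch of the modified EUI-64 derivation of a link-local address.
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + b"\xff\xfe" + octets[3:]      # insert ff:fe in the middle
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + bytes(eui64))

print(link_local_from_mac("52:54:00:12:34:56"))        # fe80::5054:ff:fe12:3456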

Multicast

Multicast in IPv6 is similar to the old IPv4 broadcast: a packet sent to a multicast address is delivered to every interface in a group. The IPv6 difference is that only hosts who are members of the multicast group receive the multicast packets, rather than all reachable hosts. IPv6 multicast is routable, and routers will not forward multicast packets unless there are members of the multicast groups to forward the packets to. Remember IPv4 broadcast storms? They’re much less likely to occur with IPv6. Multicast relies on UDP rather than TCP, so it is used for multimedia streaming, such as efficiently streaming the video feed from a single IP camera to multiple hosts. See IPv6 Multicast Address Space Registry for complete information.
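
If you want to see group membership in action, here’s a minimal Python sketch of a UDP listener that joins an IPv6 multicast group on Linux. The group address ff02::1234 and port 5000 are arbitrary example values; only sockets that perform the join receive datagrams sent to the group.

# Minimal IPv6 multicast listener sketch (group and port are example values).
import socket, struct

GROUP = "ff02::1234"   # hypothetical link-local-scope multicast group
PORT = 5000

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# Join the group: 16-byte group address + interface index (0 = kernel default).
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

data, sender = sock.recvfrom(1500)
print(f"received {len(data)} bytes from {sender[0]}")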

Anycast

An anycast address is a single unicast address assigned to multiple nodes, and packets are received by the first available node. It is a cool mechanism to provide both load-balancing and automatic failover without a lot of hassle. There is no special anycast addressing scheme; all you do is assign the same address to multiple nodes. The root name servers use anycast addressing.

IPv6 Subnet Calculators

What I really really want is an IPv6 equivalent for ipcalc, which calculates multiple IPv4 subnets with ease. I have not found one.

There are other helpful tools for IPv6. ipv6calc performs all manner of useful queries and address manipulation. It does not include a subnet calculator, but it does tell you the subnet and host portions of an address:

$ ipv6calc -qi 2001:0db8:0000:0055:0000:0000:0000:0100
Address type: unicast, global-unicast, productive, iid, iid-local
Registry for address: reserved(RFC3849#4)
Address type has SLA: 0055
Interface identifier: 0000:0000:0000:0100

SLA stands for Site Level Aggregation, which means subnet. If you change 0055 to 0056 then you have a new subnet. The interface identifier is the portion that identifies a single network interface. Think of an IPv6 address as having three parts: the network address, which is the same for every node on your network, and the subnet and host addresses, which you control. (Network nerds use all kinds of cool terminology to say these things, but I prefer the simplified version.)

|---network---|  |subnet|  |---------host-------|
2001:0db8:0000    :0055     :0000:0000:0000:0100

IPv6 addresses are written in hexadecimal, using the 16 characters 0-9 and a-f. Within the subnet and host blocks, you can use any value from 0000 to ffff, so even if you count on your fingers this isn’t too hard to figure out.
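
If you would rather let Python do the hex bookkeeping, the standard library’s ipaddress module can build the same three-part picture. This is just a sketch using the article’s example numbers; changing the subnet quad from 0055 to 0056 lands you in a new /64.

# Build addresses from the three parts: a /48 network block, a 16-bit
# subnet quad that we control, and a 64-bit host portion.
import ipaddress

network = ipaddress.IPv6Network("2001:db8::/48")   # the |---network---| part
host_part = 0x100                                  # the last 64 bits (host portion)

for subnet_quad in (0x55, 0x56):                   # 0055 -> 0056 means a new subnet
    subnet = ipaddress.IPv6Network((int(network.network_address) + (subnet_quad << 64), 64))
    address = ipaddress.IPv6Address(int(subnet.network_address) + host_part)
    print(f"subnet {subnet}   host address {address}")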

Having calculators helps check your work. (Free tip to documentation writers and anyone who wants to be helpful: examples of both correct and incorrect output are fabulously useful.) There are two IPv6 calculators that I use. subnetcalc is actively maintained, while sipcalc is not, though the maintainers accept patches and bugfixes. They work similarly, and present information in slightly different ways. Sometimes all you need is a different viewpoint.

Let’s say your ISP gives you 2001:db8:abcd::0/64. How many addresses is that?

$ subnetcalc 2001:db8:abcd::0/64
Address       = 2001:db8:abcd::
                   2001 = 00100000 00000001
                   0db8 = 00001101 10111000
                   abcd = 10101011 11001101
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
Network       = 2001:db8:abcd:: / 64
Netmask       = ffff:ffff:ffff:ffff::
Wildcard Mask = ::ffff:ffff:ffff:ffff
Hosts Bits    = 64
Max. Hosts    = 18446744073709551616   (2^64 - 1)
Host Range    = { 2001:db8:abcd::1 - 2001:db8:abcd:0:ffff:ffff:ffff:ffff }
Properties    =
   - 2001:db8:abcd:: is a NETWORK address
[...]

18,446,744,073,709,551,616 addresses is probably enough. The Wildcard Mask shows the bits that define your host addresses. But maybe you want to divide this up a bit. There are 128 bits in an IPv6 address (8 quads x 16 bits), so let’s plug that into subnetcalc and see what happens:

$ subnetcalc 2001:db8:abcd::0/128
[...]
Network       = 2001:db8:abcd:: / 128
Netmask       = ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
Wildcard Mask = ::
Hosts Bits    = 0
Max. Hosts    = 0   (2^0 - 1)
Host Range    = { 2001:db8:abcd::1 - 2001:db8:abcd:: }

Zero hosts? That doesn’t sound good. sipcalc shows the same thing in a different way:

$ sipcalc 2001:db8:abcd::0/128
-[ipv6 : 2001:db8:abcd::0/128] - 0

[IPV6 INFO]
Expanded Address        - 2001:0db8:abcd:0000:0000:0000:0000:0000
Compressed address      - 2001:db8:abcd::
Subnet prefix (masked)  - 2001:db8:abcd:0:0:0:0:0/128
Address ID (masked)     - 0:0:0:0:0:0:0:0/128
Prefix address          - ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
Prefix length           - 128
Address type            - Aggregatable Global Unicast Addresses
Network range           - 2001:0db8:abcd:0000:0000:0000:0000:0000 -
                          2001:0db8:abcd:0000:0000:0000:0000:0000

So we want something between /64 and /128.

$ subnetcalc 2001:db8:abcd::0/86 -n
Address       = 2001:db8:abcd::
                   2001 = 00100000 00000001
                   0db8 = 00001101 10111000
                   abcd = 10101011 11001101
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
Network       = 2001:db8:abcd:: / 86
Netmask       = ffff:ffff:ffff:ffff:ffff:fc00::
Wildcard Mask = ::3ff:ffff:ffff
Hosts Bits    = 42
Max. Hosts    = 4398046511103   (2^42 - 1)
Host Range    = { 2001:db8:abcd::1 - 2001:db8:abcd::3ff:ffff:ffff }
Properties    =
   - 2001:db8:abcd:: is a NETWORK address

The -n option disables DNS lookups. We’re getting closer:

$ subnetcalc 2001:db8:abcd::0/120 -n
Address       = 2001:db8:abcd::
                   2001 = 00100000 00000001
                   0db8 = 00001101 10111000
                   abcd = 10101011 11001101
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
                   0000 = 00000000 00000000
Network       = 2001:db8:abcd:: / 120
Netmask       = ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00
Wildcard Mask = ::ff
Hosts Bits    = 8
Max. Hosts    = 255   (2^8 - 1)
Host Range    = { 2001:db8:abcd::1 - 2001:db8:abcd::ff }
Properties    =
   - 2001:db8:abcd:: is a NETWORK address

255 hosts works for me. So, while this isn’t quite as easy as ipcalc spelling out multiple subnets at once, it’s still useful. You might want to copy the Range blocks/IPv6 table and keep it close as a handy reference. It prints out the complete 2000::/3 range in a nice table, and also explains the math.
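
And if you want something closer to ipcalc’s trick of spelling out multiple subnets at once, the Python standard library’s ipaddress module can fill the gap. This is only a sketch with the article’s example prefix; subnets() carves a network into smaller blocks of whatever new prefix length you ask for.

# Carve the example /64 into /120 subnets and count addresses.
import ipaddress
from itertools import islice

net = ipaddress.IPv6Network("2001:db8:abcd::/64")
print(f"{net} holds {net.num_addresses:,} addresses")

# There are 2^56 possible /120 subnets in a /64, so only print the first few.
for subnet in islice(net.subnets(new_prefix=120), 4):
    print(f"  {subnet}   hosts {subnet[1]} - {subnet[-1]}")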

Next week, we’ll learn about networking in KVM, and using virtual machines to quickly and easily test various networking scenarios.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Big Data Storage: 7 Key Factors


Defining big data is actually more of a challenge than you might think. The glib definition talks of masses of unstructured data, but the reality is that it’s a merging of many data sources, both structured and unstructured, to create a pool of stored data that can be analyzed for useful information.

We might ask, “How big is big data?” The answer from storage marketers is usually “Big, really big!” or “Petabytes!”, but again, there are many dimensions to sizing what will be stored. Much big data becomes junk within minutes of being analyzed, while some needs to stay around. This makes data lifecycle management crucial. Add to that globalization, which brings foreign customers to even small US retailers. The requirements for personal data lifecycle management under the European Union General Data Protection Regulation go into effect in May 2018, and penalties for non-compliance are draconian, even for foreign companies: up to 4% of global annual revenue.

For an IT industry just getting used to the term terabyte, storing petabytes of new data seems expensive and daunting. This would most definitely be the case with RAID storage arrays; in the past, an EMC salesman could retire on the commissions from selling the first petabyte of storage. But today’s drives and storage appliances have changed all the rules about the cost of capacity, especially where open source software can be brought into play.

In fact, there was quite a bit of buzz at the Flash Memory Summit in August about appliances holding one petabyte in a single 1U rack unit. With 3D NAND and new form factors like Intel’s “Ruler” drives, we’ll reach the 1 PB goal within a few months. It’s a space, power, and cost game changer for big data storage capacity.

Concentrated capacity requires concentrated networking bandwidth. The first step is to connect those petabyte boxes with NVMe over Ethernet, running today at 100 Gbps, but vendors are already in the early stages of 200 Gbps deployment. This is a major leap forward in network capability, but even that isn’t enough to keep up with drives designed with massive internal parallelism.
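
As a rough illustration of why even those links struggle to keep up, here’s a back-of-the-envelope sketch. The drive count, per-drive capacity, and per-drive throughput are hypothetical round numbers, not specifications for any particular product.

# Back-of-the-envelope: a hypothetical 1U appliance full of dense NVMe drives
# vs. a single 100 Gbps network link. All figures are illustrative only.

drives_per_1u = 32                # assumed slot count for a dense form factor
capacity_per_drive_tb = 32        # assumed per-drive capacity, TB
throughput_per_drive_gbps = 24    # assumed sequential read, roughly 3 GB/s

total_capacity_pb = drives_per_1u * capacity_per_drive_tb / 1000
aggregate_drive_gbps = drives_per_1u * throughput_per_drive_gbps

print(f"capacity in 1U:        {total_capacity_pb:.2f} PB")
print(f"aggregate drive speed: {aggregate_drive_gbps} Gbps")
print(f"100 Gbps links needed to keep up: {aggregate_drive_gbps / 100:.0f}")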

Compression of data helps in many big data storage use cases, from removing repetitive images of the same lobby to repeated chunks of Word files. New methods of compression using GPUs can handle tremendous data rates, giving those petabyte 1U boxes a way of quickly talking to the world.

The exciting part of big data storage is really a software story. Unstructured data is usually stored in a key/data format, on top of traditional block IO, which is an inefficient method that tries to mask several mismatches. Newer designs range from extended metadata tagging of objects to storing data in an open-ended key/data format on a drive or storage appliance. These are embryonic approaches, but the value proposition seems clear.

Finally, the public cloud offers a home for big data that is elastic and scalable to huge sizes. This has the obvious value of always being right-sized to enterprise needs, and AWS, Azure, and Google have all added a strong list of big data services to match. With huge instances and GPU support, cloud virtual machines can emulate an in-house server farm effectively, and make a compelling case for a hybrid or public cloud-based solution.

Suffice to say, enterprises have a lot to consider when they map out a plan for big data storage. Let’s look at some of these factors in more detail.


8 Infrastructure Trends Ahead for 2018


The cloud is making inroads into the enterprise, but on-premises IT infrastructure remains a critical part of companies’ IT strategies. According to the Interop ITX and InformationWeek 2018 State of Infrastructure study, companies are continuing to invest in data center, storage, and networking infrastructure as they build out their digital strategies.

The survey, which polled 150 IT leaders and practitioners from a range of industries and company sizes, found that 24% said their organization plans to increase spending on IT infrastructure by more than 10% in the next year. Twenty-one percent plan to increase IT infrastructure spending by 5% to 10% compared to last year, while 18% expect an increase of no more than 5%.

Twenty-seven percent of IT leaders surveyed said their organizations plan to increase the buildout or support of IT infrastructure in order to enable new business opportunities. Another 30% cited increased workforce demands as the driver for a bigger focus on infrastructure.

Enterprises are investing in a variety of technologies to help them achieve their digital goals and keep up with changing demands, according to the study. Storage is a huge focus for companies as they try to keep pace with skyrocketing data growth. In fact, the rapid growth of data and data storage is the single greatest factor driving change in IT infrastructure, the survey showed.

Companies are also focused on boosting network security, increasing bandwidth, adding more servers to their data centers, and building out their WLANs.

At the same time, they see plenty of challenges ahead to modernizing their infrastructure, including cost of implementation, lack of staff expertise, and security concerns.

Read ahead to find out what organizations are planning in the year ahead for their IT infrastructure. For the full survey results, download the complete report. Learn more about infrastructure trends at Interop ITX in Las Vegas April 30-May 4. Register today! 


10 Hyperconvergence Vendors Setting the Pace


As companies look for ways to make their IT infrastructure more agile and efficient, hyperconvergence has become a top consideration. The integrated technology promises faster deployment and simplified management for the cloud era.

An Enterprise Strategy Group survey last year found that 70% of 308 respondents plan to use hyperconverged infrastructure while 15% already use it and 10% are interested in it. IDC reported that hyperconverged sales grew 48.5% year over year in the second quarter of this year, generating $763.4 million in sales. Transparency Market Research estimates the global HCI market to reach $31 billion by 2025, up from $1.5 billion last year.

“It’s moved well beyond the hype phase into the established infrastructure phase,” Christian Perry, research manager covering IT infrastructure at 451 Research, told me in an interview.

With hyperconvergence, organizations can quickly deploy infrastructure to support new workloads, divisions, or projects, he said. “In that sense, it really provides an on-premises cloud-like option.”

Hyperconverged infrastructure leverages software to integrate compute and storage typically in a single appliance on commodity hardware. Fully virtualized, hyperconverged products take a building-block approach and are designed to scale out easily by adding nodes. According to IDC, a key differentiator for hyperconverged systems, compared to other integrated systems, is their scale-out architecture and ability to provide all compute and storage functions through the same x86 server-based resources.

ESG Analyst Dan Conde told me that some newer hyperconverged systems include broader networking features, but that for the most part, the technology’s focus is on storage and “in-the-box” connectivity.

VDI has been a top use case for hyperconverged infrastructure, but Perry said 451 Research is seeing the technology used for a range of use cases, including data protection, and traditional virtualized workloads such as Microsoft applications. Because it’s easy to deploy, the technology is well suited for branch and remote locations, but companies are also running it in the core data centers alongside traditional infrastructure, he said.

Vendor lock-in, high cost, and inflexible scaling (compute and storage capacity must be added at the same rate) are among the drawbacks that some have cited with hyperconvergence platforms. Perry said he hasn’t seen scalability issues among adopters, and that opex costs are much lower than traditional infrastructure. Hyperconverged products also have proven to be highly resilient, he added.

Perry said the first step for organizations evaluating hyperconverged products is to clearly identify their use case, which will narrow their choices. They also should take into account how the product will integrate with the rest of their infrastructure; for example, if it uses a different hypervisor, will the IT team be able to support multiple hypervisors? Companies interested in a product supplied by multiple vendors also need to determine which one will provide support, he said.

The hyperconvergence market has changed quite a bit since its early days, when it was dominated by pure-play startups such as Nutanix and SimpliVity. Today, infrastructure vendors such as Cisco and NetApp have moved into the space, and SimpliVity is now part of Hewlett Packard Enterprise. Nutanix remains a top supplier after going public last year, and some startups remain, but they face stiff competition from the established vendors.

Here’s a look at some of the key players in hyperconvergence today. Please note this list is in alphabetical order and not a ranking.
