
10 Hyperconvergence Vendors Setting the Pace


As companies look for ways to make their IT infrastructure more agile and efficient, hyperconvergence has become a top consideration. The integrated technology promises faster deployment and simplified management for the cloud era.

An Enterprise Strategy Group survey of 308 respondents last year found that 70% plan to use hyperconverged infrastructure, 15% already use it, and 10% are interested in it. IDC reported that hyperconverged sales grew 48.5% year over year in the second quarter of this year, generating $763.4 million in sales. Transparency Market Research estimates the global HCI market will reach $31 billion by 2025, up from $1.5 billion last year.

“It’s moved well beyond the hype phase into the established infrastructure phase,” Christian Perry, research manager covering IT infrastructure at 451 Research, told me in an interview.

With hyperconvergence, organizations can quickly deploy infrastructure to support new workloads, divisions, or projects, he said. “In that sense, it really provides an on-premises cloud-like option.”

Hyperconverged infrastructure uses software to integrate compute and storage, typically in a single appliance built on commodity hardware. Fully virtualized, hyperconverged products take a building-block approach and are designed to scale out easily by adding nodes. According to IDC, a key differentiator for hyperconverged systems, compared to other integrated systems, is their scale-out architecture and their ability to provide all compute and storage functions through the same x86 server-based resources.

ESG Analyst Dan Conde told me that some newer hyperconverged systems include broader networking features, but that for the most part, the technology’s focus is on storage and “in-the-box” connectivity.

VDI has been a top use case for hyperconverged infrastructure, but Perry said 451 Research is seeing the technology applied to a range of workloads, including data protection and traditional virtualized workloads such as Microsoft applications. Because it's easy to deploy, the technology is well suited to branch and remote locations, but companies are also running it in core data centers alongside traditional infrastructure, he said.

Vendor lock-in, high cost, and inflexible scaling (compute and storage capacity must be added at the same rate) are among the drawbacks that some have cited with hyperconvergence platforms. Perry said he hasn’t seen scalability issues among adopters, and that opex costs are much lower than traditional infrastructure. Hyperconverged products also have proven to be highly resilient, he added.
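
To see why the coupled-scaling complaint comes up, here is a rough back-of-the-envelope model; the per-node figures (32 cores and 20 TB) are hypothetical, not specs from any vendor mentioned in this article.

```python
# Rough model of coupled scaling in a hyperconverged cluster: every node
# adds a fixed slice of compute AND storage, so growing one resource
# forces you to buy the other. Per-node figures are hypothetical.
CORES_PER_NODE = 32
TB_PER_NODE = 20

def nodes_needed(required_cores: int, required_tb: int) -> int:
    """Smallest node count that satisfies both the compute and storage needs."""
    by_compute = -(-required_cores // CORES_PER_NODE)  # ceiling division
    by_storage = -(-required_tb // TB_PER_NODE)
    return max(by_compute, by_storage)

# A storage-heavy workload: modest compute, lots of capacity.
n = nodes_needed(required_cores=64, required_tb=400)
print(f"{n} nodes -> {n * CORES_PER_NODE} cores, {n * TB_PER_NODE} TB")
# Prints: 20 nodes -> 640 cores, 400 TB (ten times the CPU actually needed).
```

A storage-heavy workload ends up buying far more compute than it needs, which is exactly the inflexibility critics point to; Perry's counterpoint is that, in practice, adopters have not reported scalability problems.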

Perry said the first step for organizations evaluating hyperconverged products is to clearly identify their use case, which will narrow their choices. They also should take into account how the product will integrate with the rest of their infrastructure; for example, if it uses a different hypervisor, will the IT team be able to support multiple hypervisors? Companies interested in a product supplied by multiple vendors also need to determine which one will provide support, he said.

The hyperconvergence market has changed quite a bit since its early days, when it was dominated by pure-play startups such as Nutanix and SimpliVity. Today, infrastructure vendors such as Cisco and NetApp have moved into the space, and SimpliVity is now part of Hewlett Packard Enterprise. Nutanix remains a top supplier after going public last year, and some startups remain, but they face stiff competition from the established vendors.

Here’s a look at some of the key players in hyperconvergence today. Please note this list is in alphabetical order and not a ranking.

(Image: kentoh/Shutterstock)




Final Ubuntu Desktop 17.10 Beta Arrives » Linux Magazine


Canonical has announced the release of the final beta of Ubuntu 17.10, codenamed Artful Aardvark. With this release, Ubuntu codenames have cycled back to the beginning of the English alphabet. The name is apt, because Ubuntu is, in a sense, starting fresh: Canonical abandoned its own desktop ambitions earlier this year, signaling the shutdown of efforts like Unity. This is the first release of Ubuntu to ship with Gnome as the official and default desktop environment and shell.

However, Canonical has ensured that people upgrading from the previous release of Ubuntu, running Unity 7, will not be in for a shock. Ubuntu developers have worked on adding some custom features and functionalities so that users don’t have to change their workflow too much.

Will Cooke, Director of Ubuntu Desktop at Canonical, said, “… we’ve spent time making sure that the people who have been using Unity 7 for years don’t have to change their workflow too much. The most obvious example of this is the Ubuntu Dock (based on Dash To Dock and developed upstream).”

Ubuntu is also adopting Wayland as the default display server for the desktop, depending on the hardware; users can still switch between Wayland and Xorg. Beyond the customizations meant to ease the transition for existing Ubuntu users, Canonical is sticking with default Gnome settings and features, including the newly designed Gnome Settings. Ubuntu 17.10 also brings support for driverless printing, which means compatible printers work without installing drivers.

Canonical has also discontinued its own Ubuntu Store; the desktop now defaults to Gnome Software, which handles system updates as well.

With this release, you can also move away from distro-specific RPM and DEB packages and use bundled Snap packages. Unfortunately, the rest of the desktop Linux world is rallying behind Flatpak, so it will be interesting to see if Canonical drops Snaps on the desktop and adopts Flatpak.

You can download the beta from the official Ubuntu page.




Kubernetes 1.8 Announced » Linux Magazine


The Kubernetes community has announced the release of Kubernetes 1.8, which comes with many new features. Kubernetes, an open source implementation of the ideas behind Borg, the system Google runs internally to power its own clusters, has become one of the hottest open source projects and the de facto tool for container management and orchestration.

This release is as much about the project as it is about the technology. According to the Kubernetes team, “In addition to functional improvements, we’re increasing project-wide focus on maturing process, formalizing architecture, and strengthening Kubernetes’ governance model.”

One of the highlights of this release is the graduation of role-based access control (RBAC) to stable. RBAC allows admins to restrict system access to authorized users, adding another layer of security. In Kubernetes, RBAC lets cluster administrators dynamically define roles to enforce access policies through the Kubernetes API.
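
As a rough illustration of what defining such a role looks like programmatically, the sketch below uses the official Kubernetes Python client to create a namespaced Role granting read-only access to pods; the role name, namespace, and verbs are arbitrary examples, and most teams would express the same thing as a YAML manifest.

```python
# Minimal RBAC sketch using the official Kubernetes Python client
# (pip install kubernetes). Creates a Role that grants read-only access
# to pods in the "default" namespace; all names here are arbitrary.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig credentials

pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="default"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],            # "" is the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],
        )
    ],
)

rbac_api = client.RbacAuthorizationV1Api()
rbac_api.create_namespaced_role(namespace="default", body=pod_reader)
# A RoleBinding (not shown) would then attach this role to users or
# service accounts, which is how the access policy is actually enforced.
```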

This release also adds beta support for filtering outbound traffic through network policies, augmenting the existing support for filtering inbound traffic to a pod. Together, these features give admins powerful tools for enforcing organizational and regulatory security requirements within Kubernetes.
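
For illustration, a minimal egress policy could look like the sketch below, again using the Kubernetes Python client; the label selector, namespace, and port are hypothetical.

```python
# Sketch of an egress NetworkPolicy via the Kubernetes Python client:
# pods labeled app=billing may only open outbound connections on TCP 443.
# The labels, namespace, and port are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="restrict-egress", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "billing"}),
        policy_types=["Egress"],
        egress=[
            client.V1NetworkPolicyEgressRule(
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=443)]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy
)
```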

Kubernetes 1.8 is available for download on GitHub.




Big Data Storage: 7 Key Factors


Defining big data is actually more of a challenge than you might think. The glib definition talks of masses of unstructured data, but the reality is that it’s a merging of many data sources, both structured and unstructured, to create a pool of stored data that can be analyzed for useful information.

We might ask, “How big is big data?” The answer from storage marketers is usually “Big, really big!” or “Petabytes!”, but again, there are many dimensions to sizing what will be stored. Much big data becomes junk within minutes of being analyzed, while some needs to stay around. This makes data lifecycle management crucial. Add to that globalization, which brings foreign customers to even small US retailers. The requirements for personal data lifecycle management under the European Union General Data Protection Regulation go into effect in May 2018, and the penalties for non-compliance are draconian, even for foreign companies: up to 4% of global annual revenue.

For an IT industry just getting used to the term terabyte, storing petabytes of new data seems expensive and daunting. That would most definitely be the case with RAID storage arrays; in the past, an EMC salesman could retire on the commissions from selling the first petabyte of storage. But today’s drives and storage appliances have changed all the rules about the cost of capacity, especially where open source software can be brought into play.

In fact, there was quite a bit of buzz at the Flash Memory Summit in August about appliances holding one petabyte in a single 1U enclosure. With 3D NAND and new form factors like Intel’s “Ruler” drives, we’ll reach the 1 PB goal within a few months. It’s a space, power, and cost game changer for big data storage capacity.

Concentrated capacity requires concentrated networking bandwidth. The first step is to connect those petabyte boxes with NVMe over Ethernet, running today at 100 Gbps, but vendors are already in the early stages of 200 Gbps deployment. This is a major leap forward in network capability, but even that isn’t enough to keep up with drives designed with massive internal parallelism.
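
To put those link speeds in perspective, here is a quick back-of-the-envelope calculation of how long it takes to drain a single 1 PB appliance over one link, ignoring protocol overhead:

```python
# Back-of-the-envelope: time to move 1 PB over a single link, ignoring
# protocol overhead and assuming the drives themselves can keep up.
PETABYTE_BITS = 10**15 * 8  # 1 PB (decimal) expressed in bits

for gbps in (100, 200):
    hours = PETABYTE_BITS / (gbps * 10**9) / 3600
    print(f"{gbps} Gbps: {hours:.1f} hours")
# 100 Gbps: 22.2 hours
# 200 Gbps: 11.1 hours
```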

Compression helps in many big data storage use cases, from removing repetitive images of the same lobby to deduplicating repeated chunks of Word files. New compression methods that use GPUs can handle tremendous data rates, giving those petabyte 1U boxes a way of quickly talking to the world.
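
A toy sketch of the underlying idea, deduplicating identical chunks and compressing whatever is left, is shown below; the fixed 64 KB chunk size is arbitrary, and CPU-based zlib merely stands in for the GPU-accelerated compressors referred to above.

```python
# Toy chunk-level deduplication plus compression. zlib stands in for the
# GPU-accelerated compressors mentioned above; the chunk size is arbitrary.
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024  # 64 KB fixed-size chunks

def dedupe_and_compress(data: bytes) -> dict:
    """Map chunk hash -> compressed chunk, storing each unique chunk only once."""
    store = {}
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:  # a repeated chunk costs only its hash
            store[digest] = zlib.compress(chunk)
    return store

# Highly repetitive input (think near-identical frames of the same lobby)
# collapses to a couple of unique chunks, each compressed once.
data = b"same lobby image" * 500_000  # roughly 8 MB of repetition
store = dedupe_and_compress(data)
stored = sum(len(c) for c in store.values())
print(f"{len(data)} bytes in, {stored} bytes kept across {len(store)} unique chunks")
```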

The exciting part of big data storage is really a software story. Unstructured data is usually stored in a key/data format, on top of traditional block IO, which is an inefficient method that tries to mask several mismatches. Newer designs range from extended metadata tagging of objects to storing data in an open-ended key/data format on a drive or storage appliance. These are embryonic approaches, but the value proposition seems clear.
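
As a purely illustrative sketch of the key/data access model with extended metadata tagging (an in-memory toy, not any vendor's interface):

```python
# Illustrative in-memory key/data ("object") store with extended metadata
# tags, in contrast to addressing fixed-size blocks by logical block address.
# Not any vendor's API; just a sketch of the access model.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StoredObject:
    data: bytes
    tags: Dict[str, str] = field(default_factory=dict)  # searchable metadata

class ObjectStore:
    def __init__(self) -> None:
        self._objects: Dict[str, StoredObject] = {}

    def put(self, key: str, data: bytes, **tags: str) -> None:
        self._objects[key] = StoredObject(data, dict(tags))

    def get(self, key: str) -> bytes:
        return self._objects[key].data

    def find(self, **tags: str) -> List[str]:
        """Return keys whose metadata matches every requested tag."""
        return [key for key, obj in self._objects.items()
                if all(obj.tags.get(t) == v for t, v in tags.items())]

store = ObjectStore()
store.put("cam42/frame-001", b"...jpeg bytes...", source="lobby-cam", retain="30d")
print(store.find(source="lobby-cam"))  # ['cam42/frame-001']
```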

Finally, the public cloud offers a home for big data that is elastic and scalable to huge sizes. It has the obvious value of always being right-sized to enterprise needs, and AWS, Azure, and Google have all added a strong list of big data services to match. With huge instances and GPU support, cloud virtual machines can emulate an in-house server farm effectively, making a compelling case for a hybrid or public cloud-based solution.

Suffice to say, enterprises have a lot to consider when they map out a plan for big data storage. Let’s look at some of these factors in more detail.

(Images: Timofeev Vladimir/Shutterstock)




Oracle Donating Java EE to the Eclipse Foundation » Linux Magazine


Oracle is donating yet another open source technology that it acquired from Sun Microsystems. After discussions with IBM, Red Hat, and a few open source foundations, Oracle has chosen the Eclipse Foundation as the rightful home for the Java Enterprise Edition (Java EE) platform.

“The Eclipse Foundation has strong experience and involvement with Java EE and related technologies. This will help us transition Java EE rapidly, create community-friendly processes for evolving the platform, and leverage complementary projects such as MicroProfile. We look forward to this collaboration,” said David Delabassee, Software Evangelist at Oracle.

To ensure a smooth transition to the new home, Oracle has made certain changes to its proposal.

The company will relicense Java EE technologies and related GlassFish technologies to the foundation. This would include Reference Implementations (RIs), Technical Compatibility Kits (TCKs), and associated project documentation.

Oracle is also recommending a new name and new branding for the platform within the foundation. However, for continuity, the company intends to enable the use of existing javax package names and component specification names for existing Java Specification Requests (JSRs).


