
6 Hot Tech Trends That Will Impact the Enterprise in 2018


The start of a new year always brings a flood of forecasts from technology pundits for what might happen in the next 12 months. For some reason, 2018 triggered even more prognostications from tech experts than usual. We received dozens of predictions for networking, storage, and data center trends that IT pros should expect to see this year.

After sorting through them, we noticed a pattern: many experts predict more of the same. The trends and hot technologies of 2017, such as machine learning and automation, will continue to shape IT infrastructure in 2018, but the pace and intensity of innovation and adoption seem likely to increase.

“It’s no secret that AI and machine learning are driving a lot of the innovation across the various ecosystems and technology domains that IT cares about,” Rohit Mehra, program VP of network infrastructure at IDC, said in a webcast on the firm’s 2018 predictions for worldwide enterprise infrastructure.

In fact, the rapid incorporation of AI into the workplace will mean that by 2021, more than half of enterprise infrastructure will use some form of cognitive and artificial intelligence to improve productivity, manage risk, and reduce costs, according to IDC.  

To be sure, 2018 will be another year of rapid change for IT infrastructure. Read on for six key tech trends that infrastructure pros should keep an eye on in the months ahead.

(Image: alleachday/Shutterstock)




Google Announces Kubeflow to Bring Kubernetes t… » Linux Magazine


After Kubernetes and TensorFlow, Google has now released Kubeflow, a new open source project that makes it easy to consume machine learning (ML) stacks with Kubernetes.

Kubernetes is being touted as the Linux of the cloud, and an increasing number of people are employing it in different use cases. Machine learning is one of the fastest-growing use cases for Kubernetes, but it's quite a challenge to get the entire machine learning stack up and running.

“Building any production-ready machine learning system involves various components, often mixing vendors and hand-rolled solutions. Connecting and managing these services for even moderately sophisticated setups introduces huge barriers of complexity in adopting machine learning,” said David Aronchick and Jeremy Lewi, Project Manager and Engineer, respectively, on the Kubeflow project. “Infrastructure engineers will often spend a significant amount of time manually tweaking deployments and hand rolling solutions before a single model can be tested.”

Kubeflow solves this problem because it makes using ML stacks on Kubernetes fast and extensible. It’s hosted on GitHub, and the repository contains three components: JupyterHub, to create and manage interactive Jupyter notebooks; a TensorFlow (TF) Custom Resource Definition (CRD) that can be configured to use CPUs or GPUs and adjusted to the size of a cluster with a single setting; and a TF Serving container.
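To make the TF CRD concrete: it lets a TensorFlow training job be declared as a Kubernetes custom resource rather than hand-rolled deployments. The sketch below builds such a manifest as a plain Python dict; the `apiVersion`, field names, and image tag are illustrative assumptions based on the early TFJob CRD, not details from the announcement, so check the Kubeflow repository on GitHub for the actual schema.

```python
# Sketch: a TFJob custom-resource manifest expressed as a Python dict.
# The apiVersion and field names here are assumptions for illustration;
# the authoritative schema lives in the Kubeflow GitHub repository.
import json


def make_tfjob(name: str, image: str, replicas: int = 1) -> dict:
    """Build a minimal TFJob manifest for a single-worker training run."""
    return {
        "apiVersion": "kubeflow.org/v1alpha1",  # assumed CRD group/version
        "kind": "TFJob",
        "metadata": {"name": name},
        "spec": {
            "replicaSpecs": [
                {
                    "replicas": replicas,
                    "template": {
                        "spec": {
                            "containers": [
                                {"name": "tensorflow", "image": image}
                            ],
                            "restartPolicy": "OnFailure",
                        }
                    },
                }
            ]
        },
    }


if __name__ == "__main__":
    job = make_tfjob("mnist-train", "tensorflow/tensorflow:1.4.0")
    print(json.dumps(job, indent=2))
```

In practice such a manifest would be serialized to YAML and applied with `kubectl apply -f`, after which the TFJob controller schedules the training pods on the cluster.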

Kubeflow is a multicloud solution: if you can run Kubernetes in your environment, you can run Kubeflow.




The Best Is Yet to Come


The NVMe storage interface improves flash performance and will reshape the industry when coupled with SCM.

Non-Volatile Memory Express (NVMe) has been around for a while; development of the interface standard started in 2007, and it was first released in 2011. NVMe promises boosts in storage performance and much lower latency for flash drives, but the real rewards will come down the road when the interface is paired with the next-generation storage media called storage class memory, or SCM. That's when data storage will take a significant leap forward.

Like all new technologies, the evolution of NVMe has come in phases. About two years ago, a lot of the big storage vendors began using NVMe as an interface for cache. At the time, it offered a high-speed connection, but was still quite expensive, and therefore was focused only on narrow use cases in the array.    

Over the next two years, there's going to be a land grab of sorts, and NVMe will become a given in enterprise storage, just as flash is today. The differences in benefits will come down to implementation. We learned a few years ago with flash that data services matter and proprietary approaches don't work. More businesses need data services to meet security, protection, and availability requirements for their workloads, and a proprietary approach impedes agility and scalability. To keep pace with customer demands, successful vendors will embrace industry standards.

Those lessons will need to be heeded as NVMe becomes more mainstream. It’s also important to understand that in the larger picture, NVMe is only part of the story. NVMe is just the interface or protocol, not the media type.  The transitions in interface and media are moving on parallel tracks. On the interface side, NVMe takes advantage of the parallelism in CPUs and SSDs, leaving behind the overhead of storage protocols like SAS and SATA that were designed for spinning disks.
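The parallelism gap can be made concrete with back-of-the-envelope arithmetic. The queue limits below are the commonly cited maximums from the AHCI/SATA and NVMe specifications (a single 32-command queue versus up to 65,535 I/O queues of up to 65,536 commands each); this is a capacity comparison only, not a benchmark.

```python
# Back-of-the-envelope comparison of outstanding-command capacity.
# SATA/AHCI exposes one queue of 32 commands; NVMe allows up to
# 65,535 I/O queues with up to 65,536 commands each (per the specs).
sata_queues, sata_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536

sata_outstanding = sata_queues * sata_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"SATA/AHCI outstanding commands: {sata_outstanding}")
print(f"NVMe outstanding commands:      {nvme_outstanding:,}")
print(f"Ratio: roughly {nvme_outstanding // sata_outstanding:,}x")
```

The point of the exercise is that the protocol itself stops being the bottleneck: with thousands of deep queues, NVMe can keep many CPU cores and flash channels busy at once, which a single shallow SATA queue cannot.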

On the media side, NVMe opens the door to next-generation media. SCM is just beginning to enter the picture in enterprise storage and some day may completely replace SSDs. For now, NVMe will mostly be leveraged with SSDs (NAND flash media), which will improve latency, but come at a premium price. That said, SCM like Intel Optane could be the X-factor in the next generation of storage, with much lower latency than NAND flash.

As SCM becomes available in mainstream enterprise arrays, the expense will make it a subset of the overall persistent storage, with the rest of the array being flash. Therefore, it will be critical to have intelligent software built into the array to make cost-effective use of this media. Then enterprises will be able to consolidate all mission-critical workloads onto a single array; you won’t want to have a dedicated array for your high-performance applications and a separate one for the rest of your tier 1 apps.
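One way such intelligent placement could work is a simple heat-based policy: keep the most frequently accessed blocks on the small, fast SCM tier and everything else on flash. The sketch below is a minimal illustration of that idea; the tier capacity, the access log, and the "hottest blocks win" policy are assumptions for demonstration, not any vendor's actual algorithm.

```python
# Sketch: heat-based block placement across a small SCM tier and a
# larger flash tier. Capacities and policy are illustrative only.
from collections import Counter


def place_blocks(access_log, scm_capacity):
    """Return (scm_blocks, flash_blocks) given a log of block accesses."""
    heat = Counter(access_log)                       # access count per block
    ranked = [blk for blk, _ in heat.most_common()]  # hottest first
    scm = set(ranked[:scm_capacity])                 # hottest blocks -> SCM
    flash = set(ranked[scm_capacity:])               # the rest -> flash
    return scm, flash


if __name__ == "__main__":
    log = ["a", "b", "a", "c", "a", "b", "d"]
    scm, flash = place_blocks(log, scm_capacity=2)
    print("SCM:  ", sorted(scm))    # the two most-accessed blocks
    print("Flash:", sorted(flash))
```

A real array would of course track heat continuously and migrate data in the background, but the economics are the same: a sliver of expensive SCM absorbs the hot working set while bulk flash holds everything else.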

All-flash arrays are important, but most people in the high end are already there and wondering what’s coming next and what they have to do to future-proof their investment. NVMe will offer a marked improvement in performance and latency over SAS and SATA for all-flash environments, but it will be the pairing of NVMe with SCM that will propel the industry forward.




LibreOffice Based CODE 3.0 Released » Linux Magazine


Collabora Productivity, a UK-based company that offers a cloud-based LibreOffice solution, has announced the release of CODE 3.0.

CODE is the community version of LibreOffice Online, which is available free to anyone who wants to run LibreOffice in their own cloud. In a press release, Collabora Productivity stated, “CODE is the LibreOffice Online solution with the latest developments, perfect for home users that want to integrate their own online Office Suite with their preferred File Share and Sync solution. It allows editing of richly formatted documents directly from a web browser, with excellent support for all popular office file formats, including text documents (docx, doc, odt, …), spreadsheets (xlsx, xls, ods, …), and presentations (pptx, ppt, odp, …).”

Michael Meeks, General Manager of Collabora Productivity, told us that 3.0 is an interesting release in which they have started to bring parts of the rich LibreOffice functionality to the browser. Combined with collaboration, it’s easy to deploy and powerful to use. “In the Office world, people have a choice of any two of feature-depth, collaboration, or web deployment. We’re starting to provide all three,” said Meeks.

CODE 3.0 comes with many new features, including full-featured editing dialogs like those in the desktop version of LibreOffice. The main purpose of CODE is to give users early access to the very latest feature additions and updates to LibreOffice Online, enabling them to develop, test, and contribute improvements back to the project.

Collabora sells a CODE-based commercial version called Collabora Online.




What’s Ahead for Infrastructure in 2018


Interop ITX research reveals enterprise storage and networking plans for the year ahead.

In IT, we hear all the time about the rise of the cloud. The way some vendors and industry pundits talk, you’d think all organizations are jumping to public cloud services and doing away with their on-premises infrastructure. Not so fast.

According to the Interop ITX and InformationWeek 2018 State of Infrastructure study, IT infrastructure is alive and well. In fact, many organizations are focused on expanding their on-premises capabilities in the upcoming year. They’re investing in data center, storage, and networking technologies to keep up with soaring data demands and to advance their digital initiatives.

For details on how enterprises are planning to expand their infrastructure, check out this snapshot of our survey’s top findings:


