
KDE Plasma 5.13 Is Here » Linux Magazine


The KDE Project has announced the release of Plasma 5.13, the latest version of its desktop environment. KDE is known for its modular design and under-the-hood customization, but at times these benefits have come at the cost of resource efficiency. Because KDE has also been targeting mobile devices, this release takes advantage of that optimization work and runs smoothly on under-powered ARM laptops, high-end gaming PCs, and everything in between. Resource efficiency also means that on powerful machines, more resources are left free for applications instead of being consumed by the desktop itself.

Web browsers are the gateway to the Internet, and Plasma 5.13 comes with browser integration that allows users to monitor and control supported browsers, including Chrome/Chromium and Firefox, from a desktop widget. Users can play and pause media running in the browser, giving them better control not only over their own entertainment but also over annoying autoplaying videos embedded in websites.

The community has also improved the KDE Connect experience; users can now send links directly to their phone using KDE Connect. The Media Control Widget has been redesigned with added support for the MPRIS specification, which means media players can now be controlled from the media controls in the desktop tray or from a phone using KDE Connect.
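MPRIS itself is just a standard D-Bus interface, so its role is easy to see outside of Plasma. The following is a minimal sketch, assuming the third-party pydbus library and at least one MPRIS-capable player (such as VLC) running on the session bus; it is not how Plasma's applet is implemented, only an illustration of the protocol that the applet and KDE Connect speak.

```python
# Minimal MPRIS sketch: toggle playback on every MPRIS-capable player
# currently registered on the session bus. Assumes pydbus is installed
# and at least one player (e.g., VLC) is running.
from pydbus import SessionBus

bus = SessionBus()

# MPRIS players register bus names under the org.mpris.MediaPlayer2.* prefix.
names = bus.get("org.freedesktop.DBus").ListNames()
players = [n for n in names if n.startswith("org.mpris.MediaPlayer2.")]

for name in players:
    player = bus.get(name, "/org/mpris/MediaPlayer2")
    print("Toggling playback on", name)
    player.PlayPause()  # method on the org.mpris.MediaPlayer2.Player interface
```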

On the security side, Vaults, Plasma’s storage encryption utility, includes a new CryFS backend, better error reporting, a more polished interface, and the ability to remotely open and close vaults via KDE Connect.

KDE already had good multi-monitor support, where you could even choose a customized layout for each monitor. The 5.13 release makes it easier to connect external monitors: when a new external monitor is connected, a dialog pops up offering the option to control its position relative to the primary monitor.

The desktop has also received some visual upgrades, from the login screen to icons. Plasma 5.13 will appear in different distributions depending on their own release cycle, but users can test the latest release with KDE’s own distribution called “neon”. openSUSE Tumbleweed and Arch Linux will be among the first to offer this release.


GitLab Drops Pricing After Microsoft, GitHub A… » Linux Magazine


As the news broke that Microsoft was acquiring GitHub, panicked users started to move their accounts to GitLab, an open source platform built around Linus Torvalds’ Git version control system.

Many leading figures of the open source world argue that GitHub is actually in a more accountable and reliable position than before, because Microsoft will tread carefully so as not to stain the positive image the company has been building with the open source community.

However, that didn’t stop users from moving away from GitHub. Sensing an opportunity, GitLab dropped pricing for its self-hosted GitLab Ultimate plan and its hosted Gold plan; both plans are now available for free to open source projects and educational institutions.

In an interview with Frederic Lardinois of TechCrunch, GitLab CEO Sid Sijbrandij said, “Most education and open source projects don’t have access to enhanced security or performance management tools for their software projects. At GitLab, we are happy to have achieved a level of success that allows us to extend the full set of features to these important communities by offering GitLab Ultimate & GitLab Gold plans for free.”

One caveat: the prices have been dropped, but these users won’t get the commercial support from GitLab that paying users receive.




How Erasure Coding is Evolving


Data resiliency is at a crossroads. Traditional SAN storage solutions that run on a Redundant Array of Independent Disks (RAID) are creaking under the strain of new data demands. While striping, mirroring, and parity in RAID implementations provide various degrees of protection, the cost of resiliency, long recovery times, and the vulnerability of data during RAID’s recovery process are all paving the way for alternatives.

One option is erasure coding (EC), which is distinctly different from hardware-based approaches such as RAID. EC is an algorithm-based implementation that is not tied to any specific hardware. It breaks data into fragments, augments and encodes them with redundant pieces of information, and then distributes the encoded fragments across disks, storage nodes, or locations. With erasure coding, data that becomes unreadable on one node can still be reconstructed using information about the data stored elsewhere.
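To make the fragment-and-encode idea concrete, here is a deliberately simplified sketch in Python (an illustration, not taken from the article): the data is split into k fragments, a single XOR parity fragment is added, and any one lost fragment can be rebuilt from the survivors. Production erasure codes such as Reed-Solomon tolerate multiple simultaneous failures, but the reconstruction principle is the same.

```python
# Simplified single-parity erasure coding: k data fragments + 1 parity fragment.
# Real deployments use Reed-Solomon or similar codes that survive multiple failures.

def encode(data: bytes, k: int) -> list:
    """Split data into k equal-length fragments and append one XOR parity fragment."""
    frag_len = -(-len(data) // k)                  # ceiling division
    padded = data.ljust(frag_len * k, b"\x00")     # pad so all fragments are equal length
    fragments = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = bytearray(frag_len)
    for frag in fragments:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return fragments + [bytes(parity)]

def reconstruct(fragments: list) -> list:
    """Rebuild a single missing fragment (marked None) by XOR-ing the survivors."""
    missing = fragments.index(None)
    frag_len = len(next(f for f in fragments if f is not None))
    rebuilt = bytearray(frag_len)
    for frag in fragments:
        if frag is not None:
            for i, byte in enumerate(frag):
                rebuilt[i] ^= byte
    fragments[missing] = bytes(rebuilt)
    return fragments

# Encode, "lose" one fragment, and recover it.
encoded = encode(b"erasure coding keeps data recoverable", k=4)
damaged = encoded.copy()
damaged[2] = None                                  # simulate a failed disk or node
recovered = reconstruct(damaged)
assert recovered[2] == encoded[2]
```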

Unlike RAID, EC does not require a specialized hardware controller and provides better resiliency. More importantly, it protects data during the recovery process. Depending on the degree of resiliency, complete recovery is possible even when only half of the data elements are available, which is a major advantage over RAID. Compared with mirroring, EC also consumes less storage. The downside, however, is that EC is CPU-intensive and can introduce latency.

Storage efficiency vs. fault tolerance

Erasure coding is most often implemented using Reed-Solomon (RS) codes. For those familiar with RS codes, two performance metrics matter: storage efficiency and fault tolerance. EC involves a trade-off between the two. Storage efficiency indicates how much additional storage is required to assure resiliency, whereas fault tolerance indicates how many element failures the system can recover from.

These metrics pull against each other: more fault tolerance means lower storage efficiency. In addition, the more distributed and geographically widespread the data is, the higher the latency, because recalling fragments from different locations or systems takes more time.
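As a rough illustration (the configurations below are assumptions, not figures from the article), the trade-off falls directly out of the RS parameters k (data fragments) and m (parity fragments): the code tolerates any m lost fragments while storing k + m fragments for every k fragments of user data.

```python
# Storage efficiency vs. fault tolerance for a few illustrative RS(k, m) layouts.
# k = data fragments, m = parity fragments; the code survives any m failures.
layouts = [(4, 2), (10, 4), (6, 6)]

for k, m in layouts:
    efficiency = k / (k + m)          # usable fraction of raw capacity
    overhead = (k + m) / k            # raw bytes stored per byte of user data
    print(f"RS({k},{m}): tolerates {m} failures, "
          f"efficiency {efficiency:.0%}, overhead {overhead:.2f}x")

# RS(4,2): tolerates 2 failures, efficiency 67%, overhead 1.50x
# RS(10,4): tolerates 4 failures, efficiency 71%, overhead 1.40x
# RS(6,6): tolerates 6 failures, efficiency 50%, overhead 2.00x
```

For comparison, three-way mirroring also tolerates two failures but at only 33% efficiency, which is why EC consumes less storage than mirroring for a given level of protection.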

Hyperscale data centers pose fresh challenges for data resiliency in terms of node failures and degraded reads. Modern erasure code algorithms have evolved to include local regeneration codes, codes with availability, codes with sequential recovery, coupled layer MSR codes, selectable recovery codes, and others that are highly customized.

Acceleration and off-loading

Erasure codes are compute-intensive, so it has become necessary to offload that work from the main CPU. Research into optimizing various aspects of EC is well underway in both academia and industry. Innovations in data center hardware are promising too: whether virtual or bare metal, modern servers increasingly offer acceleration resources such as GPUs and FPGAs that can take on this computation.

One requirement of GPU-based acceleration is parallelization of the EC algorithms. Many modern resiliency codes are vector codes, in which the same encoding operations are applied independently across large numbers of data elements. These vector formulations make it possible to leverage GPU cores and high-speed on-chip memory (such as texture memory) to achieve parallelism.
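The data-parallel structure that makes GPU offload attractive can be sketched on the CPU with NumPy (an illustrative assumption, not the article's implementation): encoding many stripes at once reduces to a few element-wise array operations, exactly the kind of work a GPU spreads across thousands of cores.

```python
# Vectorized single-parity encoding across many stripes at once.
# XOR-reducing along the fragment axis yields every stripe's parity in one
# element-wise pass -- the same data-parallel shape a GPU kernel would exploit.
import numpy as np

rng = np.random.default_rng(0)
num_stripes, k, frag_len = 10_000, 4, 4096            # illustrative sizes
data = rng.integers(0, 256, size=(num_stripes, k, frag_len), dtype=np.uint8)

# One vectorized reduction computes parity for all 10,000 stripes.
parity = np.bitwise_xor.reduce(data, axis=1)           # shape: (num_stripes, frag_len)

# Reconstruction is the same operation: XOR the surviving fragments with parity.
lost = 2                                               # pretend fragment 2 failed in every stripe
survivors = np.delete(data, lost, axis=1)
rebuilt = np.bitwise_xor.reduce(survivors, axis=1) ^ parity
assert np.array_equal(rebuilt, data[:, lost, :])
```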

Fabric acceleration is another trend in EC offloading. Next-generation host channel adapters (HCAs) offer calculation engines that make full use of features like RDMA and verbs, so encode and transfer operations are handled in the HCA itself. Combined with RDMA, this promises further acceleration for storage clusters.

Data resiliency, compression, and deduplication are all advancing at breakneck speed. It is an exciting time for erasure coding: the extremely low latencies of NVMe technologies, tighter integration of storage with application characteristics, and newer virtualization options are opening up a myriad of use cases. As traditional RAID systems reach their data resiliency limits, data center and storage professionals can consider systems based on erasure coding as a strong option to provide resiliency, protect data during recovery, and minimize storage requirements.

Dinesh Kumar Bhaskaran, Director of Technology and Innovation at Aricent, has more than 15 years of experience in embedded and enterprise storage technologies. He also works with the innovation group, leading efforts in the field of hyperconverged infrastructure. His areas of interest include erasure coding, heterogeneous computing, and operating systems.




Open Source Storage: 6 Benefits


Storage software creation, delivery, and support are all evolving at a high rate today. We’ve added open source coding, support-services bundling, platform pre-integration, code as a service, microservice architectures, and scalable software-defined storage services to the traditional bundled proprietary code approach. Open source packages in the storage world are now mainstream solutions.

The acceptance of open source storage is no accident. The leaders in the space, such as Ceph and Gluster, are all characterized by large communities, well-organized communications between developers, liaison with the customer base, and the support of a commercial vendor delivering full technical support and, typically, for-profit enterprise editions with additional features. These open source storage products compete head-to-head with proprietary code and maintain leadership in most areas, not just price.

Apart from the leading packages, we see many other examples of open source storage code arising from communities of interest, such as the Btrfs and OpenZFS file systems, the LizardFS and Lustre distributed file systems, and Pydio, a file sharing system. These projects vary in feature completeness and code quality, so in their early stages it is definitely a case of buyer beware. They are, however, a rich source of innovation for the storage industry, and some will likely grow beyond niche status in a couple of years, so it is impossible to dismiss them out of hand.

The community nature of open source means several things. First, it makes niche solutions easier to obtain since the community pre-defines a receptive customer base and a roadmap of needs. Compare this with the traditional startup – raising funds, defining an abstract product, developing it, and then finding customers. Community-based solutions lead to much more innovation. Often, solutions serving your specific needs are available, though a thorough evaluation is needed to offset risk.

In and of itself, open source storage code would not be interesting without the availability of commodity hardware platforms that are much cheaper than gear from major league traditional vendors. It’s relatively easy to integrate open source code onto these low-cost, highly standardized platforms. Generally, the standardization inherent in commodity hardware makes most open source code plug-and-play, irrespective of the hardware configuration.

In this slideshow, I delve into six open source storage benefits, and why you should consider open source storage for your data center.





The NVMe Transition


The buzzword of the moment in the storage industry is NVMe, otherwise known as Non-Volatile Memory Express. NVMe is a new storage protocol that vastly improves the performance of NAND flash and storage class memory devices. How is it being implemented, and are all NVMe-enabled devices equal? And what should IT infrastructure pros consider before making the NVMe transition?

Background

NVMe was developed as a successor to the existing SAS and SATA protocols. Both SAS and SATA were designed for the age of hard drives, where mechanical head movement masked any storage protocol inefficiencies. Today with NAND flash, and in the future with storage class memory, the bottlenecks of SAS/SATA are far more apparent because NAND flash is such high-performance persistent media. NVMe addresses these performance problems and also supports far greater parallelism through many deep command queues. The result is around a 10x improvement in IOPS for NVMe solid-state drives compared to SAS/SATA SSDs.
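To see why deeper queues matter, a back-of-envelope model (the service time and queue depths below are illustrative assumptions, not figures from the article) applies Little's law: the IOPS a device can sustain is bounded by the number of outstanding commands divided by the per-command service time.

```python
# Concurrency ceiling per Little's law: IOPS <= queue_depth / service_time.
# All numbers below are illustrative assumptions.
def iops_ceiling(queue_depth: int, service_time_us: float) -> float:
    return queue_depth / (service_time_us * 1e-6)

service_time_us = 100.0                  # ~100 microseconds per flash read (assumed)
sata_qd = 32                             # AHCI/SATA: one queue of 32 commands
nvme_qd = 1024                           # NVMe: a small fraction of its queue capacity

print(f"SATA ceiling: {iops_ceiling(sata_qd, service_time_us):,.0f} IOPS")   # 320,000
print(f"NVMe ceiling: {iops_ceiling(nvme_qd, service_time_us):,.0f} IOPS")   # 10,240,000
```

In practice the media and controller saturate well before the NVMe protocol does, which is why real-world gains land closer to the roughly 10x cited above.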

Adoption models

Storage vendors are starting to roll out products that replace their existing architectures with ones based on NVMe. At the back-end of traditional storage arrays, drives have been connected using SAS. In recent weeks, both Dell EMC and NetApp have announced updates to their product portfolios that replace SAS with NVMe.

Dell EMC released PowerMax, the NVMe successor to VMAX. NetApp introduced AFF A800, which includes NVMe shelves and drives. In both cases, the vendors claim latency improves to around the 200-300µs level, with up to 300GB per second of throughput. Remember that both of these platforms scale out, so these estimates are for systems at their greatest level of scale.

Pure Storage recently announced an update to its FlashArray//X platform with the release of the //X90 model. This offers native NVMe through the use of DirectFlash modules. In fact, the FlashArray family has been NVMe-enabled for some time, which means the transition for customers can be achieved without a forklift upgrade, whereas PowerMax and AFF A800 are new hardware platforms.

NVMe is already included in systems from other vendors such as Tegile, which brought its NVMe-enabled platforms to market in August 2017. Vexata has also implemented both NVMe NAND and Optane in a hardware product specifically designed for NVMe media. The Optane version of the VX-100 platform can deliver latency figures as low as 40µs with 80GB/s of bandwidth in just two controllers, Vexata claims.

End-to-end NVMe

A new term we’re starting to see emerge is end-to-end NVMe. This means that from host to drive, each step of the architecture is delivered with the NVMe protocol. The first step was to enable back-end connectivity through NVMe; the next step is to enable NVMe from host to array.

Existing storage arrays have used either Fibre Channel or iSCSI for host connections. Fibre Channel actually uses the SCSI protocol and of course, iSCSI is SCSI over Ethernet. A new protocol, NVMeoF, or NVMe over Fabrics, allows the NVMe protocol to be used on either Fibre Channel or Ethernet networks.

Implementing NVMeoF for Ethernet requires new adaptor cards, whereas NVMeoF for Fibre Channel will work with the latest Gen5 16Gb/s and Gen6 32Gb/s hardware. However, it’s early days for both of these protocols, so don’t expect them to have the maturity of existing storage networking.

Controller bottlenecks

One side effect of faster storage media is the ability to max out the storage controller. A single Intel Xeon processor can fully drive perhaps only four to five NVMe drives, which means a storage array may not exploit the full capability of each NVMe drive.
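A rough sanity check of that four-to-five-drive figure (every number below is an assumption for illustration) simply divides the IOPS a controller CPU can push through its storage stack by the IOPS a single NVMe SSD can deliver.

```python
# Rough controller-bottleneck estimate; every figure here is an assumption.
cpu_cores = 20                       # one Xeon socket
ios_per_core_per_sec = 180_000       # storage-stack work one core can handle
drive_iops = 750_000                 # random-read IOPS of one NVMe SSD

cpu_iops_budget = cpu_cores * ios_per_core_per_sec
drives_saturated = cpu_iops_budget / drive_iops
print(f"CPU budget ~{cpu_iops_budget:,} IOPS -> saturates ~{drives_saturated:.1f} drives")
# CPU budget ~3,600,000 IOPS -> saturates ~4.8 drives
```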

Vendors have used two techniques to get around this problem. The first is to implement scale-out architectures, with multiple nodes providing both compute and storage; WekaIO and Excelero use this approach. Both vendors offer software-based solutions that deliver scale-out architectures specifically designed for NVMe. WekaIO Matrix is a scale-out file system, whereas Excelero NVMesh is a scale-out block storage solution. In both instances, the software can be implemented in a traditional storage array design or used in a hyperconverged model.

The second approach is to disaggregate the functions of the controller and allow the host to talk directly to the NVMe drives. This is how products from E8 Storage and Apeiron Data work. E8 storage appliances package up to 24 drives in a single shelf, which is directly connected to host servers over 100Gb/s Ethernet or Infiniband. The result is up to 10 million read IOPS and 40GB/s of bandwidth at latency levels close to those of the SSD media itself.

Apeiron’s ADS1000 uses custom FPGA hardware and hardened layer 2 Ethernet to connect hosts directly to NVMe drives using a protocol the vendor calls NVMe over Ethernet. The product offers near line-speed connectivity with only a few microseconds of latency on top of the media itself. This allows a single drive enclosure to deliver around 18 million IOPS with around 72GB/s of sustained throughput.

Choices

So what’s the best route to using NVMe technology in your data center? Moving to traditional arrays with an NVMe back-end would provide an easy transition for customers that already use technology from the likes of Dell or NetApp. However, these arrays may not fully benefit from the performance NVMe can offer because of bottlenecks at the controller and delays introduced with existing storage networking.

The disaggregated alternatives offer higher performance at much lower latency, but won’t simply slot into existing environments. Hosts potentially need dedicated adaptor cards, faster network switches, and host drivers.

As with any transition, IT organizations should review their requirements to see where NVMe benefits them. If ultra-low latency is important, that alone could justify implementing a new storage architecture.

Remember that NVMe will — in the short-term at least — be sold at a premium, so it also makes sense to ensure the benefits of the transition to NVMe justify the cost.


