Category Archives: Stiri IT Externe

CNCF Announces Serverless Whitepaper » Linux Magazine


Serverless, or Function as a Service (FaaS), is one of the hottest topics these days. But what is ‘serverless computing’ and who is it for? Can it replace existing models? These are some of the many questions the CNCF (Cloud Native Computing Foundation) is attempting to answer in a whitepaper drafted by the CNCF Serverless Working Group.

“Serverless is a natural evolution of cloud-native computing. The CNCF is advancing serverless adoption through collaboration and community-driven initiatives that will enable interoperability,” said Chris Aniszczyk, COO, CNCF.

According to the whitepaper, “Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”
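To make that deployment model concrete, here is a minimal sketch of the kind of unit such a platform runs: a single stateless function. The event/context handler signature follows the convention popularized by platforms such as AWS Lambda; the names are illustrative and are not taken from the whitepaper.

import json

# The platform invokes this function on demand, scales instances with
# traffic, and bills only for the time the code actually runs.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }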

Because the technology is new, a lot of work remains to be done for the healthy growth of the serverless ecosystem. The CNCF has recognized its role in the space and is attempting to address those needs. The foundation will start a drive to encourage more serverless technology vendors and open source developers to join the CNCF. It will also look at ways to foster an open ecosystem by establishing interoperable APIs and ensuring interoperable implementations with vendor commitments and open source tools.

The CNCF is a Linux Foundation collaborative project that was created to foster innovation in the cloud-native space. Kubernetes was its anchor project.

You can read the whitepaper on GitHub.




Ubuntu to Start Collecting Some Data with Ubunt… » Linux Magazine


Canonical, the parent company of Ubuntu, is planning to collect diagnostic data from its desktop operating system. In a message posted to the Ubuntu Developer mailing list, Will Cooke, Director of Ubuntu Desktop, explained the reason behind this move: “We want to be able to focus our engineering efforts on the things that matter most to our users, and in order to do that we need to get some more data about [what] sort of setups our users have and which software they are running on it.”

The Ubuntu installer will have a checkbox with wording like “send diagnostics information to help improve Ubuntu.”

Canonical has chosen to make this feature opt-out rather than opt-in, which means that unless you uncheck the box, Canonical will collect diagnostic data. The Ubuntu privacy policy will be updated to reflect this change. To give users more control, there will also be an option in the GNOME System Settings to opt out later.

What kind of data will Canonical be collecting? Nothing invasive. The company would like to know which flavor and version of Ubuntu you are running and whether you have network connectivity (one may wonder how they will get the data if there is no network connectivity). It will also collect data about the processor, GPU, screen resolution, memory, storage, and OEM manufacturer. Other data includes location (though not the IP address), installation duration, auto-login status, and disk layout.

Cooke said that all of this data will be made public. It could be a great way for Canonical to start collecting statistics about the Linux desktop: there are no credible numbers on who is using the platform, and Canonical’s move could be a step in that direction.




6 Ways to Transform Legacy Data Storage Infrastructure


So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever (and generally inexpensive) ways to answer the question, keep your job, and even get a pat on the back.

SSDs and flash are game-changers, so they need to be incorporated into your storage infrastructure. SSDs beat enterprise-class hard drives even on cost, because they speed up your workload enough to reduce the number of storage appliances and servers needed. It’s even better if your servers support NVMe, since that interface is becoming ubiquitous and will replace SAS and (a bit later) SATA, simply because it is much faster and carries less protocol overhead.

As far as RAID arrays go, we have to face the harsh reality that RAID controllers can only keep up with a few SSDs. The answer is either an all-flash array, with the old RAID arrays kept for cool or cold secondary storage, or a move to a new architecture based on hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 in a system. They are designed to bolt onto an existing SAN and need minimal changes to configuration files to function. Typically, all-flash boxes have smaller raw capacities than the RAID arrays, but they have enough spare I/O cycles for near-real-time compression and for down-tiering colder data to the old RAID arrays.

With an all-flash array, which isn’t outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combination of flash and secondary storage that usually delivers 5X effective capacity due to compression. Just tell the CEO how many RAID arrays and drives you didn’t buy. That’s worth a hero badge!

The idea of a flash front end works for desktops, too. Use a small flash drive for the OS (the C: drive) and store colder data on those 3.5” HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has fleshed out into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with the other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn’t that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality Ring software, the result is a decent object-storage solution, especially if compression and global deduplication are supported.
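As a rough illustration of what that object storage buys you, the sketch below talks to Ceph’s S3-compatible RADOS Gateway using the boto3 library; the endpoint, credentials, and bucket name are placeholders, and the gateway is assumed to already be running in front of the JBOD-backed cluster.

import boto3

# Point an ordinary S3 client at the Ceph RADOS Gateway instead of AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.example.com:7480",   # hypothetical gateway address
    aws_access_key_id="ACCESS_KEY",                    # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="cold-data")
with open("2018-01.tar.gz", "rb") as archive:
    s3.put_object(Bucket="cold-data", Key="archives/2018-01.tar.gz", Body=archive)

print(s3.list_objects_v2(Bucket="cold-data")["KeyCount"])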

Likely by now, you are using public clouds for backup. Consider “perpetual” storage using a snapshot tool or continuous backup software to reduce your recovery point objective (RPO) and recovery time objective (RTO). Use multi-zone operations in the public cloud to converge disaster recovery (DR) onto the perpetual storage setup as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.
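A minimal sketch of the snapshot side of such a scheme, using boto3 against AWS EBS; the region, volume ID, and tag are placeholders, and a real deployment would schedule these snapshots and prune old ones to hit the desired RPO.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Take a point-in-time snapshot of one volume and tag it for retention.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                  # hypothetical volume
    Description="nightly perpetual-storage snapshot",
)
ec2.create_tags(
    Resources=[snap["SnapshotId"]],
    Tags=[{"Key": "retention", "Value": "30d"}],
)
print("started", snap["SnapshotId"], snap["State"])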

On the software front, the world of IT is migrating to services-centric software-defined storage (SDS), which allows scaling and chaining of data services via a virtualized microservice concept. Even older SANs and server drives can be pulled into the methodology, with software making all the legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If this prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, apply one of the “Zip” programs and choose the encrypted file option. This saves a lot of space and gives you encryption!
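The same compress-then-encrypt idea can be scripted. The sketch below uses Python’s gzip module plus the third-party cryptography package (Fernet) as a stand-in for a zip utility’s encrypted-archive option; the file names are placeholders, and in practice the key must be stored somewhere safer than alongside the data.

import gzip
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key somewhere safe, not with the data
fernet = Fernet(key)

# Compress the cold data first, then encrypt the compressed bytes.
with open("cold-data.csv", "rb") as src:
    compressed = gzip.compress(src.read())

with open("cold-data.csv.gz.enc", "wb") as dst:
    dst.write(fernet.encrypt(compressed))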

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.





What NVMe over Fabrics Means for Data Storage


NVMe-oF will speed adoption of Non-Volatile Memory Express in the data center.

The last few years have seen Non-Volatile Memory Express (NVMe) completely revolutionize the storage industry. Its wide adoption has driven down flash memory prices. With lower prices and better performance, more enterprises and hyper-scale data centers are migrating to NVMe. The introduction of NVMe over Fabrics (NVMe-oF) promises to accelerate this trend.

The original base specification of NVMe is designed as a protocol for storage on flash memory that uses existing, unmodified PCIe as a local transport. This layered approach is very important. NVMe does not create a new electrical or frame layer; instead it takes advantage of what PCIe already offers. PCIe has a well-known history as a high-speed, interoperable bus technology. However, while it has those qualities, it is not well suited to building a large storage fabric or covering distances longer than a few meters. With that limitation, NVMe would be confined to use as a direct-attached storage (DAS) technology, essentially connecting SSDs to the processor inside a server, or perhaps connecting all-flash arrays (AFAs) within a rack. NVMe-oF allows things to be taken much further.

Connecting storage nodes over a fabric is important because it allows multiple paths to a given storage resource. It also enables concurrent operations on distributed storage and provides a means to manage potential congestion. Further, it allows thousands of drives to be connected in a single pool of storage, since the pool is no longer limited by the reach of PCIe but can take advantage of a fabric technology such as RoCE or Fibre Channel.

NVMe-oF describes a means of binding the regular NVMe protocol to a chosen fabric technology: a simple abstraction that lets native NVMe commands be transported over the fabric with minimal processing to map between the fabric transport and PCIe. Product demonstrations have shown that the latency penalty for accessing an NVMe SSD over a fabric, as opposed to a direct PCIe link, can be as low as 10 microseconds.

The layered approach means that a binding specification can be created for any fabric technology, although some fabrics may be better suited to certain applications. Today there are bindings for RDMA (RoCE, iWARP, InfiniBand) and Fibre Channel, and work on a binding specification for TCP/IP has also begun.
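As a rough illustration of what a host-side binding looks like in practice, the sketch below drives the standard Linux nvme-cli tool from Python to discover and attach a remote subsystem over an RDMA (RoCE) fabric; the target address, port, and NQN are placeholders, and the host is assumed to have nvme-cli and the nvme-rdma kernel module available.

import subprocess

TARGET_ADDR = "10.0.0.5"                              # hypothetical target IP
TARGET_NQN = "nqn.2018-01.org.example:ssd-pool"       # hypothetical subsystem NQN

# Ask the target which NVMe subsystems it exposes over the fabric.
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", TARGET_ADDR, "-s", "4420"], check=True)

# Attach the remote subsystem; it then shows up as a local /dev/nvmeXnY device.
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_ADDR, "-s", "4420"], check=True)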

Different products will use this layered capability in different ways. A simple NVMe-oF target, consisting of an array of NVMe SSDs, may expose all of its drives individually across the fabric, allowing the host to access and manage each drive on its own. Other solutions may take a more integrated approach, using the drives within the array to create one big pool of storage that is offered to the NVMe-oF initiator. With this approach, drive management can be handled locally within the array, without requiring the attention of the NVMe-oF initiator or any higher-layer software application. It also allows the NVMe-oF target to implement and offer NVMe protocol features that may not be supported by the drives within the array.

A good example of this is a secure erase feature. A lower-cost drive may not support the feature, but if that drive is put into an NVMe-oF AFA target, the AFA can implement secure erase itself and advertise it to the initiator. The NVMe-oF target then handles the operations on the lower-cost drive so that the feature is properly supported from the initiator’s perspective. This gives implementers a great deal of flexibility to meet customer needs by varying hardware versus software feature implementation, drive cost, and performance.

The recent plugfest at UNH-IOL focused on testing simple RoCE and Fibre Channel fabrics. In these tests, a single initiator and target pair were connected over a simple two-switch fabric. UNH-IOL performed NVMe protocol conformance testing, generating storage traffic to ensure data could be transferred error-free. Additionally, testing involved inducing network disruptions to ensure the fabric could recover properly and transactions could resume.

In the data center, storage is used to support many different types of applications with an unending variety of workloads. NVMe-oF has been designed to enable flexibility in deployment, offering choices for drive cost and features support, local or remote management, and fabric connectivity. This flexibility will enable wide adoption. No doubt, we’ll continue to see expansion of the NVMe ecosystem.




Torvalds is Not Happy with Intel’s Patch, Calls… » Linux Magazine


Intel’s woes are not going away. After releasing patches for Spectre/Meltdown, the company is now asking users to stop installing them until a better version is out.

“We recommend that OEMs, cloud service providers, system manufacturers, software vendors, and end users stop deployment of current versions on specific platforms,” Navin Shenoy, executive vice president of Intel wrote in an announcement, “as they may introduce higher than expected reboots and other unpredictable system behavior.”

Red Hat has already reverted the patches it earlier released for the RHEL family of products, following reports of rebooting problems.

Linus Torvalds, the creator of Linux, reserves the harshest words for Intel. “… I really don’t want to see these garbage patches just mindlessly sent around,” wrote Torvalds on the LKML mailing list.

Not everyone on the mailing list thought it was such a bad thing, though. One maintainer said, “Certainly it’s a nasty hack, but hey — the world was on fire and in the end we didn’t have to just turn the data centres off and go back to goat farming, so it’s not all bad.”

Another maintainer chimed in and said, “As a hack for existing CPUs, it’s just about tolerable — as long as it can die entirely by the next generation.”

Torvalds didn’t buy either argument. “That’s part of the big problem here. The speculation control cpuid stuff shows that Intel actually seems to plan on doing the right thing for meltdown (the main question being _when_). Which is not a huge surprise, since it should be easy to fix, and it’s a really honking big hole to drive through. Not doing the right thing for meltdown would be completely unacceptable,” said Torvalds. “So the IBRS garbage implies that Intel is _not_ planning on doing the right thing for the indirect branch speculation. Honestly, that’s completely unacceptable too.”


