Monthly Archives: July 2018

Clear Linux Makes a Strong Case for Your Next Cloud Platform | Linux.com


There are many Linux distributions available, some all-purpose and some with a more singular focus. Truth be told, you can take most general distributions and turn them into purpose-driven platforms. But when it comes to things like cloud and IoT, most people prefer distributions built with that specific use in mind. That’s where the likes of Clear Linux come in. This particular flavor of Linux was designed for the cloud, and it lets you install either an incredibly bare OS or one with exactly what you need to start developing for cloud and/or IoT.

What is Clear Linux?

Clear Linux comes from Intel’s Open Source Technology Center, which focuses primarily on the cloud. With that in mind, it should come as no surprise that Clear Linux was designed specifically for the cloud while best leveraging Intel hardware. Because Clear Linux focuses primarily on Intel hardware, it can make best use of power management features and performance optimizations. Clear Linux also features:

  • FUSE file system debugging tool (for complete debug info)

  • Automatic feedback-directed optimizer

  • Function multi-versioning (to assist in developing platforms that can run anywhere)

  • Autoproxy (no need to manually configure proxies)

  • Telemetry (detect and respond to quality issues)

  • Mixer tool (for composing a specific use-case OS)

  • Software Update

  • Stateless (separates the OS configuration, per-system configuration, and user data)

Supported hardware

Clear Linux can run on very minimal hardware (e.g., a single-core CPU with 128MB of RAM and 600MB of storage). Be aware, however (as you might already suspect), that Clear Linux runs only on Intel 64-bit architecture, and the hardware must support UEFI (otherwise it won’t boot). The following processor families have been verified to run Clear Linux:

  • 2nd Generation, or later, Intel® Core™ processor family.

  • Intel® Xeon® Processor E3

  • Intel® Xeon® Processor E5

  • Intel® Xeon® Processor E7

  • Intel® Atom™ processor C2000 product family for servers – Q3 2013 version or later.

  • Intel® Atom™ processor E3800 series – Q4 2013 version or later.


I want to show you how to get Clear Linux up and running. I’ll be demonstrating on a VirtualBox VM. The installation isn’t terribly difficult, but there are certain things you need to know.

Installation

If you are going the route of a VirtualBox virtual machine, create the VM as usual, but you must enable EFI. To do this, open the Settings for the VM and click the System tab. In this tab (Figure 1), check the box for Enable EFI (special OSes only).

Once you have the VM set up, download the clear-23550-installer.iso.xz file. You will then need to uncompress the file with the command:

unxz clear-23550-installer.iso.xz

Once the file is uncompressed, you’ll see the clear-23550-installer.iso file. If you are going to use this as a virtual machine, you can attach that to the VM. If you’re installing on bare metal, burn that file to either a CD/DVD or USB flash drive as a bootable media. Boot the media or start the VirtualBox VM to be greeted by the text-based installer (Figure 2).
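Before booting, it’s worth verifying the download and, for bare metal, writing the image to a USB stick. This is a minimal sketch using standard tools; /dev/sdX is a placeholder for your actual USB device, and the checksum published on the download site may live under a different file name:

```shell
# Compare against the SHA512 checksum published alongside the image
# (the exact checksum file name on the download site may differ).
sha512sum clear-23550-installer.iso

# Identify your USB device first -- dd will overwrite it completely.
lsblk

# Write the bootable image (replace /dev/sdX with the real device).
sudo dd if=clear-23550-installer.iso of=/dev/sdX bs=4M status=progress
sync
```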

The installer is fairly straightforward. However, you do need to know that if you don’t go the Manual/Advanced route, you’ll wind up with a very bare system. Most of the questions asked during the installation are self-explanatory. Just make sure, when you reach the Choose Installation Type screen, to select Manual (Figure 3).

If you go with the default (Automatic), you cannot select any additional packages. Chances are, you will want to go the Manual route (Figure 4).

Another task you are able to take care of in the Manual installation is the ability to create an admin user. If you don’t do that, the only user available is root. Of course, even if you go with the minimal installation, you can add users manually with the useradd command.
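If you do end up with a root-only system, here is a hedged sketch of adding an administrative user afterward; the group that grants sudo rights (wheel here) may be named differently on your install:

```shell
# Create a user with a home directory and add it to the wheel group
# so it can use sudo (adjust the group name to match your system).
sudo useradd -m -G wheel jdoe

# Set the new user's password
sudo passwd jdoe
```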

Post installation

After the installation completes, reboot and login. If you didn’t install the graphical environment (which probably will be the case, as this is geared toward the cloud), you might want to install more applications. This is done via bundles, using the swupd command. Say you want to run container applications with Docker. For this, you’d add the containers-basic bundle (for a list of all available bundles, see this page). To install the containers-basic bundle, the command is:

sudo swupd bundle-add containers-basic

After that command runs, you can deploy container applications from Docker Hub. There are quite a large number of bundles you can add. You can even install the GNOME desktop with the command:

sudo swupd bundle-add desktop-autostart

In fact, using bundles, you can pretty much define exactly how Clear Linux is going to function. Just remember, the base install is pretty empty, so you’ll want to go through the list of bundles and install everything you’ll need to make Clear Linux serve your specific purpose.
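To explore what’s available before committing, swupd can also list and remove bundles. A brief sketch:

```shell
# List bundles already installed on this system
sudo swupd bundle-list

# List every bundle available upstream
sudo swupd bundle-list --all

# Remove a bundle you no longer need
sudo swupd bundle-remove containers-basic
```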

It should also be noted that swupd takes care of updating bundles, and this process is handled automatically. The autoupdate process can be enabled and disabled with the following two commands:

sudo swupd autoupdate --enable

sudo swupd autoupdate --disable

You can also force a manual update with the command:

sudo swupd update

Once you have everything installed and updated, you can start developing for the cloud and/or IoT; Clear Linux will serve you well in that regard.

Make it yours

If you want a cloud/IoT-specific Linux distribution that lets you build a distribution for a very specific need, Clear Linux might be just what you’re looking for. Give it a go for yourself.

Software-Defined Storage: Getting Started


Drawn by the combined lures of automation, flexibility, increased storage capacity, and improved staff efficiency, a growing number of enterprises are pondering a switch to software-defined storage (SDS).

SDS lets adopters separate storage resources from the underlying hardware platform. The approach enables storage to become an integral part of a larger software-defined data center (SDDC) architecture in which resources can be more easily automated and orchestrated.

SDS has moved from the early adoption stage into the mainstream, with enterprises in banking, manufacturing, pharmaceuticals, healthcare, media and government rapidly transitioning to the technology. “These customers have adopted SDS for a variety of use cases, including long-term archives, backup storage, media content distribution, big data lakes and healthcare image archives,” explained Jerome Lecat, CEO of Scality, a cloud and object storage technology provider.

Greg Schulz, founder of and a senior advisor at storage consulting firm Server StorageIO, said enterprises of all types and sizes are now poised to make the move to SDS. “Across the board, big and small, from government sector to private sector,” he said. “Likewise, across different types of applications.”

Getting started

Successful SDS adopters typically began by selecting a discrete use case as a starting point. “Within the enterprise, we see Tier 2 applications, such as backup and archive, as an optimal way to store mission-critical data that is large-scale and a perfect way to demonstrate the scalability, availability and cost-advantages of SDS,” Lecat said. “Over time, more use cases, including big data and deep learning, can be brought online to further improve the economic advantages of SDS.”

Enterprises that recently moved to a hyperconverged infrastructure (HCI) are already working with SDS, noted Sascha Giese, a senior sales engineer at IT infrastructure monitoring and management technology provider SolarWinds. “A good starting point for such organizations would be to evaluate whether HCI has benefitted your organization and, if so, consider whether to expand the SDS footprint in your data center.”

Even organizations that haven’t embraced HCI usually already have some type of virtualization in their environments, observed Matt Sirbu, director of data management and data center infrastructure at Softchoice, an IT infrastructure solutions provider.

“VMware and Hyper-V are really software-defined compute solutions,” he said. Software-defined storage products extend virtualization benefits to the data layer, but adopters also need to closely examine the supporting infrastructure. “Any business, when they come up to their next infrastructure refresh cycle, should start to evaluate newer technologies to see what the benefits will be to their organization by leveraging software-defined across all layers, compute and storage,” he said.

Jonathan Halstuch, co-founder and chief technology officer of RackTop Systems, a data management technology supplier, noted that it’s important to find an SDS product that can meet both current and future storage requirements, particularly in critical areas like compliance and security. “Be discriminating and find a solution that will reduce complexity and tasks for the IT department,” he advised. “Then begin to migrate workloads that are the easiest to migrate or are datasets that have special requirements that are currently being unmet, such as encryption, performance or accessibility.”

The end of a refresh cycle is a logical time to begin exploring SDS. “An organization should assess their technology roadmap for the next few years and consider making the switch to an SDS solution,” said Maghen Hannigan, director of converged and integrated solutions at technology products and services distributor Tech Data. “If an existing environment is in need of a new storage administrator, it may be worth considering (hiring) a new systems administrator proficient in software-defined storage.”

A refresh cycle-motivated commitment to SDS can be either large or small.  “It may be as simple as dropping in an SDS solution in place of legacy storage,” Halstuch explained. “However, it may make more sense to rethink the current architecture, review a hybrid cloud strategy and review the current staffing profile to determine what is the best SDS solution to adopt and how it fits into the long-term vision of the organization.”

Potential pitfalls

One mistake organizations often make when planning an SDS transition is to view the technology as a “point product” decision. “Software-defined solutions are ideally part of a larger stack that offers a common operational model for compute, storage, network and cloud,” said Lee Caswell, VP of products, storage and availability at VMware. “The software-defined solution offers a digital foundation with investment protection for any hardware, any application, and any cloud.”

“In general, we see organizations regret their decisions to move to SDS either too abruptly or without proper planning,” said Daniel Gilfix, marketing manager of Red Hat’s storage division. “We witness the frustration of those who venture into the area without the proper skill sets, as if any storage administrator or cloud practitioner can pick up the knowledge and training overnight.”

Perhaps the biggest mistake SDS newcomers make is believing that the technology is a “silver bullet” for all workloads. “It’s important to look at the workload demands,” Sirbu stated. “All organizations can benefit from (SDS) for a large portion of their workloads, but it really comes down to analyzing business requirements with available IT resources to come up with the optimal solution to run their operations.”




New Ubuntu Rethinks Desktop Ecosystem | Software


By Jack M. Germain

Apr 26, 2018 9:29 AM PT

Canonical on Thursday released Ubuntu Linux 18.04, which utilizes live patching and a new metric data collection system. Notably missing is the Unity desktop that had distinguished the distro but was poorly received.

New Ubuntu Rethinks Desktop Ecosystem

Canonical last year made the switch from Unity 7 to upstream GNOME as Ubuntu’s default desktop environment. Unity is not an option in Ubuntu 18.04 and will not be available in desktop offerings moving forward.

“The overall response was positive,” said Will Cooke, engineering director for desktop at Canonical. The development team tweaked the GNOME shell just enough to give it a face that clearly identifies it as part of Ubuntu.

The main reason for dropping Unity was lack of uptake. The team decided to stop investing in its homegrown desktop environment and return to Ubuntu’s roots with upstream GNOME, Cooke noted.

Progress Path

The development team used Ubuntu version 17.10 as its proving ground for transitioning from Unity 7 to the GNOME shell. Primarily, that was for its long-term support.

That transition proved that users would have a seamless upgrade path, Cooke said. The five-year support also set the groundwork for developers to build for a common platform, as the same Ubuntu version runs in the cloud and on all devices.

“This is the main reason we continue to see uptake on Ubuntu from developers,” he remarked. Ubuntu offers “reliability and a proven background of uptake and security, and other critical packages.”

What to Expect

Live patching is an important new feature in Ubuntu 18.04. It allows the installation of updates on a running machine without requiring a reboot, enabling the immediate application of security updates.
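Enabling live patching on 18.04 is a short exercise. The sketch below assumes the client is delivered as the canonical-livepatch snap, and <your-token> is a placeholder for the token tied to your Ubuntu One account:

```shell
# Install the livepatch client (delivered as a snap)
sudo snap install canonical-livepatch

# Enable it with your account token (<your-token> is a placeholder)
sudo canonical-livepatch enable <your-token>

# Check which kernel patches have been applied
canonical-livepatch status
```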

Another big thing, particularly for the Ubuntu team, is a new system for acquiring data on metrics. Ubuntu essentially will phone home to report hardware details and user installation options.

The metric information-gathering includes anonymized details on the age of the machine, how much RAM it has, and whether the user installed it from a DVD or USB stick, or upgraded in place.

No identifiable user information will be uploaded, but users can opt out of the sharing part if they wish, said Cooke.

The goal is to find out details about preferences and hardware to help the development team better address a particular market, he said.

“Until now, we simply have not had the ability to gather that information,” Cooke continued. “It will focus our energies for future releases. We also intend to make those details available to other projects. For instance, if we discover that a majority of users have older hardware, we must tailor our development to those machine capabilities.”
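For users who want to inspect or opt out of this reporting, the ubuntu-report client shipped with 18.04 provides a way to do both; a brief sketch (exact command behavior may vary by release):

```shell
# Preview exactly what would be reported for this machine
ubuntu-report show

# Opt out of sending the report
ubuntu-report send no
```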

Minimal for Enterprise

Ubuntu 18.04 includes a new feature that addresses a growing enterprise concern: home user clutter. IT managers in workplace environments easily can strip out software that does not pertain to the work environment, such as games.

“They do not really want them, and they do not really need them,” said Cooke, noting that this minimal install capability meets requests from IT managers.

It costs enterprises money to have someone go through each installation and remove those items, or to create automation to do those removals; Ubuntu 18.04 now does that for them.

The minimum install option goes through the process of stripping out home-user-centric applications.

“It is significant and a needed convenience,” Cooke said.

Craft Snaps Take Over

Ubuntu 18.04 relies on Snapcraft to feed software applications to the operating system. It ships with Snaps by default.

Snaps speed up software delivery and make the process more secure, according to Evan Dandrea, engineering manager for Snapcraft at Canonical.

Snapcraft, developed by Canonical, lets software vendors distribute to all of Ubuntu and a growing list of distribution platforms with a single artifact. It replaces different packaging systems such as .deb and .rpm.

“Snaps let vendors publish a software update at their own pace. Vendors are not locked into a release cycle of Ubuntu or any other distribution. The updates themselves apply automatically and can roll back if anything goes wrong,” Dandrea said.
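That update-and-rollback workflow can be sketched with the snap CLI; the spotify snap here is just an illustrative example:

```shell
# See which installed snaps have pending updates
snap refresh --list

# Apply updates now rather than waiting for the scheduled refresh
sudo snap refresh

# Roll a snap back to its previous revision if an update misbehaves
sudo snap revert spotify
```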

Expanding the Process

For many applications in use today, it takes a long time to get updates vetted through a distro’s community software repository. The process involves installing, modifying and reinstalling.

In 18.04, for the first time, Ubuntu delivers important applications by default in a Snap. Thousands more applications are integrated into the app store, so users no longer have to search around for the latest versions of their software, according to Dandrea.

“The goal is to give everyone access to the latest software without a lot of frustration,” he said.

With Snaps, each update is tamper-proof. The applications are locked down, much like they are in Docker, but Snap is much more lightweight, Dandrea said.

Growing the Platform

Ubuntu’s focus on delivering software via Snapcraft offers several benefits, noted Dandrea. One is that enterprise users do not face a risk of downtime. Another is that home users can register up to three machines on their UbuntuOne account.

All users will find the service more streamlined and simpler to use. In general, users can expect Ubuntu 18.04 to be fast and light as well as reliable, stable and secure, according to Dandrea.

The Snapcraft ecosystem is gaining momentum. Major software outlets, such as Spotify and Google, have adopted the Snap platform. Developer sign-up has tripled in the last three months alone, he said.

Developer tools are now available for Snap construction. Snaps are no longer just about Ubuntu; they have become a team effort.

“We are seeing cross-distribution success. For instance, if you are running any distribution besides Ubuntu, you no longer have to wait for local repositories to repackage the latest releases,” said Dandrea.

Dev Advantages

Developers can reach the largest population of Linux users of all distributions with one release. Self-contained libraries are included in the Snap package.

That means software developers no longer have to debug their way through every conceivable combination. If an application needs a dependency, it is bundled with the Snap, noted Dandrea.

“The bottom line is Snaps are lowering the barrier of entry in developing for Linux or publishing software for Linux,” he said. “They require no additional infrastructure.”

Bonus Feature

One new feature in the latest Ubuntu release appeals to software developers in particular: the ability to run Ubuntu on a Windows computer in a virtual machine. This gives developers a seamless experience moving between Linux and Windows on a single machine, with the ability to copy and paste between them.

“This ability was a huge demand from the developer community,” said Cooke. “This is another obstacle removed from their path to really allow them to benefit from the power of Ubuntu from their Windows machine.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.






Shuttleworth on Ubuntu 18.04: Multicloud Is the New Normal | Software


By Jack M. Germain

Apr 29, 2018 5:00 AM PT

Canonical last week released the Ubuntu 18.04 LTS platform for desktop, server, cloud and Internet of Things use. Its debut followed a two-year development phase that led to innovations in cloud solutions for enterprises, as well as smoother integrations with private and public cloud services, and new tools for container and virtual machine operations.

The latest release drives new efficiencies in computing and focuses on the big surge in artificial intelligence and machine learning, said Canonical CEO Mark Shuttleworth in a global conference call.

Ubuntu has been a platform for innovation over the last decade, he noted. The latest release reflects that innovation and comes on the heels of extraordinary enterprise adoption on the public cloud.

The IT industry has undergone some fundamental shifts since the last Ubuntu upgrade, with digital disruption and containerization changing the way organizations think about next-generation infrastructures. Canonical is at the forefront of this transformation, providing the platform for enabling change across the public and private cloud ecosystem, desktop and containers, Shuttleworth said.

“Multicloud operations are the new normal,” he remarked. “Boot time and performance-optimized images of Ubuntu 18.04 LTS on every major public cloud make it the fastest and most-efficient OS for cloud computing, especially for storage and compute-intensive tasks like machine learning,” he added.

Ubuntu 18.04 comes as a unified computing platform. Having an identical platform from workstation to edge and cloud accelerates global deployments and operations. Ubuntu 18.04 LTS features a default GNOME desktop. Other desktop environments are KDE, MATE and Budgie.

Diversified Features

The latest technologies under the Ubuntu 18.04 hood are focused on real-time optimizations and an expanded Snapcraft ecosystem to replace traditional software delivery via package management tools.

For instance, the biggest innovations in Ubuntu 18.04 are related to enhancements to cloud computing, Kubernetes integration, and Ubuntu as an IoT control platform. Features that make the new Ubuntu a platform for artificial intelligence and machine learning also are prominent.

The Canonical distribution of Kubernetes (CDK) runs on public clouds, VMware, OpenStack and bare metal. It delivers the latest upstream version, currently Kubernetes 1.10. It also supports upgrades to future versions of Kubernetes, expansion of the Kubernetes cluster on demand, and integration with optional components for storage, networking and monitoring.

As a platform for AI and ML, CDK supports GPU acceleration of workloads using the Nvidia DevicePlugin. Further, complex GPGPU workloads like Kubeflow work on CDK. That performance reflects joint efforts with Google to accelerate ML in the enterprise, providing a portable way to develop and deploy ML applications at scale. Applications built and tested with Kubeflow and CDK are perfectly transportable to Google Cloud, according to Shuttleworth.

Developers can use the new Ubuntu to create applications on their workstations, test them on private bare-metal Kubernetes with CDK, and run them across vast data sets on Google’s GKE, said Stephan Fabel, director of product management at Canonical. The resulting models and inference engines can be delivered to Ubuntu devices at the edge of the network, creating an ideal pipeline for machine learning from the workstation to rack, to cloud and device.
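At the time of release, one documented route to a CDK cluster was the conjure-up installer. A hedged sketch, assuming the conjure-up snap and the canonical-kubernetes spell are available for your setup:

```shell
# Install the conjure-up deployment tool (distributed as a snap)
sudo snap install conjure-up --classic

# Launch the guided installer and pick a target cloud or localhost
conjure-up canonical-kubernetes
```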

Snappy Improvements

The latest Ubuntu release allows desktop users to receive rapid delivery of the latest applications updates. Besides having access to typical desktop applications, software devs and enterprise IT teams can benefit from the acceleration of snaps, deployed across the desktop to the cloud.

Snaps have become a popular way to get apps on Linux. More than 3,000 snaps have been published, and millions have been installed, including official releases from Spotify, Skype, Slack and Firefox.

Snaps are fully integrated into Ubuntu GNOME 18.04 LTS and KDE Neon. Publishers deliver updates directly, and security is maintained with enhanced kernel isolation and system service mediation.

Snaps work on desktops, devices and cloud virtual machines, as well as bare-metal servers, allowing a consistent delivery mechanism for applications and frameworks.

Workstations, Cloud and IoT

Nvidia GPGPU hardware acceleration is integrated in Ubuntu 18.04 LTS cloud images and Canonical’s OpenStack and Kubernetes distributions for on-premises bare metal operations. Ubuntu 18.04 supports Kubeflow and other ML and AI workflows.

Kubeflow, the Google approach to TensorFlow on Kubernetes, is integrated into Canonical Kubernetes along with a range of CI/CD tools, and aligned with Google GKE for on-premises and on-cloud AI development.

“Having an OS that is tuned for advanced workloads such as AI and ML is critical to a high-velocity team,” said David Aronchick, product manager for Cloud AI at Google. “With the release of Ubuntu 18.04 LTS and Canonical’s collaborations to the Kubeflow project, Canonical has provided both a familiar and highly performant operating system that works everywhere.”

Software engineers and data scientists can use tools they already know, such as Ubuntu, Kubernetes and Kubeflow, and greatly accelerate their ability to deliver value for their customers, whether on-premises or in the cloud, he added.

Multiple Cloud Focus

Canonical has seen a significant adoption of Ubuntu in the cloud, apparently because it offers an alternative, said Canonical’s Fabel.

Typically, customers ask Canonical to deploy OpenStack and Kubernetes together. That is a pattern emerging as a common operational framework, he said. “Our focus is delivering Kubernetes across multiple clouds. We do that in alignment with Microsoft Azure service.”

Better Economics

Economically, Canonical sees Kubernetes as a commodity, so the company built it into Ubuntu’s support package for the enterprise. It is not an extra, according to Fabel.

“That lines up perfectly with the business model we see the public clouds adopting, where Kubernetes is a free service on top of the VM that you are paying for,” he said.

The plan is not to offer overly complex models based on old-school economic models, Fabel added, as that is not what developers really want.

“Our focus is on the most effective delivery of the new commodity infrastructure,” he noted.

Private Cloud Alternative to VMware

Canonical OpenStack delivers private cloud with significant savings over VMware and provides a modern, developer-friendly API, according to Canonical. It also has built-in support for NFV and GPGPUs. The Canonical OpenStack offering has become a reference cloud for digital transformation workloads.

Today, Ubuntu is at the heart of the world’s largest OpenStack clouds, both public and private, in key sectors such as finance, media, retail and telecommunications, Shuttleworth noted.

Other Highlights

Among Ubuntu 18.04’s benefits:

  • Containers for legacy workloads with LXD 3.0 — LXD 3.0 enables “lift-and-shift” of legacy workloads into containers for performance and density, an essential part of the enterprise container strategy.

    LXD provides “machine containers” that behave like virtual machines in that they contain a full and mutable Linux guest operating system, in this case, Ubuntu. Customers using unsupported or end-of-life Linux environments that have not received fixes for critical issues like Meltdown and Spectre can lift and shift those workloads into LXD on Ubuntu 18.04 LTS with all the latest kernel security fixes.

  • Ultrafast Ubuntu on a Windows desktop — New Hyper-V optimized images developed in collaboration with Microsoft enhance the virtual machine experience of Ubuntu in Windows.
  • Minimal desktop install — The new minimal desktop install provides only the core desktop and browser for those looking to save disk space and customize machines with their specific apps or requirements. In corporate environments, the minimal desktop serves as a base for custom desktop images, reducing the security cross-section of the platform.
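The LXD lift-and-shift described above can be sketched in a few commands; legacy-app is a hypothetical container name:

```shell
# One-time LXD setup (accepting the defaults gives a quick start)
sudo lxd init

# Launch a machine container running a full Ubuntu 18.04 guest
lxc launch ubuntu:18.04 legacy-app

# Get a shell inside it, much as you would on a VM
lxc exec legacy-app -- bash
```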







Fedora 28 Comes With New Software Options | Software


By Jack M. Germain

May 1, 2018 6:31 AM PT

The Fedora Project on Tuesday announced the general availability of Fedora 28, which introduces a new software delivery system based on a modular repository.

Fedora 28 Comes With New Software Options

The new system provides alternative versions of the software and updates that come with the default release, according to Fedora Project Leader Matthew Miller. It enables users to update specific components at the speed that meets their needs. Modularity gives users more control over their computing environments.

Fedora Linux is the free community version of Red Hat Enterprise Linux, or RHEL. Fedora 28 continues the strategy of releasing three versions — built from a common set of base packages — that target different user bases. They are Fedora 28 Server, Fedora 28 Workstation and Fedora 28 Atomic Host.

“The Fedora Project is one of the most important validation platforms for Red Hat Linux,” said Rob Enderle, principal analyst at the Enderle Group.

“The modularity of the effort makes it uniquely capable of providing the right capability needed for the specific purpose through enhanced control for its users,” he told LinuxInsider.

Upgrade Highlights

All editions of Fedora 28 feature numerous bug fixes and performance tweaks as well as new and enhanced additions. The Fedora 28 base package offers updated compilers and languages, including the latest version of the GNU Compiler Collection (GCC) 8, Golang 1.10 and Ruby 2.5.

All three Fedora 28 editions also bring improvements to VirtualBox guest support, simplifying the user experience of running Fedora 28 as a VirtualBox guest on other operating systems.

“The Fedora Project’s mission is to bring leading-edge innovation to our users, and Fedora 28 delivers on that through the addition of some of the latest open source technologies, including GNOME 3.28 and Kubernetes 1.9,” the Fedora Project’s Miller said.

Fedora Spins the Options

Fedora is a large project that does not fit into a one-size-fits-all Linux distribution. Having different editions allows the project to target the needs of different user groups.

“That’s why we have the Fedora Editions, which are focused at various particular user audiences, as well as our various Labs and Spins, which address niches not covered by those,” Miller told LinuxInsider.

In addition, Fedora 28 offers Spins, alternative desktops such as KDE Plasma and Xfce, as well as Labs that target specific use cases.

For example, a popular Lab edition is the Python Classroom Lab, which provides an easy, out-of-the-box environment for teaching the Python programming language. Another option is the Robotics Suite, which routinely has been used to win world-class robotic soccer competitions.

New Workstation Tools

The latest version of Fedora’s desktop-focused edition includes new tools and features for general users. Fedora 28 Workstation also upgrades users to GNOME 3.28. The latest GNOME version adds the capability to set favorite files, folders and contacts for easier organization and access.

Fedora Workstation is designed to be the best desktop environment for software developers, from students to startups to enterprise developers. While it provides polish for general users, the target for features and UI decisions is developers, Miller said.

The new Application Usage tool provides a technology preview to help users diagnose and resolve performance and capacity issues more easily. Fedora Workstation 28 also introduces GNOME Photos as the default photo management application, providing a simple application for viewing, browsing and organizing photos.

Additional enhancements include Thunderbolt 3 connection support and improved emoji support. Active-by-default power saving features improve laptop battery life.

Fedora 28 Server

The Fedora 28 Server edition’s most significant addition is the new modularity initiative. Modularity is an important component for programming stacks and database instances, giving administrators more choices among software versions they can deploy and support.

Fedora Server delivers enhancements to the more traditional Linux server, including new approaches like Modularity.

Additionally, Fedora 28 Server includes support for AArch64 as a primary architecture. It provides an additional operating system option for systems administrators considering emerging hardware technologies.

Fedora 28 Atomic Host

Fedora Atomic Host is designed to provide a minimal footprint operating platform. This makes it a well-suited option for running containerized workloads across various footprints, including the public cloud.

It lets users run the image-based all-in-one-containers approach that Atomic Host is designed to handle.

Available with a two-week refresh schedule, Fedora Atomic Host includes a base image for creating virtual machines, an Atomic Host image for creating container deployment hosts, and base container images to leverage as a starting point for Fedora-based containerized applications.

New for Fedora 28 Atomic Host is the inclusion of Kubernetes 1.9. This version adds numerous innovative features for orchestrating container-native workloads.

Modularity Explained

The Fedora team’s goal is to keep the bundled software in each release very close to the current software release. For example, the latest version of the Django Web framework is 2.0, and that is the default version in Fedora 28.

One concern due to Fedora’s rapid life cycle is that users may lack continuity from one release to the next. Some specific software stacks must update more slowly.

On other enterprise-focused distributions with a slower lifecycle, modularity could be used to help address the opposite problem. Modularity makes newer software available when the base operating system has a lifetime of a decade or more, Miller said.

In the case of the Django Web framework, a lot of software still depends on 1.6. Without modularity, users who need that version would have to run an old, out-of-date Fedora release that no longer gets security updates, or not use Fedora at all, Miller explained.

The modular repo is a collection of software with alternate versions, he added. Fedora 28 includes the Django 1.6 version. So, you can use the “dnf module” commands to select that version on systems if needed.
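Selecting that alternate stream is done with the dnf module subcommands. A minimal sketch, assuming the module and stream are named django and 1.6 as in the example above:

```shell
# See which streams of the django module are available
dnf module list django

# Pin the system to the older 1.6 stream
sudo dnf module enable django:1.6

# Install packages from the enabled stream; later updates
# stay within that stream until you manually switch
sudo dnf module install django
```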

Different Approach

Modularity in Fedora 28 is an enhancement to the existing package management system and is implemented as an extension to DNF. The key thing is that it lets you select different streams of available software.

Once you have done that, installations and updates to packages stay within the stream you have selected. So, Django 1.6 won’t update to 2.0 without a manual switch, according to Miller.

“From a user perspective, this is not a whole new way of doing things, like the switch to containers might be. It is just the package manager getting smarter about handling different versions of the same thing,” he said.

Not a Snapcraft Alternative

“Users need more flexibility, and we want to provide that to them,” Miller said. “There are other ways to package up software so different versions can coexist, but they tend to have a lot of overhead on our side, and be somewhat complicated from a user perspective. Modularity makes both of those things easier.”

Fedora does support the Snapcraft process of delivering rapid software updates via snap packages, he said. However, Fedora’s modularity approach is different from snaps.

“It is more comparable to the Amazon Linux Extras Repository, which is another — albeit rudimentary — way of providing alternate versions of software,” Miller added.

“Modules can provide a source of software that could be used to build OCI/Docker containers, Flatpaks, or Snaps,” explained Miller. “This is one of the reasons we have chosen building blocks as part of the new logo for Fedora Server.”






