Monthly Archives: November 2017

GPLv3 Comes to the Rescue of GPL Violators » Linux Magazine


Red Hat is working with major tech companies, including Facebook, Google, and IBM, to make it easier for GPL violators to cure violations. The companies are adopting the cure provision of GNU GPLv3 to give violators a clear path to fix their mistakes.

One of the biggest concerns when using open source components in commercial products is license compliance. Organizations such as the Linux Foundation have made multiple efforts to help companies consume open source software without worrying about compliance. In some cases, however, such as the VMware lawsuit, organizations like the Software Freedom Conservancy have taken aggressive routes that end up hurting collaboration and the open source projects in question. Such legal actions also send the message that touching open source can be dangerous.

Companies rarely violate licenses on purpose. “Most GPL violations occur by mistake, without ill will. Copyleft enforcement should assist these distributors to become helpful participants in the free software projects on which they rely,” said Joshua Gay of the Free Software Foundation.

However, these companies must have ways to fix the violations. GPLv2, one of the most prominent copyleft licenses, permanently terminates permissions at the moment of violation. This heavy-handed approach discourages cooperation and collaboration and leads to more hostile resolutions, such as legal action, in which no one but the lawyers has any interest. Linus Torvalds once said that “we lose” the moment we get lawyers involved.

With GPLv3, the Free Software Foundation created an opportunity for users to address violations, a fix for the heavy-handed approach that GPLv2 took. GPLv3 allows first-time violators to restore all rights automatically once the violations are fixed. It was designed to encourage collaboration and amicable resolution of violations instead of hostile action.

Red Hat, Facebook, Google, and IBM have committed to extending GPLv3’s approach to license compliance errors to the software code that each licenses under GPLv2, LGPLv2.1, and LGPLv2.

With the adoption of this balanced approach, companies can feel more comfortable using open source components in their products, without fear of litigation over an inadvertent violation.

“We felt strongly that the large ecosystems of projects using GPLv2 and LGPLv2.x would benefit from adoption of this more balanced approach to termination derived from GPLv3,” said Red Hat in a blog post.

This step by Red Hat is a move in the right direction.




Linus Torvalds’ Precious Advice to Security Exp… » Linux Magazine


Linus Torvalds, the creator of the Linux kernel, is no fan of the security community. In his opinion, security problems are just bugs that get exploited. “I don’t trust security people to do sane things,” said Torvalds, responding to a pull request from Kees Cook, one of the top kernel developers.

What ticked Torvalds off this time was that Kees’ patch set had the potential to break things, so Kees had added a fallback mode. Kees wrote, “This has lived in -next for quite some time without major problems, but there were some late-discovered missing whitelists, so a fallback mode was added just to make sure we don’t break anything. I expect to remove the fallback mode in a release or two.”

Torvalds refused to merge and said, “If you can make a smaller pull request that introduces the infrastructure, but that _obviously_ cannot actually break anything, that would be more likely to be palatable.”

To which Kees responded, “This is why I introduced the fallback mode: with both kvm and sctp (ipv6) not noticed until late in the development cycle, I became much less satisfied it had gotten sufficient testing. I wanted to make sure there was a way for the series to land without actually breaking things due to any missed whitelists.”

Torvalds said, “I’m not at all interested in killing processes. The only process I’m interested in is the _development_ process, where we find bugs and fix them.”

But this time Torvalds had a valuable piece of advice for security people. He said that the primary focus should be “debugging” and making sure the kernel released in a year is better than the one released today. He dismissed the popular notion of killing processes for bugs. “… the hardening efforts should instead _start_ from the standpoint of ‘let’s warn about what looks dangerous, and maybe in a _year_ when we’ve warned for a long time, and we are confident that we’ve actually caught all the normal cases, _then_ we can start taking more drastic measures’,” said Torvalds. “Stop this idiotic ‘kill on sight, ask questions later’.”




9 Significant Infrastructure Mergers of 2017


As the cloud and software continue to transform IT infrastructure, vendors are making moves to keep up. This year saw plenty of tech M&A activity as established infrastructure vendors bought up hot startups to expand their platforms with new capabilities for the modern enterprise.

Tech giant HPE made a splash in the hot hyperconverged infrastructure space with its $650 million SimpliVity acquisition while rival Cisco acquired its HCI partner, Springpath. Hyperconvergence has been one of the hottest trends reshaping the data center, and established infrastructure suppliers have been eager to get in on what’s estimated to become a $31 billion global market by 2025.

Meanwhile, the market for software-defined WAN – another hot trend in enterprise infrastructure — consolidated with the acquisition of two of the leading pioneers in the market, Viptela by the ever-acquisitive Cisco and VeloCloud by VMware. The deals left few pure-play suppliers in the fast-growing SD-WAN market, which IHS Markit estimates will jump from $137 million worldwide in the first half of this year to $3.3 billion by 2021.

Other technologies that IT heavyweights snapped up include flash storage and analytics.

Many of the acquisitions are driven by enterprise adoption of cloud, and more specifically hybrid cloud, Dan Conde, an analyst at ESG, told me in an interview. Hyperconverged infrastructure enables private cloud, and acquiring a leader in that market gave HPE the ability to offer customers a range of cloud options. “People want the agility of cloud on-premises for a variety of reasons,” he said.

The SD-WAN craze, meanwhile, is driven by the need for companies to provide cloud access to their employees, Conde said. “They realize a lot of traffic goes to Office 365, G Suite, or Salesforce, and they better find a way to adapt their branch office networking and routing to access the cloud efficiently and securely,” he said.

Despite the big SD-WAN acquisitions this year, he still sees plenty of opportunity in the market, including managed SD-WAN services. ESG research has found that many companies plan to buy SD-WAN from service providers, he said.

Continue on to review some of the top M&A deals that will impact IT infrastructure in the years to come.

(Image: Freedomz/Shutterstock)




Photon Could Be Your New Favorite Container OS | Linux.com


Containers are all the rage, and with good reason. As discussed previously, containers allow you to quickly and easily deploy new services and applications onto your network, without requiring too much in the way of added system resources. Containers are more cost-effective than using dedicated hardware or virtual machines, and they’re easier to update and reuse.

Best of all, containers love Linux (and vice versa). Without much trouble or time, you can get a Linux server up and running with Docker and deploying containers. But which Linux distribution is best suited for the deployment of your containers? There are a lot of options. You could go with a standard Ubuntu Server platform (which makes installing Docker and deploying containers incredibly easy), or you could opt for a lighter-weight distribution geared specifically toward deploying containers.

One such distribution is Photon. This particular platform was created in 2015 by VMware; it includes the Docker daemon and works with container frameworks such as Mesos and Kubernetes. Photon is optimized to work with VMware vSphere, but it can also be used on bare metal, Microsoft Azure, Google Compute Engine, Amazon Elastic Compute Cloud, or VirtualBox.

Photon manages to stay slim by installing only what is absolutely necessary to run the Docker daemon. In the end, the distribution comes in at around 300 MB, just enough Linux to make it all work. The key features of Photon are:

  • Kernel tuned for performance.

  • Kernel is hardened according to the Kernel Self-Protection Project (KSPP).

  • All installed packages are built with hardened security flags.

  • Operating system boots with validated trust.

  • Photon management daemon manages firewall, network, packages, and users on remote Photon OS machines.

  • Support for persistent volumes.

  • Project Lightwave integration.

  • Timely security patches and updates.

Photon can be used via ISO, OVA, Amazon Machine Image, Google Compute Engine image, and Azure VHD. I’ll show you how to install Photon on VirtualBox, using an ISO image. The installation takes about five minutes and, in the end, you’ll have a virtual machine, ready to deploy containers.

Creating the virtual machine

Before you deploy that first container, you have to create the virtual machine and install Photon. To do this, open up VirtualBox and click the New button. Walk through the Create Virtual Machine wizard (giving Photon the necessary resources, based on the usage you predict the container server will need). Once you’ve created the virtual machine, you need to first make a change to the settings. Select the newly created virtual machine (in the left pane of the VirtualBox main window) and then click Settings. In the resulting window, click on Network (from the left navigation).

In the Networking window (Figure 1), you need to change the Attached to drop-down to Bridged Adapter. This will ensure your Photon server is reachable from your network. Once you’ve made that change, click OK.
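If you prefer to script your VM setup, the same change can be made from the host’s command line with VBoxManage. A minimal sketch, assuming your VM is named "photon" and your host NIC is eth0 (both names are illustrative; substitute your own):

```shell
# bridge_vm VMNAME HOSTNIC: switch the VM's first network adapter to
# bridged mode on the given host interface (run while the VM is powered off).
bridge_vm() {
  VBoxManage modifyvm "$1" --nic1 bridged --bridgeadapter1 "$2"
}
# Example: bridge_vm photon eth0
```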

Select your Photon virtual machine from the left navigation and then click Start. You will be prompted to locate and attach the ISO image. Once you’ve done that, Photon will boot up and prompt you to hit Enter to begin the installation. The installation is ncurses based (there is no GUI), but it’s incredibly simple.

In the next screen (Figure 2), you will be asked whether you want a Minimal, Full, or OSTree Server installation. I opted to go the Full route. Select whichever option you require and hit Enter.

In the next window, select the disk that will house Photon. Since we’re installing this as a virtual machine, there will be only one disk listed (Figure 3). Tab down to Auto and hit Enter on your keyboard. The installation will then require you to type (and verify) an administrator password. Once you’ve done that, the installation will begin and finish in less than five minutes.

Once the installation completes, reboot the virtual machine and log in with the username root and the password you created during installation. You are ready to start working.

Before you begin using Docker on Photon, you’ll want to upgrade the platform. Photon uses the yum package manager, so log in as root and issue the command yum update. If there are any updates available, you’ll be asked to okay the process (Figure 4).

Usage

As I mentioned, Photon comes with everything you need to deploy containers or even create a Kubernetes cluster. However, out of the box, there are a few things you’ll need to do. The first thing is to enable the Docker daemon to run at start. To do this, issue the commands:

systemctl start docker

systemctl enable docker

Now we need to create a standard user, so we’re not running the docker command as root. To do this, issue the following commands:

useradd -m USERNAME

passwd USERNAME

Where USERNAME is the name of the user to add.

Next we need to add the new user to the docker group with the command:

usermod -a -G docker USERNAME

Where USERNAME is the name of the user just created.

Log out as the root user and log back in as the newly created user. You can now work with the docker command without having to use sudo or switch to the root user. Pull down an image from Docker Hub and start deploying containers.
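As a quick sketch of that first deployment (the nginx image, container name web01, and host port 8080 are illustrative choices, not requirements):

```shell
# first_container NAME HOSTPORT: pull the official nginx image and run it
# detached, publishing HOSTPORT on the host to port 80 in the container.
first_container() {
  docker pull nginx &&
  docker run -d --name "$1" -p "$2:80" nginx
}
# Example (requires the Docker daemon started as above):
#   first_container web01 8080
```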

An outstanding container platform

Photon is, without a doubt, an outstanding platform, geared specifically for containers. Do note that Photon is an open source project, so there is no paid support to be had. If you find yourself having trouble with Photon, hop on over to the Issues tab on the Photon project’s GitHub page, where you can read and post about issues. And if you’re interested in forking Photon, you’ll find the source code on the project’s official GitHub page.

Give Photon a try and see if it doesn’t make deploying Docker containers and/or Kubernetes clusters significantly easier.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How to Install and Use Docker on Linux | Linux.com


Containers are all the rage in IT, and with good reason. Containers are lightweight, standalone packages that contain everything needed to run an application (code, libraries, runtime, system settings, and dependencies). Each container is deployed with its own CPU, memory, block I/O, and network resources, all while sharing the host system’s kernel rather than bundling an operating system of its own. That is the biggest difference between a container and a virtual machine: a virtual machine is a full-blown operating system platform running on a host OS, whereas a container is not.

Containers allow you to expand your company offerings (either internal or external) in ways you otherwise could not. For example, you can quickly deploy multiple instances of NGINX (even in multiple stagings, such as development and production). Unlike virtual machines, containers won’t put nearly the same hit on your system resources.

Docker makes creating, deploying, and managing containers incredibly simple. Best of all, installing and using Docker is second nature on the Linux platform.

I’m going to demonstrate how easy it is to install Docker on Linux, and walk you through the first steps of working with Docker. I’ll be demonstrating on the Ubuntu 16.04 Server platform, but the process is very similar on almost all Linux distributions.

I will assume you already have Ubuntu Server 16.04 up and running and ready to go.

Installation

Since Ubuntu Server 16.04 is sans GUI, the installation and usage of Docker will be handled entirely through the command line. Before you run the installation command, make sure to update apt and then run any necessary upgrades. Do note, if your server’s kernel upgrades, you’ll need to reboot the system. Thus, you might want to plan to do this during a time when a server reboot is acceptable.

To update apt, issue the command:

sudo apt update

Once that completes, upgrade with the command:

sudo apt upgrade

If the kernel upgrades, you’ll want to reboot the server with the command:

sudo reboot

If the kernel doesn’t upgrade, you’re good to install Docker (without having to reboot). The Docker installation command is:

sudo apt install docker.io

If you’re using a different Linux distribution, and you attempt to install (using your distribution’s package manager of choice), only to find out docker.io isn’t available, the package you want to install is called docker. For instance, the installation on Fedora would be:

sudo dnf install docker

If your distribution of choice is CentOS 7, installing docker is best handled via an installation script. First update the platform with the command sudo yum check-update. Once that completes, issue the following command to download and run the necessary script:

curl -fsSL https://get.docker.com/ | sh

Out of the box, the docker command can only be run with admin privileges. For security reasons, you won’t want to work with Docker as the root user or via sudo. To get around this, you need to add your user to the docker group. This is done with the command:

sudo usermod -a -G docker $USER

Once you’ve taken care of that, log out and back in, and you should be good to go. That is, unless your platform is Fedora. When adding a user to the docker group on this distribution, you’ll find the group doesn’t exist. What do you do? You create it first. Here are the commands to take care of this:

sudo groupadd docker && sudo gpasswd -a ${USER} docker && sudo systemctl restart docker

newgrp docker

Log out and log back in. You should be ready to use Docker.

Starting, stopping, and enabling Docker

Once installed, you will want to enable the Docker daemon at boot. To do this, issue the following two commands:

sudo systemctl start docker

sudo systemctl enable docker
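On systemd-based distributions, the two steps can also be collapsed into one; the --now flag starts the unit while enabling it at boot. A small convenience sketch:

```shell
# enable_docker: enable the Docker unit at boot and start it immediately.
enable_docker() {
  sudo systemctl enable --now docker
}
# Example: enable_docker
```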

Should you need to stop or restart the Docker daemon, the commands are:

sudo systemctl stop docker

sudo systemctl restart docker

Docker is now ready to deploy containers.

Pulling images

For Docker, images serve as the building blocks of your containers. You can pull down a single image (say, NGINX) and deploy as many containers as you need from it. To use images, you must first pull them onto your system. Images are pulled from registries, and your Docker installation uses Docker Hub by default, a registry that contains a large number of images, from official to user-contributed.

Let’s say you want to pull down an image for the Nginx web server. Before doing so, let’s check which images are already on our system. Issue the command docker images, and you should see that no images are to be found (Figure 1).

Let’s fix that. We’ll download the Nginx image from Docker Hub with the command:

docker pull nginx

The above command will pull down the latest (official) Nginx image from Docker Hub. If we run the command docker images, we now see the image listed (Figure 2).

Notice I said “official” Nginx image? You will find plenty of unofficial Nginx images on Docker Hub, many of them created to serve specific purposes. You can see a list of all Nginx images on Docker Hub with the command:

docker search nginx

As you can see (Figure 3), there are Nginx images to be had for numerous purposes (reverse proxy, PHP-FPM-capable, LetsEncrypt, Bitnami, Nginx for Raspberry Pi and Drupal, and much more).


Say, for example, you want to pull down the Nginx image with reverse proxy functionality built in. That unofficial image is called jwilder/nginx-proxy. To pull that image down, issue the command:

docker pull jwilder/nginx-proxy

Issue the command docker images to see the newly pulled images (Figure 4).

As a word of caution, I recommend working only with official images, as you cannot be certain that an unofficial image is free of malicious code.
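One way to steer toward official images from the start is docker search’s is-official filter; a minimal sketch (requires network access to Docker Hub):

```shell
# search_official TERM: list only official Docker Hub images matching TERM.
search_official() {
  docker search --filter is-official=true "$1"
}
# Example: search_official nginx
```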

You now have images ready to be used for deploying containers. When next we visit this topic, we’ll begin the process of deploying those containers, based on the Nginx image.
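As a small preview of that next step, here is the basic run/stop/remove lifecycle (the container name web01 and host port 8080 are illustrative, assuming the nginx image pulled above):

```shell
# serve_and_stop NAME HOSTPORT: start a detached nginx container,
# list it, then stop and remove it again.
serve_and_stop() {
  docker run -d --name "$1" -p "$2:80" nginx
  docker ps --filter "name=$1"
  docker stop "$1" && docker rm "$1"
}
# Example: serve_and_stop web01 8080
#   (while running, http://localhost:8080 serves the nginx welcome page)
```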

Docker is an incredibly powerful system that can make your job easier and your company more flexible and agile. For more information on what Docker can do, issue the command man docker and read through the man page.
