Tag Archives: linux

Microsoft Offers its Patent Portfolio » Linux Magazine

In a surprising and historic move, Microsoft has released its entire portfolio of roughly 60,000 issued patents to the Open Invention Network (OIN) by joining the organization.

“We bring a valuable and deep portfolio of over 60,000 issued patents to OIN for the benefit of Linux and other open source technologies,” said Erich Andersen, Corporate Vice President and Deputy General Counsel.

The portfolio includes the 235 patents that Microsoft once claimed the Linux kernel infringed. Linus Torvalds dismissed those claims at the time, stating, “Microsoft just made up the number.”

It’s a major U-turn for Microsoft, which has a history of wielding patents as a weapon against Linux players, and it brings an end to the long hostility between Microsoft and the Linux community.

With more than 2,650 members, including numerous Fortune 500 enterprises, OIN is the largest patent non-aggression community.

OIN has created a massive pool of patents affecting Linux and open source projects, which it offers to member organizations on a royalty-free basis. Companies that are not OIN members can also tap into the pool, provided they promise not to assert their own patents against the Linux system.

Back in 2005, OIN was created by a group of companies with vested interests in open source. The goal was to fend off any patent attacks on open source companies. Founding members included IBM, NEC, SUSE/Novell, Philips, Red Hat, and Sony.

Microsoft holds around 90,000 patents in total, but over 30,000 are still pending with the US Patent Office. Once those patents are granted, they will also become part of the OIN pool.


Redis Labs Modules Forked » Linux Magazine

As expected, developers from the Fedora and Debian projects have forked the modules that database vendor Redis Labs put under the Commons Clause.

The Commons Clause is an extra license rider that prohibits the user from “selling” the software, and “selling” is defined to include selling services such as hosting and consulting. According to Redis Labs and the creators of the Commons Clause, the rider was created to prevent huge hosting companies like Amazon from using the code without contributing to the project. Unfortunately, the license also has the effect of making the Redis Labs modules incompatible with the open source licenses used with Linux and other FOSS projects.

To fix the problem, Debian and Fedora came together to fork these modules. Nathan Scott, Principal Software Engineer at Red Hat, wrote on a Google Group, “…we have begun collaborating on a set of module repositories forked from prior to the license change. We will maintain changes to these modules under their original open source licenses, applying only free and open fixes and updates.”

It was an expected move. When the license of an open source project changes, part of the community often forks the project to keep a version fully compatible with the earlier open source license. The fork means commercial vendors like Amazon will still be able to use these modules without contributing anything to Redis Labs or to the newly forked project. However, not all forks succeed. It’s not the license that matters so much as the expertise of the developers who write and maintain the codebase. Google once forked Linux for Android, for example, but Android’s changes eventually made their way back into the mainline kernel.

In a previous interview, Redis Labs told me that they were not sure whether adding the Commons Clause to these licenses would work. They had already tried the Affero GPL (AGPL), which is also designed to address the so-called application service provider loophole that allows cloud vendors to avoid contributing back their changes, but the move to the AGPL didn’t get vendors like Amazon to contribute.

Redis Labs added the Commons Clause only to the modules written by its own staff; there is no change to the modules written by external parties.


4 Useful Tools to Run Commands on Multiple Linux Servers | Linux.com

In this article, we will show how to run commands on multiple Linux servers at the same time, using some widely known tools designed to execute repetitive series of commands on many servers simultaneously. This guide is useful for system administrators who have to check the health of multiple Linux servers every day.

For the purposes of this article, we assume that you already have SSH set up to access all of your servers. We also recommend setting up key-based, password-less SSH authentication on all of them: above all this enhances server security, and it also makes simultaneous access much easier.
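As a quick sketch of that setup (the host names and the admin user below are placeholders for your own inventory), key-based SSH can be rolled out to a small fleet roughly like this:

```shell
# Create a dedicated, passphrase-less key pair for fleet administration.
# It is written to the current directory here; ~/.ssh/ is the usual home.
ssh-keygen -t ed25519 -f ./fleet_key -N "" -q

# Push the public key to every server. Replace the host list and user
# with your own; afterwards, ssh and the parallel tools below log in
# without a password prompt.
for host in server1.example.com server2.example.com; do
    ssh-copy-id -i ./fleet_key.pub "admin@$host" \
        || echo "could not reach $host -- check DNS and sshd"
done
```

Once the key is distributed, `ssh -i ./fleet_key admin@server1.example.com` should log in without prompting.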

1. PSSH – Parallel SSH

Parallel-SSH is an open source, fast, and easy-to-use command-line Python toolkit for executing ssh in parallel on a number of Linux systems. It contains a number of tools for various purposes, such as parallel-ssh, parallel-scp, parallel-rsync, parallel-slurp, and parallel-nuke (read the man page of a particular tool for more information).
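As a hedged sketch (the host names are placeholders, and on some distributions the binary is installed as pssh rather than parallel-ssh), a typical run looks like this:

```shell
# hosts.txt lists one [user@]host per line; parallel-ssh reads it via -h.
cat > hosts.txt <<'EOF'
admin@web1.example.com
admin@web2.example.com
EOF

# Run the same command on every listed host in parallel:
#   -i  print each host's output inline as it finishes
#   -t  per-host timeout in seconds
parallel-ssh -h hosts.txt -i -t 10 uptime \
    || echo "some hosts failed (expected with the placeholder names above)"
```

With real host names and key-based SSH in place, the same one-liner works for any health-check command, not just uptime.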

Read more at Tecmint


4 Must-Have Tools for Monitoring Linux | Linux.com

Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons that particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well beyond what most platforms can. Just ask any enterprise business why they use Linux and open source.

But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from the command line, to GUIs, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which excludes some high-quality proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools.

top
We’ll first start with the obvious. The top command is a great place to start when you need to monitor which processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of all running processes on a Linux machine. The top command not only displays dynamic information about each running process (as well as the information necessary to manage those processes), but also gives you an overview of the machine (such as how many CPUs are found and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see which processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly.

There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top.
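Beyond the interactive view, top's batch mode is handy when you want to capture a snapshot from a script or cron job rather than watch the live display; a minimal example:

```shell
# -b runs top non-interactively (batch mode); -n 1 takes one snapshot.
# Redirecting to a file lets a script or cron job inspect it later.
top -b -n 1 > top_snapshot.txt

# The header lines carry the machine overview (uptime, load average,
# task counts, CPU and memory usage); the rest is the process list.
head -n 15 top_snapshot.txt
```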

Glances
If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2).

You won’t find glances installed by default. However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository.

GNOME System Monitor

If you’re not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which works as a graphical front end for the kind of process monitoring top provides. And if you prefer a GUI, you can’t beat this app.

With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon).

You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4).

If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system.

Nagios
If you’re looking for an enterprise-grade network monitoring system, look no further than Nagios. But don’t think Nagios is limited to monitoring network traffic. The system has more than 5,000 add-ons available to extend it to perfectly meet (and exceed) your needs. The Nagios monitor doesn’t come pre-installed on your Linux distribution, and although the install isn’t quite as difficult as that of some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source. Once installed, you can log in to the Nagios web GUI and start monitoring (Figure 5).

Of course, at this point, you’ve only installed the core; you will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time.

The one caveat with Nagios is that you must manually define, via text files, any remote hosts to be monitored (other than the host the system is installed on). Fortunately, the installation includes sample configuration files (found in /usr/local/nagios/etc/objects), which you can use as a basis for the configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers).
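As a sketch, a minimal definition for one remote server, modeled on the sample objects Nagios installs, might look like this (the host name, alias, and address are placeholders; the linux-server template comes from the sample configuration files):

```shell
# Write a minimal host definition; with a source install, this file
# would live in /usr/local/nagios/etc/servers/.
cat > web1.cfg <<'EOF'
define host {
    use         linux-server    ; template from the sample configuration
    host_name   web1
    alias       Web Server 1
    address     192.168.1.50    ; placeholder address
}
EOF
```

After adding such a file, verify the configuration with `nagios -v /usr/local/nagios/etc/nagios.cfg` before restarting the service.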

Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it.

There’s More Where That Came From

We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need.

Exploring the Linux Kernel: The Secrets of Kconfig/kbuild | Linux.com

The Linux kernel config/build system, also known as Kconfig/kbuild, has been around for a long time, ever since the Linux kernel code migrated to Git. As supporting infrastructure, however, it is seldom in the spotlight; even kernel developers who use it in their daily work never really think about it.

To explore how the Linux kernel is compiled, this article will dive into the Kconfig/kbuild internal process, explain how the .config file and the vmlinux/bzImage files are produced, and introduce a smart trick for dependency tracking.


The first step in building a kernel is always configuration. Kconfig helps make the Linux kernel highly modular and customizable, and it offers the user many configuration targets.
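For reference, the standard configuration targets (as documented by `make help` in a kernel source tree; they require the kernel sources to run) include:

```shell
make config       # line-by-line question-and-answer interface
make menuconfig   # ncurses menu interface (the most common choice)
make nconfig      # newer ncurses interface
make xconfig      # Qt-based GUI
make gconfig      # GTK-based GUI
make oldconfig    # update an existing .config, asking only about new options
make defconfig    # default config for the current architecture
make allyesconfig # answer "yes" to every option
make allnoconfig  # minimal config: answer "no" to every option
```

Whichever target you choose, the result is the .config file that drives the rest of the build.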

Read more at OpenSource.com
