Monthly Archives: March 2018

Simple Load Balancing with DNS on Linux | Linux.com


When you have server back ends built of multiple servers, such as clustered or mirrored web or file servers, a load balancer provides a single point of entry. Large busy shops spend big money on high-end load balancers that perform a wide range of tasks: proxy, caching, health checks, SSL processing, configurable prioritization, traffic shaping, and lots more.

But maybe you don't want all that. You just need a simple method for distributing workloads across all of your servers and providing a bit of failover, and you don't care whether it is perfectly efficient. DNS round-robin, and subdomain delegation with round-robin, are two simple methods to achieve this.

DNS round-robin maps multiple servers to the same hostname, so that when users visit foo.example.com, multiple servers are available to handle their requests.

Subdomain delegation with round-robin is useful when you have multiple subdomains or when your servers are geographically dispersed. You have a primary nameserver, and your subdomains have their own nameservers; your primary nameserver refers all subdomain requests to them. This usually improves response times, as DNS resolvers tend to favor the name servers that respond fastest.

Round-Robin DNS

Round-robin has nothing to do with robins. According to my favorite librarian, it was originally a French phrase, ruban rond, or round ribbon. Way back in olden times, French government officials signed grievance petitions in non-hierarchical circular, wavy, or spoke patterns to conceal whoever originated the petition.

Round-robin DNS is also non-hierarchical: a simple configuration that takes a list of servers and sends requests to each server in turn. It does not perform true load balancing, as it does not measure loads, and it does no health checks, so if one of the servers is down, requests are still sent to that server. Its virtue lies in simplicity. If you have a little cluster of file or web servers and want to spread the load between them in the simplest way, then round-robin DNS is for you.

All you do is create multiple A or AAAA records, mapping multiple servers to a single hostname. This BIND example uses both IPv4 and IPv6 private address ranges:

fileserv.example.com.  IN  A  172.16.10.10
fileserv.example.com.  IN  A  172.16.10.11
fileserv.example.com.  IN  A  172.16.10.12

fileserv.example.com.  IN  AAAA  fd02:faea:f561:8fa0:1::10
fileserv.example.com.  IN  AAAA  fd02:faea:f561:8fa0:1::11
fileserv.example.com.  IN  AAAA  fd02:faea:f561:8fa0:1::12

Dnsmasq uses /etc/hosts for A and AAAA records:

172.16.1.10  fileserv fileserv.example.com
172.16.1.11  fileserv fileserv.example.com
172.16.1.12  fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::10  fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::11  fileserv fileserv.example.com
fd02:faea:f561:8fa0:1::12  fileserv fileserv.example.com

Note that these examples are simplified, and there are multiple ways to resolve fully-qualified domain names, so please study up on configuring DNS.

Use the dig command to check your work. Replace ns.example.com with your name server:

$ dig @ns.example.com fileserv.example.com A fileserv.example.com AAAA

That should display both IPv4 and IPv6 round-robin records.

Subdomain Delegation and Round-Robin

Subdomain delegation combined with round-robin is more work to set up, but it has some advantages. Use it when you have multiple subdomains or geographically dispersed servers. Response times are often quicker, and a name server that is down simply won't answer, so resolvers move on to the next one instead of hanging. A short TTL, such as 60 seconds, helps with this.

This approach requires multiple name servers. In the simplest scenario, you have a primary name server and two subdomains, each with its own name server. Configure your round-robin entries on the subdomain servers, then configure the delegations on your primary server.

In BIND, the delegation takes two additional pieces of configuration on your primary name server: NS records handing the subdomain off to its own name servers, and A/AAAA glue records for those name servers in your zone data file. It looks something like this on your primary name server:

ns1.sub.example.com.  IN A     172.16.1.20
ns1.sub.example.com.  IN AAAA  fd02:faea:f561:8fa0:1::20
ns2.sub.example.com.  IN A     172.16.1.21
ns2.sub.example.com.  IN AAAA  fd02:faea:f561:8fa0:1::21

sub.example.com.  IN NS    ns1.sub.example.com.
sub.example.com.  IN NS    ns2.sub.example.com.

Then each of the subdomain servers has its own zone file. The trick here is for each server to return its own IP address. The zone statement in named.conf is the same on both servers:

zone "sub.example.com" {
    type master;
    file "db.sub.example.com";
};

Then the data files are the same, except that the A/AAAA records use the server's own IP address. The SOA (start of authority) record refers to the primary name server:

; first subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com.  IN SOA ns1.example.com. admin.example.com. (
        2018123456      ; serial
        3H              ; refresh
        15              ; retry
        3600000         ; expire
        2H              ; minimum (negative-cache TTL)
)

sub.example.com.      IN NS    ns1.sub.example.com.
sub.example.com.      IN A     172.16.1.20
ns1.sub.example.com.  IN A     172.16.1.20
ns1.sub.example.com.  IN AAAA  fd02:faea:f561:8fa0:1::20
; second subdomain name server
$ORIGIN sub.example.com.
$TTL 60
sub.example.com.  IN SOA ns1.example.com. admin.example.com. (
        2018234567      ; serial
        3H              ; refresh
        15              ; retry
        3600000         ; expire
        2H              ; minimum (negative-cache TTL)
)

sub.example.com.      IN NS    ns2.sub.example.com.
sub.example.com.      IN A     172.16.1.21
ns2.sub.example.com.  IN A     172.16.1.21
ns2.sub.example.com.  IN AAAA  fd02:faea:f561:8fa0:1::21

Next, make your round-robin entries on the subdomain name servers, and you’re done. Now you have multiple name servers handling requests for your subdomains. Again, BIND is complex and has multiple ways to do the same thing, so your homework is to ensure that your configuration fits with the way you use it.

Subdomain delegations are easier in Dnsmasq. On your primary server, add lines like this in dnsmasq.conf to point to the name servers for the subdomains:

server=/sub.example.com/172.16.1.20
server=/sub.example.com/172.16.1.21
server=/sub.example.com/fd02:faea:f561:8fa0:1::20
server=/sub.example.com/fd02:faea:f561:8fa0:1::21

Then configure round-robin on the subdomain name servers in /etc/hosts.

For way more details and help, refer to the BIND Administrator Reference Manual and the Dnsmasq documentation.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

The Evolution of Object Storage


It’s a truism that the amount of data created every year continues to grow at exponential rates. Almost every business now depends on technology, and the information those businesses generate has arguably become their greatest asset. Unstructured data, the kind best kept in object stores, has seen the biggest growth. So, where are we with object storage technology, and what can we expect in the future?

Object storage systems

Object storage evolved out of the need to store large volumes of unstructured data for long periods of time at high levels of resiliency. Look back 20 years and we had block storage (traditional arrays) and NAS appliances (typically file servers). NAS, the most practical platform for unstructured data at the time, didn’t really scale to the petabyte level and certainly didn’t offer the levels of resiliency expected for long-term data retention. Generally, businesses used tape for this kind of requirement, but of course tape is slow and inefficient.

Object storage developed to fill the gap by offering online access to content, and over the years it has matured into a solid technology. With new protection methods like erasure coding, the issue of securing data in a large-scale archive is generally solved. In a typical 10+6 erasure-coding scheme, for example, an object is split into 16 fragments, any 10 of which can rebuild it, so the store tolerates six simultaneous failures at only 1.6x storage overhead.

Object stores use web-based protocols to store and retrieve data. Essentially, most offer four primitives, based on the CRUD acronym: Create, Read, Update, Delete. In many instances, Update is simply a Delete and Create pair of operations. This means interacting with an object store is relatively simple: issue a REST-based API call over HTTP that carries the data and its associated metadata.

This simplicity of operation highlights an issue for object storage: applications need to be rewritten to use an object storage API. Thankfully, vendors do offer SDKs to help in this process, but application changes are still required. This problem points to the first evolution we’re seeing with object storage: multi-protocol access.

Multi-protocol

It’s fair to say that object stores have had multi-protocol access for some time, in the form of gateways or additional software that uses the object store back end as a large pool of capacity. The problem with these kinds of implementations is whether they truly offer concurrent access to the same data from different protocol stacks. It’s fine to store and retrieve objects over NFS, but what about storing with NFS and accessing with a web-based protocol?

Why would a business want to have the ability to store with one protocol and access via another? Well, offering NFS means applications can use an object store with no modification. Providing concurrent web-based access allows analytics tools to access the data without introducing performance issues associated with the NFS protocol, like locking or multiple threads hitting the same object. The typical read-only profile of analytics software means data can be analyzed without affecting the main application.

Many IoT devices, like video cameras, will only talk NFS, so ingesting this kind of content into an object store means file-based protocols are essential.

Scalability

One factor influencing the use of object stores is the ability to scale down, rather than just scale up. Many object storage solutions start at capacities of many hundreds of terabytes, which isn’t practical for smaller IT organizations. We’re starting to see vendors address this problem by producing products that can scale down to tens of terabytes of capacity.

Obviously, large-capacity hard drives and flash can be a problem at this scale, but object stores may still be chosen for their functional benefits, like storing data in a flat namespace. So, vendors are offering solutions that are software-only and can be deployed either on dedicated hardware or as virtual instances, on-premises or in the public cloud.

With IoT likely to be a big creator of data, and that data being created across wide geographic distributions, larger numbers of smaller object stores will prove a benefit in meeting IoT’s ongoing needs.

Software-defined

Turning back to software-only solutions for a moment: delivering object storage as software means businesses can choose the right type of hardware for their environments. Where hardware supply contracts already exist, businesses can simply pay for the object storage software and deploy it on existing equipment. This includes testing on older hardware that might otherwise be disposed of.

Open source

The software-defined avenue leads on to another area in which object storage is growing: open source. Ceph was one of the original platforms developed under an open source model. OpenIO offers the same experience, with advanced functionality, like serverless, charged at a premium. Minio, another open source solution, recently received $20 million in funding to take its platform to a wider audience, including Docker containers.

Trial offerings

The focus on software means it’s easy for organizations to try out object stores. Almost all vendors with the exception of IBM Cloud Storage and DDN offer some sort of trial process by either downloading the software or using the company’s lab environment. Providing trials opens software to easier evaluation and adoption in the long run.

What’s ahead

Looking at the future for object storage, it’s fair to say that recent developments have been about making solutions more consumable. There’s a greater focus on software-only and vendors are working on ease of use and installation. Multi-protocol connects more applications, making it easier to get data into object stores in the first place. I’m sure in the coming years we will see object stores continue to be an important platform for persistent data storage.




Install Firefox in a Snap on Linux » Linux Magazine


The Linux desktop has an app fragmentation problem. Each distribution has its own application distribution mechanism, which ends up duplicating maintainer resources and is almost always a bottleneck when it comes to delivering updates to apps.

The Linux desktop communities are trying to solve that problem with solutions like AppImage, Flatpak, and Snap. While Flatpak is backed by Red Hat/Fedora developers, Snap is backed by Canonical; AppImage is relatively independent. Once again there is fragmentation, which means app developers must either ‘waste’ developer resources creating a package for all three formats or choose just one. Eventually the Linux world may settle on one, but for now we have to deal with all three.

Mozilla has officially picked Snap to deliver the Firefox browser for Linux. According to Canonical, by launching as a snap, the Firefox Quantum browser becomes available to a greater number of Linux users, with the snap working natively on Ubuntu, Arch, Linux Mint, Fedora, Solus, Debian, and other Linux distributions that support snaps.

“Mozilla has long been a leader in the open source space,” said Jamie Bennett, VP of Engineering, Devices & IoT at Canonical. “As such we are very happy to announce that they are joining the community of applications already available as snaps. Through their unique format, snaps can help bring some of the world’s most popular apps to almost any Linux desktop, server, device or cloud machine, allowing users to select the right distro for them without having to worry about updates, security or compatibility issues further down the line.”

There are many advantages to using a mechanism like Snap over the traditional method: you get updates as soon as the vendor releases them, with no need to add third-party repositories or wait weeks for packages to land in the official repositories.

If you want to grab a snap of Firefox, visit this link: https://snapcraft.io/store.




Gnome 3.28 Released » Linux Magazine


The Gnome Foundation has announced the release of Gnome 3.28, codenamed ‘Chongqing’ to honor the GNOME.Asia 2017 team. As we know, Gnome strives for simplicity to make desktop Linux easier to use. The project uses simple names for applications – Files, Web, Software, Videos – which accompany a simpler user interface. Gnome 3.28 continues that trend.

According to the release notes, there are two new features in GNOME 3.28 that make it easier to keep track of things. You can now star files and folders in Files (Gnome’s file manager). Once added, starred items can be viewed in a special location opened from the sidebar, which is a handy way to get quick access to the files you want.

The second feature is the arrival of favorites in the Contacts app. Users can now pin the people they often interact with, similar to the way it’s done on iOS and Android devices. The Contacts app also lets users sort names by first or last name.

There have been notable improvements in the entertainment department. The Photos app can now import files from removable devices, such as SD cards and USB drives. According to the release notes, this feature automatically detects devices that contain new images and it also allows organizing new images into albums as they’re imported.

With 3.28, Gnome has introduced a new app called Usage that does exactly what it sounds like: it tells you which app is consuming which system resources. Since the tool is just being introduced, it’s very basic and includes features for examining CPU and memory consumption. It highlights problem areas so you can easily identify issues and solve them.

From the security point of view, this release comes with extended device support. The integrated Thunderbolt 3 connection support offers security checks to prevent data theft through unauthorized Thunderbolt 3 connections.

The list of new features is long; the best way to learn about them is to update Gnome on your system.




Linux Foundation LFCS: Ahmed Alkabary | Linux.com


The Linux Foundation offers many resources for Linux and open source developers, users, and administrators. One of the most important offerings is its Linux Certification Program, which is designed to give you a way to differentiate yourself in a job market that’s hungry for your skills.

How well does the certification prepare you for the real world? To illustrate that, the Linux Foundation will be featuring some of those who have recently passed the certification examinations. These testimonials should help you decide if either the Linux Foundation Certified System Administrator or the Linux Foundation Certified Engineer certification is right for you. In this installment of our series, we talk with Ahmed Alkabary.

An introduction

Alkabary writes, “I want to share my experience with the LFCS, as I do believe it’s a unique one. It all started with my winning the Academic Aces award in the 2016 LiFT program, and then I received a free exam coupon for the LFCS…”

Linux.com: How did you become interested in Linux and open source?

Ahmed Alkabary: I always knew about Linux as an alternative to Windows but never really got to experience it until 2011. I decided to buy a new laptop, and the laptop that stood out for me had Linux pre-installed; I remember well that the pre-installed distribution was openSUSE. I was hesitant to buy it, as I had no experience with Linux whatsoever, but I thought to myself, well, I can just install Windows on it if I don’t like it. Once I booted the system and saw how fast and neat everything was, I thought it was a message from the Linux gods.

It’s really weird, because on that first day I felt that Linux was meant for me, not just as an operating system to use; I felt my life would be centered around Linux from that day on.

I was a first-year computer science student at the time, so I quickly developed a passion for operating systems. I immediately started experimenting with Linux by installing different distros and trying to understand the filesystem, as well as everything behind Linux itself. I had been treating Windows as an operating system I used just to check email and do Google searches, but Linux made me think about operating systems on a whole different level. It’s like driving a Ferrari and suddenly getting super excited about cars and how they work! Linux being free was also a huge factor in my becoming interested in it.

Linux.com: What Linux Foundation course did you achieve certification in? Why did you select that particular course?

Alkabary: I earned my LFCS certification, and I chose it because it’s a very important and prestigious certification to achieve if one is seriously considering a career in Linux. It also has all the fundamentals covered. More importantly, the LFCS exam is very hands-on, which makes it leagues better than Linux certs that are multiple-choice based. Earning the LFCS certification makes me feel that I am up to any task I take on at my job. Since the exam is hands-on, it’s not like crammable multiple-choice exams that don’t verify any skill besides memorization.

Employers can rest assured that anyone who passes this exam has a solid understanding of Linux and can be a trustworthy Linux sysadmin. One other advantage is that the exam is online. Instead of traveling to a testing center, the test can be done from the comfort of your own room, on your favorite chair, on your favorite computer.

Linux.com: What are your career goals? How do you see The Linux Foundation certification helping you achieve those goals and benefiting your career?

Alkabary: I am currently working as a junior Linux administrator at ISM Canada (an IBM company). My career goal is to become a senior Linux administrator/kernel developer. My ultimate goal is to become someone who advocates for Linux and a pioneer of this awesome piece of software. The Linux certification makes me more confident in my skills and makes me feel able to reach all the goals I’ve set for myself. I will prepare for the LFCE exam, which will make me even more comfortable with Linux and will go a long way toward ensuring more success at my current job (as every question on the LFCS exam was basically a task that I had to do in my position). Some questions even made me realize I was doing certain things incorrectly at work.

Linux.com: What other hobbies or projects are you involved in? Do you participate in any open source projects at this time?

Alkabary: I am very interested in the Linux kernel. I am currently learning about it and want to get into Linux driver development and cgroups. It is a very steep learning curve, quite complicated compared to Linux administration, and there aren’t many helpful resources. Within this realm, The Linux Foundation has made things easier by offering a course on Linux kernel internals and development. I recently read an article about how Linux kernel skills are very scarce and in huge demand at the moment. I believe The Linux Foundation should create a Linux kernel development certification, which would be a serious breakthrough, because many more people would become interested in developing for the Linux kernel. A certification program would make it much easier for kernel enthusiasts to contribute to the kernel project.

Linux.com: Do you plan to take future Linux Foundation courses? If so, which ones?

Alkabary: Yes, I am planning to take the LFCE, and I’m also very happy about the partnership with Microsoft, as I can now take the Linux on Azure certification, a joint certification between The Linux Foundation and Microsoft. At work, we recently implemented an Azure stack, so taking the Linux on Azure certification will definitely help me quite a lot.

Linux.com: In what ways do you think the certification will help you as a systems administrator in today’s market?

Alkabary: It verifies my Linux skills and makes me more confident and excited about my career goals. All the exam objectives are basically part of the everyday tasks assigned to me at work, so passing the exam makes me feel I am better at my job. Also, the LFCS is one of two steps toward the Microsoft Azure MCSA (Linux on Azure), and we implement Azure solutions here at ISM Canada, so getting an MCSA will definitely be a huge asset (and will help me contribute with greater impact and become a leader within my organization within a few months of employment).

Linux.com: Are you currently working as a Linux systems administrator? If so, what role does Linux play?

Alkabary: Yes, I am currently working as a Linux sysadmin in a mid-range environment. My job is pretty much centered around Linux: I build and patch Linux servers, perform maintenance-related tasks, and administer a wide array of Unix/Linux servers.

Linux.com: What Linux distribution do you prefer and why?

Alkabary: I would have to say openSUSE Tumbleweed, as it is kind of my first Linux love. It is very beautifully designed, and I also like working with YaST, as well as the zypper package manager.

But I also like Fedora, as it is closely related to Red Hat, which is what most of my work revolves around. So I would say openSUSE is my favorite hobby distro and Fedora is my favorite professional distro.

Linux.com: Where do you see the Linux job market growing the most in the coming years?

Alkabary: I see it growing in the cloud, as Linux is already the most used OS on Azure, Amazon, and OpenStack. I would also say more growth will occur in the mobile world (we all know that Android is based on Linux). I can see Linux continuing its dominance in the cloud and mobile in the coming years for sure. And that’s not neglecting the fact that Linux is growing in popularity for personal use every day, so it’s becoming more popular as a desktop as well.

Linux.com: What advice would you give those considering certification for their preparation?

Alkabary: There are free Linux Foundation courses on edX, so those are a great starting point. The Linux Foundation’s LFS201 course is great preparation as well. I also used Sander van Vugt’s LFCS video series, which is really good.

I highly recommend that everyone take the LFCS and LFCE exams. They will open doors and verify your Linux skills, and, last but not least, it’s probably redundant to say by now, but Linux skills are in great demand. So a job is almost guaranteed with LFCS and LFCE certification.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Read more:

Linux Foundation LFCS and LFCE: Alberto Bullo

Linux Foundation LFCS and LFCE: Miltos Tsatsakis

Linux Foundation Certified System Administrator: Gabriel Rojo Argote

Linux Foundation LFCE Georgi Yadkov Shares His Certification Journey

Linux Foundation LFCS and LFCE: Pratik Tolia

Linux Foundation Certified Engineer: Gbenga “Christopher” Adigun

Linux Foundation Certified Engineer: Karthikeyan Ramaswamy

Linux Foundation Certified System Administrator: Muneeb Kalathil