Monthly Archives: February 2018

DNS and DHCP with Dnsmasq | Linux.com


Last week, we learned a batch of tips and tricks for Dnsmasq. Today, we’re going more in-depth into configuring DNS and DHCP, including entering DHCP hostnames automatically into DNS, and assigning static IP addresses from DHCP.

You will edit three configuration files on your Dnsmasq server: /etc/dnsmasq.conf, /etc/resolv.conf, and /etc/hosts. Just like the olden days when we had nice clean configuration files for everything, instead of messes of scripts and nested configuration files.

Use Dnsmasq’s built-in syntax checker to check for configuration file errors, and run Dnsmasq from the command line rather than as a daemon so you can quickly test configuration changes and log the results. (See last week’s tutorial to learn more about this.)

Taming Network Manager and resolv.conf

Disable Network Manager on your Dnsmasq server, and give its network interfaces static configurations. You also need control of /etc/resolv.conf, which in these modern times is usually controlled by other processes, such as Network Manager. In these cases, /etc/resolv.conf is a symbolic link to another file such as /run/resolvconf/resolv.conf or /var/run/NetworkManager/resolv.conf. To get around this, delete the symlink and then re-create /etc/resolv.conf as a regular file. Now your changes will not be overwritten.
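As a sketch, the symlink swap looks like this. To keep it safe to try, the commands below operate on a throwaway directory (the ETC variable) standing in for /etc; on a real server you would use /etc itself, as root:

```shell
# Throwaway stand-in for /etc (on a real server: ETC=/etc, run as root)
ETC="$(mktemp -d)"
mkdir -p "$ETC/run/resolvconf"
echo "nameserver 192.168.0.1" > "$ETC/run/resolvconf/resolv.conf"
ln -s "$ETC/run/resolvconf/resolv.conf" "$ETC/resolv.conf"  # the managed symlink

# Delete the symlink, then re-create resolv.conf as a regular file
rm "$ETC/resolv.conf"
printf 'nameserver 127.0.0.1\n' > "$ETC/resolv.conf"
```

After this, nothing regenerates the file, so your edits stick until you re-enable resolvconf or Network Manager.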

There are many ways to use Dnsmasq and /etc/resolv.conf together. My preference is to enter only 127.0.0.1 in /etc/resolv.conf, and enter all upstream nameservers in /etc/dnsmasq.conf. You don’t need to touch any client configurations because Dnsmasq will provide all network information to them via DHCP.
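With this arrangement, /etc/resolv.conf contains the single line nameserver 127.0.0.1, and the upstream servers live in /etc/dnsmasq.conf, along these lines (the Google servers here are just examples):

```
no-resolv
server=8.8.8.8
server=8.8.4.4
```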

Local DHCP

This example configuration includes some typical global options, and then defines a single DHCP address range. Replace the example values with your own.

# global options
domain-needed
bogus-priv
no-resolv
filterwin2k
expand-hosts
domain=mydomain.net
local=/mydomain.net/
listen-address=127.0.0.1
listen-address=192.168.10.4

# DHCP range
dhcp-range=192.168.10.10,192.168.10.50,12h
dhcp-lease-max=25

dhcp-range=192.168.10.10,192.168.10.50,12h defines a range of 41 available address leases (.10 through .50, inclusive), with a lease time of 12 hours. This range must not include your Dnsmasq server. You may define the lease time in seconds, minutes, or hours. The default is one hour, and the minimum possible is two minutes. If you want infinite lease times, use the keyword infinite in place of a duration.
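As a sanity check on pool sizing, Python’s ipaddress module can count the addresses in a range like the one above (counting both endpoints):

```python
import ipaddress

# Endpoints of the dhcp-range above
start = ipaddress.IPv4Address("192.168.10.10")
end = ipaddress.IPv4Address("192.168.10.50")

# Inclusive count of addresses in the pool
pool_size = int(end) - int(start) + 1
print(pool_size)  # 41
```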

dhcp-lease-max=25 defines how many leases can be active at one time. You can have a large address pool available and then limit the number of active leases, to prevent denial-of-service problems from hosts going nuts and demanding a lot of DHCP leases.

DHCP Zones and Options

You can define DHCP zones for different subnets, and then give each zone different options. This example defines two zones, eth and wifi:

dhcp-range=eth,192.168.10.10,192.168.10.50,12h
dhcp-range=wifi,192.168.20.10,192.168.20.50,24h

The default route advertised to all clients is the address of your Dnsmasq server. You can configure DHCP to assign each zone a different default route:

dhcp-option=eth,3,192.168.10.1
dhcp-option=wifi,3,192.168.20.1

How do you know that 3 is the default route option? Run dnsmasq --help dhcp to see all the IPv4 options. dnsmasq --help dhcp6 lists the IPv6 options. (See man 5 dhcp-options for more information on options.) You may also use the option names instead of the numbers, like this example for your NTP server:

dhcp-option=eth,option:ntp-server,192.168.10.5

Upstream Name Servers

Controlling which upstream name servers your network uses is one of the nicer benefits of running your own name server, instead of being stuck with whatever your ISP wants you to use. This example uses the Google public name servers. You don’t have to use Google; a quick Web search will find a lot of public DNS servers.

server=8.8.4.4
server=8.8.8.8

DNS Hosts

Adding DNS hosts to Dnsmasq is almost as easy as falling over. All you do is add them to /etc/hosts, like this, using your own addresses and hostnames:

127.0.0.1       localhost
192.168.10.2    webserver
192.168.10.3    fileserver 
192.168.10.4    dnsmasq
192.168.10.5    timeserver

Dnsmasq reads /etc/hosts, and these hosts are available to your LAN either by hostname or by fully qualified domain name. The expand-hosts option in /etc/dnsmasq.conf appends the domain= value to the hostnames, for example, webserver.mydomain.net.

Set Static Addresses from DHCP

This is my favorite thing. You may assign static IP addresses to your LAN hosts by MAC address, or by hostname. The address must fall in a range you have already configured with dhcp-range=:

dhcp-host=d0:50:99:82:e7:2b,192.168.10.46
dhcp-host=turnip,192.168.10.45

On most Linux distributions, dhclient sends the hostname by default. You can confirm this in dhclient.conf, with the send host-name option. Do not have any duplicate entries in /etc/hosts.
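On Debian-family systems the relevant dhclient.conf line (the file typically lives at /etc/dhcp/dhclient.conf) usually looks like this:

```
send host-name = gethostname();
```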

Here we are again at the end already. For more Dnsmasq features and how-tos, see last week’s tutorial, Advanced Dnsmasq Tips and Tricks.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

6 Ways to Transform Legacy Data Storage Infrastructure


So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other new innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever — and generally inexpensive — ways to answer the question, keep your job, and even get a pat on the back.

SSD and flash are game-changers, so they need to be incorporated into your storage infrastructure. SSDs beat enterprise-class hard drives on a cost basis because they speed up your workload and reduce the number of storage appliances and servers needed. It’s even better if your servers support NVMe, since the interface is becoming ubiquitous and will replace both SAS and (a bit later) SATA, simply because it’s much faster and has lower overhead.

As for RAID arrays, we have to face up to the harsh reality that RAID controllers can only keep up with a few SSDs. The answer is either an all-flash array, keeping the RAID arrays for cool or cold secondary storage, or a move to a new architecture based on either hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 storage in a system. They are designed to bolt onto an existing SAN and require minimal change in configuration files to function. Typically, all-flash boxes have smaller capacities than the RAID arrays, since they have enough I/O cycles to do near-real-time compression coupled with the ability to down-tier (compress) data to the old RAID arrays.

With an all-flash array, which isn’t outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combination of flash and secondary storage that usually has 5X effective capacity due to compression. Just tell the CEO how many RAID arrays and drives you didn’t buy. That’s worth a hero badge!

The idea of a flash front end works for desktops, too. Use a small flash drive for the OS (C: drive) and store colder data on those 3.5” HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has fleshed out into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn’t that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality Ring software, the result is a decent object-storage solution, especially if compression and global deduplication are supported.

Likely by now, you are using public clouds for backup. Consider “perpetual” storage using a snapshot tool or continuous backup software to reduce your RPO and RTO. Use multi-zone operations in the public cloud to converge DR onto the perpetual storage setup, as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.

On the software front, the world of IT is migrating to a services-centric software-defined storage (SDS), which allows scaling and chaining of data services via a virtualized microservice concept. Even older SANs and server drives can be pulled into the methodology, with software making all legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If this prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, apply one of the “Zip” programs and choose the encrypted file option. This saves a lot of space and gives you encryption!

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.





What NVMe over Fabrics Means for Data Storage


NVMe-oF will speed adoption of Non-Volatile Memory Express in the data center.

The last few years have seen Non-Volatile Memory Express (NVMe) completely revolutionize the storage industry. Its wide adoption has driven down flash memory prices. With lower prices and better performance, more enterprises and hyper-scale data centers are migrating to NVMe. The introduction of NVMe over Fabrics (NVMe-oF) promises to accelerate this trend.

The original base specification of NVMe is designed as a protocol for storage on flash memory that uses existing, unmodified PCIe as a local transport. This layered approach is very important. NVMe does not create a new electrical or frame layer; instead it takes advantage of what PCIe already offers. PCIe has a well-known history as a high speed interoperable bus technology. However, while it has those qualities, it’s not well suited for building a large storage fabric or covering distances longer than a few meters. With that limitation, NVMe would be limited to being used as a direct attached storage (DAS) technology, essentially connecting SSDs to the processor inside a server, or perhaps to connect all-flash arrays (AFA) within a rack. NVMe-oF allows things to be taken much further.

Connecting storage nodes over a fabric is important as it allows multiple paths to a given storage resource. It also enables concurrent operations to distributed storage, and a means to manage potential congestion. Further, it allows thousands of drives to be connected in a single pool of storage, since it is no longer limited by the reach of PCIe, but can also take advantage of a fabric technology like RoCE or Fibre Channel.

NVMe-oF describes a means of binding regular NVMe protocol over a chosen fabric technology, a simple abstraction enabling native NVMe commands to be transported over a fabric with minimal processing to map the fabric transport to PCIe and back.  Product demonstrations have shown that the latency penalty for accessing an NVMe SSD over a fabric as opposed to a direct PCIe link can be as low as 10 microseconds.

The layered approach means that a binding specification can be created for any fabric technology, although some fabrics may be better suited for certain applications. Today there are bindings for RDMA (RoCE, iWARP, Infiniband) and Fibre Channel. Work on a binding specification for TCP/IP has also begun.

Different products will use this layered capability in different ways. A simple NVMe-oF target, consisting of an array of NVMe SSDs, may expose all of its drives individually to the NVMe-oF host across the fabric, allowing the host to access and manage each drive individually. Other solutions may take a more integrated approach, using the drives within the array to create one big pool of storage that is offered to the NVMe-oF initiator. With this approach, drive management can be done locally within the array, without requiring the attention of the NVMe-oF initiator or any higher-layer software application. This also allows the NVMe-oF target to implement and offer NVMe protocol features that may not be supported by the drives within the array.

A good example of this is a secure erase feature. A lower-cost drive may not support the feature, but if that drive is put into an NVMe-oF AFA target, the AFA can implement the secure erase feature itself and advertise it to the initiator. The NVMe-oF target will handle the operations to the lower-cost drive in order to properly support the feature from the perspective of the initiator. This provides implementers with a great deal of flexibility to meet customer needs by varying hardware vs. software feature implementation, drive cost, and performance.

The recent plugfest at UNH-IOL focused on testing simple RoCE and Fibre Channel fabrics. In these tests, a single initiator and target pair were connected over a simple two switch fabric. UNH-IOL performed NVMe protocol conformance testing, generating storage traffic  to ensure data could be transferred error-free. Additionally, testing involved inducing network disruptions to ensure the fabric could recover properly and transactions could resume.

In the data center, storage is used to support many different types of applications with an unending variety of workloads. NVMe-oF has been designed to enable flexibility in deployment, offering choices for drive cost and features support, local or remote management, and fabric connectivity. This flexibility will enable wide adoption. No doubt, we’ll continue to see expansion of the NVMe ecosystem.




Arch Anywhere Is Dead, Long Live Anarchy Linux | Linux.com


Arch Anywhere was a distribution aimed at bringing Arch Linux to the masses. Due to a trademark infringement, Arch Anywhere has been completely rebranded to Anarchy Linux. And I’m here to say, if you’re looking for a distribution that will enable you to enjoy Arch Linux, a little Anarchy will go a very long way. This distribution is seriously impressive in what it sets out to do and what it achieves. In fact, anyone who previously feared Arch Linux can set those fears aside… because Anarchy Linux makes Arch Linux easy.

Let’s face it; Arch Linux isn’t for the faint of heart. The installation alone will turn off many a new user (and even some seasoned users). That’s where distributions like Anarchy make for an easy bridge to Arch. With a live ISO that can be tested and then installed, Arch becomes as user-friendly as any other distribution.

Anarchy Linux goes a little bit further than that, however. Let’s fire it up and see what it does.

The installation

The installation of Anarchy Linux isn’t terribly challenging, but it’s also not quite as simple as for, say, Ubuntu, Linux Mint, or Elementary OS. Although you can run the installer from within the default graphical desktop environment (Xfce4), it’s still much in the same vein as Arch Linux. In other words, you’re going to have to do a bit of work—all within a text-based installer.

To start, the very first step of the installer (Figure 1) requires you to update the mirror list, which will likely trip up new users.

From the options, select Download & Rank New Mirrors. Tab down to OK and hit Enter on your keyboard. You can then select the nearest mirror (to your location) and be done with it. The next few installation screens are simple (keyboard layout, language, timezone, etc.). The next screen should surprise many an Arch fan. Anarchy Linux includes an auto partition tool. Select Auto Partition Drive (Figure 2), tab down to Ok, and hit Enter on your keyboard.

You will then have to select the drive to be used (if you only have one drive, this is only a matter of hitting Enter). Once you’ve selected the drive, choose the filesystem type to be used (ext2/3/4, btrfs, jfs, reiserfs, xfs), tab down to OK, and hit Enter. Next you must choose whether you want to create SWAP space. If you select Yes, you’ll then have to define how much SWAP to use. The next window will stop many new users in their tracks. It asks if you want to use GPT (GUID Partition Table). This is different from the traditional MBR (Master Boot Record) partitioning. GPT is a newer standard and works better with UEFI. If you’ll be working with UEFI, go with GPT; otherwise, stick with the old standby, MBR. Finally, opt to write the changes to the disk, and your installation can continue.

The next screen that could give new users pause requires the selection of the desired installation. There are five options:

  • Anarchy-Desktop

  • Anarchy-Desktop-LTS

  • Anarchy-Server

  • Anarchy-Server-LTS

  • Anarchy-Advanced

If you want long-term support, select Anarchy-Desktop-LTS; otherwise, select Anarchy-Desktop (the default), tab down to Ok, and hit Enter on your keyboard. After you select the type of installation, you will get to select your desktop. You can select from five options: Budgie, Cinnamon, GNOME, Openbox, and Xfce4.
Once you’ve selected your desktop, give the machine a hostname, set the root password, create a user, and enable sudo for the new user (if applicable). The next section that will raise the eyebrows of new users is the software selection window (Figure 3). You must go through the various sections and select which software packages to install. Don’t worry: if you miss something, you can always install it later.

Once you’ve made your software selections, tab to Install (Figure 4), and hit Enter on your keyboard.

Once the installation completes, reboot and enjoy Anarchy.

Post install

I installed two versions of Anarchy, one with Budgie and one with GNOME. Both performed quite well; however, you might be surprised to see that the version of GNOME installed is decked out with a dock. In fact, comparing the desktops side by side, you’ll see they do a good job of resembling one another (Figure 5).

My guess is that you’ll find all desktop options for Anarchy configured in such a way to offer a similar look and feel. Of course, the second you click on the bottom left “buttons”, you’ll see those similarities immediately disappear (Figure 6).

Regardless of which desktop you select, you’ll find everything you need to install new applications. Open up your desktop menu of choice and select Packages to search for and install whatever is necessary for you to get your work done.

Why use Arch Linux without the “Arch”?

This is a valid question, and the answer is simple but revealing. Some users may opt for a distribution like Arch Linux because they want the feeling of “elitism” that comes with using, say, Gentoo, without having to go through quite as much hassle. In terms of complexity, Arch rests below Gentoo, which means it’s accessible to more users. However, along with that complexity comes a level of dependability that may not be found in other platforms. So, if you’re looking for a Linux distribution with high stability that’s not quite as challenging as Gentoo or Arch to install, Anarchy might be exactly what you want. In the end, you’ll wind up with an outstanding desktop platform that’s easy to work with (and maintain), based on a very highly regarded distribution of Linux.

That’s why you might opt for Arch Linux without the Arch.

Anarchy Linux is one of the finest “user-friendly” takes on Arch Linux I’ve ever had the privilege of using. Without a doubt, if you’re looking for a friendlier version of a rather challenging desktop operating system, you cannot go wrong with Anarchy.


Advanced Dnsmasq Tips and Tricks | Linux.com


Many people know and love Dnsmasq and rely on it for their local name services. Today we look at advanced configuration file management, how to test your configurations, some basic security, DNS wildcards, speedy DNS configuration, and some other tips and tricks. Next week, we’ll continue with a detailed look at how to configure DNS and DHCP.

Testing Configurations

When you’re testing new configurations, you should run Dnsmasq from the command line, rather than as a daemon. This example starts it without launching the daemon, prints command output, and logs all activity:

# dnsmasq --no-daemon --log-queries
dnsmasq: started, version 2.75 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt 
 DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack 
 ipset auth DNSSEC loop-detect inotify
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 192.168.0.1#53
dnsmasq: read /etc/hosts - 9 addresses

You can see tons of useful information in this small example, including version, compiled options, system name service files, and its listening address. Ctrl+c stops it. By default, Dnsmasq does not have its own log file, so entries are dumped into multiple locations in /var/log. You can use good old grep to find Dnsmasq log entries. This example searches /var/log recursively, prints the line numbers after the filenames, and excludes /var/log/dist-upgrade:

# grep -ir --exclude-dir=dist-upgrade dnsmasq /var/log/

Note the fun grep gotcha with --exclude-dir=: Don’t specify the full path, but just the directory name.
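That behavior is easy to demonstrate on a throwaway directory tree standing in for /var/log:

```shell
# Build a tiny stand-in for /var/log containing a dist-upgrade subdirectory
logs="$(mktemp -d)"
mkdir -p "$logs/dist-upgrade"
echo "dnsmasq: started, version 2.75" > "$logs/syslog"
echo "dnsmasq: started, version 2.75" > "$logs/dist-upgrade/apt.log"

# Passing only the directory NAME excludes it; a full path would not match
grep -irn --exclude-dir=dist-upgrade dnsmasq "$logs/"
```

Only the match in syslog is printed; the copy under dist-upgrade is skipped.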

You can give Dnsmasq its own logfile with this command-line option, using whatever file you want:

# dnsmasq --no-daemon --log-queries --log-facility=/var/log/dnsmasq.log

Or enter it in your Dnsmasq configuration file as log-facility=/var/log/dnsmasq.log.

Configuration Files

Dnsmasq is configured in /etc/dnsmasq.conf. Your Linux distribution may also use /etc/default/dnsmasq, /etc/dnsmasq.d/, and /etc/dnsmasq.d-available/. (No, there cannot be a universal method, as that is against the will of the Linux Cat Herd Ruling Cabal.) You have a fair bit of flexibility to organize your Dnsmasq configuration in a way that pleases you.

/etc/dnsmasq.conf is the grandmother as well as the boss. Dnsmasq reads it first at startup. /etc/dnsmasq.conf can call other configuration files with the conf-file= option, for example conf-file=/etc/dnsmasqextrastuff.conf, and directories with the conf-dir= option, e.g. conf-dir=/etc/dnsmasq.d.

Whenever you make a change in a configuration file, you must restart Dnsmasq.

You may include or exclude configuration files by extension. The asterisk means include, and the absence of the asterisk means exclude:

conf-dir=/etc/dnsmasq.d/,*.conf,*.foo
conf-dir=/etc/dnsmasq.d,.old,.bak,.tmp

You may store your host configurations in multiple files with the --addn-hosts= option.
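In the configuration file, the option drops the leading dashes. For example, with hypothetical file paths:

```
addn-hosts=/etc/dnsmasq-hosts/lan-hosts
addn-hosts=/etc/dnsmasq-hosts/dmz-hosts
```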

Dnsmasq includes a syntax checker:

$ dnsmasq --test
dnsmasq: syntax check OK.

Useful Configurations

Always include these lines:

domain-needed
bogus-priv

These prevent packets with malformed domain names and packets with private IP addresses from leaving your network.

This limits your name services exclusively to Dnsmasq, and it will not use /etc/resolv.conf or any other system name service files:

no-resolv

Reference other name servers. The first example is for a local private domain. The second and third examples are OpenDNS public servers:

server=/fooxample.com/192.168.0.1
server=208.67.222.222
server=208.67.220.220

Or restrict just local domains while allowing external lookups for other domains. These are answered only from /etc/hosts or DHCP:

local=/mehxample.com/
local=/fooxample.com/

Restrict which network interfaces Dnsmasq listens to:

interface=eth0
interface=wlan1

Dnsmasq, by default, reads and uses /etc/hosts. This is a fabulously fast way to configure a lot of hosts, and the /etc/hosts file only has to exist on the same computer as Dnsmasq. You can make the process even faster by entering only the hostnames in /etc/hosts, and use Dnsmasq to add the domain. /etc/hosts looks like this:

127.0.0.1       localhost
192.168.0.1     host2
192.168.0.2     host3
192.168.0.3     host4

Then add these lines to dnsmasq.conf, using your own domain, of course:

expand-hosts
domain=mehxample.com

Dnsmasq will automatically expand the hostnames to fully qualified domain names, for example, host2 to host2.mehxample.com.

DNS Wildcards

In general, DNS wildcards are not a good practice because they invite abuse. But there are times when they are useful, such as inside the nice protected confines of your LAN. For example, Kubernetes clusters are considerably easier to manage with wildcard DNS, unless you enjoy making DNS entries for your hundreds or thousands of applications. Suppose your Kubernetes domain is mehxample.com; in Dnsmasq a wildcard that resolves all requests to mehxample.com looks like this:

address=/mehxample.com/192.168.0.5

The address to use in this case is the public IP address for your cluster. This answers requests for hosts and subdomains in mehxample.com, except for any that are already configured in DHCP or /etc/hosts.

Next week, we’ll go into more detail on managing DNS and DHCP, including different options for different subnets, and providing authoritative name services.
