DNS and DHCP with Dnsmasq | Linux.com

Last week, we learned a batch of tips and tricks for Dnsmasq. Today, we’re going more in-depth into configuring DNS and DHCP, including entering DHCP hostnames automatically into DNS, and assigning static IP addresses from DHCP.

You will edit three configuration files on your Dnsmasq server: /etc/dnsmasq.conf, /etc/resolv.conf, and /etc/hosts. Just like the olden days when we had nice clean configuration files for everything, instead of messes of scripts and nested configuration files.

Use Dnsmasq’s built-in syntax checker to check for configuration file errors, and run Dnsmasq from the command line rather than as a daemon so you can quickly test configuration changes and log the results. (See last week’s tutorial to learn more about this.)

Taming Network Manager and resolv.conf

Disable Network Manager on your Dnsmasq server, and give its network interfaces static configurations. You also need control of /etc/resolv.conf, which in these modern times is usually managed by other processes, such as Network Manager. In these cases, /etc/resolv.conf is a symbolic link to another file such as /run/resolvconf/resolv.conf or /var/run/NetworkManager/resolv.conf. To get around this, delete the symlink and then re-create /etc/resolv.conf as a plain file. Now your changes will not be overwritten.

There are many ways to use Dnsmasq and /etc/resolv.conf together. My preference is to enter only the Dnsmasq server itself, 127.0.0.1, in /etc/resolv.conf, and enter all upstream nameservers in /etc/dnsmasq.conf. You don’t need to touch any client configurations, because Dnsmasq will provide all network information to them via DHCP.

Local DHCP

This example configuration includes some typical global options, and then defines a single DHCP address range. Replace the example values with your own.

# global options
domain-needed
bogus-priv

# DHCP range
dhcp-range=192.168.10.25,192.168.10.64,12h
dhcp-lease-max=25

A line such as dhcp-range=192.168.10.25,192.168.10.64,12h (substitute your own addresses) defines a range of 40 available address leases, with a lease time of 12 hours. This range must not include your Dnsmasq server. You may define the lease time in seconds, minutes, or hours. The default is one hour, and the minimum possible is two minutes. If you want infinite lease times, don’t specify a lease time.

dhcp-lease-max=25 defines how many leases can be active at one time. You can have a large address pool available and then limit the number of active leases, to prevent denial-of-service problems from hosts going nuts and demanding a lot of DHCP leases.

DHCP Zones and Options

You can define DHCP zones for different subnets, like this example that has an eth and a wifi zone, and then give each zone different options. This example shows how to define the zones:
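For example, using placeholder subnets (the names eth and wifi are arbitrary tags):

dhcp-range=set:eth,192.168.10.25,192.168.10.64,12h
dhcp-range=set:wifi,192.168.20.25,192.168.20.64,24h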


The default route advertised to all clients is the address of your Dnsmasq server. You can configure DHCP to assign each zone a different default route:
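For example, with placeholder router addresses for each zone:

dhcp-option=tag:eth,3,192.168.10.1
dhcp-option=tag:wifi,3,192.168.20.1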


How do you know that 3 is the default route option? Run dnsmasq --help dhcp to see all the IPv4 options. dnsmasq --help dhcp6 lists the IPv6 options. (See man 5 dhcp-options for more information on options.) You may also use the option names instead of the numbers, like this example for your NTP server:
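For example, using a placeholder address for the NTP server:

dhcp-option=option:ntp-server,192.168.10.5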


Upstream Name Servers

Controlling which upstream name servers your network uses is one of the nicer benefits of running your own name server, instead of being stuck with whatever your ISP wants you to use. This example uses the Google public name servers. You don’t have to use Google; a quick Web search will find a lot of public DNS servers.
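For example:

server=8.8.8.8
server=8.8.4.4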


DNS Hosts

Adding DNS hosts to Dnsmasq is almost as easy as falling over. All you do is add them to /etc/hosts, like this, using your own addresses and hostnames:

127.0.0.1       localhost
192.168.10.2    webserver
192.168.10.3    fileserver
192.168.10.4    dnsmasq
192.168.10.5    timeserver

Dnsmasq reads /etc/hosts, and these hosts are available to your LAN either by hostname or by their fully-qualified domain names. The expand-hosts option in /etc/dnsmasq.conf expands the hostnames to the domain= value, for example, webserver.mydomain.net.

Set Static Addresses from DHCP

This is my favorite thing. You may assign static IP addresses to your LAN hosts by MAC address, or by hostname. The address must fall in a range you have already configured with dhcp-range=:
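For example, one assignment by MAC address and one by hostname (both values are placeholders):

dhcp-host=d0:50:99:82:e7:2b,192.168.10.90
dhcp-host=webserver,192.168.10.10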


On most Linux distributions, dhclient sends the hostname by default. You can confirm this in dhclient.conf, with the send host-name option. Do not have any duplicate entries in /etc/hosts.

Here we are again at the end already. Check out these articles for more Dnsmasq features and howtos:

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

6 Ways to Transform Legacy Data Storage Infrastructure

So you have a bunch of EMC RAID arrays and a couple of Dell iSCSI SAN boxes, topped with a NetApp filer or two. What do you say to the CEO who reads my articles and knows enough to ask about solid-state drives, all-flash appliances, hyperconverged infrastructure, and all the other new innovations in storage? “Er, er, we should start over” doesn’t go over too well! Thankfully, there are some clever — and generally inexpensive — ways to answer the question, keep your job, and even get a pat on the back.

SSD and flash are game-changers, so they need to be incorporated into your storage infrastructure. SSDs can beat enterprise-class hard drives from a cost perspective because they speed up your workload and reduce the number of storage appliances and servers needed. It’s even better if your servers support NVMe, since that interface is becoming ubiquitous and will replace both SAS and (a bit later) SATA, simply because it’s much faster and has lower overhead.

As for RAID arrays, we have to face up to the harsh reality that RAID controllers can keep up with only a few SSDs. The answer is either an all-flash array, keeping the RAID arrays for cool or cold secondary storage, or a move to a new architecture based on either hyperconverged appliances or compact storage boxes tailored for SSDs.

All-flash arrays become a fast storage tier, today usually Tier 1 storage in a system. They are designed to bolt onto an existing SAN and require minimal change in configuration files to function. Typically, all-flash boxes have smaller capacities than the RAID arrays, since they have enough I/O cycles to do near-real-time compression, coupled with the ability to down-tier compressed data to the old RAID arrays.

With an all-flash array, which isn’t outrageously expensive, you can boast to the CEO about 10-fold boosts in I/O speed, much lower latency, and, as a bonus, a combination of flash and secondary storage that usually has 5X effective capacity due to compression. Just tell the CEO how many RAID arrays and drives you didn’t buy. That’s worth a hero badge!

The idea of a flash front end works for desktops, too. Use a small flash drive for the OS (C: drive) and store colder data on those 3.5” HDDs. Your desktop will boot really quickly, especially with Windows 10, and program loads will be a snap.

Within servers, the challenge is to make the CPU, rather than the rest of the system, the bottleneck. Adding SSDs as primary drives makes sense, with HDDs in older arrays doing duty as bulk secondary storage, just as with all-flash solutions. This idea has been fleshed out into the hyperconverged infrastructure (HCI) concept, where the drives in each node are shared with other servers in lieu of dedicated storage boxes. While HCI is a major philosophical change, the effort to get there isn’t that huge.

For the savvy storage admin, RAID arrays and iSCSI storage can both be turned into powerful object storage systems. Both support a JBOD (just a bunch of drives) mode, and if the JBODs are attached across a set of server nodes running “free” Ceph or Scality Ring software, the result is a decent object-storage solution, especially if compression and global deduplication are supported.

Likely by now, you are using public clouds for backup. Consider “perpetual” storage, using a snapshot tool or continuous backup software, to reduce your RPO and RTO. Use multi-zone operations in the public cloud to converge DR onto the perpetual storage setup, as part of a cloud-based DR process. Going to the cloud for backup should save a lot of capital expense.

On the software front, the world of IT is migrating to a services-centric software-defined storage (SDS), which allows scaling and chaining of data services via a virtualized microservice concept. Even older SANs and server drives can be pulled into the methodology, with software making all legacy boxes in a data center operate as a single pool of storage. This simplifies storage management and makes data center storage more flexible.

Encryption ought to be added to any networked storage or backup. If this prevents even one hacker from reading your files in the next five years, you’ll look good! If you are running into a space crunch and the budget is tight, separate out your cold data, apply one of the “Zip” programs and choose the encrypted file option. This saves a lot of space and gives you encryption!

Let’s take a closer look at what you can do to transform your existing storage infrastructure and extend its life.



Advanced Dnsmasq Tips and Tricks | Linux.com

Many people know and love Dnsmasq and rely on it for their local name services. Today we look at advanced configuration file management, how to test your configurations, some basic security, DNS wildcards, speedy DNS configuration, and some other tips and tricks. Next week, we’ll continue with a detailed look at how to configure DNS and DHCP.

Testing Configurations

When you’re testing new configurations, you should run Dnsmasq from the command line, rather than as a daemon. This example starts it without launching the daemon, prints command output, and logs all activity:

# dnsmasq --no-daemon --log-queries
dnsmasq: started, version 2.75 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt 
 DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack 
 ipset auth DNSSEC loop-detect inotify
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver
dnsmasq: read /etc/hosts - 9 addresses

You can see tons of useful information in this small example, including version, compiled options, system name service files, and its listening address. Ctrl+c stops it. By default, Dnsmasq does not have its own log file, so entries are dumped into multiple locations in /var/log. You can use good old grep to find Dnsmasq log entries. This example searches /var/log recursively, prints the line numbers after the filenames, and excludes /var/log/dist-upgrade:

# grep -ir --exclude-dir=dist-upgrade dnsmasq /var/log/

Note the fun grep gotcha with --exclude-dir=: Don’t specify the full path, but just the directory name.

You can give Dnsmasq its own logfile with this command-line option, using whatever file you want:

# dnsmasq --no-daemon --log-queries --log-facility=/var/log/dnsmasq.log

Or enter it in your Dnsmasq configuration file as log-facility=/var/log/dnsmasq.log.

Configuration Files

Dnsmasq is configured in /etc/dnsmasq.conf. Your Linux distribution may also use /etc/default/dnsmasq, /etc/dnsmasq.d/, and /etc/dnsmasq.d-available/. (No, there cannot be a universal method, as that is against the will of the Linux Cat Herd Ruling Cabal.) You have a fair bit of flexibility to organize your Dnsmasq configuration in a way that pleases you.

/etc/dnsmasq.conf is the grandmother as well as the boss. Dnsmasq reads it first at startup. /etc/dnsmasq.conf can call other configuration files with the conf-file= option, for example conf-file=/etc/dnsmasqextrastuff.conf, and directories with the conf-dir= option, e.g. conf-dir=/etc/dnsmasq.d.

Whenever you make a change in a configuration file, you must restart Dnsmasq.

You may include or exclude configuration files by extension. The asterisk means include, and the absence of the asterisk means exclude:

conf-dir=/etc/dnsmasq.d/,*.conf,*.foo
conf-dir=/etc/dnsmasq.d,.old,.bak,.tmp

You may store your host configurations in multiple files with the --addn-hosts= option.

Dnsmasq includes a syntax checker:

$ dnsmasq --test
dnsmasq: syntax check OK.

Useful Configurations

Always include these lines:
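The options in question are:

domain-needed
bogus-priv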


These prevent packets with malformed domain names and packets with private IP addresses from leaving your network.

This limits your name services exclusively to Dnsmasq, and it will not use /etc/resolv.conf or any other system name service files:
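The option that does this is:

no-resolv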


Reference other name servers. The first example is for a local private domain. The second and third examples are OpenDNS public servers:
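For example (the first line uses a placeholder domain and server address):

server=/mehxample.com/192.168.10.5
server=208.67.222.222
server=208.67.220.220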


Or restrict just local domains while allowing external lookups for other domains. These are answered only from /etc/hosts or DHCP:
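For example, with a placeholder domain:

local=/mehxample.com/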


Restrict which network interfaces Dnsmasq listens to:
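For example, using placeholder interface names:

interface=eth0
interface=wlan0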


Dnsmasq, by default, reads and uses /etc/hosts. This is a fabulously fast way to configure a lot of hosts, and the /etc/hosts file only has to exist on the same computer as Dnsmasq. You can make the process even faster by entering only the hostnames in /etc/hosts, and using Dnsmasq to add the domain. /etc/hosts looks like this, using your own addresses:

127.0.0.1    localhost
192.168.10.2 host2
192.168.10.3 host3
192.168.10.4 host4

Then add these lines to dnsmasq.conf, using your own domain, of course:
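For example:

expand-hosts
domain=mehxample.com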


Dnsmasq will automatically expand the hostnames to fully qualified domain names, for example, host2 to host2.mehxample.com.

DNS Wildcards

In general, DNS wildcards are not a good practice because they invite abuse. But there are times when they are useful, such as inside the nice protected confines of your LAN. For example, Kubernetes clusters are considerably easier to manage with wildcard DNS, unless you enjoy making DNS entries for your hundreds or thousands of applications. Suppose your Kubernetes domain is mehxample.com; in Dnsmasq a wildcard that resolves all requests to mehxample.com looks like this:
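With a placeholder address:

address=/mehxample.com/192.168.10.5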


The address to use in this case is the public IP address for your cluster. This answers requests for hosts and subdomains in mehxample.com, except for any that are already configured in DHCP or /etc/hosts.

Next week, we’ll go into more detail on managing DNS and DHCP, including different options for different subnets, and providing authoritative name services.

Additional Resources

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

LibreOffice 6.0 Released » Linux Magazine

The Document Foundation announced the release of LibreOffice 6.0, the biggest release of the fully open source office suite since LibreOffice 5.x, released in 2016.

One of the biggest improvements in LibreOffice 6.0 is the introduction of the Notebook Bar, a ribbon-style interface that makes it easier to perform many tasks without having to dig through menus, enhancing productivity. You can enable the feature from the ‘Advanced’ settings of LibreOffice. There are many different modes that users can choose for their own workflows, including a sidebar and a minimalistic single toolbar.

LibreOffice 6.0 claims to offer better file compatibility with Microsoft Office documents. The release also offers the ability to export documents as EPUB, an ebook format. “OOXML interoperability has been improved in several areas: import of SmartArt and import/export of ActiveX controls, support of embedded text documents and spreadsheets, export of embedded videos to PPTX, export of cross-references to DOCX, export of MailMerge fields to DOCX, and improvements to the PPTX filter to prevent the creation of broken files,” said Italo Vignoli, one of the co-founders of the Document Foundation.

There is still no release of LibreOffice for mobile platforms that competes with Microsoft Office and Google Docs. LibreOffice Viewer has largely been an app to view documents on mobile devices. But the Document Foundation promises that the upcoming release of LibreOffice Viewer for Android will be able to create new documents. “It will offer a tab-based toolbar with formatting options, and will let users add pictures either from the camera or from a file stored locally or in the cloud,” said Vignoli.

LibreOffice, however, is available for the cloud through Collabora, a company that provides L3 support for LibreOffice. LibreOffice 6.x will bring more capabilities to LibreOffice Online. “New features introduced with LibreOffice 6.0 aim to align the functionality of the desktop and cloud versions, especially in areas where users expect similar behavior,” said Vignoli.

Collabora offers Collabora Online Developer Edition (CODE), a free of cost solution that’s based on the latest version of LibreOffice. Users can run it on their own servers with other open source solutions like Nextcloud for storage, sharing and syncing capabilities. Collabora also offers a paid version of LibreOffice Online as part of its Collabora Cloudsuite.

LibreOffice 6.0 is available for Linux, macOS and Windows.


Full-Stack Engineer: 3 Key Skills

Until fairly recently, most infrastructure professionals typically learned one area of the data center extremely well and spent their entire careers refining that specialty. Someone might be a storage professional or a networking professional, but rarely did he or she need to know both. And some were hyper-specialized, perhaps focusing in on Cisco routers or Linux servers.

While employers are still posting jobs for these types of positions, many are starting to look for IT staff who have broad rather than deep knowledge. As trends like cloud computing, DevOps, and containerization have become more prevalent, organizations need IT workers who understand it all: servers, storage, networking, virtualization, applications, security, and even the basics of how the business functions.

Scott Lowe, engineering architect at VMware, likes to refer to this type of well-rounded IT worker as a “full-stack engineer.” He knows the “full-stack” moniker is often used for developers who work on both front-end and back-end programming, but Lowe said he co-opted the term to describe infrastructure/applications engineers who are being forced to move out of the one area where they’ve worked.

Lowe hosts a popular podcast called The Full-Stack Journey, speaks regularly at Interop ITX, and also writes a blog that covers cloud computing, virtualization, networking and open source tools. Network Computing recently spoke with Lowe about why demand is growing for full-stack engineers.

He traced the origins of the full-stack movement to a number of converging trends.

First, he noted that IT groups are under increasing pressure to define the business value for every project or purchase they undertake. For example, if an organization is going to replace a server, IT often needs to justify that update to the business. That means IT professionals “need to be more aware of what technology is being used for. That’s what’s pulling us up the stack,” explained Lowe. Full-stack engineers need to understand which applications are running on the servers and why they are important to the business.

Second, he said that the trend toward cloud computing had made organizations realize that they have an alternative to in-house infrastructure, which has changed their perspective on IT investments. Also, because many organizations are moving workloads to the public cloud, “IT professionals have to shift their skillsets because the skillset they need to be effective and to thrive when those environments are in play are different than the skillsets they needed in order to thrive and be effective in a private data center,” Lowe said.

In addition, many organizations have “an increasing desire and need to use automation as a way of providing more consistent standardized configurations and to make IT organizations more effective,” said Lowe. That, too, is affecting the skills that IT professionals need to have in order to be successful.

So what skills do infrastructure pros need to have if they want to become full-stack engineers? Lowe said three types of skills are key:

1. Automation

Lowe said that there is no one characteristic that defines a full-stack engineer, “but the thing that comes the closest is fully embracing automation and orchestration in everything that they do.” That encompasses a wide range of tools and technologies, ranging from configuration management to containers to infrastructure as code.

2. Public cloud

With the public cloud becoming more prevalent among enterprises, Lowe also advised IT pros to develop their cloud computing skills. He specifically called out Amazon Web Services (AWS) and Microsoft Azure as two vendors that are important.

3. Continuous learning

The last skill on this list isn’t so much a set of knowledge to acquire as a necessary mindset. “Accept or embrace the idea that learning is going to be an integral part of your career moving forward,” advised Lowe. He said that because this is a dynamic and ever-changing industry, “our skillset also has to be dynamic and ever-changing.”

Scott Lowe will offer more advice about moving up the stack at Interop ITX 2018, where he will present “The Full Stack Journey: A Career Perspective.”

Get live advice on networking, storage, and data center technologies to build the foundation to support software-driven IT and the cloud. Attend the Infrastructure Track at Interop ITX, April 30-May 4, 2018. Register now!

