Monthly Archives: May 2017

Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview | Linux.com


The software industry is rapidly seeing the value of using containers as a way to ease development, deployment, and environment orchestration for app developers. That’s because containers effectively manage environmental differences, allow for improved scalability, and provide predictability that supports Continuous Delivery (CD) of features. In addition to the technical advantages, containers have been shown to dramatically reduce the cost model of complex environments.

Large-scale and highly-elastic applications that are built in containers definitely have their benefits, but managing the environment can be daunting. This is where an orchestration tool like Kubernetes really shines.

Kubernetes is a platform-agnostic container orchestration tool created by Google and heavily supported by the open source community as a project of the Cloud Native Computing Foundation. It allows you to spin up a number of container instances and manage them for scaling and fault tolerance. It also handles a wide range of management activities that would otherwise require separate solutions or custom code, including request routing, container discovery, health checking, and rolling updates.

Kenzan is a services company that specializes in building applications at scale. We’ve seen cloud technology evolve over the last decade, designing microservice-based applications around the Netflix OSS stack, and more recently implementing projects using the flexibility of container technology. While each implementation is unique, we’ve found the combination of microservices, Kubernetes, and Continuous Delivery pipelines to be very powerful.

Crossword Puzzles, Kubernetes, and CI/CD

This article is the first in a series of four blog posts. Our goal is to show how easy it is to set up a fully-containerized application stack in Kubernetes with a simple CI/CD pipeline to manage the deployments.

We’ll describe the setup and deployment of an application we created especially for this series. It’s called the Kr8sswordz Puzzle, and working with it will help you link together some key Kubernetes and CI/CD concepts. The application will start simple enough, then as we progress we will introduce components that demonstrate a full application stack, as well as a CI/CD pipeline to help manage that stack, all running as containers on Kubernetes. Check out the architecture diagram below to see what you’ll be building.

The completed application will show the power and ease with which Kubernetes manages both apps and infrastructure, creating a sandbox where you can build, deploy, and spin up many instances under load.

Get Kubernetes up and Running

The first step in building our Kr8sswordz Puzzle application is to set up Kubernetes and get comfortable with running containers in a pod. We’ll install several tools explained along the way: Docker, Minikube, and Kubectl.

To complete these exercises, you’ll need a computer running an up-to-date version of Linux or macOS. Your computer should have 16 GB of RAM.

Exercise 1: Install Docker

Docker is one of the most widely-used container technologies and works directly with Kubernetes. In this exercise we’ll install Docker and then try out a few commands.

Install Docker on Linux

To quickly install Docker on Ubuntu 16.04 or higher, open a terminal and enter the following commands (see the Linux installation instructions for other distributions):

sudo apt-get update

curl -fsSL https://get.docker.com/ | sh

After installation, create a Docker group so you can run Docker commands as a non-root user (you’ll need to log out and then log back in after running this command):

sudo usermod -aG docker $USER

When you’re all done, make sure Docker is running:

sudo service docker start

Install Docker on macOS

Download Docker for Mac (stable) and follow the installation instructions. To launch Docker, double-click the Docker icon in the Applications folder. Once it’s running, you’ll see a whale icon in the menu bar.


Try Some Docker Commands

You can test out Docker by opening a terminal window and entering the following commands:

# Display the Docker version

docker version

# Pull and run the Hello-World image from Docker Hub

docker run hello-world

# Pull and run the Busybox image from Docker Hub

docker run busybox echo "hello, you've run busybox"

# View a list of containers that have run

docker ps -a

Note that images are specs that define all the files and resources needed for a container to run. Many open source images are publicly available on Docker Hub. For more on Docker, see Docker Getting Started. For a complete listing of commands, see The Docker Commands.
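
If you’d like to tidy up after experimenting, a few optional commands (not part of the tutorial) list and remove what Docker has stored locally:

# List the images you've pulled so far

docker images

# Remove a stopped container, using the ID or name shown by docker ps -a

docker rm <container-id>

# Remove an image you no longer need (remove its containers first)

docker rmi busybox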

Exercise 2: Install Minikube and Kubectl

Minikube is a single-node Kubernetes cluster that makes it easy to run Kubernetes locally on your computer. We will use Minikube as the primary Kubernetes cluster to run our application on. Kubectl is a command line interface (CLI) for Kubernetes and the way we will interface with our cluster.

In this exercise we’ll quickly install both Minikube and kubectl. (For all the details, check out Running Kubernetes Locally via Minikube.)

Install VirtualBox

Download and install the latest version of VirtualBox for your operating system. VirtualBox lets Minikube run a Kubernetes node on a virtual machine (VM).

Install Minikube

Head over to the Minikube releases page and install the latest version of Minikube using the recommended method for your operating system. This will set up our Kubernetes node.

On Linux, install Minikube using the following command (shown here for v0.18.0):

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 \
  && chmod +x minikube && sudo mv minikube /usr/local/bin/

On macOS, install Minikube using the following command (shown here for v0.18.0):

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 \
  && chmod +x minikube && sudo mv minikube /usr/local/bin/

Install Kubectl

The last piece of the puzzle is to install kubectl so we can talk to our Kubernetes node.

On Linux, install kubectl using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
  && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

On macOS, install kubectl using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl \
  && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
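
Before moving on, it’s worth a quick sanity check that both tools are installed and on your path (these commands aren’t part of the tutorial script):

# Confirm Minikube is installed

minikube version

# Confirm the kubectl client is installed (the cluster isn't running yet, so check only the client)

kubectl version --client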

Exercise 3: Install Prerequisites

In the next exercises, we’re going to use Minikube to start a local Kubernetes cluster and deploy some pods. You can manually enter each of the commands in the exercises, but to make things a little easier we’ve created a tutorial script that will walk you through the steps (and save you some typing). Let’s get the interactive tutorial going.

Install NodeJS

To use the tutorial, you need to install NodeJS and npm. (You can skip this if you want to enter all the commands manually—but really, who loves typing that much?)

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, open a terminal and enter the following commands:

curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -

sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

Fork the Git Repo

Now it’s time to make your own copy of the Kubernetes CI/CD repository on Github.

1. Install Git on your computer if you don’t have it already.

On Linux, use the following command:

sudo apt-get install git

On macOS, download and run the macOS installer for Git. To install, first double-click the .dmg file to open the disk image. Right-click the .pkg file and click Open, and then click Open again to start the installation.

2. Fork Kenzan’s Kubernetes CI/CD repository on Github. This has all the containers and other goodies for our Kr8sswordz Puzzle application, and you’ll want to fork it as you’ll later be modifying some of the code.

   a. Sign up if you don’t yet have an account on Github.  

   b. On the Kubernetes CI/CD repository on Github, click the Fork button in the upper right and follow the instructions.


   c. Make sure you’re in your home directory, and then clone your newly forked repository with the following terminal command:

git clone https://github.com/YOURUSERNAME/kubernetes-ci-cd

Clear out Minikube

Let’s get rid of the leftovers from any previous experiments you might have conducted with Minikube. Enter the following terminal command:

minikube delete; sudo rm -rf ~/.minikube; sudo rm -rf ~/.kube

Next, change directories to the cloned repository and install the tutorial script:

cd ~/kubernetes-ci-cd

npm install

Once that operation completes, go ahead and start the script:

npm run part1

Exercise 4: Run a Test Pod

In this exercise we’ll test out Minikube by running a pod based on a public image on DockerHub.

You don’t have to actually type the commands below—just press Enter at each step and the script will enter the command for you!

1. Start up the Kubernetes cluster with Minikube, giving it some extra resources.

minikube start --memory 8000 --cpus 2 --kubernetes-version v1.6.0

2. Enable the Minikube add-ons Heapster and Ingress.

minikube addons enable heapster; minikube addons enable ingress

In a second terminal, you can inspect the pods in the cluster. You should see the add-ons heapster, influxdb-grafana, and nginx-ingress-controller. Enter the command:

kubectl get pods --all-namespaces

3. Wait 20 seconds, and then view the Minikube Dashboard, a web UI for managing deployments. You may have to refresh the web browser if you don’t see the dashboard right away.

sleep 20; minikube service kubernetes-dashboard --namespace kube-system

4. Deploy the public nginx image from DockerHub into a container in a pod. Nginx is an open source web server. The nginx image is automatically downloaded from Docker Hub if it’s not available locally.

kubectl run nginx --image nginx --port 80

After running the command, you should be able to see the new deployment in the Minikube Dashboard with the Heapster graphs. (If you don’t see the graphs, just wait a few minutes.)
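
You can also check the deployment from the command line in a second terminal (optional; the run=nginx label shown here is what kubectl run applies by default in this version of Kubernetes and may differ in newer releases):

# List deployments and the pods created for nginx

kubectl get deployments

kubectl get pods -l run=nginx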


5. Create a service for deployment. This will expose the nginx pod so you can access it with a web browser.

kubectl expose deployment nginx --type NodePort --port 80

6. Launch a web browser to test the service. The nginx welcome page displays, which means the service is up and running. Nice work!

minikube service nginx
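
If you’d rather stay in the terminal, minikube can print the service URL instead of opening a browser, and kubectl will show which NodePort was assigned (both optional):

# Print the URL for the nginx service

minikube service nginx --url

# Show the service and its assigned NodePort

kubectl get service nginx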


Exercise 5: Create a Local Image Registry

In the previous exercise, we ran a public image from Docker Hub. While Docker Hub is great for public images, setting up a private image repository on the site involves some security key overhead that we don’t want to deal with. Instead, we’ll set up our own local image registry. We’ll then build, push, and run a sample Hello-Kenzan app from the local registry. (Later, we’ll use the registry to store the container images for our Kr8sswordz Puzzle app.)

The interactive tutorial should still be running—just press Enter to run each step below.

7. Set up the cluster registry by applying a .yml manifest file.

kubectl apply -f manifests/registry.yml

8. Wait for the registry to finish deploying. Note that this may take several minutes.

kubectl rollout status deployments/registry

9. View the registry user interface in a web browser. Right now it’s empty, but you’re about to change that.

minikube service registry-ui

10. Let’s make a change to an HTML file in the cloned project. Running the command below will open the file applications/hello-kenzan/index.html in the nano text editor.

nano applications/hello-kenzan/index.html

Change some text inside one of the <p> tags. For example, change “Hello from Kenzan!” to “Hello from Me!”. When you’re done, press Ctrl+X to close the file. You’ll be prompted to save your changes: enter Y, and then press Enter to write the file.

11. Now let’s build an image, giving it a special name that points to our local cluster registry.

docker build -t 127.0.0.1:30400/hello-kenzan:latest \
  -f applications/hello-kenzan/Dockerfile applications/hello-kenzan

12. We’ve built the image, but before we can push it to the registry, we need to set up a temporary proxy. By default, the Docker client will only push over plain HTTP to a registry on localhost; any other address requires HTTPS. To work around this, we’ll set up a container that listens on 127.0.0.1:30400 and forwards traffic to our cluster.

docker stop socat-registry; docker rm socat-registry; \
  docker run -d -e "REGIP=`minikube ip`" --name socat-registry \
  -p 30400:5000 chadmoon/socat:latest bash -c \
  "socat TCP4-LISTEN:5000,fork,reuseaddr TCP4:`minikube ip`:30400"
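
Before pushing, you can optionally confirm in a second terminal that the proxy container came up and is publishing port 30400:

docker ps --filter "name=socat-registry"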

13. With our proxy container up and running, we can now push our image to the local repository.

docker push 127.0.0.1:30400/hello-kenzan:latest
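
If you want to double-check from the command line, you can query the registry’s catalog endpoint; this assumes the manifest deploys a standard Docker Registry v2 image, in which case you should see hello-kenzan listed:

# List the repositories the cluster registry knows about

curl http://127.0.0.1:30400/v2/_catalog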

Refresh the browser window with the registry UI, and you’ll see that the image has appeared.


14. The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry;

15. With the image in our cluster registry, the last thing to do is apply the manifest to create and deploy the hello-kenzan service based on the image.

kubectl apply -f applications/hello-kenzan/k8s/deployment.yaml
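
If you want to wait for the rollout from the command line (the same way we did for the registry), you can watch its status; this assumes the deployment defined in the manifest is named hello-kenzan, matching the service:

kubectl rollout status deployments/hello-kenzan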

16. Launch a web browser and view the service.

minikube service hello-kenzan

Notice the change you made to the index.html file. That change was baked into the image when you built it and then was pushed to the registry. Pretty cool!


If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

Up Next

So far, we’ve installed Docker, Minikube, and Kubectl. To build out our Kr8sswordz Puzzle stack, we’ve run our image repository as a pod in Minikube, and we were able to test building and pushing to it with our Hello-Kenzan app.

Stay tuned for Part 2 of the series, where we will continue to build out our infrastructure by adding in a CI/CD component: Jenkins running in its own pod. Using a Jenkins 2.0 Pipeline script, we will build, push, and deploy our Hello-Kenzan app, giving us the infrastructure for continuous deployment that will later be used with our Kr8sswordz Puzzle app. Though it might seem mind-bending to deploy containers from a Jenkins container within Kubernetes, Part 2 will show how easy it is to manage deployments this way.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Want to learn more about Kubernetes? Get unlimited access to the new Kubernetes Fundamentals training course for one year for $199. Sign up now!

A Primer for Enterprise IT Pros


The buzz around containers, particularly the Docker container platform, is hard to avoid. Containerization of applications promises speed and agility, capabilities that are essential in today’s fast-paced IT environment. But outside the world of DevOps, containers can still be an unfamiliar technology.

At Interop ITX, Stephen Foskett, organizer of Tech Field Day and proprietor of Gestalt IT, provided some clarity about application containerization. In a presentation entitled, “The Case For Containers,” he explained the basics about the technology and what enterprise IT shops can expect from it.

First off, container technology isn’t anything new, he said. “The reason we’re hearing about it is Docker. They’ve done a nice job of productizing it.”

He explained that containers are similar to virtual machines “except for this whole idea of user space.” A container, which uses operating system-level virtualization, has strict boundaries around a limited set of libraries and is custom-designed to run a specific application. That focus on one application is a key differentiator from virtual machines and makes containers important for enterprise IT, he said.
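
A quick way to see that isolation for yourself: the process list inside a container shows only the container’s own processes, while the host sees everything (a small aside, not part of the talk):

# Inside a throwaway busybox container, ps sees only the container's processes

docker run --rm busybox ps

# On the host, ps sees the whole system

ps aux | head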

Docker, which launched as an open source project in 2013, “got a lot of things right,” Foskett said. For example, Docker Hub makes it easy to locate images, which become containers when users instantiate them. Docker also uses layered storage, which conserves space. At the same time, though, that easy storage can lead to performance issues, he added.

Cattle or pets?

Since cloud technologies began altering the IT landscape, cattle vs. pets has become a common meme. “Many in DevOps will tell you they’re [containers] a cattle approach, but they’re not really cattle; they’re pets,” Foskett said.

While containers can be spun up and torn down quickly, the problem is that by default, Docker doesn’t actually destroy the container, which can lead to container sprawl. “When you exit a container, the container stays there with the data as you left it,” unless manually deleted with the rm command, Foskett said.

“If you run a container and stop it, and the image stays around, someone can easily restart the container and access what you were doing,” he said. “That’s probably not a problem on your test laptop, but you can’t do that if you’re engineering a system.”
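
In practice that means either running containers with the --rm flag so Docker removes them on exit, or periodically cleaning up exited containers yourself (a small illustration, not from the talk):

# Run a throwaway container that is removed automatically when it exits

docker run --rm busybox echo "nothing left behind"

# Or list exited containers and remove them by hand

docker ps -a -f status=exited

docker rm <container-id>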

Another sticky issue for enterprises: It can be difficult to know the origin of images in the Docker Hub. “You can’t guarantee it’s something good,” Foskett said. “Many enterprises aren’t too thrilled with this concept.”

He advised practicing good hygiene when using containers by keeping images simple and using external volume storage to reduce the risk of data exposure. “Then the container itself stays pristine; you don’t have data building up in it.”
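
In Docker terms, that hygiene usually means mounting a volume so application data lives outside the container’s writable layer (a minimal illustration; the volume and image names here are just examples):

# Create a named volume and mount it into a container at /data

docker volume create app-data

docker run --rm -v app-data:/data busybox sh -c "echo hello > /data/greeting"

# The data survives even though the container was removed

docker run --rm -v app-data:/data busybox cat /data/greeting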

Container benefits

One of the main reasons he’s excited about containers, as a system administrator, is that they allow users to specify the entire application environment, Foskett said. A consistent application environment means not having to worry about OS levels, patches, or incompatible applications and utilities.

“This is the critical reason containers are going to be relevant in the enterprise data center,” he said.

Another container benefit is security, Foskett said. Security breaches often stem from escalation of privileges to utilities and application components, which affects an entire system. Containerized applications don’t contain unused utilities, so there’s less exposure to infection.

Foskett said containers also enable scalable application platforms using microservices. Instead of monolithic systems that are hard to scale, enterprises can have containerized applications for specific functions.

Getting started

Foskett advised attendees to start experimenting with Docker and Windows containers. “One of the coolest things about Docker is that it’s really easy to try,” he said.

A Docker Enterprise Edition is in the works, which will include certified containers and plugins. When you download a container from Docker Hub, “you know it’s really going to be what it says it is,” he said.

Docker Inc., the company that manages the Docker open source project and the ecosystem around it, has traditionally focused on developers, but has shifted to an enterprise mindset, Foskett said. “They’re addressing concerns we have.”

While real microservices won’t happen for another five to ten years, “the future really is containerized,” Foskett told attendees. “This isn’t just a fad or a trend, but an important movement in IT that has important benefits to people like you and me.”



Feren OS Could Be the Best-Looking Desktop on the Market | Linux.com


Imagine taking Linux Mint, placing the Cinnamon desktop on it, and then theming it not only to serve as a perfect drop-in replacement for Windows 7 but to be one of the most beautiful Linux desktops you’ve seen in a long while. That’s what Feren OS has managed, and it has done so with aplomb.

Feren OS first arrived in 2015 and recently unleashed its 2017 iteration of the platform…with stunning results. This is truly one of those instances where, upon installation, you’ll find yourself doing a double (or triple) take, asking, “Is this really Linux?” Not that the state of the Linux desktop is behind the competition; in fact, I consider many Linux desktops to be light years ahead of the alternatives. But Feren OS has achieved something special: a Linux distribution that anyone could use, for nearly any purpose, with zero learning curve.

Let’s take a look at this new(ish) distro to see exactly what makes it special. We’ll also dig deep to see what kind of caveats lie under the polish (if any).

That look

You cannot deny Feren OS has done their homework to create a desktop anyone would be instantly familiar with (Figure 1).

There’s so little to say about this desktop. Why? Because you already know it. At least, anyone that’s used a desktop interface in the past 10 years will know it. You have a “start” button, a bottom panel, quick launchers, a system tray, desktop icons, a clock…all the usual pieces are in perfect place to make things simple, efficient, and elegant.

Click on the “start” button to reveal a very standard menu (Figure 2) that includes all the regular (necessary) categories for Accessories, Games, Graphics, Internet, Office, and more.

The Feren OS desktop does something very interesting by breaking the usual Settings options out into their own submenu. In fact, what the designers/developers have done with the various configuration options is, I believe, quite smart. Click on the “start” button and then click Preferences. In this tab, you can scroll through all of the possible configuration categories available for the desktop (Figure 3).

All told, there are 49 different configuration sub-categories to be found, ranging from Applets, to Bluetooth, to Desktop Sharing, Fingerprint GUI, Firewall Configuration, Graphics Tablet, Hot Corner, and so much more.

On the desktop, you’ll find an icon for the Feren OS Themer. Click on that and you can switch the desktop theme from Feren OS, Windows, Apple, Linux, and Google themes (Figure 4). This might well be the single best desktop themer on the market.

The software

Diving into the main menu, I did find an oddity on the Feren OS desktop. Click on the Office category and you’ll find both LibreOffice and WPS Office installed. Don’t get me wrong; I’m a fan of both LibreOffice and WPS Office, but I question the inclusion of both suites. Considering that the installation of Feren OS is already quite large, why include both? Pick one or the other. After playing around with both on Feren OS, I find WPS Office the likelier candidate, as it offers really solid MS Office compatibility and actually looks better with the overall theme of the desktop. Given how much time and effort has been put into the look and feel of the Feren OS desktop, one would almost have to take that into account.

There is also an odd choice of default web browser. Although I do like the Vivaldi browser, it is curious that it was chosen as the default. It works, and works well, but it is not nearly as familiar as, say, Chrome or Firefox. Do not despair, though; the developers have included a Web Browser Manager tool to make the installation of other browsers a snap. Open the “start” menu, click Internet, and then click on Web Browser Manager to see that you can install either Firefox or Chrome with the click of a button (Figure 5).

As you can see, the Web Browser Manager was borrowed from Zorin OS. This was a wise addition to the software stack (given the developer’s choice of making Vivaldi the default).

Click on over to Start > Games to find both PlayOnLinux and Steam installed. With these available, gamers aren’t left out of the mix. Feren OS can work and play with the best of them.

For the installation of new software, Feren OS includes the very popular (and incredibly user-friendly) GNOME Software. Open up the tool to find thousands of software titles ready to install (Figure 6).

The caveat

Of course, no desktop is perfect. The biggest (and really only) issue with Feren OS is its size. The download alone is 3.6GB. When you go to install the operating system, it will inform you of its need for at least 18GB of space. That’s quite a lot of space for an operating system, and that size also translates into some significant minimum requirements. Feren OS doesn’t actually list minimum requirements for the platform; instead, it offers up a list of hardware known to work well with the platform. From experience, I can say that 3GB of RAM is on the low side. If you want to get the most out of the OS, your best bet is a machine with 8GB of RAM. That’s a healthy amount of RAM, but this is an operating system that has a lot to offer. The conclusion? Old hardware need not apply. This might turn a lot of users off, but Feren OS is a modern take on the Linux desktop, one that offers a lot of bells and whistles.

The conclusion

If you’re looking for a slick desktop that requires nearly zero in the way of learning curve, Feren OS might well fit the bill. With the right hardware, this new(ish) Linux desktop platform performs and impresses. Give it a spin and see if it isn’t exactly what you’ve been looking for.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

lnav – Watch and Analyze Apache Logs from a Linux Terminal



Creating Virtual Machines in KVM: Part 2 — Networking | Linux.com


When last we met, we learned the basics of creating new virtual machines in Creating Virtual Machines in KVM: Part 1. Now we’re going to learn how to control Internet access for our virtual machines, network VMs with each other, and create new virtual networks.

Internet Access

Some Linux distributions, such as CentOS 7 and Red Hat Enterprise Linux 7, do not start networking by default, so you have to enable it. If you don’t have networking in a virtual machine, first check whether it is enabled.
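
On a CentOS 7 or RHEL 7 guest, enabling it usually means bringing the interface up inside the VM, for example with NetworkManager’s CLI; the connection name eth0 below is just an example, so check yours first:

# Inside the guest: list interfaces and their state

nmcli device status

# Bring the connection up now, and have it start at boot

sudo nmcli connection up eth0

sudo nmcli connection modify eth0 connection.autoconnect yes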

When you create a new virtual machine, the default network is NAT (network address translation), assuming your particular Linux distribution has not mucked with this. NAT forwards network traffic through your host system; if the host is connected to the Internet, then your virtual machines have Internet access.

The virtual machine manager also creates an Ethernet bridge between the host and virtual network, so you can ping the IP addresses of your VMs from the host, and your VMs can ping the IP address of the host.
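
You can see this bridge from the host, and you can ask libvirt which addresses it has handed out to your VMs (net-dhcp-leases requires a reasonably recent libvirt):

# On the host: show the bridge created for the default network

ip addr show virbr0

# List DHCP leases on the default network to find your VMs' IP addresses

virsh net-dhcp-leases default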

Confirm your virtual network type by opening the information tab on any running VM; this is the little white “i” in a blue circle on the top left of your virtual machine console (Figure 1).

Your virtual machines have their own virtual network, which is on a different subnet than the host. Your VMs should be able to ping each other by IP address and by hostname, because your virtual network has its own name server. When your ping tests succeed, then you can set up services such as web, email, SSH, and so on, just like on any Linux machine.

Virtual Networks

Go to Edit > Connection Details > Virtual Networks in your virtual machine manager to view the details of your virtual network (Figure 2).

This shows the network name, Ethernet bridge name, the DHCP address range, and status. As your collection of VMs grows you may wish to give them separate subnets. How to do this? With ease. Click the little green “Add network” button at the bottom left of the Virtual Networks tab.

In step 1, enter your new network name, which is anything you want.

In step 2, enter your new network address. The field background changes to green when you enter a non-colliding address (Figure 3). Enable DHCP with a click. How easy is that?

In step 3, enable IPv6. Or not.

In step 4, you have the option to either create an isolated network with no external access or one with external access via NAT or routing. NAT is the easiest (Figure 4).

Click Finish. This returns you to the Connection Details screen, where you can admire your networks list.

Using Your New Virtual Network

Open the information tab on a running VM and delete your existing network configuration. Look for the “NIC :[mac address]” entry in the left pane, where all of your hardware is listed, and right-click/Remove Hardware to remove it.

Next, click the Add Hardware button at the bottom. Select Network and choose your new network from the Network Source dropdown.

Distributions that use Network Manager should pick up the new assignment automatically. If you’re not using Network Manager, then renew your DHCP lease or reboot.
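
If your guest isn’t picking up an address on the new network, you can renew the lease by hand inside the guest; the interface name eth0 below is just an example:

# Inside the guest: release the current lease, then request a new one

sudo dhclient -r eth0

sudo dhclient eth0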

Useful Commands

The virtual machine manager is a nice tool, but it is complex. It is usually faster to run command-line queries to get answers. brctl (bridge control) lists your Ethernet bridges and their status:

$ brctl show
bridge name bridge id           STP enabled  interfaces
virbr0      8000.000000000000   yes
virbr1      8000.000000000000   yes
virbr2      8000.fe540075e883   yes           vnet0
                                              vnet1

The virsh command is very useful for querying and managing virtual machines. List all of your virtual networks and their status:

$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 net2                 inactive   no            yes
 net3                 active     yes           yes

List all of your virtual machines and their status:

$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     Ubuntu-1604                    running
 2     centos7.0                      running
 -     opensuse-leap                  shut off
 

Get information on a single virtual network:

$ virsh net-info net3
Name:           net3
UUID:           b3b23db5-fc8e-4428-8913-1287a179ec68
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr2

Dump complete information about a virtual network in XML format:

$ virsh net-dumpxml  net3
<network connections='2'>
  <name>net3</name>
  <uuid>b3b23db5-fc8e-4428-8913-1287a179ec68</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr2' stp='on' delay='0'/>
  <mac address='52:54:00:ca:b2:c3'/>
  <domain name='net3'/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.10.128' end='192.168.10.254'/>
    </dhcp>
  </ip>
</network>

Domains vs. Hostnames

Domains and hostnames are not the same thing, although they can be the same if you desire. Virtual machine hostnames are the standard Linux hostnames, and you manage them just as you would on any Linux system.

The virsh list command returns a list of your virtual machine names, also called domains. These are the names that you configured at creation. Look on the information > Overview tab of a running VM to see its domain name. This has nothing to do with DNS domain names; they’re just arbitrary names for our VMs.
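
A quick way to see the difference: the hostname is set inside the guest, while the domain name is a libvirt label managed from the host. The names below are only examples, and virsh domrename requires a fairly recent libvirt and a domain that is shut off:

# Inside the guest: set the Linux hostname

sudo hostnamectl set-hostname webserver1

# On the host: rename the libvirt domain (the VM must be shut off)

virsh domrename opensuse-leap opensuse-web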

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.