
How to Create a Docker Image

In the previous article, we learned about how to get started with Docker on Linux, macOS, and Windows. In this article, we will get a basic understanding of creating Docker images. There are prebuilt images available on DockerHub that you can use for your own project, and you can publish your own image there.

We are going to use prebuilt images to get the base Linux subsystem, as it’s a lot of work to build one from scratch. You can get Alpine (the official distro used by Docker Editions), Ubuntu, BusyBox, or scratch. In this example, I will use Ubuntu.
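If you'd like to fetch a base image ahead of time, here is a minimal sketch (assuming Docker is already installed, as covered in the previous article; the commands skip gracefully otherwise):

```shell
# Skip gracefully on machines without Docker or a running daemon.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running"; exit 0; }

# Fetch base images from Docker Hub before building on them.
docker pull ubuntu || echo "pull failed (offline?)"
docker pull alpine || echo "pull failed (offline?)"   # much smaller than ubuntu
```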

Before we start building our images, let’s “containerize” them! By this I just mean creating directories for all of your Docker images so that you can maintain different projects and stages isolated from each other.

$ mkdir dockerprojects

$ cd dockerprojects

Now create a Dockerfile inside the dockerprojects directory using your favorite text editor; I prefer nano, which is also easy for new users.

$ nano Dockerfile

And add this line:

FROM ubuntu


Save it with Ctrl+X, then Y.

Now create your new image and provide it with a name (run these commands within the same directory):

$ docker build -t dockp .

(Note the dot at the end of the command.) If the build succeeds, you'll see:

Sending build context to Docker daemon  2.048kB

Step 1/1 : FROM ubuntu

---> 2a4cca5ac898

Successfully built 2a4cca5ac898

Successfully tagged dockp:latest

It’s time to run and test your image:

$ docker run -it dockp

You should see a root prompt.


This means you are running a bare-minimum Ubuntu environment inside Linux, Windows, or macOS. You can run all native Ubuntu commands and CLI utilities.
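For instance, you can run a single Ubuntu command in a throwaway container without keeping an interactive shell open. A sketch, assuming Docker is installed; the --rm flag deletes the container as soon as the command exits:

```shell
# Skip gracefully where Docker is unavailable.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running"; exit 0; }

# Run one Ubuntu command in a disposable container; --rm removes the
# container automatically when the command finishes.
docker run --rm ubuntu cat /etc/os-release
```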


Let’s check all the Docker images you now have on your system:

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

dockp               latest              2a4cca5ac898        1 hour ago          111MB

ubuntu              latest              2a4cca5ac898        1 hour ago          111MB

hello-world         latest              f2a91732366c        8 weeks ago         1.85kB

You can see all three images: dockp, ubuntu, and hello-world. I created hello-world a few weeks ago while working on the previous articles of this series. Building a whole LAMP stack can be challenging, so we are going to create a simple Apache server image with a Dockerfile.

A Dockerfile is basically a set of instructions to install all the needed packages, configure things, and copy files. In this case, it installs and configures Apache.

You may also want to create an account on DockerHub and log into your account before building images, in case you are pulling something from DockerHub. To log into DockerHub from the command line, just run:

$ docker login

Enter your username and password and you are logged in.

Next, create a directory for Apache inside dockerprojects:

$ mkdir apache

Create a Dockerfile inside the apache folder:

$ nano Dockerfile

And paste these lines:

FROM ubuntu

LABEL maintainer="Kimbro Staken" version="0.1"

RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*



ENV APACHE_LOG_DIR /var/log/apache2


CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

Then, build the image:

$ docker build -t apache .

(Note the dot, preceded by a space, at the end.)

It will take some time; once it finishes, you should see a successful build like this:

Successfully built e7083fd898c7

Successfully tagged apache:latest

Swapnil:apache swapnil$

Now let’s run the server:

$ docker run -d apache


Eureka! Your container is running. Check all the running containers:

$ docker ps

CONTAINER ID  IMAGE        COMMAND                 CREATED            

a189a4db0f7 apache "/usr/sbin/apache2ctl"  10 seconds ago
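To actually reach Apache from a browser on the host, you would normally also publish the container's port with -p. A sketch (host port 8080 is an arbitrary choice; it assumes the apache image built above):

```shell
# Skip gracefully if Docker or the apache image is unavailable.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
docker image inspect apache >/dev/null 2>&1 || { echo "apache image not built"; exit 0; }

# Map host port 8080 to container port 80 (Apache's default).
docker run -d -p 8080:80 apache

# Then fetch the default page from the host:
curl -s http://localhost:8080 | head -n 5
```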

You can kill the container with the docker kill command:

$ docker kill a189a4db0f7

So, you see, the image itself is persistent: it stays on your system, while containers are ephemeral, spun up from it and discarded. Now you can create as many images as you want, and spin up and tear down as many containers as you need from those images.
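To see that image/container distinction in practice, here is a sketch of the lifecycle commands (destructive ones are commented out; a189a4db0f7 is the example container ID from above):

```shell
# Skip gracefully where Docker is unavailable.
command -v docker >/dev/null 2>&1 || { echo "docker not found"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running"; exit 0; }

# Images persist until deleted; stopped containers linger in the
# "exited" state until removed.
docker ps -a      # lists running AND stopped containers
docker images     # the images themselves are still here

# Remove a stopped container, or use --rm to auto-remove on exit:
#   docker rm a189a4db0f7
#   docker run --rm -it dockp

# Remove an image you no longer need:
#   docker rmi dockp
```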

That’s how to create an image and run containers.

To learn more, you can open your web browser and check out the documentation about how to build more complicated Docker images, like the whole LAMP stack. Here is a sample Dockerfile for you to play with. In the next article, I’ll show how to push images to DockerHub.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Install Webmin on Ubuntu 17.10 (Artful Aardvark) Server


Webmin is a web-based interface for system administration for Unix. Using any modern web browser, you can set up user accounts, Apache, DNS, file sharing, and much more. Webmin removes the need to manually edit Unix configuration files like /etc/passwd, and lets you manage a system from the console or remotely.

Install Webmin on Ubuntu 17.10

Edit the /etc/apt/sources.list file using the following command

sudo nano /etc/apt/sources.list

Add the following line

deb sarge contrib

Save and exit the file

Add the GPG Key


sudo apt-key add jcameron-key.asc

Install webmin using the following commands

sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get install webmin

Access webmin

Go to https://serverip:10000 (replacing serverip with your server’s IP address or hostname).

Ubuntu in particular doesn’t allow logins by the root user by default. However, the user created at system installation time can use sudo to switch to root. Webmin will allow any user with this sudo capability to log in with full root privileges.
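To check whether an account has that sudo capability, inspect its group membership. A small sketch (on Ubuntu, membership in the sudo group is what grants it; the username alice is a placeholder):

```shell
# Accounts in the 'sudo' group on Ubuntu can escalate to root, and can
# therefore log in to Webmin with full privileges.
id -nG

# Grant sudo rights to another account (run as an administrator;
# 'alice' is a placeholder username):
#   sudo usermod -aG sudo alice
```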

Once you have logged in, you should see a screen similar to the following.



Install Latest version of node.js and npm on Ubuntu 17.10 Server

Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.

npm is the package manager for JavaScript and the world’s largest software registry. Discover packages of reusable code — and assemble them in powerful new ways.

Install node.js on Ubuntu 17.10 server

From the terminal, run the following commands

curl -sL | sudo -E bash -

sudo apt-get install -y nodejs

This will install all the dependent packages required to run Node in our system.

Check node and npm version details

node -v


npm -v




4 Tools for Network Snooping on Linux

Computer networking data has to be exposed, because packets can’t travel blindfolded, so join us as we use whois, dig, nmcli, and nmap to snoop networks.

Do be polite and don’t run nmap on any network but your own, because probing other people’s networks can be interpreted as a hostile act.

Thin and Thick whois

You may have noticed that our beloved old whois command doesn’t seem to give the level of detail that it used to. Check out this example for linux.com:

$ whois linux.com
Domain Name: LINUX.COM
Registry Domain ID: 4245540_DOMAIN_COM-VRSN
Registrar WHOIS Server:
Registrar URL:
Updated Date: 2018-01-10T12:26:50Z
Creation Date: 1994-06-02T04:00:00Z
Registry Expiry Date: 2018-06-01T04:00:00Z
Registrar: NameCheap Inc.
Registrar IANA ID: 1068
Registrar Abuse Contact Email:
Registrar Abuse Contact Phone: +1.6613102107
Domain Status: ok
DNSSEC: unsigned

There is quite a bit more, mainly annoying legalese. But where is the contact information? It is sitting on the domain’s Registrar WHOIS Server (see the third line of output above):

$ whois -h

I won’t print the output here, as it is very long, containing the Registrant, Admin, and Tech contact information. So what’s the deal, Lucille? Some registries, such as .com and .net, are “thin” registries that store only a limited subset of domain data. To get complete information, use the -h or --host option to request the full dump from the domain’s Registrar WHOIS Server.

Most of the other top-level domains, such as .info, are thick registries. Run whois on a .info domain to see an example.

Want to get rid of the obnoxious legalese? Use the -H option.
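Putting those options together, a quick sketch; example.com stands in for any domain, and whois.iana.org is used here only to illustrate the -h form:

```shell
# Skip gracefully if whois is not installed.
command -v whois >/dev/null 2>&1 || { echo "whois not installed"; exit 0; }

# -H suppresses the lengthy legal disclaimers.
whois -H example.com || echo "lookup failed (offline?)"

# -h queries a specific WHOIS server instead of the default one.
whois -h whois.iana.org example.com || echo "lookup failed (offline?)"
```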

Digging DNS

Use the dig command to compare the results from different name servers to check for stale entries. DNS records are cached all over the place, and different servers have different refresh intervals. This is the simplest usage:

$ dig
<<>> DiG 9.10.3-P4-Ubuntu <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13694
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 1440
;                     IN      A

;; ANSWER SECTION:
    10800   IN   A
    10800   IN   A
    10800   IN   A
    10800   IN   A

;; Query time: 92 msec
;; WHEN: Tue Jan 16 15:17:04 PST 2018
;; MSG SIZE  rcvd: 102

Take notice of the SERVER: line near the end of the output. This is your default caching resolver. When the address is localhost, that means there is a DNS server installed on your machine. In my case that is Dnsmasq, which is being used by Network Manager:

$ ps ax|grep dnsmasq
2842 ?        S      0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground 
--no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/ 

The dig default is to return A records, which map a domain name to an IPv4 address. IPv6 uses AAAA records:

$ dig AAAA
;; ANSWER SECTION:
    60  IN  AAAA  64:ff9b::9765:105
    60  IN  AAAA  64:ff9b::9765:4105
    60  IN  AAAA  64:ff9b::9765:8105
    60  IN  AAAA  64:ff9b::9765:c105

Check it out, it has IPv6 addresses. Very good! If your Internet service provider supports IPv6, then you can connect over IPv6. (Sadly, my overpriced mobile broadband does not.)

Suppose you make some DNS changes to your domain, or you’re seeing dig results that don’t look right. Try querying with a public DNS service, like OpenNIC:

$ dig @
;; Query time: 231 msec

dig confirms that you’re getting your lookup from the server you queried. You can query all kinds of servers and compare results.
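For example, to compare what two well-known public resolvers return for the same name; example.com and the resolver addresses 8.8.8.8 and 1.1.1.1 are only illustrations, and +short trims the output to the bare records:

```shell
# Skip gracefully if dig is not installed.
command -v dig >/dev/null 2>&1 || { echo "dig not installed"; exit 0; }

# Ask two different resolvers for the same A record and compare;
# differing answers usually mean one cache holds a stale record.
dig @8.8.8.8 +short example.com || echo "query failed (offline?)"
dig @1.1.1.1 +short example.com || echo "query failed (offline?)"
```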

Upstream Name Servers

I want to know what my upstream name servers are. To find this, I first look in /etc/resolv.conf:

$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)

Thanks, but I already knew that. Your Linux distribution may be configured differently, in which case you’ll see your upstream servers here. Let’s try nmcli, the Network Manager command-line tool:

$ nmcli dev show | grep DNS

Now we’re getting somewhere, as that is the address of my mobile hotspot, and I should have thought of that myself. I can log in to its weird little Web admin panel to see its upstream servers. A lot of consumer Internet gateways don’t let you view or change these settings, so try an external service such as What’s my DNS server?

List IPv4 Addresses on your Network

Which IPv4 addresses are up and in use on your network?

$ nmap -sn
Starting Nmap 7.01 ( ) at 2018-01-14 14:03 PST
Nmap scan report for Mobile.Hotspot (
Host is up (0.011s latency).
Nmap scan report for studio (
Host is up (0.000071s latency).
Nmap scan report for nellybly (
Host is up (0.015s latency).
Nmap done: 256 IP addresses (3 hosts up) scanned in 2.23 seconds

Everyone wants to scan their network for open ports. This example looks for services and their versions:

$ nmap -sV

Starting Nmap 7.01 ( ) at 2018-01-14 16:46 PST
Nmap scan report for Mobile.Hotspot (
Host is up (0.0071s latency).
Not shown: 997 closed ports
22/tcp filtered ssh
53/tcp open     domain  dnsmasq 2.55
80/tcp open     http    GoAhead WebServer 2.5.0

Nmap scan report for studio (
Host is up (0.000087s latency).
Not shown: 998 closed ports
22/tcp  open  ssh     OpenSSH 7.2p2 Ubuntu 4ubuntu2.2 (Ubuntu Linux; protocol 2.0)
631/tcp open  ipp     CUPS 2.1
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Service detection performed. Please report any incorrect results at .
Nmap done: 256 IP addresses (2 hosts up) scanned in 11.65 seconds

These are interesting results. Let’s try the same scan from a different Internet account, to see if any of these services are exposed to the big bad Internet. You have a second network if you have a smartphone: there are probably scanner apps you can download, or you can use your phone as a hotspot for your faithful Linux computer. Fetch the WAN IP address from the hotspot control panel and try again:

$ nmap -sV

Starting Nmap 7.01 ( ) at 2018-01-14 17:05 PST
Nmap scan report for
Host is up (0.0061s latency).
All 1000 scanned ports on are closed

That’s what I like to see. Consult the fine man pages for these commands to learn more fun snooping techniques.


Hybrid Cloud: 4 Top Use Cases

In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that cost is usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, “I have a flexible environment where I’m not paying for all of that infrastructure all the time constantly,” she said. “I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective.”

2. Archive

Using a hybrid cloud for archival data offers benefits very similar to disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they would fail over to a public cloud service.

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, be successful with hybrid cloud setups, but this particular use case continues to be very challenging to put into practice.

Learn more about why enterprises are adopting hybrid cloud and best practices that lead to favorable outcomes at Camberley Bates’ Interop ITX session, “Hybrid Cloud Success & Failure: Use Cases & Technology Options.” 

Get live advice on networking, storage, and data center technologies to build the foundation to support software-driven IT and the cloud. Attend the Infrastructure Track at Interop ITX, April 30-May 4, 2018. Register now!

