Tag Archives: servers

Install and monitor services using Monit on Ubuntu 17.10 Server



Monit is a utility for managing and monitoring processes, files, directories and devices on a Unix system. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.

Monit Features

* Daemon mode — poll programs at a specified interval
* Monitoring modes — active, passive or manual
* Start, stop and restart of programs
* Group and manage groups of programs
* Process dependency definition
* Logging to syslog or own logfile
* Configuration — comprehensive controlfile
* Runtime and TCP/IP port checking (tcp and udp)
* SSL support for port checking
* Unix domain socket checking
* Process status and process timeout
* Process cpu usage
* Process memory usage
* Process zombie check
* Check the system’s load average
* Check a file or directory timestamp
* Alert, stop or restart a process based on its characteristics
* MD5 checksum for programs started and stopped by monit
* Alert notification for program timeout, restart, checksum, stop, resource and timestamp errors
* Flexible and customizable email alert messages
* Protocol verification: HTTP, FTP, SMTP, POP, IMAP, NNTP, SSH, DWP, LDAPv2 and LDAPv3
* An HTTP interface with optional SSL support to make monit accessible from a web browser

Install Monit on Ubuntu 17.10 server

sudo apt-get install monit

This will complete the installation.
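To confirm the installation, you can ask the monit binary for its version:

monit -V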

Configuring Monit

The default configuration file is located at /etc/monit/monitrc. You need to edit this file to configure your options:

sudo vi /etc/monit/monitrc

A sample configuration file follows; uncomment and adjust the following options as needed:

## Start monit in background (run as daemon) and check the services at 2-minute
## intervals.
#
set daemon 120

## Set syslog logging with the 'daemon' facility. If the FACILITY option is
## omitted, monit will use the 'user' facility by default. You can specify the
## path to the file for monit native logging.
#
set logfile syslog facility log_daemon

## Set list of mailservers for alert delivery. Multiple servers may be
## specified using comma separator. By default monit uses port 25 — it is
## possible to override it with the PORT option.
#
set mailserver localhost # primary mailserver

## Monit by default uses the following alert mail format:

From: monit@$HOST                        # sender
Subject: monit alert -- $EVENT $SERVICE  # subject

$EVENT Service $SERVICE

Date:        $DATE
Action:      $ACTION
Host:        $HOST                       # body
Description: $DESCRIPTION

Your faithful employee,
Monit

## You can override the alert message format or its parts such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded on runtime. For example to override the sender:
#
set mail-format { from: monit@foo.bar }

## Monit has an embedded webserver, which can be used to view the
## configuration, actual services parameters or manage the services using the
## web interface.
#
set httpd port 2812 and
use address localhost # only accept connection from localhost
allow localhost # allow localhost to connect to the server and
allow 172.29.5.0/255.255.255.0
allow admin:monit # require user 'admin' with password 'monit'

===> Change 172.29.5.0/255.255.255.0 to your own network’s IP range

# Monitoring the apache2 web service.
# Monit will check the apache2 process using the given pidfile.
# If the process name or pidfile path is wrong, monit will
# report the check as failed even though apache2 is running.
check process apache2 with pidfile /var/run/apache2.pid

# Below are the actions monit takes when the service gets stuck.
start program = "/etc/init.d/apache2 start"
stop program = "/etc/init.d/apache2 stop"
# The admin will be notified by mail if any of the conditions below is met.
if cpu is greater than 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
if totalmem > 200.0 MB for 5 cycles then restart
if children > 250 then restart
if loadavg(5min) greater than 10 for 8 cycles then stop
if 3 restarts within 5 cycles then timeout
group server

# Monitoring the MySQL service

check process mysql with pidfile /var/run/mysqld/mysqld.pid
group database
start program = "/etc/init.d/mysql start"
stop program = "/etc/init.d/mysql stop"
if failed host 127.0.0.1 port 3306 then restart
if 5 restarts within 5 cycles then timeout

# Monitoring the SSH service

check process sshd with pidfile /var/run/sshd.pid
start program = "/etc/init.d/ssh start"
stop program = "/etc/init.d/ssh stop"
if failed port 22 protocol ssh then restart
if 5 restarts within 5 cycles then timeout

You can also include other configuration files via include directives:

include /etc/monit/default.monitrc
include /etc/monit/mysql.monitrc
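For example, a small disk-space check could live in its own file. A minimal sketch, assuming you create /etc/monit/default.monitrc yourself (the filesystem check is standard monit syntax; adjust the path and threshold for your system):

# Alert when the root filesystem is almost full
check filesystem rootfs with path /
if space usage > 90% then alert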

This is only a sample configuration file. The configuration file is pretty much self-explanatory; if you are unsure about an option, take a look at the monit documentation.

After configuring your monit file, you can check the configuration file syntax using the following command:

sudo monit -t
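If the file parses cleanly, monit should report:

Control file syntax OK

Otherwise it points at the statement it could not parse, which makes errors easy to track down.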

Once you don’t have any syntax errors, you need to enable the service by changing the file /etc/default/monit:

sudo vi /etc/default/monit

# You must set this variable to 1 for monit to start

startup=0

to

# You must set this variable to 1 for monit to start

startup=1

Now you need to start the service using the following command:

sudo systemctl restart monit.service
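To confirm the daemon is running, you can check it from systemd or ask monit itself for a summary of all monitored services (the monit command-line client talks to the daemon through the httpd interface enabled above):

sudo systemctl status monit
sudo monit summary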

Monit Web interface

The Monit web interface runs on port 2812. If you have a firewall in your network setup, you will need to open this port.

Now point your browser to http://yourserverip:2812/ (make sure port 2812 isn’t blocked by your firewall) and log in with the username admin and password monit. If you want a secure login, you can configure the web interface to use HTTPS; see the Monit documentation.

Once you have logged in, you should see the following screen with all the services we are monitoring.


Apache web server process details





Related posts

Composable Infrastructure: A Skeptic’s View


One of the buzzwords you hear about in data centers these days is composable infrastructure. Hewlett-Packard Enterprise, Cisco, Intel and others have touted the concept as a more efficient way to provision and manage on-premises data center infrastructure and dynamically support workloads. Dell also recently got into the act, introducing Kinetic.

But at Interop ITX, attendees heard a less-than-enthusiastic perspective on composable infrastructure. Rob Hirschfeld, CEO of RackN, who has been in the cloud and infrastructure space for nearly 15 years, including serving on the OpenStack Foundation board, said IT infrastructure buyers should carefully consider whether the technology truly solves a problem for their business.

“I’m pretty skeptical about composable infrastructure,” he said, prefacing his talk. “I’m not a fan of bright and shiny for bright and shiny’s sake. It needs to solve a problem.”

Hirschfeld noted that while his focus is software, what his company does — develop software to automate bare metal servers — has a lot in common with composable hardware. Composable infrastructure is “about how you change your hardware’s form factor,” he said.

From his perspective, the important criteria when buying IT infrastructure are: commodity components are interchangeable, it’s manageable at scale, and it reduces implementation complexity. “If you’re not reducing the complexity of your environment, then you’re ultimately creating technical debt or other problems,” he said.


So what is composable infrastructure? Hirschfeld provided this definition: A server chassis that allows you to dynamically reallocate resources like RAM, storage, networking, and GPUs between CPUs to software-define a physical server.

“So it’s very much like a virtual machine, but with bare metal,” he added.

Today’s composable infrastructure solutions use high-speed interconnections — PCIe and NVMe — to extend the bus length of the components in a single computer, he said. The CPU remains the central identity of a system, but resources like RAM can be reassigned.

Hirschfeld noted his distaste for the term “composable,” which can be confusing taken out of context. Moreover, composable infrastructure can be confused with converged infrastructure, which he described as creating infrastructure using common building blocks instead of having specialized compute/storage units. While converged infrastructure is often used to simplify implementation of virtualized infrastructure, practically speaking, composable infrastructure competes with virtualized infrastructure, he said.

Composable infrastructure is designed to enable the creation of “heterogeneous machine configurations without knowing in advance your target configuration needs,” according to Hirschfeld, who added that virtualized infrastructure can accomplish the same thing.

While composable infrastructure is cool technology, IT buyers need to consider it from a practical point of view, Hirschfeld said. “My concern with this model is that I have 10 chassis of composable infrastructure and each has 20% spare capacity. Now I have to figure out how to manage that,” Hirschfeld said.

Most people he knows don’t dynamically scale their capacity, which is why he’s a skeptic, he said.

“I’m not saying don’t buy this hardware. There are legitimate vendors and it might solve your use case,” he said. “But understand what your use cases are and pressure test against other solutions on the market because this is a premium model.”

Hirschfeld isn’t sold on the benefits composable infrastructure vendors promise, such as reduced overprovisioning, improved time to service, and availability.

In his view, there are two types of IT infrastructure buyers: Those who want to buy an appliance and are willing to spend money on standard systems, and those who are focused on scale, are cost-sensitive, and have a multi-vendor deployment.

“In both cases, you’ll have a pretty predictable use of infrastructure,” Hirschfeld said. “If you don’t, you’re probably not buying infrastructure, but buying it from a cloud provider.”




6 Reasons SSDs Will Take Over the Data Center


The first samples of flash-based SSDs surfaced 12 years ago, but only now does the technology appear poised to supplant hard drives in the data center, at least for primary storage. Why has it taken so long? After all, flash drives are as much as 1,000x faster than hard-disk drives for random I/O.

Partly, it has been a misunderstanding that overlooks systems, and focuses instead on storage elements and CPUs. This led the industry to focus on cost per terabyte, while the real focus should have been the total cost of a solution with or without flash. Simply put, most systems are I/O bound and the use of flash inevitably means needing fewer systems for the same workload. This typically offsets the cost difference.

The turning point in the storage industry came with all-flash arrays: simple drop-in devices that instantly and dramatically boosted SAN performance. This has evolved into a model of two-tier storage with SSDs as the primary tier and a slower, but cheaper, secondary tier of HDDs.

Applying the new flash model to servers provides much higher server performance, just as price points for SSDs are dropping below enterprise hard drive prices. With favorable economics and much better performance, SSDs are now the preferred choice for primary tier storage.

We are now seeing the rise of Non-Volatile Memory Express (NVMe), which aims to replace SAS and SATA as the primary storage interface. NVMe is a very fast, low-overhead protocol that can handle millions of IOPS, far more than its predecessors. In the last year, NVMe pricing has come close to SAS drive prices, making the solution even more attractive. This year, we’ll see most server motherboards supporting NVMe ports, likely as SATA-Express, which also supports SATA drives.

NVMe is internal to servers, but a new NVMe over Fabrics (NVMe-oF) approach extends the NVMe protocol from a server out to arrays of NVMe drives and to all-flash and other storage appliances, complementing, among other things, the new hyper-converged infrastructure (HCI) model for cluster design.

The story isn’t all about performance, though. Vendors have promised to produce SSDs with 32 and 64TB capacity this year. That’s far larger than the biggest HDD, which is currently just 16TB and stuck at a dead-end at least until HAMR is worked out.

The brutal reality, however, is that solid state opens up form-factor options that hard disk drives can’t achieve. Large HDDs will need to stay in the 3.5-inch form factor. We already have 32TB SSDs in a 2.5-inch size, plus new form factors such as M.2 and the “ruler” (an elongated M.2) that allow for a lot of capacity in a small appliance. Intel and Samsung are talking petabyte-sized storage in 1U boxes.

The secondary storage market is slow and cheap, making for a stronger barrier to entry against SSDs. The rise of 3D NAND and new Quad-Level Cell (QLC) flash devices will close the price gap to a great extent, while the huge capacity per drive will offset the remaining price gap by reducing the number of appliances.

Solid-state drives have a secret weapon in the battle for the secondary tier. Deduplication and compression become feasible because of the extra bandwidth in the whole storage structure, effectively multiplying capacity by factors of 5X to 10X. This lowers the cost of QLC-flash solutions below HDDs in price-per-available terabyte.

In the end, perhaps in just three or four years flash and SSDs will take over the data center and kill hard drives off for all but the most conservative and stubborn users. On the next pages, I drill down into how SSDs will dominate data center storage.





Ubuntu 18.04 Released » Linux Magazine


Canonical has announced the release of Ubuntu 18.04, aka Bionic Beaver. It’s an LTS (long-term support) release that’s suitable for enterprise customers and servers. Ubuntu 18.04 LTS will be supported for 5 years for Ubuntu Desktop, Ubuntu Server, and Ubuntu Core.

Ubuntu 18.04 is also the first Ubuntu LTS in years that ships with Gnome as the default desktop shell. Despite being a Gnome distribution, Ubuntu had been using Canonical’s own Unity shell on top of Gnome to gain more influence and control over the user experience.

Ubuntu 18.04 comes with a customized kernel based on Linux 4.15, which adds support for the latest hardware and peripherals. Some of the hardware-focused improvements this kernel brings to Ubuntu include the CPU controller for the cgroup v2 interface, AMD secure memory encryption support, the latest MD driver with software RAID enhancements, and improved power management for systems with SATA Link Power Management.

Java users will continue to use OpenJDK 8, which has moved to universe and will remain available for the life of 18.04. The move is intended to help developers with migration times for packages, custom applications, or scripts that can’t be built with OpenJDK 10 or 11.

Security, as usual, is one of the core features of Ubuntu 18.04. In a conference call, Mark Shuttleworth, the founder and CEO of Canonical, said that Ubuntu 18.04 is fully protected against Spectre and Meltdown.




How to Compile a Linux Kernel | Linux.com


Once upon a time the idea of upgrading the Linux kernel sent fear through the hearts of many a user. Back then, the process of upgrading the kernel involved a lot of steps and even more time. Now, installing a new kernel can be easily handled with package managers like apt. With the addition of certain repositories, you can even easily install experimental or specific kernels (such as real-time kernels for audio production) without breaking a sweat.

Considering how easy it is to upgrade your kernel, why would you bother compiling one yourself? Here are a few possible reasons:

  • You simply want to know how it’s done.

  • You need to enable or disable specific options into a kernel that simply aren’t available via the standard options.

  • You want to enable hardware support that might not be found in the standard kernel.

  • You’re using a distribution that requires you to compile the kernel.

  • You’re a student and this is an assignment.

Regardless of why, knowing how to compile a Linux kernel is very useful and can even be seen as a rite of passage. When I first compiled a new Linux kernel (a long, long time ago) and managed to boot from said kernel, I felt a certain thrill coursing through my system (which was quickly crushed the next time I attempted and failed).
With that said, let’s walk through the process of compiling a Linux kernel. I’ll be demonstrating on Ubuntu 16.04 Server. After running through a standard sudo apt upgrade, the installed kernel is 4.4.0-121. I want to upgrade to kernel 4.17. Let’s take care of that.

A word of warning: I highly recommend you practice this procedure on a virtual machine. By working with a VM, you can always create a snapshot and back out of any problems with ease. DO NOT upgrade the kernel this way on a production machine… not until you know what you’re doing.

Downloading the kernel

The first thing to do is download the kernel source file. This can be done by finding the URL of the kernel you want to download (from Kernel.org). Once you have the URL, download the source file with the following command (I’ll demonstrate with kernel 4.17 RC2):

wget https://git.kernel.org/torvalds/t/linux-4.17-rc2.tar.gz

While that file is downloading, there are a few bits to take care of.

Installing requirements

In order to compile the kernel, we’ll need to first install a few requirements. This can be done with a single command:

sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison

Do note: You will need at least 12GB of free space on your local drive to get through the kernel compilation process. So make sure you have enough space.
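A quick way to check the free space available in your working directory before you begin:

df -h .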

Extracting the source

From within the directory housing our newly downloaded kernel, extract the kernel source with the command:

tar xvzf linux-4.17-rc2.tar.gz

Change into the newly created directory with the command cd linux-4.17-rc2.

Configuring the kernel

Before we actually compile the kernel, we must first configure which modules to include. There is actually a really easy way to do this. With a single command, you can copy the current kernel’s config file and then use the tried and true menuconfig command to make any necessary changes. To do this, issue the command:

cp /boot/config-$(uname -r) .config

Now that you have a configuration file, issue the command make menuconfig. This command will open up a configuration tool (Figure 1) that allows you to go through every module available and enable or disable what you need or don’t need.

It is quite possible you might disable a critical portion of the kernel, so step through menuconfig with care. If you’re not sure about an option, leave it alone. Or, better yet, stick with the configuration we just copied from the running kernel (as we know it works). Once you’ve gone through the entire list (it’s quite long), you’re ready to compile!
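If you would rather skip the menus entirely, an alternative is to keep the copied configuration and let the build system fill in defaults for any options that are new in this kernel:

make olddefconfig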

Compiling and installing

Now it’s time to actually compile the kernel. The first step is to compile using the make command. So issue make and then answer the necessary questions (Figure 2). The questions asked will be determined by what kernel you’re upgrading from and what kernel you’re upgrading to. Trust me when I say there’s a ton of questions to answer, so give yourself plenty of time here.
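One time-saving note: make accepts the standard -j flag, so you can spread the build across all available CPU cores:

make -j $(nproc)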

After answering the litany of questions, you can then install the modules you’ve enabled with the command:

sudo make modules_install

Once again, this command will take some time, so either sit back and watch the output, or go do something else (as it will not require your input). Chances are, you’ll want to undertake another task (unless you really enjoy watching output fly by in a terminal).

Now we install the kernel with the command:

sudo make install

Again, another command that’s going to take a significant amount of time. In fact, the make install command will take even longer than the make modules_install command. Go have lunch, configure a router, install Linux on a few servers, or take a nap.

Enable the kernel for boot

Once the make install command completes, it’s time to enable the kernel for boot. To do this, issue the command:

sudo update-initramfs -c -k 4.17-rc2

Of course, you would substitute the kernel number above for the kernel you’ve compiled. When that command completes, update grub with the command:

sudo update-grub

You should now be able to restart your system and select the newly installed kernel.
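After rebooting, you can confirm which kernel is actually running:

uname -r

The output should now show the 4.17-rc2 version string instead of 4.4.0-121.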

Congratulations!

You’ve compiled a Linux kernel! It’s a process that may take some time; but, in the end, you’ll have a custom kernel for your Linux distribution, as well as an important skill that many Linux admins tend to overlook.