Tag Archives: servers

How to Synchronize Time with NTP in Linux | Linux.com


The Network Time Protocol (NTP) is a protocol used to synchronize a computer's system clock automatically over a network. The machine can keep its system clock on Coordinated Universal Time (UTC) rather than local time.

The most common way to sync system time over a network on Linux desktops or servers is to run the ntpdate command, which can set your system time from an NTP time server. The ntpd daemon must be stopped on the machine where the ntpdate command is issued.

In most Linux systems, the ntpdate command is not installed by default. To install it, execute the command for your distribution:

$ sudo apt-get install ntpdate    [On Debian/Ubuntu]
$ sudo yum  install ntpdate       [On CentOS/RHEL]
$ sudo dnf install ntpdate        [On Fedora 22+]
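
Once installed, a one-time sync might look like the sketch below. The pool.ntp.org server is just an example; substitute any NTP server you prefer, and note that the daemon's service name may be ntp or ntpd depending on your distribution.

$ sudo systemctl stop ntp         [stop the local NTP daemon first; the service is ntpd on CentOS/RHEL/Fedora]
$ sudo ntpdate pool.ntp.org       [one-time sync against a public NTP pool server]
$ sudo systemctl start ntp        [restart the daemon afterwards if you normally run it]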

Read more at Tecmint


Storage Management Software: Users Weigh In


Data storage never seems to stop evolving in ways that challenge IT departments. Aside from the need to deal with perpetual growth, data storage now requires management across cloud and on-premises infrastructure as well as hybrid environments. Different workloads also demand varying service levels from storage solutions. Storage management tools have had to keep up with this rapid change.

Storage management tools give storage managers a way to stay on top of storage systems. They enable storage managers to track utilization, monitor performance and more. What do users actually think of storage management tools on the market today?

The discussion about storage management software on IT Central Station reveals that storage is about more than just storing data. It’s about keeping businesses running optimally. When customers can’t see their data, that’s not a storage problem. It’s a business problem. For this reason, storage managers appreciate storage management solutions that offer real-time visibility into storage performance and the ability to compare relative performance across multiple storage systems. They like products that are responsive and efficient to use, with a “single pane of glass” and automated alerting.

The following reviews from IT Central Station users highlight the pros and cons of two top storage management software products: NetApp OnCommand Insight and Dell EMC ControlCenter.

NetApp OnCommand Insight

A storage administrator at a financial services company who goes by the handle StorageA7774 cited the product’s comprehensive view:

“Since we have to monitor multiple systems, it gives us a single pane of glass to look at all of our environments. Also, to compare and contrast, if one environment is having some issues, we can judge it against the other environments to make sure everything is on par with one another. In the financial services industry, customer responsiveness is very important. Financial advisors cannot sit in front of a customer and say, ‘I can’t get your data.’ Thus, being up and running and constantly available is a very important area for our client.”

Carter B., a storage administrator at a manufacturing company, cited several ways OnCommand Insight helps his organization:

“The tracking of utilization of our storage systems; seeing the throughput—these are the most important metrics for having a working operating system and working storage system. It’s centralized. It’s got a lot of data in there. We can utilize the data that’s in there and the output to other systems to run scripts off of it. Therefore, it’s pretty versatile.”

However, a systems administrator at a real estate/law firm with the handle SystemsA3a53, noted a small drawback:

“There was a minor issue where we were receiving a notification that a cluster was not available, or communication to the cluster. OnCommand Manager could not reach a cluster, which is really much like a false positive. The minor issues were communications within the systems.”

And StorageA970f, a storage architect at a government agency, suggested an improvement to the tool’s interface:

“Maybe a little bit more graphical interface. Right now — and this is going to sound really weird — but whatever the biggest server is, the one that is utilizing the most storage space, instead of showing me that server and how much storage space, it just shows it to you in a big font. Literally in a big font. That’s it. So if your server is named Max and you’ve got another server named Sue, and Max is taking up most of your space, all it’s going to show is just Max is big, Sue is little. That’s is really weird, because I really want to see more than that. You can click on Max, drill down in and see the stuff. But I would rather, on my front interface, say, ‘Oh, gosh, Max is using 10 terabytes. Sue is only using one. She’s fixing to choke. Let me move some of this over.’”

Dell EMC ControlCenter

Gianfranco L., data manager at a tech services company, described how Dell EMC ControlCenter helps his organization:

“We use the SNMP gateway to aggregate hardware and performance events. The alerting feature is valuable because it completes the gap of storage monitoring. Often the storage comes with a tele-diagnostic service. For security purposes, it’s very important for us to be aware of every single failure in order to be more proactive and not only reactive.”
 

Bharath P., senior storage consultant at a financial services firm, described what he likes about the product:

“Centralized administration and management of SAN environment in the organization are valuable features. Improvements to my organization include ease of administration and that it fits in well with all the EMC SAN storage.”

However, Hari K., senior infrastructure analyst at a financial services firm, said there’s room for improvement with EMC ControlCenter:

“It needed improvement with its stability. Also, since it was agent-based communication, we always had to ensure that the agents were running on the servers all the time.”

Gianfranco L. also cited an area where the product could do better:

“The use of agents is not easy. The architectural design of using every single agent for every type of storage can be reviewed with the use of general proxies. The general proxies also discover other vendors’ storage. This can be done with custom made scripts.”




Tig – A Command Line Browser for Git Repositories | Linux.com


In a recent article, we described how to install and use the GRV tool for viewing Git repositories in the Linux terminal. In this article, we would like to introduce another useful command-line interface to Git called Tig.

Tig is a free, open source, cross-platform, ncurses-based text-mode interface for Git. It is a straightforward interface that can help in staging changes for commit at the chunk level and works as a pager for output from different Git commands. It runs on Linux, macOS, and Windows systems.
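
As a quick illustration, installing and launching Tig from inside a repository might look like the sketch below (the package is assumed to be named tig in your distribution's repositories):

$ sudo apt-get install tig        [On Debian/Ubuntu]
$ sudo yum  install tig           [On CentOS/RHEL]
$ cd /path/to/your/repo
$ tig                             [browse the commit history]
$ tig status                      [stage changes chunk by chunk]
$ git log | tig                   [use Tig as a pager for Git output]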

Read more at Tecmint

 


Install Munin on Ubuntu 17.10 Server



Munin is a monitoring tool that surveys all your computers and remembers what it saw. It presents all the information in graphs through a web interface. Its emphasis is on plug-and-play capabilities: after completing the installation, a large number of monitoring plugins will be working with no extra effort.

Using Munin, you can easily monitor the performance of your computers, networks, SANs, applications, weather measurements, and whatever else comes to mind. It makes it easy to determine “what’s different today” when a performance problem crops up, and to see how you’re doing capacity-wise on any resource.

Munin uses the excellent RRDTool (written by Tobi Oetiker) and the framework is written in Perl, while plugins may be written in any language. Munin has a master/node architecture in which the master connects to all the nodes at regular intervals and asks them for data. It then stores the data in RRD files, and (if needed) updates the graphs. One of the main goals has been ease of creating new plugins (graphs).

Preparing Your System

Install the Apache web server using the following command:

sudo apt-get install apache2

Now proceed with the Munin server installation using the following command from your terminal:

sudo apt-get install munin

Once the package is installed, you only need to make a few changes to get your installation working.

Configuring the Munin Server

You need to edit the /etc/munin/munin.conf file

sudo vi /etc/munin/munin.conf

Change the following lines

Change 1

#dbdir /var/lib/munin
#htmldir /var/cache/munin/www
#logdir /var/log/munin
#rundir /var/run/munin

to

dbdir /var/lib/munin
htmldir /var/www/munin
logdir /var/log/munin
rundir /var/run/munin

Change 2

#tmpldir /etc/munin/templates

to

tmpldir /etc/munin/templates

Change 3

The host name on the line [localhost.localdomain] should be updated to the hostname, domain name, or other identifier you’d like to use for your monitoring server. Change:

# a simple host tree
[localhost.localdomain]
address 127.0.0.1
use_node_name yes

to

[MuninMonitor]
address 127.0.0.1
use_node_name yes

Change 4

You need to edit the Munin Apache configuration:

sudo vi /etc/munin/apache.conf

Change the following line at the beginning of the file:

Alias /munin /var/cache/munin/www

to

Alias /munin /var/www/munin

and

We also need to allow connections from outside the local computer. To do this, make the following changes:

<Directory /var/cache/munin/www>
Order allow,deny
Allow from localhost 127.0.0.0/8 ::1
Options None

to

<Directory /var/www/munin>
Order allow,deny
#Allow from localhost 127.0.0.0/8 ::1
Allow from all
Options None
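
Note that Ubuntu 17.10 ships Apache 2.4, where the old Order/Allow directives only take effect if the mod_access_compat module is enabled. If you prefer the native Apache 2.4 access-control syntax, the equivalent block would look something like this:

<Directory /var/www/munin>
Require all granted
Options None
</Directory>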

You will need to create the directory path that you referenced in the munin.conf file and change its ownership so that Munin can write to it:

sudo mkdir /var/www/munin

sudo chown munin:munin /var/www/munin

Now you need to restart the Munin and Apache services using the following commands:

sudo service munin-node restart

sudo service apache2 restart

It might take a few minutes to generate the necessary graphs and html files. After about five minutes, your files should be created and you will be able to access your data. You should be able to access your munin details at:

http://yourserver_ip_address/munin


If you get an error message in your browser similar to the following, you need to wait longer for Munin to create the files:

Forbidden

You don’t have permission to access /munin/

Configure Remote Monitoring

Munin can easily monitor multiple servers at once. If you want to monitor remote servers, follow this procedure.

First, you need to install the Munin client package on the remote server using the following command:

sudo apt-get install munin-node

Now you need to edit the munin-node.conf file to specify that your monitoring server is allowed to poll the client for information.

sudo vi /etc/munin/munin-node.conf

Search for the section that has the line “allow ^127\.0\.0\.1$”. Modify the IP address to reflect your monitoring server’s IP address. If your monitoring server’s IP is 172.30.2.100:

allow ^172\.30\.2\.100$

Save and exit the file

You need to restart the Munin client using the following command:

sudo service munin-node restart
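
Optionally, you can verify from the monitoring server that the node is reachable and that your allow rule works. munin-node listens on TCP port 4949; assuming the remote node's IP is 172.30.2.101 (as in the example further below), a quick check might look like this:

nc -vz 172.30.2.101 4949          [confirm that the munin-node port is open]
telnet 172.30.2.101 4949          [munin-node greets you with a banner; type "list" to see plugins, "quit" to exit]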

Now you need to log in to your Munin server and edit the munin.conf file:

sudo vi /etc/munin/munin.conf

Copy the existing host section, give the copy a name that identifies the remote machine (the name below is just an example), and change the IP address to the remote client's address. For example, change:

[MuninMonitor]
address 127.0.0.1
use_node_name yes

to

[MuninRemoteNode]
address 172.30.2.101
use_node_name yes

Finally, you need to restart the Apache server using the following command:

sudo service apache2 restart

Additional Plugins

The munin-plugins-extra package contains performance checks for additional services such as DNS, DHCP, and Samba. To install the package, run the following command from the terminal:

sudo apt-get install munin-plugins-extra

Make sure you install this package on both the server and the node machines.
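
For reference, individual Munin plugins are enabled on a node by symlinking them from /usr/share/munin/plugins into /etc/munin/plugins and restarting munin-node. A sketch, using the apache_accesses plugin as an assumed example:

munin-node-configure --suggest                [list available plugins and whether they can run on this node]
sudo ln -s /usr/share/munin/plugins/apache_accesses /etc/munin/plugins/apache_accesses
sudo service munin-node restart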






Facebook Debuts Data Center Fabric Aggregator


At the Open Compute Project Summit in San Jose on Tuesday, Facebook engineers showcased their latest disaggregated networking design, taking the wraps off new data center hardware. Microsoft, meanwhile, announced an effort to disaggregate solid-state drives to make them more flexible for the cloud.

The Fabric Aggregator, built on Facebook’s Wedge 100 gigabit top-of-rack switch and Open Switching System (FBOSS) software, is designed as a distributed network system to accommodate the social media giant’s rapid growth. The company is planning to build its twelfth data center and is expanding one in Nebraska from two buildings to six.

“We had tremendous growth of east-west traffic,” said Sree Sankar, technical product manager at Facebook, referring to the traffic flowing between buildings in a Facebook data center region. “We needed a change in the aggregation tier. We were already using the largest chassis switch.”

The company needed a system that would provide power efficiency and have a flexible design, she said. Engineers used Wedge 100 and FBOSS as building blocks and developed a cabling assembly unit to emulate the backplane. The design provides operational efficiency, 60% better power efficiency, and higher port density. Sankar said Facebook was able to deploy it quickly in its data center regions in the past nine months. Engineers can easily scale Fabric Aggregator up or down according to data center demands.

“It redefines network capacity in our data centers,” she said.

Facebook engineers wrote a detailed description of Fabric Aggregator in a blog post. They submitted the specifications for all the backplane options to the OCP, continuing their sharing tradition. Facebook’s networking contributions to OCP include its Wedge switch and Edge Fabric traffic control system. The company has been a major proponent of network disaggregation, saying traditional proprietary network gear doesn’t provide the flexibility and agility they need.

Seven years ago, Facebook spearheaded the creation of the Open Compute Project with a focus on open data center components such as racks and servers. The OCP now counts more than 4,000 engineers involved in its various projects and more than 370 specification and design packages, OCP CEO Rocky Bullock said in kicking off this week’s OCP Summit, which drew some 3,000 attendees.  

Microsoft unveils Project Denali

While Facebook built on its disaggregated networking approach, Microsoft announced Project Denali, an effort to create new standards for flash storage to optimize it for the cloud through disaggregation.

Kushagra Vaid, general manager of Azure Infrastructure at Microsoft, said cloud providers are top consumers of flash storage, which amounts to billions of dollars in annual spending. SSDs, however, with their “monolithic architecture” aren’t designed to be cloud friendly, he said.  

Any SSD innovation requires that the entire device be tested, and new functionality isn’t provided in a consistent manner, he said. “At cloud scale, we want to drive every bit of efficiency,” Vaid said. Microsoft engineers wanted to figure out a way to provide the same kind of flexibility and agility with SSDs as disaggregation brought to networking.

“Why can’t we do the same thing with SSDs?” he said.

Project Denali “standardizes the SSD firmware interfaces by disaggregating the functionality for software-defined data layout and media management,” Vaid wrote in a blog post.

“Project Denali is a standardization and evolution of Open Channel that defines the roles of SSD vs. that of the host in a standard interface. Media management, error correction, mapping of bad blocks and other functionality specific to the flash generation stays on the device while the host receives random writes, transmits streams of sequential writes, maintains the address map, and performs garbage collection. Denali allows for support of FPGAs or microcontrollers on the host side,” he wrote.

Vaid said this disaggregation provides a lot of benefits. “The point of creating a standard is to give choice and provide flexibility… You can start to think at a bigger scale because of this disaggregation, and have each layer focus on what it does best.”

Microsoft is working with several partners including CNEX Labs and Intel on Project Denali, which it plans to contribute to the OCP.

Hear more from Facebook and the Open Compute Project when they present live at the Network Transformation Summit at Interop ITX, April 30 and May 1 in Las Vegas. Register now!


