Tag Archives: servers

4 Useful Tools to Run Commands on Multiple Linux Servers | Linux.com


In this article, we will show how to run commands on multiple Linux servers at the same time. We will explain how to use some of the widely known tools designed to execute repetitive series of commands on multiple servers simultaneously. This guide is useful for system administrators who have to check the health of multiple Linux servers every day.

For the purpose of this article, we assume that you already have SSH set up to access all of your servers. When accessing multiple servers simultaneously, it is also appropriate to set up key-based, password-less SSH on all of your Linux servers, which both strengthens server security and makes access easier.
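
If you have not yet configured key-based access, a minimal sketch looks something like the following (the user name and host names are placeholders for your own servers):

    # Generate a key pair if you do not already have one
    ssh-keygen -t ed25519

    # Copy the public key to each server you manage
    ssh-copy-id admin@server1.example.com
    ssh-copy-id admin@server2.example.com

    # Confirm that you can now log in without a password
    ssh admin@server1.example.com uptime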

1. PSSH – Parallel SSH

Parallel-SSH is an open source, fast and easy-to-use command line based Python toolkit for executing ssh in parallel on a number of Linux systems. It contains a number of tools for various purposes such as parallel-ssh, parallel-scp, parallel-rsync, parallel-slurp and parallel-nuke (read the man page of a particular tool for more information).
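
As a quick illustration (the host names, user name and file paths are placeholders, and on some distributions the binaries are named pssh and pscp rather than parallel-ssh and parallel-scp), you list your servers in a plain text file and point the tools at it:

    # hosts.txt contains one server per line, for example:
    #   server1.example.com
    #   server2.example.com

    # Run a command on every host and print the output inline
    parallel-ssh -h hosts.txt -l admin -i uptime

    # Copy a file to all hosts in parallel
    parallel-scp -h hosts.txt -l admin /etc/motd /tmp/motd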

Read more at Tecmint


4 Must-Have Tools for Monitoring Linux | Linux.com


Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons which particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well beyond what most other platforms can do. Just ask any enterprise business why they use Linux and open source.

But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from the command line, to GUI, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which excludes some high-quality proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools.

Top

We’ll first start with the obvious. The top command is a great place to start when you need to monitor which processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of the processes running on a Linux machine. The top command not only displays dynamic information about each running process (as well as the necessary information to manage those processes), but also gives you an overview of the machine (such as how many CPUs are found, and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see which processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly.

There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top.
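
A few invocations I find handy are shown below; the exact flags can vary slightly between top implementations (procps-ng versus busybox, for example), so check the man page on your system:

    # Sort the process list by memory usage instead of CPU
    top -o %MEM

    # Show only the processes owned by a particular user
    top -u www-data

    # Full documentation
    man top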

Glances

If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2).

You won’t find glances installed by default. However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository.
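
On an apt- or dnf-based system, the install and first run look something like this (package names can differ slightly on other distributions):

    # Debian/Ubuntu
    sudo apt install glances

    # Fedora
    sudo dnf install glances

    # Start it, then press 2 to toggle the sidebar
    glances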

GNOME System Monitor

If you’re not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which covers much the same ground as top, but in a graphical interface. If you prefer a GUI, you can’t beat this app.

With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon).

You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4).

If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system.
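
For example, on the two most common package managers it is a one-line install (the package name may vary slightly on other distributions):

    # Debian/Ubuntu
    sudo apt install gnome-system-monitor

    # Fedora
    sudo dnf install gnome-system-monitor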

Nagios

If you’re looking for an enterprise-grade network monitoring system, look no further than Nagios. But don’t think Nagios is limited to monitoring network traffic. This system has over 5,000 different add-ons that can be used to expand it to perfectly meet (and exceed) your needs. The Nagios monitor doesn’t come pre-installed on your Linux distribution, and although the install isn’t quite as difficult as that of some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source (a rough outline is sketched below). Once installed, you can log into the Nagios web GUI and start monitoring (Figure 5).
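
The version number below is only an example, and the configure flags and make targets change between releases, so follow the official Nagios documentation for the release you download:

    # Download and unpack the Nagios Core source (version is an example)
    wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.6.tar.gz
    tar xzf nagios-4.4.6.tar.gz
    cd nagios-4.4.6

    # Build and install the core
    ./configure
    make all
    sudo make install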

Of course, at this point, you’ve only installed the core and will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time.
The one caveat with Nagios is that you must manually define any remote hosts to be monitored (outside of the host the system is installed on) via text files. Fortunately, the installation includes sample configuration files (found in /usr/local/nagios/etc/objects) which you can use as templates for the configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers), as in the example below.
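
A host definition for a remote server looks roughly like this (the file name, host name and address are examples, and you will also need to make sure nagios.cfg points at the directory holding your server definitions):

    # /usr/local/nagios/etc/servers/web01.cfg
    define host {
        use         linux-server    ; template shipped with the sample configs
        host_name   web01           ; example host name
        alias       Remote web server
        address     192.168.1.25    ; example address
    }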

Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it.

There’s More Where That Came From

We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need.

How to Create SSH Tunneling or Port Forwarding in Linux | Linux.com


SSH tunneling (also referred to as SSH port forwarding) is simply routing local network traffic through SSH to remote hosts. This implies that all your connections are secured using encryption. It provides an easy way of setting up a basic VPN (Virtual Private Network), useful for connecting to private networks over insecure public networks like the Internet.

It may also be used to expose local servers behind NATs and firewalls to the Internet over secure tunnels, as implemented in ngrok.

SSH sessions permit tunneling network connections by default and there are three types of SSH port forwarding: local, remote and dynamic port forwarding.

In this article, we will demonstrate how to quickly and easily set up SSH tunneling using the different types of port forwarding in Linux.
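
As a taste of what that looks like in practice (the user name, hosts and ports below are placeholders), each type of forwarding maps to a single ssh flag:

    # Local forwarding: reach the remote host's MySQL (port 3306) via localhost:3307
    ssh -L 3307:localhost:3306 user@remote.example.com

    # Remote forwarding: expose your local web server (port 80) on the remote host's port 8080
    ssh -R 8080:localhost:80 user@remote.example.com

    # Dynamic forwarding: start a local SOCKS proxy on port 1080
    ssh -D 1080 user@remote.example.com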

Read more at Tecmint


The Crypto-Criminal Bar Brawl | Enterprise Security


As if e-commerce companies didn’t have enough problems with transacting securely and defending against things like fraud, another avalanche of security problems — like cryptojacking, the act of illegally mining cryptocurrency on your end servers — has begun.

We’ve also seen a rise in digital credit card skimming attacks against popular e-commerce software such as Magento. Some of the attacks are relatively naive and un-targeted, taking advantage of lax security on websites found to be vulnerable, while others are highly targeted for maximum volume.

Indeed, it’s so ridiculous that there are websites such as MageReport.com and Mage Scan that will scan your website for any client-facing malware.

As for server-side problems, you might be out of luck. A lot of e-commerce software lives in a typical LAMP stack, and while there is a plethora of security software for Windows-based environments, the situation is fairly bleak for Linux.

For a long time, Linux enjoyed a kind of smug arrogance with regard to security, and its advocates pooh-poohed the notoriously hackable Windows operating system. However, it’s becoming ultra clear that Linux is just as susceptible, if not more so, when running specific software such as e-commerce solutions.

Bridges Falling Down

Why have things seemingly gotten so much worse lately? It is not that security controls and processes have changed dramatically. It’s more that the attacks have become more lucrative, more tempting, and easier to get away with, thanks to the rise of cryptocurrency. It allows attackers to generate money quickly, easily and, more important, anonymously.

Folks — this is the loudspeaker — our digital roads and bridges are falling down. They are old and decrepit. Our security controls and processes have not kept pace with the rapid advancement of malware, its ease of use, and its coupling with a new range of software that allows attackers to hide their trails more effectively.

Things like cryptocurrency, however, are just the symptom of a greater issue. That issue is the fact that the underlying software foundations we’ve been using ever since the first browsers appeared are built on a fundamentally flawed architecture.

Feature and Flaw

The general purpose operating system that allowed every company to have a whole slew of easy-to-use desktop software in the 90s, and that built up amazingly large Internet companies in the early 2000s, has an Achilles heel. It is explicitly designed to run multiple programs on the same system — such as cryptominers on the server that runs your WooCommerce or Magento application.

It is an old concept that dates back to the late 1960s, when the first general purpose operating systems, such as Unix, were introduced. Back then, the computers had a business need to run multiple programs and applications on them. The systems back then were just too big and too expensive not to. They literally filled entire walls.

That’s not the case in 2018. Today our computers are “virtual,” and they can be taken down and brought up with the push of a button — usually by other programs. It’s a completely different world.

Now for end user computing devices such as personal laptops and phones, we want this design characteristic, as we have the need to use the browser, check our email, use the calendar and such. However, on the server side where our databases and websites live, it’s a flaw.

Virtual Ransacking

This seemingly innocuous design characteristic is what allows attackers to run their programs, such as cryptominers, on your servers. It is what allows attackers to insert card skimmers into your websites. It is what allows the attackers to run malware on your servers that try and shut down other pieces of malware in order to remain the dominant attacker.

Yes, you read that right — many of these variants now have so much free rein on so many thousands of websites that they literally fight against each other for your computing resources. This is how bad it’s gotten. It’s as if the cryptocriminals threw a party at your house while you were gone and then got into a big brawl and tore up all your furniture and ransacked your house. Then they woke up the next day and laughed all the way to the bank.

This isn’t the only way to deploy software, though. Consider famous software companies such as Uber, Airbnb, Twitter and Facebook. If you talk to their engineers, they’ll tell you that they already have to isolate a given program per server — in this case, a virtual machine. Why? It’s because they simply have too much software to begin with.

Instead of dealing with a single database, they might have to deal with hundreds or thousands. Likewise, the old concept of allowing multiple users on a given system doesn’t make a lot of sense anymore. It has evolved to the point where identity access management lives outside of the single server model.

Hack Attacks Are Not Inevitable

Unikernels embrace this new model of software provisioning and enforce it at the same time. They run only a single application per virtual machine (the server). They cannot, by design, run other programs on the same server.

This completely prevents attackers from running their programs on your server. It prevents them from downloading new software onto the server and massively limits their ability to inject malicious content, such as credit card skimming scripts and cryptomining programs.

Instead of scanning for hacked systems or unpatched systems waiting to be attacked, you could even run outdated software that has known bugs in it, and these same styles of attacks would fall flat, as there would be no capability to execute them. This is all enforced at the operating system level and backed by hardware baked-in isolation.

Are we going to continue to let the cryptocriminals run free on our servers? How are you going to call the cops on people you can’t even see who might live halfway around the world? Don’t fall prey to the notion that hackers are natural disasters and it’s only inevitable that they’ll get you one day. It doesn’t need to be like that. We don’t have to deploy our software like we are using computers from the 1970s. It’s time that we rebuilt our digital infrastructure.


Ian Eyberg is CEO of NanoVMs, based in San Francisco. A self-taught expert in computer science, specifically operating systems and mainstream security, Eyberg is dedicated to initiating a revolution and mass upgrade of global software infrastructure, which for the most part is based on 40-year-old tired technology. Prior to cracking the code of unikernels and developing a commercially viable solution, Eyberg was an early engineer at Appthority, an enterprise mobile security company.






How to Monitor Network Traffic with Linux and vnStat | Linux.com


If you’re a network or a Linux admin, sometimes you need to monitor the network traffic coming into and going out of your Linux servers. With a number of tools available to handle this task, where do you turn? One very handy tool is vnStat. With vnStat you get a console-based network traffic monitor that is capable of monitoring and logging traffic on selected interfaces for specific dates, times, and intervals. Along with vnStat comes a PHP script that allows you to view the network traffic of your configured interfaces via a web-based interface.

I want to show you how to install and use both vnStat and vnStat-PHP on Linux. I’ll demonstrate on Ubuntu Server 18.04, but the tool is available for most distributions.
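
To give a feel for it, a typical install and first look goes something like this (the package and service names are those on Ubuntu, and eth0 is a placeholder for your interface):

    # Install vnStat and make sure the daemon is collecting data
    sudo apt install vnstat
    sudo systemctl enable --now vnstat

    # Traffic summary for a specific interface
    vnstat -i eth0

    # Hourly, daily and live views
    vnstat -h
    vnstat -d
    vnstat -l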

Read more at TechRepublic
