
TrentaOS Is an Elegant Desktop Linux with a Few Rough Edges | Linux.com


It appears we have another Linux desktop renaissance on our hands. Back in the late 1990s, it seemed like everyone was creating a new Linux distribution—each with its own unique take on the platform—until there were so many to choose from, one never knew where to begin. This time around, we have a growing number of distributions, each making slight variations to something already in existence. And that, I believe, is a good thing. Why? Refinement and specificity.
Consider TrentaOS, for example. Here we have a new platform (still very much in alpha), based on Ubuntu, with a decidedly Mac feel, by way of GNOME. If you look at the landscape of Linux, you’ll find several distributions already doing the Mac-like desktop quite well (Elementary OS and ZorinOS immediately come to mind). So why another? What can TrentaOS offer that differs from what others are doing?

First off, the similarities to Mac exist only on the surface. Click on the Applications menu and your experience veers back toward the GNOME side of things (thanks to the GNOME Dash). This is where you start to see the weakness of TrentaOS, wherein the developers/designers have added a beautiful icon set/theme and a dock to GNOME. Beyond that, it’s pretty much Ubuntu GNOME. Even so, the look of the TrentaOS desktop is quite lovely (Figure 1), but what happens when you go to work? Let’s take a look.

The good

Before we dive into what might be wrong with TrentaOS, let’s take a look at what’s right. You must first remember that, as I mentioned, we’re talking about an alpha release, so it’s very rough and there’s plenty of room to grow. In fact, I would guess that what we’ll see when the official first release lands will be quite different from what we’re looking at now.

Even so, let’s start with the good bits that make TrentaOS something you should put on your radar.

To begin, the TrentaOS desktop offers a simplistic elegance that is very MacOS-like, with a bit more transparency tossed into the mix. As well, the developers have done a great job rolling in a very tasteful flat theme (Figure 2).

With the file manager open, you should notice something else the developers have enabled on the desktop—a global menu. This, of course, comes by way of the GNOME Application Menu, but it was a good choice for the developers to have it enabled by default. Why? Because it clears up in-app real estate and allows the flat theme to be more cohesive throughout.

Another plus (at least for the moment) was the intentional retention of the GNOME Dash. This take on the Application menu is one of the finest on the market. It makes for simple application searching and launching, while keeping with the minimalism of TrentaOS.

The inclusion of VLC media player is a definite step in the right direction, one that all desktops should consider. VLC is, by far, the superior video player and should be considered the de facto standard. Why other distributions do not include it is beyond me.


TrentaOS also opts to go for the very minimal Musique music player (as opposed to the GNOME default Rhythmbox). Musique is a wise choice for this distribution because it not only fits well with the theme, it’s also a very simple player that anyone could use. Musique offers only the features you need to play your music and not much more. It’s simple and elegant.

Finally, kudos for including GIMP. I understand why some distributions leave that application out (to save space), but every time I install a distribution that doesn’t include the flagship, open source image editing tool, I feel as if something is missing (and immediately install GIMP).

The bad

Again, you should remember that TrentaOS is still in alpha, so much of the bad will hopefully vanish as the distribution migrates toward beta and official release. Nevertheless, I would be remiss not to point out certain issues. I will also not address stability issues, as these are part and parcel of TrentaOS being in alpha. Issues such as title bars suddenly going missing and lagging transitions will all sort themselves out. So we’ll stick with those issues that have nothing to do with the growing pains of being alpha.
The first issue is fairly glaring. Shortly after installing TrentaOS, I was prompted to do a distribution upgrade. I decided no harm could come of that and proceeded (as this was a brand new virtual machine install). Upon reboot, I was greeted with a stock Ubuntu GNOME desktop…all signs of TrentaOS had been stripped away. To solve that issue, I reinstalled the OS. The next time I did a standard upgrade, I immediately noticed that TrentaOS was still relying on the now-defunct Ubuntu Software Center. Considering that GNOME has already migrated to GNOME Software, this seems quite out of place. The Ubuntu Software Center has been left behind for good reason, and the TrentaOS developers would be wise to make this change.

Another issue occurred when doing a standard update. The updater locked up in the middle of the process. I’d like to chalk this up as an alpha issue, but I’ve seen it happen in non-alpha releases. After having to force-quit the updater, I had to issue sudo dpkg --configure -a twice; even then, more issues appeared, such that I had to manually delete folders in /var/lib/dpkg/updates (from the command line) in order to get the update to run successfully.
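For the record, the recovery sequence looked roughly like this (a sketch of what I ran; the exact cleanup needed may vary):

# Re-run any interrupted package configuration (I had to run this twice)
sudo dpkg --configure -a

# If dpkg still complains, clear its stale update journal and try again
sudo rm -rf /var/lib/dpkg/updates/*
sudo dpkg --configure -a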

Eventually, after running sudo apt-get update && sudo apt-get upgrade && sudo apt-get autoremove, I was able to restart the system and boot back into an improved experience. Unfortunately, the Ubuntu Software Center was still front and center. It seems the only way to jettison that package from TrentaOS is to run a distribution upgrade, which then removes all traces of TrentaOS and replaces it with stock Ubuntu GNOME.

Finally, TrentaOS ships with a badly outdated version of LibreOffice. Even after the update, LibreOffice was running at version 4.2.8.2. Yes, you can download the latest version of the office suite, remove the currently installed version, and install the 5.x iteration of LibreOffice, but this isn’t a process that should be required to gain the latest stable release of a crucial piece of software.
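If you do take that route, one possible sequence (a sketch, assuming the LibreOffice Fresh PPA supports the underlying Ubuntu release) would be:

# Remove the bundled 4.x packages
sudo apt-get remove --purge 'libreoffice*'

# Add the LibreOffice Fresh PPA and install the current series
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt-get update && sudo apt-get install libreoffice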

The conclusion

TrentaOS is a project I certainly hope will continue. It has a lot of potential to stand alongside the likes of Elementary OS as a modern, Mac-like take on the Linux desktop. And if there’s one thing Linux needs, it’s more such desktop environments that build upon what is already working to attract new users by way of elegant, simple solutions. TrentaOS is a beautiful desktop that could be one such solution.


Packet Blast: Top Tech Blogs, Jan. 27


We collect the top expert content in the infrastructure community and fire it along the priority queue. 




Install Munin (Monitoring Tool) on Ubuntu 16.10 (Yakkety Yak) Server



Munin, the monitoring tool, surveys all your computers and remembers what it saw. It presents all the information in graphs through a web interface. Its emphasis is on plug-and-play capability: after completing an installation, a large number of monitoring plugins will be up and running with no extra effort.

Using Munin you can easily monitor the performance of your computers, networks, SANs, applications, weather measurements and whatever comes to mind. It makes it easy to determine “what’s different today” when a performance problem crops up. It makes it easy to see how you’re doing capacity-wise on any resources.

Munin uses the excellent RRDTool (written by Tobi Oetiker) and the framework is written in Perl, while plugins may be written in any language. Munin has a master/node architecture in which the master connects to all the nodes at regular intervals and asks them for data. It then stores the data in RRD files, and (if needed) updates the graphs. One of the main goals has been ease of creating new plugins (graphs).

Preparing Your system

Install the Apache web server using the following command:

sudo apt-get install apache2

Now proceed with the Munin server installation using the following command from your terminal:

sudo apt-get install munin

Once the package is installed, you only need to make a few changes to get your installation working.

Configuring Munin server

You need to edit the /etc/munin/munin.conf file

sudo vi /etc/munin/munin.conf

Change the following lines

Change 1

#dbdir /var/lib/munin
#htmldir /var/cache/munin/www
#logdir /var/log/munin
#rundir /var/run/munin

to

dbdir /var/lib/munin
htmldir /var/www/munin
logdir /var/log/munin
rundir /var/run/munin

Change 2

#tmpldir /etc/munin/templates

to

tmpldir /etc/munin/templates
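
If you would rather script changes 1 and 2 than edit the file by hand, a sed one-liner along these lines should work (a sketch, assuming the stock commented-out lines shown above):

sudo sed -i -e 's|^#dbdir|dbdir|' \
  -e 's|^#htmldir.*|htmldir /var/www/munin|' \
  -e 's|^#logdir|logdir|' \
  -e 's|^#rundir|rundir|' \
  -e 's|^#tmpldir|tmpldir|' /etc/munin/munin.conf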

Change 3

The server name on the line localhost.localdomain should be updated to the hostname, domain name, or other identifier you’d like to use for your monitoring server. Change:

# a simple host tree
[localhost.localdomain]
address 127.0.0.1
use_node_name yes

to

[MuninMonitor]
address 127.0.0.1
use_node_name yes

Change 4

You need to edit the Munin Apache configuration:

sudo vi /etc/munin/apache.conf

Change the following line at the start of the file

Alias /munin /var/cache/munin/www

to

Alias /munin /var/www/munin

and

We also need to allow connections from outside the local computer. To do this, make the following changes:

<Directory /var/cache/munin/www>
Order allow,deny
Allow from localhost 127.0.0.0/8 ::1
Options None

to

<Directory /var/www/munin>
Order allow,deny
#Allow from localhost 127.0.0.0/8 ::1
Allow from all
Options None

You will need to create the directory path that you referenced in the munin.conf file and modify the ownership to allow munin to write to it:

sudo mkdir /var/www/munin

sudo chown munin:munin /var/www/munin

Now you need to restart the munin and apache services using the following commands

sudo service munin-node restart

sudo service apache2 restart

It might take a few minutes to generate the necessary graphs and HTML files. After about five minutes, your files should be created, and you should be able to access your Munin details at:

http://yourserver_ip_address/munin
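
If you would rather not wait for the cron job, you can usually trigger a run by hand as the munin user (path as shipped on Debian/Ubuntu):

sudo -u munin /usr/bin/munin-cron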


If you get an error message in your browser similar to the following, you need to wait longer for munin to create the files

Forbidden

You don’t have permission to access /munin/

Configure Remote Monitoring

Munin can easily monitor multiple servers at once. If you want to monitor remote servers, follow this procedure.

First, you need to install the Munin client package on the remote server using the following command:

sudo apt-get install munin-node

Now you need to edit the munin-node.conf file to specify that your monitoring server is allowed to poll the client for information.

sudo vi /etc/munin/munin-node.conf

Search for the section that has the line “allow ^127.0.0.1$”. Modify the IP address to reflect your monitoring server’s IP address. If your monitoring server’s IP is 172.30.2.100:

allow ^172\.30\.2\.100$

Save and exit the file

You need to restart the Munin client using the following command:

sudo service munin-node restart

Now you need to log in to your Munin server and edit the munin.conf file

sudo vi /etc/munin/munin.conf

Copy the host section you created earlier, give the copy a distinct name, and change the IP address to your remote client’s IP address. For example, change:

[MuninMonitor]
address 127.0.0.1
use_node_name yes

to

[MuninClient]
address 172.30.2.101
use_node_name yes

Finally, you need to restart the Apache server using the following command

sudo service apache2 restart
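
If the new node never appears in the web interface, a quick way to check connectivity is to speak the Munin protocol directly from the monitoring server (munin-node listens on TCP port 4949 by default; the address matches the example above):

telnet 172.30.2.101 4949

Once connected, type list to see the plugins the node exposes, fetch load to pull a sample reading, and quit to close the session.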

Additional Plugins

The munin-plugins-extra package contains performance checks for additional services such as DNS, DHCP, Samba, etc. To install the package, run the following command from the terminal:

sudo apt-get install munin-plugins-extra

Make sure you install this package on both the server and node machines.
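
Note that installing the package only makes the extra plugins available; a plugin is enabled by symlinking it into /etc/munin/plugins on the node and restarting munin-node. As a sketch (the samba plugin name is an assumption; list the directory to see what the package actually shipped):

ls /usr/share/munin/plugins
sudo ln -s /usr/share/munin/plugins/samba /etc/munin/plugins/samba
sudo service munin-node restart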


Converged Vs. Hyperconverged Infrastructure: What’s The Difference?


Traditionally, the responsibility of assembling IT infrastructure falls to the IT team. Vendors provide some guidelines, but the IT staff ultimately does the hard work of integrating them. The ability to pick and choose components is a benefit, but requires effort in qualification of vendors, validation for regulatory compliance, procurement, and deployment.

Converged and hyperconverged infrastructure provides an alternative. In this blog, I’ll examine how they evolved from the traditional infrastructure model and compare their different features and capabilities.

Reference architectures

Reference architectures, which provide blueprints of compatible configurations, help to alleviate some of the burden of IT infrastructure integration. Hardware or software vendors provide defined behavior and performance given selected choices of hardware devices and software, along with configuration parameters. However, since reference architectures may involve different vendors, they can present problems in determining whom IT groups need to call for support.

Furthermore, given that the systems combine components from multiple vendors, systems management remains difficult. For example, visibility into all levels of the hardware and software stack is not possible, since management tools can’t assume how the infrastructure was set up. Even with systems management standards and APIs, tools aren’t comprehensive enough to understand device-specific information.

Converged infrastructure: ready-made

Converged infrastructure takes the idea of a reference architecture and integrates the system prior to shipping to customers; systems are pre-tested and pre-configured. One unpacks the box, plugs it into the network and power, and the system is ready to use.

IT organizations choose converged systems for ease of deployment and management instead of the benefits of an open, interoperable system with choice of components. Simplicity overcomes choice.

Hyperconverged: The building-block approach

Hyperconverged systems take the convergence concept one step further. These systems are preconfigured, but provide integration via software-defined capabilities and interfaces. Software interfaces act as a glue that supplements the pre-integrated hardware components.

In hyperconverged systems, functions such as storage are integrated through software interfaces, as opposed to the traditional physical cabling, configuration and connections. This type of capability is typically done using virtualization and can exploit commodity hardware and servers.

Local storage not a key differentiator

While converged systems may include traditional storage delivered using discrete NAS or Fibre Channel SAN, hyperconverged systems can take different forms of storage (rotating disk or flash) and present it via software in a unified way.  

A hyperconverged system may use local storage, but it can use an external system with software interfaces to present a unified storage pool. Some vendors get caught up in the definition of whether the storage is implemented locally (as a disk within the server) or as a separate storage system. I think that’s missing the bigger picture. What’s more important is the ability for the systems to scale.

Scale-out is key

Software enables hyperconverged systems to be used as scale-out building blocks. In the enterprise, storage is often an area of interest, since it has been difficult to scale out storage in the same way compute capacity expands by incrementally adding servers.

Hyperconverged building blocks enable graceful scale-out, as capacity may increase without re-architecting the hardware infrastructure. The goal is to unify as many services as possible using software that acts as a layer separating the hardware infrastructure from the workload. That extra layer may result in some performance tradeoff, but some vendors believe the systems are fast enough for most non-critical workloads.

Making a choice

How do enterprises choose between converged and hyperconverged systems? ESG’s research shows that enterprises choose converged infrastructure for mission-critical workloads, citing better performance, reliability, and scalability. Enterprises choose hyperconverged systems for consolidating multiple functions into one platform, ease of use, and deploying tier-2 workloads.

Converged and hyperconverged systems continue to gain interest since they enable creation of on-premises clouds with elastic workloads and resource pooling. However, they can’t solve all problems for all customers. An ESG survey shows that, even five years out, over half the respondents plan to create an on-premises infrastructure strategy based on best-of-breed components as opposed to converged or hyperconverged infrastructure.

Thus, I recommend that IT organizations examine these technologies, but realize that they can’t solve every problem for every organization.

Hear more from Dan Conde live and in person at Interop ITX, where he will co-present “Things to Know Before You (Hyper) Converge Your Infrastructure,” with Jack Poller, senior lab analyst at Enterprise Strategy Group. Register now for Interop ITX, May 15-19 in Las Vegas.




Using Grep-Like Commands for Non-Text Files | Linux.com


In the previous article, I showed how to use the grep command, which is great at finding text files that contain a string or pattern. The idea of directly searching in a “grep-like” way is so useful that there are additional commands to let you search right into PDF documents and handle XML files more naturally. Things do not stop there, as you could consider raw network traffic a collection of data that you want to “grep” for information, too.

Grep on PDF files

Packages are available in Debian and Fedora Linux for pdfgrep. For testing, I grabbed version 1.2 of the Open Document Format specification. Running the following command found the matches for “ruby” in the specification. Adding the -H option will print the filename for each match (just as regular grep does). The -n option is slightly different from regular grep: in regular grep, -n prints the line number that matches; in pdfgrep, the -n option instead shows the page number.

$ pdfgrep ruby OpenDocument-v1.2.pdf 
  6.4 <text:ruby
     6.4.2 <text:ruby
     6.4.3 <text:ruby
 17.10 <style:ruby-properties>..................................................................................................
     19.874.30 <text:ruby
     19.874.31 <text:ruby-text>..................................................................................................
 20.303 style:layout-grid-ruby-below......................................................................................... 783
 20.304 style:layout-grid-ruby-height........................................................................................ 783
 20.341 style:ruby
 20.342 style:ruby

$ pdfgrep -Hn ruby OpenDocument-v1.2.pdf 
OpenDocument-v1.2.pdf:10:   6.4 <text:ruby
OpenDocument-v1.2.pdf:10:      6.4.2 <text:ruby
OpenDocument-v1.2.pdf:10:      6.4.3 <text:ruby
OpenDocument-v1.2.pdf:26:  17.10 <style:ruby

Many other command-line options in pdfgrep operate like the ones in regular grep. You can use -i to ignore case when searching, -c to see just the number of times the pattern was found, -C to show a given amount of context around matches, -r/-R to recursively search, --include and --exclude to limit recursive searches, and -m to stop searching a file after a given number of matches.

Because PDF files can be encrypted, pdfgrep also has the --password option to allow you to provide decryption keys. You might consider the --unac option to be somewhat logically grouped with the -i (case-insensitive) class of options. With this option, accents and ligatures are removed from both the search pattern and the content as it is considered, so the single character æ will be treated as “ae” instead. This makes it simpler to find things when typing at a console. Another interesting option in pdfgrep is -p, which shows the number of matches on each page.
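
Putting a few of these together, the following sketch recursively searches a directory of PDFs, first case-insensitively with page numbers, then counting the matches in each file (the directory name is hypothetical):

$ pdfgrep -r -i -n ruby ~/specs
$ pdfgrep -r -i -c ruby ~/specs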

Grep for XML

On Fedora Linux, you can dnf install xgrep to get access to the xgrep command. The first thing you might like to do with xgrep is search using -x to look for an XPath expression as shown below.

$ cat sample.xml 
<root>
 <sampledata>
   <foo>Foo Text</foo>
   <bar name="barname">Bar text</bar>
 </sampledata>
</root>

$ xgrep -x '//foo[contains(.,"Foo")]' sample.xml 
<!--         Start of node set (XPath: //foo[contains(.,"Foo")])                 -->
<!--         Node   0 in node set               -->

   <foo>Foo Text</foo>

<!--         End of node set                    -->

$ xgrep -x '//bar[@name="barname"]' sample.xml 
<!--         Start of node set (XPath: //bar[@name="barname"])                 -->
<!--         Node   0 in node set               -->

   <bar name="barname">Bar text</bar>

<!--         End of node set                    -->

The xgrep -s option lets you poke around in XML elements looking for a regular expression. This might work slightly differently from what you expect at the start. The format for the pattern is to pick the element you are interested in and then use one or more subelement/regex/ expressions to limit the matches.

The example below will always print an entire sampledata element, and we limit the search to only those with a bar subelement that matches the ‘Bar’ regular expression. I didn’t find a way to pick off just the bar element, so it seems you are always looking for a specific XML element and limiting the results based on matching the subelements.

$ xgrep -s 'sampledata:bar/Bar/' sample.xml 
<!--         Start of node set (Search: sampledata:bar/Bar/)                 -->
<!--         Node   0 in node set               -->

 <sampledata>
   <foo>Foo Text</foo>
   <bar name="barname">Bar text</bar>
 </sampledata>

<!--         End of node set                    -->

As you can see from this example, xgrep is more about finding matching structure in an XML document. As such it doesn’t implement many of the normal grep command line options. There is also no support for file system recursion built into xgrep, so you have to combine with the find command as shown in the previous article if you want to dig around.
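
For example, to run one of the earlier XPath searches across every XML file under the current directory, something like this should do the trick:

$ find . -name '*.xml' -exec xgrep -x '//bar[@name="barname"]' {} \;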

grep your network with ngrep

The ngrep project provides many of the features of grep but works directly on network traffic instead of files. Just running ngrep with the pattern you are after will sift through network packets until something matching is seen, and then you will get a message showing the network packet that matched and the hosts and ports that were communicating. Unless you have specially set up network permissions, you will likely have to run ngrep as the root user to get full access to raw network traffic.

# ngrep foobar
interface: eth1 (192.168.10.0/255.255.255.0)
filter: ((ip || ip6) || (vlan && (ip || ip6)))
match: foobar
###########...######
T 192.168.10.2:738 -> 192.168.10.77:2049 [AP]
.......2...........foobar.. 
...
################################################################
464 received, 0 dropped

Note that the pattern is an extended regular expression, not just a string, so you could find many types of foo using, for example, fooba[rz].

Similar to regular grep, ngrep supports -i for case-insensitive search, -w for whole-word matching, -v to invert the result (showing only packets that do not match your pattern), and -n to match only a given number of packets before exiting.

The ngrep tool also supports options to timestamp the matching packets with -t to print a timestamp when a match occurs, or -T to show the time delta between matches. You can also shut down connections using the -K option to kill TCP connections that match your pattern. If you are looking at low-level packets you might like to use -x to dump the packet as hexadecimal.
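
Putting a few of these options together, the following sketch watches web traffic for either spelling of our pattern, case-insensitively, timestamping each hit (the interface name is an assumption; adjust it for your machine):

# ngrep -d eth1 -i -t 'fooba[rz]' tcp port 80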

A very handy use for ngrep is to check whether the network communications you think are secure really have any protection at all. This can easily be the case with apps on a phone. If you start ngrep with a reasonably rare string like “mysecretsarehere” and then send that same string in the app, you shouldn’t see it being found by ngrep. Just because you can’t see it in ngrep doesn’t mean the app or communication is secure, but at least something is being done by the app to try to protect the data it sends over the Internet.

grep your mail

While the name might be a little misleading, mboxgrep can search mbox files and maildir folders. I found that I had to use the -m option to tell mboxgrep that I wanted to look inside a maildir folder instead of a mailbox. The following command will directly search for matching messages in a maildir folder.

$ cd ~/mail/.Software
$ mboxgrep -m maildir "open source program delight" .

The mboxgrep tool has options to search only the header or body of emails, and you can choose between fcntl, flock, or no file locking during the search depending on your needs. mboxgrep can also recurse into your mail, handy if you have many subfolders you would like to search.
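
Recursion aside, you can also drive mboxgrep with find to search every maildir folder in one pass (this assumes dot-prefixed folders directly under ~/mail, as in the example above):

$ find ~/mail -maxdepth 1 -type d -name '.*' -exec mboxgrep -m maildir "open source program delight" {} \;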

Wrap up

The grep tool has been a go-to tool for searching text files for decades. There is now a growing collection of grep-like tools that allow you to use the same syntax to find other matching things. Those things might be specific file formats, like XML and PDF, or you might consider searching network traffic for interesting events.
