Monthly Archives: July 2017

Software-Defined Storage: 4 Factors Fueling Demand

As organizations look for cost-effective ways to house their ever-growing stores of data, many of them are turning to software-defined storage. According to market researchers at ESG, 52% of organizations are committed to software-defined storage (SDS) as a long-term strategy.

Some vendor-sponsored studies have found even higher rates of SDS adoption; while the findings are self-serving, they’re still noteworthy. For example, a SUSE report published in 2017 found that 63% of enterprises surveyed planned to adopt SDS within 12 months, and in DataCore Software’s sixth annual State of Software-Defined Storage, Hyperconverged and Cloud Storage survey, only 6% of respondents said they were not considering SDS.

What’s driving this interest in SDS? Let’s look at four important reasons why enterprises are considering the technology.

1. Avoid vendor lock-in

Camberley Bates, managing director and analyst at Evaluator Group, who spoke about SDS at Interop ITX, said in an interview, “The primary driver of SDS is the belief that it delivers independence, and the cost benefit of not being tied to the hardware vendor.”

In fact, when DataCore asked IT professionals about the business drivers for SDS, 52% said that they wanted to avoid hardware lock-in from storage manufacturers.

However, Bates cautioned that organizations need to consider the costs and risk associated with integrating storage hardware and software on their own. She said that many organizations do not want the hassle of integration, which is driving up sales of pre-integrated appliances based on SDS technology.

2. Cost savings

Of course, SDS can also have financial benefits beyond avoiding lock-in. In the SUSE study, 72% of respondents said they evaluate their storage purchases based on total cost of ownership (TCO) over time, and 81% of those surveyed said the business case for SDS is compelling.

Part of the reason SDS can deliver low TCO is its ability to simplify storage management. The DataCore study found that the top business driver for SDS, cited by 55% of respondents, was “to simplify management of different models of storage.”

3. Support IT initiatives

Another key reason why organizations are investigating SDS is because they need to support other IT initiatives. In the SUSE survey, IT pros said that key technologies influencing their storage decisions included cloud computing (54%), big-data analytics (50%), mobility (47%) and the internet of things (46%).

Organizations are looking ahead to how these trends might change their future infrastructure needs. Not surprisingly, in the DataCore report, 53% of organizations said a desire to help future-proof their data centers was driving their SDS move.

4. Scalability

Many of those key trends that are spurring the SDS transition are dramatically increasing the amount of data organizations need to store. Because it offers excellent scalability, SDS appeals to enterprises experiencing fast data growth.

In the SUSE study, 96% of companies surveyed said they like the business scalability offered by SDS. In addition, 95% found scalable performance and capacity appealing.

As data storage demands continue to grow, this need to increase capacity while keeping overall costs down may be the critical factor in determining whether businesses choose to invest in SDS.



Ubuntu 17.10: Back to a GNOME Future

It would have been impossible to avoid hearing that Canonical has decided to shift their flagship product away from their in-house Unity desktop back to an old friend: GNOME. You may remember that desktop — the one that so many abandoned after the shift from 2.x to 3.x.

A few years later, GNOME 3 is now one of the most rock-solid desktops to be found, and one of the most user-friendly Linux desktop distributions is heading back to that particular future. As much as I enjoyed Unity, this was the right move for Canonical. GNOME is a mature desktop interface that is as reliable as it is user-friendly.

I won’t spend too much time speculating on why this happened (there are already plenty of pieces on this topic). There has also been plenty of speculation as to whether or not Canonical will deliver a GNOME-based Ubuntu that offers some of the features found in Unity. To that point, Ken VanDine told OMGUbuntu that the Ubuntu team “…may consider a few tweaks here and there to ease our users into the new experience.”

That’s not much, and it means features like the HUD will be nowhere to be found. Unfortunately, there aren’t any (current) GNOME extensions to replicate that feature. For some (like myself), losing the HUD is big (but not unforgivable). Why? I’ve always found that particular menu interface to be one of the single most efficient on the market.

My guess is that GNOME, as shipped with Ubuntu 17.10, will be a fairly vanilla take on the desktop (with a bit of Ubuntu theme-branding in the mix). If the daily builds of 17.10 are any indication, that will be exactly the case (Figure 1).

Extensions will be your friend

For those who consider GNOME to be a bit less efficient than Unity, know that extensions will be your friend. Again, you’re not going to find an extension to bring about every feature found in Unity, but you can at least gain some added functionality to make the GNOME desktop a bit more familiar to those who’ve been working with Unity for the last few years.

The first two extensions I would suggest you examine are:

  • Dash to Dock

  • Dash to Panel

Which of the above will better suit your needs will depend on three things:

  • Where you like your panel

  • If you prefer a bit of transparency

  • If you prefer a separate top panel with your dock

With Dash to Dock, your GNOME Favorites (found within the Dash) are added to the desktop (Figure 2) to function in similar fashion to the Unity Launcher.

What I like about the Dash to Dock extension is that it not only allows you to add a bit of transparency to the dock, it can be placed on the top, bottom, left, or right edge of the display and does not do away with the top panel.

With Dash to Panel (Figure 3), your Dash Favorites are placed in a panel that spreads across the screen and rolls in the top panel.

For those who might miss the look and feel of what Unity offered, Dash to Dock will be your preferred extension. For those who might like a traditional panel (such as that found in Windows 7 or KDE), Dash to Panel will be your go-to.

If you do use Dash to Dock, you might want to enable the feature to move the applications button to the beginning of the dock (Figure 4).

For anyone who has been using Unity long enough, having that applications button at the bottom of the dock can be a real point of frustration. You can also shift Dash to Dock to panel mode to even better emulate Unity (Figure 5).


One thing you must know is that, to gain access to the options for these extensions (and to even enable/disable them), you will need to install the GNOME Tweak Tool. To do this, open the GNOME Software tool, search for GNOME Tweak and click Install. Once installed, you can click the Launch button (Figure 6) and you’re ready to tweak your extensions (and other aspects of GNOME).

Trust me when I say, GNOME Tweak will make your transition from Unity to GNOME slightly smoother.

The end result

As much as the Unity lovers might hate to hear this, the switch to GNOME will wind up being quite welcome on all fronts. The primary reason is that GNOME is simply more mature than Unity. This translates to (at least in my experience thus far) a much smoother, snappier desktop. And, with the addition of a few extensions, the only thing Unity fans will miss is the HUD. But for those who cannot let go of Unity, know that there is a fork of Unity 7, now named Artemis. At the moment, there is not even an alpha to test, but it looks to be a very promising project that may offer both a “pure” Unity-like desktop and a Plasma-like Unity desktop. Either way, for anyone hoping that Unity 7 will continue on… fear not.

Try it out now

If you can’t wait until October 2017, you can download the latest daily build and install your very own Ubuntu 17.10. I’ve been working with the daily build and have found it to be remarkably stable. If you go this route, just make sure to update regularly. If you’re not one to test pre-release software, the final release is but a few months away.

Once again, the future looks incredibly bright for the Ubuntu Linux desktop.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Remote Sessions Over IPv6 with SSH, SCP, and Rsync

Our familiar old file-copying friends SSH, SCP, and Rsync are all IPv6-ready, which is the good news. The bad news is they have syntax quirks that you must learn to make them work. Before we get into the details, though, you might want to review the previous installments in our meandering IPv6 series.


Like all good Linux admins, you know and use SSH and SCP. Both have some differences and quirks for IPv6 networks. These quirks are in the remote addresses, so once you figure those out, you can script SSH and SCP just like you’re used to, and use public key authentication.

By default, the sshd daemon listens for both IPv4 and IPv6 protocols. You can see this with netstat:

$ sudo netstat -pant|grep sshd
tcp   0  0*  LISTEN   1228/sshd       
tcp6  0  0 :::22       :::*       LISTEN   1228/sshd

You may disable either one with the AddressFamily setting in sshd_config. This example disables IPv6:

AddressFamily inet

The default is any. inet6 means IPv6 only.
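Tying those values together, a minimal sshd_config sketch might look like this (the comments are mine; remember to restart sshd after editing):

```
# /etc/ssh/sshd_config
# AddressFamily accepts three values:
#   any   - listen on both IPv4 and IPv6 (the default)
#   inet  - IPv4 only
#   inet6 - IPv6 only
AddressFamily inet6
```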

On the client side, logging in over IPv6 networks is the same as IPv4, except you use IPv6 addresses. This example uses a global unicast address (2001:db8::/32 is a range reserved for documentation, standing in for your own LAN addresses):

$ ssh carla@2001:db8::2

Just like IPv4, you can log in, run a command, and exit all at once. This example runs a script to back up my files on the remote machine:

$ ssh carla@2001:db8::2 backup

You can also streamline remote root logins. Wise admins disable root logins over SSH, so you have to log in as an unprivileged user and then change to a root login. This is not so laborious, but we can do it all with a single command:

$ ssh -t  carla@2001:db8::2 "sudo su - root -c 'shutdown -h 120'" 
carla@2001:db8::2's password: 
[sudo] password for carla:

Broadcast message from carla@remote-server
        (/dev/pts/2) at 9:54 ...

The system is going down for halt in 120 minutes!

The shutdown example will stay open until it finishes running, so you can change your mind and cancel the shutdown in the usual way, with Ctrl+c.

Another useful SSH trick is to force IPv6 only, which is great for testing:

$ ssh -6 2001:db8::2

You can also force IPv4 with -4.

You may access hosts on your link local network by using the link local address. This has an undocumented quirk that will drive you batty, except now you know what it is: you must append your network interface name to the remote address with a percent sign.

$ ssh carla@fe80::ea9a:8fff:fe67:190d%eth0
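If you’re not sure which interface name to append after the percent sign, list them first (interface names like eth0 vary from system to system):

```shell
# List interface names, so you know which zone ID (e.g., %eth0)
# to append to a link-local address
ip -o link show | awk -F': ' '{print $2}'

# Show the link-local IPv6 addresses themselves
ip -6 addr show scope link
```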

scp is weird. For link local addresses, you have to specify the network interface with the percent sign and enclose the address in square brackets (escape or quote the brackets if your shell tries to expand them):

$ scp filename [fe80::ea9a:8fff:fe67:190d%eth0]:
carla@fe80::ea9a:8fff:fe67:190d's password:

You don’t need the interface name for global unicast addresses, but you still need the brackets:

$ scp filename [2001:db8::2]:
carla@2001:db8::2's password: 

This example logs into a different user account on the remote host, specifies the remote directory to copy the file into, and changes the filename:

scp filename userfoo@[fe80::ea9a:8fff:fe67:190d%eth0]:/home/userfoo/files/filename_2


rsync requires enclosing the remote IPv6 address in both quotes and square brackets. Global unicast addresses do not need the interface name:

$ rsync -av /home/carla/files/ 'carla@[2001:db8::2]':/home/carla/stuff
carla@2001:db8::2's password: 
sending incremental file list

sent 100 bytes  received 12 bytes  13.18 bytes/sec
total size is 6,704  speedup is 59.86

Link local addresses must include the interface name:

$ rsync -av /home/carla/files/ 'carla@[fe80::ea9a:8fff:fe67:190d%eth0]':/home/carla/stuff

As always, remember that the trailing slash on your source directory, for example /home/carla/files/, means that only the contents of the directory are copied. Omitting the trailing slash copies the directory and its contents. Trailing slashes do not matter on your target directory.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Data Center Architecture: Converged, HCI, and Hyperscale

A comparison of three approaches to enterprise infrastructure.

If you are planning an infrastructure refresh or designing a greenfield data center from scratch, the hype around converged infrastructure, hyperconverged infrastructure (HCI) and hyperscale might have you scratching your head. In this blog, I’ll compare and contrast the three approaches and consider scenarios where one infrastructure architecture would be a better fit than the others.

Converged infrastructure

Converged infrastructure (CI) incorporates compute, storage and networking in a pre-packaged, turnkey solution. The primary driver behind convergence was server virtualization: expanding the flexibility of server virtualization to storage and network components. With CI, administrators could use automation and management tools to control the core components of the data center. This allowed for a single admin to provision, de-provision and make any compute, storage or networking changes on the fly.

Converged infrastructure platforms use the same silo-centric infrastructure components of traditional data centers. They’re simply pre-architected and pre-configured by the manufacturers. The glue that unifies the components is specialized management software. One of the earliest and most popular CI examples is Virtual Computing Environment (VCE). This was a joint venture by Cisco Systems, EMC, and VMware that developed and sold various sized converged infrastructure solutions known as Vblock. Today, Vblock systems are sold by the combined Dell-EMC entity, Dell Technologies.

CI solutions are a great choice for infrastructure pros who want an all-in-one solution that’s easy to buy and pre-packaged direct from the factory. CI is also easier from a support standpoint. If you maintain support contracts on your CI system, the manufacturer will assist in troubleshooting end-to-end. That said, many vendors are shifting their focus towards hyperconverged infrastructures.

Hyperconverged infrastructure

HCI builds on CI. In addition to combining the three core components of a data center together, hyperconverged infrastructure leverages software to integrate compute, network and storage into a single unit as opposed to using separate components. This architecture design offers performance advantages and eliminates a great deal of physical cabling compared to silo- and CI-based data centers.  

Hyperconverged solutions also provide far more capability in terms of unified management and orchestration. The mobility of applications and data is greatly improved, as is the setup and management of functions like backups, snapshots, and restores. These operational efficiencies make HCI architectures more attractive from a cost-benefit analysis when compared to traditional converged infrastructure solutions.

In the end, a hyperconverged solution is all about simplicity and speed. A great use case for HCI would be a new virtual desktop infrastructure (VDI) deployment. Using the orchestration and automation tools available, you have the ideal platform to easily roll out hundreds or thousands of virtual desktops.


Hyperscale

The key attribute of hyperscale computing is the de-coupling of compute, network and storage software from the hardware. That’s right, while HCI combined everything into a single chassis, hyperscale decouples the components.

This approach, as practiced by hyperscale companies like Facebook and Google, provides more flexibility than hyperconverged solutions, which tend to grow in a linear fashion. For example, if you need more storage on your HCI system, you typically must add a node blade that includes both compute and built-in storage. Some hyperconverged solutions are better than others in this regard, but most fall prey to linear scaling problems if your workloads don’t scale in step.

Another benefit of hyperscale architectures is that you can manage both virtual and bare metal servers on a single system. This is ideal for databases that tend to operate in a non-virtualized manner. Hyperscale is most useful in situations where you need to scale-out one resource independently from the others. A good example is IoT because it requires a lot of data storage, but not much compute. A hyperscale architecture also helps in situations where it’s beneficial to continue operating bare metal compute resources, yet manage storage resources in elastic pools.


An Introduction to the ss Command

Linux includes a fairly massive array of tools available to meet almost every need. From development to security to productivity to administration…if you have to get it done, Linux is there to serve. One of the many tools that admins have frequently turned to is netstat. However, the netstat command has been deprecated in favor of the faster, more human-readable ss command.

The ss command is a tool used to dump socket statistics and displays information in similar fashion (although simpler and faster) to netstat. The ss command can also display even more TCP and state information than most other tools. Because ss is the new netstat, we’re going to take a look at how to make use of this tool so that you can more easily gain information about your Linux machine and what’s going on with network connections.

The ss command-line utility can display stats for the likes of PACKET, TCP, UDP, DCCP, RAW, and Unix domain sockets. The replacement for netstat is easier to use (compare the man pages to get an immediate idea of how much easier ss is). With ss, you get very detailed information about how your Linux machine is communicating with other machines, networks, and services; details about network connections, networking protocol statistics, and Linux socket connections. With this information in hand, you can much more easily troubleshoot various networking issues.

Let’s get up to speed with ss, so you can consider it a new tool in your administrator kit.

Basic usage

The ss command works like any command on the Linux platform: issue the command executable and follow it with any combination of the available options. If you glance at the ss man page (issue the command man ss), you will notice there aren’t nearly as many options as you’ll find for the netstat command; however, that doesn’t equate to a lack of functionality. In fact, ss is quite powerful.

If you issue the ss command without any arguments or options, it will return a complete list of TCP sockets with established connections (Figure 1).

Because the ss command (without options) will display a significant amount of information (all tcp, udp, and unix socket connection details), you could also send that command output to a file for later viewing like so:

ss > ss_output
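Before drilling into individual sockets, it can also help to get the big picture. The -s flag prints summary totals per protocol (a quick sketch, assuming the iproute2 version of ss):

```shell
# Summary statistics: socket counts broken down by protocol and state
ss -s
```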

Of course, a very basic command isn’t all that useful for every situation. What if we only want to view current listening sockets? Simple, tack on the -l option like so:

ss -l

The above command will only output a list of current listening sockets.

To make it a bit more specific, think of it this way: ss can be used to view TCP connections by using the -t option, UDP connections by using the -u option, or UNIX connections by using the -x option; so ss -t, ss -u, or ss -x. Running any of those commands will list out plenty of information for you to comb through (Figure 2).

By default, using either the -t, the -u, or the -x options alone will only list out those connections that are established (or connected). If we want to pick up connections that are listening, we have to add the -a option like:

ss -t -a 

The output of the above command will include all TCP sockets (Figure 3).

In the above example, you can see that TCP connections (in varying states) are being made from the IP address of my machine, from various ports, to various IP addresses, through various ports. Unlike the netstat version of this command, ss doesn’t display the PID and command name responsible for these connections by default. Even so, you still have plenty of information to begin troubleshooting. Should any of those ports or URLs be suspect, you now know what IP address/port is making the connection. With this, you now have the information that can help you in the early stages of troubleshooting an issue.
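If you do want the owning process alongside each socket (the one piece of information netstat showed by default), add the -p option; you may need root privileges to see processes you don’t own:

```shell
# Show TCP sockets along with the owning process name and PID
ss -tp

# Add -n to skip service-name resolution and -a to include listeners
ss -tnap
```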

Filtering ss with TCP States

One very handy option available to the ss command is the ability to filter using TCP states (the “life stages” of a connection). With states, you can more easily filter your ss command results. The ss tool can be used in conjunction with all standard TCP states:

  • established

  • syn-sent

  • syn-recv

  • fin-wait-1

  • fin-wait-2

  • time-wait

  • closed

  • close-wait

  • last-ack

  • listening

  • closing

Other available state identifiers ss recognizes are:

  • all (all of the above states)

  • connected (all the states with the exception of listen and closed)

  • synchronized (all of the connected states with the exception of syn-sent)

  • bucket (states that are maintained as minisockets, for example time-wait and syn-recv)

  • big (opposite of the bucket states)

The syntax for working with states is simple.

For TCP over IPv4:

ss -4 state FILTER

For TCP over IPv6:

ss -6 state FILTER

Where FILTER is the name of the state you want to use.

Say you want to view all listening IPv6 sockets on your machine. For this, the command would be:

ss -6 state listening

The results of that command would look similar to Figure 4.
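State filters can also be combined with port expressions, which is where ss filtering really starts to pay off. A sketch (port 443 here is just an example):

```shell
# Established TCP connections involving HTTPS (port 443) on either end
ss -t state established '( dport = :443 or sport = :443 )'

# Count sockets lingering in time-wait
ss -t state time-wait | wc -l
```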

Show connected sockets from specific address

One handy task you can assign to ss is to have it report connections made from another IP address. Say you want to find out if (and how) a machine at a given IP address has connected to your server. For this, you could issue the command, substituting the address in question for ADDRESS:

ss dst ADDRESS

The resulting information (Figure 5) will inform you of the Netid, the state, the local IP:port, and the remote IP:port of the socket.
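The dst filter also accepts CIDR blocks and address:port pairs. For example (203.0.113.0/24 is a documentation range standing in for your own addresses):

```shell
# Any socket whose remote end falls within a given subnet
ss dst 203.0.113.0/24

# A single remote host and port
ss dst 203.0.113.5:22
```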

Make it work for you

The ss command can do quite a bit to help you troubleshoot issues with your Linux server or your network. It would behoove you to take the time to read through the ss man page (issue the command man ss). But, at this point, you should at least have a fundamental understanding of how to make use of this must-know command.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.