Monthly Archives: August 2017

Is the Network Part of Your Data Backup Strategy?


Make sure to include the network in your data protection planning.

A data backup strategy is the backbone of any enterprise IT shop. Businesses need to protect their data from application or server failures, as well as from improper data manipulation, deletion, or destruction through accidental or nefarious means such as ransomware.

In planning their backup strategy, companies often overlook the network in the overall design. Distributed and server-to-cloud backups rely on the underlying network to move data from point A to point B in a timely and secure manner. Therefore, it makes sense to treat the network as an integral part of any data backup and recovery strategy. I’ll discuss four ways to do that.

Network redundancy

The first and most obvious step is to verify that your network maintains a proper level of end-to-end resiliency. Whether you are talking about local backups, off-site backups or backups to cloud service providers, the network should be designed so that there are no single points of failure that could render a data backup or restore useless. A single point of failure is a device or link whose failure brings down all or a large portion of the network.

Also, consider how automated your network failover mechanisms are. Traditional network redundancy techniques include dynamic routing protocols, HSRP/VRRP, VPN and WAN carrier diversity. More recently, SDN, SD-WAN and multi-cloud management are beginning to appear in forward-thinking data backup roadmaps.
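To make this concrete, first-hop gateway redundancy on a Linux router can be provided with VRRP via keepalived. The snippet below is a minimal sketch, not a production configuration; the interface name, virtual router ID and virtual IP address are placeholder values:

# /etc/keepalived/keepalived.conf (interface and addresses are examples only)
vrrp_instance BACKUP_PATH_GW {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.0.2.1/24
    }
}

A second router runs the same instance with state BACKUP and a lower priority, so the gateway address that backup traffic depends on survives the loss of either device.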

Network baselining

Data backups have the potential to consume a tremendous amount of throughput. The major concern is that certain links along the way will become congested to the point that they negatively impact other applications and users on the network. Avoiding network congestion by using a separate, purpose-built backup network is cost-prohibitive for most organizations, so most enterprises perform backups using the same network hardware and links as their production traffic.

Consequently, a key step in any backup strategy is to properly baseline traffic across the network to determine how backups will impact link utilization. Understanding the data flows and throughput requirements of backups, along with utilization baselines gathered over time, allows engineers to design a backup strategy that will not impact daily operations. In some cases, this means scheduling backups outside of network peak hours. In other situations, it will require upgrading the throughput capacity of certain network links along a backup path.

Once a backup plan is in place, it’s necessary to continue to monitor link utilization using NetFlow and SNMP tools to ensure that bottlenecks don’t creep up on you over time.
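As a simple example, per-link utilization can be sampled directly from a switch or router with the Net-SNMP command-line tools; the hostname, community string and interface index below are assumptions you would replace with your own values:

# poll the 64-bit octet counters for interface index 3 (host and community are placeholders)
snmpget -v2c -c public core-sw1 IF-MIB::ifHCInOctets.3 IF-MIB::ifHCOutOctets.3

Sampling these counters at a fixed interval and taking the difference yields per-link throughput, which can be compared against link capacity before, during and after backup windows.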

QoS

Another way to mitigate the impact backups can have on shared network links is to leverage quality of service (QoS) techniques. Using QoS, we can identify, mark and ultimately prioritize traffic flows as they traverse a network. Large companies with highly complex networks and backup strategies often opt to mark data backups at a lower class so that more critical, time-sensitive applications, such as voice and streaming video, take priority and traverse the network freely when links become congested.

Backup packets are queued or dropped according to policy and will automatically transmit when the congestion subsides. This allows for round-the-clock backups without the need for strict off-hours backup windows and alleviates concern that the backup process will impair production traffic that shares the same network links.
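Where backups originate from a Linux host, a rough approximation of this behavior is possible with the tc traffic-control utility, although in most designs the classification and queuing happen on the network devices themselves. The following is a minimal sketch; the egress interface, link speed and backup server address are assumptions:

# deprioritize traffic to a backup server at 192.0.2.50 on a 1Gbit link (values are examples)
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 900mbit ceil 1000mbit prio 0
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit ceil 1000mbit prio 7
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 192.0.2.50/32 flowid 1:20

Traffic in class 1:20 is guaranteed only a small share while the link is busy, but it can borrow up to the full ceiling whenever production traffic quiets down, mirroring the QoS behavior described above.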

Data security

No conversation about backups is complete without discussing data security. From a network perspective, this includes a plan for extending internal security policies and tools out to the WAN and cloud where off-site backups will eventually reside.

Beyond these data protection basics, network and security administrators must also battle shadow IT, which has become a serious problem for the safety and recoverability of corporate data. Backups are only useful when they capture all critical data, and shadow IT prevents that from happening because data is increasingly stored in unauthorized cloud applications.

Tools such as NetFlow and cloud access security broker (CASB) platforms can help track down and curb the use of shadow IT. A CASB can monitor traffic destined for the Internet and control which cloud services employees can use.
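As an illustration, if flow records are already exported to a collector running the nfdump tools, one quick way to surface candidate shadow IT services is to rank outbound HTTPS destinations by volume; the flow directory below is an assumption:

# top 20 HTTPS destinations by bytes from stored flow data (path is an example)
nfdump -R /var/cache/nfdump -s dstip/bytes -n 20 'proto tcp and dst port 443'

The resulting destinations can then be checked against the list of sanctioned cloud services and, where appropriate, blocked or brought under the CASB's control.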




GNOME Is Celebrating Its 20th Birthday » Linux Magazine


GNOME was started by Miguel de Icaza and Federico Mena Quintero on August 15, 1997. The primary goal of the project was to create a fully open source alternative to KDE, which was built on the Qt widget toolkit, a toolkit that was under a non-free license at the time.

Since its initial release in 1999, GNOME has seen 33 stable releases to date. While much of Linux caters to power users, developers and sysadmins who prefer the CLI, GNOME focuses on ease of use. No wonder Ubuntu, a distribution that targeted PC users, picked GNOME as its default desktop environment.

GNOME has made significant progress in the Linux desktop space with the 3.x family. The project has built a distro-agnostic software center that allows users of any distribution not only to install and update applications, but also to update the distribution itself.

GNOME also brought the ability to access Google Drive from the Linux desktop, a feature that’s not officially supported by Google.

No wonder even the creator of Linux, Linus Torvalds, runs GNOME as his desktop of choice.

GNOME used to be the default desktop environment for Ubuntu before Canonical introduced its own Unity shell, a decision that cost GNOME millions of users. Recently, however, Canonical decided to step back from its desktop ambitions and focus on the enterprise; it ditched Unity and went back to GNOME. That means GNOME will return to millions of Ubuntu desktops.

Happy 20th Birthday, GNOME.




Hyperconvergence Market in Flux


When Cisco announced Monday that it was buying hyperconvergence software startup Springpath, it did what many industry observers had been expecting for more than a year. In March 2016, Cisco unveiled its HyperFlex hyperconverged infrastructure system on its UCS platform in partnership with Springpath. The networking giant also made a significant investment in the startup.

“Cisco is just wrapping up what it started a year and a half ago,” Keith Townsend, principal at The CTO Advisor, said in an interview. “There’s nothing net new.”

The $320 million Cisco-Springpath deal culminates Cisco’s entry into the hyperconvergence market, a space that a few years ago was dominated by startups such as SimpliVity and Nutanix. Earlier this year, SimpliVity was acquired by Hewlett Packard Enterprise. Nutanix, which went public last fall, remains a top player, and there are other startups such as Pivot3 and Scale Computing, but they face stiff competition from established players.

With HPE and Cisco now offering viable hyperconverged infrastructure, along with industry giant Dell Technologies, the technology is starting to become more of a feature than a standalone product, Townsend said. The acquisitions filled out HPE’s and Cisco’s respective portfolios and gave enterprises less reason to jump ship to another vendor, he added.

By all accounts, the hyperconverged market is hot. According to IDC, sales of hyperconverged systems grew nearly 65% year over year during the first quarter of 2017, generating $665 million in sales. Transparency Market Research expects the global HCI market to reach nearly $31 billion by 2025. Hyperconverged infrastructure leverages software to integrate compute, storage, and networking in a single appliance on commodity hardware.

But hyperconverged infrastructure players are all clamoring for the hybrid cloud space, which has yet to settle on a solution, said Camberley Bates, managing partner and analyst at Evaluator Group.

“That hybrid cloud environment has yet to figure out a standard,” she told me in an interview. “There are a lot of options enterprises are looking at. There’s been a lot of starts and stops.”

Enterprises are considering everything from converged systems (HCI’s predecessor) to building an environment with scale-out SAN storage, Bates said. “There’s no one architecture that’s winning out in that [hybrid cloud] space.”

Hyperconvergence is well suited for virtual desktop infrastructure (VDI), which has been its top use case so far, Bates said. Remote and branch offices and selected applications are other use cases. The technology is ideal for midmarket companies that don’t have a lot of IT staff, providing them with great agility and simplicity, she said.

Her firm sees the hyperconverged infrastructure market splitting into two types of systems: those that can scale and manage large environments and those that have difficulty doing that. The former type may wind up being more of a converged, scale-out architecture along the lines of what NetApp is expected to release with a SolidFire-based system later this year, she said.

Townsend said enterprises should assess hyperconverged infrastructure systems like any other platform. Enterprises that are comfortable with niche players should consider them, but others can easily find a product from one of their existing vendors. “You can get a respectable HCI solution from one of your large vendors that integrates with your existing purchasing strategy,” he said.

With so many choices when it comes to hybrid cloud, Bates said enterprises should start by defining their requirements. “Define what you’re trying to do, then build requirements before you look at technology solutions. There are lots of ways to address this.”

 




Chakra Linux: Its Own Beast, Its Own Beauty | Linux.com


There are so many Linux distributions available—so many, in fact, that it can become a bit of a challenge to find the one right for you. After you’ve looked at them enough, it seems the variations tend to blur together, such that one flavor of Linux is only a slight shift away from another.

Perhaps your distribution of choice has a sweet-looking desktop, but it might be standard Ubuntu underneath. Or, maybe you’ve found that a distro is using the same GNOME as everyone else, with the slightest variation under the hood. That’s how it goes when selecting a Linux distribution. The good news is that, even with that familiarity, there are some truly brilliant distributions available. Some of them advance the desktop interface well beyond the standard, while others go out of their way to be familiar.

Chakra Linux is a combination of the above descriptions. It started out as a variation on Arch Linux under the name KDEmod, a “lightified and modular” version of the KDE desktop built exclusively for Arch Linux. This take on KDE offered a significant performance increase and better customization than the standard KDE installation, and it quickly gained a following.

After a while, however, it was determined that the LiveCD Project would be a much better fit, and Chakra was born. Since then, the environment has slowly morphed into its own beast, a unique merging of ideologies and designs that delivers a solid and beautiful experience.

Chakra’s main vision is to provide a pure KDE/Qt desktop, with a nod to simplicity and transparency. Of course, simplicity is in the eye of the beholder. Even though KDE makes for an incredibly user-friendly environment, a new-to-Linux user will find themselves a bit confused when it comes to certain tasks. Let’s dive in and see what Chakra’s all about.

At first blush

Chakra is a beautiful desktop—if you’re okay with flat themes (Figure 1).

Check out the Chakra menu, and you’ll find just about every piece of software you need. The one caveat to this is the choice of office suite. Instead of the more popular LibreOffice, Chakra installs the KDE-specific Calligra. As long as you don’t need to interact with MS Office, that’s not necessarily a bad thing. If, however, you have to open any MS Office document (later than Office 2007), you’re out of luck. As much as I respect what Calligra is doing, without the ability to interact with the likes of .docx, it simply will not do (especially in a business environment).

Of course, that’s not really too much of an issue, as you can open up the Octopi software installer and install LibreOffice. Unfortunately, the version available is a bit out of date: the latest release of LibreOffice is currently 5.4.0.3, while the version available to Chakra (after an initial update) is 5.3.5-1. There is no newer build of the suite available for the platform, so users will have to wait until the developers make it available as an update.
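If you would rather skip the GUI, the same installation can be done with pacman directly; the package name below is a guess, so check Chakra’s repositories, as it may differ:

sudo pacman -S libreoffice

Either way, Octopi is simply a graphical front end to pacman, so the end result is the same.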

Speaking of which…

Another issue new users will find with Chakra is the update process. With most Linux distributions, you can head to the desktop menu and find an entry for updating software. Not so with Chakra. Any updating must be handled through the command line. So, to upgrade your system, you must open a terminal window and issue the command:

sudo pacman -Syu

After running the command, you must okay every single one of the updates before the process will continue (note that the first run will take considerable time).

Speaking of which…

A unique aspect of Chakra is that it is a half-rolling release distribution: it works with a stable core and rolling applications on top. With this model, you will always have the latest desktop software running on top of a not-quite-latest core, which forms a solid foundation on which to run that software. For instance, after an initial update, Chakra was running KDE Plasma 5.10.4-1 (the latest release) on kernel 4.8.6-1. The mainline kernel is currently at 4.13-rc4, so Chakra’s kernel is certainly a stable, well-tested one.

Fortunately, you do not have to install applications from the command line (you certainly can, if you choose). If you prefer a GUI for installing software, Chakra includes the Octopi front end for that particular task (Figure 2).

It is very important that you run the sudo pacman -Syu command before first using Octopi—otherwise, you’ll have out-of-date packages available for installation.

Some nice choices

Looking beyond the Calligra-versus-LibreOffice issue, some nice choices have been made for the default software. Take, for instance, the selection of Clementine as the default music player. This happens to be my favorite of all the available players on the market. It offers an amazing array of features, with a great user interface. Good choice. Another solid choice is QupZilla, a lightweight web browser that uses the Qt WebEngine. This browser is faster to open than Firefox, offers more features than Midori, and renders as well as Chrome.

Another nice touch is the addition of the Yakuake drop-down terminal. I like a good terminal that is quick to open and quick to get out of the way; Yakuake does this with ease.

Who should be using Chakra

Chakra is a unique distribution that offers an interface and stability that beg for new users to come play, but with just enough added complexity to challenge them to learn a bit more (or make them slightly hesitant). On the other hand, Chakra delivers serious performance and plenty of tools (such as package changelogs, the Chakra Bug Tracking System, Qt Designer, the Vim text editor, and more) that will make more hard-core users quite happy.

If you’re a new user who doesn’t mind working at the command line now and then, Chakra will serve you well. If you’re already well versed in Linux, Chakra will give you just enough to keep you curious and happy, while remaining stable underneath.

Manipulate IPv6 Addresses with ipv6calc | Linux.com


Last week, you may recall, we looked at calculating network addresses with ipcalc. Now, dear friends, it is my pleasure to introduce you to ipv6calc, the excellent IPv6 address manipulator and query tool by Dr. Peter Bieringer. ipv6calc is a little thing; on Ubuntu /usr/bin/ipv6calc is about 2MB, yet it packs in a ton of functionality. 

Here are some of ipv6calc’s features:

  • IPv4 assignment databases (ARIN, IANA, APNIC, etc.)
  • IPv6 assignment databases
  • Address and logfile anonymization
  • Compression and expansion of addresses
  • Query addresses for geolocation, registrar, address type
  • Multiple input and output formats

The package includes multiple commands; we’re looking at the ipv6calc command in this article. It also includes ipv6calcweb and mod_ipv6calc for websites, the ipv6logconv log converter, and the ipv6logstats log statistics generator.

If your Linux distribution’s package isn’t compiled with all options, it’s easy to build ipv6calc yourself by following the instructions on The ipv6calc Homepage.

One useful feature it does not include is a subnet calculator. We’ll cover this in a future article.

Run ipv6calc -vv to see a complete features listing. Refer to man ipv6calc and The ipv6calc Homepage to learn all the command options.

Compression and Decompression

Remember how we can compress those long IPv6 addresses by condensing the zeroes? ipv6calc does this for you:

$ ipv6calc --addr2compaddr 2001:0db8:0000:0000:0000:0000:0000:0001
2001:db8::1

You might recall from Practical Networking for Linux Admins: Real IPv6 that the 2001:0DB8::/32 block is reserved for documentation and testing. You can uncompress IPv6 addresses:

$ ipv6calc --addr2uncompaddr 2001:db8::1
2001:db8:0:0:0:0:0:1

Uncompress it completely with the --addr2fulluncompaddr option:

$ ipv6calc --addr2fulluncompaddr 2001:db8::1
2001:0db8:0000:0000:0000:0000:0000:0001

Anonymizing Addresses

Anonymize any address this way:

$ ipv6calc --action anonymize 2001:db8::1
No input type specified, try autodetection...found type: ipv6addr
No output type specified, try autodetection...found type: ipv6addr
2001:db8::9:a929:4291:c02d:5d15

If you get tired of “no input type” messages, you can specify the input and output types:

$ ipv6calc --in ipv6addr --out ipv6addr  --action anonymize 2001:db8::1
2001:db8::9:a929:4291:c02d:5d15

Or use the “quiet” option to suppress the messages:

$ ipv6calc -q --action anonymize 2001:db8::1
2001:db8::9:a929:4291:c02d:5d15

Getting Information

What with all the different address classes and sheer size of IPv6 addresses, it’s nice to have ipv6calc tell you all about a particular address:

$ ipv6calc -qi 2001:db8::1
Address type: unicast, global-unicast, productive, iid, iid-local
Registry for address: reserved(RFC3849#4)
Address type has SLA: 0000
Interface identifier: 0000:0000:0000:0001
Interface identifier is probably manual set

$ ipv6calc -qi fe80::b07:5c7e:2e69:9d41
Address type: unicast, link-local, iid, iid-global, iid-eui64
Registry for address: reserved(RFC4291#2.5.6)
Interface identifier: 0b07:5c7e:2e69:9d41
EUI-64 identifier: 09:07:5c:7e:2e:69:9d:41
EUI-64 identifier is a global unique one

One of these days, I must write up a glossary of all of these crazy terms, like EUI-64 identifier. EUI stands for Extended Unique Identifier, defined in RFC 2373. That still doesn’t tell us much, does it? An EUI-64 interface identifier is derived from an interface’s MAC address and is used in link-local and stateless auto-configuration addresses. Note how ipv6calc helpfully provides the relevant RFCs.
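To make that concrete: an EUI-64 interface identifier is built from a 48-bit MAC address by inserting ff:fe between its two halves and flipping the universal/local bit (the second-lowest bit of the first octet), so the example MAC address 00:11:22:33:44:55 becomes the interface identifier 0211:22ff:fe33:4455. If your ipv6calc build includes MAC input support (check ipv6calc -vv), it can do the conversion for you; the exact output notation may vary by version:

$ ipv6calc -q --in mac --out eui64 00:11:22:33:44:55
0211:22ff:fe33:4455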

This example queries Google’s public DNS IPv6 address, showing information from the ARIN registry:

$ ipv6calc -qi 2001:4860:4860::8844
Address type: unicast, global-unicast, productive, iid, iid-local
Country Code: US
Registry for address: ARIN
Address type has SLA: 0000
Interface identifier: 0000:0000:0000:8844
Interface identifier is probably manual set
GeoIP country name and code: United States (US)
GeoIP database: GEO-106FREE 20160408 Bu
Built-In database: IPv6-REG:AFRINIC/20150904 APNIC/20150904 ARIN/20150904 
IANA/20150810 LACNIC/20150904 RIPENCC/20150904

You can filter these queries in various ways:

$ ipv6calc -qi --mrmt GEOIP 2001:4860:4860::8844
GEOIP_COUNTRY_SHORT=US
GEOIP_COUNTRY_LONG=United States
GEOIP_DATABASE_INFO=GEO-106FREE 20160408 Bu

$ ipv6calc -qi --mrmt  IPV6_COUNTRYCODE 2001:4860:4860::8844
IPV6_COUNTRYCODE=US

Run ipv6calc -vh to see a list of feature tokens and which ones are installed.

DNS PTR Records

Now we’ll use Red Hat in our examples. To find the IPv6 address of a site, you can use good old dig to query the AAAA records:

$ dig AAAA www.redhat.com
[...]
;; ANSWER SECTION:

e3396.dscx.akamaiedge.net. 20   IN      AAAA    2600:1409:a:3a2::d44
e3396.dscx.akamaiedge.net. 20   IN      AAAA    2600:1409:a:397::d44

And now you can run a reverse lookup:

$ dig -x 2600:1409:a:3a2::d44 +short
g2600-1409-r-4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.deploy.static.akamaitechnologies.com.
g2600-1409-000a-r-4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.deploy.static.akamaitechnologies.com.

As you can see, DNS is quite complex these days thanks to cloud technologies, load balancing, and all those newfangled tools that datacenters use.

There are many ways to create those crazy long PTR strings for your own DNS records. ipv6calc will do it for you:

$ ipv6calc -q --out revnibbles.arpa 2600:1409:a:3a2::d44
4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.9.0.4.1.0.0.6.2.ip6.arpa.

If you want to dig deeper into IPv6, try reading the RFCs. Yes, they can be dry, but they are authoritative. I recommend starting with RFC 8200, Internet Protocol, Version 6 (IPv6) Specification.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.