Monthly Archives: January 2018

Subgraph: This Security-Focused Distro Is Malware’s Worst Nightmare | Linux.com


By design, Linux is a very secure operating system. In fact, after 20 years of usage, I have personally experienced only one instance where a Linux machine was compromised. That instance was a server hit with a rootkit. On the desktop side, I’ve yet to experience an attack of any kind.
That doesn’t mean exploits and attacks on the Linux platform don’t exist. They do. One need only consider Heartbleed and WannaCry to remember that Linux is not invincible.

See: Linux Malware on the Rise: A Look at Recent Threats

With the popularity of the Linux desktop on the rise, you can be sure desktop malware and ransomware attacks will also be on the increase. That means Linux users, who have for years ignored such threats, should begin considering that their platform of choice could get hit.

What do you do?

If you’re a Linux desktop user, you might think about adopting a distribution like Subgraph. Subgraph is a desktop computing and communication platform designed to be highly resistant to network-borne exploits and malware/ransomware attacks. But unlike other platforms that might attempt to achieve such lofty goals, Subgraph makes this all possible while retaining a high level of user-friendliness. Thanks to the GNOME desktop, Subgraph is incredibly easy to use.

What Subgraph does differently

It all begins at the core of the OS. Subgraph ships with a kernel built with grsecurity/PaX (a system-wide patch for exploit and privilege escalation mitigation), and RAP (designed to prevent code-reuse attacks on the kernel to mitigate against contemporary exploitation techniques). For more information about the Subgraph kernel, check out the Subgraph kernel configs on GitHub.
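If you’re curious whether a given kernel carries these hardening options, one rough check (a sketch, assuming a grsecurity-patched kernel that exposes the usual CONFIG_PAX_* and CONFIG_GRKERNSEC_* option names) is to grep its build config:

# look for PaX/grsecurity options in the installed kernel config
$ grep -E 'CONFIG_(PAX|GRKERNSEC)' /boot/config-$(uname -r)

# or, on kernels built with IKCONFIG, read the config from procfs
$ zgrep -E 'CONFIG_(PAX|GRKERNSEC)' /proc/config.gz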

Subgraph also runs exposed and vulnerable applications within unique sandbox environments, collectively known as Oz. Oz combines several isolation technologies and is designed to isolate applications from one another and grant resources only to the applications that need them.

Other security features include:

  • Most of the custom Subgraph code is written in the memory-safe language, Golang.

  • AppArmor profiles that cover many system utilities and applications.

  • Security event monitor.

  • Desktop notifications (coming soon).

  • Roflcoptor Tor control port filter service.

Installing Subgraph

It is important to remember that Subgraph is in alpha release, so you shouldn’t consider this platform as a daily driver. Because it’s in alpha, there are some interesting hiccups regarding the installation. The first oddity I experienced is that Subgraph cannot be installed as a VirtualBox virtual machine. No matter what you do, it will not work. This is a known bug and, hopefully, the developers will get it worked out.

The second issue is that installing Subgraph by way of a USB device is very tricky. You cannot use tools like UNetbootin or MultiBootUSB to create a bootable flash drive. You can use GNOME Disks to create a USB drive, but your best bet is the dd command. Download the ISO image, insert your USB drive into the computer, open a terminal window, and locate the name of the newly inserted USB device (the command lsblk works fine for this). Finally, write the ISO image to the USB device with the command:

sudo dd bs=4M if=subgraph-os-alpha_XXX.iso of=/dev/sdX status=progress && sync

where XXX is the Subgraph release number and sdX is the name of your USB device.
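If you’re not sure which device name to use, compare the output of lsblk before and after inserting the drive. In the sketch below the stick shows up as /dev/sdb, but that name is purely illustrative; double-check it on your machine before running dd, because writing to the wrong device destroys its contents:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0 464.8G  0 part /
└─sda2   8:2    0     1G  0 part [SWAP]
sdb      8:16   1  14.9G  0 disk
└─sdb1   8:17   1  14.9G  0 part /media/user/USBSTICK

# unmount any auto-mounted partitions on the stick before writing the image
$ sudo umount /dev/sdb1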

Once the above command completes, you can reboot your machine and install Subgraph. The installation process is fairly straightforward, with a few exceptions. The first is that the installation completely erases the entire drive before it installs. This is a security measure and cannot be avoided. This process takes quite some time (Figure 1), so let it do its thing and go take care of another task.

Next, you must create a passphrase for the encryption of the drive (Figure 2).

This passphrase is used when booting your device. If you lose (or forget) the passphrase, you won’t be able to boot into Subgraph. This passphrase is also the first line of defense against anyone who might try to get to your data, should they steal your device… so choose wisely.

The last difference between Subgraph and most other distributions is that you aren’t given the opportunity to create a username. You do create a user password, which is used for the default user… named user. You can always create a new user (once the OS is installed), either by way of the command line or the GNOME Settings tool.
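Because Subgraph is based on Debian, creating an additional account from a terminal should look roughly like the following sketch (alice is just a placeholder name):

# create a new account interactively (sets the password and home directory)
$ sudo adduser alice

# optionally give the new account sudo rights
$ sudo usermod -aG sudo alice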

Once installed, your Subgraph system will reboot and you’ll be prompted for the disk encryption passphrase. Upon successful authentication, Subgraph will boot and land on the GNOME login screen. Log in with the username user and the password you created during installation.

Usage

There are two important things to remember when using Subgraph. First, as I mentioned earlier, this distribution is in alpha development, so things will go wrong. Second, all applications are run within sandboxes and networking is handled through Tor, so you’re going to experience slower application launches and network connections than you might be used to.

I was surprised to find that Tor Browser (the default, and only, browser) wasn’t installed out of the box. Instead, there’s a launcher on the GNOME Dash that will, upon first launch, download the latest version. That’s all well and good, but the download and install failed on me twice. Had I been working through a regular network connection, this wouldn’t have been such a headache. However, because Subgraph routes its traffic through Tor, my network connection was painfully slow, so the download, verification, and install of Tor Browser (a 26.8 MB package) took about 20 minutes. That, of course, isn’t the fault of Subgraph but of the Tor network to which I was connected. Until Tor Browser was up and running, Subgraph was quite limited in what I could actually do. Eventually, Tor Browser downloaded and all worked as expected.

Application sandboxes

Not every application has to go through the process of downloading a new version upon first launch. In fact, Tor Browser was the only application I encountered that did. When you do open up a new application, it will first start its own sandbox and then open the application in question. Once the application is up and running, you will see a drop-down in the top panel that lists each current application sandbox (Figure 3).

From each application’s sub-menu, you can add files to that particular sandbox or shut down the sandbox. Shutting down the sandbox effectively closes the application, but that is not how you should close the application itself. Instead, close the application as you normally would and then, if you’re done working with it, manually close the sandbox (through the drop-down). If you have, say, LibreOffice open and you close it by way of closing the sandbox, you run the risk of losing information.

Because each application starts up in its own sandbox, applications don’t open as quickly as they would otherwise. This is the tradeoff you make for using Subgraph and sandboxes. For those looking to get the most out of desktop security, this is a worthwhile exchange.

A very promising distribution

For anyone hoping to gain the most security they can on a desktop computer, Subgraph is one seriously promising distribution. Although it does suffer from many an alpha woe, Subgraph looks like it could make some serious waves on the desktop, especially considering how prevalent malware and ransomware have become. Even better, Subgraph could easily become a security-focused desktop distribution that anyone (regardless of competency) could make use of. Once Subgraph is out of alpha, I predict big things from this unique flavor of Linux.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How IT Storage Professionals Can Thrive In 2018


Just a few years ago, it took a much larger employee base to administer enterprise-level IT. Each staffer operated in a silo, managing a variety of areas that included storage. Like a Russian doll, these silos were broken down further into still more specialties. All told, the storage team of a large, global enterprise could number as many as 100 people.

Today, the idea of 100 staffers just to administer storage seems fantastic, as IT staffs have focused more and more on their software and dev environments than on their infrastructure. That old staff size wasn’t bloat, however: each member was considered vital, because the complexity of an enterprise’s storage estate was a major issue; everything was complex, and nothing was intuitive.

But then a revolution happened, introduced in the form of the extensive worldwide economic downturn of 2008-2009. Driven by the collapse of an unstable housing market, every sector of the economy stumbled, and businesses were forced to focus on leveraging technology for IT innovation. This disruption was followed by the AI Big Bang and, over time, a dissolution of traditional roles.

IT professionals suffered, especially within the storage industry. In many enterprises, as much as 50% of the storage workforce was pink-slipped. Despite this, the amount of data we’re administering has skyrocketed. IDC forecasts that by 2025, the global datasphere will grow to 163 ZB, or 163 trillion gigabytes.

IT employment levels eventually stabilized, but according to Computer Economics, organizations are experiencing productivity gains without accompanying significant increases in spending. In other words, IT organizations are doing more with less. Virtualization and automation have been speeding tasks, and the servers themselves are much faster than they once were.

The Bureau of Labor Statistics projects that employment in computer and information technology occupations will grow 13% from 2016 to 2026. IT staffers will nonetheless perform an extensive range of activities, says Gartner. In the next year, beyond managing software and hardware across applications, databases, servers, storage, and networking, IT teams will also be expected to evangelize, consult, broker, coach, and deliver solutions to their organizations.

Hiring managers will therefore increasingly focus on cultivating teams with more versatile skills, including non-IT functions. IT professionals must also be prepared to embrace education and certification initiatives to hone specialized skills that are broad enough to transfer to other platforms and verticals. Training will be the new normal.

The right tool for the job

Storage specialists will need a clear understanding of how systems can meet the needs of their enterprises. As with any hardware, IT admins require the right tool for the right job. They need to remember that a one-size-fits-all option is not a valid solution. Just as an expensive supercar can’t replace a city bus, some systems work better for their specific needs than others.

That means teams shouldn’t just throw money at a problem, but consider variables such as proximity to compute resources, diversity of performance, capital expenditure versus operating expenses and more. In general, storage professionals will need to right-size their solutions so they can scale to their changing needs. As with any purchase, no one wants to waste money on what they don’t need. But they also shouldn’t underestimate their long-term requirements in a manner that eventually hobbles their business. We’ve all heard the stories of enterprises held back by their storage systems.

Fundamentally, however, faster is usually better. Faster systems can provide more in-depth insights while responding to customers almost instantaneously. A system suited to your needs can also boost the performance of your existing applications. IT staffers will need to look for solutions that come with a portfolio of management tools. To improve storage efficiency, look for a solution with data reduction technologies like pattern removal, deduplication, and compression. And faster storage offerings leveraging flash technology have impact beyond the storage environment and associated applications to entire clouds and data centers.

With such tools, enterprise operations can maximize their resources for optimal speed while also reducing infrastructure costs across their compute and storage environments.

Get in tune with modernization

Storage professionals will need to embrace automation. Each storage pro will need to learn it, leverage it and understand its various use cases. In fact, teams should seek out as much automation as their vendor can provide, because their jobs will only continue the shift toward managing capacity with small staffs.

Additionally, IT pros will move to converged infrastructure, which simplifies IT by combining resources into a single, integrated solution. This approach reduces costs while also minimizing compatibility issues among servers, storage systems, and network devices. Converged infrastructure can boost productivity by eliminating large portions of design, deployment, and management. Teams will be up and running faster so they can put their focus elsewhere.

Storage professionals should embrace their new hybrid job descriptions. They’ll likely need to reach beyond their domain skills, certifications, and comfort zones. As their jobs continue to evolve, storage professionals will become hybrid specialists, and the old silos will continue to collapse.

Some desired job skills are already evident, such as a working knowledge of the cloud. Others may be less so: those with an understanding of the basics of marketing are more likely to thrive as they argue for their fair slice of the budgeting pie.

All told, it’s best to get in tune with modernization. After all, it’s unavoidable and fundamental to the IT workplace.

Eric Herzog is Chief Marketing Officer and Vice President, Worldwide Storage Channels for IBM Storage Systems and Software-Defined Infrastructure. Herzog has over 30 years of experience in product management, marketing, business development, alliances, sales, and channels in the storage software, storage hardware, and storage solutions markets, at both Fortune 500 and start-up storage companies.




Keep Accurate Time on Linux with NTP | Linux.com


How to keep the correct time and keep your computers synchronized without abusing time servers, using NTP and systemd.

What Time is It?

Linux is funky when it comes to telling the time. You might think that the time command tells the time, but it doesn’t, because it is a timer that measures how long a process runs. To get the date and time, run the date command, and to view more than one day, use cal. Timestamps on files are also a source of confusion, as they are typically displayed in two different ways, depending on your distro defaults. This example is from Ubuntu 16.04 LTS:

$ ls -l
drwxrwxr-x 5 carla carla   4096 Mar 27  2017 stuff
drwxrwxr-x 2 carla carla   4096 Dec  8 11:32 things
-rw-rw-r-- 1 carla carla 626052 Nov 21 12:07 fatpdf.pdf
-rw-rw-r-- 1 carla carla   2781 Apr 18  2017 oddlots.txt

Some timestamps display the year and some display the time, which makes ordering your files rather a mess. The GNU default is that files dated within the last six months display the time instead of the year. I suppose there is a reason for this. If your Linux does this, try ls -l --time-style=long-iso to display all the timestamps the same way, with the files still sorted alphabetically. See How to Change the Linux Date and Time: Simple Commands to learn all manner of fascinating ways to manage the time on Linux.
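To illustrate, here is roughly what the earlier listing looks like with the long-iso style (the times on the older files are made up for the example, since the default listing above hides them):

$ ls -l --time-style=long-iso
drwxrwxr-x 5 carla carla   4096 2017-03-27 09:15 stuff
drwxrwxr-x 2 carla carla   4096 2017-12-08 11:32 things
-rw-rw-r-- 1 carla carla 626052 2017-11-21 12:07 fatpdf.pdf
-rw-rw-r-- 1 carla carla   2781 2017-04-18 14:40 oddlots.txt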

Check Current Settings

NTP, the Network Time Protocol, is the old-fashioned way of keeping correct time on computers. ntpd, the NTP daemon, periodically queries a public time server and adjusts your system time as needed. It’s a simple, lightweight protocol that is easy to set up for basic use. Systemd has barged into NTP territory with systemd-timesyncd.service, which acts as a simple NTP client rather than a full server.

Before messing with NTP, let’s take a minute to check that current time settings are correct.

There are (at least) two timekeepers on your system: system time, which is managed by the Linux kernel, and the hardware clock on your motherboard, which is also called the real-time clock (RTC). When you enter your system BIOS, you see the hardware clock time and can change its settings. When you install a new Linux, and in some graphical time managers, you are asked if you want your RTC set to the UTC (Coordinated Universal Time) zone. It should be set to UTC, because all time zone and daylight saving time calculations are based on UTC. Use the hwclock command to check:

$ sudo hwclock --debug
hwclock from util-linux 2.27.1
Using the /dev interface to the clock.
Hardware clock is on UTC time
Assuming hardware clock is kept in UTC time.
Waiting for clock tick...
...got clock tick
Time read from Hardware Clock: 2018/01/22 22:14:31
Hw clock time : 2018/01/22 22:14:31 = 1516659271 seconds since 1969
Time since last adjustment is 1516659271 seconds
Calculated Hardware Clock drift is 0.000000 seconds
Mon 22 Jan 2018 02:14:30 PM PST  .202760 seconds

“Hardware clock is kept in UTC time” confirms that your RTC is on UTC, even though it translates the time to your local time. If it were set to local time it would report “Hardware clock is kept in local time.”

You should have a /etc/adjtime file. If you don’t, sync your RTC to system time:

$ sudo hwclock -w

This should generate the file, and the contents should look like this example:

$ cat /etc/adjtime
0.000000 1516661953 0.000000
1516661953
UTC

The new-fangled systemd way is to run timedatectl, which does not need root permissions:

$ timedatectl
      Local time: Mon 2018-01-22 14:17:51 PST
  Universal time: Mon 2018-01-22 22:17:51 UTC
        RTC time: Mon 2018-01-22 22:17:51
       Time zone: America/Los_Angeles (PST, -0800)
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

“RTC in local TZ: no” confirms that it is on UTC time. What if it is on local time? There are, as always, multiple ways to change it. The easy way is with a nice graphical configuration tool, like YaST in openSUSE. You can use timedatectl:

$ timedatectl set-local-rtc 0

Or edit /etc/adjtime, replacing LOCAL with UTC.

systemd-timesyncd Client

Now I’m tired, and we’ve just gotten to the good part. Who knew timekeeping was so complex? We haven’t even scratched the surface; read man 8 hwclock to get an idea of how time is kept on computers.

Systemd provides the systemd-timesyncd.service client, which queries remote time servers and adjusts your system time. Configure your servers in /etc/systemd/timesyncd.conf. Most Linux distributions provide a default configuration that points to time servers that they maintain, like Fedora:

[Time]
#NTP=
#FallbackNTP=0.fedora.pool.ntp.org  1.fedora.pool.ntp.org

You may enter any other servers you desire, such as your own local NTP server, on the NTP= line in a space-delimited list. (Remember to uncomment this line.) Anything you put on the NTP= line overrides the fallback.
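As a sketch, pointing systemd-timesyncd at a local server (ntp1.example.lan is only a placeholder hostname) might look like this:

[Time]
NTP=ntp1.example.lan
FallbackNTP=0.pool.ntp.org 1.pool.ntp.org

After saving the file, restart the service and confirm that synchronization is enabled:

$ sudo systemctl restart systemd-timesyncd
$ timedatectl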

What if you are not using systemd? Then you need only NTP.

Setting up NTP Server and Client

It is a good practice to set up your own LAN NTP server, so that you are not pummeling public NTP servers from all of your computers. On most Linuxes NTP comes in the ntp package, and most of them provide /etc/ntp.conf to configure the service. Consult NTP Pool Time Servers to find the NTP server pool that is appropriate for your region. Then enter 4-5 servers in your /etc/ntp.conf file, with each server on its own line:

driftfile   /var/ntp.drift
logfile     /var/log/ntp.log
server 0.europe.pool.ntp.org
server 1.europe.pool.ntp.org
server 2.europe.pool.ntp.org
server 3.europe.pool.ntp.org

The driftfile tells ntpd where to store the information it needs to quickly synchronize your system clock with the time servers at startup, and your logs should have their own home instead of getting dumped into the syslog. Use your Linux distribution defaults for these files if it provides them.

Now start the daemon; on most Linuxes this is sudo systemctl start ntpd (on Debian and Ubuntu the service is named ntp). Let it run for a few minutes, then check its status:

$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================
+dev.smatwebdesi 192.168.194.89   3 u   25   64   37   92.456   -6.395  18.530
*chl.la          127.67.113.92    2 u   23   64   37   75.175    8.820   8.230
+four0.fairy.mat 35.73.197.144    2 u   22   64   37  116.272  -10.033  40.151
-195.21.152.161  195.66.241.2     2 u   27   64   37  107.559    1.822  27.346

I have no idea what any of that means, other than your daemon is talking to the remote time servers, and that is what you want. To permanently enable it, run sudo systemctl enable ntpd. If your Linux doesn’t use systemd then it is your homework to figure out how to run ntpd.
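As a starting point for that homework, on a Debian-flavored system still running SysV init, the rough equivalent (assuming the package installs an init script named ntp, as it does on Debian) would be:

# start the daemon now and register it to start at boot under SysV init
$ sudo /etc/init.d/ntp start
$ sudo update-rc.d ntp defaults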

Now you can set up systemd-timesyncd on your other LAN hosts to use your local NTP server, or install NTP on them and enter your local server in their /etc/ntp.conf files.
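On such a client, the relevant part of /etc/ntp.conf can be as simple as the following sketch, where ntp1.example.lan again stands in for your own server’s hostname or IP address:

# minimal client configuration: sync only against the LAN NTP server
driftfile /var/ntp.drift
logfile   /var/log/ntp.log
server ntp1.example.lan iburst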

NTP servers take a beating, and demand continually increases. You can help by running your own public NTP server. Come back next week to learn how.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Red Hat Enterprise Linux 7.5 Beta Out » Linux Magazine


Red Hat has announced the beta of RHEL 7.5, which supports alternative architectures, with variants available for IBM Power, IBM System z, and Arm deployments as well as x86.

Security is certainly the key highlight of this release. Red Hat said in a press release that the RHEL 7.5 beta brings security improvements and usability enhancements for cloud and remotely hosted systems, which can now more securely unlock Network Bound Disk Encryption (NBDE) devices at boot time. This is designed to eliminate the need for manual intervention in an often inconveniently timed boot process.

This release also integrates Red Hat Ansible Automation with OpenSCAP, which enhances the ease of automating the remediation of compliance issues and enables administrators to scale policies across their environment more efficiently.

RHEL 7.5 beta also improves compliance for accurate timestamping and synchronization needs with the addition of failover with bonding interfaces for Precision Time Protocol (PTP) and Network Time Protocol (NTP).

The RHEL 7.5 beta enhances usability for Linux administrators, Windows administrators new to the platform, and developers seeking self-service capabilities alike, with an easier-to-use Cockpit administrator console. The console is designed to simplify the interface for managing storage, networking, containers, services, and more for individual systems.

Sys admins will love the automated creation of a “known-good” bootable snapshot to help speed recovery and rollback after patching, helping IT teams feel more confident that their systems are in working order.

Users can download the beta for testing.




Torvalds is Not Happy with Intel’s Patch, Calls… » Linux Magazine


Intel’s woes are not going away. After releasing the patches for Spectre/Meltdown, the company is asking users to stop installing these patches until a better version is out.

“We recommend that OEMs, cloud service providers, system manufacturers, software vendors, and end users stop deployment of current versions on specific platforms,” Navin Shenoy, executive vice president of Intel wrote in an announcement, “as they may introduce higher than expected reboots and other unpredictable system behavior.”

Red Hat has already reverted the patches that the companies earlier released for the RHEL family of products, after reports of rebooting problems.

Linus Torvalds, the creator of Linux, reserves the harshest words for Intel. “… I really don’t want to see these garbage patches just mindlessly sent around,” wrote Torvalds on the LKML mailing list.

Though not everyone on the mailing list thought it was such a bad thing. One maintainer said, “Certainly it’s a nasty hack, but hey — the world was on fire and in the end we didn’t have to just turn the data centres off and go back to goat farming, so it’s not all bad.”

Another maintainer chimed in and said, “As a hack for existing CPUs, it’s just about tolerable — as long as it can die entirely by the next generation.”

Torvalds didn’t buy either argument. “That’s part of the big problem here. The speculation control cpuid stuff shows that Intel actually seems to plan on doing the right thing for meltdown (the main question being _when_). Which is not a huge surprise, since it should be easy to fix, and it’s a really honking big hole to drive through. Not doing the right thing for meltdown would be completely unacceptable,” said Torvalds. “So the IBRS garbage implies that Intel is _not_ planning on doing the right thing for the indirect branch speculation. Honestly, that’s completely unacceptable too.”


