
Combining Data Center Innovations to Reduce Ecological Footprints | IT Infrastructure Advice, Discussion, Community


The big tech companies are vying for positive coverage of their environmental initiatives. Microsoft just promoted its achievements in renewable energy, which will comprise 60 percent of the company’s electricity usage by the end of the year. Facebook made headlines for a forthcoming 100 percent renewable-powered facility in Los Lunas, New Mexico, while both Apple and Google claim 100 percent carbon neutrality.

These green milestones are important, but renewables represent only one environmental solution for the data center industry. Energy-intensive technologies, such as AI and blockchain, complicate the quest for clean, low-impact electricity generation. Additionally, the sector remains a large consumer of the planet’s other resources, including water and raw materials. Unfortunately, the search for energy efficiency can negatively affect other conservation efforts.

Current State of Play on the Search for Energy Efficiency

A case in point is adiabatic cooling, which evaporates water to ease the burden on HVAC systems. At a time when 2.7 billion people suffer from water scarcity, this approach can lead to intense resource competition, such as in Maharashtra, India, where drinking water had to be imported as thirsty colocation facilities proliferated.

Bolder strategies will be necessary to deliver the compute power, storage capacity, and network connectivity the world demands with fewer inputs of fossil fuels, water, rare earth metals, and other resources. Long range, there is hope for quantum computing, which has the potential to slash energy usage by more than 20 orders of magnitude over conventional technologies. This could cut Google’s annual burn rate, for instance, from gigawatt-hours to the nanowatt-hour range, reducing the need to produce more solar panels, wind turbines, and hydropower stations along the way.

Commercial launches – such as IBM’s Q System One – notwithstanding, the quantum moonshot still lies at least a decade away by most accounts, and the intervening barriers are significant. Quantum calculations remain vulnerable to complex errors, new programming approaches are required, and the nearest-term use cases tend toward high-end modeling, not replacing the standard web server or laptop.

Green Technology Solutions Closer to Earth

Fortunately, there are other technologies nearer at hand and more accessible for the average data center, colocation provider, or even regional office. For example, AI-based technologies are being trained as zombie killers, using machine learning to improve server allocation and power off the 25% of physical servers and 30% of virtual servers that are currently running but doing nothing. Repurposing underutilized IT assets not only helps realize energy savings; it can delay new equipment purchases as well.

 

Then there is liquid cooling, well known from the industry’s mainframe origins. Although many companies won’t be able to redesign facilities a la Facebook’s designs, hardware manufacturers are delivering off-the-shelf liquid-cooled products. Use of rear-door heat exchangers and direct-to-chip cooling can help lower PUE from 1.5 or more down toward 1.1, and immersion cooling can deliver power savings of up to 50 percent. These technologies also enable greater density, which means doing more with less space—a good thing, as land, too, is a natural resource.
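
For a rough sense of what that PUE shift means, assume the standard definition of power usage effectiveness as total facility energy over IT equipment energy:

    \mathrm{PUE} = \frac{E_{\text{facility}}}{E_{\text{IT}}}, \qquad \text{overhead share} = 1 - \frac{1}{\mathrm{PUE}}

At a PUE of 1.5, roughly a third of a facility's electricity goes to cooling and other overhead; at 1.1, that share falls to about 9 percent.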

Consolidation trends will shift more of the environmental burden to the few outfits with pockets deep enough to do the seemingly impossible: sink data centers in the ocean for natural cooling, launch them into space, and “accelerate” workloads with the earliest, sure to be exorbitantly expensive, quantum computers ready for mission critical applications.

What’s Next for the “Green” Data Center

None of today’s available technologies, from AI-driven DCIM systems to advanced load balancers, is a panacea. With blockchain’s intense processing demands and consumers’ insatiable appetite for technology, among other pressures, the IT industry faces numerous forces working against its efforts to shrink resource consumption and carbon emissions.

While we await a breakthrough with the exponential impact of quantum computing, we will have to combine various solutions to drive incremental progress. In some cases, that will mean a return of cold storage to move rarely accessed information off powered storage arrays in favor of tape backups and similar “old school” methods. In others, it will mean allowing energy efficiency and component recyclability to tip the balance during hardware acquisition decisions. And in still others, newer edge computing applications may integrate small, modular pods that work on solar-wind hybrid energy systems.

Hopefully, the craving these dominant tech players display for positive environmental headlines, paired with a profit motive rewarding tiny efficiency gains achieved at hyperscale, will continue to propel advances in green solutions that can one day be implemented industry-wide.




Zombieload, RHEL 8.0, Linux 5.2 & GCC Happenings Dominated May


PHORONIX --

This month on Phoronix there were 316 original news articles and 25 featured/multi-page hardware reviews and benchmark articles. There were plenty of interesting happenings this month: the release of Linux 5.1 and the 5.2 kernel cycle then kicking off, MDS / Zombieload arriving as the latest major Intel CPU vulnerability, GCC 9 seeing its first stable release, Red Hat Enterprise Linux 8.0 finally being christened, and, my personal favorite this month, the Intel Open-Source Technology Summit (OSTS) 2019 event.

Of the 330 original pieces of content written on Phoronix during May, all of which were written by yours truly, here is a look back at the most popular articles in case you missed any of them. May was a very busy month, but June will likely be at least as busy, just ahead of the super exciting July with AMD’s new product launches. There’s also work building around Linux 5.3, development is underway on Phoronix Test Suite 9.0, and much more. And next week, 5 June, brings the 15th birthday of Phoronix.com! That date also marks 11 years since the release of Phoronix Test Suite 1.0. A fun week is ahead, and I’ll try to have out a number of interesting articles to mark the occasion.

If you enjoy the daily content on Phoronix, consider showing your support by joining Phoronix Premium or making a PayPal tip. At the very least, please do not use any ad-blocker when viewing this website, as pay-per-impression ads are the main source of income allowing this site to continue into its 16th year. Thanks for your consideration.

Now the most popular featured articles included:

The Performance Impact Of MDS / Zombieload Plus The Overall Cost Now Of Spectre/Meltdown/L1TF/MDS
The past few days I’ve begun exploring the performance implications of the new Microarchitectural Data Sampling “MDS” vulnerabilities now known more commonly as Zombieload. As I shared in some initial results, there is a real performance hit to these mitigations. In this article are more MDS/Zombieload mitigation benchmarks on multiple systems as well as comparing the overall performance impact of the Meltdown/Spectre/Foreshadow/Zombieload mitigations on various Intel CPUs and also AMD CPUs where relevant.
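
If you want to see where your own machine stands, the kernel reports its verdict through sysfs, and the MDS mitigation can be tuned with boot parameters. A quick sketch, assuming a kernel new enough to carry the MDS patches:

    grep . /sys/devices/system/cpu/vulnerabilities/*   # status of MDS, Spectre, Meltdown, L1TF, etc.

    # Kernel boot parameters controlling the MDS mitigation:
    #   mds=full          enable the mitigation, leave SMT on (the default)
    #   mds=full,nosmt    enable the mitigation and disable Hyper Threading
    #   mds=off           disable the mitigation (for benchmarking/testing only)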

Radeon RX 560/570/580 vs. GeForce GTX 1060/1650/1660 Linux Gaming Performance
If you are looking to soon upgrade your graphics card for Linux gaming — especially with the increasing number of titles running well under Steam Play — but only have a budget of around $200 USD for the graphics card, this comparison is for you. In this article we’re looking at the AMD Radeon RX 560 / RX 570 / RX 580 against the NVIDIA GeForce GTX 1060 / GTX 1650 / GTX 1660 graphics cards. Not only are we looking at the OpenGL/Vulkan Linux gaming performance both for native titles and Steam Play but also the GPU power consumption and performance-per-dollar metrics to help guide your next budget GPU purchasing decision.

Benchmarking AMD FX vs. Intel Sandy/Ivy Bridge CPUs Following Spectre, Meltdown, L1TF, Zombieload
Now with MDS / Zombieload being public and an 8~10% performance hit showing up in affected workloads as a result of the new mitigations to these Microarchitectural Data Sampling vulnerabilities, what does the overall performance look like going back to the days of AMD FX Vishera and Intel Sandy Bridge / Ivy Bridge processors? If Spectre, Meltdown, L1TF/Foreshadow, and now Zombieload had come to light years ago, would it have shaken that pivotal point in the industry? Here are benchmarks looking at the performance today with and without the mitigations to the known CPU vulnerabilities to date.

GCC 9 vs. Clang 8 C/C++ Compiler Performance On AMD Threadripper, Intel Core i9
Since the release of the GCC 9 stable compiler suite earlier this month, we have begun firing up a number of compiler benchmarks for this annual feature update to the GNU Compiler Collection. For your viewing pleasure today we are looking at the performance of GCC 8 against GCC 9 compared to LLVM Clang 8, the latest release from this friendly open-source compiler competition. This GCC 8 vs. GCC 9 vs. Clang 8 C/C++ compiler benchmarking was done on Intel Core i9 7980XE and AMD Ryzen Threadripper 2990WX high-end desktop/workstation systems.

A Look At The MDS Cost On Xeon, EPYC & Xeon Total Impact Of Affected CPU Vulnerabilities
This weekend I posted a number of benchmarks looking at the performance impact of the new MDS/Zombieload vulnerabilities that also included a look at the overall cost of Spectre/Meltdown/L1TF/MDS on Intel desktop CPUs and AMD CPUs (Spectre). In this article are similar benchmarks but turning the attention now to Intel Xeon hardware and also comparing those total mitigation costs against AMD EPYC with its Spectre mitigations.

AMD Radeon VII Linux Performance vs. NVIDIA Gaming On Ubuntu For Q2’2019
It’s been three months now since the AMD Radeon VII 7nm “Vega 20” graphics card was released, and while we hopefully won’t be waiting much longer for Navi to make its debut, for the time being this is the latest and greatest AMD Radeon consumer graphics card — priced at around $700 USD. Here are some fresh benchmarks of the Radeon VII on Linux compared to various high-end NVIDIA graphics cards, with all testing done from Ubuntu 19.04.

Red Hat Enterprise Linux 8.0 Benchmarks Against RHEL 7.6, Ubuntu 18.04.2 LTS, Clear Linux
Continuing on from the initial Red Hat Enterprise Linux 8.0 benchmarks last week, now having had more time with this fresh enterprise Linux distribution, here are additional benchmarks on two Intel Xeon servers when benchmarking RHEL 8.0, RHEL 7.6, Ubuntu 18.04.2 LTS, and Clear Linux. RHEL 8.0 is certainly delivering much better out-of-the-box performance than its aging predecessor but how can it compete with Ubuntu LTS and Clear Linux?

Firefox 68 Performance Is Looking Good With WebRender On Linux
With Firefox 67 having been released this week, Firefox 68 is in beta, and its performance in our tests thus far on Ubuntu Linux is looking really good. In particular, when enabling the WebRender option that remains off by default on Linux, there are some especially nice performance gains.

NVIDIA/AMD Linux Gaming Performance For Hitman 2 On Steam Play
While Hitman was ported to Linux by Feral Interactive, Hitman 2, released back in November 2018, hasn’t seen a native Linux port. However, in recent months Hitman 2 has been running under DXVK+Proton with Steam Play, allowing this stealth video game to run nicely under Linux. More recently, the latest Proton updates have worked around an issue that previously prevented our benchmarking of this game, so in this article is a look at the Hitman 2 Linux gaming performance with different AMD Radeon and NVIDIA GeForce graphics cards.

LG’s 4K FreeSync/Adaptive-Sync Display For Just $219 USD
Now that the Radeon FreeSync support is in good standing with Linux 5.0+ and Mesa 19.0+ (or Mesa 19.1+ for RADV Vulkan support) as well as NVIDIA offering G-SYNC Compatible Linux support, if you have been desiring a FreeSync/Adaptive-Sync display but are on a limited budget, LG has an interesting 24-inch contender… A 4K FreeSync-supported display for just $219 USD?!?

And the most popular news articles:

Canonical Releases “WLCS” Wayland Conformance Suite 1.0
While Ubuntu is not currently using Wayland by default with its GNOME Shell desktop and it doesn’t look like they will try again until Ubuntu 20.10, the option is still available and they continue working in the direction of a Wayland Linux desktop future. One of their interesting “upstream” contributions in this area is with the Wayland Conformance Suite.

Linux’s vmalloc Seeing “Large Performance Benefits” With 5.2 Kernel Changes
On top of all the changes queued for Linux 5.2 is an interesting last-minute performance improvement for the vmalloc code.

Dell’s New WD19 Thunderbolt/USB-C Docks Should Be Playing Nicely On Linux
In addition to Dell releasing “budget-friendly” laptops with Ubuntu Linux on Wednesday, the company released new Thunderbolt and USB-C docks that should be working fine out-of-the-box on Linux.

MDS / Zombieload Mitigations Come At A Real Cost, Even If Keeping Hyper Threading On
The default Linux mitigations for the new Microarchitectural Data Sampling (MDS) vulnerabilities (also known as “Zombieload”) do incur a measurable performance cost out-of-the-box in various workloads. That’s even with the default behavior where SMT / Hyper Threading remains on, and it is becoming increasingly apparent that to fully protect your system, HT must be off.
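
For anyone wanting to check or toggle SMT without editing boot parameters, recent kernels expose a runtime control; a quick sketch (requires root, and assumes a kernel with the SMT control interface):

    cat /sys/devices/system/cpu/smt/active                     # 1 = Hyper Threading currently in use
    echo off | sudo tee /sys/devices/system/cpu/smt/control    # take sibling threads offline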

Spectre/Meltdown/L1TF/MDS Mitigation Costs On An Intel Dual Core + HT Laptop
Following the recent desktop CPU benchmarks and server CPU benchmarks after the MDS/Zombieload mitigations came to light, and looking at the overall performance cost of mitigating these current CPU vulnerabilities, there was speculation in the community that older dual-core CPUs with Hyper Threading would be particularly hard hit. Here are some benchmarks of a Lenovo ThinkPad with a Core i7 Broadwell CPU looking at those mitigation costs.

Arch-Based Antergos Linux Distribution Calls It Quits
The Arch-based Antergos Linux distribution that aimed to make Arch Linux more accessible to the Linux desktop masses is closing up shop.

x86 FPU Optimizations Land In Linux 5.2 That Torvalds Loves But Worries Of Regressions
As part of the first week of changes for the Linux 5.2 merge window, a patch series providing some x86 FPU optimizations were merged though there is some concern there could be regressions on older hardware.

systemd Clocks In At More Than 1.2 Million Lines
Five years ago today, Phoronix ran a story on how the systemd source tree was approaching 550k lines, so curiosity got the best of me and I checked how large the systemd Git repository is today. Well, now it’s over 1.2 million lines.
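
For the curious, a rough way to reproduce that kind of count (assuming git and the cloc utility are installed; exact numbers depend on the checkout date and counting method):

    git clone --depth=1 https://github.com/systemd/systemd.git
    cloc systemd    # per-language and total line counts for the checkout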

Hands On With The Atomic Pi As A $35 Intel Atom Alternative To The Raspberry Pi
After a successful Kickstarter campaign and honoring those obligations, the Atomic Pi recently hit retail channels (albeit sold out currently) as a $35 Intel Atom powered single board computer to compete with the likes of the Raspberry Pi.

GCC 9.1 Released As Huge Compiler Update With D Language, Zen 2, OpenMP 5, C++2A, C2X
GNU Compiler Collection 9.1 was released today with a D language front-end joining the family while on the back-end is now the long-awaited Radeon GCN GPU target (although not too useful in its current form), Intel Cascadelake support, initial AMD Zen 2, C-SKY CPU support, OpenRISC CPU support, and many other features throughout this massive open-source compiler.

See you in June!


How to Set Up Your Computer to Auto-Restart After a Power Outage | How To


Aside from malware and viruses, nothing has the potential to be more dangerous to your computer’s health than power outages. Here is how to ensure your computer keeps its boot on when a power failure turns the lights off.

With the approach of the turbulent summer season, it is important to know what kills the electrical lifeline, how to safeguard your digital gear from fatal reboot disease, and how to reach the desktop when the computer refuses to restart. This knowledge is vital whether you use computers to do your job in a business office or your own home office environment.

To minimize the potential damage from electrical power fluctuations, you should have your computers and modems plugged directly into surge-protective power strips. Surge protectors are effective protection against glitches caused by normal fluctuations in energy levels.

However, a direct lightning strike is likely to fry the surge protector and then burn out the electronic gadgets plugged into it. A good strategy is to unplug the surge protector from the electric wall socket when a storm arrives.

Another essential piece of protective equipment is an uninterruptible power supply, or UPS. A UPS is a sophisticated battery-containing device that supplies backup power to desktop PCs during electrical grid outages and brownouts. One of the most important services a UPS can deliver is continuation of the electrical power — usually about 15 minutes — giving you enough time to safely save your data and power down your equipment. The UPS will kick in when its sensors detect an interruption of electricity from the main service line to your home or office.

The latest UPS models can reset to an off position automatically as their rechargeable batteries run out of energy. When the normal power supply returns, your computer can restart without its power supply being blocked if it is so configured. The BIOS settings in many computers let you adjust the power settings so the computer senses when normal electrical supply returns. You can pick up a UPS at office supply retailers and box stores, as well as your favorite online shopping center.

The software that comes with it safeguards the computer when it is unattended. This is useful if you use remote access services and file-syncing cloud storage services. Getting your PC to restart automatically after a power outage involves getting the computer to “see” the power returning by making some changes to the PC’s BIOS settings and installing the UPS-included software. Read on to learn how to do this.

What Breaks the Power

The causes of power outages range from the obvious to the subtle. Mother Nature, device fatigue and dumb luck all figure into the power breakdown equation. Other than being prepared before trouble strikes, there is little you can do when the power grid fails. Here is a quick list of power failure causes:

  • Weather — Lightning, high winds and ice are weather hazards that often impact the power supply. Interruptions can last several days, depending on how rapidly ground conditions improve to let work crews find and repair the damage. Lightning can strike equipment or trees, causing them to fall into electrical lines and equipment.
  • Severe distress — Earthquakes of all sizes and hurricanes can damage electrical facilities and power lines. This sometimes catastrophic damage can cause long-term power outages.
  • Equipment failure — Even when the weather is not a primary cause of an outage, faulty equipment in the electrical system can be. Hardware breakdowns result from age, wear and other factors. Sometimes adverse weather, such as lightning strikes, can weaken equipment. High demand on the electrical grid also can cause overloads and faults that make equipment more susceptible to failure over time.
  • Wildlife — Small creatures have an uncanny knack for squeezing into places they do not belong in search of food or warmth. When squirrels, snakes and birds come into contact with equipment such as transformers and fuses, they can cause equipment to fail momentarily or shut down completely.
  • Trees — Weather can be a secondary contributor, causing circumstances that can lead to power outages when trees interfere with power lines. During high winds and ice storms, tree limbs or entire trees can come into contact with poles and power lines.
  • Public damage — Accidents happen. Vehicle accidents or construction equipment can cause broken utility poles, downed power lines and equipment damage. Excavation digging is another cause of power loss when underground cables are disturbed.
  • Tracking — When dust accumulates on the insulators of utility poles and then combines with light moisture from fog or drizzle, it turns dust into a conductor. This causes equipment to fail.
  • Momentary circuit interruptions — Blinks, or short-duration interruptions, are annoying. However, they serve a valuable purpose by shutting off the flow of electricity briefly to prevent a longer power outage when an object comes in contact with electric lines, causing a fault. If power surge strips (not multi-socket power strips) are not attached to your computer gear, the sudden loss of electricity and then a surge of power can cause data loss or component failure.

Dealing With It

You cannot prevent the power grid from going down, but you can take steps to ensure that it does not take your computer down with it. You also can learn what to do if your computer refuses to boot up to the desktop once the power returns.

First, before trouble strikes, make sure you set the BIOS switches to enable your computer to restart after a power interruption. The BIOS circuits are hardwired to the computer’s motherboard. You must establish the restart settings when there is no loss of electricity. You must be able to start the computer to reach the BIOS controls.

Just how you get there depends on the make and model of your computer. The BIOS restart setting is operating system-independent. It does not matter whether you run Microsoft Windows or Linux as the operating system of choice. The BIOS is responsible for “bootstrapping” the computer hardware and telling it to begin the startup process that leads to your desktop.

Adjusting the Dials

Here is how to set your computer’s BIOS to start automatically after a power outage.

  1. Power On your computer and press “DEL” or “F1” or “F2” or “F10” to enter the BIOS (CMOS) setup utility. The way to enter into BIOS Settings depends on the computer manufacturer. Watch for a message in tiny print along the bottom edge of the screen when it first turns on.
  2. Inside the BIOS menu, look under the “Advanced” or “ACPI” or “Power Management Setup” menus* for a setting named “Restore on AC/Power Loss” or “AC Power Recovery” or “After Power Loss.”

    *Note: The “Restore on AC/Power Loss” setting can be found under different places inside BIOS setup according to computer manufacturer.

  3. Set the “Restore on AC/Power Loss” setting to “Power On.”
  4. Save and exit from BIOS settings. (The menu on the screen will give you the function key combination to do this.)

If you use a Linux-powered computer as a server, it probably is essential for you to get it up and running as soon as the power comes back on. The server might be located in a less accessible part of the building.

You can select additional settings to ensure an unattended restart after a power interruption. There are four places where you have to set things up to continue without human intervention (a minimal sketch covering the last three follows the list):

  • BIOS: Make sure that the BIOS is set up to boot when power resumes.
  • Boot loader: Set up the boot loader to not wait for a user to select what OS to boot. Boot into the default OS right away.
  • Login: Set up the boot procedure to log in to a particular user automatically after boot. Do not wait for a person to log in.
  • Application restart: Set up the boot procedure to start the application programs automatically without human intervention.
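
Here is a minimal sketch of the last three items for a systemd-based distribution such as Ubuntu or Linux Mint. The file paths, the update-grub command, and the user and service names are assumptions that vary by distro:

    # Boot loader: boot the default entry immediately (Debian/Ubuntu-style GRUB)
    sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=0/' /etc/default/grub
    sudo update-grub

    # Login: automatically log a user in on the first virtual console
    sudo systemctl edit getty@tty1
    #   [Service]
    #   ExecStart=
    #   ExecStart=-/sbin/agetty --autologin youruser --noclear %I $TERM

    # Application restart: a systemd unit (/etc/systemd/system/myapp.service)
    # that starts your program at boot and restarts it if it dies
    #   [Unit]
    #   Description=Start my application unattended
    #   After=network.target
    #   [Service]
    #   ExecStart=/usr/local/bin/myapp
    #   Restart=always
    #   [Install]
    #   WantedBy=multi-user.target
    sudo systemctl enable myapp.service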

Set Up Auto-Restart

Some computers have a BIOS option that prepares the computer to restart more easily when failed power is restored. You need to check ahead of time to verify that your computer has this feature and it is activated.

Here is how to do this:

  1. Open your computer’s BIOS settings menu. This is a hardware-dependent process that works fairly similarly on all computers, whether you boot into Windows or Linux. Restart the computer and watch for the first splash screen to appear.

    Look for the Setup function key description. It will be “Setup F2” or F12, or something similar. Restart the computer and at the same time press the appropriate function key. Tap the key repeatedly during this initial startup period and the BIOS Settings menu will appear.

  2. Look for the Power Settings menu item within the BIOS and change the AC Power Recovery or similar setting to “On.” Look for a power-based setting that affirms that the PC will restart when power becomes available. Some older PCs lack this functionality. If your gear has it, save the configuration by pressing the designated function key as displayed on the screen. This reboots the computer.

If you are using a UPS to provide a short-interval battery supply when the power outage occurs, see the additional steps below to make the hardware connections. Meanwhile, let’s focus on how to restart computers when the power grid is back online.

Get Windows 10 to Start Again

After a power outage, your Windows system may not boot or restart properly. Any attempt to boot the system could bring you to a stalled loading screen or a blue screen with an error message.

Power surges are a common cause of booting issues with Windows. The sudden loss of power can corrupt system files. These suggestions may help you get around that problem.

  1. Start Windows 10 in Safe Mode.
    • Press the power on button on the computer.
    • Press Windows logo key + I on your keyboard to open Settings.
    • Select Update & Security > Recovery.
    • Under Advanced startup, select Restart now.
    • After your PC restarts to the Choose an option screen, select: Troubleshoot > Advanced options > Startup Settings > Restart.
    • After your PC restarts, select an option to finish the process.
    [Screenshot: Windows 10 System Configuration, Safe boot]
  2. Here is a second method to restart Windows 10 after a power outage: use the built-in System Configuration utility.
    • From the Win+X menu, open the Run box, type msconfig, then hit the Enter key.
    • Under the Boot tab, check the Safe boot and Minimal options. Click Apply/OK and exit.

    When the computer restarts, it will automatically enter Safe Mode. It will continue to boot into Safe Mode until you change the setting back to normal boot.

    So before you shutdown Windows 10, open msconfig again and uncheck the Safe Boot check box; click Apply/OK, and then click the Restart button.

    [Screenshot: Windows Startup Settings]

Get Windows 7 to Reboot

Each version of Microsoft Windows has a slightly different procedure to apply. If you have not yet upgraded to Windows 10, follow these steps to jump-start Windows 7.

  • Press the power on button to attempt to restart the computer.
  • Press F8 before the Windows 7 logo appears.
  • At the Advanced Boot Options menu, select the Repair your computer option. Then press the Enter key.
[Screenshot: Windows 7 System Recovery Options]

Fix the Linux Boot Failure

Linux may be better able to fight off malware and viruses than Microsoft Windows. Still, it is no more immune to electrical surges and power grid outages than any other piece of electronic equipment.

A power event hits the hardware first, and the resulting improper shutdown can then corrupt Linux system files. So you should make sure that your BIOS settings are enabled to restart after an improper shutdown when the power fails.

Follow the same steps detailed above for “adjusting the dials.” When trouble strikes, apply the steps outlined below to force your Linux-powered computer to restart into Safe Mode, which is actually a recovery mode.

The process with most Linux distributions can be a little different than with Windows-powered boxes. The process depends in large part on your computer hardware.

Some computers — especially those custom-made with Linux preinstalled — have a BIOS option called “fast boot” activated in the BIOS setup, which disables the F2 setup and F12 boot menu prompts.

This is something you will have to verify while the computer is still operational. In that case, power off your device and turn it back on. Hold down the F2 key (or whatever key combination is displayed on the screen).

Activate Safe/Recovery Mode in Linux

When you see the BIOS setup utility on the screen, disable “fast boot,” save the setting and reboot.

With the “fast boot” option disabled, you can interrupt the Linux startup routine and force the computer to display the GRand Unified Bootloader (GRUB or GRUB 2) menu.

  1. Press the Computer’s power on/off button.
  2. Hold down the left Shift key as the computer starts to boot. If holding the Shift key doesn’t display the menu, press the ESC key repeatedly to display the GRUB 2 menu. Sometimes the SHIFT & ESC keys work instead.

From there you can choose the recovery option. Follow the on-screen directions to attempt to restart your Linux computer.
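
If the keyboard timing proves fiddly, one alternative on Debian/Ubuntu-family systems is to make the GRUB menu appear briefly on every boot. Note that this works against the zero-timeout unattended-boot setting described earlier, so pick whichever matters more to you:

    # In /etc/default/grub:
    #   GRUB_TIMEOUT_STYLE=menu
    #   GRUB_TIMEOUT=5
    sudo update-grub    # regenerate the boot configuration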

[Screenshot: Linux GRUB restart]

Use a Live CD Boot Repair Disk

Super Grub2 Disk and Rescatux are strong and reliable emergency boot solutions for Linux computers. Super Grub2’s stark interface makes it intimidating to use. Rescatux is far more user-friendly. Both are developed by the same source.

Super Grub2 Disk is a bit limited in its fix-and-go capabilities. If all you need is to bypass the problem and boot your failed system, it usually does the job. If you need a bona fide repair solution, use Rescatux.

The Rescatux emergency repair app is actually a live Linux distro CD environment. You can boot the dead computer from the CD/DVD (which you obviously must have created ahead of time).

[Screenshot: Linux GRUB repair restart]

Make the Hardware Connections

One of the major benefits of having a connected UPS is the ability to have the computer restart once the power supply resumes. The main things to look for when investigating which UPS to get are the initial cost, the cost of replacement batteries and the frequency with which you’ll have to replace them, the ability to manage and monitor the UPS from Linux, and the watts and volt-amps provided.

The batteries in a UPS degrade over time, resulting in a loss in its total power capacity. You might have to replace the batteries in the UPS in three to five years. If you only need to run a machine for five minutes and have the choice of a UPS that can run a machine for seven minutes or one that can give you 10, you can get away without replacing the batteries in the larger capacity UPS for a longer time — although the batteries for the larger UPS likely will be more expensive as well.

If you run the Linux OS, make sure the UPS you buy has software that supports Linux. If it does not, you will have to manually turn off the computers before the UPS’ batteries run out of juice.

Follow these steps to connect the UPS to your computer and peripherals such as a printer and modem.

  1. Plug the PC and monitor into available controlled AC outlets on the UPS. Do not plug a power strip into the UPS socket first. Plug each piece of hardware directly into its own UPS outlet.
  2. Connect the included USB cable between the UPS and the PC; it is used for communications. Do not use a powered USB hub between the UPS and the PC, or the loss of power during an outage will cause communications to fail.
  3. Plug the UPS into the wall power supply and allow it to charge. This takes four or more hours to charge fully.

Install and configure the UPS software, if available. The directions will vary based on the UPS you have and the software that comes with it.

  1. Install the included software.
  2. Navigate to the Energy Management tab or similar within the Configuration setting.
  3. Check the Enable Energy Management check box and choose the Default settings in PowerChute. Look for any “Turn On Again” settings in any other power management software and check as appropriate.

A Few More Tips

With no endorsement intended, the following is a list of products to provide a starting point for purchasing a UPS or supporting software.

  • PowerPanel for Linux is a simple command line Linux daemon to control a UPS system attached to a Linux-based computer. It provides all the functionality of PowerPanel Personal Edition software, including automatic shutdown, UPS monitoring, alert notifications, and more. PowerPanel for Linux is compatible with Fedora 23, Suse Enterprise 12 SP1, CentOS 7, Red Hat Enterprise 7.2, Ubuntu 15.10 and Debian 8.4.
  • Apcupsd is a program for monitoring UPSes and performing a graceful computer shutdown in the event of a power failure. It runs on Linux, Mac OS X, Win32, BSD, Solaris and other OSes.
  • The GPL-licensed open source apcupsd server (daemon) is packaged by most Linux distributions and can be used for power management and for controlling most of APC’s UPS models on Linux, BSD, Unix and MS Windows operating systems. Apcupsd works with most of APC’s Smart-UPS models as well as most simple-signaling models such as Back-UPS and BackUPS-Office (see the configuration sketch after this list).
  • WinPower is UPS monitoring software with a user-friendly interface that provides power protection for computer systems during a power failure.
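
As a concrete example of the Linux side of this, a minimal apcupsd setup for a USB-connected APC unit might look roughly like the following. The package name and the shutdown thresholds are illustrative and vary by distribution and UPS model:

    sudo apt install apcupsd                  # Debian/Ubuntu package name

    # Key lines in /etc/apcupsd/apcupsd.conf for a modern USB-attached UPS:
    #   UPSCABLE usb
    #   UPSTYPE usb
    #   DEVICE
    #   BATTERYLEVEL 10      # begin shutdown when charge drops to 10 percent
    #   MINUTES 5            # ...or when about 5 minutes of runtime remain

    sudo systemctl enable --now apcupsd       # start the daemon now and at every boot
    apcaccess status                          # confirm the UPS is being monitored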

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






On the Road to a Fabric Infrastructure Reality | IT Infrastructure Advice, Discussion, Community


Although the industry aspires to build a NVMe over Fabrics (NVMe-oF) infrastructure – one that is built on “a set of compute, NVMe flash storage, memory, and I/O components joined through a fabric interconnect and the software to configure and manage them” – organizations are just starting to shift their IT efforts toward this transition. In 2019 we should realize the first step on a trajectory toward fabric infrastructure, including fabric-attached memory, with the widespread adoption of fabric-attached storage. This may seem like a small step, but the commitment to fabric-attached storage means we are taking the necessary steps, as an industry, to ensure all components are connected with one another, allowing compute to move closer to where the data is stored rather than data being relegated to several steps away from compute.

A Fork in the Road, Architecturally Speaking

Essentially, we’re at a fork in the road as general-purpose processors and infrastructures are failing to meet the demands of data-intensive applications and data-driven environments due to their uniform ratio of compute, storage, and network bandwidth resources. IT teams are trying to build flexible infrastructures using these traditional, rigid building blocks.

To meet the level of flexibility and predictable performance needed in today’s data center, a new architectural approach has emerged where compute, storage and network are disaggregated into shared resource pools and treated as services. The trend toward ‘composable’ architectures refers to the ability to make the resources available on-the-fly and create a virtual application environment with the optimum performance required to support workload demands.

At the same time, companies need to more closely analyze their workloads and determine where there are inefficiencies. How can they implement or best optimize their resources so they can unlock the potential of their data? For example, in workloads such as AI there may be less crunching and more analyzing. That type of architecture is very different from standard general-purpose processors with memory and storage attached. As companies think about how to optimize the tasks at hand, different architectures and ideas come into play. IT is moving away from solving problems the way it did in the past.

Green Light on A New Approach

As big data and fast data applications start to create more extreme workloads, purpose-built architectures will be required to pick up where today’s general-purpose architectures have reached their limit. Applications which require analytics, machine learning, artificial intelligence, and smart systems demand purpose-built architectures. Key to making this evolution happen is to embrace open standard interfaces for both disaggregated hardware elements and the software required to orchestrate them.

The first step in achieving this composability is the disaggregation of storage, compute, and networking resources. NVMe-oF allows flash storage to be disaggregated from the server to make that storage widely available to multiple applications and servers. Connecting storage nodes over a fabric is important as it allows multiple paths to a given storage resource. Giving hardware more granularity enables higher utilization.
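
As a small illustration of what fabric-attached storage looks like from a Linux host, the nvme-cli tool can discover and attach a remote namespace over a fabric. The addresses and subsystem NQN below are hypothetical, and the transport module to load depends on your fabric:

    sudo modprobe nvme-tcp                                # or nvme-rdma / nvme-fc
    sudo nvme discover -t tcp -a 192.168.1.50 -s 4420     # list subsystems exported by the target
    sudo nvme connect -t tcp -a 192.168.1.50 -s 4420 -n nqn.2019-05.com.example:subsys1
    sudo nvme list                                        # the remote namespace appears as a local /dev/nvmeXnY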

The second step in delivering a Composable Disaggregated Infrastructure (CDI) is the adoption of standard APIs – such as Redfish® and Swordfish™ – to dynamically assign resources when needed.
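
As a rough sketch of what those APIs look like in practice, a Redfish service can be queried with plain HTTPS. The host name and credentials here are hypothetical, while /redfish/v1 is the standard service root defined by the specification:

    curl -sk -u admin:password https://bmc.example.com/redfish/v1/           # service root
    curl -sk -u admin:password https://bmc.example.com/redfish/v1/Systems    # collection of managed systems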

The new architecture enables customers to adapt to changing workloads. Capacity and performance can be added independently, reducing cost and complexity. Multiple applications can be served with a common storage pool, which improves capacity utilization and reduces isolated silos of storage.

Looking Ahead: The Future of Data Infrastructure

Innovative companies are leveraging open frameworks such as composable infrastructure to forge a path toward making fabric-based infrastructures a reality.

Steps are being taken today to develop frameworks in which storage, compute, and networking resources can scale independently. Software is used to orchestrate these resource pools into logical application servers, on the fly. This allows storage to be disaggregated from compute, enabling applications to share a common pool of storage capacity. Data can easily be shared between applications, or needed capacity can be allocated to an application regardless of location, making these environments highly configurable.

Change doesn’t happen overnight. It’s an evolution, not a revolution, and it will take some time for these functions and architectures to take shape. However, these early innovations are paving the way toward making fabric-based infrastructures a reality.




Linux Mint Turns Cinnamon Experience Bittersweet | Reviews


By Jack M. Germain

May 24, 2019 5:00 AM PT


Linux Mint may no longer be an ideal choice for above-par performance out of the box, but it still can serve diehard users well with the right amount of post-installation tinkering.

The Linux Mint distro clearly is the gold standard for measuring Cinnamon desktop integration. Linux Mint’s developers turned the GNOME desktop alternative into one of the best Linux desktop choices. Linux Mint Cinnamon, however, may have lost some of its fresh minty flavor.

The gold standard for version 19.1 Tessa seems to be a bit tarnished when compared to some other distros offering a Cinnamon environment. Given that the current Linux Mint version was released at the end of last December, it may be a bit odd for me to focus on a review some five months later.

Linux Mint is my primary driver, though, so at long last I am getting around to sharing my lukewarm experiences. I have run Linux Mint Cinnamon on three primary work and testing computers since parting company with Ubuntu Linux Unity and several other Ubuntu flavors many years ago. I have recommended Linux Mint enthusiastically to associates and readers in my personal and professional roles.

[Screenshot: Linux Mint Cinnamon desktop. The desktop lets you place launch icons and desklets on the desktop and applets on the panel bar for added functionality.]


However, my ongoing dissatisfaction with Tessa has led me to rethink my continuing allegiance. I’ve patiently waited for a kernel or core component upgrade to fix what has been giving Linux Mint a less than cool taste, at least for me. As I have waited, updates have come and gone — but not the fix for the maladies that linger within.

Comparing Tessa’s performance with a few recent distros that run the Cinnamon desktop apparently caused the self-appointed Mint police on a Linux Mint community forum to vilify my views. More on that situation later.

Linux Mint is an Ubuntu-based distribution that comes with four choices to provide a classic desktop experience. Version 19.1 (Tessa) is based on Ubuntu 18.04 Bionic Beaver and is scheduled to receive long-term support (LTS) until April 2023. It is available in three desktop versions: Cinnamon, MATE and Xfce, as well as a Debian Linux-based offering — LMDE 3.

Performance Woes

The problem for some Linux OS reviewers — including me — as well as a cadre of users is that Tessa’s performance is not always optimum. Linux Mint takes overly long to boot, and many applications take longer to load than the same software does in other distros.

Lots of stumbling occurred while I was running Tessa on three computers that had run previous versions without encountering those issues. Out of the box, the performance was sluggish. At times the desktop interaction and system activity became unresponsive for fleeting seconds. A collection of little things and a few major annoyances turned working with Tessa into an unhappy computing experience.

I deal primarily with the Cinnamon desktop, but the issues were not isolated to it. Some published documents offering “performance booster” tips for Linux Mint include fixes for MATE and XFCE editions.

I got used to the performance malaise to an extent, and I tried to ignore the issues. However, in recently testing other Cinnamon desktop iterations, I noticed that those same issues were not present.

Two that come to mind are Feren OS and Condres OS. There are others.

Cinnamon Itself Remains Tasty

Overall, I consider the Cinnamon desktop to be one of the most configurable and productive desktop options in Linux. Linux Mint’s developers worked on numerous improvements in version 19.1, which was a major upgrade from Linux Mint 19.

For instance, they reduced input lag on Nvidia cards and made the window manager feel more responsive when moving windows. Developers made it easy to turn off vertical sync in the System Settings. This delegates VSYNC to your GPU driver.

If that driver performs well, the input lag goes away and performance improves, according to release notes. Again, this might account for some of the performance factors. Maybe not.

The Linux Mint team ported a huge number of upstream changes from the GNOME project’s Mutter window manager to the Muffin window manager, a fork of Mutter by the Linux Mint team. Might this be another possible cause for performance issues in 19.1 despite the community’s claims that the OS is now more responsive? Again, maybe not.

The code base for Mint 19 is different. Since I really started having issues with LM with the upgrade to 19.1, I suspect that the fly in the Mint ointment landed there.

Waiting, Not Switching

The Cinnamon desktop is the perfect fit for my workflow and computing productivity. Even with the availability of Cinnamon on other distros, I am hesitant to switch players and move to a smaller distro community. I see value in using an OS maintained by a large thriving Linux community that took on open source giants and developed an equally powerful Linux distro alternative.

This is what makes the Linux experience so different than using proprietary operating systems. Linux users have choices. We are not locked into a rigid single computing path.

If one variation of a favorite desktop or distribution style has a problem, users can change distros to try something similar or something very different. Linux applications are mostly interchangeable. So is the data we use.

It is relatively easy to move from one Linux platform to another — or change distros and still be able to keep a favorite desktop environment.

So waiting for fixes seemed a better option than leaving Linux Mint behind, at least for now. Some Linux distro developers put their own unique styles into a particular desktop to make it different or better than plain vanilla versions. That is the case with Linux Mint.

Critical of the Critic

I logged onto the Linux Mint user forum recently to look for helpful hints on solving performance issues. I used my own LM forum user credentials, which are not identifiable with this publication. Of course, I found nothing. What I did find was my name and reference to the Linux Mint-related comments from a few of my LinuxInsider reviews. That is when I discovered the vitriol directed at me.

One of the suggestions made to me in the LM forum was to buy a new computer or upgrade to lots of RAM if I wanted trouble-free performance. Merely upgrading from LM 19 or doing anything other than a clean install on a new computer would have been asking for trouble. The implication was that nobody else had trouble, so whatever was causing my so-called issues must have been my fault.

Really? My computers running Linux Mint all far exceed the recommended hardware requirements. Is Linux Mint falling into the required upgrade path just like Windows 10?

Other user forum comments suggested that the alleged performance troubles I “claimed” to be having were simply my fault because I was obviously a newbie, didn’t know what I was doing, or was trying to “get more eyeballs” for my LinuxInsider reviews by making “snide, unsubstantiated comments” derogatory to Linux Mint.

The trolls rejected my polite explanation that I was a long-time Linux Mint user who went from having no issues with earlier versions to experiencing the same issues on the same three computers. Since nobody else had trouble, it must have been me, they suggested. Another suggestion was that maybe I was making up the problems.

One of the sticking points was that in my recent comments about other Cinnamon desktop Linux distros I reviewed, I suggested that they did not have the performance snags and thus might be better alternatives to Linux Mint. In general, the LM forum trolls were angered that anyone — particularly ME — would be so heretical as to make negative attacks on the Great Linux Mint god.

Of course, the Linux Mint god protectors had no way of knowing that LinuxInsider readers on several occasions had conversed with me via email about similar issues they experienced with Linux Mint. They had asked what better options I could recommend for running a Cinnamon-based Linux distro.

I tried to explain to the LM forum naysayers that my comments were neither snide nor unsubstantiated, and that I still used Linux Mint 19.1 Cinnamon, in fact. Of course, the flamers once again insisted that I had attacked Linux Mint unfairly and repeatedly. So I stepped out of the conversation.

Ironically, while the LM forum diatribe was unfolding, I received an email at ECT News Network from a supposed reader who claimed to be interested in my reviews about Linux Mint. She asked me to send her a link of all my published reviews on that topic.

One forum participant actually jumped into the fray to suggest there were performance issues that he had addressed in his own blog about Linux Mint. He posted a link to fixes I could try.

Mixed Success at a Price

That post was very useful and informative. It laid out fixes to try for all three Linux Mint Tessa desktops. I tried several of the suggested tweaks, and the improved performance speed was enough to salvage my faltering relationship with Linux Mint.

I noticed what appeared to be a pattern in the tweaks. Many of them address default settings. That makes perfect sense, since other than adding a few favorite applets to the Cinnamon bottom panel after installation, I had made few changes. I had not ventured to change the look-and-feel factors.

One major tweak involved overriding the memory swap settings. The speedup tips for Tessa noted that by default the “swappiness” value (which governs how aggressively the kernel moves memory pages out to swap) was set to 60. The suggested fix was to reduce the value to 10. The tweak tips author noted that this area was the “absolute number one” fix to try.

That process involved typing a string of commands into a terminal and rebooting the computer. It worked! Booting time still takes longer than booting other distros, but the overall system responsiveness definitely was improved.
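
The terminal work boils down to a couple of commands along these lines; the sysctl.d file name is just one common convention, and the blog's exact steps may have used /etc/sysctl.conf instead:

    cat /proc/sys/vm/swappiness          # show the current value (60 by default)
    sudo sysctl vm.swappiness=10         # apply the lower value immediately
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist across reboots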

LibreOffice presented a glaring example of unacceptable performance. Before the swap tweak, it took two minutes or more to load a document or spreadsheet. Subsequent reloads took a bit less time. Now that loading time is cut by at least half.

Applying other speedup tweaks also improved performance system-wide, but those tweaks came at a price. The adjustments involved turning off most of the visual effects, such as animations. That resulted in turning Linux Mint into more of a plain vanilla experience without many of the special effects that made Linux Mint’s integration of Cinnamon, MATE and XFCE different from the rest.


[Screenshot: Linux Mint 19.1 Tessa, Scale and Expo views. Scale and Expo views of running applications on multiple desktops are among the special effects not hampered by the speedup tweaks.]


Bottom Line

I’d love to hear about your experiences in using the Linux OS. Use the link below to offer your perspective in our Reader’s Comments section.

If you now use or in the past used Linux Mint, what can you share about your experience with its performance?

Do you think distro developers should be more forthcoming with users in addressing issues such as how to tweak their distribution for better performance?

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me, and I’ll consider them for a future Linux Picks and Pans column.

And use the Reader Comments feature below to provide your input!


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.




