Monthly Archives: December 2017

Dell to Disable Intel’s Insecure IME » Linux Magazine

Intel’s IME (Intel vPro Management Engine) came under fire recently when security researchers found serious bugs that allowed a remote attacker to take control of the affected systems.

“The exploitation allows an attacker to get full control over business computers, even if they are turned off (but still plugged into an outlet). We really hope by bringing this to light, it will raise awareness about security issues in firmware and avoid possible issues in the future,” wrote Embedi, the security firm that discovered the bug.

Intel doesn’t share any information about these “secretive” Management Engine technologies. ME modules sit above the operating systems and users have no access or control over the technology. Organizations like EFF are calling for more transparency around ME modules. EFF asked Intel to “Provide a way for their customers to audit ME code for vulnerabilities. That is presently impossible because the code is kept secret.”

Because Intel doesn’t provide any such information, PC vendors and users have no means to audit or fix such vulnerabilities. Now one PC vendor has taken steps to protect its users: Dell now offers to ship systems with IME disabled, and users will have to pay for the service.

In a statement to ExtremeTech, Dell said, “Dell has offered a configuration option to disable the Intel vPro Management Engine (ME) on select commercial client platforms for a number of years (termed Intel vPro – ME inoperable, custom order). Some of our commercial customers have requested such an option from us, and in response, we have provided the service of disabling the Management Engine in the factory to meet their specific needs. As this SKU can also disable other system functionality it was not previously made available to the general public.”

PC vendors, especially those selling systems preloaded with Linux, are following suit and disabling ME by default. Dell is one of the biggest PC vendors, and if other vendors start disabling the engine, Intel might be compelled to either open source the technology or offer more transparency around it.

Source link

KubeCon Concluded in Austin, Texas » Linux Magazine

Kubernetes has become the Linux of the cloud. It has seen massive adoption in the last three years; the first release of Kubernetes was announced in 2014. All three major cloud providers now support Kubernetes: Google (its creator), Microsoft, and AWS. Even Docker started offering Kubernetes as an orchestrator alongside its own Swarm orchestrator. Cloud Foundry has adopted Kubernetes as the Cloud Foundry Container Runtime, and OpenStack vendors have adopted Kubernetes to deploy OpenStack as an application. All major Linux vendors, including Red Hat, SUSE, and Canonical, offer Kubernetes distributions.

The adoption and growth of Kubernetes was the theme of KubeCon, the Kubernetes conference that was held between December 6 and 8 in Austin, Texas. During the conference, Oracle open sourced its Kubernetes tools for serverless deployment and multicloud management.

Microsoft announced that Azure would bring new serverless and DevOps capabilities to the Kubernetes community, and Bitnami launched a new in-cluster Kubernetes Application Console.

The Kubernetes community announced the 1.0 release of CoreDNS, a cluster DNS for Kubernetes. JFrog and Baidu joined CNCF (Cloud Native Computing Foundation), the home of Kubernetes, as Gold members.

Source link

7 Enterprise Storage Trends for 2018

Enterprises today are generating and storing more data than ever, and the trend shows no sign of slowing down. The rise of big data, the Internet of Things, and analytics is contributing to the exponential data growth. The surge is driving organizations to expand their infrastructure, particularly data storage.

In fact, the rapid growth of data and data storage technology is the biggest factor driving change in IT infrastructure, according to the Interop ITX and InformationWeek 2018 State of Infrastructure study. Fifty-five percent of survey respondents chose it as one of the top three factors, far exceeding the need to integrate with cloud services.

Organizations have been dealing with rapid data growth for a while, but are reaching a tipping point, Scott Sinclair, senior analyst at ESG, said in an interview.

“If you go from 20 terabytes to 100 terabytes, that’s phenomenal growth but from a management standpoint, it’s still within the same operating process,” he said. “But if you go from a petabyte to 10 or 20 petabytes, now you start talking about a fundamentally different scale for infrastructure.”

Moreover, companies today see the power of data and understand that they need to harness it in order to become competitive, Sinclair said.

“Data has always been valuable, but often it was used for a specific application or workload. Retaining data for longer periods was more about disaster recovery, having an archive, or for regulatory compliance,” he said. “As we move more into the digital economy, companies want to leverage data, whether it’s to provide more products and services, become more efficient, or better engage with their customers.”

To support their digital strategy, companies are planning to invest in more storage hardware in their data centers, store more data in the cloud, and investigate emerging technologies such as software-defined storage, according to the 2018 State of Infrastructure study. Altogether, they’re planning to spend more on storage hardware than on any other type of infrastructure.

Read on for more details from the research and to find out about enterprise storage plans for 2018. For the full survey results, download the complete report.


Source link

GeckoLinux Brings Flexibility and Choice to openSUSE

I’ve been a fan of SUSE and openSUSE for a long time. I’ve always wanted to call myself an openSUSE user, but things seemed to get in the way—mostly Elementary OS. But every time an openSUSE spin is released, I take notice. Most recently, I was made aware of GeckoLinux—a unique take (offering both Static and Rolling releases) that offers a few options that openSUSE does not. Consider this list of features:

  • Live DVD / USB image

  • Editions for the following desktops: Cinnamon, Xfce, GNOME, Plasma, MATE, Budgie, LXQt, Barebones

  • Plenty of pre-installed open source desktop programs and proprietary media codecs

  • Beautiful font rendering configured out of the box

  • Advanced Power Management (TLP) pre-installed

  • Large amount of software available in the preconfigured repositories (preferring packages from the Packman repo—when available)

  • Based on openSUSE (with no repackaging or modification of packages)

  • Desktop programs can be uninstalled, along with all of their dependencies (whereas openSUSE’s patterns often cause uninstalled packages to be re-installed automatically)

  • Does not force the installation of additional recommended packages after initial installation (whereas openSUSE pre-installs patterns that automatically install recommended package dependencies the first time the package manager is used)

The choice of desktops alone makes for an intriguing proposition. Keeping a cleaner, lighter system is also something that would appeal to many users—especially in light of laptops running smaller, more efficient solid state drives.

Let’s dig into GeckoLinux and see if it might be your next Linux distribution.

Installation
I don’t want to say too much about the installation—as installing Linux has become such a no-brainer these days. I will say that GeckoLinux has streamlined the process to an impressive level. The installation of GeckoLinux took about three minutes total (granted, I am running it as a virtual machine on a beast of a host—so resources were not an issue). The difference between installing GeckoLinux and openSUSE Tumbleweed was significant: whereas GeckoLinux installed in single-digit minutes, openSUSE took more than 10 minutes to install. Relatively speaking, that’s still not long. But we’re picking at nits here, so that amount of time should be noted.

The only hiccup in the installation was the live distro asking for a password for the live user. The live username is linux and the password is, as you probably already guessed, linux. The same password is used for admin tasks (such as running the installer).

You will also note that there are two icons on the desktop—one to install the OS and another to install language packs. Run the OS installer. Once the installation is complete—and you’ve booted into your desktop—you can run the Language installer (if you need the language packs—Figure 1).

After the Language installer finishes, you can remove the installer icon from the desktop by right-clicking it and selecting Move to Trash.

Those fonts

The developer claims beautiful font rendering out of the box. In fact, the developer makes this very statement:

GeckoLinux comes preconfigured with what many would consider to be good font rendering, whereas many users find openSUSE’s default font configuration to be less than desirable.

Take a glance at Figure 2. Here you see a side-by-side comparison of openSUSE (on the left) and GeckoLinux (on the right). The difference is subtle, but GeckoLinux does, in fact, best openSUSE out of the box. It’s cleaner and easier to read. The developer’s claims are dead on. Although openSUSE does a very good job of rendering fonts out of the box, GeckoLinux improves on that enough to make a difference. In fact, I’d say it’s some of the cleanest (out of the box) font rendering I’ve seen on a Linux distribution.

I’ve worked with distributions that don’t render fonts well. After hours of writing, those fonts tend to put a strain on my eyes. For anyone who spends a good amount of time staring at words, well-rendered fonts can make the difference between having eye strain or not. The openSUSE font rendering is just slightly blurrier than that of GeckoLinux. That matters.

Installed applications

GeckoLinux does exactly what it claims—it installs just what you need. After a complete installation (no post-install upgrading), GeckoLinux comes in at 1.5GB installed. On the other hand, openSUSE’s post-install footprint is 4.3GB. In defense of openSUSE, it does install things like GNOME Games, Evolution, GIMP, and more—so much of that space is taken up by added software and dependencies. But if you’re looking for a lighter-weight take on openSUSE, GeckoLinux is your OS.

GeckoLinux does come pre-installed with a couple of nice additions—namely the Clementine audio player (a favorite of mine), Thunderbird (instead of Evolution), PulseAudio Volume Control (a must for audio power users), Qt Configuration, GParted, Pidgin, and VLC.

If you’re a developer, you won’t find much in the way of development tools on GeckoLinux. But that’s no different from openSUSE (even the make command is missing on both). Naturally, all the developer tools you need (to work on Linux) are available to install (either from the command line or from within YaST2).
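If you do want a toolchain, openSUSE’s package patterns make this quick from the command line. A minimal sketch follows; devel_basis is the stock openSUSE base development pattern, and since GeckoLinux pulls unmodified packages from the openSUSE repositories, it should apply there as well:

```shell
# Refresh repository metadata first
sudo zypper refresh

# Pull in the base development pattern (gcc, make, headers, and friends)
sudo zypper install -t pattern devel_basis

# Or cherry-pick individual tools instead
sudo zypper install gcc make git
```

The same pattern can be installed graphically through YaST2’s Software Management module under its Patterns view.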

Performance
Between openSUSE and GeckoLinux, there is very little noticeable difference in performance. Opening Firefox on both resulted in maybe a second or two of variation (in favor of GeckoLinux). It should be noted, however, that the installed Firefox on both was quite out of date (52 on GeckoLinux and 53 on openSUSE). Even after a full upgrade on both platforms, Firefox was still at release 52 on GeckoLinux, whereas openSUSE did pick up Firefox 57. After downloading the Firefox Quantum package on GeckoLinux, the application opened immediately—completely blowing away the out-of-the-box experience on both openSUSE and GeckoLinux. So the first thing you will want to do is upgrade Firefox to 57.

If you’re hoping for a significant performance increase over openSUSE, look elsewhere. If you’re accustomed to the performance of openSUSE (it not being the sprightliest of platforms), you’ll feel right at home with GeckoLinux.

The conclusion

If you’re looking for an excuse to venture back into the realm of openSUSE, GeckoLinux might be a good reason. It’s slightly better looking, lighter weight, and with similar performance. It’s not perfect and, chances are, it won’t steal you away from your distribution of choice, but GeckoLinux is a solid entry in the realm of Linux desktops.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Software-Defined Data Centers: VMware Designs

This section presents best practices and proven practices for how a design for all components in the SDDC might look. It highlights a possible cluster layout, including a detailed description of what needs to be put where, and why a certain configuration needs to be made.

Typically, every design should have an overview so the reader can quickly understand what the solution is going to look like and how the major components are related. For the SDDC, one could start by drawing the vSphere clusters, including their functions.

Logical overview of the SDDC clusters

The following image describes an SDDC that is going to run on the three-cluster approach:


The three clusters are as follows:

  • The management cluster for all SDDC managing services
  • The NSX edge cluster where all the north-south network traffic is flowing through
  • The actual payload cluster where the production VMs get deployed

Tip: Newer best practices from VMware, as described in the VMware validated designs (VVD) version 3.0, also propose a two-cluster approach. In this case, the edge cluster is not needed anymore and all edge VMs are deployed directly onto the payload cluster. This can be a better choice from a cost and scalability perspective. However, it is important to choose the model according to the requirements and constraints found in the design.

The overview should be only as complex as necessary, since its purpose is to give a quick impression of the solution and its configuration. Typically, there are a few of these overviews for each section.

This forms a basic SDDC design where the edge and the management cluster are separated. According to the latest VMware best practices, payload and edge VMs can also run on the same cluster. This is basically a decision based on the scale and size of the entire environment. Often it is also driven by a limit or a requirement — for example, that edge hosts need to be physically separated from management hosts.

Logical overview of solution components

This is as important as the cluster overview and should describe the basic structure of the SDDC components, including possible connections to third-party integrations such as IPAM.

Also, it should provide a basic understanding of the relationships between the different solutions.


It is important to understand these components and how they work together. This will become important during the deployment of the SDDC, since none of these components should be left out or configured wrong. This is especially important for the vRealize Log Insight connections.

Note: If not all components are configured to send their logs into vRealize Log Insight, there will be gaps, which can make troubleshooting very difficult or even impossible. A plan, which describes the relation, can be very helpful during this step of the SDDC configuration.
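As a concrete illustration, pointing an ESXi host’s logs at Log Insight is a short exercise with the standard esxcli syslog namespace. This is a sketch, and the appliance hostname is an assumed placeholder:

```shell
# Point the host's syslog at the Log Insight appliance
# (loginsight.example.local is a hypothetical hostname)
esxcli system syslog config set --loghost='udp://loginsight.example.local:514'

# Apply the new syslog configuration
esxcli system syslog reload

# Allow outbound syslog traffic through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

The same forwarding needs to be configured (via their respective UIs or APIs) for every other SDDC component, which is exactly what the relationship plan helps verify.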

These connections should also be reflected in a table to show the relationships and confirm that everything has been set up correctly. The more detailed the design, the lower the chance that something gets configured wrong or forgotten during the installation.

The vRealize Automation design

Based on the use case, there are two setup designs that vRealize Automation 7 supports at installation time.

Small: Small stands for a very dense and easy-to-deploy design. It is not recommended for any enterprise workloads or even for production. But it is ideal for a proof of concept (PoC) environment, or for a small dev/test environment to play around with SDDC principles and functions.

The key to the small deployment is that all the IaaS components can reside on one single Windows VM. Optionally, additional DEMs can be attached, which eases future scaling. However, this setup has one fundamental disadvantage: there is no built-in resilience or HA for the portal or DEM layer. This means that every glitch in one of these components will affect the entire SDDC.

Enterprise: Although this is a more complex way to install vRealize Automation, this option will be ready for production use cases and is meant to serve big environments. All the components in this design will be distributed across multiple VMs to enable resiliency and high availability.


In this design, the vRealize Automation OVA (vApp) is running twice. To enable true resilience, a load balancer needs to be configured. The users access the load balancer and get forwarded to one of the portals. VMware has good documentation on configuring NSX as a load balancer for this purpose, as well as the F5 load balancer. Basically, any load balancer can be used, as long as it supports HTTP protocol checks.

Note: A DNS alias or MS load balancing should not be used for this, since these methods cannot determine whether the target server is still alive. According to VMware, the load balancer requires health checks to know whether each of the vRA apps is still available. If these checks are not implemented, the user will get an error when trying to access the broken vRA instance.
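To make the difference concrete, here is a minimal sketch of such a protocol check — the kind of probe a load balancer runs against each node before forwarding traffic to it. The node names and the /health path are hypothetical placeholders, not the actual vRA endpoints:

```shell
# Return success only if the node answers the HTTPS check within 5 seconds.
# A DNS alias keeps resolving no matter what; this probe actually talks to the app.
probe() {
  curl -kfsS --max-time 5 "https://$1/health" >/dev/null 2>&1
}

# Evaluate each (hypothetical) vRA appliance and report its pool status
for node in vra-app-01.example.local vra-app-02.example.local; do
  if probe "$node"; then
    echo "$node: in service"
  else
    echo "$node: down - take it out of the pool"
  fi
done
```

A real load balancer runs the equivalent of this loop continuously and removes failing members automatically; that is the behavior a DNS alias or MS NLB cannot provide.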

In addition to the load balancer for the vRealize Automation portal, there has to be a load balancer for the web server components. These components will also be installed on separate Windows VMs. The load balancer for these components has the same requirements as the one for the vRealize Automation instances.

The active web server should host only the first web component of vRA, while the second (passive) web server can host components 2, 3, and so on.

Finally, the DEM workers have to be doubled and put behind a load balancer to ensure that the whole solution is resilient and can survive an outage of any one of the components.

Tip: If this design is used, the VMs for the different solutions need to run on different ESXi hosts in order to guarantee full resiliency and high availability. Therefore, VM anti-affinity rules must be used to ensure that the DEMs, web servers, or vRA appliances never run on the same ESXi host. It is very important to set this rule; otherwise, a single ESXi outage might affect the entire SDDC.

This is one of VMware’s suggested reference designs to ensure vRA availability for users requesting services. Although it is only a suggestion, it is highly recommended for a production environment. Despite all the complexity, it offers the highest grade of availability and ensures that the SDDC stays operative even if parts of the management stack have trouble.

Tip: vSphere HA cannot deliver this grade of availability, since the VM would power off and on again. This can be harmful in an SDDC environment. Also, the startup order is important for bringing operations back up. Since HA can’t really take care of that, it might power the VM back on at a surviving host, but the SDDC might still be unusable due to connection errors (wrong order, stalled communication, and so on).

Once the decision is made for one of these designs, it should be documented in the setup section as well. Also, take care that none of the limits, assumptions, or requirements are violated by that decision.

Another mechanism of resiliency is to ensure that the required vRA SQL database is configured as an SQL cluster. This ensures that no single point of failure can affect this component. Typically, big organizations already have some form of SQL cluster running where the vRA database could be installed. If this isn’t a possibility, it is strongly recommended to set up such a cluster in order to protect the database as well. This should be documented in the design as a requirement for the vRA installation.

This tutorial is a chapter excerpt from “Building VMware Software-Defined Data Centers” by Valentin Hamburger. Use the code ORSCP50 at checkout to save 50% on the recommended retail price until Dec. 15.

Source link