Tag Archives: linux

AMD Posts New “AMD-PSTATE” CPUFreq Driver Leveraging CPPC For Better Perf-Per-Watt



At last! AMD has posted the Linux kernel driver patches for their new “AMD-PSTATE” driver! This driver allows modern AMD Zen CPUs (initially limited to Zen 3) to achieve greater performance-per-Watt / power efficiency on Linux than the conventional ACPI CPUFreq driver.

The new AMD-PSTATE driver is akin to Intel’s P-State driver that Intel CPUs have long used because it caters to their hardware better than the generic ACPI CPUFreq driver. AMD-PSTATE leverages ACPI Collaborative Processor Performance Controls (CPPC) to make more informed performance state decisions.
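For readers who want to check which scaling driver their system is using once these patches land, the standard cpufreq sysfs interface exposes that information. Below is a minimal sketch, assuming the driver registers itself under the usual cpufreq sysfs attributes and reports its name as “amd-pstate”:

```python
# Minimal sketch: inspect the active cpufreq scaling driver and governor
# via the standard sysfs attributes. The "amd-pstate" name is an assumption
# based on the posted patches; adjust if the merged driver reports differently.
from pathlib import Path

CPU0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read_attr(name: str) -> str:
    path = CPU0 / name
    return path.read_text().strip() if path.exists() else "n/a"

driver = read_attr("scaling_driver")
governor = read_attr("scaling_governor")
print(f"scaling driver : {driver}")
print(f"governor       : {governor}")
if driver == "amd-pstate":
    print("AMD-PSTATE is active (CPPC-based scaling).")
else:
    print("Still using", driver, "- e.g. acpi-cpufreq on older kernels.")
```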

ACPI CPPC has been supported since Zen 2 processors, but the initial AMD-PSTATE driver is limited to just Zen 3 processors. AMD says it will extend coverage over time, which would presumably mean adding Zen 2 support as well.

It was back in July 2019 that AMD originally posted “amd_cpufreq” as a CPPC-based driver, right as they were launching the Zen 2 processors. However, that patch set was abandoned and never made it to mainline. Over the past two years I routinely asked AMD about the CPPC Linux support, to which they cited a lack of resources, so it is great to see this new AMD CPUFreq driver finally materializing.

It’s not entirely unexpected though. Last month I wrote about AMD and Valve working on a new CPU performance scaling design. That prior article basically laid out that it would likely be the long-awaited CPPC-based approach, and this new driver patch series is delivering on just that.

Further pointing to the Valve / Steam Deck connection is that the initial AMD P-State patches were tested on AMD Cezanne APUs. AMD posted a few tests with the patch series showing nice gains from this AMD-PSTATE driver compared to ACPI CPUFreq.

The code was posted today across 19 patches. Needless to say, I am currently building a new kernel with these patches and will be delivering a number of AMD Ryzen 5000 series and EPYC 7003 series benchmarks over the coming days looking at the performance and power efficiency of this new driver. It has been more than two years since the original AMD CPPC patches were posted, but it is great to see this effort come about; hopefully this time around it has enough momentum to be reviewed promptly and mainlined, benefiting the Steam Deck and the growing number of AMD Linux users at large. Stay tuned!


Container Security Best Practices | Network Computing


The evolution of container technology propels interest in tools aimed at securing microservices. This article will shed light on the solutions that exist in this area and the top threats to container infrastructure. It will also explain who should be responsible for safeguarding such applications in a company.

Container-based microservices are increasingly popular and are used to accomplish a variety of tasks. However, traditional security measures may be inefficient in virtual environments. Let’s discuss what containerized applications are and how to secure them.

What are containers?

Containers are lightweight, isolated environments, often likened to small virtual machines, designed to simplify and speed up the development process. The term is closely related to cloud-native applications, which are independent cloud services built on four basic elements: the DevOps paradigm, the implementation of a CI/CD pipeline, application development according to a microservice architecture, and the use of containers along with their orchestration tools.

The emergence of containers was a response to the poor implementation of multitasking in modern operating systems. Containerization first and foremost helps when working with open-source products by running them on a single server, which would be much harder to do with standard operating system features alone. Containers can be provided as a service, delivered from the cloud, or deployed within a customer’s computing infrastructure.

Orchestration tools are important elements of the container ecosystem. They underlie load balancing, fault tolerance, and centralized management. As a result, these instruments create conditions for system scaling. Orchestration can be implemented in four ways:

  • Cloud provider service;
  • A self-deployed Kubernetes cluster;
  • Container management systems intended for developers;
  • Container management systems focused on usability.

There are three main stages of the container lifecycle. First, the container is built and undergoes functional and load tests. Next comes the storage phase, when the container is in the image registry, waiting to be launched. Container runtime is the third stage.

What can undermine container security?

In December 2020, cloud security company Prevasio examined about 4 million containers hosted at Docker Hub and found that 51% of them were riddled with critical vulnerabilities, and 13% contained high-impact loopholes. Coin miners, hacker tools, and other types of malware, including ransomware, were detected inside the crudely secured container images. Only a fifth of all the analyzed images had no known vulnerabilities. These findings show the big picture: containers are susceptible to serious threats.

The security of the infrastructure that hosts containers is hugely important in this context. In addition to the proper configuration of orchestration systems, a well-thought-out set of permissions for accessing the Docker node or the Kubernetes cluster plays a major role. Another aspect is the protection of the container itself, which largely depends on the security of the images that were used to build it.

The later a vulnerability is identified, the harder it is to fix. This is the gist of the Shift Left paradigm, which recommends focusing on security as early in the product lifecycle as the design or requirements gathering stage. Automatic security checks can also be embedded into the CI/CD pipeline.
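As an illustration of that last point, a CI job can fail the build when a freshly built image contains known high-severity vulnerabilities. The sketch below is one hypothetical way to wire this up in Python around an open-source image scanner; Trivy is used purely as an example, and the image name and severity threshold are placeholders:

```python
# Hypothetical CI step: scan a freshly built container image and fail the
# pipeline if high/critical vulnerabilities are reported. Trivy is used as
# an example scanner; the image tag below is a placeholder.
import subprocess
import sys

IMAGE = "registry.example.com/myapp:candidate"  # placeholder image tag

result = subprocess.run(
    [
        "trivy", "image",
        "--severity", "HIGH,CRITICAL",
        "--exit-code", "1",   # non-zero exit when findings match the filter
        IMAGE,
    ],
    check=False,
)

if result.returncode != 0:
    print("Security gate failed: high/critical vulnerabilities found.")
    sys.exit(1)
print("Image passed the vulnerability gate.")
```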

Slip-ups during the continuous integration (CI) phase are risky business as well. For instance, the use of questionably safe third-party services for testing may leak data from the product. Therefore, container security should be approached holistically, with each stage of the software engineering lifecycle being subject to thorough analysis. The containerization boom has also raised the issue of trust with regard to the environment, the code, and the running applications.

There are four levels of security for cloud-native applications: code security, build security, deployment security, and runtime security. Each of these includes several elements that need to be addressed. At the code security level, for instance, these are secure development and open-source component management. When it comes to container security in general, it essentially boils down to controlling integrity, delimiting access to the pipeline, and ensuring that vulnerabilities are identified before a product is released.

Information security professionals traditionally work in real time, blocking problems “here and now.” The use of unified application deployment tools (and containers are one way to unify this process) also allows testing a product before it is deployed. Containers can therefore be checked in advance for malicious code and vulnerable components, for secrets left behind, and for policy violations.

To elaborate further on container security, it is also worth touching upon the target audience of specialized InfoSec products. Are these systems intended for information security specialists, or are they closer to developers and users? There is no short answer. Some are more focused on InfoSec experts; some are oriented towards building interoperability between security teams, cluster administrators, and developers, while others provide visibility into containers, allowing you to understand how the application is coded and how it works.

Managing secrets in containerized environments

Containerized microservices communicate with each other and external systems by establishing secure connections, performing authentication with usernames and passwords, and using other types of secrets. How do you protect keys, passwords, and other sensitive data in containers from leaking? How is this issue addressed in Kubernetes? Is it possible to control this aspect of security?

Kubernetes comes with a basic mechanism for managing secrets, which prevents keys and passwords from being stored in plaintext. In addition, there are separate products on the market that serve as secrets management tools for container environments.
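As a rough illustration of that built-in mechanism, the official Kubernetes Python client can create a Secret object so credentials never have to be baked into an image or a pod spec. The namespace, secret name, and values below are purely illustrative:

```python
# Illustrative sketch: create a Kubernetes Secret with the official Python
# client instead of hard-coding credentials into images or manifests.
# Namespace, secret name, and values are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    string_data={"username": "app", "password": "change-me"},
    type="Opaque",
)
v1.create_namespaced_secret(namespace="default", body=secret)
```

A pod can then consume the secret through an environment variable or a mounted volume instead of carrying the credentials in its image.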

The need for such additional tools stems from the fact that Kubernetes lacks a mechanism for managing the lifecycle of secrets. Another noteworthy fact is that secrets are stored in plain text (only base64-encoded) by default, which means that a provider may be able to access them during the deployment of a containerized environment in the cloud.

It is also important to manage the process of adding secrets, control the use of keys down the road, and specify restrictions that kick in when one container tries to access the secret data of another. This layer of the problem requires the development of security policies and other techniques to manage sensitive data. One more challenge comes down to the immaturity and high volatility of the container orchestration market. By and large, a clear understanding of how to properly implement secrets management has yet to emerge.

Traditional defenses in container-based ecosystems

Let’s now figure out if traditional security tools, such as data loss prevention (DLP), web application firewall (WAF), network traffic analysis (NTA), and others, can be used to secure virtual cluster networks and containers.

Classic next-generation firewall (NGFW) systems cannot efficiently control traffic in virtual cluster networks. Special NGFW tools that run inside a cluster can do the trick. Essentially, these are containers that monitor data in transit.

It is not always necessary to embed protection tools into a container, as this increases the complexity of the application. In some cases, it makes more sense to use traditional security solutions. The choice of a defense method depends on the specific company and the set of tools already in use. Furthermore, there are specially crafted instruments that supervise containers while they are running and quickly rebuild them if problems are spotted.

That said, if Kubernetes is used as a service, traditional security tools simply won’t be deployable. On the other hand, if the container orchestration system is hosted on-premises, a full range of tools can be used to protect it.

The security principles for conventional infrastructure and containerization are basically the same, but their implementation may differ. The security tool must understand the environment it is safeguarding.

Who is responsible for container security?

It is also worth discussing who should be responsible for container infrastructure protection in an organization – information security specialists or developers. What expertise should these people have? In the case of containers, the usual roles of teams are reversed, and the principle of “who developed it owns it” applies.

The task of managing the defenses is assigned to the developers, but a separate team of InfoSec specialists sets the security rules and investigates incidents. The department responsible for information security most often acts as the customer for the implementation of container technology protections. Sometimes the development team gets involved, and almost never the operations team.

As for knowledge and skills, an understanding of the infrastructure, proficiency in Linux and Kubernetes, and a desire to learn are the most important qualities for a specialist responsible for container security.




Outdated Linux Versions, Misconfigurations Triggering Cloud Attacks: Report


The “Linux Threat Report 2021 1H” from Trend Micro found that Linux cloud operating systems are heavily targeted for cyberattacks, with nearly 13 million detections in the first half of this year. As organizations expand their footprint in the cloud, they are correspondingly exposed to the pervasive threats that exist in the Linux landscape.

This latest threat report, released Aug. 23, provides an in-depth look at the Linux threat landscape. It discusses several pressing security issues that affect Linux running in the cloud.

Key findings include that Linux is powerful, universal, and dependable, but not devoid of flaws, according to the researchers; like other operating systems, it remains susceptible to attacks.

Linux in the cloud powers most infrastructures, and Linux users make up the majority of the Trend Micro Cloud One enterprise customer base at 61 percent, compared to 39 percent Windows users.

The data comes from the Trend Micro Smart Protection Network (SPN), the data reservoir for all detections across all of Trend Micro’s products. The results show enterprise Linux at considerable risk from system configuration mistakes and outdated Linux distributions.

For instance, data from the internet scan engine Censys.io revealed nearly 14 million results for exposed devices running some sort of Linux operating system as of July 6, 2021. A Shodan search for port 22, the port commonly used for Secure Shell (SSH) on Linux-based machines, showed almost 19 million exposed devices as of July 27, 2021.
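That kind of exposure check is easy to reproduce. Assuming you have a Shodan API key, the official Python library can count hosts with port 22 exposed; the query and key below are placeholders, and Censys offers a comparable API:

```python
# Sketch: count internet-exposed hosts with port 22 (SSH) open via the
# official Shodan Python library. The API key is a placeholder.
import shodan

api = shodan.Shodan("YOUR_API_KEY")
result = api.count("port:22")          # count() avoids pulling full results
print(f"Hosts exposing port 22: {result['total']}")
```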

As with any operating system, security depends entirely on how you use, configure, and manage it. Each new Linux update tries to improve security, but to get the value you must enable and configure those features correctly, cautioned Joseph Carson, chief security scientist and advisory CISO at Thycotic.

“The state of Linux security today is rather good and has evolved in a positive way, with much more visibility and security features built-in. Nevertheless, like many operating systems, you must install, configure, and manage it with security in mind — as how cybercriminals take advantage is the human touch,” he told LinuxInsider.

Top Linux Threats

The Trend Micro report disclosed the malware families running rampant within Linux systems. Unlike previous reports based on malware types, this study focused on the prevalence of Linux as an operating system and the pervasiveness of the various threats and vulnerabilities that stalk the OS.

That approach showed that the top three threat detections originated in the U.S. (almost 40 percent), Thailand (19 percent), and Singapore (14 percent).

Many detections arose from systems running end-of-life versions of Linux distributions, chief among them CentOS versions 7.4 to 7.9 (almost 44 percent), CloudLinux Server (more than 40 percent), and Ubuntu (about 7 percent).


Trend Micro tracked more than 13 million malware events flagged by its sensors. Researchers then compiled a list of the prominent threat types, consolidated from the top 10 malware families affecting Linux servers from Jan. 1 to June 30, 2021.

The top threat types found in Linux systems in the first half of 2021 are:

  • Coinminers (24.56 percent)
  • Web shell (19.92 percent)
  • Ransomware (11.56 percent)
  • Trojans (9.56 percent)
  • Others (3.15 percent)

The top four Linux distributions where the top threat types in Linux systems were found in H1-2021 are:

  • CentOS Linux (50.80 percent)
  • CloudLinux Server (31.24 percent)
  • Ubuntu Server (9.56 percent)
  • Red Hat Enterprise Linux Server (2.73 percent)

Top malware families include:

  • Coinminers (25 percent)
  • Web shells (20 percent)
  • Ransomware (12 percent)

CentOS Linux and CloudLinux Server are the top Linux distributions with the found threat types, while web application attacks happen to be the most common attack vector.

Web Apps Top Targets

Most of the applications and workloads exposed to the internet run web applications. Web application attacks are among the most common attack vectors in Trend Micro’s telemetry, said researchers.

If launched successfully, web app attacks allow hackers to execute arbitrary scripts and compromise secrets. Web app attacks also can modify, extract, or destroy data. The research shows that 76 percent of the attacks are web-based.

The LAMP stack (Linux, Apache, MySQL, PHP) made it inexpensive and easy to create web applications. In a very real way, it democratized the internet so anyone can set up a web application, according to John Bambenek, threat intelligence advisor at Netenrich.

“The problem with that is that anyone can set up a web app. While we are still waiting for the year of Linux on the desktop, it is important for organizations to use best practices for their web presences. Typically, this means staying on top of CMS patches/updates and routine scanning with even open-source tools (like the Zed Attack Proxy) to find and remediate SQL injection vulnerabilities,” he told LinuxInsider.
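To make that advice concrete, here is a hedged sketch of driving a locally running ZAP instance from Python with its API client. The target URL, API key, and local proxy address are assumptions, and ZAP must already be running:

```python
# Sketch: drive a locally running OWASP ZAP instance to spider and
# actively scan a site, then print any alerts (e.g. SQL injection).
# Target URL, API key, and proxy address are placeholders, and ZAP
# must already be listening on 127.0.0.1:8080.
import time
from zapv2 import ZAPv2

TARGET = "https://staging.example.com"  # placeholder
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], "-", alert["alert"])
```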

The report referenced the Open Web Application Security Project (OWASP) top 10 security risks, which show injection flaws and cross-site scripting (XSS) attacks remaining as prevalent as ever. What strikes Trend Micro researchers as significant is the high number of insecure deserialization vulnerabilities.


This is partly due to the ubiquity of Java and the deserialization vulnerabilities in it, according to Trend Micro. Its report also noted the Liferay Portal, Ruby on Rails, and Red Hat JBoss deserialization vulnerabilities as being prominent.

Attackers also try to exploit broken authentication vulnerabilities to gain unauthorized access to systems. The number of command injection hits was also a surprise, coming in higher than Trend Micro’s analysts expected.

Expected Trend

It is no surprise that the majority of these attacks are web-based. Every website is different, written by different developers with different skill sets, observed Shawn Smith, director of infrastructure at nVisium.

“There is a wide range of different frameworks across a multitude of languages with various components that all have their own advantages and drawbacks. Combine this with the fact that not all developers are security gurus, and you’ve got an incredibly alluring target,” he told LinuxInsider.

Web servers are one of the most common services to expose to the internet because most of the world interacts with the internet through websites. There are other areas exposed — like FTP or IRC servers — but the vast majority of the world is using websites as their main contact point to the internet.

“As a result, this is where attackers will focus to get the biggest return on investment for their time spent,” Smith said.

OSS Linked to Supply Chain Attacks

Software supply chains must be secured to deal with the Linux attack landscape as well, noted the Trend Micro report. Attackers can insert malicious code to compromise software components supplied by third parties. That code then connects to a command-and-control server to download and deploy backdoors and other malicious payloads within the system.

This can lead to remote code execution on an enterprise’s systems and computing resources. Supply chain attacks can also come from misconfigurations, which are the second most common incident type in cloud-native environments, according to the Trend Micro report. More than 56 percent of survey respondents had a misconfiguration or known unpatched vulnerability incident involving their cloud-native applications.

Hackers are having an easy time. “The major attack types on web-based applications have remained constant over the recent past. That, combined with the rising time-to-fix and declining remediation rates, makes the hackers’ job easier,” said Setu Kulkarni, vice president of strategy at NTT Application Security.

Organizations need to test applications in production, figuring out what their top three-to-five vulnerability types are. Then launch a targeted campaign to address them, rinse, and repeat, he recommended.

The “Linux Threat Report 2021 1H” is available here.




Latest POP_OS! Release Brings COSMIC Overtones


When I reviewed POP!_OS 20.04 in May 2020, I saw its potential to be one of the best starting points for any new Linux user.

The latest release, POP!_OS Linux 21.04, issued June 29, clearly shows that the in-house tweaking of the GNOME desktop into the COSMIC GNOME-based desktop makes the distro even more inviting.

Given this distro’s rising popularity, it will continue to hold that distinction. COSMIC is an attractive offering for seasoned Linux users as well.

That is a bold statement, but developer System76 has made some bold moves to push this distro to the forefront and spark its popularity among newcomers to Linux — as well as with seasoned users. That was true for the changeover to a modified GNOME desktop last year. It is even truer with this latest release’s added COSMIC polish to GNOME.

COSMIC stands for Computer Operating System Main Interface Components. While it is not an out-of-this-world or strikingly new desktop environment, it does provide enough change to the traditional GNOME user interface to be better than the original.

That has been System76’s goal from the get-go. The company has refined the desktop experience primarily for its own line of Linux-powered computers. But even running POP_OS! on your own unoptimized hardware, this Linux distribution soars like a heavenly creature.

What’s Up with COSMIC

POP!_OS 21.04, based on Ubuntu 21.04 (Hirsute Hippo), is the first release of System76’s distribution to ship its own revamped GNOME desktop environment. Earlier releases were based on stock GNOME with additional System76 tweaks.

Numerous distro makers using the GNOME desktop modify its user interface. So that is not a remarkable innovation at all.

What is noteworthy, however, is the subtlety of the innovations that produce a much better hands-on experience using GNOME’s underpinnings. I am not a zealous fan of GNOME in almost any modified version. I find that the desktop environment is too inflexible in meeting the demands of my workflow.


Much of that displeasure is a reaction to how GNOME handles power-user features: fully functional panel bars and keyboard shortcuts that ease navigating around multiple open virtual workspaces are not easily accessible. GNOME just gets in the way of executing my on-screen workflow needs.

The modified COSMIC GNOME integration soothes and solves much of that workflow blockage. The COSMIC desktop comes with a fully customizable dock. It splits the Activities Overview function into Workspaces and Applications views. It provides the ability to open the launcher with the Super key, as well as various trackpad gestures.

The COSMIC desktop also brings streamlined launching and switching between applications. All these features make the interface simpler and more straightforward to use.

POP_OS! Workspaces

Meet the COSMIC layout. Workspace overview is still displayed in a vertical column when you click on the Workspaces button at the top left of the screen. You can also use the Show Workspaces button on the far left of the bottom dock or near the right side of the top panel.


More Under the Hood

In short, COSMIC with POP_OS! has just enough new options to deliver an adjusted GNOME desktop that satisfies my personal computing tastes and meets most of my workflow needs. Is it an all-around perfect computing solution? No! But it is much closer to meeting that goal without having to leave GNOME behind.

One standout example is the option to have minimize/maximize buttons for windows. Add to that the ability to tile windows with the mouse and to rearrange tiled windows by clicking and dragging them.

COSMIC also adds an ability to upgrade the recovery partition, an improved search feature, and a plugin system for the launcher to let you create your own plugins. Plus, the new release comes with updated components and a newer kernel from the upstream Ubuntu 21.04 release.

Another nice touch is being able to move the workspaces to the left or right edges of the screen. To do that, open Settings and go to Desktop | Workspaces.

But the System76 designers left a glaring old GNOME menu display in place. The application menu remains full screen. That might be a visual impediment to which new users will have to adjust. The popup or dropdown one- or two-column menu most Linux operating systems use is not a part of the COSMIC display.

POP_OS! Applications launcher

One thing that has not changed with COSMIC’s design is the full-screen applications launcher. Press the Applications button and then select the software category. You can see the selected category (in this case System applications) in the top square overlay. The full-screen menu with all software is somewhat visible under the displayed System folder.


A More Likable GNOME

POP_OS! is largely a “take it or leave it” offering. If you really like the GNOME environment, you should love how System76 morphed the UI into something unlike any other GNOME desktop revisions in any other Linux distro. If you are not familiar with GNOME yet, this is a much better version to make that introduction.

One example of this likability is how COSMIC handles workspaces. POP_OS! uses a vertical layout along the edge of the screen for the workspace overview. But the designers made up for that GNOME carryover somewhat by adding a Workspaces button in the top panel. I give designers credit for building in the ability to easily drag and drop applications to a different Workspace.

Another new element is the centered bottom dock. But I find the dock provides less utility than a fully functional bottom panel. Functionality should include more than just a holding spot for quick access apps.

Yes, the latest POP_OS! has a top panel that resembles a classic Linux layout, but this panel bar lacks full functionality. It does provide access to other system icons on the right end, and it includes the Workspaces button.

Unusual Tiling Option

Usually, a tiling window manager is a separate kind of desktop environment in the Linux distros that offer that option. POP_OS! instead includes it as a built-in option. Tiling windows is not for everyone, and in COSMIC the tiling window manager is highly tweaked.

The window tiling feature automates the process of arranging window sizes in split-screen configurations. But it is not a typical Linux feature that has universal appeal.

I doubt new users to POP_OS! will find it particularly endearing or useful. However, other components of COSMIC will certainly make trying this new release worthwhile; like trackpad gestures, for instance.

Keeping Track of Gestures

System76 seems quite committed to making gestures a new Linux OS staple for trackpads. Its designers have done a good job to make this a palatable feature.

If you are handy with the Chromebook platform, you no doubt already are proficient in using trackpad gestures. Lately, I use Chrome OS quite a bit. It is a nice change of pace and lets me combine the benefits of tablets and my favorite Linux applications. I think my growing affinity for Chromebooks has made me feel more at home with the latest release of POP_OS!.

The included gestures are:

  • Swipe four fingers right on the trackpad to open the Applications view;
  • Swipe four fingers left to open the Workspaces view;
  • Swipe four fingers up or down to switch to another workspace;
  • Swipe (in any direction) with three fingers to switch between open windows.

Trackpad gestures are a game-changer for desktop Linux in general and for POP_OS! in particular. They are efficient and user-friendly.

Bottom Line

The combination of an Ubuntu base and GNOME customization makes POP!_OS with the new COSMIC integration a winning choice. New features and more tweaking make this release extra productive.

The only decision you need to make before downloading POP_OS! concerns your hardware configuration. It must be a 64-bit system; this release will not run on older 32-bit computers.


Another factor is the type of graphics your system uses. One download ISO file is strictly for Nvidia graphics cards. Otherwise, click on the other ISO choice.

The only other hardware requirements to meet are 2 GB of RAM and at least 16 GB of storage.

If you like the performance that this latest POP_OS! release gives you on your current computer, sit back and enjoy. Then think about how super-fast it will run on a spiffy new System76 computer that enhances the optimized operating system software.

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me and I’ll consider them for a future column.

And use the Reader Comments feature below to provide your input!




Linux 5.15 Adds New Syscall To More Quickly Free Memory Of Dying Processes



To help out memory pressure / out-of-memory killing solutions like systemd-oomd or Android’s LMKD, Linux 5.15 is introducing the “process_mrelease” system call to more quickly free the memory of dying processes.

Earlier this summer I wrote about a proposed “process_reap” system call for more quickly reclaiming memory when under pressure. It’s that work that evolved into “process_mrelease” and this new system call is now ready to go for Linux 5.15.

The aim is that using this system call allows reclaiming the memory of a dying process more quickly and predictably than the status quo.


Linux — particularly Linux on the desktop — traditionally hasn’t coped too well when under memory pressure but there has been steady progress in recent years with systemd-oomd, various kernel innovations, and now process_mrelease being the latest work in this area.

The patch merged to Linux 5.15 by way of Andrew Morton’s patch series goes on to explain this process_mrelease system call:

For such system component it’s important to be able to free memory quickly and efficiently. Unfortunately the time process takes to free up its memory after receiving a SIGKILL might vary based on the state of the process (uninterruptible sleep), size and OPP level of the core the process is running. A mechanism to free resources of the target process in a more predictable way would improve system’s ability to control its memory pressure.

Introduce process_mrelease system call that releases memory of a dying process from the context of the caller. This way the memory is freed in a more controllable way with CPU affinity and priority of the caller. The workload of freeing the memory will also be charged to the caller. The operation is allowed only on a dying process.
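To make the flow concrete, here is a minimal, hedged sketch of how a userspace memory-pressure daemon might use the new call once running on a 5.15 kernel. The raw syscall numbers are an assumption: on x86_64 they should be 434 for pidfd_open and 448 for process_mrelease, but verify against your kernel headers, and they differ on other architectures.

```python
# Sketch: kill a process and immediately release its memory from the
# caller's context via process_mrelease (Linux 5.15+). Syscall numbers
# below are for x86_64 and are an assumption; check your kernel headers.
import ctypes
import os
import signal

libc = ctypes.CDLL(None, use_errno=True)
NR_PIDFD_OPEN = 434
NR_PROCESS_MRELEASE = 448

def kill_and_reap(pid: int) -> None:
    # Obtain a pidfd referring to the target process.
    pidfd = libc.syscall(NR_PIDFD_OPEN, pid, 0)
    if pidfd < 0:
        raise OSError(ctypes.get_errno(), "pidfd_open failed")
    try:
        # The target must be dying, so send SIGKILL first, then ask the
        # kernel to release its memory from our own context.
        os.kill(pid, signal.SIGKILL)
        if libc.syscall(NR_PROCESS_MRELEASE, pidfd, 0) < 0:
            raise OSError(ctypes.get_errno(), "process_mrelease failed")
    finally:
        os.close(pidfd)
```

The memory-release work is then charged to the caller and runs with the caller’s CPU affinity and priority, as the commit message above describes.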