Category Archives: Linux Tutorials

Podman is gaining rootless overlay support



What does a native overlayfs mean to you and your container workloads?
Dan Walsh
Sat, 6/12/2021 at 1:31pm


Image by Kawin Piboonsawat from Pixabay

Podman can use the native overlay file system with Linux kernel version 5.13 and later. Until now, we have been using fuse-overlayfs. The kernel gained rootless overlay support in 5.11, but a bug prevented SELinux from being used with the file system; that bug was fixed in 5.13.
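To try the native driver on a new enough kernel, the usual step is to make sure rootless storage is not forcing fuse-overlayfs. A sketch of the relevant section of `~/.config/containers/storage.conf` (exact paths and defaults vary by distribution; check the active driver with `podman info`):

```toml
[storage]
driver = "overlay"

[storage.options.overlay]
# Comment out (or remove) mount_program so Podman can use the kernel's
# native overlayfs instead of fuse-overlayfs:
# mount_program = "/usr/bin/fuse-overlayfs"
```

Note that switching storage options for an existing rootless setup may require resetting local storage (`podman system reset`), which removes existing containers and images.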

Topics:  
Containers  
Linux  
Podman  
Read More at Enable Sysadmin

Adoption of a “COVID-19 Vaccine Required” Approach for our Fall 2021 Event Line-up


After careful consideration, we have decided that the safest course of action for returning to in-person events this fall is to take a “COVID-19 vaccine required” approach to participating in-person. Events that will be taking this approach include:

  • Open Source Summit + Embedded Linux Conference (and co-located events), Sept 27-30, Seattle, WA
  • OSPOCon, Sept 27-29, Seattle, WA
  • Linux Security Summit, Sept 27-29, Seattle, WA
  • Open Source Strategy Forum, Oct 4-5, London, UK
  • OSPOCon Europe, Oct 6, London, UK
  • Open Networking & Edge Summit + Kubernetes on Edge Day, Oct 11-12, Los Angeles, CA
  • KubeCon + CloudNativeCon (and co-located events), Oct 11-15, Los Angeles, CA
  • The Linux Foundation Member Summit, Nov 2-4, Napa, CA
  • Open Source Strategy Forum, Nov 9-10, New York, NY

We are still evaluating whether to keep this requirement in place for events in December and beyond. We will share more information once we have an update.

Proof of full COVID-19 vaccination will be required to attend any of the events listed above. A person is considered fully vaccinated two weeks after the second dose of a two-dose series, or two weeks after a single dose of a one-dose vaccine.

Vaccination proof will be collected via a digitally secure vaccine verification application that will protect attendee data in accordance with EU GDPR, California CCPA, and US HIPAA regulations. Further details on the app we will be using, health and safety protocols that will be in place onsite at the events, and a full list of accepted vaccines will be added to individual event websites in the coming months. 

While this has been a difficult decision to make, the health and safety of our community and our attendees are of the utmost importance to us. Mandating vaccines will help instill confidence and alleviate concerns that some may still have about attending an event in person. Additionally, it helps us keep safe those community members who have not yet been able to get vaccinated or who are unable to get vaccinated.

This decision also allows us to be more flexible in pivoting with potential changes in guidelines that venues and municipalities may make as organizations and attendees return to in person events. Finally, it will allow for a more comprehensive event experience onsite by offering more flexibility in the structure of the event.

For those who are unable to attend in person, all of our Fall 2021 events will have a digital component that anyone can participate in virtually. Please visit individual event websites for more information on the virtual aspect of each event.

We hope everyone continues to stay safe, and we look forward to seeing you, either in person or virtually, this fall. 

The Linux Foundation

FAQ

Q: If I’ve already tested positive for COVID-19, do I still need to show proof of COVID-19 vaccination to attend in person?

A: Yes, you will still need to show proof of COVID-19 vaccination to attend in-person.

Q: Are there any special circumstances in which you will accept a negative COVID-19 test instead of proof of a COVID-19 vaccination? 

A: Unfortunately, no. For your own safety, as well as the safety of all our onsite attendees, everyone who is not vaccinated against COVID-19 will need to participate in these events virtually this year, and will not be able to attend in-person.

Q: I cannot get vaccinated for medical, religious, or other reasons. Does this mean I cannot attend?

A: For your own safety, as well as the safety of all our onsite attendees, everyone who is not vaccinated against COVID-19 – even due to medical, religious or other reasons – will need to participate in these events virtually this year, and will not be able to attend in-person.

Q: Will I need to wear a mask and socially distance at these events if everyone is vaccinated? 

A: Mask and social distancing requirements for each event will be determined closer to event dates, taking into consideration venue and municipality guidelines.

Q: Can I bring family members to any portion of an event (such as an evening reception) if they have not provided COVID-19 vaccination verification in the app? 

A: No. Anyone who attends any portion of an event in person will need to register for the event and upload COVID-19 vaccine verification into our application.

Q: Will you provide childcare onsite at events again this year?

A: Due to COVID-19 restrictions, we unfortunately cannot offer child care services onsite at events at this time. We can, however, provide a list of local childcare providers. We apologize for this disruption to our normal event plans. We will be making this service available as soon as we can for future events.

Q: Will international attendees (from outside the US) be able to attend? Will you accept international vaccinations?

A: Absolutely. As mentioned above, a full list of accepted vaccines will be added to individual event websites in the coming months. 

The post Adoption of a “COVID-19 Vaccine Required” Approach for our Fall 2021 Event Line-up appeared first on Linux Foundation.

Free Training Course Explores Software Bill of Materials


At the most basic level, a Software Bill of Materials (SBOM) is a list of components contained in a piece of software. It can be used to support the systematic review and approval of each component’s license terms, clarifying the obligations and restrictions as they apply to the distribution of the supplied software. This is important for reducing risk in organizations building software that uses open source components.

There is often confusion concerning the minimum data elements required for an SBOM and the reasoning behind why those elements are included. Understanding how components interact in a product is key for providing support for security processes, compliance processes, and other software supply chain use cases. 

This is why The Linux Foundation has taken the step of creating a free, online training course, Generating a Software Bill of Materials (LFC192). This course provides foundational knowledge about the options and the tools available for generating SBOMs, and will help with understanding the benefits of adopting SBOMs and how to use them to improve the ability to respond to cybersecurity needs. It is designed for directors, product managers, open source program office staff, security professionals, and developers in organizations building software. Participants will walk away with the ability to identify the minimum elements for an SBOM, an understanding of how those elements can be assembled, and familiarity with some of the open source tooling available to support the generation and consumption of an SBOM.
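For a sense of what those minimum elements look like in practice, here is a tiny sketch of an SBOM in SPDX 2.2 tag-value format. All names, versions, and the namespace URL below are made up for illustration; they are not from the course materials:

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdx/example-app-1.0
Creator: Tool: example-generator-0.1
Created: 2021-06-17T00:00:00Z

PackageName: busybox
SPDXID: SPDXRef-Package-busybox
PackageVersion: 1.27.2
PackageDownloadLocation: https://busybox.net/downloads/busybox-1.27.2.tar.bz2
PackageLicenseConcluded: GPL-2.0-only
```

Each package entry carries the supplier, name, version, and identifier information that SBOM consumers need to match components against license and vulnerability databases.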

The course takes around 90 minutes to complete. It features video content from Kate Stewart, VP, Dependable Embedded Systems at The Linux Foundation, who works with the safety, security, and license compliance communities to advance the adoption of best practices into embedded open source projects. A quiz is included to help confirm learnings.

Enroll today to start improving your development practices.

The post Free Training Course Explores Software Bill of Materials appeared first on Linux Foundation – Training.

Determining the Source of Truth for Software Components


Abstract: Having access to a list of software components and their respective meta-data is critical to performing various DevOps tasks successfully. After considering the varying requirements of the different tasks, we determined that representing a software component as a “collection of files” provided an optimal representation. Conversely, when file-level information is missing, most tasks become more costly or outright impossible to complete.

Introduction

Having access to the list of software components that comprise a software solution, sometimes referred to as the Software Bill of Materials (SBOM), is a requirement for the successful execution of the following DevOps tasks:

  • Open Source and Third-party license compliance
  • Security Vulnerability Management
  • Malware Protection
  • Export Compliance
  • Functionally Safe Certification

A community effort, led by the National Telecommunications and Information Administration (NTIA) [1], is underway to create an SBOM exchange format driven mainly by the Security Vulnerability Management task. The holy grail of an effective SBOM design is twofold:

  1. Define a commonly agreed-upon data structure that best represents a software component, and
  2. Devise a method that uniquely and effectively identifies each software component.

A component must represent a broad spectrum of software types including (but not limited to): a single source file, a library, an application executable, a container, a Linux runtime, or a more complex system composed of some combination of these types. For instance, one component might be a collection of source files (e.g., busybox 1.27.2), while another might be a collection of three containers together with a half dozen scripts and documentation.

Because we must handle an eclectic range of component types, finding the right granular level of representation is critical. If it is too large, we will not be able to represent all the component types and the corresponding meta-information required to support the various DevOps tasks. On the other hand, it may add unnecessary complexity, cost, and friction to adoption if it is too small.

Traditionally, components have been represented at the software package or archive level, where the name and version are the primary means of identifying the component. This has several challenges, with the two biggest ones being:

  1. The fact that two different software components can have the same name yet be different, and
  2. Conversely, two copies of software with different names could be identical.

Another traditional method is to rely on the hash of the software using one of several methods – e.g., SHA1, SHA256, or MD5. This works well when your software component represents a single file, but it presents a problem when describing more complex components composed of multiple files. For example, the same collection of source files (e.g., busybox-1.27.2 [2]) could be packaged using different archive methods (e.g., .zip, .gz, .bz2), resulting in the same set of files having different hashes due to the different archive methods used. 
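The effect is easy to demonstrate: packaging the same content with different archive methods yields different archive hashes, even though the content itself is unchanged. A minimal Python sketch (the file name and content are made up for illustration):

```python
import gzip
import hashlib
import io
import zipfile

# The same file content, packaged two different ways.
content = b"int main(void) { return 0; }\n"

# gzip archive of the content (mtime pinned for reproducibility)
gz_bytes = gzip.compress(content, mtime=0)

# zip archive of the same content
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("main.c", content)
zip_bytes = buf.getvalue()

# Hashing the archives gives two different identifiers...
print(hashlib.sha256(gz_bytes).hexdigest() ==
      hashlib.sha256(zip_bytes).hexdigest())   # False

# ...but hashing the content itself is stable regardless of packaging.
print(hashlib.sha256(gzip.decompress(gz_bytes)).hexdigest() ==
      hashlib.sha256(content).hexdigest())     # True
```

This is exactly why an identifier derived from the files, rather than from the archive, is preferable.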

After considering the different requirements for the various DevOps tasks listed above, and given the broad range of software component types, we concluded that representing a software component as a “collection of files” where the “file” serves as the atomic unit provides an optimal representation. 

This granular level enables access to metadata at the file level, leading to a higher quality outcome when performing the various DevOps tasks (e.g., file-level licensing for license compliance, file-level vulnerability data for security management, and file-level cryptography info for export compliance). To compute the unique identifier for a given component, we recommend taking the “hash” of all the “file hashes” of the files that comprise a component. This enables unique identification independent of how the files are packaged. We discuss this approach in more detail in the sections that follow.

Why the File Level Matters

To obtain the most accurate information to support the various DevOps tasks sufficiently, one would need access to metadata at the atomic file level. This should not be surprising given that files serve as the building blocks from which software is built. If we represented software at any higher level (e.g., just name and version), pertinent information would be lost. 

License Compliance

If you want to understand all the licenses that impose obligations and restrictions on a program or library, you will need to know all the licenses of the files from which it was built (derived). Although an open source component’s top-level license may be declared as one license, it is common to find a half dozen or more other licenses within the codebase, which typically impose additional obligations. Popular open source projects usually borrow from other projects with different licenses. The open sharing of code is the force behind the success of the Open Source movement. For this reason, we must accept license diversity as the rule rather than the exception.

This means that a project is often subject to the obligations of multiple licenses. Consider the impact of this on the use of busybox, which provides a lot of latitude regarding the features included in a build. How one configures busybox will determine which files are used. Knowing which files are used is the only way to know which licenses are applicable. For instance, although the top-level license is GPL-2.0, the source file math.c [3] has three licenses governing it (GPL-2.0, MIT, and BSD) because it was derived from three different projects.  

If one distributed a solution that includes an instance of busybox derived from math.c and provided a written offer for source code, one would need to reproduce their respective license notices in the documentation to comply with the MIT and BSD licenses. Furthermore, we have recently seen an open source component with Apache as the top-level license, yet deep within the bowels of the source code lies a set of proprietary files. These examples illustrate why having file level information is mission-critical.
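One common way to surface file-level license data is to scan source files for the standard `SPDX-License-Identifier` comment tag. A simplified Python sketch (real license scanners do far more, e.g., full-text license matching; the sample source text below is hypothetical):

```python
import re

# Matches an SPDX tag and captures the license expression that follows it,
# including compound expressions like "GPL-2.0 OR MIT".
SPDX_TAG = re.compile(
    r"SPDX-License-Identifier:\s*([\w.+-]+(?:\s+(?:AND|OR)\s+[\w.+-]+)*)"
)

def file_licenses(text: str) -> list[str]:
    """Return every SPDX license expression declared in a file's text."""
    return SPDX_TAG.findall(text)

# Example: a file carrying more than one governing license, as with math.c.
source = """\
// SPDX-License-Identifier: GPL-2.0
// SPDX-License-Identifier: MIT
double example_pow(double x, double y);
"""
print(file_licenses(source))  # ['GPL-2.0', 'MIT']
```

Aggregating these per-file results over the files actually used in a build yields the set of licenses whose obligations apply to the shipped binary.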

Security Vulnerability Management

The Heartbleed vulnerability was identified within the OpenSSL component in 2014. Many web servers used OpenSSL to provide secure communication between a browser and a website. If left unpatched, it would allow attackers unprecedented access to sensitive information such as login and password credentials [4]. This vulnerability could be isolated to a single line within a single file. Therefore, the easiest and most definitive way to understand whether one was exposed was to determine whether their instance of OpenSSL was built using that file.

The Amnesia:33 vulnerability announcement [5], reported in November 2020, suggested that any software solution that included the FNET component was affected. With only the name and version of the FNET component to go on, one would have incorrectly concluded that the Zephyr LTS 1.14 operating system was vulnerable. However, by examining the file-level source, one could quickly determine that the impacted files were not part of the Zephyr build, making it definitively clear that Zephyr was not in fact vulnerable [6]. Having to conduct a product recall when a product is not affected would be highly unproductive and costly. In the absence of file-level information, however, this analysis would not have been possible, likely causing unnecessary worry, work, and cost. These examples further illustrate why having access to file-level information is mission-critical.

Export Compliance

The output quality of an Export Compliance program also depends on having access to file-level data. Although different governments have different rules and requirements concerning software export license compliance, most policies center around the use of cryptography methods and algorithms. To understand which cryptography libraries and algorithms are implemented, one needs to inspect the file-level source code. Depending on how a given software solution is built and which cryptography-based files are used (or not used), one should classify the software concerning the different jurisdiction policies. Having access to file-level data would also enable one to determine the classification for any given jurisdiction dynamically. The requirements of the export compliance task also mean that knowing what is at the file level is mission-critical.

Functional Safety

The objective of the functional safety software certification process is to mitigate the unacceptable risk of physical injury or of damage to the health of people and/or property. The standards that govern functional safety (e.g., IEC 61508, ISO 26262, …) require that the full system context be known to assess and mitigate risk successfully. Full system transparency therefore requires verification and validation at the source file level, which includes understanding all the source code and build tools used and how they were configured. The inclusion of components of unknown content and provenance would increase risk and prohibit most certifications. Thus, functionally safe certification represents yet another task where having file-level information becomes mission-critical.

Component Identification

One of the biggest challenges in managing software components is the ability to identify each one uniquely. A high-confidence method must ensure that two copies of a component are identified as the same when their contents are identical, and as different when they are not. Furthermore, we want to avoid creating a dependency on a central component registry as a requirement for determining a component’s identifier. Therefore, an additional requirement is to be able to compute a unique identifier simply by examining the component’s contents.

Understanding a component’s file-level composition can play a critical role in designing such a method. Recall that our goal is to allow a software component to represent a wide spectrum of component types ranging from a single source file to a collection of containers and other files. Each component could therefore be broken down into a collection of files. This representation enables the construction of a method that can uniquely identify any given component. 

File hash methods such as SHA1, SHA256, and MD5 are effective at uniquely identifying a single file. However, when representing a component as a collection of files, we can uniquely represent it by creating a meta-hash – i.e., by taking the hash of “all file hashes” of the files that comprise the component. That is: i) generate a hash for each file (e.g., using SHA256), ii) sort the list of hashes, and iii) take the hash of the sorted list. Thus, the meta-hash approach enables us to uniquely identify a component based solely on its content, with no registry or repository of truth required.
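The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not a standardized algorithm: SHA-256 as the per-file hash and the newline-joined encoding of the sorted list are our choices here:

```python
import hashlib
from typing import Iterable

def file_hash(data: bytes) -> str:
    """Step i: hash the contents of one file."""
    return hashlib.sha256(data).hexdigest()

def component_id(files: Iterable[bytes]) -> str:
    """Steps ii-iii: sort the per-file hashes, then hash the sorted list."""
    hashes = sorted(file_hash(f) for f in files)
    return hashlib.sha256("\n".join(hashes).encode()).hexdigest()

# The identifier depends only on content, not on file order or packaging.
a = component_id([b"main.c contents", b"util.c contents"])
b = component_id([b"util.c contents", b"main.c contents"])
print(a == b)  # True
```

Because the per-file hashes are sorted before the final hash, the identifier is stable regardless of traversal order, archive format, or file names, which is precisely the property the name-and-version and whole-archive approaches lack.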

Conclusion

Having access to software components and their respective metadata is mission-critical to executing various DevOps tasks. Therefore, it is vital to establish the right level of granularity to ensure we can capture all the required data. This challenge is further complicated by the need to handle an eclectic range of component types. Therefore, finding the right granular level of representation is critical. If it is too large, we will not represent all the component types and the meta-information needed to support the DevOps function. If it is too small, we could add unnecessary complexity, cost, and friction to adoption. We have determined that a file-level representation is optimal for representing the various component types, capturing all the necessary information, and providing an effective method to identify components uniquely.

References

[1] NTIA: Software Bill of Materials web page, https://www.ntia.gov/SBOM

[2] Busybox Project, https://busybox.net/

[3] Software Heritage archive: math.c, https://archive.softwareheritage.org/api/1/content/sha1:695d7abcac1da03e484bcb0defbee53d4652c347/raw/ 

[4] Wikipedia: Heartbleed, https://en.wikipedia.org/wiki/Heartbleed

[5] “AMNESIA:33: Researchers Disclose 33 Vulnerabilities Across Four Open Source TCP/IP Libraries”, https://www.tenable.com/blog/amnesia33-researchers-disclose-33-vulnerabilities-tcpip-libraries-uip-fnet-picotcp-nutnet

[6] Zephyr Security Update on Amnesia:33, https://www.zephyrproject.org/zephyr-security-update-on-amnesia33/

Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices


The Linux Foundation responds to increasing demand for SBOMs that can improve supply chain security

SAN FRANCISCO, June 17, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced new industry research, training, and tools – backed by the SPDX industry standard – to accelerate the use of a Software Bill of Materials (SBOM) in secure software development.

The Linux Foundation is accelerating the adoption of SBOM practices to secure software supply chains with:

  • SBOM standard: stewarding SPDX, the de-facto standard for requirements and data sharing
  • SBOM survey: highlighting the current state of industry practices to establish benchmarks and best practices
  • SBOM training: delivering a new course on Generating a Software Bill of Materials to accelerate adoption
  • SBOM tools: enabling development teams to create SBOMs for their applications

“As the architects of today’s digital infrastructure, the open source community is in a position to advance the understanding and adoption of SBOMs across the public and private sectors,” said Mike Dolan, Senior Vice President and General Manager, Linux Foundation Projects. “The rise in cybersecurity threats is driving a necessity that the open source community anticipated many years ago to standardize on how we share what is in our software. The time has never been more pressing to surface new data and offer additional resources that help increase understanding about how to adopt and generate SBOMs, and then act on the information.”

Ninety percent (90%) of a modern application is assembled from open source software components. An SBOM accounts for the open source components contained in an application and details their quality, license, and security attributes. SBOMs are used to ensure developers understand what components are flowing through their software supply chains, to proactively identify issues and risks, and to establish a starting point for remediation.

The recent presidential Executive Order on Improving the Nation’s Cybersecurity referenced the importance of SBOMs in protecting and securing the software supply chain. The National Telecommunications and Information Administration (NTIA) followed the issuance of this order by asking for wide-ranging feedback to define a minimum SBOM. The Linux Foundation has published responses to both the NTIA’s SBOM inquiry and the presidential Executive Order.

SPDX: The De-Facto SBOM Open Industry Standard

SPDX, a Linux Foundation project, is the de-facto open standard for communicating SBOM information, including open source software components, licenses, and known security vulnerabilities. SPDX has evolved organically over the last ten years through collaboration with hundreds of companies, including the leading Software Composition Analysis (SCA) vendors – making it the most robust, mature, and adopted SBOM standard in the market.

SBOM Readiness Survey

Linux Foundation Research is conducting the SBOM Readiness Survey. It will examine obstacles to adoption for SBOMs and future actions required to overcome them related to the security of software supply chains. The recent US Executive Order on Cybersecurity emphasizes SBOMs, and this survey will help identify industry gaps in SBOM applications. Survey questions address tooling, security measures, and industries leading in producing and consuming SBOMs, among other topics. 

New Course: Generating a Software Bill of Materials

The Linux Foundation is also announcing a free, online training course, Generating a Software Bill of Materials (LFC192). This course provides foundational knowledge about the options and the tools available for generating SBOMs and how to use them to improve the ability to respond to cybersecurity needs. It is designed for directors, product managers, open source program office staff, security professionals, and developers in organizations building software. Participants will walk away with the ability to identify the minimum elements for an SBOM, how they can be assembled, and an understanding of some of the open source tooling available to support the generation and consumption of an SBOM. 

New Tools: SBOM Generator

Also announced today is the availability of the SPDX SBOM generator, which uses a command-line interface (CLI) to generate SBOM information, including components, licenses, copyrights, and security references of your application using SPDX v2.2 specification and aligning with the current known minimum elements from NTIA. Currently, the CLI supports GoMod (go), Cargo (Rust), Composer (PHP), DotNet (.NET), Maven (Java), NPM (Node.js), Yarn (Node.js), PIP (Python), Pipenv (Python), and Gems (Ruby). It is easily embeddable in automated processes such as continuous integration (CI) pipelines and is available for Windows, macOS, and Linux. 

Additional Resources

  • What is an SBOM?
  • Build an SBOM training course
  • Free SBOM tool and APIs

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Media Contacts

Jennifer Cloer

for Linux Foundation

jennifer@storychangesculture.com

503-867-2304

The post Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices appeared first on Linux Foundation.