Tag Archives: Management

How AIOps Can Improve Data Center Management


Today’s data center management professionals face a unique challenge. Technologies like the Internet of Things (IoT) and cloud computing are elevating a new generation of IT applications, powering everything from smart cities to data-driven crisis response. However, these capabilities have made digital environments more complex by several orders of magnitude, making it increasingly difficult to effectively manage modern data centers.

Thankfully, an emerging trend known as AIOps — or Artificial Intelligence Operations — offers IT professionals the support they desperately need. By bringing artificial intelligence and visualization technologies to bear on a wide range of data center challenges, AIOps enables data center management professionals to automate administrative tasks, reduce unnecessary alerts, and identify anomalies before they cause wider issues.

AIOps tools are already helping IT teams in multiple scenarios, data center management included. In fact, a new report from OpsRamp shows that 87% of technology professionals say their AIOps solutions are delivering the value they had expected prior to implementation. Coupled with analyses showing that the market for AIOps platforms will grow from $2.6 billion in 2018 to $11.0 billion in 2023, these positive early results underscore the transformative potential of AIOps in the IT space.

For data center management professionals interested in learning more about AIOps, it’s important to understand the approach’s range of possible use cases, as well as the requirements for successful implementation. By doing so, data center teams can ensure they’ll reap the rewards of a technology that promises to revolutionize the IT industry at large.

Read the rest of this article on InformationWeek.

Read more Network Computing articles on this topic:

AI-Driven Wireless Is Key to the Digital Workplace

How Is AI Affecting Infrastructure Pros?

Why IT War Rooms Fail, and Why Failure is No Longer an Option

 




With Regolith, i3 Tiling Window Management Is Awesome, Strange and Easy


By Jack M. Germain

Jun 20, 2019 10:33 AM PT


Regolith Linux brings together three unusual computing components that make traipsing into the i3 tiling window manager world out-of-the-box easy.

Much of the focus and attraction — as well as confusion — for newcomers to the Linux OS is the variety of desktop environments available. Some Linux distributions offer a range of desktop types. Others come only with a choice of one desktop.

i3 provides yet another option, but it is a much different choice that offers an entirely new approach to how you interact with the operating system.

Window managers usually are integrated into a full-fledged desktop system. Window managers control the appearance and placement of windows within the operating system’s screen display. A tiling window manager goes one step further. It organizes the screen display into non-overlapping frames rather than stacking overlapping windows.

The i3 tiling window manager in Regolith Linux serves as what essentially becomes a standalone pseudo desktop. It automatically arranges windows so they occupy the whole screen without overlapping.


Regolith Linux desktop

An otherwise barren desktop quickly gets crowded with equal-sized tiled windows. Here we see the Firefox Web browser on the left, Control Panel in the center, and a LibreOffice document on the right.



Regolith Linux brings together three computing elements not found anywhere else. It is part Ubuntu’s ubiquity, part i3-wm’s efficient and productive interface, and part GNOME’s system configuration features.

Different Strokes

Regolith Linux is designed for people who prefer a spartan interface with polished and consistent system management. You will not find many distros using the i3 tiling window manager.

The few distros that offer i3 as a sort of desktop option tend to be Arch-based. The i3 wm components usually need elaborate installation and detailed configuration steps, which becomes a deterrent to trying the tiling window manager.

Regolith Linux changes all that. Developer Ken Gilmore stuffed the i3 tiling window manager into Ubuntu for stability and easy access. If you download the live CD version, you get a ready-to-go Regolith distro with all the Ubuntu software infrastructure.

Another option is to add the Regolith Ubuntu PPA to an existing Ubuntu 18.04 (Bionic) or 19.04 (Disco) system and swap out the Ubuntu desktop with Regolith’s tweaked i3 tiling window manager replacement.

Release 1.0 is based on Ubuntu 18.04; release 1.1 is based on Ubuntu 19.04. Either version will update to the latest files.

“All Regolith packages work fine on Ubuntu 18.04 and 19.04. Essentially the goal is to create something simple, polished and productive,” Gilmore told LinuxInsider.

New Approach

Regolith Linux is very new. Gilmore released the first edition of the Ubuntu installer with the Regolith distro on April 19. The PPA installation on an existing Ubuntu instance is about one year older, first appearing around March 2018.

“There are still many rough edges to be addressed, of course, but overall I feel the interface is particularly compelling to those that would like to work efficiently,” said Gilmore.

Almost all of the developmental work goes into little things that most people do not notice, he added. He sees that work as 90 percent polish.

His plans for continued development include keeping the 1.x development focused on the strategy of using existing open source projects and customizing them as needed to provide the best possible user experience with i3. However, he does not plan to get into actually changing any upstream code.

“I plan on releasing a 2.x development track which is more ambitious in that I plan to modify several UI (user interface) components that Regolith relies on (i3bar, Rofi, gnome-flashback) to further simplify and polish the user workflow. This is a longer-term goal, and I don’t really have specifics yet,” he explained, apart from lots of ideas.

Those UI improvements involve reducing the bar to only a few pixels deep and pushing a lot of the ambient information such as date/time and workspace map to a full-screen modal similar to the way Rofi (a window switcher) is rendered for program launching (Super-space).

More Work Ahead

Since the i3 window manager is largely a keyboard-driven interface, very little in the way of a graphical user display exists in Regolith Linux. The control panel is accessed with the keyboard shortcut Super key + c, for example. Once the control panel launches, you can arrow down a list of settings or use the mouse.

The default key bindings are kept in a .config file that is edited using the gEdit text editor. Gilmore plans to make UI changes more aggressively in the 2.x development. He passes along all developmental changes directly as rolling release updates.


Regolith Linux File Manager

The left window shows the File Manager in the .config folder. The right window displays the Regolith.config file in a text editor.



The developer issues updates to two PPAs: regolith-unstable for testing and regolith-stable. Once package updates have been pushed to regolith-stable, both PPA users, as well as distro users, get the updates via Ubuntu’s package update mechanism.

“I will add more ISO versions if needed but do not have a specific schedule or plan for global versioning. In fact, that Regolith is a distribution at all is simply because that is the best way for a lot of users to get the software,” noted Gilmore. “Users are familiar with the ISO approach, whereas PPA installations may be too technical.”

Keen on User Focus

Ultimately, Gilmore said it is not his goal to “capture” users or empire-build. In fact, he has documentation on regolith-linux.org for users who wish to build their own thing or revert back to stock Ubuntu.

Regolith makes no attempt to hide the fact that it’s just Ubuntu with a different desktop environment, according to the developer. From my view, he would be perfectly justified in establishing Regolith Linux as a distro in its own right.

Familiarity with GNOME and Ubuntu helps more experienced users settle into using the i3 window manager as a desktop environment, although the tweaking and integration Gilmore devised bring a whole new look and feel. If you are new to Linux or do not know Ubuntu, Regolith Linux truly is a unique distro experience.

Gilmore plans to utilize configuration strategies that make it easier for neophytes to play around and share bits of configuration. He wants to make it easy to roll back changes when something goes wrong.

“And I would like to incorporate some of the subtle transitional animation elements we have come to expect with mobile UIs,” Gilmore said. “Additionally, a lot of work remains for documentation. I want to provide a much more inclusive first-time user experience which gives a new user the ‘big picture’ and walks them through the UI, how to do things, etc., rather than just dropping them to a desktop with a cheat-sheet window.”

On the website, Gilmore wants to provide a full how-to section for people to build their own Regolith-like projects. Debian packaging was really hard for him to learn relative to the complexity of what the process involves. His goal is to help others if he can.

Common Ground Draws Users to Linux

Computer users do not have to be spoon fed what the megacorps want customers to use, according to Gilmore. Regular people often produce far more beautiful and creative environments than those from large software companies, regardless of how talented their designers are.

“How we interact with our computers is our choice to make,” said Gilmore.

When asked to describe the typical person interested in his new distro, his response underscored what makes Linux so inviting: “I think of myself around 2017 when I came to the realization that the Mac platform was a dead-end for professional developers. I had no idea what I should use next, as long as it wasn’t any of the ‘stock’ desktops (windows/mac/ubuntu).”

Not that anything is wrong with Ubuntu by default, Gilmore clarified, noting that it is designed for people who prefer the traditional Windows/Mac UI metaphors.

“For me, Windows was out by default and so that left Ubuntu, as my employer only allows that version of Linux due to IT management and security concerns,” he said.

Taking a Test Drive

Regolith is visually spartan by design so it is not a distraction. It has no icons, docks, panels, menus or widgets taking up screen space.

A small bar at the bottom of the screen shows information such as workspaces on the left end and system status indicators on the right end.

That is the extent of any similarity to an Ubuntu desktop of any variety — or any other Linux distro interface for that matter. The window header does display the expected icons to minimize/maximize, resize, or open window menus. However, they are just a throwback to their GNOME Ubuntu roots. The only window icon that actually works is the X to close the window.

If you are comfortable with terminal boxes and their commands, you can do absolutely anything you want without a GUI, mouse right-clicks, desktop icons or cascading menus. All it takes to open a terminal window is the default keyboard shortcut Super key + Enter.

Otherwise, press the Super key + Space bar to get a scrollable list of installed applications. Use the up/down arrows on the keyboard to move through it, or point the mouse at a title in the center of the screen.


Regolith Linux Super+space keys

The Super+space keys launch the applications list in the center of the screen, leaving the keyboard shortcut list shaded but visible on the right.



Just do not click on it. Nothing happens. Instead, press the enter key to launch the program. You can close the menu list with the escape key.

Navigating the Desktop

One of the most glaring interface hurdles for me was adjusting to the workspace landscape. i3 has no workspace switcher applet on the bottom panel.

Key mappings are already configured. Press the Super key and a number to jump to that workspace instantly. By default, Regolith has 19 workspaces waiting for you.

Each new workspace you open gets its own small colored box, showing its number, at the left end of the bottom bar. You rotate among the workspaces with the Super key + number keyboard shortcut.

In any workspace, you can open as many applications as you want or need. The first one opens full screen. The second one changes the screen display to two equal shares. The third one automatically divides the screen into three windows of equal size.

Everything stays in view, so there is no need for the Alt-Tab window switching feature. You have no scale or expo animation displays either.

Bottom Line

Overall, i3’s minimal visual design does not prevent you from using a modern system with file management features. They are all available, but you must access them differently.

Every workspace screen shows a vertical Conky-style panel with a list of the most commonly used keyboard shortcuts. You can change the default keyboard bindings or add new ones by going into the File Manager, selecting the Show Hidden Files option, and opening the Regolith.config file in the text editor.


Regolith Linux Activated workspaces

Each workspace screen shows the keyboard binding Conky display, a vacant desktop, and bare-minimum details on the bottom bar. Activated workspaces are shown as different-colored squares on the left end of the bar.



Study the syntax pattern from what is already there. Then add your own comment line and the new mapping or edit an existing one. Remember to save the file.
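As a rough illustration (the exact file name, variable names and defaults in Regolith's shipped configuration may differ), an i3-style binding entry looks something like this:

# the Super ("Windows") key is normally mapped to the $mod variable
set $mod Mod4

# default-style binding: open a terminal with Super+Enter
bindsym $mod+Return exec x-terminal-emulator

# hypothetical custom addition: open the GNOME file manager with Super+n
bindsym $mod+n exec nautilus

In stock i3, Super+Shift+c reloads the configuration after you save it; Regolith may map that differently, so check the on-screen shortcut list.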

If you decide to tackle this awesome but strange i3 tiling window manager environment, be sure to read through the developer’s Getting Started guide.

Want to Suggest a Review?

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Please email your ideas to me, and I’ll consider them for a future Linux Picks and Pans column.



Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






DevOps Influence on Infrastructure Management


How a person defines DevOps often depends on their scope of interest/responsibility within an IT environment. Those with an infrastructure management background are going to lean towards an Infrastructure as Code (IaC) definition. Application developers typically focus on application development processes and agility. There are also people who tell you that DevOps is an end-to-end solution combining infrastructure management and application development.

I see infrastructure management and application development as two separate disciplines, where infrastructure is used to build a platform consumed by application developers. Application developers can configure the platform as required within the constraints of said platform.

There is no doubt that DevOps principles have influenced how infrastructure is managed at scale using IaC strategies. However, there is a point where DevOps principles may no longer be relevant for IaC, especially when it comes to the management of bare metal.

Continuous integration/continuous delivery (CI/CD) pipelines play a core role in shipping code from development to production; within the pipeline, unit and integration tests are run against the code to ensure reliable delivery and reduce the risk of faults.

Many hardware vendors supply emulators to replicate the behaviour expected from their hardware platforms. An emulator’s replication of a physical hardware platform should typically be considered a best effort and not necessarily an entirely accurate representation. A VM running the Network Operating System (NOS) for a white box switching solution would not be able to test how a change impacts the performance of an ASIC contained within physical switches.

Some companies can afford to purchase enough hardware dedicated to testing new workflows or updated impacts, others cannot. The level of accuracy between the test and production environment determines the testing reliability of a CI/CD pipeline.

Introducing version control to track changes is one of the most significant value propositions that IaC provides to operational teams. Issues caused by manual infrastructure changes can be incredibly challenging to troubleshoot, as the change that was intended is sometimes not exactly the change that was made. Increasing the number of changes made using IaC reduces the amount of time it takes to find which settings were changed.
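To picture that benefit, consider a device whose desired state is kept as a structured, version-controlled definition. Pinpointing what changed between two committed versions then becomes a mechanical comparison. The following minimal Python sketch is purely illustrative; the file names and structure are hypothetical and not tied to any particular IaC tool:

import json

def changed_settings(old: dict, new: dict, prefix: str = ""):
    """Return a flat list of 'path: old -> new' entries for keys that differ."""
    diffs = []
    for key in sorted(set(old) | set(new)):
        path = f"{prefix}{key}"
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            diffs.extend(changed_settings(a, b, path + "."))
        elif a != b:
            diffs.append(f"{path}: {a!r} -> {b!r}")
    return diffs

if __name__ == "__main__":
    # hypothetical files: the previously committed state and the proposed change
    with open("switch01_v1.json") as f:
        before = json.load(f)
    with open("switch01_v2.json") as f:
        after = json.load(f)
    for entry in changed_settings(before, after):
        print(entry)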

CI/CD pipelines are frequently integrated with version control systems, enabling automatic execution of pipelines when a repo receives a new commit. If the hardware vendor provides emulators for their hardware platform, the pipeline should build a virtual environment to represent the current environment to run integration testing.

Test results can provide more than simple pass/fail validation; they can be used to determine the impact of a change, which in turn determines whether additional steps are required before, during or after application of a new configuration.

Many environments and application services have a state that can impact how seamless a failover is. In a virtualised environment, higher workload density increases the number of applications potentially impacted by an interruption caused by a change, even if that impact is only a blip.

Using a CI/CD pipeline to detect interruptions caused by the change allows for better change planning or incorporation of steps to perform workload migrations and clean failovers. The use of emulators might be adequate for this level of testing; however, physical reproduction is always a better option.
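As a sketch of how that might look in practice, a pipeline stage could inspect structured test results and decide whether extra steps, such as migrating workloads off affected nodes, need to run before the change is applied. Everything below (the result format, thresholds and step names) is invented for illustration:

from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    downtime_seconds: float   # service interruption observed during the test

def plan_change(results, max_blip: float = 1.0):
    """Turn integration-test results into an ordered list of change steps."""
    if any(not r.passed for r in results):
        return ["abort: integration tests failed"]
    if any(r.downtime_seconds > max_blip for r in results):
        # the change works but interrupts traffic, so plan around it
        return [
            "migrate workloads off affected nodes",
            "apply configuration change",
            "verify failback and rebalance workloads",
        ]
    return ["apply configuration change during the normal window"]

if __name__ == "__main__":
    results = [
        TestResult("uplink failover", True, 0.2),
        TestResult("vlan reconfiguration", True, 4.5),
    ]
    for step in plan_change(results):
        print(step)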

Continuous iterations required

Working towards a high degree of test coverage requires continual iterations which incorporate lessons from previous successes and failures. Agile project management strategies provide a practical framework for managing iteration work in progress.

Physical infrastructure isn’t ephemeral; unless you live in the Twilight Zone, physical devices do not suddenly appear and disappear from racks. There are configuration changes which can be performed on demand and those which cannot.

Storage platforms have supported storage nodes as individual members of a cluster, allowing for the addition and removal of nodes as required. However, the process of changing storage nodes places additional load on the storage solution while data is rebalanced or evacuated. Some changes may require that some protection features are disabled or tuned down to prevent unneeded load on the system. Typically, these are the types of changes which build the foundation of a storage solution provided for consumption.

There are many areas where DevOps principles influence and improve IaC strategies; however, physical hardware management is different from software management, and the suitability of different principles varies between environments and goals.




3 Imperatives for Network Management Success in the Hybrid World


Networks today are a mixed bag, comprising what can be a tangled mess of physical, virtualized, and cloud infrastructure. To stay competitive, businesses are pursuing digital transformation initiatives such as SD-WAN, network functions virtualization, and edge computing. While these technologies offer great benefits, they also add great complexity, and the race for a competitive edge inevitably creates interoperability hurdles among IT systems. Today businesses must wade through wired and wireless networks that are multi-platform, multi-vendor, and multi-cloud, each with its own set of complexities. Performance issues inevitably arise, which can cause downtime and cost a business anywhere from tens of thousands to millions of dollars.

One major challenge faced by many network operations (NetOps) teams is the use of too many monitoring tools. The issue of monitoring tool sprawl is far worse than most realize. According to a bi-annual network management study from Enterprise Management Associates, nearly half of all networking pros are using between four and ten tools to monitor and troubleshoot their networks. And nearly one-third of IT teams are juggling 11 or more tools!

Today’s hybrid networks simply demand more. Organizations must anticipate, identify, troubleshoot and resolve a wide array of network issues. An important key to network management is comprehensive visibility, with advanced performance analytics, all through a single pane of glass.

Here are three imperatives for network visibility and management across hybrid networks:

The ability to collect various data sources across all network domains: Whether a team is conducting capacity planning, troubleshooting a critical performance issue, or analyzing an anomaly to achieve true end-to-end visibility across the entire network, teams need insight into a broad range of data sources. From Flow (IPFIX, NetFlow, sFlow, Cflowd, etc.) and SNMP, to packet data (full capture and analytics) and API integrations (REST, Bulk, Stream, etc.), each data source plays a unique and critical role in the overall process of managing the network. Without the ability to consume these different data sources, NetOps can be left with insufficient data that can hinder their ability to manage and troubleshoot the network.

The ability to visualize and interpret that data intuitively in order to take action: It’s not enough to simply have access to every network data type. NetOps teams need solutions that translate data into simple management and troubleshooting workflows. For instance, Flow data from virtual, physical and cloud devices is especially critical to managing and troubleshooting application performance. But if a network management platform doesn’t allow the team to visualize an application’s flow across the entire network – from source IP address to destination IP address – it will be difficult to preserve a positive end-user experience. Packet-level data is critical for troubleshooting complex application issues like slow database performance. Visualizing the network path and reviewing the packet data creates performance visualizations that allow NetOps to resolve issues faster. Whether troubleshooting a VoIP issue or optimizing a new SD-WAN deployment, having granular visibility into all types of network data is imperative to comprehensive network management and control.

The ability to present top-level status updates and reports to executive stakeholders: What good is all this if NetOps can’t clearly communicate its value and progress to executives? Higher-ups typically only care about a few key reports and don’t want to be bogged down trying to decipher in-depth networking analytics. How are we doing on uptime? What’s the availability of a particular set of devices, circuits or sites? What caused the minor downtime incident last week? How is the bottom line impacted? There’s a reason they call it an executive summary. If you can’t arm executives with this type of critical information, they won’t be able to make sound budgetary, personnel, or business decisions. Teams need management solutions that enable them to generate reports that convey easily digestible network performance metrics, SLA status, application conditions, and ultimately the merits of their work.

The complexity challenges presented by multi-vendor, multi-platform and multi-cloud IT environments, coupled with the ever-present issue of tool sprawl, make managing today’s hybrid networks an uphill battle. NetOps teams need access to a wide range of network data sources, the ability to visualize that information coherently, and the means to act quickly. Effective reporting on business-critical metrics is equally imperative in order to successfully manage these complex modern network topologies.

 




Open Source Flaw Management Shows Signs of Improvement: Report


By Jack M. Germain

Apr 30, 2019 1:16 PM PT

Almost two years after the infamous Equifax breach, many organizations still struggle to identify and manage open source risk across their application portfolios.

Meanwhile, the latest report tracking open source security shows a 40 percent rise in the average number of open source components detected in each codebase analyzed. The scanned software includes commercial applications.

Black Duck by Synopsys on Tuesday released its annual Open Source Security and Risk Analysis, which examines the open source audit results of scanned codebases to identify insightful trends and patterns in open source usage. The report also looks at the prevalence of insecure open source components and software license risk.

Titled “Understanding Open Source Risk and Why It’s So Important to Manage,” the report compiles research backed by the Synopsys Cybersecurity Research Center (CyRC). It provides an in-depth look at the state of open source security, license compliance and code-quality risk in commercial software.

The CyRC Belfast team examined findings from the anonymized data of more than 1,200 commercial codebases reviewed by the Black Duck Audit Services team in 2018. The 17 industries represented in the report range from aerospace to virtual reality. The audit services team reviewed an average of 71 codebases per industry during 2018.

The continued growth of open source components in commercial codebases is mitigated by the report’s finding that many of the open source vulnerabilities detected were first disclosed more than a decade ago.

The percentage of codebases containing vulnerable components has decreased, the report notes. The percentage of codebases containing license conflicts also has decreased.

The least surprising trend identified is that open source adoption has continued to rise, and the majority of codebases contain more open source than proprietary code, according to Tim Mackey, senior technical evangelist at Synopsys.

“One trend that is concerning is that the majority of codebases (60 percent) contain at least one vulnerable open source component, and 40 percent contain at least one high-risk vulnerability. Similarly, open source license compliance continues to be a challenge, with 68 percent of codebases containing some form of open source license conflict,” he told LinuxInsider.

Results Highlights

Audits found open source in more than 96 percent of codebases scanned in 2018. That percentage is similar to the figures from the last two OSSRA reports.

Most of the codebases that contained no open source consisted of fewer than 1,000 files. More than 99 percent of the codebases scanned in 2018 with more than 1,000 files contained open source components.

In most industries, the year-to-year difference in the percentage of codebases containing open source was negligible, according to the report. The audited codebases generally were from companies whose business is building software rather than from enterprises for whom software supports their main business.

The audits found, on average, 298 open source components per codebase in 2018 versus 257 in 2017. Open source represented 60 percent of the code analyzed in 2018, up from 57 percent in 2017.

“The main takeaway from this report is that the security and license compliance risk associated with the use of open source is very real, but it is a risk that can be managed with a proactive open source governance policy, automated tools like software composition analysis and an effective patching strategy,” said Mackey.

Encouraging Indications

This year’s report shows signs of an improving situation. There definitely are encouraging data points suggesting the industry may be turning the corner in terms of organizations’ ability to manage open source risk, noted Mackey.

For example, while 60 percent of codebases contained at least one vulnerable open source component, that number is down significantly from the 78 percent observed in the 2018 OSSRA report, he said. Likewise, the 68 percent of codebases containing some form of open source license conflict is slightly better than the 74 percent seen in last year’s report.

“This is a good thing, as it shows how teams are continuing to leverage open source to accelerate innovation,” Mackey observed, “but more open source also means more open source risk that needs to be managed.”

Enterprise IT and corporate security workers should not be concerned that the rise in open source code may create greater security risks, suggested Tobie Langel, principal at consulting firm UnlockOpen.

“There is no reason to believe that open source software is inherently less secure than closed source software,” he told LinuxInsider. “However, when a security issue is found in open source software that is used across the industry, the impact can be greater, as it is ubiquitous.”

Sustaining and securing open source is the industry’s biggest challenge right now — but open source also is where the most innovation is happening.

“I am confident we will get there,” Langel said. “Open source is by far the most effective means of building software and innovating at scale, once we find the right set of solutions to provide long-term maintenance. It will also be the most secure solution by far.”

Common Code Risk Critical

Numerous components were commonly used across different codebases, researchers found. For example, jQuery, open source software using the permissive MIT License, was found in 56 percent of the scanned codebases and in virtually every industry covered in the OSSRA report.

Other notable open source components found in the scans include Bootstrap, an open source front-end Web framework; jQuery UI, a curated set of user interface interactions, effects, widgets and themes built on the jQuery JavaScript Library; and Font Awesome, an open source font and icon toolkit based on CSS and LESS.

Despite using so much open source, few companies accurately track the components they use in their code. Most lack the policies, processes and tools to keep up with the choices made by their developers, according to the report. As a consequence, all the good functionality that comes with open source also brings along a variety of risks.

“Open source libraries are a double-edged sword,” remarked Manish Gupta, CEO of ShiftLeft.

Widely used open source software tools are generally more stable and more robust than custom code, he told LinuxInsider, because they are deployed in a variety of environments and have been battle-tested.

Bugs and vulnerabilities potentially are reported and fixed much faster than in custom-code that is leveraged by only one organization. However, the documented system of CVEs means that attackers know how your libraries are vulnerable, Gupta cautioned, and they can create an exploit much more easily.

“This means that consumers of OSS must stay on top of patches, which is not always easy to do,” he said. “The security industry hasn’t provided effective solutions to the developers to deal with this dilemma. The tools merely tell developers which OSS libraries being used are vulnerable.”

Clarifying Risk From Use

A key takeaway in the report is the care it takes not to mischaracterize the findings as an attack on the use of open source technology itself. Open source is not less secure than proprietary code. Nor is it more secure.

All software has weaknesses that are potential vulnerabilities, whether the code is proprietary or open source, the report warns. Organizations that use open source must identify and patch those vulnerabilities.

That management process is challenging, since most organizations have thousands of different pieces of software, ranging from mobile apps to cloud-based systems to legacy systems running on-premises. Software in general is a mix of commercial off-the-shelf packages, open source software and custom-built codebases — and vulnerabilities affect all of them, the report emphasizes.

“The use of any software comes with inherent risks, but open source software presents a few unique challenges,” said Mackey.

The first challenge concerns license obligations that can be opaque and easy to overlook compared to commercial software. The second challenge is the responsibility for identifying and patching open source security vulnerabilities, which falls solely on the organization using the software.

“Commercial software vendors can proactively urge, or in some cases, force their customers to update or apply security patches,” Mackey said. “Managing open source security and license risk should be viewed as an accepted cost of otherwise free open source software.”

Attack Vectors Persistent

An alarming number of companies fail to patch the software they use, whether proprietary or open source, the report said. That makes them targets.

Unpatched software vulnerabilities are one of the biggest cyberthreats organizations face. Unpatched open source components in software add to security risk. Certain characteristics of open source make vulnerabilities in popular components attractive to attackers.

Makers of commercial software can push fixes, patches and updates to users automatically. Open source has a pull support model. That makes the users responsible for keeping track of both vulnerabilities and fixes for the open source software they use.

The pervasiveness and ubiquity of open source pose management tasks that extend far beyond many organizations’ capabilities, as they do not do manual tracking of components, their versions and their vulnerabilities, according to the report.

Assistance Required

Organizations using open source must establish management strategies to identify and patch known vulnerabilities in open source components, notes the report. Vulnerabilities are disclosed through sources such as the National Vulnerability Database (NVD), mailing lists, GitHub issues and project homepages.

The widespread use of open source makes it imperative for organizations to keep accurate, comprehensive and up-to-date inventories of the open source components used in their applications. An incomplete inventory makes it extremely difficult to maintain adequate software asset management procedures, according to the report.
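Even a very simple inventory, generated automatically from dependency manifests and kept under version control, is a reasonable starting point. The Python sketch below is illustrative only; the file layout and output format are assumptions, and real software composition analysis tools go much further, resolving transitive dependencies and matching components to known CVEs:

import csv
import sys
from pathlib import Path

def collect_requirements(root: Path):
    """Yield (requirement line, file it was declared in) pairs found under root."""
    for req_file in root.rglob("requirements*.txt"):
        for line in req_file.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                yield line, str(req_file)

if __name__ == "__main__":
    project_root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    writer = csv.writer(sys.stdout)
    writer.writerow(["component", "declared_in"])
    writer.writerows(collect_requirements(project_root))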

The increase in open source vulnerability age, despite a decrease in the number of codebases containing open source vulnerabilities, is interesting, said Synopsys’ Mackey, “but our audits often reveal that organizations are tracking less than half the open source in use. You can’t patch what you aren’t aware of.”

Sample Solutions

One solution for organizations using open source code is to tap into readily available sources tracking vulnerabilities, suggested Gabriel Bianconi, founder of Scalar Research.

“Large projects often have mailing lists announcing bug fixes and vulnerabilities,” he told LinuxInsider. “There are several vendors providing software to monitor security risks in open source libraries and dependencies used by your company.”

More often than not, the biggest problem is that the company is using an outdated version of the codebase that does not contain the latest security patches.

“Professionals must ensure that their dependencies are consistently updated,” Bianconi said.

Breaking Breaches

“POODLE,” “Heartbleed” and “Spectre” are not just cute monikers for security vulnerabilities. They are very real and potentially dangerous holes, noted Steve Tcherchian, chief product officer at XYPRO.

When an application vulnerability is identified, it typically is followed by a patch or new version to remediate the vulnerability, he explained, and with the proliferation of free and open source software, this activity becomes critically important.

“Oftentimes procrastination takes over, and the application is not timely patched for a variety of reasons,” Tcherchian told LinuxInsider. “This now leaves the application wide open to a published, and in most cases, publicized vulnerability.”

As for how to change the mentality within a development organization to be more security-focused, education and reinforcement are key, Tcherchian said.

“Security cannot be left for the end. Introduce security into your development processes early and re-introduce them often,” he added.

More Action Needed

The report cites a conclusion by the U.S. Senate Permanent Subcommittee on Investigations declaring that Equifax’s lack of a complete software inventory was a contributing factor to its massive 2017 data breach.

A number of reliable strategies exist to ensure that open source components used in applications are up-to-date with crucial patches applied, noted Matt Wilson, chief information security advisor at BTB Security.

“The good news is that they aren’t terribly complicated. What is important is that teams are aware of what you run in your environment, which can be hard for less mature organizations,” he told LinuxInsider.

The process involves maintaining awareness of updates to the code you run, applying patches as quickly as possible, and ensuring you conduct regular testing of your application/environment as a catch-all, Wilson explained.

Several industries, such as government, healthcare and automotive, have started to adopt standards that require organizations to inventory and track their use of open source components in a software bill of materials, according to Synopsys’ Mackey.

“This is a good first step,” he said. “After all, you can’t manage risk you don’t know exists.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.




