Open Source Is Everywhere and So Are Vulnerabilities, Says Black Duck Report | Enterprise


By Jack M. Germain

May 15, 2018 5:00 AM PT

Black Duck by Synopsys on Tuesday released the 2018 Open Source Security and Risk Analysis report, which details new concerns about software vulnerabilities amid a surge in the use of open source components in both proprietary and open source software.

The report provides an in-depth look at the state of open source security, license compliance and code-quality risk in commercial software. That view shows open source adoption continuing to grow over the last year, with the Internet of Things and other sectors exhibiting similar vulnerability problems.

This is the first report Black Duck has issued since Synopsys acquired it late last year. The Synopsys Center for Open Source Research & Innovation conducted the research and examined findings from anonymized data drawn from more than 1,100 commercial code bases audited in 2017.

The report comes on the heels of heightened alarm regarding open source security management following the major data breach at Equifax last year. It includes insights and recommendations to help organizations’ security, risk, legal, development and M&A teams better understand the open source security and license risk landscape.

The goal is to improve the application risk management processes that companies put into practice.

Industries represented in the report include the automotive, big data (predominantly artificial intelligence and business intelligence), cybersecurity, enterprise software, financial services, healthcare, Internet of Things, manufacturing and mobile app markets.

“The two big takeaways we’ve seen in this year’s report are that the actual license compliance side of things is improving, but organizations still have a long way to go on the open source security side of things,” said Tim Mackey, open source technology evangelist at Black Duck by Synopsys.

Gaining Some Ground

Organizations have begun to recognize that compliance with an open source license and the obligations associated with it really do factor into governance of their IT departments, Mackey told LinuxInsider, and it is very heartening to see that.

“We are seeing the benefit that the ecosystem gets in consuming an open source component that is matured and well vetted,” he said.

One surprising finding in this year’s report is that the security side of the equation has not improved, according to Mackey.

“The license part of the equation is starting to be better understood by organizations, but they still have not dealt with the number of vulnerabilities within the software they use,” he said.

Structural Concerns

Open source is neither more nor less secure than custom code, based on the report. However, there are certain characteristics of open source that make vulnerabilities in popular components very attractive to attackers.

Open source has become ubiquitous in both commercial and internal applications. That heavy adoption provides attackers with a target-rich environment when vulnerabilities are disclosed, the researchers noted.

Vulnerabilities and exploits are regularly disclosed through sources like the National Vulnerability Database, mailing lists and project home pages. Open source can enter code bases in a variety of ways — not only through third-party vendors and external development teams, but also through in-house developers.

Commercial software vendors automatically push updates to users, whereas open source follows a pull support model: users must keep track of vulnerabilities, fixes and updates for the open source components they use.

If an organization is not aware of all the open source it has in use, it cannot defend against common attacks targeting known vulnerabilities in those components, and it exposes itself to license compliance risk, according to the report.

Changing Stride

Asking whether open source software is safe or reliable is a bit like asking whether an RFC or IEEE standard is safe or reliable, remarked Roman Shaposhnik, vice president of product and strategy at Zededa.

“That is exactly what open source projects are today. They are de facto standardization processes for the software industry,” he told LinuxInsider.

A key question to ask is whether open source projects make it safe to consume what they produce and incorporate it into fully integrated products, Shaposhnik suggested.

That question gets a twofold answer, he said. First, the projects have to maintain strict IP provenance and license governance to make sure that downstream consumers are not subject to frivolous lawsuits or unexpected licensing gotchas.

Further, projects have to maintain a strict security disclosure and response protocol that is well understood and in which downstream consumers can participate in a safe and reliable fashion.

Better Management Needed

Given the continuing growth in the use of open source code in proprietary and community-developed software, more effective management strategies are needed on the enterprise level, said Shaposhnik.

Overall, the Black Duck report is super useful, he remarked. Software users have a collective responsibility to educate the industry and the general public on how the mechanics of open source collaboration actually play out, and on the importance of correctly understanding the possible ramifications now.

“This is as important as understanding supply chain management for key enterprises,” he said.

Report Highlights

More than 4,800 open source vulnerabilities were reported in 2017. The number of open source vulnerabilities per code base grew by 134 percent.

On average, the Black Duck On-Demand audits identified 257 open source components per code base last year. Altogether, the number of open source components found per code base grew by about 75 percent between the 2017 and 2018 reports.

The audits found open source components in 96 percent of the applications scanned, a percentage similar to last year's report, underscoring how thoroughly open source use has permeated commercial software.

The average percentage of open source in the code bases of the applications scanned grew from 36 percent last year to 57 percent this year. This suggests that a large number of applications now contain much more open source than proprietary code.

Pervasive Presence

Open source use is pervasive across every industry vertical. Some open source components have become so important to developers that those components now are found in a significant share of applications.

The Black Duck audit data shows open source components make up between 11 percent and 77 percent of commercial applications across a variety of industries.

For instance, Bootstrap — an open source toolkit for developing with HTML, CSS and JavaScript — was present in 40 percent of all applications scanned. jQuery closely followed with a presence in 36 percent of applications.

Another component common across industries was Lodash, a JavaScript library that provides utility functions for programming tasks. Lodash appeared as the most common open source component used in applications employed by such industries as healthcare, IoT, Internet, marketing, e-commerce and telecommunications, according to the report.

Other Findings

Eighty-five percent of the audited code bases had either license conflicts or unknown licenses, the researchers found. GNU General Public License conflicts were found in 44 percent of audited code bases.

There are about 2,500 known open source licenses governing open source components. Many of these licenses have varying levels of restrictions and obligations. Failure to comply with open source licenses can put businesses at significant risk of litigation and compromise of intellectual property.

On average, vulnerabilities identified in the audits were disclosed nearly six years ago, the report notes.

Those responsible for remediation typically are slow to apply fixes, if they apply them at all, which allows a growing number of vulnerabilities to accumulate in code bases.

Of the IoT applications scanned, an average of 77 percent of the code base was composed of open source components, with an average of 677 vulnerabilities per application.

Takeaway and Recommendations

As open source usage grows, so does the risk, OSSRA researchers found. More than 80 percent of all cyberattacks happen at the application level.

That risk comes from organizations lacking the proper tools to recognize the open source components in their internal and public-facing applications. Nearly 5,000 open source vulnerabilities were discovered in 2017, contributing to nearly 40,000 vulnerabilities since the year 2000.

No one technique finds every vulnerability, noted the researchers. Static analysis is essential for detecting security bugs in proprietary code. Dynamic analysis is needed for detecting vulnerabilities stemming from application behavior and configuration issues in running applications.

Organizations also need to employ software composition analysis, they recommended. With the addition of SCA, organizations can more effectively detect vulnerabilities in open source components while managing whatever license compliance obligations their use of open source may carry.
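To illustrate the kind of check an SCA tool automates, the following minimal sketch compares a hypothetical component inventory against a hand-maintained list of known-vulnerable versions. The component names, versions and advisory entries are illustrative placeholders, not findings from the report, and real SCA products run this kind of matching continuously against curated vulnerability databases.

// Minimal SCA-style check: flag inventory entries whose versions appear in a
// (hypothetical) advisory list. All names and versions are placeholders.
import java.util.List;
import java.util.Map;

public class ScaSketch {

    // Hypothetical advisory data: component name -> versions with known vulnerabilities.
    static final Map<String, List<String>> KNOWN_VULNERABLE = Map.of(
            "example-parser", List.of("1.2.0", "1.2.1"),
            "example-webkit", List.of("4.0.3"));

    // Hypothetical inventory produced by scanning a code base: component -> version in use.
    static final Map<String, String> INVENTORY = Map.of(
            "example-parser", "1.2.1",
            "example-logger", "2.8.0");

    public static void main(String[] args) {
        INVENTORY.forEach((component, version) -> {
            List<String> badVersions = KNOWN_VULNERABLE.getOrDefault(component, List.of());
            if (badVersions.contains(version)) {
                System.out.println("ALERT: " + component + " " + version
                        + " has known vulnerabilities");
            } else {
                System.out.println("OK: " + component + " " + version);
            }
        });
    }
}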


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

WhiteSource Rolls Out New Open Source Security Detector | Enterprise


By Jack M. Germain

May 24, 2018 10:24 AM PT

WhiteSource on Tuesday launched its next-generation software composition analysis (SCA) technology, dubbed “Effective Usage Analysis,” with the promise that it can reduce open source vulnerability alerts by 70 percent.

The newly developed technology goes beyond reporting which components are present in an application. It provides actionable insights into how those components are being used and evaluates their impact on the application's security.

The new solution shows which vulnerabilities are effective; for instance, it can identify which vulnerable components actually get calls from the proprietary code.

It also underscores the impact of open source code on the overall security of the application and shows which vulnerabilities are ineffective. Effective Usage Analysis technology allows security and engineering teams to cut through the noise and correctly prioritize threats to the security of their products, according to WhiteSource CEO Rami Sass.

“Prioritization is key for managing time and limited resources. By showing security and engineering teams which vulnerable functionalities are the most critical and require their immediate attention, we are giving them the confidence to plan their operations and optimize remediation,” he said.

The company's goal is to empower businesses to develop better software by harnessing the power of open source. In its 2017 Software Composition Analysis (SCA) Wave report, Forrester recognized the company as having the best current offering.

WhiteSource's new Effective Usage Analysis offering addresses an ongoing challenge for open source developers: identifying and correcting security vulnerabilities proactively, instead of watching for or fixing problems after the fact, said Charles King, principal analyst at Pund-IT.

“That should result in applications that are more inherently secure and also improve the efficiency of developers and teams,” he told LinuxInsider. “Effective Usage Analysis appears to be a solid individual solution that is also complementary and additive to WhiteSource’s other open source security offerings.”

Open Source Imperative

As open source usage has increased, so has the number of alerts on open source components with known vulnerabilities. Security teams have become overloaded with security alerts, according to David Habusha, vice president of product at WhiteSource.

“We wanted to help security teams to prioritize the critical vulnerabilities they need to deal with first, and increase the developers’ confidence that the open source vulnerabilities they are being asked to fix are the most pressing issues that are exposing their applications to threats,” he told LinuxInsider.

Current technologies on the market are limited to detecting which vulnerable open source components are in an application, he said. They cannot provide details on how those components are being used, or on the impact of each vulnerable functionality on the security of the application.

The new technology currently supports Java and JavaScript. The company plans to expand its capabilities to include additional programming languages. Effective Usage Analysis is currently in beta testing and will be fully available in June.

How It Works

Effective Usage Analysis promises to cut down open source vulnerabilities alerts dramatically by showing which vulnerabilities are effective (getting calls from the proprietary code that impact the security of the application) and which ones are ineffective.

Only 30 percent of reported alerts on open source components with known vulnerabilities originated from effective vulnerabilities and required high prioritization for remediation, found a WhiteSource internal research study on Java applications.

Effective Usage Analysis also will give developers actionable insights for remediating a vulnerability, providing a full trace analysis that pinpoints the path to the vulnerable code. It adds a new level of resolution for understanding which functionalities are effective.

This approach aims to reduce open source vulnerability alerts and provide actionable insights. It identifies the vulnerabilities’ exact locations in the code to enable faster, more efficient remediation.
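Conceptually, deciding whether a vulnerability is effective is a reachability question over the application's call graph: can any entry point in the proprietary code reach the vulnerable open source method? WhiteSource has not published its algorithm, so the sketch below is only an assumed simplification, with an invented, pre-built call graph and invented class and method names.

// Sketch of "effective vulnerability" detection as call-graph reachability.
// The call graph, entry points and vulnerable method names are all hypothetical.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReachabilitySketch {

    // Hypothetical call graph: caller method -> methods it invokes.
    static final Map<String, List<String>> CALL_GRAPH = Map.of(
            "com.example.OrderService.submit", List.of("org.oss.Parser.parse"),
            "org.oss.Parser.parse", List.of("org.oss.Parser.unsafeEval"),
            "com.example.ReportJob.run", List.of("org.oss.Formatter.format"));

    // Returns true if any vulnerable method is reachable from the given entry point.
    static boolean isEffective(String entryPoint, Set<String> vulnerableMethods) {
        Set<String> visited = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(entryPoint);
        while (!work.isEmpty()) {
            String method = work.pop();
            if (!visited.add(method)) continue;              // already explored
            if (vulnerableMethods.contains(method)) return true;
            CALL_GRAPH.getOrDefault(method, List.of()).forEach(work::push);
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> vulnerable = Set.of("org.oss.Parser.unsafeEval"); // hypothetical vulnerable sink
        System.out.println(isEffective("com.example.OrderService.submit", vulnerable)); // true: effective
        System.out.println(isEffective("com.example.ReportJob.run", vulnerable));       // false: ineffective
    }
}

In this toy example, only the first entry point can reach the vulnerable method, so only that alert would be prioritized.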

A Better Mousetrap

Effective Usage Analysis is an innovative technology representing a radical new approach to effectiveness analysis that may be applied to a variety of use cases, said WhiteSource’s Habusha. SCA tools traditionally identify security vulnerabilities associated with an open source component by matching its calculated digital signature with an entry stored in a specialized database maintained by the SCA vendor.

SCA tools retrieve data for that entry based on reported vulnerabilities in repositories such as the NVD, the U.S. government repository of standards-based vulnerabilities.

“While the traditional approach can identify open source components for which security vulnerabilities are reported, it does not establish if the customer’s proprietary code actually references — explicitly or implicitly — entities reported as vulnerable in such components,” said Habusha.
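A minimal sketch of that traditional matching step might look like the following: fingerprint a dependency artifact and look the digest up in a vulnerability knowledge base. The digest value and CVE identifier here are placeholders, and a real SCA vendor's knowledge base holds far richer matching data than a single hash table.

// Sketch of signature-based SCA matching: hash an artifact, look it up in a
// (hypothetical) knowledge base of digests mapped to reported vulnerabilities.
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.List;
import java.util.Map;

public class SignatureLookupSketch {

    // Hypothetical knowledge base: artifact SHA-256 digest -> reported CVE identifiers.
    static final Map<String, List<String>> KNOWLEDGE_BASE = Map.of(
            "0f5a0000000000000000000000000000000000000000000000000000000000aa",
            List.of("CVE-0000-0001"));   // placeholder digest and identifier

    static String sha256(Path artifact) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(artifact));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        Path jar = Path.of(args[0]);     // e.g. a dependency jar pulled into the build
        List<String> cves = KNOWLEDGE_BASE.getOrDefault(sha256(jar), List.of());
        System.out.println(jar.getFileName() + " -> "
                + (cves.isEmpty() ? "no known vulnerabilities on record" : cves));
    }
}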

WhiteSource’s new product is an added component that targets both security professionals and developers. It helps application security professionals prioritize their security alerts and quickly detect the critical problems that demand their immediate attention.

It helps developers by mapping the path from their proprietary code to the vulnerable open source functionality, providing insights into how they are using the vulnerable functionality and how the issues can be fixed.

Different Bait

Effective Usage Analysis employs a new scanning process that includes the following steps:

  • Scanning customer code;
  • Analyzing how the code interacts with open source components;
  • Indicating if reported vulnerabilities are effectively referenced by such code; and
  • Identifying where that happens.

It employs a combination of advanced algorithms, a comprehensive knowledge base, and a fresh new user interface to accomplish those tasks. Effective Usage Analysis enables customers to establish whether reported vulnerabilities constitute a real risk.

“That allows for a significant potential reduction in development efforts and higher development process efficiency,” said Habusha.

Potential Silver Bullet

WhiteSource's new solution has the potential to be a better detection tool for open source vulnerabilities, suggested Avi Chesla, CTO of Empow Cyber Security. The new detection tools will allow developers to understand the potential risk associated with the vulnerabilities.

The tools “will ultimately motivate developers to fix them before releasing a new version. Or at least release a version with known risks that will allow the users to effectively manage the risks through external security tools and controls,” he told LinuxInsider.

The new approach matters because long-standing vulnerabilities are, and should be, known to the industry, Chesla explained, which gives security tools a better chance of detecting exploitation attempts against them.

Effective usage analysis is probably the most important capability, because developers are flooded with alerts, or noise. The work of separating signal from noise is time-consuming and requires cybersecurity expertise, Chesla noted.

The “true” signals are the alerts that represent a vulnerability that actually can be exploited and lead to a real security breach. The cybersecurity market deals with this issue on a daily basis.

“Security analysts are flooded with logs and alerts coming from security tools and experience a similar challenge to identify which alerts represent a real attack intent in time,” Chesla pointed out.

Equifax Factor

The major vulnerability that compromised Equifax last year sent security experts and software developers scrambling for effective fixes. However, it is often a business decision, rather than a security solution, that most influences software decisions, suggested Ed Price, director of compliance and senior solution architect at Devbridge Group.

“Any tools that make it easier for the engineering team to react and make the code more secure are a value-add,” he told LinuxInsider.

In some cases, the upgrade of a single library, which then cascades down the dependency tree, will create a monumental task that cannot be fixed in a single sprint or a reasonable timeframe, Price added.

“In many cases, the decision is taken out of the hands of the engineering team and business takes on the risk of deploying code without the fixes and living with the risk,” Price said, adding that no tool — open source or otherwise — will change this business decision.

“Typically, this behavior will only change in an organization once an ‘Equifax event’ occurs and there is a penalty in some form to the business,” he noted.

Saving Code Writers’ Faces

WhiteSource's new tool is another market entry that aims to make sense of the interconnected technologies used in enterprise environments, suggested Chris Roberts, chief security architect at Acalvio.

“The simple fact of the matter is, we willingly use code that others have written, cobbling things together in an ever increasingly complex puzzle of collaborative code bases,” he told LinuxInsider, “and then we wonder why the researchers and criminals can find avenues in. It is good to see someone working hard to address these issues.”

The technologies will help if people both pay attention and learn from the mistakes being made. It is an if/and situation, Roberts said.

The logic is as follows: If I find a new tool that helps me understand the millions of lines of code I have to manage or build as part of a project, and I accept that the number of errors per 100 lines of code is still unacceptable, then a technology that unravels those complexities, dependencies and libraries is going to help, he explained.

“We need to use it as a learning tool and not another crutch or Band-Aid to further mask the garbage we are selling to people,” Roberts said.

Necessary Path

Hackers love open source software security vulnerabilities because they are a road map for exploiting unpatched systems, observed Tae-Jin Kang, CEO of Insignary. Given that the number of reported vulnerabilities hit a record in 2017, according to the CVE database, finding the vulnerabilities is the best first line of defense.

“Once they are found in the code and patched, then it is appropriate to begin leveraging technologies to deal with higher-order, zero-day issues,” Kang told LinuxInsider.

Organizations for years have looked to push back the day of reckoning with regard to OSS security vulnerabilities. They have been viewed as trivial, while engineering debt has piled up.

“Equifax has been the clearest illustration of what happens when these two trends meet,” said Kang. “With the implementation of GDPR rules, businesses need to get more aggressive about uncovering and patching security vulnerabilities, because the European Union’s penalties have teeth.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Red Hat Launches Fuse 7, Fuse Online for Better Cloud Integration | Enterprise


By Jack M. Germain

Jun 5, 2018 7:00 AM PT

Red Hat on Monday launched its Fuse 7 cloud-native integration solution and introduced Fuse Online, an alternative integration Platform as a Service (iPaaS).

Red Hat Fuse is a lightweight modular and flexible integration platform with a new-style enterprise service bus (ESB) to unlock information. It provides a single, unified platform across hybrid cloud environments for collaboration between integration experts, application developers and business users.

The Fuse 7 upgrade expands the platform’s integration capabilities natively to Red Hat OpenShift Container Platform. OpenShift is a comprehensive enterprise Kubernetes platform.

Fuse Online includes a set of automated tools for connecting software applications that are deployed in different environments. iPaaS often is used by large business-to-business (B2B) enterprises that need to integrate on-premises applications and data with cloud applications and data.

Red Hat customers already using Fuse are likely to welcome the new additions and updates, said Charles King, principal analyst at Pund-IT. Those actively utilizing the company's OpenShift container solutions and those planning hybrid cloud implementations may be especially interested.

“I’m not sure whether those features will attract significant numbers of new customers to Red Hat, but Fuse 7 appears to do a solid job of integrating the company’s container and hybrid cloud technologies into a seamless whole,” King told LinuxInsider.

Competitive Differentiator

Because Red Hat’s Fuse enables subscribers to integrate custom and packaged applications across the hybrid cloud quickly and efficiently, it can be a competitive differentiator for organizations today, the company said.

The new iPaaS offering allows diverse users such as integration experts, application developers and nontechnical citizen integrators to participate independently in the integration process. It gives users a single platform that maintains compliance with corporate governance and processes.

By taking advantage of capabilities in Red Hat OpenShift Container Platform, Fuse offers greater productivity and manageability in private, public or hybrid clouds, said Sameer Parulkar, senior product marketing manager at Red Hat.

“This native OpenShift-based experience provides portability for services and integrations across runtime environments and enables diverse users to work more collaboratively,” he told LinuxInsider.

Fuse Capabilities

Fuse 7 introduces a browser-based graphical interface with low-code drag-and-drop capabilities that enable business users and developers to integrate applications and services more rapidly, using more than 200 predefined connectors and components, said Parulkar.

Based on Apache Camel, the components include more than 50 new connectors for big data, cloud services and Software as a Service (SaaS) endpoints. Organizations can adapt and scale endpoints for legacy systems, application programming interfaces, Internet of Things devices and cloud-native applications.
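For readers unfamiliar with Apache Camel, its routing DSL is the foundation that Fuse's connectors build on. The short Java sketch below wires a file endpoint to a logging endpoint; the endpoint URIs and directory are illustrative, it assumes a standard Camel runtime on the classpath, and Fuse layers its additional connectors, tooling and OpenShift packaging on top of routes like this.

// A minimal Apache Camel route: poll a directory and log each incoming file.
// Endpoint URIs and the polling directory are illustrative placeholders.
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileToLogRoute {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Watch an inbox directory and hand each file to a logging endpoint.
                from("file:data/inbox?noop=true")
                        .log("Received ${file:name}")
                        .to("log:demo?level=INFO");
            }
        });

        context.start();
        Thread.sleep(10_000);   // let the route run briefly for this demo
        context.stop();
    }
}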

Customers can extend services and integrations for use by third-party providers and partners. Users can deploy Fuse alongside Red Hat’s 3scale API Management offering to add capabilities for security, monetization, rate limiting and community features.

Fuse Online is a new service, but it is based on the existing structure of the Fuse application. It is a 24/7 hosted service with drag-and-drop integration capabilities.

“The foundation is the same. It can be used in conjunction with Fuse 7 or separately with the ability to abstract all the customer’s existing data,” said Parulkar. “It allows an organization to get started much more quickly.”

Expanding Agility

Combined with Red Hat OpenShift Container Platform and 3scale API Management, Fuse forms the foundation of Red Hat’s agile integration architecture. 3scale API Management 2.2, released last month, introduced new tools for graphical configuration of policies, policy extensibility and shareability. It also expanded Transport Layer Security (TLS) support.

The result makes it easier for business users to implement their organization's API program. The combined integration technologies let users integrate systems across their hybrid cloud environments more quickly, easily and reliably, Parulkar said.

“Data integration is critical to the National Migration Department’s mission of effective threat prediction, and Red Hat Fuse plays a crucial role in this process,” said Osmar Alza, coordinator of migration control for the Dirección Nacional de Migraciones de la República Argentina. “The Red Hat Fuse platform provides unified access to a complete view of a person for smarter, more efficient analysis, and supports flexible integration.”

Access and Use

Red Hat Fuse 7 is available for download by members of the Red Hat Developer community. Existing Fuse users automatically get the Fuse 7 upgrade.

Fuse Online is available for free trial followed by a monthly subscription.

Both products use the same interface, so the customer gets a unified platform whether used in the cloud or on premises. Fuse offers users more integration than similar solutions provided by IBM, Oracle and Google, Parulkar said.

“The key benefits of any integrated PaaS platform are simplified implementation and centralized management functions. On first glance, Fuse Online seems to hit those notes via well-established and road-tested Red Hat technologies, including OpenShift,” said King.

Fusing Advantages

From the very beginning, the goal of Red Hat Fuse was to simplify integration across the extended enterprise and help organizations compete and differentiate themselves, said Mike Piech, vice president and general manager for middleware at Red Hat.

“With Fuse 7, which includes Fuse Online, we are continuing to enhance and evolve the platform to meet the changing needs of today’s businesses, building off of our strength in hybrid cloud and better serving both the technical and non-technical professionals involved in integration today,” he said.

Red Hat simplifies what otherwise could be a cumbersome task — that is, integrating disparate applications, services, devices and APIs across the extended enterprise.

Fuse enables customers to achieve agile integration to their advantage, noted Saurabh Sharma, principal analyst at Ovum.

“Red Hat’s new iPaaS solution fosters developer productivity and supports a wider range of user personas to ease the complexity of hybrid integration,” he said.

Right Path to the Cloud

Red Hat’s new Fuse offerings are further proof that businesses — and especially enterprises — have embraced the hybrid cloud as the preferred path forward, said Pund-IT’s King.

“That is a stick in the eye to evangelists who have long claimed that public cloud will eventually dominate IT and rapidly make internal IT infrastructures a thing of the past,” he remarked.

Pushing Old Limits

Fuse comes from a more traditional or legacy enterprise applications approach centered around service-oriented architecture (SOA) and enterprise service bus (ESB). As was common back in the day, there’s a lot of emphasis on formal standard compliance as opposed to de facto open source standardization through project development, noted Roman Shaposhnik, vice president for product and strategy at Zededa.

While the current generation of enterprise application architectures unquestionably is based on microservices and 12-factor apps, Fuse and ESB in general still enjoy a lot of use in existing applications, he told LinuxInsider. That use, however, is predominantly within existing on-premises data center deployments.

“Thus the question becomes: How many enterprises will use the move to the cloud as a forcing function to rethink their application architecture in the process, versus conducting a lift-n-shift exercise first?” Shaposhnik asked.

It is hard to predict the split. There will be a nontrivial percentage that will pick the latter and will greatly benefit from a more cloud-native implementation of Fuse, he noted.

“This is very similar to how Amazon Web Services had its initial next generation-focused, greenfield application deployments built, exclusively based on cloud-native principles and APIs, but which over time had to support a lot of legacy bridging technologies like Amazon Elastic File System,” Shaposhnik said. “That is basically as old school of a [network-attached storage]-based on [network file system] protocol as one can get.”

Possible Drawbacks

The advantage of such bridging technology is clear: one more roadblock is removed from the path to seamlessly lifting and shifting legacy enterprise applications into cloud-native deployment environments. That same convenience, however, can become a disadvantage, Shaposhnik noted.

“The easier the cloud and infrastructure providers make it for enterprises to continue using legacy bridging technologies, the more they delay migration to the next-generation architectures, which are critical for scalability and rapid iteration on the application design and implementation,” he said.

Red Hat's technology can be essential to enterprise cloud use, said Ian McClarty, CEO of PhoenixNAP Global IT Solutions.

“To organizations leveraging the Red Hat ecosystem, Fuse helps manage components that today are handled from disparate sources into a much simpler-to-use interface with the capability of extending functionality,” he told LinuxInsider.

The advantage of an iPaaS offering is ease of use, said McClarty. Further, managing multiple assets becomes much easier, and scale-out becomes possible.

One disadvantage is the availability of the system. Since it is a hosted solution, subscribers are limited by the uptime of the vendor, said McClarty.

Another disadvantage is that vendor lock-in becomes a stronger reality, he pointed out. The DevOps/system administrator relies on the iPaaS system for daily tasks, so the vendor becomes much harder to displace.


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Private Cloud May Be the Best Bet: Report | Enterprise


By Jack M. Germain

Jun 13, 2018 5:00 AM PT

News flash: Private cloud economics can offer more cost efficiency than public cloud pricing structures.

Private (or on-premises) cloud solutions can be more cost-effective than public cloud options, according to “Busting the Myths of Private Cloud Economics,” a report 451 Research and Canonical released Wednesday. That conclusion counters the notion that public cloud platforms traditionally are more cost-efficient than private infrastructures.

Half of the enterprise IT decision-makers who participated in the study identified cost as the No. 1 pain point associated with the public cloud. Forty percent mentioned cost-savings as a key driver of cloud migration.

“We understand that people are looking for more cost-effective infrastructure. This was not necessarily news to us,” said Mark Baker, program director at Canonical.

“It was interesting to see the report point out that operating on-premises infrastructure can be as cost-effective as using public cloud services if done in the right way,” he told LinuxInsider.

Report Parameters

The Cloud Price Index, 451 Group’s tracking of public and private cloud pricing since 2015, supplied the data underpinning the latest report. Companies tracked in the Cloud Price Index include but are not limited to Amazon Web Services, Google, Microsoft, VMware, Rackspace, IBM, Oracle, HPE, NTT and CenturyLink.

The Cloud Price Index is based on quarterly surveys of some 50 providers across the globe that together represent nearly 90 percent of global Infrastructure as a Service revenue, noted Owen Rogers, director of the Digital Economics Unit at 451 Research.

“Most providers give us data in return for complimentary research. Canonical asked us if they could participate as well. Any provider is welcome to submit a quotation and to be eligible for this research,” he told LinuxInsider.

Providers are not compared directly with each other, because each vendor and each enterprise scenario is different. It is not fair to say Provider A is cheaper than Provider B in all circumstances, Rogers explained.

“We just provide benchmarks and pricing distributions for a specific use-case so that enterprises can evaluate if the price they are paying is proportional to the value they are getting from that specific vendor,” he said. “Because we keep individual providers’ pricing confidential, we get more accurate and independent data.”

Private Cloud Trend

The private cloud sector continues to attract enterprise customers looking for a combination of price economy and cloud productivity. That combination is a driving point for Canonical’s cloud service, said Baker.

“We see customers wanting to be able to continue running workloads on-premises as well as on public cloud and wanting to get that public cloud economics within a private cloud. We have been very focused on helping them do that,” he said.

Enterprise customers have multiple reasons for choosing on-premises or public cloud services. Those reasons range from workload characteristics and highly variable workloads to different business types, such as retail operations. Public clouds let users vary their capacity.

“You see the rates of innovation delivered by the public cloud because of the new services they are launching,” said Baker, “but there is a need for some to run workloads on-premises as well. That can be for compliance reasons, security reasons, or cases where systems are already in place.”

In some cases, maintaining cloud operations on-premises can be even more cost-effective than running in the public cloud, he pointed out. Cost is only one element, albeit a very important one.

Report Highlights

The public cloud is not always the bargain buyers expect, the report suggests. Cloud computing may not deliver the promised huge cost savings for some enterprises.

Reducing costs was the enterprise’s main reason for moving to the cloud, based on a study conducted last summer. More than half of the decision-makers polled said cost factors were still their top pain point in a follow-up study a few months later.

Once companies start consuming cloud services, they realize the value that on-demand access to IT resources brings in terms of quicker time to market, easier product development, and the ability to scale to meet unexpected opportunities.

As a result, enterprises consume more and more cloud services as they look to grow revenue and increase productivity. With scale, public cloud costs can mount rapidly, without savings from economies of scale being passed on, the latest report concludes.

Private Clouds Can Be Cheaper If…

Enterprises using private or on-premises clouds need the right combination of tools and partnerships. Cost efficiency is only possible when operating in a “Goldilocks zone” of high utilization and high labor efficiency.
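A back-of-the-envelope calculation shows why utilization matters so much. The figures below are hypothetical placeholders rather than numbers from the 451 Research report, but they illustrate how the effective per-VM cost of a fixed-cost private cloud climbs as utilization falls, eventually crossing a typical public cloud's on-demand price.

// Toy comparison of private vs. public cloud cost per VM at different utilization
// levels. All prices and capacities are hypothetical, not report figures.
public class CloudCostSketch {

    public static void main(String[] args) {
        double monthlyPrivateCost = 20_000.0;   // hardware, facilities and staff (hypothetical)
        int privateVmCapacity = 500;            // VMs the private cloud can host (hypothetical)
        double publicPricePerVmMonth = 70.0;    // on-demand public cloud VM price (hypothetical)

        // The private cloud's fixed cost is spread over fewer VMs as utilization drops.
        for (double utilization : new double[] {0.9, 0.6, 0.3}) {
            double vmsInUse = privateVmCapacity * utilization;
            double privatePricePerVm = monthlyPrivateCost / vmsInUse;
            System.out.printf("Utilization %.0f%%: private $%.2f per VM vs public $%.2f per VM%n",
                    utilization * 100, privatePricePerVm, publicPricePerVmMonth);
        }
    }
}

Under these assumed numbers, the private cloud is cheaper per VM at 90 percent utilization, roughly at parity around 60 percent, and considerably more expensive at 30 percent, which is the report's Goldilocks-zone argument in miniature.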

Enterprises should use tools, outsourced services and partnerships to optimize their private cloud as much as possible to save money, 451 recommended. That will enhance their ability to profit from value-added private cloud benefits.

Many managed private clouds were priced reasonably compared to public cloud services, the report found, providing enterprises with the best of both worlds — private cloud peace of mind, control and security, yet at a friendlier price.

Managed services can increase labor efficiency by providing access to qualified, experienced engineers. They also can reduce some operational burdens with the outsourcing and automation of day-to-day operations, the report notes.

Convincing Study

While public cloud services can be valuable in many circumstances, they are not necessarily the Utopian IT platform of the future that proponents make them out to be, observed Charles King, principal analyst at Pund-IT.

“As the report suggests, these points are clearly the case where enterprises are involved. However, they are increasingly relevant for many smaller companies, especially those that rely heavily on IT-based service models,” he told LinuxInsider.

An interesting point about the popularity of private cloud services is that their success relates to generational shifts in IT management processes and practices, King noted. Younger admins and other personnel gravitate toward services that offer simplified tools and intuitive graphical user interfaces that are commonplace in public cloud platforms but unusual in enterprise systems.

“Public cloud players deserve kudos for seeing and responding to those issues,” King said. “However, the increasing success of private cloud solutions is due in large part to system vendors adapting to those same generational changes.”

The Canonical Factor

Canonical’s managed private cloud compares favorably to public cloud services, the report found. Canonical last year engaged with 451 Research for the Cloud Price Index, which compared its pricing and services against the industry at large using the CPI’s benchmark averages and market distributions.

Canonical’s managed private cloud was cheaper than 25 of the public cloud providers included in the CPI price distributions, which proves that the benefits of outsourced management and private cloud do not have to come at a premium, according to the report’s authors.

High levels of automation drive down management costs significantly. Canonical is a pioneer in model-driven operations that reduce the amount of fragmentation and customization required for diverse OpenStack architectures and deployments.

That likely is a contributing factor to the report’s finding that Canonical was priced competitively against other hosted private cloud providers. Canonical’s offering is a full-featured open cloud with a wide range of reference architectures and the ability to address the entire range of workload needs at a competitive price.

Dividing Options

It is not so much a divide between private and public cloud usage in enterprise markets today, suggested Pund-IT’s King, as a case of organizations developing a clearer understanding or sophistication about what works best in various cloud scenarios and what does not.

“The Canonical study clarifies how the financial issues driving initial public cloud adoption can and do change over time and often favor returning to privately owned cloud-style IT deployments,” he explained. “But other factors, including privacy and security concerns, also affect which data and workloads companies will entrust to public clouds.”

A valid case exists for using both public and private infrastructure, according to the 451 Research report. Multicloud options are the endgame for most organizations today. This approach avoids vendor lock-in and enables enterprises to leverage the best attributes of each platform, but the economics have to be realistic.

It is worth considering private cloud as an option rather than assuming that public cloud is the only viable route, the report concludes. The economics showcased in the report suggest that a private cloud strategy could be a better solution.


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Can Hackers Crack the Ivory Towers? | Enterprise


Just like leaders in every other field you can imagine, academics have been hard at work studying information security. Most fields aren’t as replete with hackers as information security, though, and their contributions are felt much more strongly in the private sector than in academia.

The differing motives and professional cultures of the two groups act as barriers to direct collaboration, noted Anita Nikolich in her “Hacking Academia” presentation at the CypherCon hacking conference recently held in Milwaukee. Nikolich recently finished her term as the program director for cybersecurity at the National Science Foundation’s Division of Advanced Cyberinfrastructure.

For starters, academics and hackers have very distinct incentives.

“The topics of interest tend to be the same — the incentives are very different,” Nikolich said.

“In the academic community, it’s all about getting tenure, and you do that by getting published in a subset of serious journals and speaking at a subset of what they call ‘top conferences,'” she explained. “For the hacker world … it could be to make the world a better place, to fix things, [or] it could be to just break things for fun.”

These differences in motivations lead to differences in perception — particularly in that the hacker community’s more mischievous air discourages academics from associating with them.

“There is still quite a bit of perception that if you bring on a hacker you’re not going to be able to put boundaries on their activity, and it will harm your reputation as an academic,” Nikolich said.

Deep Rift

The perception problem is something other academics also have observed.

The work of hackers holds promise in bolstering that of academics, noted Massimo DiPierro, a professor at DePaul College of Computing and Digital Media.

Hackers’ findings are edifying even as things stand, he contended, but working side-by-side with one has the potential to damage an academic’s career.

“I think referencing their research is not a problem. I’ve not seen it done much [but] I don’t see that as a problem,” DiPierro said. “Some kind of collaboration with a company is definitely valuable. Having it with a hacker — well, hackers can provide information so we do want that, but we don’t want that person to be labeled as a ‘hacker.'”

Far from not working actively with hackers, many academics don’t even want to be seen with hackers — even at events such as CypherCon, where Nikolich gave her presentation.

“It’s all a matter of reputation. Academics — 90 percent of them have told me they don’t want to be seen at hacker cons,” she said.

Root Causes

While both researchers agreed that their colleagues would gain from incorporating hackers’ discoveries into their own work, they diverged when diagnosing the source of the gulf between the two camps and, to a degree, even on the extent of the rift.

Academic papers have been notoriously difficult to access, and that is still the case, Nikolich observed.

“Hackers, I found, will definitely read and mine through the academic literature — if they can access it,” she said.

However, it has become easier for hackers to avail themselves of the fruits of academic study, according to DiPierro.

“A specific paper may be behind a paywall, but the results of certain research will be known,” he said.

On the other hand, academia moves too slowly and too conservatively to keep up with the private sector, DiPierro maintained, and with the hackers whose curiosity reinforces it. This limited approach is due in part to the tendency of university researchers to look at protocols in isolation, rather than look at how they are put into practice.

“I think most people who do research do it based on reading documentation, protocol validation, [and] looking for problems in the protocol more than the actual implementation of the protocol,” he said.

Risk Taking

That’s not to say that DiPierro took issue with academia’s model entirely — quite the contrary. One of its strengths is that the results of university studies are disseminated to the public to further advance the field, he pointed out.

Still, there’s no reason academics can’t continue to serve the public interest while broadening the scope of their research to encompass the practical realities of security, in DiPierro’s view.

“I think, in general, industry should learn [public-mindedness] from academia, and academia should learn some of the methodologies of industry, which includes hackers,” DiPierro said. “They should learn to take a little bit more risks and look at more real-life problems.”

Academics could stand to be more adventurous, Nikolich said, but the constant pursuit of tenure is a restraining force.

“I think on the academic side, many of them are very curious, but what they can learn — and some of them have this — is to take a risk,” she suggested. “With the funding agencies and the model that there is now, they are not willing to take risks and try things that might show failure.”

Financial Incentives

While Nikolich and DiPierro might disagree on the root cause of the breakdown between hackers and academic researchers, their approaches to addressing it are closely aligned. One solution is to allow anyone conducting security research to dig deeper into the systems under evaluation.

For Nikolich, that means not only empowering academia to actively test vulnerabilities, but to compensate hackers enough for them to devote themselves to full-time research.

“Academics should be able to do offensive research,” she said. “I think that hackers should have financial incentive, they should be able to get grants — whether it’s from industry, from the private sector, from government — to do their thing.”

In DiPierro’s view, it means freeing researchers, primarily hackers, from the threat of financial or legal consequences for seeking out vulnerabilities for disclosure.

“I would say, first of all, if anything is accessible, it should be accessible,” he said. “If you find something and you think that what you find should not have been accessible, [that] it was a mistake to make it accessible, you [should] have to report it. But the concept of probing for availability of certain information should be legal, because I think it’s a service.”


Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.




