
The Router’s Obstacle-Strewn Route to Home IoT Security | Software


It has become conventional wisdom that no information security conference goes by without a presentation on the abysmal state of Internet of Things security. While this is a boon for researchers looking to make a name for themselves, this sorry state of affairs is decidedly not beneficial for anyone who owns a connected device.

IoT device owners aren’t the only ones fed up, though. Right behind them is Eldridge Alexander, manager of Duo Labs at Duo Security. Even better, he has a plan, and the experience to lend it some credibility.

Before assuming his current role at Duo Security, Alexander held various IT posts at Google and Cloudflare. For him, the through-line that ties together his past and present IT work is the security gains that accrue from aligning all of a network’s security controls with the principle of zero-trust.

“I’ve basically been living and breathing zero-trust for the last several years,” Alexander told LinuxInsider.

Simply put, “zero-trust” is the idea that, to the furthest extent possible, devices should not be assumed to be secure, and should be treated accordingly. Zero-trust can manifest in many ways, as it is not so much a single technique as a guiding principle, but the aim is to remain as invulnerable as possible to the compromise of any one device.

A recurring theme among his past few employers, this understandably has left its mark on Alexander, to the point where it positively permeates his plan for IoT security on home networks. His zeal for zero-trust comes to home networks at just the right time.

Although consumer IoT adoption has been accelerating, zero-trust has yet to factor into most consumer networking tech, Alexander observed, and we’re getting to the point where we can’t afford for it not to.

“Investigating not really new threats but increased amount of threats in IoT and home networks, I’ve been really interested in seeing how we could apply some of these very enterprise-focused principles and philosophies to home networks,” he noted.

Network Segmentation

In Alexander’s home IoT security schema, which he unveiled at Chicago’s THOTCON hacking conference this spring, zero-trust chiefly takes the form of network segmentation, a practice which enterprise networks long have relied on.

In particular, he advocates for router manufacturers to provide a way for home users to create two separate SSIDs (one for each segment) either automatically or with a simple user-driven GUI, akin to the one already included for basic network provisioning (think your 192.168.1.1 Web GUI).

One would be the exclusive host for desktop and mobile end-user devices, while the other would contain only the home’s IoT devices, and never the twain shall meet.

Critically, Alexander’s solution largely bypasses the IoT manufacturers themselves, which is by design. It’s not because IoT manufacturers should be exempted from improving their development practices — on the contrary, they should be expected to do their part. It’s because they haven’t proven able to move fast enough to meet consumer security needs.

“My thoughts and talk here is kind of in response to our current state of the world, and my expectations of any hope for the IoT manufacturers is long term, whereas for router manufacturers and home network equipment it is more short term,” he said.

Router manufacturers have been much more responsive to consumer security needs, in Alexander’s view. However, anyone who has ever tried updating router firmware can point to the minimal attention those incremental patches often receive from developers as a counterpoint.

Aside from that issue, router manufacturers typically integrate new features like updated 802.11 and WPA specifications fairly quickly, if for no other reason than to give consumers the latest and greatest tech.

“I think a lot of [router] companies are going to be open to implementing good, secure things, because they know as well as the security community does … that these IoT devices aren’t going to get better, and these are going to be threats to our networks,” Alexander said.

So how would home routers actually implement network segmentation in practice? In Alexander’s vision, unless confident consumers wanted to strike out on their own and tackle advanced configuration options, the router simply would establish two SSIDs at setup. In describing this scenario, he dubbed the SSIDs “Eldridge” and “Eldridge IoT,” along the lines of the more traditional “Home” and “Home-Guest” convention.

The two SSIDs are just the initial and most visible (to the consumer) part of the structure. The real power comes from a VLAN deployed behind each SSID. The one containing the IoT devices, “Eldridge IoT” in this case, would not allow devices on it to send any packets to the primary VLAN (on “Eldridge”).

Meanwhile, the primary VLAN either would be allowed to communicate with the IoT VLAN directly or, preferably, would relay commands through an IoT configuration and management service on the router itself. This latter management service also could take care of basic IoT device setup to obviate as much direct user intervention as possible.
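
To make the idea concrete, here is a minimal sketch of how a vendor’s firmware might script that split on a Linux-based router, assuming iproute2 and iptables are available. The interface name, VLAN IDs and SSID mapping are illustrative only, not anything Alexander or any manufacturer has published.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the two-VLAN split described above.

Assumes a Linux-based home router with iproute2 and iptables available;
must run as root. Interface names, VLAN IDs and bridge names are made up
for the example; vendor firmware would wire these into its own Wi-Fi
and SSID configuration.
"""
import subprocess

TRUNK_IF = "eth0"          # upstream/trunk interface (assumed name)
PRIMARY_VLAN_ID = 10       # VLAN backing the "Eldridge" SSID
IOT_VLAN_ID = 20           # VLAN backing the "Eldridge IoT" SSID


def run(cmd):
    """Run a shell command, echoing it for clarity."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def create_vlan(vlan_id):
    """Create an 802.1Q sub-interface for the given VLAN ID."""
    vif = f"{TRUNK_IF}.{vlan_id}"
    run(["ip", "link", "add", "link", TRUNK_IF, "name", vif,
         "type", "vlan", "id", str(vlan_id)])
    run(["ip", "link", "set", vif, "up"])
    return vif


def isolate(iot_if, primary_if):
    """Drop forwarded traffic that originates on the IoT VLAN and is
    destined for the primary VLAN."""
    run(["iptables", "-A", "FORWARD", "-i", iot_if, "-o", primary_if,
         "-j", "DROP"])


if __name__ == "__main__":
    primary = create_vlan(PRIMARY_VLAN_ID)
    iot = create_vlan(IOT_VLAN_ID)
    isolate(iot, primary)
```

In a real product, the wireless stack (hostapd, for example) would bind each SSID to its VLAN or bridge; the point of the sketch is simply that the isolation is enforced in the forwarding path, with nothing for the user to configure.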

The router “would also spin up an app service such as Mozilla Web Things or Home Assistant, or something custom by the vendor, and it would make that be the proxy gateway,” Alexander said. “You would rarely need to actually talk from the primary Eldridge VLAN over into the Eldridge IoT VLAN. You would actually just talk to the Web interface that would then communicate over to the IoT VLAN on your behalf.”
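
The proxy-gateway pattern Alexander describes could look something like the following minimal sketch: a small web service on the router, reachable from the primary VLAN, that relays commands onto the IoT VLAN. The device addresses, the /toggle endpoint and the relay paths are hypothetical; a shipping router would embed Home Assistant, Mozilla WebThings or a vendor equivalent rather than this toy server.

```python
"""Toy sketch of the proxy-gateway idea: a service on the router, reachable
only from the primary VLAN, relays commands to devices on the IoT VLAN.
Device addresses and the /toggle endpoint are hypothetical."""
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request, error

# Hypothetical mapping of friendly names to addresses on the IoT VLAN.
IOT_DEVICES = {
    "porch-light": "http://192.168.20.11/toggle",
    "thermostat":  "http://192.168.20.12/toggle",
}


class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect paths like /device/porch-light
        name = self.path.rsplit("/", 1)[-1]
        target = IOT_DEVICES.get(name)
        if target is None:
            self.send_error(404, "unknown device")
            return
        try:
            # Forward the command onto the IoT VLAN on the user's behalf.
            with request.urlopen(request.Request(target, method="POST"),
                                 timeout=5) as resp:
                body = resp.read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        except error.URLError:
            self.send_error(502, "device unreachable")


if __name__ == "__main__":
    # Bind only to the router's address on the primary VLAN (assumed address).
    HTTPServer(("192.168.10.1", 8080), RelayHandler).serve_forever()
```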

By creating a distinct VLAN exclusively for IoT devices, this configuration would insulate home users’ laptops, smartphones, and other sensitive devices on the primary VLAN from the compromise of one of their IoT devices. Any rogue IoT device would be blocked from sending packets to the primary VLAN at the data link layer of the OSI model, which it should have no easy way to circumvent.

It would be in router manufacturers’ interests to enable this functionality, said Alexander, since it would offer them a signature feature. If bundled in a home router, it would provide consumers with a security feature that a growing number of them actually would benefit from, all while asking very little of them in the way of technical expertise. It ostensibly would be turned on along with the router.

“I think that’s a valuable incentive to the router manufacturers for distinguishing themselves in a crowded marketplace,” Alexander said. “Between Linksys and Belkin and some of the other manufacturers, there’s not a whole lot of [distinction] between pricing, so offering home assistant and security is a great [distinction] that they could potentially use.”

IoT Security Standards?

There is some promise in these proposed security controls, but it’s doubtful that router manufacturers actually would equip consumer routers to deliver them, said Shawn Davis, director of forensics at Edelson and adjunct industry professor at the Illinois Institute of Technology.

Specifically, VLAN tagging is not supported by most home routers on the market, he told LinuxInsider, and segmenting IoT devices from the primary network would be impossible without it.

“Most router manufacturers at the consumer level don’t support reading VLAN tags, and most IoT devices don’t support VLAN tagging, unfortunately,” Davis said.

“They both could easily bake in that functionality at the software level. Then, if all IoT manufacturers could agree to tag all IoT devices with a particular VLAN ID, and all consumer routers could agree to route that particular tag straight to the Internet, that could be an easy way for consumers to have all of their IoT devices automatically isolated from their personal devices,” he explained.
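
As a rough illustration of the router-side half of that convention, the sketch below uses scapy to read the 802.1Q tag on incoming frames and flag traffic carrying an agreed-upon “IoT” VLAN ID as WAN-only. The VLAN ID is an arbitrary placeholder; no such industry-wide tag actually exists today.

```python
"""Sketch of the router-side half of the convention Davis describes:
read the 802.1Q tag on a frame and treat an agreed "IoT" VLAN ID as
WAN-only traffic. VLAN ID 20 is an arbitrary placeholder. Requires
scapy (pip install scapy) and packet-capture privileges."""
from scapy.all import sniff, Dot1Q

IOT_VLAN_ID = 20  # hypothetical industry-agreed tag for IoT devices


def classify(pkt):
    if pkt.haslayer(Dot1Q) and pkt[Dot1Q].vlan == IOT_VLAN_ID:
        print("IoT-tagged frame -> forward to WAN only:", pkt.summary())
    else:
        print("untagged/primary frame -> normal LAN routing:", pkt.summary())


if __name__ == "__main__":
    # Watch the trunk interface (name assumed) and classify each frame.
    sniff(iface="eth0", prn=classify, store=False)
```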

VLAN tagging is not restricted by any hardware limitations, as Davis pointed out, but is merely a matter of enabling the software to handle it. Just because the manufacturers can switch on VLAN tagging in software, that doesn’t mean it will be an easy matter to convince them to do so.

It’s unlikely that router manufacturers will be willing to do so for their home router lines and, unsurprisingly, it has to do with money, he said.

“A lot of the major companies produce consumer as well as corporate routers,” Davis noted. “I think they could easily include VLAN functionality in consumer routers but often don’t in order to justify the cost increase for feature-rich business level hardware.”

Most router manufacturers see advanced functionality like VLAN tagging as meriting enterprise pricing due to the careful development that it requires to meet businesses’ stricter operational requirements. On top of that, considering the low average technical literacy of home users, router manufacturers have reason to think that power user features in home routers simply wouldn’t be used, or would be misconfigured.

“Aside from the pricing tier differences,” Davis said, “they also might be thinking, ‘Well, if we bake in VLANs and other enterprise-based features, most consumers might not even know how to configure them, so why even bother?'”

Beyond cajoling router makers to enable VLAN tagging and any other enterprise-grade features needed to realize Alexander’s setup, success also would hinge on each manufacturer’s implementation of the features, both in form and function, Davis emphasized.

“I think each manufacturer would have different flows in their GUIs for setting up isolated VLANs, which wouldn’t be the easiest for consumers to follow when switching across different brands,” he said. “I think if IoT security was more standards-based or automatic by default between devices and routers, overall security in consumer devices would greatly improve.”

Securing both of these concessions from router manufacturers would likely come down to ratifying standards across the industry, whether formally or informally, as Davis sees it.

“The different standards boards could potentially get together and try to pitch an IoT security standard to the router and IoT device manufacturers, and try to get them to include it in their products,” he said. “Aside from a new standard, there could potentially be a consortium where a few of the major manufacturers include advanced IoT device isolation in the hopes that others would follow suit.”

Risk Reduction

Alexander’s THOTCON presentation touched on the 5G connectivity that many predict IoT will integrate, but in exploring the viability of alternatives to his setup, Davis quickly gravitated toward Alexander’s proposal.

Connecting to IoT devices via 5G certainly would keep them away from home users’ laptop- and smartphone-bearing networks, Davis acknowledged, but it would present other challenges. As anyone who has ever browsed Shodan can tell you, always-on devices with seldom-changed default credentials connected directly to the public Internet have their downsides.

“Having your IoT devices isolated from your home-based devices is great, but there is still the possibility of the IoT devices being compromised,” Davis said. “If they are publicly accessible and have default credentials, they could then be used in DDoS attacks.”

Enabling IoT for direct 5G Internet connections doesn’t necessarily improve the security of end-user devices, Davis cautioned. IoT owners will still need to send commands to their IoT devices from their laptops or smartphones, and all 5G does is change the protocol that is employed for doing so.

“IoT devices using cellular 4G or 5G connections are another method of isolation,” he said, “but keep in mind, then the devices are relying even more on ZigBee, Z-Wave or Bluetooth Low Energy to communicate with other IoT devices in a home, which can lead to other security issues within those wireless protocols.”

Indeed, Bluetooth Low Energy has its share of flaws, and at the end of the day protocols don’t impact security as much as the security of the devices that speak them.

Regardless of how the information security community chooses to proceed, it is constructive to look at other points in the connectivity pipeline between IoT devices and the users who access them for places where attack surfaces can be reduced. Given how little it would take to include the necessary software, router manufacturers undoubtedly can do more to protect users where IoT manufacturers largely have not.

“I think a lot of the security burden is falling on the consumer who simply wants to plug in their device and not have to configure any particular security features,” Davis said. “I think the IoT device manufacturers and the consumer router and access point manufacturers can do a lot more to try to automatically secure devices and help consumers secure their networks.”


Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.






Shuttleworth on Ubuntu 18.04: Multicloud Is the New Normal | Software


By Jack M. Germain

Apr 29, 2018 5:00 AM PT

Canonical last week released the Ubuntu 18.04 LTS platform for desktop, server, cloud and Internet of Things use. Its debut followed a two-year development phase that led to innovations in cloud solutions for enterprises, as well as smoother integrations with private and public cloud services, and new tools for container and virtual machine operations.

The latest release drives new efficiencies in computing and focuses on the big surge in artificial intelligence and machine learning, said Canonical CEO Mark Shuttleworth in a global conference call.

Ubuntu has been a platform for innovation over the last decade, he noted. The latest release reflects that innovation and comes on the heels of extraordinary enterprise adoption on the public cloud.

The IT industry has undergone some fundamental shifts since the last Ubuntu upgrade, with digital disruption and containerization changing the way organizations think about next-generation infrastructures. Canonical is at the forefront of this transformation, providing the platform for enabling change across the public and private cloud ecosystem, desktop and containers, Shuttleworth said.

“Multicloud operations are the new normal,” he remarked. “Boot time and performance-optimized images of Ubuntu 18.04 LTS on every major public cloud make it the fastest and most-efficient OS for cloud computing, especially for storage and compute-intensive tasks like machine learning,” he added.

Ubuntu 18.04 comes as a unified computing platform. Having an identical platform from workstation to edge and cloud accelerates global deployments and operations. Ubuntu 18.04 LTS features a default GNOME desktop. Other desktop environments are KDE, MATE and Budgie.

Diversified Features

The latest technologies under the Ubuntu 18.04 hood are focused on real-time optimizations and an expanded Snapcraft ecosystem to replace traditional software delivery via package management tools.

For instance, the biggest innovations in Ubuntu 18.04 are related to enhancements to cloud computing, Kubernetes integration, and Ubuntu as an IoT control platform. Features that make the new Ubuntu a platform for artificial intelligence and machine learning also are prominent.

The Canonical distribution of Kubernetes (CDK) runs on public clouds, VMware, OpenStack and bare metal. It delivers the latest upstream version, currently Kubernetes 1.10. It also supports upgrades to future versions of Kubernetes, expansion of the Kubernetes cluster on demand, and integration with optional components for storage, networking and monitoring.

As a platform for AI and ML, CDK supports GPU acceleration of workloads using the Nvidia DevicePlugin. Further, complex GPGPU workloads like Kubeflow work on CDK. That performance reflects joint efforts with Google to accelerate ML in the enterprise, providing a portable way to develop and deploy ML applications at scale. Applications built and tested with Kubeflow and CDK are perfectly transportable to Google Cloud, according to Shuttleworth.
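
As an illustration of what GPU acceleration via the Nvidia device plugin looks like from a developer’s side, the sketch below uses the official Kubernetes Python client to request a GPU for a workload. The image and pod names are placeholders, and it assumes a cluster (such as CDK) with the device plugin already enabled.

```python
"""Illustrative sketch: requesting a GPU from a Kubernetes cluster (such as
CDK with the Nvidia device plugin enabled) using the official Python client.
The image and pod names are placeholders."""
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig for the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.org/ml/trainer:latest",  # placeholder image
                # The Nvidia device plugin advertises GPUs as the
                # extended resource "nvidia.com/gpu".
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```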

Developers can use the new Ubuntu to create applications on their workstations, test them on private bare-metal Kubernetes with CDK, and run them across vast data sets on Google’s GKE, said Stephan Fabel, director of product management at Canonical. The resulting models and inference engines can be delivered to Ubuntu devices at the edge of the network, creating an ideal pipeline for machine learning from the workstation to rack, to cloud and device.

Snappy Improvements

The latest Ubuntu release gives desktop users rapid delivery of the latest application updates. Besides having access to typical desktop applications, software developers and enterprise IT teams can benefit from the acceleration of snaps, deployed everywhere from the desktop to the cloud.

Snaps have become a popular way to get apps on Linux. More than 3,000 snaps have been published, and millions have been installed, including official releases from Spotify, Skype, Slack and Firefox.

Snaps are fully integrated into Ubuntu GNOME 18.04 LTS and KDE Neon. Publishers deliver updates directly, and security is maintained with enhanced kernel isolation and system service mediation.

Snaps work on desktops, devices and cloud virtual machines, as well as bare-metal servers, allowing a consistent delivery mechanism for applications and frameworks.
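
For readers who want to see the delivery model in action, the short sketch below simply drives the real snap command line (present on Ubuntu 18.04 by default) from Python; the package installed is one of the official snaps mentioned above.

```python
"""Minimal sketch of the snap delivery model, wrapping the real `snap` CLI.
Installing typically requires root privileges."""
import subprocess

for cmd in (
    ["snap", "install", "spotify"],   # install straight from the Snap Store
    ["snap", "list"],                 # show installed snaps and revisions
    ["snap", "refresh"],              # pull publisher-delivered updates
):
    subprocess.run(cmd, check=True)
```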

Workstations, Cloud and IoT

Nvidia GPGPU hardware acceleration is integrated in Ubuntu 18.04 LTS cloud images and Canonical’s OpenStack and Kubernetes distributions for on-premises bare metal operations. Ubuntu 18.04 supports Kubeflow and other ML and AI workflows.

Kubeflow, the Google approach to TensorFlow on Kubernetes, is integrated into Canonical Kubernetes along with a range of CI/CD tools, and aligned with Google GKE for on-premises and on-cloud AI development.

“Having an OS that is tuned for advanced workloads such as AI and ML is critical to a high-velocity team,” said David Aronchick, product manager for Cloud AI at Google. “With the release of Ubuntu 18.04 LTS and Canonical’s collaborations to the Kubeflow project, Canonical has provided both a familiar and highly performant operating system that works everywhere.”

Software engineers and data scientists can use tools they already know, such as Ubuntu, Kubernetes and Kubeflow, and greatly accelerate their ability to deliver value for their customers, whether on-premises or in the cloud, he added.

Multiple Cloud Focus

Canonical has seen significant adoption of Ubuntu in the cloud, apparently because it offers an alternative, said Canonical’s Fabel.

Typically, customers ask Canonical to deploy OpenStack and Kubernetes together. That pattern is emerging as a common operational framework, he said. “Our focus is delivering Kubernetes across multiple clouds. We do that in alignment with Microsoft Azure service.”

Better Economics

Economically, Canonical sees Kubernetes as a commodity, so the company built it into Ubuntu’s support package for the enterprise. It is not an extra, according to Fabel.

“That lines up perfectly with the business model we see the public clouds adopting, where Kubernetes is a free service on top of the VM that you are paying for,” he said.

The plan is not to offer overly complex models based on old-school economic models, Fabel added, as that is not what developers really want.

“Our focus is on the most effective delivery of the new commodity infrastructure,” he noted.

Private Cloud Alternative to VMware

Canonical OpenStack delivers private cloud with significant savings over VMware and provides a modern, developer-friendly API, according to Canonical. It also has built-in support for NFV and GPGPUs. The Canonical OpenStack offering has become a reference cloud for digital transformation workloads.

Today, Ubuntu is at the heart of the world’s largest OpenStack clouds, both public and private, in key sectors such as finance, media, retail and telecommunications, Shuttleworth noted.

Other Highlights

Among Ubuntu 18.04’s benefits:

  • Containers for legacy workloads with LXD 3.0 — LXD 3.0 enables “lift-and-shift” of legacy workloads into containers for performance and density, an essential part of the enterprise container strategy.

    LXD provides “machine containers” that behave like virtual machines in that they contain a full and mutable Linux guest operating system, in this case, Ubuntu. Customers using unsupported or end-of-life Linux environments that have not received fixes for critical issues like Meltdown and Spectre can lift and shift those workloads into LXD on Ubuntu 18.04 LTS with all the latest kernel security fixes (see the sketch after this list).

  • Ultrafast Ubuntu on a Windows desktop — New Hyper-V optimized images developed in collaboration with Microsoft enhance the virtual machine experience of Ubuntu in Windows.
  • Minimal desktop install — The new minimal desktop install provides only the core desktop and browser for those looking to save disk space and customize machines with their specific apps or requirements. In corporate environments, the minimal desktop serves as a base for custom desktop images, reducing the security cross-section of the platform.
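
Here is a minimal sketch of the lift-and-shift flow described in the LXD item above, driving the standard lxc command line from Python; the container name, archive path and start script are placeholders for whatever the legacy workload actually is.

```python
"""Sketch of the LXD "lift-and-shift" flow: launch an Ubuntu 18.04 machine
container, copy a legacy workload into it, and run it there. Container name,
archive path and start script are placeholders."""
import subprocess


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Launch a full Ubuntu 18.04 guest as a machine container.
run(["lxc", "launch", "ubuntu:18.04", "legacy-app"])

# Push the legacy application bundle into the container (path is illustrative).
run(["lxc", "file", "push", "legacy-app.tar.gz", "legacy-app/root/"])

# Unpack and start it inside the container.
run(["lxc", "exec", "legacy-app", "--",
     "tar", "-xzf", "/root/legacy-app.tar.gz", "-C", "/opt"])
run(["lxc", "exec", "legacy-app", "--", "/opt/legacy-app/start.sh"])
```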

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






Open Source Is Everywhere and So Are Vulnerabilities, Says Black Duck Report | Enterprise


By Jack M. Germain

May 15, 2018 5:00 AM PT

Black Duck by Synopsys on Tuesday released the 2018 Open Source Security and Risk Analysis report, which details new concerns about software vulnerabilities amid a surge in the use of open source components in both proprietary and open source software.


The report provides an in-depth look at the state of open source security, license compliance and code-quality risk in commercial software. It shows consistent growth in open source use over the last year, with the Internet of Things and other sectors exhibiting similar vulnerability problems.

This is the first report Black Duck has issued since Synopsys acquired it late last year. The Synopsys Center for Open Source Research & Innovation conducted the research and examined findings from anonymized data drawn from more than 1,100 commercial code bases audited in 2017.

The report comes on the heels of heightened alarm regarding open source security management following the major data breach at Equifax last year. It includes insights and recommendations to help organizations’ security, risk, legal, development and M&A teams better understand the open source security and license risk landscape.

The goal is to improve the application risk management processes that companies put into practice.

Industries represented in the report include the automotive, big data (predominantly artificial intelligence and business intelligence), cybersecurity, enterprise software, financial services, healthcare, Internet of Things, manufacturing and mobile app markets.

“The two big takeaways we’ve seen in this year’s report are that the actual license compliance side of things is improving, but organizations still have a long way to go on the open source security side of things,” said Tim Mackey, open source technology evangelist at Black Duck by Synopsys.

Gaining Some Ground

Organizations have begun to recognize that compliance with an open source license and the obligations associated with it really do factor into governance of their IT departments, Mackey told LinuxInsider, and it is very heartening to see that.

“We are seeing the benefit that the ecosystem gets in consuming an open source component that is matured and well vetted,” he said.

One surprising finding in this year’s report is that the security side of the equation has not improved, according to Mackey.

“The license part of the equation is starting to be better understood by organizations, but they still have not dealt with the number of vulnerabilities within the software they use,” he said.

Structural Concerns

Open source is neither more nor less secure than custom code, based on the report. However, there are certain characteristics of open source that make vulnerabilities in popular components very attractive to attackers.

Open source has become ubiquitous in both commercial and internal applications. That heavy adoption provides attackers with a target-rich environment when vulnerabilities are disclosed, the researchers noted.

Vulnerabilities and exploits are regularly disclosed through sources like the National Vulnerability Database, mailing lists and project home pages. Open source can enter code bases in a variety of ways — not only through third-party vendors and external development teams, but also through in-house developers.

Commercial software automatically pushes updates to users. Open source has a pull support model. Users must keep track of vulnerabilities, fixes and updates for the open source system they use.

If an organization is not aware of all the open source it has in use, it cannot defend against common attacks targeting known vulnerabilities in those components, and it exposes itself to license compliance risk, according to the report.

Changing Stride

Asking whether open source software is safe or reliable is a bit like asking whether an RFC or IEEE standard is safe or reliable, remarked Roman Shaposhnik, vice president of product and strategy at Zededa.

“That is exactly what open source projects are today. They are de facto standardization processes for the software industry,” he told LinuxInsider.

A key question to ask is whether open source projects make it safe for downstream users to consume what they produce and incorporate it into fully integrated products, Shaposhnik suggested.

That question gets a twofold answer, he said. The projects have to maintain strict IP provenance and license governance to make sure that downstream consumers are not subject to frivolous lawsuits or unexpected licensing gotchas.

Further, projects have to maintain a strict, well-understood security disclosure and response protocol in which downstream consumers can participate in a safe and reliable fashion.

Better Management Needed

Given the continuing growth in the use of open source code in proprietary and community-developed software, more effective management strategies are needed on the enterprise level, said Shaposhnik.

Overall, the Black Duck report is highly useful, he remarked. Software users have a collective responsibility to educate the industry and the general public on how the mechanics of open source collaboration actually play out, and on the importance of correctly understanding the possible ramifications now.

“This is as important as understanding supply chain management for key enterprises,” he said.

Report Highlights

More than 4,800 open source vulnerabilities were reported in 2017. The number of open source vulnerabilities per code base grew by 134 percent.

On average, the Black Duck On-Demand audits identified 257 open source components per code base last year. Altogether, the number of open source components found per code base grew by about 75 percent between the 2017 and 2018 reports.

The audits found open source components in 96 percent of the applications scanned, a percentage similar to last year’s report. This shows the ongoing dramatic growth in open source use.

The average percentage of open source in the code bases of the applications scanned grew from 36 percent last year to 57 percent this year. This suggests that a large number of applications now contain much more open source than proprietary code.

Pervasive Presence

Open source use is pervasive across every industry vertical. Some open source components have become so important to developers that those components now are found in a significant share of applications.

The Black Duck audit data shows open source components make up between 11 percent and 77 percent of commercial applications across a variety of industries.

For instance, Bootstrap — an open source toolkit for developing with HTML, CSS and JavaScript — was present in 40 percent of all applications scanned. jQuery closely followed with a presence in 36 percent of applications.

Another component common across industries was Lodash, a JavaScript library that provides utility functions for programming tasks. Lodash was the most common open source component in applications used by industries such as healthcare, IoT, Internet, marketing, e-commerce and telecommunications, according to the report.

Other Findings

Eighty-five percent of the audited code bases had either license conflicts or unknown licenses, the researchers found. GNU General Public License conflicts were found in 44 percent of audited code bases.

There are about 2,500 known open source licenses governing open source components. Many of these licenses have varying levels of restrictions and obligations. Failure to comply with open source licenses can put businesses at significant risk of litigation and compromise of intellectual property.

On average, vulnerabilities identified in the audits were disclosed nearly six years ago, the report notes.

Those responsible for remediation are typically slow to remediate, if they remediate at all, which allows a growing number of vulnerabilities to accumulate in code bases.

Of the IoT applications scanned, an average of 77 percent of the code base was comprised of open source components, with an average of 677 vulnerabilities per application.

The average percentage of code base that was open source was 57 percent versus 36 percent last year. Many applications now contain more open source than proprietary code.

Takeaway and Recommendations

As open source usage grows, so does the risk, OSSRA researchers found. More than 80 percent of all cyberattacks happened at the application level.

That risk comes from organizations lacking the proper tools to recognize the open source components in their internal and public-facing applications. Nearly 5,000 open source vulnerabilities were discovered in 2017, contributing to nearly 40,000 vulnerabilities since the year 2000.

No one technique finds every vulnerability, noted the researchers. Static analysis is essential for detecting security bugs in proprietary code. Dynamic analysis is needed for detecting vulnerabilities stemming from application behavior and configuration issues in running applications.

Organizations also need to employ the use of software composition analysis, they recommended. With the addition of SCA, organizations more effectively can detect vulnerabilities in open source components as they manage whatever license compliance their use of open source may require.
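
At its core, software composition analysis is an inventory-and-cross-reference exercise, as in the toy sketch below. The manifest format and advisory data are invented for illustration; real SCA tools draw on curated feeds such as the National Vulnerability Database.

```python
"""Toy illustration of the core of software composition analysis (SCA):
inventory the open source components an application declares and check them
against known-vulnerable versions. The manifest format and advisory data are
invented for the example."""

# Hypothetical advisory data: component -> versions with known CVEs.
KNOWN_VULNERABLE = {
    "bootstrap": {"3.3.6"},
    "lodash": {"4.17.4"},
    "jquery": {"1.12.3"},
}


def parse_manifest(path):
    """Read 'name==version' lines from a simple dependency manifest."""
    deps = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                name, _, version = line.partition("==")
                deps[name.lower()] = version
    return deps


def audit(deps):
    """Return the declared components whose versions are known to be vulnerable."""
    return {name: ver for name, ver in deps.items()
            if ver in KNOWN_VULNERABLE.get(name, set())}


if __name__ == "__main__":
    findings = audit(parse_manifest("dependencies.txt"))
    for name, ver in findings.items():
        print(f"{name} {ver}: known vulnerability, upgrade required")
```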


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






Red Hat Launches Fuse 7, Fuse Online for Better Cloud Integration | Enterprise


By Jack M. Germain

Jun 5, 2018 7:00 AM PT

Red Hat on Monday launched its Fuse 7 cloud-native integration solution and introduced Fuse Online, an alternative integration Platform as a Service (iPaaS).

Red Hat Fuse is a lightweight modular and flexible integration platform with a new-style enterprise service bus (ESB) to unlock information. It provides a single, unified platform across hybrid cloud environments for collaboration between integration experts, application developers and business users.

The Fuse 7 upgrade expands the platform’s integration capabilities natively to Red Hat OpenShift Container Platform. OpenShift is a comprehensive enterprise Kubernetes platform.

Fuse Online includes a set of automated tools for connecting software applications that are deployed in different environments. iPaaS often is used by large business-to-business (B2B) enterprises that need to integrate on-premises applications and data with cloud applications and data.

Red Hat customers already using Fuse are likely to welcome the new additions and updates, said Charles King, principal analyst at Pund-IT. Those actively utilizing the company’s OpenShift container solutions, as well as those planning hybrid cloud implementations, may be especially interested.

“I’m not sure whether those features will attract significant numbers of new customers to Red Hat, but Fuse 7 appears to do a solid job of integrating the company’s container and hybrid cloud technologies into a seamless whole,” King told LinuxInsider.

Competitive Differentiator

Because Red Hat’s Fuse enables subscribers to integrate custom and packaged applications across the hybrid cloud quickly and efficiently, it can be a competitive differentiator for organizations today, the company said.

The new iPaaS offering allows diverse users such as integration experts, application developers and nontechnical citizen integrators to participate independently in the integration process. It gives users a single platform that maintains compliance with corporate governance and processes.

By taking advantage of capabilities in Red Hat OpenShift Container Platform, Fuse offers greater productivity and manageability in private, public or hybrid clouds, said Sameer Parulkar, senior product marketing manager at Red Hat.

“This native OpenShift-based experience provides portability for services and integrations across runtime environments and enables diverse users to work more collaboratively,” he told LinuxInsider.

Fuse Capabilities

Fuse 7 introduces a browser-based graphical interface with low-code drag-and-drop capabilities that enable business users and developers to integrate applications and services more rapidly, using more than 200 predefined connectors and components, said Parulkar.

Based on Apache Camel, the components include more than 50 new connectors for big data, cloud services and Software as a Service (SaaS) endpoints. Organizations can adapt and scale endpoints for legacy systems, application programming interfaces, Internet of Things devices and cloud-native applications.
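
Fuse routes themselves are normally expressed as Apache Camel routes in Java or XML rather than Python. Purely to illustrate the pattern a connector-based route implements (consume from one endpoint, transform the message, deliver to another), here is a language-neutral sketch that borrows nothing from Fuse’s actual API.

```python
"""Language-neutral sketch of the source -> transform -> sink pattern that a
connector-based integration route expresses. Nothing here is Fuse's real API;
the functions stand in for Camel-style endpoints and processors."""


def from_source(records):
    """Stand-in for a consumer endpoint (a file, queue or SaaS connector)."""
    yield from records


def transform(record):
    """Stand-in for a mapping/enrichment step in the route."""
    return {"id": record["id"], "status": record["status"].upper()}


def to_sink(record):
    """Stand-in for a producer endpoint (a REST API or database, say)."""
    print("delivered:", record)


if __name__ == "__main__":
    incoming = [{"id": 1, "status": "new"}, {"id": 2, "status": "shipped"}]
    for rec in from_source(incoming):
        to_sink(transform(rec))
```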

Customers can extend services and integrations for use by third-party providers and partners. Users can deploy Fuse alongside Red Hat’s 3scale API Management offering to add capabilities for security, monetization, rate limiting and community features.

Fuse Online is a new service, but it is based on the existing structure of the Fuse application. Fuse Online is a 24/7 hosted service with drag-and-drop integration capabilities.

“The foundation is the same. It can be used in conjunction with Fuse 7 or separately with the ability to abstract all the customer’s existing data,” said Parulkar. “It allows an organization to get started much more quickly.”

Expanding Agility

Combined with Red Hat OpenShift Container Platform and 3scale API Management, Fuse forms the foundation of Red Hat’s agile integration architecture. 3scale API Management 2.2, released last month, introduced new tools for graphical configuration of policies, policy extensibility and shareability. It also expanded Transport Layer Security (TLS) support.

The result makes it easier for business users to implement their organization’s API program. Combined integration technologies let users more quickly, easily and reliably integrate systems across their hybrid cloud environments, Parulkar said.

“Data integration is critical to the National Migration Department’s mission of effective threat prediction, and Red Hat Fuse plays a crucial role in this process,” said Osmar Alza, coordinator of migration control for the Dirección Nacional de Migraciones de la República Argentina. “The Red Hat Fuse platform provides unified access to a complete view of a person for smarter, more efficient analysis, and supports flexible integration.”

Access and Use

Red Hat Fuse 7 is available for download by members of the Red Hat Developer community. Existing Fuse users automatically get the Fuse 7 upgrade.

Fuse Online is available for free trial followed by a monthly subscription.

Both products use the same interface, so the customer gets a unified platform whether used in the cloud or on premises. Fuse offers users more integration than similar solutions provided by IBM, Oracle and Google, Parulkar said.

“The key benefits of any integrated PaaS platform are simplified implementation and centralized management functions. On first glance, Fuse Online seems to hit those notes via well-established and road-tested Red Hat technologies, including OpenShift,” said King.

Fusing Advantages

From the very beginning, the goal of Red Hat Fuse was to simplify integration across the extended enterprise and help organizations compete and differentiate themselves, said Mike Piech, vice president and general manager for middleware at Red Hat.

“With Fuse 7, which includes Fuse Online, we are continuing to enhance and evolve the platform to meet the changing needs of today’s businesses, building off of our strength in hybrid cloud and better serving both the technical and non-technical professionals involved in integration today,” he said.

Red Hat simplifies what otherwise could be a cumbersome task — that is, integrating disparate applications, services, devices and APIs across the extended enterprise.

Fuse enables customers to achieve agile integration to their advantage, noted Saurabh Sharma, principal analyst at Ovum.

“Red Hat’s new iPaaS solution fosters developer productivity and supports a wider range of user personas to ease the complexity of hybrid integration,” he said.

Right Path to the Cloud

Red Hat’s new Fuse offerings are further proof that businesses — and especially enterprises — have embraced the hybrid cloud as the preferred path forward, said Pund-IT’s King.

“That is a stick in the eye to evangelists who have long claimed that public cloud will eventually dominate IT and rapidly make internal IT infrastructures a thing of the past,” he remarked.

Pushing Old Limits

Fuse comes from a more traditional or legacy enterprise application approach centered on service-oriented architecture (SOA) and the enterprise service bus (ESB). As was common back in the day, there is a lot of emphasis on formal standards compliance as opposed to de facto open source standardization through project development, noted Roman Shaposhnik, vice president for product and strategy at Zededa.

While the current generation of enterprise application architectures unquestionably is based on microservices and 12-factor apps, Fuse and ESB in general still enjoy a lot of use in existing applications, he told LinuxInsider. That use, however, is predominantly within existing on-premises data center deployments.

“Thus the question becomes: How many enterprises will use the move to the cloud as a forcing function to rethink their application architecture in the process, versus conducting a lift-n-shift exercise first?” Shaposhnik asked.

It is hard to predict the split. There will be a nontrivial percentage that will pick the latter and will greatly benefit from a more cloud-native implementation of Fuse, he noted.

“This is very similar to how Amazon Web Services had its initial next generation-focused, greenfield application deployments built, exclusively based on cloud-native principles and APIs, but which over time had to support a lot of legacy bridging technologies like Amazon Elastic File System,” Shaposhnik said. “That is basically as old school of a [network-attached storage]-based on [network file system] protocol as one can get.”

Possible Drawbacks

The advantage of this bridging technology is clear: one more roadblock removed from seamlessly lifting and shifting legacy enterprise applications into cloud-native deployment environments. That same ease, however, can become a disadvantage, Shaposhnik noted.

“The easier the cloud and infrastructure providers make it for enterprises to continue using legacy bridging technologies, the more they delay migration to the next-generation architectures, which are critical for scalability and rapid iteration on the application design and implementation,” he said.

Red Hat’s technology can be essential to enterprise cloud use, said Ian McClarty, CEO of PhoenixNAP Global IT Solutions.

“To organizations leveraging the Red Hat ecosystem, Fuse helps manage components that today are handled from disparate sources into a much simpler-to-use interface with the capability of extending functionality,” he told LinuxInsider.

The advantage of an iPaaS offering is ease of use, said McClarty. Further, managing multiple assets becomes a lot easier, and scale-out becomes a possibility.

One disadvantage is the availability of the system. Since it is a hosted solution, subscribers are limited by the uptime of the vendor, said McClarty.

Another disadvantage is that vendor lock becomes a stronger reality, he pointed out. The DevOps/system administrator relies on the iPaaS system to do daily tasks, so the vendor becomes much harder to displace.


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.




