Digital Transformation: Trust but Verify | IT Infrastructure Advice, Discussion, Community


Moving digital assets to the public cloud reduces costs and increases productivity, but it poses some new information security challenges. Specifically, many Intrusion Detection and Prevention Systems (IDPS) that were designed for the on-premises network come up short when deployed in the public cloud. For this reason, public cloud providers have built-in security layers to manage information security using their own security monitoring infrastructure. Unfortunately, these built-in monitoring services are one-size-fits-all and may miss crucial customer-specific security requirements or user account compromises. This leaves cloud-based assets more vulnerable to data breaches.

Why public clouds are difficult to secure

Public clouds are great when it comes to providing shared compute resources that can be set up or torn down quickly. The cloud provider offers a basic software interface for provisioning storage, servers, and applications, along with basic security monitoring that runs on top of that interface at the application layer. But the application layer runs on top of the network, and the network is the only place where certain classes of dangerous security breaches can be detected and prevented.

In the cloud, customers can’t conduct network-level traffic analysis because public clouds don’t give customers access to the network layer. Clouds restrict users from inspecting or logging the bits that go over the network wire. Inspecting a public cloud at the application layer can give customers information about what the network endpoints are doing, but that’s only part of the picture. For example, breaches due to users’ misbehavior are visible only at the network layer, through communication patterns that are inconsistent with company policies. The cloud’s built-in monitoring services would not detect such breaches because they do not monitor network behavior on behalf of the enterprise. More importantly, if malware or a rogue application somehow makes it into a cloud instance or remote VM hosted in the cloud, native cloud monitoring services may not detect its malicious behavior at the network level. Because customers don’t have access to the bits being transmitted, they’ll never know the malware is there.
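To make the gap concrete, here is a minimal sketch of the kind of network-layer policy check that built-in cloud monitoring does not perform on the customer’s behalf. The roles, flow records, and policy table are hypothetical placeholders, not any provider’s actual API:

```python
# A minimal sketch, assuming flow records of (source role, destination, port).
# POLICY maps each workload role to the destinations it is allowed to contact;
# anything else is flagged as inconsistent with company policy.
POLICY = {
    "web": {("db.internal", 5432)},   # web servers may talk to the database
    "db": set(),                      # the database initiates nothing
}

def check_flow(role, dst, dst_port):
    """Flag any communication pattern that falls outside the policy table."""
    allowed = POLICY.get(role, set())
    if (dst, dst_port) not in allowed:
        print(f"ALERT: {role} contacted {dst}:{dst_port}, outside policy")

# Example: a compromised web server exfiltrating data over HTTPS
check_flow("web", "198.51.100.7", 443)   # -> ALERT
check_flow("web", "db.internal", 5432)   # -> allowed, no output
```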

And the network threats are there. Over 540 million Facebook records were exposed on AWS. In 2017, 57 million Uber customer records were compromised after hackers extracted Uber’s AWS credentials from the company’s private GitHub repository. Public clouds offer no tools for monitoring the network traffic that could have detected and prevented these breaches.

Public cloud operators could see what’s going on if they were to look at the network traffic, but they don’t provide that information to their customers. Most of the time, public cloud operators focus on providing application-level security information from systems like firewalls or endpoint antivirus solutions. Adding next-generation (NG) firewalls from third-party vendors to public cloud deployments adds the ability to customize the inspection of all the bits flying by. But this fails to detect communications within the cloud (for example, between a web server and a database) or lateral communications (for example, a compromised host trying to spread between VMs on the internal cloud network). This leaves blind spots that can allow malware to execute without the user’s knowledge. Lastly, when there is a breach, cloud customers in most cases can’t even precisely quantify the number of records or the amount of data exfiltrated.

As it’s not feasible to deploy hardware on a public cloud provider’s premises, the way to eliminate these blind spots lies with software that can implement a virtual tap and monitor traffic at the network level. The industry is now moving away from dedicated hardware devices and toward multi-function software that will address these needs.




From 0 To 6000: Celebrating One Year Of Proton, Valve’s Brilliant Linux Gaming Solution





This week, Valve’s Proton turns one year old, and it has unarguably propelled the notion of gaming on Linux further than I would have thought possible. It has led to noticeably more mainstream press and YouTube coverage of desktop Linux, including this gem from Linus Tech Tips titled “Linux Gaming Finally Doesn’t Suck.” (Forbes)


Taking AI to the IoT Edge | IT Infrastructure Advice, Discussion, Community


Two disruptive technologies, artificial intelligence (AI) and edge computing, are joining together to help make yet another disruptive technology, the Internet of Things (IoT), more powerful and versatile.

AI on the IoT edge is increasingly seen as a technology that will be critical to the success of IoT networks covering many different applications. When IoT technology first appeared, many observers thought that most computing tasks would be handled entirely in the cloud. Yet when it comes to IoT deployments in areas such as manufacturing and logistics, and technologies like autonomous vehicles, decisions have to be made as fast as possible. “There’s a huge benefit in getting the analytics capability, or the AI capability, to where the action is,” said Kiva Allgood, Ericsson’s head of IoT.

In the years ahead, IoT sensors will collect and stream increasingly large amounts of data, stretching the cloud’s ability to keep pace. “Data growth drives network constraints, as well as the need to analyze and act on this information in near real-time,” observed Steen Graham, general manager of Intel’s IoT ecosystem/channels unit. “Deploying AI at the edge enables you to address network constraints by discarding irrelevant data and compressing essential data for future insights and drive actionable insights in near-real-time with AI.”

S. Hamid Nawab, chief scientist at Yobe, a company that makes AI-on-the-edge software for voice recognition, agreed. “AI on the edge can evaluate the local situation and determine whether or not it’s necessary to send information to the cloud for further processing,” he explained. “It can also provide signal-level pre-processing of the cloud-bound stream so that the cloud-based processing can focus its resources on higher-level issues.”
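A minimal sketch of the pattern Graham and Nawab describe might look like the following: the device evaluates each sensor reading locally, discards the in-range ones, and forwards only statistically unusual readings to the cloud. The window size, threshold, and send_to_cloud() stub are hypothetical choices for the example:

```python
# A minimal edge-filtering sketch: keep a local window of readings and
# upload only those that score as anomalous. All parameters are hypothetical.
import statistics

WINDOW = []          # recent readings kept on the device
MAX_WINDOW = 100     # local history size
MIN_HISTORY = 10     # don't score until we have enough context
THRESHOLD = 3.0      # z-score above which a reading is "essential"

def send_to_cloud(reading, score):
    print(f"uploading {reading} (score {score:.1f})")   # stand-in for real upload

def on_reading(value):
    WINDOW.append(value)
    if len(WINDOW) > MAX_WINDOW:
        WINDOW.pop(0)
    if len(WINDOW) < MIN_HISTORY:
        return                                   # not enough local history yet
    mean = statistics.mean(WINDOW)
    stdev = statistics.stdev(WINDOW) or 1e-9     # avoid divide-by-zero
    score = abs(value - mean) / stdev
    if score >= THRESHOLD:
        send_to_cloud(value, score)              # essential data goes upstream
    # everything else is discarded at the edge, saving bandwidth and power
```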

Use cases

AI on the IoT edge promises to make its biggest impact on organizations that require real-time data analytics for immediate decision-making, such as deciding whether to raise or lower prices immediately based on consumer demand or on factors like time, temperature, or inventory level. “Another example is use cases where constant cloud connectivity is simply not available,” observed Tim Sherwood, vice president of IoT and mobility at telecom firm Tata Communications.

Edge AI can also help IoT devices conserve power by limiting communication with the cloud to times when it is strictly necessary to do so, Nawab noted. “There are [also] ‘secure’ use cases where the security risks in sending data streams on the IoT network need to be minimized,” he added.

Industries that can expect to see the most benefits from AI on the IoT edge include healthcare, manufacturing, retailing, and smart cities projects. “The application of IoT in healthcare might bring the most impact on humanity,” Graham stated. “The combination of AI and IoT is streamlining drug discovery and speeding up genomics processing and medical imaging analysis, making the latter more accurate for personalized treatment.”

Security concerns

While AI on the IoT edge promises many benefits, it also has some inherent drawbacks. Chris Carreiro, CTO of Park Place Technologies’ ParkView data center monitoring service, warned that the approach potentially gives data centers slightly less control over collected data. “Business systems would now be pushed down from a central data center out to a local plant or branch,” he explained. “This would decentralize the infrastructure, changing requirements for security, both physical and network.”

Security is, in fact, a top AI on the IoT edge concern. “A person would have to be pretty advanced to hack into some of the [IoT] networks, but it’s essential to be aware that some people want to do that,” Allgood reported.

“When you give endpoints more control over data, they become a target for cyberattacks,” Sherwood observed. He noted that to deal with this vulnerability, Tata is exploring the possibility of updating its SIM cards to improve device authentication and network policy controls, limiting the data sources that the device can reach, and providing enhanced security for IoT data in motion.

Getting started

Given that AI on the IoT edge is still an emerging technology with relatively few real-world deployments, it’s important for potential adopters to temper their excitement with pragmatism. “The cost of adopting edge AI may outweigh the benefits of real-time intelligence and decision making in some use cases, so this is the first point to consider,” Sherwood advised. He noted that IT leaders also need to fully understand their needs and goals before reaching a final decision on whether or not to bring AI to the edge of their IoT network. Still, for many organizations, the answer will be affirmative. “If you need your IoT application to analyze data at rapid intervals for immediate decision making, you need edge AI,” Sherwood said.

Graham predicted that the next five to ten years will see a rapid move toward a software-defined, more autonomous world that will pave the way for transformation and innovation across industries. “AI, IoT, and edge computing are at the center of this transformation,” he noted. “To paraphrase [former Intel CEO] Andy Grove, you can be the subject of a strategic inflection point or the cause of one—companies that embrace this transformation will thrive and others will falter.”





openSUSE Board Gets a New Chairman » Linux Magazine


Long-time openSUSE contributor Richard Brown is stepping down from his role as chairperson of the openSUSE board, a position he has held for the last five years. He will be replaced by Gerald Pfeifer, SUSE’s CTO for EMEA. Pfeifer is himself a developer who has contributed to projects like GCC and Wine.

In a blog post, Brown said, “Some of the key factors that led me to make this step include the time required to do the job properly and the length of time I’ve served. Five years is more than twice as long as any of my predecessors. The time required to do the role properly has increased, and I now find it impossible to balance the demands of the role with the requirements of my primary role as a developer in SUSE, and with what I wish to achieve outside of work and community.”

Brown will focus on his work with SUSE’s Future Technology Team, which works on emerging technologies.

“I could not be more excited and humbled to participate in the openSUSE Project as board chair,” Pfeifer said. “Collaboration in the openSUSE community has contributed to remarkable Linux distributions, and I’m looking forward to ongoing growth in both the community and the openSUSE distributions – Linux and beyond – and tools. openSUSE is at the leading edge of a historic shift, as open source software is now a critical part of any thriving enterprise’s core business strategy. This is an exciting time for the openSUSE community, as well as for open source at large.”

The openSUSE project is funded by SUSE, but it is a community-driven project where decisions are made by the community. The openSUSE distros also serve as upstream for many SUSE products, such as SUSE Linux Enterprise and SUSE CaaSP.




Knoppix 8.6 Released » Linux Magazine


Klaus Knopper has announced the release of the latest version of the Knoppix Live GNU/Linux distribution. Knoppix is a classic Live Linux that is often used to repair and restart downed Linux and Windows systems.

Version 8.6 of Knoppix is based on Debian/stable (buster), with some packages from Debian/testing and unstable (sid) for newer graphics drivers or desktop software packages. Knoppix uses Linux kernel 5.2.5 and Xorg 7.7 (core 1.20.4) for supporting current computer hardware.

Knoppix is suitable for both new and old hardware. “Both 32-bit and 64-bit kernels are included for supporting both old and new computers; the 64-bit version also supports systems with more than 4GB of RAM and chroot to 64-bit installations for system rescue tasks. The bootloader will start the 64-bit kernel automatically if a 64-bit-capable CPU is detected,” according to the release notes.
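For illustration only, the detection the release notes describe can be approximated by checking /proc/cpuinfo for the “lm” (long mode) flag, which marks a 64-bit-capable x86 CPU. This Python sketch simplifies what the bootloader actually does, and the kernel image names are hypothetical:

```python
# A simplified sketch of 64-bit CPU detection on Linux. Knoppix's bootloader
# does this at a lower level; the kernel image names here are hypothetical.
def cpu_is_64bit(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # "lm" (long mode) indicates a 64-bit-capable x86 CPU
                return "lm" in line.split(":", 1)[1].split()
    return False

kernel = "linux64" if cpu_is_64bit() else "linux"
print(f"booting kernel image: {kernel}")
```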

The latest Knoppix comes with a boatload of packages pre-installed, including the GNOME 3 and KDE Plasma 5 desktop environments and the Wine 4.0 compatibility tool for supporting Windows applications.

Knoppix 8.6 is available for free download.


