Monthly Archives: July 2021

How The New Workplace Model Challenges Cybersecurity


Accessibility and security. Two words that keep most technology officers awake at night. Perhaps now more than ever, businesses are forced to find new approaches to cybersecurity to keep data safe. As employees no longer connect from an in-house network, keeping data safe across geographies and network boundaries has become the newest challenge.

Quick fixes surfaced many “good enough” answers that are now causing security nightmares. While companies scrambled to work around remote-access limitations, they may have left themselves exposed to the almost equally damaging threat of cyberattacks, especially as attackers grow more sophisticated.

Remote work during COVID-19 increased the average cost of a data breach in the United States by $137,000. And at the height of the pandemic, the FBI reported upwards of 4,000 security complaints per day. Circumstances like these continue to mount, adding to the challenges cybersecurity teams face.

If there were ever a time to go back to the basics and redefine security and accessibility, that time is now. We’ve been relying on a vast and ever-increasing number of discrete security products, from VPNs and next-generation firewalls to the most recent SD-WAN and SASE deployments. We forget that sometimes the absolute best security “tool” is a change in attitude. Rather than keeping everything in the castle, or on-premises, the new workplace needs to shift its security posture to zero trust.

Who gets in?

Bad guys out, good guys in. This long-standing principle has shaped how enterprises approach information security for decades, anchored in the premise that IT environments can be protected from malicious activity simply by making the perimeter bigger, stronger, and more resilient. But as globalization grows and our networks expand across neighborhoods and countries, IT departments must reevaluate not just their tactics but their attitude.

For many organizations, adding layer upon layer of these defenses over an extended period has left them reliant on legacy, on-premises, and cumbersome point solutions. Fortifying the castle one wall, one moat, and one drawbridge at a time doesn’t allow for much architectural progress.

During COVID, organizations that previously had tight control of the user’s endpoint found themselves struggling to provide access to necessary organizational data and to push security updates from their central location onto bandwidth-constrained home networks. Ironically, the more tightly a pre-COVID security stance was aligned to central control, the larger the problem the organization now faced.

According to research, enterprises already run 77% of their workloads in the cloud. While COVID-19 put this adoption in overdrive, the concept isn’t new—what is new is all the ways we’re interacting with cloud architecture, which is where IT must begin to find a “new normal” for internal and external networks. The new framework should become zero-trust.

Who has access?

Whether intentionally or not, anyone who has access to the network can be compromised. A zero-trust framework therefore requires all users, whether inside or outside the organization’s network, to be authenticated, authorized, and continuously validated before being granted or retaining access to applications and data. Zero trust assumes that there is no traditional network edge; networks can be on-premises, in the cloud, or hybrid, which is where many organizations find themselves now.

This type of security embraces the use of more precise and stringent network segmentation, creating what are sometimes called micro-perimeters throughout the network to prevent lateral movement. The goal is that when – not if – a breach occurs, an intruder can’t easily access sensitive data by hopping VLANs, for example. Gartner predicts that by 2023, 60% of enterprises will phase out most of their remote access virtual private networks (VPNs) in favor of Zero-trust Network Access.

Policies and governance also play an important role in a zero-trust architecture, since users should have the least amount of access required to fulfill their duties. Granular control over who, what, where, and when resources are accessed is vital to a zero-trust network.
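
To make the granular-control idea concrete, here is a minimal sketch of a zero-trust style access check, assuming a hypothetical policy table and helper names of our own (none of this reflects a specific vendor’s engine): access is granted only when the role, device posture, resource, location, and time all pass.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    user: str
    role: str               # e.g. "engineer", "finance"
    device_compliant: bool  # endpoint posture check passed
    resource: str           # e.g. "payroll-db"
    location: str           # e.g. "office", "home", "unknown"
    timestamp: datetime

# Hypothetical least-privilege policy: each role lists only the
# resources it needs, plus the contexts in which access is allowed.
POLICY = {
    "engineer": {"resources": {"git", "ci"}, "locations": {"office", "home"}},
    "finance":  {"resources": {"payroll-db"}, "locations": {"office"}},
}

def allow(request: AccessRequest) -> bool:
    """Grant access only when every zero-trust check passes."""
    rule = POLICY.get(request.role)
    if rule is None:
        return False                          # unknown role: deny by default
    if not request.device_compliant:
        return False                          # unhealthy device: deny
    if request.resource not in rule["resources"]:
        return False                          # least privilege: role doesn't need it
    if request.location not in rule["locations"]:
        return False                          # context check: unexpected location
    # Time-based check: deny outside a notional access window (UTC hours).
    if not 6 <= request.timestamp.astimezone(timezone.utc).hour < 22:
        return False
    return True

print(allow(AccessRequest("alice", "finance", True, "payroll-db", "office",
                          datetime.now(timezone.utc))))
```

Every branch here denies by default, which is the design choice that distinguishes a zero-trust check from a perimeter check: nothing is trusted because of where the request comes from.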

Automate the rest

Along with the move to zero trust, IT teams must also automate continual trust evaluations. From the annals of science fiction, we’ve always feared that machines will replace us, when in reality they’re just here to make us better. For the last decade or so, artificial intelligence and automation have emerged as key partners in preparing infrastructures for the future. IT automation, or infrastructure automation, is the use of software to create repeatable processes.

The purpose of automation is to reduce human interaction with IT systems and make the remaining interaction completely predictable. A core component of a zero-trust network is trust evaluation, usually performed by an adaptive access control engine. By combining logs from the trusted proxy with continuous analysis of behaviors, AI can help ensure that access is maintained only for low-risk users.
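
As a rough illustration, and not a description of any particular adaptive access control product, the sketch below shows how an automated loop might score recent proxy-log events for a session and revoke access once the accumulated risk crosses a threshold; the event names, weights, and threshold are invented for the example.

```python
# Toy continuous trust evaluation: score recent proxy-log events for a
# session and drop access when the accumulated risk gets too high.
# Event names, weights, and the threshold are illustrative assumptions.
RISK_WEIGHTS = {
    "impossible_travel": 60,   # logins from two distant locations minutes apart
    "new_device": 20,
    "failed_mfa": 25,
    "bulk_download": 40,
    "normal_request": 0,
}

def session_risk(events: list) -> int:
    """Sum the risk contributed by each observed event."""
    # Unknown event types add a small penalty rather than being ignored.
    return sum(RISK_WEIGHTS.get(event, 10) for event in events)

def evaluate(events: list, threshold: int = 50) -> str:
    score = session_risk(events)
    return "revoke" if score >= threshold else "maintain"

print(evaluate(["normal_request", "new_device"]))   # score 20 -> "maintain"
print(evaluate(["failed_mfa", "bulk_download"]))    # score 65 -> "revoke"
```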

In many ways, IT automation is the foundation of the modern data center, where servers, storage, and networking are transformed into software-defined infrastructure. When it comes to keeping data secure, the fewer human touchpoints, the better. By automating security processes, once-manual, tedious tasks are handled consistently, and security improves as a result.

Who keeps the future secure?

Just like the workplace is changing, so is what we expect from our IT departments and partners. No one could have foreseen the way that our workforce would change—not just to more remote work, but to a truly distributed workforce capable of working anywhere. The reality we find ourselves in now will continue to force innovators to keep their networks secure and accessible. With an agile philosophy, IT teams should feel supported to walk the tightrope between security and accessibility with a zero-trust framework.

Karl Adriaenssens works in the Office of the CTO at GCSIT.

 




Interested in a Cloud Computing Career? This Roadmap Can Point the Way


Like many people, you might be thinking about a career in the fast-growing field of cloud computing. It’s a smart move, with the Open Source Jobs Report finding that cloud computing skills have the biggest impact on hiring decisions among the technical hiring managers surveyed. And recent data have shown that job openings for cloud computing professionals have skyrocketed over the last few years.

The problem for most is determining how and where to start. If you are new to the IT sector, jumping straight into cloud and cloud native technologies is nearly impossible without first gaining an understanding of the infrastructure technologies on which the cloud is built. That’s why we’ve developed the roadmap below, outlining the knowledge and skills needed to successfully pursue a cloud career.

To start, you need to understand Linux. Over 90% of public cloud instances are running on Linux, and if you aren’t proficient in the Linux command line interface, you won’t get very far working in the cloud. You also need to understand DevOps – a term referring to the combination of development and operations which traditionally were separate in the IT space. The vast majority of organizations today use DevOps practices to deploy to the cloud, so you need to understand those practices. 

Once you’ve learned the fundamentals underpinning the cloud, you can start to learn the cloud technologies themselves. 91% of organizations running in the cloud are using Kubernetes, so it’s an ideal technology to focus on. 

To get your feet wet, you can start with some of our free courses:

Introduction to Linux
Introduction to DevOps and Site Reliability Engineering
Introduction to Cloud Infrastructure Technologies
Introduction to Kubernetes

After that, consider our Cloud Engineer Bootcamp if you want a more structured learning program, or check out our full array of cloud training and certification offerings.

And don’t forget to view the Cloud Career Roadmap below for more insights!

[Cloud Career Roadmap infographic: download full size version]


Linux Changes Pipe Behavior After Breaking Problematic Android Apps On Recent Kernels



At the end of 2019, a rework of the Linux kernel’s pipe code changed its logic to only wake up readers when needed, avoiding a possible thundering herd problem. But it turns out some Android libraries abused the old behavior, and this has led to broken Android applications on recent kernels. While the user-space software is in the wrong, the kernel is sticking to its policy of not breaking user-space, and as such Linus Torvalds has changed the code’s behavior for Linux 5.14, with the fix to be back-ported to prior stable kernels.

Rather than only waking up readers when needed, the change merged into the Linux kernel on Friday makes pipe writes always wake up readers. Because some Android libraries, such as Realm, misuse the epoll interface, the pipe change at the end of 2019 ended up breaking some Android apps.

This has broken “numerous Android applications” since Linux 5.5, but given the long gaps between kernel versions shipped by Android, it has only become a problem recently with Android transitioning to Linux 5.10 LTS. Realm’s behavior has since been addressed, but it will take some time before all applications leveraging the library (and any other problematic libraries out there) are updated and rebuilt, so for now broken Android applications are still out there.

While user-space was misusing an interface and that led to “all applications using this library stopped working”, the Linux kernel carries a policy that if applications break from new kernel behavior or changes, it’s a regression. Thus on Friday Linus Torvalds authored and merged a change to always make writes wake up readers, even if extraneous, in order to better jibe with the old behavior.
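
For readers who want a feel for the interface involved, here is a minimal Linux-only Python sketch of the general pattern at issue: a reader registers a pipe’s read end with epoll and waits to be woken when data is written. It illustrates the epoll-on-pipe mechanism only; it does not reproduce Realm’s specific misuse.

```python
import os
import select

# Create a pipe and register its read end with epoll (Linux only).
read_fd, write_fd = os.pipe()
ep = select.epoll()
ep.register(read_fd, select.EPOLLIN)

# Writer side: a write is expected to make the read end readable.
os.write(write_fd, b"hello")

# Reader side: block until epoll reports the pipe as readable.
for fd, event in ep.poll(timeout=1.0):
    if event & select.EPOLLIN:
        print(os.read(fd, 64))   # b'hello'

ep.unregister(read_fd)
ep.close()
os.close(read_fd)
os.close(write_fd)
```

The kernel change restored the guarantee that libraries built on this pattern had come to rely on: every write wakes up waiting readers, even when a wakeup is strictly unnecessary.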

See this commit for those interested in all the technical details on the issue and resolution.


Success Story: Preparing for Kubernetes Certification Improves a Platform Development Engineer’s Skills


Faseela K. is a platform development engineer with a background in open source networking. As she saw the use of containers growing more than the VMs she was working with, she began studying Kubernetes and eventually decided to pursue a Certified Kubernetes Administrator (CKA). We spoke to her about her experience.

Linux Foundation: What was the experience like taking the CKA exam?

Faseela K: I was actually nervous, as this was the first online certification exam I was taking from home, so there was some uncertainty going in. Would the proctor turn up on time? Would the cloud platform where we take the exam get stuck? Would I be able to finish the exam on time? Those and several other questions ran through my mind. But I set aside my concerns, had a very smooth exam experience, and was able to finish it without any difficulties.

LF: How did you prepare for the exam?

FK: I am a person who uses Kubernetes in my day-to-day work, so the topics in the syllabus were familiar to me. On top of that I did some practice tests and online courses. Preparing for the exam made so many of my day-to-day work tasks much easier, and my level of expertise on K8s increased considerably.

LF: How did preparing for and taking CKA help you improve your skills?

FK: Though I work on K8s regularly, the range of concepts and capabilities I was using were minimal. Preparing for CKA helped me touch upon all areas of K8s, and the experience which I already had helped me get a complete end to end view of things. I can troubleshoot Kubernetes issues in a better way now, and go deep into each problem to find a solution.

LF: Tell us more about your current job role. What types of activities are you engaged in and how has the CKA helped with them?

FK: I currently work as a platform development engineer at Cisco, where we develop and maintain an enterprise Kubernetes platform. Troubleshooting, upgrading, networking, and system management of containerized platforms are part of our daily tasks, and the CKA has helped me master all of these areas. The training I took to prepare for the CKA phenomenally transformed my perspective on Kubernetes administration, and this has helped me attain an end to end view of the product. Debugging any issues in the platform has become easier than ever, and the certification has given me even more confidence in fixing issues in a time-sensitive manner.

LF: You mentioned to us previously you’d like to take the Certified Kubernetes Application Developer (CKAD) next; what appeals to you about that certification?

FK: I am planning to go deeper into containerized application development in my career, and hence the CKAD was appealing to me. In fact, I already completed the CKAD and became CKAD certified less than a month after achieving my CKA certification. The confidence I gained from the CKA helped me complete the second one faster as well.

LF: Tell us about your experience working on the OpenDaylight project. What prompted you to move from focusing on SDN to Kubernetes?

FK: I was previously a member of the Technical Steering Committee of the OpenDaylight project at The Linux Foundation, and made a lot of contributions to OpenDaylight. Working in open source has been the most amazing experience I have ever had in my life, and OpenDaylight gave me exposure to the various activities under LF Networking, while being a part of The Linux Foundation generally helped me engage with some of the top notch brains across organizations. 

Coming together from across the globe during various conferences and DDFs, and working together across the company boundaries to solve common SDN problems has given me so much satisfaction. Over a period of time, containers were gaining traction over VMs, and I wanted to get more involved with containerization and platform development, where Kubernetes looked more promising.

LF: What are your future career goals?

FK: I intend to learn more about the internal implementation of K8s, and also to get involved with projects like Istio, service meshes, and Network Service Mesh in the future. My dream is to become a cloud native software developer who promotes containerized application development in a cloud native way.

LF: What technology are you most interested in studying next?

FK: I am currently pursuing a course on the golang programming language. I also plan to take the Certified Kubernetes Security Specialist (CKS) exam if time permits.


Is Unlicensed Wireless Too Risky for Mission-Critical Use?


As the use of mobile devices, IoT sensors, and other wireless technologies continues to rise, the unlicensed wireless spectrum that Wi-Fi technologies rely on can become congested as external interference increases.

Compounding this problem even further is the fact that wireless carriers in the US are beginning to deploy License Assisted Access (LAA) technologies as part of their 5G rollout plans. Although LAA was designed to coexist with Wi-Fi in the 5 GHz space, a recent study from the University of Chicago shows that this may not be the case in some situations. Thus, as 5G continues to be deployed throughout the US, enterprise organizations must be prepared for the potential performance degradation that LAA deployments could have on portions of their wireless LAN (WLAN). Additionally, businesses that rely heavily on wireless for mission-critical uses might want to begin investigating alternatives that are less prone to this type of interference.

What is License Assisted Access (LAA)?

Seeking a low-cost method for expanding backhaul capabilities of LTE and 5G networks, carriers have looked at several unlicensed options that take advantage of the same 5 GHz wireless spectrum that most enterprise Wi-Fi deployments use today. While there are several technologies and standards available, carriers in the US have largely settled on LAA to perform these duties. LAA integrates seamlessly with LTE and 5G technologies and can deliver a significant download performance boost where LAA is deployed. In areas where LTE/5G device usage is dense, carriers are looking to LAA as a way to bolster their ability to handle larger traffic loads without having to use larger chunks of expensive, licensed spectrum.

The potential for significant Wi-Fi degradation when Wi-Fi and LAA coexist

The 3GPP LAA standard made significant attempts to coexist with Wi-Fi in the 5 GHz spectrum. The standard includes a strict Listen-before-talk (LBT) mechanism that forces the LAA platform to monitor channels within the 5 GHz space and only use those channels when they are not being used by Wi-Fi.

On paper and in lab scenarios, LAA with LBT seemed to be a fair way for carriers to tap into unused 5 GHz spectrum without stepping on the toes of existing Wi-Fi deployments. However, in the real world, it seems like this may not be the case. The University of Chicago study points out one common scenario known as the “hidden node problem.”

Without getting overly technical, the hidden node problem is a common situation in which wireless nodes cannot see one another and so are “hidden” from a clear-to-send standpoint. In this situation, any attempt by two such nodes to simultaneously send data to an access point (AP) that sits between them, and thus can see and communicate with both, results in the two transmissions canceling each other out. The hidden node problem illustrates an inherent flaw in the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) function within Wi-Fi that can significantly degrade performance across a WLAN.

The hidden node problem can exist within Wi-Fi-only networks. However, the University of Chicago report shows that hidden nodes operating using LAA technologies can also impact Wi-Fi networks. Therefore, even with strict LBT functionality in place, LAA deployments have the potential to render Wi-Fi useless when hidden node scenarios exist.
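
As a rough illustration of why listen-before-talk alone cannot prevent this, the toy simulation below uses invented positions and a made-up carrier-sense range: two transmitters cannot hear each other, each senses an idle channel, and their frames collide at the access point between them.

```python
# Toy hidden-node illustration: positions on a 1-D line with a fixed
# carrier-sense range. All values are illustrative, not from the study.
SENSE_RANGE = 60  # metres over which a node can "hear" another transmitter

nodes = {"wifi_ap_A": 0, "middle_ap": 50, "laa_node_B": 100}

def can_hear(a: str, b: str) -> bool:
    return abs(nodes[a] - nodes[b]) <= SENSE_RANGE

# Both edge nodes listen before talking, hear nothing, and transmit.
a_senses_b = can_hear("wifi_ap_A", "laa_node_B")
print("A hears B before transmitting:", a_senses_b)   # False: hidden from each other

transmitters = ["wifi_ap_A", "laa_node_B"]
arrivals_at_middle = [t for t in transmitters if can_hear(t, "middle_ap")]
collision = len(arrivals_at_middle) > 1 and not a_senses_b
print("Collision at the middle AP:", collision)       # True: both frames overlap
```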

Alternative connectivity options for mission-critical wireless use-cases

Given the increased chance of wireless interference, which translates to performance degradation, businesses might want to look at alternatives to Wi-Fi in the 5 GHz spectrum. The most obvious choice is to upgrade all or part of a WLAN to APs that support the new Wi-Fi 6E standard, which operates in the 6 GHz space. However, keep in mind that most enterprise-grade manufacturers have yet to launch WLAN products that use this new standard. Additionally, very few Wi-Fi-capable endpoints are Wi-Fi 6E compatible today. Finally, note that the 6 GHz band that Wi-Fi 6E uses is also defined by the FCC as unlicensed spectrum. This means it’s highly likely that a future variant of LAA will tap not only into 5 GHz unlicensed spectrum but also frequencies in the 6 GHz space.

A better option may be to abandon unlicensed spectrum altogether, as interference will likely be an ongoing problem for the foreseeable future. Private LTE or 5G networks that operate in the Citizens Broadband Radio Service (CBRS), for example, eliminate much of this risk. Instead of unlicensed spectrum that can be used (and potentially abused) by anyone, CBRS uses a spectrum-sharing model for its 150 MHz wide band in the 3.5 GHz space.

Any business that wishes to deploy a private mobile network using CBRS-capable technologies must register through an automated coordination tool called the Spectrum Access System (SAS) prior to operation. The SAS is essentially a geolocation-based reservation system, operated by a group of technology companies under the oversight of the FCC, that dynamically manages spectrum among those that use it. SAS ensures that private mobile networks in the same geographic location will not use overlapping frequencies that result in interference. Thus, while the number of CBRS channels that a business can use at any given time might fluctuate, interference that renders wireless communication completely useless is far less likely. This added protection, found in CBRS but missing in Wi-Fi today, can safeguard wireless transmissions that businesses consider mission-critical.
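
To illustrate the reservation idea, here is a deliberately simplified sketch with made-up areas and channel numbers; a real SAS also handles priority tiers, power limits, and incumbent protection, but the core overlap check looks something like this.

```python
# Toy SAS-style coordination: a grant is (area, channel). A new request
# is approved only if no existing grant uses the same channel in the
# same area. Tiers, power limits, and incumbent protection are omitted.
existing_grants = {("warehouse-district", 3), ("warehouse-district", 7)}

def request_grant(area: str, channel: int) -> bool:
    if (area, channel) in existing_grants:
        return False                      # overlapping use would interfere
    existing_grants.add((area, channel))  # record the reservation
    return True

print(request_grant("warehouse-district", 3))   # False: channel already reserved here
print(request_grant("warehouse-district", 5))   # True: no conflict, grant recorded
print(request_grant("port-campus", 3))          # True: same channel, different area
```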


