Tag Archives: Security

Five Steps to Address Cloud Security Challenges | IT Infrastructure Advice, Discussion, Community

Today’s interconnected world relies on data being accessible from anywhere, at any time, on any device. The speed and agility that come with hosting services and applications in the cloud are central to modern business success, and these inherent benefits have compelled organizations to migrate some or all of their applications or infrastructure to the cloud. In fact, some industry experts estimate that up to 83 percent of enterprise workloads will migrate to the cloud by 2020.

While the cloud offers significant benefits, organizations need to be aware of its security challenges when planning a cloud-first strategy. Some of those challenges involve not only protection and compliance but also operational considerations, such as the ability to integrate security solutions for on-premises and cloud workloads; to enforce consistent security policies across the hybrid cloud; and to automate virtual machine (VM) discovery to ensure visibility and control over dynamic infrastructure.

1: Balance protection and compliance

Striking a balance between protection and compliance is a huge challenge. Sometimes, it’s about discouraging threat actors by forcing them to invest more time, energy, and resources than they anticipated to breach the organization. Making attackers work through several layers of defenses means they may slip up at some point and trigger an alert before reaching the organization’s crown jewels.

Recent data breaches should push leaders to think beyond compliance. Besides risking fines, they risk their reputation as well. Compliance regulations tend to be treated as a bare-minimum security baseline. Thorough protection, however, involves deploying multiple security layers designed to help IT and security teams streamline operations, increase visibility, and accelerate detection of threats before a full-blown breach occurs.

2: Integrate security solutions for on-premises and cloud workloads

Finding the right security solution that seamlessly integrates with both on-premises and cloud workloads without hurting consolidation ratios, degrading performance, or creating manageability issues is also a challenge. Traditional security tools can, at best, offer separate solutions for on-premises and cloud workloads, which still risks creating visibility and management gaps. At worst, the same traditional solution is deployed on all workloads – cloud and local – creating serious performance issues. It’s important for organizations to adopt a security solution that automatically molds its security agent to the job at hand, based on whether the workload is on-premises or in the cloud, without impacting performance or compromising security capabilities.

3: Deploy consistent security policies across the hybrid cloud

To address this challenge, organizations need security solutions that can adapt their agents to the environment in which they are deployed. For cloud environments, the solution must be agile enough to deliver all the benefits of the cloud without sacrificing security; for traditional on-premises environments, it must be versatile enough to enable productivity and mobility. Organizations must also understand that deploying security policies across hybrid infrastructures can be troublesome, especially without a centralized security console that can seamlessly relay those policies to all endpoints and workloads. It’s important to apply group security policies to newly spawned virtual machines automatically, based on their role within the infrastructure. For instance, newly spawned virtual servers should immediately adhere to their group-specific policies, and the same goes for newly spawned VDI instances. Otherwise, the consequences could be disastrous: machines would be left unprotected against threats and attackers for as long as they’re operational.
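As a rough illustration of role-based policy assignment, the sketch below maps a VM's role tag to a group policy, with an explicit fallback for unclassified machines. The role names, policy names, and function are invented for the example and don't correspond to any particular vendor's API.

```python
# Sketch: auto-assign a group security policy to a newly spawned VM based on
# its role tag. All names here are illustrative assumptions.

POLICY_BY_ROLE = {
    "web-server": "dmz-hardened",
    "db-server": "restricted-egress",
    "vdi": "user-endpoint-baseline",
}

def policy_for_vm(role: str) -> str:
    """Return the group policy for a VM role; unknown roles get a
    deny-by-default policy rather than being left unprotected."""
    return POLICY_BY_ROLE.get(role, "quarantine-until-classified")

print(policy_for_vm("vdi"))          # user-endpoint-baseline
print(policy_for_vm("mystery-box"))  # quarantine-until-classified
```

The key design choice is the fallback: a machine whose role can't be determined is quarantined rather than left running with no policy at all.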

4: Automate VM discovery

Automated VM discovery is central to the value of an integrated security platform, as security policies can be applied automatically based on the type of machine.

Organizations should consider adopting security solutions that can automate VM discovery and apply security policies accordingly, without forcing IT and security teams to push policies to newly instanced workloads manually.

Considering the hybrid cloud’s flexibility in terms of endpoints (physical and virtual) and infrastructure (on-premises and in the cloud), it’s important that the security solution embraces the same elasticity and enables organizations to fully realize the benefits of these infrastructures without sacrificing performance, usability, or security.
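Discovery itself can be modeled as a diff between successive inventory snapshots. The sketch below is a minimal, vendor-neutral illustration; in practice the snapshots would come from a hypervisor or cloud-provider API, and the VM names here are made up.

```python
# Sketch: detect newly spawned (and torn-down) VMs by diffing successive
# inventory snapshots, so policies can be applied the moment a machine appears.

def diff_inventory(known: set[str], current: set[str]) -> tuple[set[str], set[str]]:
    """Return (new_vms, removed_vms) between two inventory snapshots."""
    return current - known, known - current

known = {"vm-01", "vm-02"}
current = {"vm-02", "vm-03"}   # vm-01 was torn down, vm-03 just spawned

new_vms, removed = diff_inventory(known, current)
print(sorted(new_vms))   # ['vm-03']  -> apply group policy immediately
print(sorted(removed))   # ['vm-01']  -> retire from the console
```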

5: Maintain visibility and control over dynamic infrastructure

In the context of adopting a mobility- and cloud-first approach, it has become increasingly difficult for IT and security teams to maintain a clear view of an organization’s security posture, especially since traditional security solutions don’t offer single-pane-of-glass visibility across all endpoints.

Integrating a complete security platform can help IT and security teams save time while offering security automation features that help speed up the ability to identify signs of a data breach accurately.

Addressing cloud security challenges is constant, ongoing work that requires IT and security teams to stay vigilant while adopting the right security and automation tools to take some of the operational burden off their shoulders. When the two teams work together to find the right solutions, the entire infrastructure is protected, regardless of whether workloads run on-premises or in the cloud.


Combatting DNS Hijacking Requires Improved DNS Security

Global DNS hijacking is becoming an increasingly troublesome security threat for the entire Internet. Calls for secure domain authentication using DNSSEC specifications have been ongoing for years. But while added security is a step in the right direction, we all must understand that a huge portion of our Internet security lies in the hands of a single, private entity: the Internet Corporation for Assigned Names and Numbers (ICANN).

The latest cry for improved domain name system (DNS) security was sent out in late February — and it came directly from ICANN. Those of us in the field of IT security fully understand the security concerns surrounding DNS. Like most early networking mechanisms, the first iterations of DNS contained no security safeguards. Instead, DNS was simply built as a hierarchical, distributed database to match a hostname (such as networkcomputing.com) to the unique IP address that computer networks use to communicate. The concern is that without the necessary protections in place, DNS records can be intentionally or unintentionally altered to send people to the wrong destination. And if done skillfully, a session can be hijacked without the end user ever knowing it.
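A toy model makes the risk concrete: classic DNS is, in effect, a distributed name-to-address mapping with nothing that proves who wrote an entry. The addresses below are from reserved documentation ranges and the scenario is purely illustrative.

```python
# Toy model of why unauthenticated DNS is dangerous: a resolver's cache is
# just a name-to-address mapping, and classic DNS gives the client no way to
# verify who wrote an entry.

cache = {"networkcomputing.com": "203.0.113.10"}  # legitimate record

def resolve(name: str) -> str:
    return cache[name]

# An attacker who can alter the record silently redirects every client;
# resolution still "succeeds", so the user sees no error.
cache["networkcomputing.com"] = "198.51.100.66"   # hijacked destination
print(resolve("networkcomputing.com"))  # 198.51.100.66
```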

Moves to enforce DNSSEC are a great way to secure the various DNS servers on the Internet that are managed by governments, corporations, and service providers. DNSSEC authentication helps solidify the integrity of the lower branches of the DNS hierarchy tree. In other words, it helps verify that a compromised DNS server won’t send you to a hijacked server when you point a browser at a specific domain name. That said, this security only goes so far up the tree — and it ends at the very top, where ICANN resides. ICANN controls all the top-level domains (TLDs) we’re familiar with, including .com, .net, and .org. It also controls TLDs for governments and countries, including .gov, .eu, and .cn. Any changes at this level – and any security enforced – are made at the organization’s sole discretion.
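The chain-of-trust idea can be sketched with a toy digest check: a parent zone publishes a digest of its child zone's public key (conceptually, the DS record), so a resolver can detect a swapped key. Real DNSSEC uses DNSKEY, DS, and RRSIG records with asymmetric signatures; this sketch models only the digest comparison, and the key material is invented.

```python
# Toy sketch of the DNSSEC chain-of-trust idea: the parent publishes a hash
# of the child zone's public key, so a substituted key breaks the chain.

import hashlib

def ds_digest(child_key: bytes) -> str:
    """Digest of a child zone's key, as the parent would publish it."""
    return hashlib.sha256(child_key).hexdigest()

# Published by the parent (ultimately anchored at the signed root):
parent_ds = ds_digest(b"com-zone-public-key")

def child_key_is_trusted(presented_key: bytes) -> bool:
    """A resolver accepts the child's key only if it matches the parent's DS."""
    return ds_digest(presented_key) == parent_ds

print(child_key_is_trusted(b"com-zone-public-key"))  # True
print(child_key_is_trusted(b"attacker-key"))         # False — chain breaks
```

This is also why the top of the tree matters so much: every validation ultimately chains up to the root keys, which is the level the article argues sits under ICANN's discretion.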

We’re talking about a massive amount of responsibility – resting with a private non-profit organization. So, how did it get this way?

ICANN from the beginning

In 1983, Jon Postel established the Internet Assigned Numbers Authority (IANA) at the University of Southern California, at a time when USC was under contract with the Defense Advanced Research Projects Agency (DARPA). Until 1998, IANA — and all TLD control — was managed within the U.S. government itself. As the popularity of the Internet exploded in the mid-1990s from both consumer and commercial perspectives, IANA merged with several other Internet-governance groups to form ICANN. The new non-profit was then contracted to manage TLDs for the U.S. National Telecommunications and Information Administration (NTIA) from the time it formed until October 2016, at which point the U.S. government relinquished control to ICANN. With the United States government out of the picture, ICANN now considers itself a global community supporting what it calls a vision of “one world, one Internet.”

Now that the Internet is indeed a global network, some conclude that the decision to remove U.S. control over TLDs was the correct one. Others feel that a compromised ICANN could quickly become a national security threat. Either way, as users of the free and global Internet, we must make sure the necessary checks and balances are in place so that ICANN never becomes corrupted by groups or governments. In other words, we need protocols and transparency in place so we can all “watch the watchers.”


Operational Security is Critical for Container Safety

The container ecosystem is, as expected, growing. Security solutions are popping up that promise better protection of containers and of the communications between them. That’s a good thing, because a Tripwire report on the State of Container Security found that 94% of respondents were “somewhat or very concerned” about container security. We need solutions that scan, verify, lock down, and securely manage secrets to help protect this emerging infrastructure and the applications it delivers.

That’s the good news. Now for the bad news.

Despite these concerns, fewer than half (47%) of respondents in the Tripwire survey cite security as impeding greater container adoption. That means more than half are plowing ahead despite knowing the risks. To wit, the same survey found that 17% of respondents have vulnerable containers, know what those vulnerabilities are, and have deployed them anyway.


The publication of CVE-2019-5736 – which is still undergoing analysis – should be a wake-up call for those who tend to be less, oh, aggressive about ensuring security before deployment. If you hadn’t heard about the vulnerability in runc, let me recap. runc is one of the more commonly deployed container runtimes: a tool used to spawn and run containers according to the Open Container Initiative (OCI) specification. So it’s pretty important to operating containerized environments. It’s also (allegedly) vulnerable to being overwritten by an attacker who controls a container image launched within the environment — giving that attacker root-level code execution on the host.
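One practical mitigation is gating deployments on a patched runtime version. The sketch below compares an installed runc version string against a minimum; the parsing scheme and the `1.0.0-rc6` floor (the upstream release carrying the CVE-2019-5736 fix) are assumptions to adjust for your distro, since many vendors backport the fix to older version strings.

```python
# Sketch: refuse to deploy onto hosts running a container runtime older than
# an assumed minimum patched version. Handles runc's "X.Y.Z-rcN" scheme.

def parse_version(v: str) -> tuple:
    """'1.0.0-rc6' -> (1, 0, 0, 6); a bare '1.0.0' sorts above any rc."""
    base, _, rc = v.partition("-rc")
    nums = tuple(int(p) for p in base.split("."))
    return nums + ((int(rc),) if rc else (float("inf"),))

def runtime_is_patched(installed: str, minimum: str = "1.0.0-rc6") -> bool:
    return parse_version(installed) >= parse_version(minimum)

print(runtime_is_patched("1.0.0-rc5"))  # False — vulnerable, block the deploy
print(runtime_is_patched("1.0.0-rc6"))  # True
```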

Which is likely easier than you think, given that containers introduce significant operational security challenges as well as the ones associated with the containers themselves.

For example, reading through various reports on container security in running environments, you will find that a goodly number (read: more than zero) of container environments running in the public cloud have absolutely no access control on their consoles. Which means you, me, or an attacker can gain access simply by finding one. Access to the console means a compromised container can be launched, which enables exploitation of CVE-2019-5736. Voila — I have root-level access to your environment.

Oh, but you’re not one of those organizations, right? You have properly secured your consoles and require complex passwords. But do you allow loading of images from external sources? A 2018 KubeCon survey found that 73% of respondents have already adopted container registries – from which updated images are acquired. According to a Sysdig report on actual Docker usage, 30% of respondents update container images on a daily basis, and 8% of those update every 5-10 minutes.

Hopefully, those images are coming from private container registries, because a poisoned image can offer the same access. And it happens. It happened when we relied on RPMs to expand and update our Linux-based systems, and it happens today to developers who rely on third-party components for application functionality.

Expanded focus is needed

The problem with container security initiatives right now is that we’re focusing primarily on the containers themselves. We’re so focused on container-to-container communications, deploying mutual TLS between containers, and arguing over how best to protect containerized systems from attack that we’re forgetting about the significance of operations on security.

You wouldn’t deploy a J2EE application with open access to its administrative console, would you? But some are apparently willing to do so when it comes to containers, even though allowing unfettered access to any environmental controls is simply bad security.

The same is true for blindly trusting third-party sources for images in real time. Images you rely on should be vetted, certified, and served from a private, controlled repository. Allowing external images lets attackers effectively push compromised containers by tricking you into loading them. And even if images come from a private repository, scan them. Every. Single. Time.
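One cheap, enforceable piece of that vetting is requiring digest-pinned image references, so that a re-pushed mutable tag like `latest` can't silently change what you run. The sketch below only checks reference syntax (the registry name is made up); it complements, not replaces, scanning.

```python
# Sketch: flag container image references that use mutable tags instead of
# immutable digests. A digest-pinned reference ends in "@sha256:<64 hex chars>".

import re

DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    return bool(DIGEST_RE.search(image_ref))

refs = [
    "registry.example.internal/app@sha256:" + "ab" * 32,  # pinned — accept
    "nginx:latest",                                        # mutable — reject
]
for ref in refs:
    print(ref, "->", "ok" if is_digest_pinned(ref) else "reject")
```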

There are good reasons to be concerned about the internal security of containers and how we should resolve it. But there are just as many good reasons to be concerned about operational security with containers, too. The reason we focus on the former and try to brush the latter under the rug is that internal security can be – and likely will be – resolved with technology. Operational security too often falls under “behavior and practices,” and that, as we know from the bumpy road to DevOps adoption, is a far bigger challenge.



Your Network Security Strategy: Time to Update or Reboot?

As the need for stronger network protection grows ever more urgent, many organizations are examining their security strategies and wondering whether rapidly evolving threat vectors have rendered their existing plans obsolete. This concern often leads IT and business leaders to ask a critical question: is it best to keep updating an existing security strategy, or to simply start over from scratch?

There are several instances when an organization may want to consider creating an entirely new network strategy rather than updating the current one, said Frank Downs, director of the cybersecurity practice at ISACA, an international professional association that’s focused on IT governance. “One of the most significant [motivations] is an attack that reveals that the fundamental elements of the strategy are weak, indicating that a complete overhaul should be considered,” he observed. “An example of this type of incident includes an attack that impacts data in motion within the network and as it leaves the network, such as a man-in-the-middle attack at a gateway point.”

Organizations should also consider developing an entirely new network security strategy when there has been significant change within network architecture or when business goals and objectives have shifted direction, suggested Derek Loonan, a senior security specialist at cybersecurity services provider GreyCastle Security. “For example, moving to a new location or being part of an acquisition.” Loonan noted that to implement and prioritize the controls that will provide the most risk reduction, a security strategy should align directly with the organization’s risk management program. “Strategy should be visualized and managed against a high-level roadmap that depicts the desired end-state within a three to five-year period,” he said.

The network security landscape is undergoing a transition resulting from changes in the underlying traffic patterns, observed Jeff Reed, senior vice president of product management in Cisco’s security business unit. “With apps and data moving to SaaS, IaaS and PaaS, coupled with increasing user mobility and the acceleration of SD-WAN, it’s important to reevaluate what network security controls are being used and where they are being placed.”

Unfixable flaws

Prof. Tom Thomas, a faculty member in Tulane University’s School of Professional Advancement and its IT cybersecurity program, noted that a complete strategy replacement might be needed when spinning up an entirely new infrastructure. A fresh start may also be necessary when an organization’s existing plan becomes so complex and intertwined that creating a fresh strategy becomes the only sensible course. “In this case, you would build the new security infrastructure in parallel with the old and migrate in phases,” he explained. “This also allows for plenty of testing, which is always important.”

Yet another reason for starting anew is when a security infrastructure grows so old and decrepit that it can’t function properly in a modern security environment or is likely to degrade network service in some way. “This is a rip and replace because what is currently in place is so lacking in capabilities that there is little to no value in undergoing a migration,” Thomas said.

Jack Hamm, director of security and network operations for network security firm Gigamon, argued that a fundamental flaw in many network security plans is that they’re built as overlays onto an existing network plan. “This is a bad strategy since it somehow implies that you can build a network and add security,” he advised. Buildings, after all, aren’t constructed by starting with the end goal and then adding the foundation. “Similarly, network security strategies that follow this approach are doomed,” Hamm stated.

Laurence Pitt, security strategy director for Juniper Networks, cautioned that enterprises shouldn’t be too hasty about discarding an existing security blueprint. “This is not to say that everything in the existing strategy can be salvaged, but to entirely rip-and-replace for something new will slow down the ability to respond and will cause confusion,” he explained.

Pitt suggested stripping an obsolete security strategy back to its foundation and then building it back up. “While [the old plan] may be out of date or seen as ineffective, there will be areas that still work, and these can and should be updated rather than recreated,” he reasoned. “This would allow for more focus to be given to entirely new areas, such as IoT protection or implementation of automation technologies.”

Review frequently

Network security strategy should be reviewed yearly, since both the security market and the threats it addresses are in a constant state of change, Reed said. “Every two years an entirely new strategy should be evaluated … to understand what gaps, if any, exist and what opportunities are available for your organization,” he noted. “For nearly all modern businesses, networks are the lifeblood, and they simply can’t afford to be ill-prepared for the ever-increasing landscape of threats.”


Source link

Systemd 241 Released With Security Fixes & Other Changes


Lennart Poettering has tagged the systemd 241 release, which includes the “System Down” security fixes and other improvements to this widely used Linux init system.

Systemd 241 succeeds December's v240 release and incorporates the fixes for the “System Down” vulnerabilities, plus regular-file and FIFO protection, a new stderr priority option for systemd-cat, support for configuring the default locale at compile time, the ability for the kernel-install script to handle more than one initrd file, better handling of backslashes within the EnvironmentFile setting, and other changes outlined in the NEWS file.
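As a small, illustrative example of where the EnvironmentFile= improvement matters, a unit such as the following loads key=value pairs from a file whose values may contain backslashes; the paths and names here are invented, not taken from the release:

```ini
# /etc/systemd/system/example.service — hypothetical unit, for illustration only
[Unit]
Description=Example daemon configured via an environment file

[Service]
# systemd 241 improves how backslashes inside the referenced file are handled
EnvironmentFile=/etc/example/env
ExecStart=/usr/bin/example-daemon $OPTIONS
```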

Systemd 241 can be grabbed fresh from GitHub.