Tag Archives: Security

Librem One Affected By Nasty Security Bug On Launch Day, Acknowledges Rebranded Apps


FREE SOFTWARE --

Yesterday Purism launched the Librem One suite of privacy-minded services, but even with the company priding itself on security, a nasty security issue was uncovered on launch day. The fact that the offered software was quietly re-branded open-source software also rubbed some users the wrong way.

The security issue affected Librem Chat and allowed any user to log into any account on the service due to a typo in the Matrix.org code. The issue was reported and, after some brief downtime, taken care of, as outlined on the Purism blog. While it happened on launch day, the service has fewer than two thousand users so far, so the overall impact was limited, and it doesn't appear the issue was exploited for nefarious purposes.

With Librem One costing $7.99 USD per month or $14.99 USD for a "family pack", a number of users have expressed frustration with Purism largely just re-branding the various pieces of open-source software that comprise this suite. Purism sought to address those concerns in How Purism Works Upstream and Gives Back, arguing that re-branding these pieces of software provides convenience and gives them a leg up in competing with tech giants like Apple and Google. From the linked blog post: "By putting services under a centralized brand, we make these decentralized services just as convenient to use as the big tech alternatives. That way an end-user doesn't have to know what Matrix, ActivityPub, or even IMAP are or try to find all of the applications that work with those services on their particular platform. Instead, they just need to know that they want to chat, join social media, or send email."

We'll see how well the Librem One suite ends up working out, especially with Purism already stretched thin trying to deliver its Librem 5 smartphone next quarter, which is coming two quarters later than originally anticipated.


Four Tips to Worsen Your Network Security | IT Infrastructure Advice, Discussion, Community


If you want to keep your network infrastructure secure, you need to monitor what's going on with routers, switches, and other network devices. Such visibility enables you to quickly detect and investigate threats to perimeter security, such as unauthorized configuration changes, suspicious logon attempts, and scanning activity. For example, improper changes to network device configurations can leave your network open to hackers. If you want to strengthen your network security, never follow these four tips.

Tip # 1: Don’t care about unauthorized logons

Most attempts to log on to a network device are valid actions by network administrators — but some are not. Failing to promptly detect suspicious logon attempts leaves your organization vulnerable to attackers. Unusual events include access by an admin outside of business hours or during holidays, failed logon attempts, and modifications to access rights. An immediate alert about suspicious events enables IT personnel to take action before security is compromised. This practice also helps with compliance audits, as it provides evidence that privileged users and their activities on your devices are closely watched (e.g., who is logging in and how often).
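
To make this concrete, here is a minimal sketch of that kind of logon monitoring in Python, assuming logon events have already been parsed out of device syslog into simple records; the field names, thresholds, and business-hours window are illustrative assumptions rather than anything product-specific:

```python
from collections import Counter
from datetime import datetime

# Hypothetical logon records, e.g. parsed from network device syslog.
events = [
    {"user": "admin",  "time": "2019-05-01 02:13:00", "success": False},
    {"user": "admin",  "time": "2019-05-01 02:13:30", "success": False},
    {"user": "netops", "time": "2019-05-01 02:20:00", "success": True},
    {"user": "netops", "time": "2019-05-01 10:05:00", "success": True},
]

BUSINESS_HOURS = range(8, 18)   # 08:00-17:59 local time (assumed policy)
FAILED_THRESHOLD = 2            # alert after this many failures per user

failed = Counter()
for e in events:
    ts = datetime.strptime(e["time"], "%Y-%m-%d %H:%M:%S")
    # Flag successful logons outside the assumed business-hours window.
    if e["success"] and ts.hour not in BUSINESS_HOURS:
        print(f"ALERT: off-hours logon by {e['user']} at {ts}")
    if not e["success"]:
        failed[e["user"]] += 1

# Flag users with repeated failed logon attempts.
for user, count in failed.items():
    if count >= FAILED_THRESHOLD:
        print(f"ALERT: {count} failed logon attempts for {user}")
```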

Tip # 2: Configure your devices at random

The key threat associated with network devices is improper configuration. A single incorrect change can weaken your perimeter security, raise concerns during regulatory audits, and even cause costly system outages that can bring your business down. For example, a firewall misconfiguration can give attackers easy access to your network, which could lead to lasting damage. Visibility into who changed what gives you insight into and control over your network devices. Continuous auditing enables better user accountability and helps you detect potential security incidents before they cause real trouble.
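
As a rough illustration of that kind of change auditing, the sketch below diffs a device's running configuration against a known-good baseline using Python's standard difflib; the configuration snippets and device name are made up, and in practice both copies would come from your own backup or change-management tooling:

```python
import difflib

# Hypothetical config snapshots; in practice the "running" copy would be
# pulled from the device and the baseline kept under version control.
baseline = """hostname core-switch
snmp-server community public RO
ip access-list standard MGMT
 permit 10.0.0.0 0.0.0.255""".splitlines()

running = """hostname core-switch
snmp-server community public RW
ip access-list standard MGMT
 permit any""".splitlines()

diff = list(difflib.unified_diff(baseline, running,
                                 fromfile="baseline", tofile="running",
                                 lineterm=""))
if diff:
    print("ALERT: configuration drift detected on core-switch:")
    print("\n".join(diff))
```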

Tip # 3: Ignore scanning threats

Hackers often use network scanning to learn about a network's structure and behavior before executing an attack. If you don't monitor your network devices for scanning activity, you may miss malicious behavior until your sensitive data is compromised. To strengthen your protection against scanning threats and minimize the risk of data breaches, ensure continuous monitoring of network devices. Such visibility lets you understand which host and subnet were scanned, which IP address the scan originated from, and how many scanning attempts were made.
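
One simple heuristic for spotting scans is to count how many distinct ports a single source touches on a single host. The Python sketch below applies that idea to hypothetical flow records; the addresses, ports, and threshold are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical flow records (src_ip, dst_ip, dst_port), e.g. exported NetFlow.
flows = [
    ("203.0.113.9", "10.0.0.5", p) for p in (22, 23, 80, 443, 3389, 8080, 8443)
] + [
    ("10.0.0.20", "10.0.0.5", 443),
]

PORT_THRESHOLD = 5  # distinct ports against one host that we treat as a scan

targets = defaultdict(set)
for src, dst, port in flows:
    targets[(src, dst)].add(port)

for (src, dst), ports in targets.items():
    if len(ports) >= PORT_THRESHOLD:
        print(f"ALERT: {src} probed {len(ports)} ports on {dst} (possible scan)")
```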

Tip # 4: Ease control of VPN logons

Virtual private network (VPN) access is a popular way for organizations to improve the security of remote connections, but it comes with security risks of its own. In practice, VPN connections are often used by anyone in the organization without any approvals. Best practice is to provide access to network resources via VPN only after proper approval and only to users who need it for their business role. However, no VPN is 100 percent secure, and any VPN connection is a risk. The major risk scenarios include a user connecting via public Wi-Fi (since someone might steal their credentials) or a user who doesn't normally use the VPN suddenly beginning to do so (which can be a sign that the user has lost their device and someone else is trying to log in with it). Visibility into network devices enables you to keep track of each VPN logon attempt. It also tells you who tried to access your network devices, the IP address each authentication attempt came from, and the cause of each failed VPN logon.
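
As a sketch of what such VPN logon tracking might look like, the snippet below flags failed logons as well as successful logons by users who were never approved for VPN access; the user names, IP addresses, and approval list are all hypothetical:

```python
# Hypothetical VPN logon records with user, source IP, and result.
logons = [
    {"user": "alice",   "src_ip": "198.51.100.7",  "ok": True},
    {"user": "mallory", "src_ip": "203.0.113.50",  "ok": False},
    {"user": "bob",     "src_ip": "192.0.2.10",    "ok": True},
]

# Users approved for VPN access (would normally come from an IdM system).
approved_vpn_users = {"alice", "carol"}

for entry in logons:
    if not entry["ok"]:
        print(f"ALERT: failed VPN logon for {entry['user']} from {entry['src_ip']}")
    elif entry["user"] not in approved_vpn_users:
        print(f"ALERT: {entry['user']} used VPN from {entry['src_ip']} "
              f"without prior approval")
```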




The Missing Piece in Cloud App Security | IT Infrastructure Advice, Discussion, Community


As the economy improves, the workforce becomes more mobile. It has become quite common for employees to take more than their potted plants with them when they leave. They take confidential company data, too – and the majority see nothing wrong with it, even though it is a criminal offense. Failing to properly secure this data leaves companies open to the loss of customers and competitive advantage.

With better visibility into insider threats, organizations can drive bad actors out, improve their overall security posture, and increase trust. Below are the top five events that organizations monitor cloud applications for, and how doing so helps promote good security hygiene within a company.

1. Exported data

Users can run reports on nearly anything within Salesforce, from contacts and leads to customers, and those reports can be exported for easy reference and analysis. By exporting reports, employees can extract large amounts of sensitive data from Salesforce and other cloud applications.

This is a helpful feature for loyal employees, but in the hands of others, such data extractions can make a company vulnerable to data theft and breaches. Departing employees may choose to export a report of customers, using the list to join or start a competing business.

Companies are not helpless, though. Organizations can monitor for exports to:

— Protect sensitive customer, partner and prospect information, increasing trust with your customers and meeting key regulations and security frameworks (e.g., PCI-DSS).

— Easily detect team members who may be stealing data for personal or financial gain and stop the exfiltration of data before more damage occurs.

— More quickly spot and remediate the activity, reducing the cost of a data breach.

— Spot possible instances of compromised credentials and deactivate compromised users.
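
A minimal sketch of export monitoring along these lines, assuming report-export events are already available from the application's audit log (the field names, row threshold, and departing-employee feed are illustrative assumptions):

```python
# Hypothetical report-export audit records (user, report name, rows exported).
exports = [
    {"user": "dana", "report": "All Customers", "rows": 48000},
    {"user": "erin", "report": "Q2 Pipeline",   "rows": 120},
]

ROW_THRESHOLD = 10000            # unusually large export (tune to your data)
departing_employees = {"dana"}   # fed from HR offboarding data

for e in exports:
    if e["rows"] >= ROW_THRESHOLD or e["user"] in departing_employees:
        print(f"ALERT: {e['user']} exported {e['rows']} rows "
              f"from '{e['report']}'")
```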

2. Who is running reports

While organizations focus most of their attention on which reports are being exported, simply running a report could create a potential security issue. The principle of least privilege dictates that people only be given the minimal amount of permissions necessary to complete their job – and that applies to data that can be viewed. But many companies grant broad access across the organization, even to those whose job does not depend on viewing specific sensitive information.

By paying attention to top report runners, report volume and which reports have been run, you can track instances where users might be running reports to access information that’s beyond their job scope. Users may also be running – but not necessarily exporting – larger reports than they normally do or than their peers do.

In addition, you can monitor for personal and unsaved reports, which can help close any security vulnerability created by users attempting to exfiltrate data without leaving a trail. Whether it's a user who is attempting to steal the data, a user who has higher access levels than necessary, or a user who has accidentally run the report, monitoring for report access will help you spot any additional security gaps or training opportunities.
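
For instance, a simple peer comparison can surface top report runners: the sketch below counts report runs per user from a hypothetical audit log and alerts when someone's volume far exceeds the peer median (the data and multiplier are assumptions for illustration):

```python
from collections import Counter
from statistics import median

# Hypothetical report-run audit log: one entry per report execution.
runs = ["dana", "dana", "dana", "dana", "dana", "dana",
        "erin", "frank", "erin", "grace"]

counts = Counter(runs)
typical = median(counts.values())

for user, n in counts.items():
    if n > 3 * typical:   # arbitrary multiplier; tune to your environment
        print(f"ALERT: {user} ran {n} reports vs. a peer median of {typical}")
```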

3. Location and identity of logins

You can find some hidden gems of application interaction by looking at login activity. Terminated users who have not been properly deprovisioned may still be able to access sensitive data after their employment ends or after a contract with a third party expires. Login activity can also tell you a user's location, hours, devices and more — all of which can uncover potential security incidents, breaches or training opportunities.

By monitoring for inactive users logging in, then, companies can protect data from theft by a former employee or contractor. Login activity can also tell you whether employees are logging in after hours or from a remote location. This may be an indicator of an employee working overtime — but it may also be a red flag for a departing employee logging in after hours to steal data, or for compromised credentials.
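
A minimal sketch of that kind of login monitoring is shown below, assuming login events (with a timestamp and coarse location) and a list of deactivated accounts are available; all of the data, the expected-country list, and the business-hours rule are illustrative assumptions:

```python
from datetime import datetime

# Hypothetical login events and a list of deactivated accounts.
logins = [
    {"user": "hank", "time": "2019-05-02 23:40:00", "country": "RO"},
    {"user": "ivy",  "time": "2019-05-02 10:15:00", "country": "US"},
]
deactivated_users = {"hank"}
expected_countries = {"US", "CA"}

for entry in logins:
    hour = datetime.strptime(entry["time"], "%Y-%m-%d %H:%M:%S").hour
    if entry["user"] in deactivated_users:
        print(f"ALERT: deactivated user {entry['user']} logged in at {entry['time']}")
    if entry["country"] not in expected_countries or not 7 <= hour <= 19:
        print(f"WARN: unusual login by {entry['user']} from {entry['country']} "
              f"at {entry['time']}")
```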

4. Changes to profiles and permissions

There are profiles and permissions within cloud applications that regulate what a user can and cannot do. For example, in Salesforce, every user has one profile but can have multiple permissions sets. The two are usually combined by using profiles to grant the minimum permissions and access settings for a specific group of users, then permission sets to grant more permissions to individual users as needed. Profiles control object, field, app and user permissions; tab settings; Apex class and Visualforce page access; page layouts; record types; and login hours and IP ranges.

Permissions for each application vary at each organization. In some companies, all users enjoy advanced permissions; others use a conservative approach, granting only the permissions that are necessary for that user’s specific job roles and responsibilities. But with over 170 permissions in Salesforce, for instance – and hundreds or thousands of users – it can be difficult to grasp the full scope of what your users can do in that application.
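
One way to get a handle on that scope is to compare each user's actual permissions against an allow-list for their role. The sketch below does this over hypothetical data; the roles, permission names, and assignments are assumptions for illustration, not Salesforce's actual permission model:

```python
# Hypothetical role-to-permission allow-lists and actual user assignments.
allowed = {
    "sales_rep":   {"Run Reports"},
    "sales_admin": {"Run Reports", "Export Reports", "Manage Users"},
}
users = [
    {"name": "erin",  "role": "sales_rep",
     "perms": {"Run Reports", "Export Reports"}},
    {"name": "grace", "role": "sales_admin",
     "perms": {"Run Reports", "Manage Users"}},
]

for u in users:
    extra = u["perms"] - allowed[u["role"]]
    if extra:
        print(f"ALERT: {u['name']} holds permissions beyond role "
              f"{u['role']}: {sorted(extra)}")
```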

5. Creating or deactivating users

Managing users includes being able to create and deactivate their accounts. Organizations can monitor for deactivation – which, if not done properly after an employee leaves the organization, may result in an inactive user retaining access to sensitive data or an external attacker getting hold of still-active credentials. In Salesforce and other cloud applications, a security issue may also arise when an individual with administrative permissions creates a "shell," or fake user, under which they can steal data. After the fact, they can deactivate the user to cover their tracks.

Monitoring for user creation is another way that security teams watch for any potential insider threats. And by keeping track of when users are deactivated, you can run a report of deactivated users within a specific time frame and correlate them with your former employees (or contractors) to ensure proper deprovisioning. Monitoring for creation and/or deactivation of users is also required by regulations like SOX and frameworks like ISO 27001.
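
The correlation described above can be as simple as a set difference. The sketch below compares a hypothetical HR offboarding list against the accounts actually deactivated in the application (all names are made up):

```python
# Hypothetical data: employees who left (from HR) vs. accounts actually deactivated.
former_employees = {"hank", "dana", "oscar"}
deactivated_accounts = {"hank"}

not_deprovisioned = former_employees - deactivated_accounts
for user in sorted(not_deprovisioned):
    print(f"ALERT: former employee {user} still has an active account")
```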

Monitor for greater insight

You can’t defend against what you can’t see. With the widespread adoption of cloud applications, businesses are seeing an enormous uptick in user activity that is simultaneously harder to keep track of. Consequently, many organizations are looking for ways to increase visibility into how users are using these applications and the data within them. Monitoring the specific activities detailed above will help organizations increase visibility and keep data safe and secure.


Five Steps to Address Cloud Security Challenges | IT Infrastructure Advice, Discussion, Community


Today’s interconnected world relies on data accessibility from anywhere, at any time, on any device. The speed and agility that comes with hosting services and applications in the cloud are central to modern interconnected success. As such, these inherent benefits have compelled organizations to migrate some or all of their applications or infrastructures to the cloud. In fact, some industry experts estimate that up to 83 percent of enterprise workloads will migrate to the cloud by 2020.

While the cloud may offer significant benefits, organizations need to be aware of the security challenges when planning a cloud-first strategy. Some of those challenges involve not only protection and compliance but also operational considerations, such as the ability to integrate security solutions for on-premise and cloud workloads, to enforce consistent security policies across the hybrid cloud, and to automate virtual machine (VM) discovery to ensure visibility and control over dynamic infrastructure.

1: Balance protection and compliance

Striking a balance between protection and compliance is a huge challenge. Sometimes, it’s all about discouraging threat actors by making them invest more time, energy, and resources than they first estimated into breaching the organization. Making attackers go through several layers of defenses means they could slip up at some point and trigger an alert before reaching the organization’s crown jewels.

Recent data breaches should push leaders to think beyond compliance. Besides risking more fines, they risk their reputation as well. Compliance regulations tend to be treated as a bare-minimum security baseline. Thorough protection, however, involves deploying multiple security layers designed to help IT and security teams streamline operations, increase visibility, and accelerate detection of threats before a full-blown breach occurs.

2: Integrate security solutions for on-premise and cloud workloads

Finding the right security solution to seamlessly integrate with both on-premise and cloud workloads without impacting consolidation ratios, affecting performance or creating manageability issues is also a challenge. Traditional security solutions can, at best, offer separate products for on-premise and cloud workloads, which still run the risk of creating visibility and management issues. At worst, the same traditional security solution is deployed on all workloads – cloud and local – creating serious performance issues for the latter. It's important for organizations to adopt a security solution built to automatically mold its security agent to the job at hand, based on whether the workload is on-premise or in the cloud, without impacting performance or compromising on security capabilities.

3: Deploy consistent security policies across the hybrid cloud

To address this challenge, organizations need to find security solutions that can adapt their security agents to the type of environment they are deployed in. For cloud environments, solutions must be agile enough to leverage all the benefits of the cloud without sacrificing security, while for traditional on-premise environments they must be versatile enough to enable productivity and mobility. Organizations must understand that deploying security policies across hybrid infrastructures can be troublesome, especially without a centralized security console that can seamlessly relay those policies across all endpoints and workloads. It's important to automatically apply group security policies to newly spawned virtual machines based on their role within the infrastructure. For instance, newly spawned virtual servers should immediately adhere to group-specific policies, and the same goes for newly spawned VDI instances, and so on. Otherwise, the consequences could be disastrous: those machines would be left unprotected against threats and attackers for as long as they're operational.

4: Automate VM discovery

Automated VM discovery is the whole point of an integrated security platform, as security policies can automatically be applied based on the type of machine.

Organizations should consider adopting security solutions that can automate VM discovery and apply security policies accordingly, without forcing IT and security teams to push policies to newly instanced workloads manually.
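
As a rough illustration of discovery feeding policy assignment, the sketch below lists EC2 instances with boto3 and maps a hypothetical "role" tag to a policy group; the tag name, policy names, and the final "apply" step are assumptions standing in for whatever API a given security platform exposes:

```python
import boto3  # assumes AWS credentials are configured in the environment

# Hypothetical mapping from a workload's role tag to the security policy
# group it should receive in the security console.
POLICY_FOR_ROLE = {"web": "dmz-policy", "db": "restricted-policy"}

ec2 = boto3.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        role = tags.get("role")
        policy = POLICY_FOR_ROLE.get(role, "default-policy")
        # In a real deployment this decision would be pushed to the security
        # platform's own API; here we only report it.
        print(f"{instance['InstanceId']}: role={role!r} -> apply {policy}")
```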

Considering the hybrid cloud's flexibility in terms of endpoints (physical and virtual) and infrastructure (on-premise and in the cloud), it's important that the security solution embraces the same elasticity and enables organizations to fully embrace the benefits of these infrastructures without sacrificing performance, usability or security.

5: Maintain visibility and control over dynamic infrastructure

In the context of adopting a mobility- and cloud-first approach, it has become increasingly difficult for IT and security teams to view an organization’s security posture, especially since traditional security solutions don’t offer single-pane-of-glass visibility across all endpoints.

Integrating a complete security platform can help IT and security teams save time, while its security automation features help them identify signs of a data breach more quickly and accurately.

Addressing cloud security challenges is constant, ongoing work that requires IT and security teams to be vigilant while adopting the right security and automation tools to take some of the operational burden off their shoulders. Working together to find the right solutions ensures both teams get what they need, and the collaboration of these two focused teams ensures the entire infrastructure is protected, whether workloads run on-premise or in the cloud.




Combatting DNS Hijacking Requires Improved DNS Security | IT Infrastructure Advice, Discussion, Community


Global DNS hijacking is becoming an increasingly troublesome security threat for the entire Internet. Calls for secure domain authentication using DNSSEC specifications have been ongoing for years. But while added security is a step in the right direction, we must all understand that a huge portion of our Internet security lies at the feet of a single, private entity called the Internet Corporation for Assigned Names and Numbers (ICANN).

The latest cry for improved domain name system (DNS) security was sent out in late February — and it came directly from ICANN. Those of us in the field of IT security fully understand the security concerns surrounding DNS. Like most early networking mechanisms, the first iterations of DNS contained no security safeguards. Instead, DNS was simply built as a hierarchical, distributed database to match a hostname (such as networkcomputing.com) to the unique IP address that computer networks use to communicate. The concern is that without the necessary security protections in place, DNS records can be intentionally or unintentionally altered to send people to the wrong destination. And if done properly, a session can be hijacked without the end user ever knowing it.

Moves to enforce DNSSEC are a great way to secure the various DNS servers on the Internet that are managed by various governments, corporations and service providers. DNSSEC authentication helps solidify the integrity of the lower branches of the DNS hierarchy tree. In other words, it helps verify that a compromised DNS server won't send you to a hijacked server when you point a browser at a specific domain name. That said, this security only goes so far up the tree — and it ends at the very top, where ICANN resides. ICANN controls all the top-level domains (TLDs) we're familiar with, including .com, .net and .org. It also controls TLDs for governments and countries, including .gov, .eu and .cn. Any changes at this level – and any security enforced – are made at the organization's sole discretion.
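
For a concrete sense of what DNSSEC validation looks like from the client side, here is a small sketch using the dnspython library (2.x): it sets the EDNS DO bit to request DNSSEC processing and then checks whether the recursive resolver set the AD (authenticated data) flag, which it only does if it validated the answer. Whether the flag comes back depends entirely on the resolver you query, so treat this as an illustration rather than a complete validator:

```python
import dns.flags
import dns.resolver  # dnspython 2.x

resolver = dns.resolver.Resolver()
# Request DNSSEC processing by setting the DO bit via EDNS.
resolver.use_edns(0, dns.flags.DO, 1232)

answer = resolver.resolve("icann.org", "A")
addresses = [r.address for r in answer]

# The AD flag is only set if the upstream recursive resolver validated the answer.
validated = bool(answer.response.flags & dns.flags.AD)
print(f"icann.org A records: {addresses}")
print(f"DNSSEC-validated by resolver: {validated}")
```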

We’re talking about a massive amount of responsibility – while being run as a private non-profit organization. So, how did it get this way?

ICANN from the beginning

In 1983, a man named Jon Postel established the Internet Assigned Numbers Authority (IANA) at the University of Southern California. At that time, Mr. Postel created the IANA while USC was under contract with the Defense Advanced Research Projects Agency (DARPA). Until 1998, IANA — and all TLD control — was managed within the U.S. government itself. As the popularity of the Internet exploded in the mid-1990s from a consumer and commercial perspective, the IANA merged with several other Internet-governance groups to form ICANN. The new non-profit was then contracted to manage TLDs for the U.S. National Telecommunications and Information Administration (NTIA) from the time it formed until October 2016, at which point the U.S. government relinquished control to ICANN. With the United States government out of the picture, ICANN now considers itself a global community that supports what it calls a vision of "one world, one Internet."

Now that the Internet is indeed a global network, some conclude that the decision to remove U.S. control over TLDs was the correct one. Others feel that a compromised ICANN could quickly become a national security threat. Either way, as users of the free and global Internet, we must ensure that the necessary checks and balances are in place so that ICANN never becomes corrupted by groups or governments. In other words, we need protocols and transparency in place so we can all "watch the watchers."


