Tag Archives: hosting

Mirai is Back and Tougher than Before


Mirai, the highly disruptive malware strain that got its name from a 2011 Japanese TV show, is back on the beat and even “better” than before. Programmers have modified the original botnet beast, and it’s now screeching its way through enterprise-level Internet of Things (IoT) devices.

The original Mirai crash-landed in 2016. A sophisticated piece of malware programming, it snatched control of networked devices, largely by scanning for gadgets still using factory-default login credentials, and assimilated them into a ferocious botnet. Even low-level programmers were able to access thousands of gadgets and computers and orchestrate distributed denial of service (DDoS) attacks. ADSL modems, routers, and network cameras proved most vulnerable to the well-engineered strain.

Mirai: A DDoS powerhouse

Ultimately, Mirai played a central role in several infamous DDoS attacks against high-profile targets, including the French hosting company OVH.com, the website of security journalist Brian Krebs, and DNS provider Dyn, the last of which crippled popular sites like Reddit, GitHub, Airbnb, and Netflix for a period. Rutgers University and the African country of Liberia also suffered under the malware’s grip.

And for months, Mirai’s author remained anonymous. Eventually, the malware entered the halls of hacker infamy. James Ferraro, an electronic composer and musician, even name-checked the notorious botnet on his 2018 album “Four Pieces for Mirai.”

However, in 2017, Krebs revealed his suspicion that a programmer going by the alias Anna-senpai (real name: Paras Jha) penned Mirai. A student at Rutgers with a dorm-room business, Jha initially denied the charges. Then the FBI got involved, and on December 13, 2017, Jha and two other people pleaded guilty to criminal charges related to the Mirai botnet attacks. Ultimately, a judge sentenced Jha to six months of home confinement and ordered him to pay $8.6 million in restitution.

Mirai is back and more dangerous

Before Jha and his co-conspirators ever faced a judge, Mirai’s source code had found its way online, and like-minded programmers took up the mantle. The result: new Mirai strains that can weasel their way into enterprise IoT devices and make use of all that business bandwidth, which could, theoretically, result in an attack of historic proportions.

In the fall of 2018, researcher Matthew Bing explained in a blog post:

“Like many IoT devices, unpatched Linux servers linger on the network and are being abused at scale by attackers sending exploits to every vulnerable server they can find. [We have] been monitoring exploit attempts for the Hadoop YARN vulnerability in our honeypot network and found a familiar, but surprising payload – Mirai.”

Vulnerable devices

According to Kaspersky Lab, second-generation Mirai strains account for about 21 percent of all IoT device infections. Additionally, the latest versions are even more flexible than the original and can exploit a wider range of targets, including enterprise-class controllers, wireless presentation systems, and digital signage. Analysts warn that the following devices are particularly vulnerable:

  • DLink DCS-930L network video cameras;
  • Netgear WG102, WG103, WN604, WNDAP350, WNDAP360, WNAP320, WNAP210, WNDAP660, WNDAP620 devices;
  • Netgear DGN2200 N300 Wireless ADSL2+ modem routers;
  • Netgear Prosafe WC9500, WC7600, WC7520 wireless controllers;
  • WePresent WiPG-1000 wireless presentation systems;
  • LG Supersign TVs;
  • DLink DIR-645, DIR-815 routers; and
  • Zyxel P660HN-T routers.

Many security experts strongly suspect that Industrial IoT devices may now also be vulnerable.

Guarding Against a Mirai Infection

Now that you know what Mirai is, you’re probably wondering: What measures should be taken to prevent infection?

Security researchers and engineers broadly agree that IT departments should take the following steps (a minimal inventory sketch follows the list):

  • Take inventory of all IoT devices connected to their networks
  • Change default passwords across the board
  • Ensure that every device connected to the Internet is up-to-date on patches
  • Create a preventive strategy that includes firewalls, VPNs, and antivirus and anti-malware software
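
To make the first two steps concrete, here is a minimal sketch, written in plain Python with only the standard library, that sweeps a subnet for devices still exposing Telnet, the service the original Mirai brute-forced with factory-default credentials. The subnet address is an assumption; adjust it to your own network, and only scan networks you administer.

```python
# inventory_telnet.py -- illustrative sketch: flag LAN devices that still expose
# Telnet (port 23), the service Mirai brute-forces with default credentials.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"   # assumption: adjust to your own network
TELNET_PORT = 23
TIMEOUT_S = 0.5

def telnet_open(host: str) -> bool:
    """Return True if a TCP connection to port 23 succeeds within the timeout."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    exposed = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()
               if telnet_open(str(ip))]
    print("Devices with Telnet open (verify and change any default credentials):")
    for ip in exposed:
        print("  ", ip)
```

Anything the sweep turns up should be checked against the device inventory, given a unique password, and patched.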

It may even be worth the investment to bring in a third-party expert to ensure your system is locked down properly. Companies that don’t have an in-house IT department should definitely summon a security professional for a threat of Mirai’s magnitude.

Businesses aren’t the only ones who must worry about Mirai. Every individual with a home network should also take measures to protect against the malware. Many home routers ship with default administrative credentials that hackers can easily exploit. Making a network unattractive to Mirai-wielding ne’er-do-wells starts with changing those default credentials.

Online privacy concerns and compliance

Malware is part of an ever-expanding landscape of online privacy concerns. And as legislation grows up around technological advancements, businesses need to be more cognizant of the intersection between data safekeeping and government breach regulations.

For example, did you know that in many jurisdictions, under certain circumstances, companies can be held legally and financially responsible for data breaches? So be sure to take reasonable steps to shield your company from liability in the event of an attack.

The bottom line

Everyone needs to be aware of the threat that Mirai and its malware spawn present. Get your network shored up sooner rather than later, because the next big Mirai-rooted attack could be even more disruptive than the 2016 outbreaks.

Read more Network Computing security-related articles:

Four Tips to Worsen Your Network Security

The Missing Piece in Cloud App Security

Five Steps to Address Cloud Security Challenges

 

 




Why Cloud-based DCIM is not Just for Data Centers


Just as technology and its use are evolving at a tremendous pace, the physical infrastructure which supports IT equipment is also being transformed to support these advances. There are some significant trends driving new approaches to the way technology is being deployed, but there are also important ramifications for the way that the basics – power, cooling, space – have to be provisioned and, more importantly, managed.

Firstly, a massive shift towards hybrid infrastructure is underway, says Gartner. The analyst firm predicts that by 2020, cloud, hosting, and traditional infrastructure services will be on a par in terms of spending. This follows earlier research indicating an increase in the use of hybrid infrastructure services. As companies have placed an increasing proportion of their IT load into outsourced data center services and cloud, both the importance and the proliferation of distributed IT environments have grown.

Secondly, the IoT, or more specifically the Industrial IoT, has quietly been on the rise for a couple of decades. Industrial manufacturing and processing have long used data to stay competitive and profitable, but companies must continually strive to optimize efficiency and productivity. The answer is increasingly being sought through more intelligent and more automated decision-making, most of it data-driven, with the data almost exclusively gathered and processed outside traditional data center facilities.

Thirdly, rapidly developing applications such as gaming and content streaming, along with emerging uses like autonomous vehicles, are sensitive to both latency and bandwidth limitations. Closing the physical distance between data sources, processing, and use is the pragmatic solution, but it also means that centralized data centers are not the answer. Most of the demand for these sorts of services is where large numbers of people reside, exactly where contested power, space, and connectivity add unacceptable cost for large facility operations.

The rise of distributed IT facilities and edge data centers

In each of these examples – and there are more – IT equipment has to be run efficiently and reliably. Today there’s little argument with the fact that the best way to enable this from an infrastructure point of view is within a data center. Furthermore, the complexity of environments and the business criticality of many applications means that data center-style management practices need to be implemented in order to ensure that uptime requirements are met. And yet, data centers per se only partially provide the answer, because distributed IT environments are becoming an increasingly vital part of the mix.

The key challenges that need to be resolved where multiple edge and IT facilities are operated in diverse locations include visibility, availability, security, and automation, functions which DCIM already has a major role in fulfilling for mainstream data centers. You could also add human resources to the list, because most data center operations, including service and maintenance, are delivered by small, focused professional teams. Add the complication of distributed locations and you have a recipe for having the wrong people in the wrong place at the wrong time.

Cloud-based DCIM answers the need for managing Edge Computing infrastructure

DCIM deployment in any network can be both complex and potentially high cost (whether delivered using on-premise or as-a-service models). By contrast, cloud-based DCIM, or DMaaS (Data Center Management-as-a-Service), overcomes this initial inertia to offer a practical solution for the challenges being posed. Solutions such as Schneider Electric EcoStruxure IT enable physical infrastructure in distributed environments to be managed remotely for efficiency and availability using no more than a smartphone.


DMaaS combines simplified installation and a subscription-based approach coupled with a secure connection to cloud analytics to deliver smart and actionable insights for the optimization of any server room, wiring closet or IT facility. This means that wherever data is being processed, stored or transmitted, physical infrastructure can be managed proactively for assured uptime and Certainty in a Connected World.
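
To make the pattern concrete, here is a minimal sketch of the kind of lightweight agent a DMaaS model implies: it samples local power and temperature readings and pushes them over HTTPS to a cloud analytics endpoint. This is not Schneider Electric's actual API; the endpoint URL, token, and sensor-reading function are assumptions for illustration.

```python
# dmaas_agent.py -- illustrative only: a tiny telemetry agent in the spirit of
# cloud-based DCIM/DMaaS. Endpoint, token, and sensor reads are hypothetical.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://dcim.example.com/api/v1/telemetry"  # hypothetical
API_TOKEN = "replace-with-site-token"                          # hypothetical

def read_sensors() -> dict:
    """Stand-in for real sensor polling (PDU power draw, rack temperature)."""
    return {"site": "wiring-closet-3", "power_w": 412.0, "temp_c": 27.5,
            "ts": time.time()}

def push(reading: dict) -> int:
    """POST one reading to the cloud analytics service and return the HTTP status."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        push(read_sensors())   # the cloud side handles analytics and alerting
        time.sleep(60)         # one reading per minute
```

The point of the sketch is the division of labor: the remote site only collects and forwards readings, while the heavy lifting (trending, alerting, capacity analytics) happens in the cloud service.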

Read this blog post to find out more about the appeal of cloud-based data center monitoring, or download our free white paper, “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge.”

 




LVFS Could Be Hosting 10k+ Firmware Files By End Of 2019


HARDWARE --

LVFS, the Linux Vendor Firmware Service that pairs with fwupd to offer firmware/BIOS updates to Linux users, could be offering more than ten thousand distinct firmware files before the end of the calendar year.

Richard Hughes of Red Hat who has been leading Fwupd/LVFS development has been quite busy as of late. In addition to courting more hardware vendors, eyeing the enterprise, becoming a Linux Foundation project, and hitting a goal of serving more than 500,000 firmware files to Linux users in a single month, this year they are on a trajectory to be offering more than ten thousand different firmware files.

Hughes noted in a mailing list post that they have grown from dozens of firmware files to thousands, and expect “tens of thousands of files before the year is finished.”

That’s quite an ambitious goal and we’ll certainly be monitoring its progress. This goal was mentioned as part of some shell / user experience improvements to the LVFS given the growing number of firmware offerings.


It’s Time To Vote On Whether FreeDesktop.org Will Formally Hook Up With X.Org


X.ORG --

While X.Org and FreeDesktop.org are already closely related, administered by many of the same people, with FreeDesktop.org providing the hosting for much of the infrastructure, there aren't many formalities around FreeDesktop.org, and the X.Org Foundation doesn't formally have control of it. But there's now a vote on whether the X.Org Foundation will formally take on FreeDesktop.org.

For months there has been talk of FreeDesktop.org joining forces with the X.Org Foundation, given the significant overlap, with most X.Org resources tied to FreeDesktop.org. But FreeDesktop.org also hosts Git repositories for projects outside that umbrella, including various small/personal projects, LibreOffice, Plymouth, GStreamer, and other (mostly desktop) open-source software.

For this year’s X.Org Foundation elections, there is a vote on whether to add “Support free and open source projects through the freedesktop.org infrastructure. This includes, but is not limited to: Administering and providing project hosting services.” to the foundation’s bylaws.

Little will change in practice; the vote simply formalizes the foundation's recognition and support of FreeDesktop.org.

After several failed starts due to communication issues and other problems, the 2019 X.Org Foundation elections are underway and open to all current X.Org members. Details on those running for the board can be found via the Wiki.

Current members can vote via members.x.org. Because this is a bylaws change, a supermajority is needed for it to pass, and reaching that threshold has sometimes been a problem in past elections.


Five Steps to Address Cloud Security Challenges


Today’s interconnected world relies on data accessibility from anywhere, at any time, on any device. The speed and agility that come with hosting services and applications in the cloud are central to that interconnected world. These inherent benefits have compelled organizations to migrate some or all of their applications or infrastructure to the cloud. In fact, some industry experts estimate that up to 83 percent of enterprise workloads will migrate to the cloud by 2020.

While the cloud may offer significant benefits, organizations need to be aware of the security challenges when planning a cloud-first strategy. Some of those challenges involve not only protection and compliance but also operational considerations, such as the ability to integrate security solutions for on-premise and cloud workloads, to enforce consistent security policies across the hybrid cloud, and to automate virtual machine (VM) discovery to ensure visibility and control over dynamic infrastructure.

1: Balance protection and compliance

Striking a balance between protection and compliance is a huge challenge. Sometimes, it’s all about discouraging threat actors by making them invest more time, energy, and resources than they first estimated into breaching the organization. Making attackers go through several layers of defenses means they could slip up at some point and trigger an alert before reaching the organization’s crown jewels.

Recent data breaches should push leaders into thinking beyond compliance. Besides risking fines, they risk their reputation as well. Compliance regulations tend to be treated as a bare-minimum security baseline. Thorough protection, however, involves deploying multiple security layers designed to help IT and security teams streamline operations, increase visibility, and accelerate detection of threats before a full-blown breach occurs.

2: Integrate security solutions for on-premise and cloud workloads

Finding the right security solution to seamlessly integrate with both on-premise and cloud workloads without hurting consolidation ratios, affecting performance, or creating manageability issues is also a challenge. Traditional security solutions can, at best, offer separate products for on-premise and cloud workloads, which still risks creating visibility and management issues. At worst, the same traditional security solution is deployed on all workloads, cloud and local, creating serious performance issues for the latter. It’s important for organizations to adopt a security solution built to automatically adapt its security agent to the job at hand, based on whether the workload is on-premise or in the cloud, without impacting performance or compromising on security capabilities.

3: Deploy consistent security policies across the hybrid cloud

To address this challenge, organizations need to find security solutions that can adapt their security agents to the type of environment they are deployed in. For cloud environments, solutions must be agile enough to leverage all the benefits of the cloud without sacrificing security, while for traditional on-premise environments they must be versatile enough to enable productivity and mobility. Organizations must understand that deploying security policies across hybrid infrastructures can be troublesome, especially without a centralized security console that can seamlessly relay those policies to all endpoints and workloads. It’s important to automatically apply group security policies to newly spawned virtual machines based on their role within the infrastructure. For instance, newly spawned virtual servers should immediately adhere to group-specific policies, as should newly spawned VDI instances, and so on; a simple role-to-policy mapping of the sort sketched below illustrates the idea. Otherwise, those machines would be left unprotected against threats and attackers for as long as they’re operational.
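
As an illustration only, the following sketch shows the kind of role-to-policy lookup such automation relies on. The policy names, roles, and VM record structure are hypothetical; a real platform would apply the chosen policy through its management console or API.

```python
# role_policy.py -- illustrative sketch: assign a group security policy to a
# newly spawned VM based on its role tag. Names here are hypothetical.
from dataclasses import dataclass

ROLE_POLICIES = {
    "web-server": "dmz-hardened",
    "db-server":  "restricted-internal",
    "vdi":        "user-workstation",
}
DEFAULT_POLICY = "quarantine-until-classified"  # fail safe, not wide open

@dataclass
class VirtualMachine:
    name: str
    role: str

def assign_policy(vm: VirtualMachine) -> str:
    """Return the group policy a new VM should inherit the moment it spawns."""
    return ROLE_POLICIES.get(vm.role, DEFAULT_POLICY)

if __name__ == "__main__":
    for vm in [VirtualMachine("web-07", "web-server"),
               VirtualMachine("unknown-01", "batch-worker")]:
        print(f"{vm.name}: apply policy '{assign_policy(vm)}'")
```

Note the design choice: an unrecognized role falls back to a restrictive default rather than no policy at all, so a new machine is never left unprotected while it waits to be classified.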

4: Automate VM discovery

Automated VM discovery is a core benefit of an integrated security platform, as security policies can be applied automatically based on the type of machine.

Organizations should consider adopting security solutions that can automate VM discovery and apply security policies accordingly, without forcing IT and security teams to push policies to newly instanced workloads manually.

Considering the hybrid cloud’s flexibility in terms of endpoints (physical and virtual) and infrastructure (on-premise and in the cloud), it’s important that the security solution embraces the same elasticity and enables organizations to fully realize the benefits of these infrastructures without sacrificing performance, usability, or security. A discovery sketch along these lines follows.
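
As a hedged example, assuming an AWS environment and the boto3 SDK, the sketch below discovers running VMs and flags any that lack a hypothetical SecurityAgent tag so policies can be pushed to them. An equivalent loop could be written against any other cloud or hypervisor inventory API.

```python
# vm_discovery.py -- illustrative sketch of automated VM discovery, assuming an
# AWS environment and the boto3 SDK. The "SecurityAgent" tag convention is
# hypothetical; a real security platform would track agent status itself.
import boto3

def discover_unprotected(region: str = "us-east-1") -> list[str]:
    """List running EC2 instance IDs lacking a SecurityAgent=installed tag."""
    ec2 = boto3.client("ec2", region_name=region)
    unprotected = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                if tags.get("SecurityAgent") != "installed":
                    unprotected.append(inst["InstanceId"])
    return unprotected

if __name__ == "__main__":
    for instance_id in discover_unprotected():
        print("Needs agent/policy:", instance_id)  # hand off to the security console
```

Run on a schedule (or triggered by instance-launch events), a loop like this is what lets newly instanced workloads receive the right policy without anyone pushing it manually.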

5: Maintain visibility and control over dynamic infrastructure

In the context of adopting a mobility- and cloud-first approach, it has become increasingly difficult for IT and security teams to view an organization’s security posture, especially since traditional security solutions don’t offer single-pane-of-glass visibility across all endpoints.

Integrating a complete security platform can help IT and security teams save time while offering security automation features that help speed up the ability to identify signs of a data breach accurately.

Addressing cloud security challenges is constant, ongoing work that requires IT and security teams to be vigilant while at the same time adopting the right security and automation tools to help take some of the operational burden off their shoulders. Working together to find the right solutions ensures both teams get what they need. The collaboration of these two focused teams ensures the entire infrastructure is protected, regardless of on-premise or cloud workloads.


