
The Router’s Obstacle-Strewn Route to Home IoT Security | Software


It has become conventional wisdom that no information security conference goes by without a presentation on the abysmal state of Internet of Things security. While that is a boon for researchers looking to make a name for themselves, the sorry state of affairs is no benefit to anyone who owns a connected device.

IoT device owners aren’t the only ones fed up, though. Right behind them is Eldridge Alexander, manager of Duo Labs at Duo Security. Even better, he has a plan, and the experience to lend it some credibility.

Before assuming his current role at Duo Security, Alexander held various IT posts at Google and Cloudflare. For him, the through-line tying together his past and present IT work is the security gain that comes from aligning all of a network’s security controls with the principle of zero-trust.

“I’ve basically been living and breathing zero-trust for the last several years,” Alexander told LinuxInsider.

Simply put, “zero-trust” is the idea that, to the furthest extent possible, devices should not be assumed to be secure, and should be treated accordingly. There are many ways zero-trust can manifest, as it is not so much a singular technique as a guiding principle, but the aim is to leave yourself as invulnerable as possible to the compromise of any one device.

A recurring theme among his past few employers, this understandably has left its mark on Alexander, to the point where it positively permeates his plan for IoT security on home networks. His zeal for zero-trust comes to home networks at just the right time.

Although consumer IoT adoption has been accelerating, zero-trust has yet to factor into most consumer networking tech, Alexander observed, and we’re getting to the point where we can’t afford for it not to.

“Investigating not really new threats but increased amount of threats in IoT and home networks, I’ve been really interested in seeing how we could apply some of these very enterprise-focused principles and philosophies to home networks,” he noted.

Network Segmentation

In Alexander’s home IoT security schema, which he unveiled at Chicago’s THOTCON hacking conference this spring, zero-trust chiefly takes the form of network segmentation, a practice enterprise networks have long relied on.

In particular, he advocates for router manufacturers to provide a way for home users to create two separate SSIDs (one for each segment) either automatically or with a simple user-driven GUI, akin to the one already included for basic network provisioning (think your 192.168.1.1 Web GUI).

One would be the exclusive host for desktop and mobile end-user devices, while the other would contain only the home’s IoT devices, and never the twain shall meet.

Critically, Alexander’s solution largely bypasses the IoT manufacturers themselves, which is by design. It’s not because IoT manufacturers should be exempted from improving their development practices — on the contrary, they should be expected to do their part. It’s because they haven’t proven able to move fast enough to meet consumer security needs.

“My thoughts and talk here is kind of in response to our current state of the world, and my expectations of any hope for the IoT manufacturers is long term, whereas for router manufacturers and home network equipment it is more short term,” he said.

Router manufacturers have been much more responsive to consumer security needs, in Alexander’s view. However, anyone who has ever tried updating router firmware can point, as a counterpoint, to the minimal attention those incremental patches often receive from developers.

Aside from that issue, router manufacturers typically integrate new features like updated 802.11 and WPA specifications fairly quickly, if for no other reason than to give consumers the latest and greatest tech.

“I think a lot of [router] companies are going to be open to implementing good, secure things, because they know as well as the security community does … that these IoT devices aren’t going to get better, and these are going to be threats to our networks,” Alexander said.

So how would home routers actually implement network segmentation in practice? According to Alexander’s vision, unless confident consumers wanted to strike out on their own and tackle advanced configuration options, the router simply would establish two SSIDs at setup. In describing this scenario, he dubbed the SSIDs “Eldridge” and “Eldridge IoT,” along the lines of the more traditional “Home” and “Home-Guest” convention.

The two SSIDs are just the initial and most visible (to the consumer) part of the structure. The real power comes from deploying a separate VLAN behind each SSID. The one containing the IoT devices, “Eldridge IoT” in this case, would not allow devices on it to send any packets to the primary VLAN (on “Eldridge”).

Meanwhile, the primary VLAN either would be allowed to communicate with the IoT VLAN directly or, preferably, would relay commands through an IoT configuration and management service on the router itself. This latter management service also could take care of basic IoT device setup to obviate as much direct user intervention as possible.

The router “would also spin up an app service such as Mozilla Web Things or Home Assistant, or something custom by the vendor, and it would make that be the proxy gateway,” Alexander said. “You would rarely need to actually talk from the primary Eldridge VLAN over into the Eldridge IoT VLAN. You would actually just talk to the Web interface that would then communicate over to the IoT VLAN on your behalf.”

By creating a distinct VLAN exclusively for IoT devices, this configuration would insulate home users’ laptops, smartphones, and other sensitive devices on the primary VLAN from the compromise of one of their IoT devices. Any rogue IoT device would be blocked from sending packets to the primary VLAN at the data link layer of the OSI model, which it should have no easy way to circumvent.
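The VLANs themselves provide the layer 2 separation; in practice, a router would also need a firewall rule to keep traffic from being forwarded between the two segments. As a rough sketch of what that enforcement could look like, the snippet below generates an nftables ruleset that drops anything the IoT VLAN tries to send toward the primary VLAN while still permitting replies to connections the primary side (or the router’s proxy service) initiates. The interface names, VLAN numbers, and the idea of templating the rules from Python are assumptions for illustration, not details from Alexander’s talk.

```python
# Sketch: emit an nftables ruleset isolating a hypothetical "Eldridge IoT"
# VLAN from the primary "Eldridge" VLAN. Interface names are assumptions.
PRIMARY_IF = "br-lan.10"  # primary VLAN interface (assumed)
IOT_IF = "br-lan.20"      # IoT VLAN interface (assumed)

RULESET = f"""
table inet iot_isolation {{
    chain forward {{
        type filter hook forward priority 0; policy accept;

        # Allow replies to connections that the primary VLAN (or the
        # router's proxy/management service) opened toward IoT devices.
        iifname "{IOT_IF}" oifname "{PRIMARY_IF}" ct state established,related accept

        # Drop everything else the IoT VLAN sends toward the primary VLAN,
        # so a compromised device cannot reach laptops or phones.
        iifname "{IOT_IF}" oifname "{PRIMARY_IF}" drop
    }}
}}
"""

if __name__ == "__main__":
    # Router firmware would load this with something like `nft -f -`;
    # here we simply print the generated ruleset for review.
    print(RULESET)
```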

It would be in router manufacturers’ interests to enable this functionality, said Alexander, since it would offer them a signature feature. If bundled in a home router, it would provide consumers with a security feature that a growing number of them actually would benefit from, all while asking very little of them in the way of technical expertise. It would simply be turned on along with the router.

“I think that’s a valuable incentive to the router manufacturers for distinguishing themselves in a crowded marketplace,” Alexander said. “Between Linksys and Belkin and some of the other manufacturers, there’s not a whole lot of [distinction] between pricing, so offering home assistant and security is a great [distinction] that they could potentially use.”

IoT Security Standards?

There is some promise in these proposed security controls, but it’s doubtful that router manufacturers actually would equip consumer routers to deliver them, said Shawn Davis, director of forensics at Edelson and adjunct industry professor at the Illinois Institute of Technology.

Specifically, almost no home routers on the market support VLAN tagging, he told LinuxInsider, and segmenting IoT devices from the primary network would be impossible without it.

“Most router manufacturers at the consumer level don’t support reading VLAN tags, and most IoT devices don’t support VLAN tagging, unfortunately,” Davis said.

“They both could easily bake in that functionality at the software level. Then, if all IoT manufacturers could agree to tag all IoT devices with a particular VLAN ID, and all consumer routers could agree to route that particular tag straight to the Internet, that could be an easy way for consumers to have all of their IoT devices automatically isolated from their personal devices,” he explained.
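For reference, the VLAN “tag” Davis mentions is just four extra bytes in the Ethernet frame header defined by IEEE 802.1Q: a 0x8100 marker followed by a priority field and a 12-bit VLAN ID. The sketch below builds such a header purely to illustrate the format; the MAC addresses and the VLAN ID of 20 are made-up example values.

```python
import struct

def dot1q_header(dst_mac: bytes, src_mac: bytes, vlan_id: int,
                 ethertype: int = 0x0800, priority: int = 0) -> bytes:
    """Build an Ethernet header carrying an IEEE 802.1Q VLAN tag.

    The tag is four bytes: the TPID 0x8100, then 3 bits of priority,
    1 drop-eligible bit (left at 0 here), and a 12-bit VLAN ID.
    """
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = ((priority & 0x7) << 13) | (vlan_id & 0xFFF)
    return dst_mac + src_mac + struct.pack("!HHH", 0x8100, tci, ethertype)

# Example: an IPv4 frame tagged with a hypothetical "IoT" VLAN ID of 20.
header = dot1q_header(dst_mac=bytes([0xFF] * 6),             # broadcast
                      src_mac=bytes.fromhex("02005e100001"),  # made-up MAC
                      vlan_id=20)
print(header.hex())
```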

VLAN tagging is not restricted by any hardware limitations, as Davis pointed out; it is merely a matter of enabling the software to handle it. But just because manufacturers can switch on VLAN tagging in software doesn’t mean it will be easy to convince them to do so.

It’s unlikely that router manufacturers will be willing to do so for their home router lines and, unsurprisingly, it has to do with money, he said.

“A lot of the major companies produce consumer as well as corporate routers,” Davis noted. “I think they could easily include VLAN functionality in consumer routers but often don’t in order to justify the cost increase for feature-rich business level hardware.”

Most router manufacturers see advanced functionality like VLAN tagging as meriting enterprise pricing due to the careful development that it requires to meet businesses’ stricter operational requirements. On top of that, considering the low average technical literacy of home users, router manufacturers have reason to think that power user features in home routers simply wouldn’t be used, or would be misconfigured.

“Aside from the pricing tier differences,” Davis said, “they also might be thinking, ‘Well, if we bake in VLANs and other enterprise-based features, most consumers might not even know how to configure them, so why even bother?'”

Beyond cajoling router makers to enable VLAN tagging and any other enterprise-grade features needed to realize Alexander’s setup, success also would hinge on each manufacturer’s implementation of the features, both in form and function, Davis emphasized.

“I think each manufacturer would have different flows in their GUIs for setting up isolated VLANs, which wouldn’t be the easiest for consumers to follow when switching across different brands,” he said. “I think if IoT security was more standards-based or automatic by default between devices and routers, overall security in consumer devices would greatly improve.”

Securing both of these concessions from router manufacturers would likely come down to ratifying standards across the industry, whether formally or informally, as Davis sees it.

“The different standards boards could potentially get together and try to pitch an IoT security standard to the router and IoT device manufacturers, and try to get them to include it in their products,” he said. “Aside from a new standard, there could potentially be a consortium where a few of the major manufacturers include advanced IoT device isolation in the hopes that others would follow suit.”

Risk Reduction

Alexander’s THOTCON presentation touched on the 5G connectivity that many predict IoT will integrate, but in exploring the viability of alternatives to his setup, Davis quickly gravitated toward Alexander’s proposal.

Connecting to IoT devices via 5G certainly would keep them away from home users’ laptop- and smartphone-bearing networks, Davis acknowledged, but it would present other challenges. As anyone who has ever browsed Shodan can tell you, always-on devices with seldom-changed default credentials connected directly to the public Internet have their downsides.

“Having your IoT devices isolated with your home-based devices is great, but there is still the possibility of the IoT devices being compromised,” Davis said. “If they are publicly accessible and have default credentials, they could then be used in DDoS attacks.”

Enabling IoT for direct 5G Internet connections doesn’t necessarily improve the security of end-user devices, Davis cautioned. IoT owners will still need to send commands to their IoT devices from their laptops or smartphones, and all 5G does is change the protocol that is employed for doing so.

“IoT devices using cellular 4G or 5G connections are another method of isolation,” he said, “but keep in mind, then the devices are relying even more on ZigBee, Z-Wave or Bluetooth Low Energy to communicate with other IoT devices in a home, which can lead to other security issues within those wireless protocols.”

Indeed, Bluetooth Low Energy has its share of flaws, and at the end of the day a protocol matters less to security than the devices that speak it.

Regardless of how the information security community chooses to proceed, it is constructive to look at other points in the connectivity pipeline, between IoT devices and the users who access them, for places where the attack surface can be reduced. Given how little it would cost to include the necessary software, router manufacturers undoubtedly can do more to protect users where IoT manufacturers largely have not.

“I think a lot of the security burden is falling on the consumer who simply wants to plug in their device and not have to configure any particular security features,” Davis said. “I think the IoT device manufacturers and the consumer router and access point manufacturers can do a lot more to try to automatically secure devices and help consumers secure their networks.”


Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.






Can You Hear Me Now? Staying Connected During a Cybersecurity Incident | Cybersecurity


We all know that communication is important. Anyone who’s ever been married, had a friend, or held a job knows that’s true. While good communication is pretty much universally beneficial, there are times when it’s more so than others. One such time? During a cybersecurity incident.

Incident responders know that communication is paramount. Even a few minutes might mean the difference between closing an issue quickly (thereby minimizing damage) and allowing a risky situation to persist longer than it needs to. In fact, communication — both within the team and externally with different groups — is one of the most important tools at the response team’s disposal.

This is obvious within the response team itself. After all, there is a diversity of knowledge, perspective and background on the team, so the more eyes you have on the data, the more likely someone is to find and highlight pivotal information. It’s also true of external groups.

For example, outside teams can help gather important data to assist in resolution: either technical information about the issue or information about business impacts. Likewise, a clear communication path with decision makers can help “clear the road” when additional budget, access to environments/personnel, or other intervention is required.

What happens when something goes wrong? That is, when communication is impacted during an incident? Things can get hairy very quickly. If you don’t think this is worrisome, consider the past few weeks, which saw two large-scale disruptions impacting Cloudflare (rendering numerous sites inaccessible) and a disruption in Slack. If your team makes use of either cloud-based correspondence tools dependent on Cloudflare (of which there are a few) or Slack itself, those communication challenges are probably still fresh in your mind.

Now imagine that every communication channel you use for normal operations is unavailable. How effective do you think your communication would be under those circumstances?

Alternate Communication Streams

Keep in mind that the middle of an incident is exactly when communications are needed most — but it also is (not coincidentally) the point when they are most likely to be disrupted. A targeted event might render critical resources like email servers or ticketing applications unavailable; a wide-scale malware event might leave the network itself overburdened with traffic, potentially impacting both VoIP and other networked communications.

The point? If you want to be effective, plan ahead for this. Plan for communication failure during an incident just like you would put time into preparedness for the business itself in response to something like a natural disaster. Think through how your incident response team will communicate with other geographic regions, distributed team members, and key resources if an incident should render normal channels nonviable.

In fact, it’s often a good idea to have a few different options for “alternate communication channels” that will allow team members to communicate with each other depending on what is impacted and to what degree.

The specifics of how and what you’ll do will obviously vary depending on the type of organization, your requirements, cultural factors, etc. However, a good way to approach the planning is to think through each of the mechanisms your team uses and come up with at least one backup plan for each.

If your team uses email to communicate, you might investigate external services that are not reliant on internal resources but maintain a reasonable security baseline. For example, you might consider external cloud-based providers like ProtonMail or Hushmail.

If you use VoIP normally, think through whether it makes sense to issue prepaid cellular or satellite phones to team members (or to at least have a few on hand) in the event that voice communications become impacted. In fact, an approach like supplementing voice services with external cellular or satellite in some cases can help provide an alternate network connectivity path at the same time, which could be useful in the event network connectivity is slow or unavailable.
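One lightweight way to capture that planning is a simple fallback matrix the team can keep (and print) alongside its runbooks. The sketch below is purely illustrative: the email and voice entries echo the examples above, while the remaining channels and alternates are assumptions, not recommendations.

```python
# Illustrative incident-communications fallback matrix. The chat and
# ticketing entries are assumed examples, not drawn from the article.
COMM_FALLBACKS = {
    "internal email": ["external encrypted mail (e.g., ProtonMail or Hushmail)"],
    "VoIP / desk phones": ["prepaid cellular phones", "satellite phones"],
    "team chat": ["SMS phone tree"],
    "ticketing system": ["printed runbook plus offline copies on USB media"],
}

def backups_for(channel: str) -> list[str]:
    """Return the planned alternates for a primary channel."""
    return COMM_FALLBACKS.get(channel, ["no fallback documented -- add one"])

if __name__ == "__main__":
    for primary, alternates in COMM_FALLBACKS.items():
        print(f"{primary}: {', '.join(alternates)}")
```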

Planning Routes to Resources and Key External Players

The next thing to think through is how responders will gain access to procedures, tools and data in the event of a disruption. For example, if you maintain documented response procedures and put them all on the network where everyone can find them in a pinch, that’s a great start… but what happens if the network is unavailable or the server they’re stored on is down? If they’re in the cloud, what happens if the cloud provider is impacted by the same problem or otherwise can’t be reached?

Just as you thought through and planned alternatives for how responders will communicate during an event, think through what they’ll need to communicate and how they’ll reach the important resources they’ll need.

In the case of documents, this might mean maintaining a printed book somewhere that they can physically access — in the case of software tools, it might mean keeping copies stored on physical media (a USB drive, CD, etc.) that they can get to should they need it. The specifics will vary, but think it through systematically and prepare a backup plan.

Extend this to key external resources and personnel your team members may need access to as well. This is particularly important when it comes to three things: access to key decision-makers, external PR, and legal.

In the first case, there are situations where you might need to bring in external resources to help support you (for example, law enforcement or forensic specialists). In doing so, waiting for approval from someone who is unavailable because of the outage, or otherwise difficult to reach, puts the organization at risk.

The approver either needs to be immediately reachable (potentially via an alternate communication pathway as described above) or, barring that, have provided approval in advance (for example, preapproval to spend money up to a given spending threshold) so that you’re not stuck waiting around during an event.

The same is true for external communications. You don’t want to find that your key contact points and liaisons (for example, to the press) are MIA when you need them most. Lastly, it is very important to have access to legal counsel, so make sure your alternative communication strategy includes a mechanism for reaching internal or external counsel should you require their input.

The upshot is that the natural human tendency is to overlook the fragility of dependencies unless we examine them systematically. Incident responders need to be able to continue to operate effectively and share information even under challenging conditions.

Putting the time into thinking these things through and coming up with workarounds is important to support these folks in doing their job in the midst of a cybersecurity event.


Ed Moyle is general manager and chief content officer at Prelude Institute. He has been an ECT News Network columnist since 2007. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as author, public speaker and analyst.






Debian 10.0 “Buster” Now Available – Powered By Linux 4.19, GNOME + Wayland



After a long day of preparations, Debian 10.0 “Buster” is now available as planned, with the CD/DVD images having just hit the mirrors.

Debian 10 makes use of the Linux 4.19 kernel, UEFI Secure Boot is finally supported by the distribution, GNOME Shell with Wayland is the default desktop experience, AppArmor is enabled by default, and there is a plethora of updated packages compared to Debian 9 Stretch. The newer and added packages alone make Debian 10 Buster a worthwhile upgrade, from servers to desktops and workstations.

Some other fun facts shared by the Debian Project about the 10.0 Buster release: there are 34 vendors selling Debian DVDs and USB sticks, Debian 10.0 comprises 28,939 source packages, there are 31 official and unofficial ports of Debian to different kernels and hardware architectures, and at least 1,330 people have contributed to Debian as of 2019.

Find Debian 10.0 Buster images on Debian.org. More Debian 10 benchmarks will be coming up on Phoronix soon.


Why CIOs Are Betting on Cloud for Their Modern Data Programs | IT Infrastructure Advice, Discussion, Community


Enterprise infrastructures are changing rapidly as the management and visibility requirements of modern, data-driven applications are outpacing legacy data storage functionality. Gartner confirms that, with artificial intelligence and machine learning driving an explosion in data volume and variety, IT operations are outgrowing existing frameworks. Although insights from today’s vast amounts of structured, semi-structured, and unstructured data can deliver superior value, organizations are currently unable to adequately monitor or analyze this information (and between 60 percent and 73 percent of all data within an enterprise goes unused).

Cloud has been the buzz for more than a decade, and it is now seeing mass adoption among enterprises. Similarly, over the past several years, the size and scope of data pipelines have grown significantly. Just a few years ago, Fortune 500 companies were still experimenting with and testing the efficacy of ‘big data’ as they moved toward digital transformation. Yet today, the majority of those organizations have moved from big data pilots to large-scale, full production workloads with enterprise-level SLAs. Now, these organizations are most interested in maximizing the return on their big data investments and developing new use cases that create new revenue streams.

Data is staying put: Why Big Data needs the cloud

According to recent research from Sapio Research, which surveyed more than 300 IT decision makers ranging from directors to the C-suite, enterprises are overwhelmingly embracing the cloud to host their big data programs. As of January of this year, 79% of respondents have data workloads currently running in the cloud, and 83% have a strategy to move existing data applications into the cloud. Why?

Modern data applications create processing workloads that require elastic scaling, meaning compute and storage needs change frequently and independently of each other. The cloud provides the flexibility to accommodate this type of elasticity, making sure computing and storage resources are available to keep data pipelines performing optimally under any circumstances. Many new-generation data applications need to process heavy traffic at certain times yet have little data to process at others – think of social media, video streaming or dating sites. For the many organizations that encounter this kind of fluctuation monthly, weekly, or even daily, the cloud provides an agile, scalable environment that helps future-proof against unpredictable increases in data volume, velocity, and variety.

As an example, e-commerce retailers use data processing and analytics tools to provide targeted, real-time shopping suggestions for customers as well as to analyze their actions and experiences. Every year, these organizations see website traffic spike on major shopping days like Cyber Monday – and in a traditional big data infrastructure, a company would need to deploy physical servers to support this activity. Those servers would likely not be required the other 364 days of the year, resulting in wasted expenditures. With the cloud, however, online retailers have instant access to additional compute and storage resources to accommodate traffic surges, and can scale back down during quieter times. In short, cloud computing avoids the manual configuration and troubleshooting headaches of on-premises infrastructure and saves money by eliminating the need to physically grow it.
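To make the elasticity concrete, here is a minimal sketch of the kind of scaling decision a cloud data pipeline might make: size the compute tier from observed traffic, independently of storage, and shrink it again when the surge passes. The per-node capacity, bounds, and traffic figures are assumptions for illustration only.

```python
import math

# Assumed figures for illustration: how much traffic one compute node
# handles, and the smallest/largest cluster we are willing to run.
NODE_CAPACITY_RPS = 500          # requests per second per node (assumed)
MIN_NODES, MAX_NODES = 2, 200

def desired_nodes(current_rps: float) -> int:
    """Return how many compute nodes the pipeline should run right now."""
    needed = math.ceil(current_rps / NODE_CAPACITY_RPS)
    return max(MIN_NODES, min(MAX_NODES, needed))

# An ordinary shopping day versus a Cyber Monday-style surge:
print(desired_nodes(1_200))   # -> 3 nodes
print(desired_nodes(60_000))  # -> 120 nodes, released again once traffic falls
```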

Lastly, for organizations that handle highly sensitive personal information (think social security numbers, health records, financial details, etc.) and worry about cloud-based data protection, adopting a hybrid cloud model allows enterprises to keep sensitive workloads on-premises while moving other workloads to the cloud. Organizations realize they don’t have to be all in or all out of the cloud. Sapio’s survey revealed that most respondents (56 percent) are embracing a hybrid cloud strategy for this reason.

The rapid increase in data volume and variety is driving organizations to rethink enterprise infrastructures, particularly cloud strategies, and to focus on longer-term data growth, flexibility, and cost savings. Over the next year, we will see an increase in modernized data processing systems, run partially or entirely in the cloud, to support advanced data-driven applications and their emerging use cases.




One Bad App(le) Spoils the Barrel | IT Infrastructure Advice, Discussion, Community


Despite the fact that we’re talking technology, the old proverb “one bad apple spoils the barrel” holds true when discussing app security. Like the very real threat of one ‘bad’ apple rapidly spoiling every other apple in a barrel, one compromised app can lead to a plethora of problems: from mass infection to compromise of other systems, access to even a single app can be devastating.

To wit, most of us are familiar with the ‘casino fish tank’ hack, in which attackers gained access to sensitive data via an innocuous, Internet-connected thermometer app. It was unprotected. As an aquarium enthusiast, I find that sad, given how sensitive reef tanks are to temperature changes. As a technology enthusiast, it makes me cringe, because no app is an island today, and if it’s on your network, it can potentially reach any other app you have running. Like the ones you consider critical to business. That’s why I like to remind everyone that every app is critical when it comes to security.

“Every app” adds up to a significantly large number these days: an enterprise operates 900 apps on average, according to the MuleSoft Connectivity Benchmark 2019. Those are the apples in your barrel, and that’s true whether the barrel is in the cloud or at home, on-prem.

Many of those apps are not protected. In some cases, the reason is a simple oversight. In others, those apps are among the 29% MuleSoft found are connected or integrated, and crafting access policies for them is just more trouble than it’s worth. After all, you have to inventory every app and determine which other apps have a legitimate need to access it. Given an average of 900 apps with 29% connected, that’s 261 apps that need very specific access policies. That’s a lot of work for what most consider very little risk.

That’s when I like to remind folks of the tale of the fish tank. Or bring up an even better-known tale of HVAC systems and their relationship to a POS hack that cost a certain business millions of dollars and the trust of even more customers.

A single app is a risk. The connective tissue known as the network, which today spans data centers, clouds, and even remote and branch offices, enables even the most irrelevant app to become a potential point of attack. With containers continuing to grow like weeds, the risk is multiplied. Because containerized architectures operate on a principle of horizontal (cloned) scalability, a single app with a vulnerability or open access policy can replicate quickly, with each copy offering yet another point of entry into the broader application landscape.

It isn’t just apps and data at risk; it’s your network. We have incredible bandwidth today, especially in the cloud and in the data center, but when coupled with auto-scaling containers, there is a very real risk of a single vulnerable app (container) being exploited in ways that cause it to scale out of control. Bandwidth and resource consumption ensue, and in the cloud they can drive up costs faster than a toddler with uncontrolled in-app purchasing power. In the data center, the traffic can swamp local servers and networks, causing chaos and ultimately outages.

“Lateral” attacks – those launched from an app or system inside a container cluster or other networked environment – are a very real threat. It isn’t enough to protect apps considered critical when every app is critical to the overall security of your data, network, and customers.
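One common guardrail against this kind of lateral movement in containerized environments is a default-deny network policy, so that no pod can talk to another unless a later rule explicitly allows it. The sketch below, with a hypothetical namespace name and offered as an illustration rather than a prescription, emits such a Kubernetes NetworkPolicy as JSON.

```python
import json

def default_deny_policy(namespace: str) -> dict:
    """Build a NetworkPolicy that blocks all pod ingress and egress in a
    namespace until more specific allow rules are added."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},                     # matches every pod
            "policyTypes": ["Ingress", "Egress"],  # deny both directions
        },
    }

if __name__ == "__main__":
    # Review, then apply with: kubectl apply -f <file>
    print(json.dumps(default_deny_policy("shop-frontend"), indent=2))
```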

When considering what apps to protect, it’s no longer enough to simply use the sensitivity of data or business criticality as primary factors. It’s important to consider what other resources and apps can be reached by someone who gains access to that unassuming fish tank app.

 


