
5 Easy Tips for Linux Web Browser Security | Linux.com


If you use your Linux desktop and never open a web browser, you are a special kind of user. For most of us, however, a web browser has become one of the most-used digital tools on the planet. We work, we play, we get news, we interact, we bank… the number of things we do via a web browser far exceeds what we do in local applications. Because of that, we need to be cognizant of how we work with web browsers, and do so with a nod to security. Why? Because there will always be nefarious sites and people attempting to steal information. Considering the sensitive nature of the information we send through our web browsers, it should be obvious why security is of utmost importance.

So, what is a user to do? In this article, I’ll offer a few basic tips, for users of all sorts, to help decrease the chances that your data will end up in the hands of the wrong people. I will be demonstrating on the Firefox web browser, but many of these tips carry over across applications and can be applied to any flavor of web browser.

1. Choose Your Browser Wisely

Although most of these tips apply to most browsers, it is imperative that you select your web browser wisely. One of the more important aspects of browser security is the frequency of updates. New issues are discovered quite frequently and you need to have a web browser that is as up to date as possible. Of major browsers, here is how they rank with updates released in 2017:

  1. Chrome released 8 updates (with Chromium following up with numerous security patches throughout the year).

  2. Firefox released 7 updates.

  3. Edge released 2 updates.

  4. Safari released 1 update (although Apple does release 5-6 security patches yearly).

But even if your browser of choice releases an update every month, that update does you no good if you (as a user) don’t upgrade. This can be problematic with certain Linux distributions. Although many of the more popular flavors of Linux do a good job of keeping web browsers up to date, others do not, so it’s crucial that you manually keep on top of browser updates. If your distribution of choice doesn’t include the latest version of your web browser in its standard repository, you can always download the latest version from the developer’s download page and install it from there.
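For example, assuming you’ve already downloaded Mozilla’s Linux tarball and saved it as firefox.tar.bz2 (the file name and install location here are just placeholders), a minimal manual install into /opt looks something like this:

sudo tar xjf firefox.tar.bz2 -C /opt

sudo ln -s /opt/firefox/firefox /usr/local/bin/firefox

The first command unpacks the browser into /opt/firefox; the second puts the bundled binary on your PATH so it can be launched like a normally installed package.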

If you like to live on the edge, you can always use a beta or daily build version of your browser. Do note that using a daily build or beta version comes with the possibility of unstable software. Say, however, you’re okay with using a daily build of Firefox on an Ubuntu-based distribution. To do that, add the necessary repository with the command:

sudo apt-add-repository ppa:ubuntu-mozilla-daily/ppa

Update apt and install the daily Firefox with the commands:

sudo apt-get update

sudo apt-get install firefox

What’s most important here is to never allow your browser to get far out of date. You want to have the most updated version possible on your desktop. Period. If you fail this one thing, you could be using a browser that is vulnerable to numerous issues.

2. Use A Private Window

Now that you have your browser updated, how do you best make use of it? If you happen to be of the really concerned type, you should consider always using a private window. Why? Private browser windows don’t retain your data: no passwords, no cookies, no cache, no history… nothing. The one caveat to browsing through a private window is that (as you probably expect) every time you go back to a website or use a service, you’ll have to re-type your credentials to log in. If you’re serious about browser security, never saving credentials should be your default behavior.
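If you want private browsing to be your default workflow, most browsers can also be started directly in a private window from the command line. With Firefox, for example:

firefox --private-window

Either way, you’ll be typing your credentials by hand each time, which is exactly the point.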

This leads me to a reminder that everyone needs: Make your passwords strong! In fact, at this point in the game, everyone should be using a password manager to store very strong passwords. My password manager of choice is Universal Password Manager.

3. Protect Your Passwords

For some, having to retype those passwords every single time might be too much. So what do you do if you want to protect those passwords while not having to type them constantly? If you use Firefox, there’s a built-in tool called Master Password. With this enabled, none of your browser’s saved passwords are accessible until you correctly type the master password. To set this up, do the following:

  1. Open Firefox.

  2. Click the menu button.

  3. Click Preferences.

  4. In the Preferences window, click Privacy & Security.

  5. In the resulting window, click the checkbox for Use a master password (Figure 1).

  6. When prompted, type and verify your new master password (Figure 2).

  7. Close and reopen Firefox.

4. Know Your Extensions

There are plenty of privacy-focused extensions available for most browsers. What extensions you use will depend upon what you want to focus on. For myself, I choose the following extensions for Firefox:

  • Firefox Multi-Account Containers – Allows you to configure certain sites to open in a containerized tab.

  • Facebook Container – Always opens Facebook in a containerized tab (Firefox Multi-Account Containers is required for this).

  • Avast Online Security – Identifies and blocks known phishing sites and displays a website’s security rating (curated by the Avast community of over 400 million users).

  • Mining Blocker – Blocks all CPU-Crypto Miners before they are loaded.

  • PassFF – Integrates with pass (a Unix password manager) to store credentials safely.

  • Privacy Badger – Automatically learns to block trackers.

  • uBlock Origin – Blocks trackers based on known lists.

Of course, you’ll find plenty more security-focused extensions for most major browsers.

Not every web browser offers extensions. Some, such as Midori, offer a limited number of built-in plugins that can be enabled/disabled (Figure 3). However, you won’t find third-party plugins available for the majority of these lightweight browsers.

5. Virtualize

For those who are concerned about exposing locally stored data to prying eyes, one option is to use a browser only within a virtual machine. To do this, install the likes of VirtualBox, set up a Linux guest, and then run whatever browser you like in the virtual environment. If you then apply the above tips, you can be reasonably confident that your browsing will be far safer.
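On an Ubuntu-based distribution, for example, VirtualBox is typically available straight from the standard repository:

sudo apt-get install virtualbox

Once the guest operating system is installed, do your browsing from inside it and treat the host as off-limits to web traffic.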

The Truth of the Matter

The truth is, if the machine you are working from is on a network, you’re never going to be 100% safe. However, if you use that web browser intelligently, you’ll get more bang for your security buck and be less prone to having data stolen. The silver lining with Linux is that the chances of getting malicious software installed on your machine are exponentially lower than if you were using another platform. Just remember to always use the latest release of your browser, keep your operating system updated, and use caution with the sites you visit.

IT Resume Dos and Don’ts: Formatting for Readability | Developers


In my career as an IT resume writer, I’ve seen a lot of IT resumes cross my desk, and I’d like to share some of the most common formatting problems that I see regularly. Of course, an IT resume requires more than great formatting. It requires well-written, targeted content, and a clear story of career progression. It needs to communicate your unique brand and value proposition.

Still, if the formatting is off, that can derail the rest of the document and prevent your story from being read by the hiring authority.

I’ll start with a few IT resume formatting “don’ts.”

1. Don’t Use Headers

This is an easy fix. Headers and footers made a lot of sense when an IT resume was likely to be read as a printed sheet of paper.

In 2018, how likely is it that a busy hiring authority is going to take the time or the effort to print out the hundreds of resumes that are submitted for every position?

Not terribly.

Your IT resume is going to be read online.

That’s why using a header for your contact information is a bad idea.

It takes a few seconds to click on the header, copy and paste your email and phone number, and then click again in the body of the resume to read the text.

A few seconds doesn’t seem like much, but for someone who is looking through a lot of resumes, every second really does count. A hiring authority who is REALLY busy may just decide it’s too much trouble to get your contact information from the header.

That means your resume may well end up in the “read later” folder.

That’s not a good outcome.

There’s another problem with using the header, related to the one I just discussed.

Headers just look old fashioned. Out of date.

Old fashioned is not the brand you want to present if you’re looking for a job in technology — whether you’re a CIO, an IT director, or a senior developer.

Again, this is an easy fix. Just put your name and contact information in the body of the resume. I suggest using a larger font in bold caps for your name. You want to be certain that your name will stick in the memory of the reader.

2. Don’t Over-Bullet

This is probably the most common mistake I see in the IT resumes that cross my desk.

In my trade, we call it “death by bullets.” The job seeker has bulleted everything.

Everything.

That’s really hard to read. Beyond the fact that it’s just not clear, there’s another big problem with over-bulleting.

To paraphrase The Incredibles, if everything is bulleted, nothing is.

The goal of using bullets — sparingly — is to draw the reader’s eye and attention to your major accomplishments.

If you’ve bulleted everything, the reader doesn’t know what’s critical and what’s not, which defeats the purpose of using bullets in your resume.

In my own work as an IT resume writer, I make a clear distinction between duties and responsibilities and hard, quantifiable accomplishments. I write the duties in paragraph format, and bullet only the accomplishments that demonstrate what my IT resume clients really have delivered.

It’s a clear, straightforward approach that I recommend.

3. Don’t Get Colorful

Happily, this particular problem doesn’t seem as common as it was a few years ago, but every once in a while, I’ll still see a resume with lots of color.

The idea behind that, of course, is to make the resume “eye-catching.”

Rather than catching the reader’s eye, however, a lot of color is just confusing.

“Why is this section blue? Is blue telling me it’s really important? And yellow? Why is this person using yellow? Because it’s mighty hard to read…”

I’m sure you see my point. The colors, rather than giving the reader a map of what to look at first — what to prioritize — just end up looking, well, busy.

That makes your resume harder to read. And if it’s harder to read?

Yeah. As I mentioned above: It’s likely to go into the “read later” folder.

You really don’t want that to happen.

4. Don’t Lead With Education

This is another easy fix, but it’s important.

The only time you want to lead with education is when you’re a new grad. If you’re a professional — whether senior, mid-career or junior — you want to highlight your experience on page one, and not take up that valuable space with your degrees or certifications.

Of course, degrees, training and certifications are important, but they belong at the end of the resume, at the bottom of page two or three.

5. Don’t Use Arial or Times New Roman

I’ll end the “don’ts” with another simple one.

Arial and Times New Roman are, well, so 1990s. Yes, they’re good, clear, readable fonts, which is why they’ve become so popular.

Probably 90 percent of all IT resumes are written in these two fonts. There’s nothing negative in that, but it’s a little boring.

Now, I’m not suggesting you use Comic Sans or Magneto, but there are some great, clean fonts that aren’t as common in the IT resume world.

Personally? I like Calibri for body and Cambria for headings.

So, that gives you a number of things to avoid in formatting your IT resume. I’ll now suggest a few “dos” to concentrate on to ensure that your document is as readable as possible.

1. Keep Things Simple

I’m a strong believer that an IT resume needs to tell a story. The formatting of the document should serve only to clarify that story, and not get in the way.

When the document is finished, take a look. Does the formatting lead your eye to the most important points? Is the formatting clear and clean? Or does it distract from the story you’re trying to tell?

2. Think Mobile

This point gets more important with each passing year. These days, the odds are that the hiring authority will be reading your story on a phone, tablet, or other mobile device.

That’s changed the way I’ve formatted the IT resumes I write for my clients.

I’ve never gone beyond minimal design, but I’ve scaled things back. For example, I used to use shading to draw attention to critical sections of the document.

But now? I think that can be hard to read on a mobile — and readability, to repeat a theme, is the only goal of resume formatting.

3. Use Bold and Italics Sparingly

This point follows directly from the previous one. We don’t want to bold or italicize everything. Bold and italics, used consistently and sparingly, can help signal to the reader what is most important in your IT resume, and provide a framework for a quick read-through.

That enables the hiring authority to get the gist of your career fast, without distracting from a deeper second read.

4. Use Hard Page Breaks

This is pretty simple, but it is important. I always insert hard page breaks in every finished IT resume I write. That helps ensure that the document is going to look consistent across devices and across platforms.

It’s not 100 percent foolproof — Word is a less-than-perfect tool. With hard page breaks, though, the odds are very good that your resume will look the same to each reader — and to the same reader when reviewing the document on different devices. That consistency reinforces the sense of professionalism you’re striving to convey.

5. Write First, Format Later

Professional IT resume writers disagree on this, but I’m going to suggest what I’ve found effective in my practice.

I always write the resume first. I personally use a plain text editor, to make certain that Microsoft Word doesn’t add anything that I’ll have to fight to remove later.

It’s only when I’ve got the text completely finished that I copy and paste into Word, and then add the formatting that I think best supports the client story I’m trying to tell.

If I try to format as I’m writing, the formatting may take over. It’s tempting to insist on keeping the formatting consistent, even when it doesn’t best support the story.

So think about it. I’d strongly recommend writing first, and formatting later, when you’re completely clear on the story you’re trying to tell.

I know that many people struggle with formatting their IT resume, so I hope that these simple ideas will help make the process a little easier and less painful.

Stay tuned for future articles that will dig a bit deeper into the IT resume process, covering content structure, writing style, and branding.


J.M. Auron is a professional resume writer who focuses exclusively on crafting the best possible IT resume for clients from C-level leaders to hands-on IT professionals. When he’s not working, he practices Fujian Shaolin Kung Fu and Sun Style Tai Chi. He also writes detective fiction and the occasional metrical poem.






WhiteSource Rolls Out New Open Source Security Detector | Enterprise


By Jack M. Germain

May 24, 2018 10:24 AM PT

WhiteSource on Tuesday launched its next-generation software composition analysis (SCA) technology, dubbed “Effective Usage Analysis,” with the promise that it can reduce open source vulnerability alerts by 70 percent.

The newly developed technology provides details beyond which components are present in the application: It offers actionable insights into how those components are being used and evaluates their impact on the security of the application.

The new solution shows which vulnerabilities are effective. For instance, it can identify which vulnerabilities get calls from the proprietary code.

It also underscores the impact of open source code on the overall security of the application and shows which vulnerabilities are ineffective. Effective Usage Analysis technology allows security and engineering teams to cut through the noise to enable correct prioritization of threats to the security of their products, according to WhiteSource CEO Rami Sass.

“Prioritization is key for managing time and limited resources. By showing security and engineering teams which vulnerable functionalities are the most critical and require their immediate attention, we are giving them the confidence to plan their operations and optimize remediation,” he said.

The company’s goal is to empower businesses to develop better software by harnessing the power of open source. In its 2017 Software Composition Analysis (SCA) Wave report, Forrester recognized the company as having the best current offering.

WhiteSource’s new Effective Usage Analysis offering addresses an ongoing challenge for open source developers: identifying and correcting security vulnerabilities proactively, instead of watching for or fixing problems after the fact, said Charles King, principal analyst at Pund-IT.

“That should result in applications that are more inherently secure and also improve the efficiency of developers and teams,” he told LinuxInsider. “Effective Usage Analysis appears to be a solid individual solution that is also complementary and additive to WhiteSource’s other open source security offerings.”

Open Source Imperative

As open source usage has increased, so has the number of alerts on open source components with known vulnerabilities. Security teams have become overloaded with security alerts, according to David Habusha, vice president of product at WhiteSource.

“We wanted to help security teams to prioritize the critical vulnerabilities they need to deal with first, and increase the developers’ confidence that the open source vulnerabilities they are being asked to fix are the most pressing issues that are exposing their applications to threats,” he told LinuxInsider.

The current technology in the market is limited to detecting which vulnerable open source components are in your application, he said. It cannot provide any details on how those components are being used, or on the impact of each vulnerable functionality on the security of the application.

The new technology currently supports Java and JavaScript. The company plans to expand its capabilities to include additional programming languages. Effective Usage Analysis is currently in beta testing and will be fully available in June.

How It Works

Effective Usage Analysis promises to cut down open source vulnerabilities alerts dramatically by showing which vulnerabilities are effective (getting calls from the proprietary code that impact the security of the application) and which ones are ineffective.

A WhiteSource internal research study on Java applications found that only 30 percent of reported alerts on open source components with known vulnerabilities originated from effective vulnerabilities and required high prioritization for remediation.

Effective Usage Analysis also will give developers actionable insights for remediating a vulnerability, providing a full trace analysis that pinpoints the path to the vulnerability. It adds an innovative level of resolution for understanding which functionalities are effective.

This approach aims to reduce open source vulnerability alerts and provide actionable insights. It identifies the vulnerabilities’ exact locations in the code to enable faster, more efficient remediation.

A Better Mousetrap

Effective Usage Analysis is an innovative technology representing a radical new approach to effectiveness analysis that may be applied to a variety of use cases, said WhiteSource’s Habusha. SCA tools traditionally identify security vulnerabilities associated with an open source component by matching its calculated digital signature with an entry stored in a specialized database maintained by the SCA vendor.

SCA tools retrieve data for that entry based on reported vulnerabilities in repositories such as the NVD, the U.S. government repository of standards-based vulnerabilities.

“While the traditional approach can identify open source components for which security vulnerabilities are reported, it does not establish if the customer’s proprietary code actually references — explicitly or implicitly — entities reported as vulnerable in such components,” said Habusha.

WhiteSource’s new product is an added component that targets both security professionals and developers. It helps application security professionals prioritize their security alerts and quickly detect the critical problems that demand their immediate attention.

It helps developers by mapping the path from their proprietary code to the vulnerable open source functionality, providing insights into how they are using the vulnerable functionality and how the issues can be fixed.

Different Bait

Effective Usage Analysis employs a new scanning process that includes the following steps:

  • Scanning customer code;
  • Analyzing how the code interacts with open source components;
  • Indicating if reported vulnerabilities are effectively referenced by such code; and
  • Identifying where that happens.

It employs a combination of advanced algorithms, a comprehensive knowledge base, and a fresh new user interface to accomplish those tasks. Effective Usage Analysis enables customers to establish whether reported vulnerabilities constitute a real risk.

“That allows for a significant potential reduction in development efforts and higher development process efficiency,” said Habusha.

Potential Silver Bullet

WhiteSource’s new solution has the potential to be a better detection tool for open source vulnerabilities, suggested Avi Chesla, CTO of Empow Cyber Security. The new detection tools will allow developers to understand the potential risk associated with the vulnerabilities.

The tools “will ultimately motivate developers to fix them before releasing a new version. Or at least release a version with known risks that will allow the users to effectively manage the risks through external security tools and controls,” he told LinuxInsider.

The new approach matters because long-standing vulnerabilities are, and should be, known to the industry, Chesla explained, which gives security tools a better chance of detecting exploitation attempts against them.

Effective Usage Analysis is probably the most important factor because developers are flooded with alerts, or noise. The work of analyzing the noise-to-signal ratio is time-consuming and requires cybersecurity expertise, noted Chesla.

The “true” signals are the alerts that represent a vulnerability that actually can be exploited and lead to a real security breach. The cybersecurity market deals with this issue on a daily basis.

“Security analysts are flooded with logs and alerts coming from security tools and experience a similar challenge to identify which alerts represent a real attack intent in time,” Chesla pointed out.

Equifax Factor

The major vulnerability that compromised Equifax last year sent security experts and software devs scrambling for effective fixes. However, it is often a business decision, rather than a security solution, that most influences software decisions, suggested Ed Price, director of compliance and senior solution architect at Devbridge Group.

“Any tools that make it easier for the engineering team to react and make the code more secure are a value-add,” he told LinuxInsider.

In some cases, the upgrade of a single library, which then cascades down the dependency tree, will create a monumental task that cannot be fixed in a single sprint or a reasonable timeframe, Price added.

“In many cases, the decision is taken out of the hands of the engineering team and business takes on the risk of deploying code without the fixes and living with the risk,” Price said, adding that no tool — open source or otherwise — will change this business decision.

“Typically, this behavior will only change in an organization once an ‘Equifax event’ occurs and there is a penalty in some form to the business,” he noted.

Saving Code Writers’ Faces

WhiteSource’s new tool is another market entry that aims to make sense of the interconnected technologies used in enterprise environments, suggested Chris Roberts, chief security architect at Acalvio.

“The simple fact of the matter is, we willingly use code that others have written, cobbling things together in an ever increasingly complex puzzle of collaborative code bases,” he told LinuxInsider, “and then we wonder why the researchers and criminals can find avenues in. It is good to see someone working hard to address these issues.”

The technologies will help if people both pay attention and learn from the mistakes being made. It is an if/and situation, Roberts said.

The logic is as follows: *If* I find a new tool that helps me understand the millions of lines of code that I have to manage or build as part of a project, *and* I understand that the number of errors per 100 lines is still unacceptable, then a technology that unravels those complexities, dependencies and libraries is going to help, he explained.

“We need to use it as a learning tool and not another crutch or Band-Aid to further mask the garbage we are selling to people,” Roberts said.

Necessary Path

Hackers love open source software security vulnerabilities because they are a road map for exploiting unpatched systems, observed Tae-Jin Kang, CEO of Insignary. Given that the number of vulnerabilities hit a record in 2017, according to the CVE database, finding the vulnerabilities is the best first line of defense.

“Once they are found in the code and patched, then it is appropriate to begin leveraging technologies to deal with higher-order, zero-day issues,” Kang told LinuxInsider.

Organizations for years have looked to push back the day of reckoning with regard to OSS security vulnerabilities. They have been viewed as trivial, while engineering debt has piled up.

“Equifax has been the clearest illustration of what happens when these two trends meet,” said Kang. “With the implementation of GDPR rules, businesses need to get more aggressive about uncovering and patching security vulnerabilities, because the European Union’s penalties have teeth.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.






Can Hackers Crack the Ivory Towers? | Enterprise


Just like leaders in every other field you can imagine, academics have been hard at work studying information security. Most fields aren’t as replete with hackers as information security is, though, and hackers’ contributions are felt much more strongly in the private sector than in academia.

The differing motives and professional cultures of the two groups act as barriers to direct collaboration, noted Anita Nikolich in her “Hacking Academia” presentation at the CypherCon hacking conference recently held in Milwaukee. Nikolich recently finished her term as the program director for cybersecurity at the National Science Foundation’s Division of Advanced Cyberinfrastructure.

For starters, academics and hackers have very distinct incentives.

“The topics of interest tend to be the same — the incentives are very different,” Nikolich said.

“In the academic community, it’s all about getting tenure, and you do that by getting published in a subset of serious journals and speaking at a subset of what they call ‘top conferences,'” she explained. “For the hacker world … it could be to make the world a better place, to fix things, [or] it could be to just break things for fun.”

These differences in motivations lead to differences in perception — particularly in that the hacker community’s more mischievous air discourages academics from associating with them.

“There is still quite a bit of perception that if you bring on a hacker, you’re not going to be able to put boundaries on their activity, and it will harm your reputation as an academic,” Nikolich said.

Deep Rift

The perception problem is something other academics also have observed.

The work of hackers holds promise in bolstering that of academics, noted Massimo DiPierro, a professor at DePaul College of Computing and Digital Media.

Hackers’ findings are edifying even as things stand, he contended, but working side-by-side with one has the potential to damage an academic’s career.

“I think referencing their research is not a problem. I’ve not seen it done much [but] I don’t see that as a problem,” DiPierro said. “Some kind of collaboration with a company is definitely valuable. Having it with a hacker — well, hackers can provide information so we do want that, but we don’t want that person to be labeled as a ‘hacker.'”

Far from actively working with hackers, many academics don’t even want to be seen with them, even at events such as CypherCon, where Nikolich gave her presentation.

“It’s all a matter of reputation. Academics — 90 percent of them have told me they don’t want to be seen at hacker cons,” she said.

Root Causes

While both researchers agreed that their colleagues would gain from incorporating hackers’ discoveries into their own work, they diverged when diagnosing the source of the gulf between the two camps and, to a degree, even on the extent of the rift.

Academic papers have been infamously difficult to get access to, and that is still the case, Nikolich observed.

“Hackers, I found, will definitely read and mine through the academic literature — if they can access it,” she said.

However, it has become easier for hackers to avail themselves of the fruits of academic study, according to DiPierro.

“A specific paper may be behind a paywall, but the results of certain research will be known,” he said.

On the other hand, academia moves too slowly and too conservatively to keep up with the private sector, DiPierro maintained, and with the hackers whose curiosity reinforces it. This limited approach is due in part to the tendency of university researchers to look at protocols in isolation, rather than look at how they are put into practice.

“I think most people who do research do it based on reading documentation, protocol validation, [and] looking for problems in the protocol more than the actual implementation of the protocol,” he said.

Risk Taking

That’s not to say that DiPierro took issue with academia’s model entirely — quite the contrary. One of its strengths is that the results of university studies are disseminated to the public to further advance the field, he pointed out.

Still, there’s no reason academics can’t continue to serve the public interest while broadening the scope of their research to encompass the practical realities of security, in DiPierro’s view.

“I think, in general, industry should learn [public-mindedness] from academia, and academia should learn some of the methodologies of industry, which includes hackers,” DiPierro said. “They should learn to take a little bit more risks and look at more real-life problems.”

Academics could stand to be more adventurous, Nikolich said, but the constant pursuit of tenure is a restraining force.

“I think on the academic side, many of them are very curious, but what they can learn — and some of them have this — is to take a risk,” she suggested. “With the funding agencies and the model that there is now, they are not willing to take risks and try things that might show failure.”

Financial Incentives

While Nikolich and DiPierro might disagree on the root cause of the breakdown between hackers and academic researchers, their approaches to addressing it are closely aligned. One solution is to allow anyone conducting security research to dig deeper into the systems under evaluation.

For Nikolich, that means not only empowering academia to actively test vulnerabilities, but to compensate hackers enough for them to devote themselves to full-time research.

“Academics should be able to do offensive research,” she said. “I think that hackers should have financial incentive, they should be able to get grants — whether it’s from industry, from the private sector, from government — to do their thing.”

In DiPierro’s view, it means freeing researchers, primarily hackers, from the threat of financial or legal consequences for seeking out vulnerabilities for disclosure.

“I would say, first of all, if anything is accessible, it should be accessible,” he said. “If you find something and you think that what you find should not have been accessible, [that] it was a mistake to make it accessible, you [should] have to report it. But the concept of probing for availability of certain information should be legal, because I think it’s a service.”


Jonathan Terrasi has been an ECT News Network columnist since 2017. His main interests are computer security (particularly with the Linux desktop), encryption, and analysis of politics and current affairs. He is a full-time freelance writer and musician. His background includes providing technical commentaries and analyses in articles published by the Chicago Committee to Defend the Bill of Rights.






Docker Data Security Complications


Docker containers provide a real sea change in the way applications are written, distributed and deployed. The aim of containers is to be flexible and allow applications to be spun up on-demand, whenever and wherever they are needed. Of course wherever we use our applications, we need data.

There are two schools of thought on how data should be mapped into containers. The first says we keep the data only in the container; the second says we have persistent data outside of the container that extends past the lifetime of any individual container instance. In either scenario, the issue of security poses big problems for data and container management.

Managing data access

As discussed in my previous blog, there are a number of techniques for assigning storage to a Docker container. Temporary storage capacity, local to the host running the container, can be assigned at container run time. Assigned storage volumes are stored within the host in a specific subdirectory mapped to the application. Volumes can be created at the time the container is instanced, or in advance using the “docker volume” command.
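For example, a named volume can be created up front and then attached when the container is started; the volume name, mount path, and image below are just placeholders:

docker volume create appdata

docker run -d -v appdata:/var/lib/app myapp:latest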

Alternatively, local storage can be mapped as a mount point into the container. In this instance, the “docker run” command specifies a local directory as the mount point within the container. The third option is to use a storage plugin that directly associates external storage with the container.
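To give the second approach a concrete shape, a bind mount is specified directly on the command line, with a host path on the left-hand side of the -v argument instead of a volume name (paths and image are again placeholders):

docker run -d -v /srv/appdata:/var/lib/app myapp:latest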

Open access

In each of the described methods, the Docker framework provides no inherent security model for data. For example, any host directory can be mounted to a container, including sensitive system folders like /etc. It’s possible for a container to then modify those files, as permissions are granted using standard, simple Unix permission settings. An alternative and possibly better practice is to use non-root containers, which involves running containers under a different Linux user ID (UID). This is relatively easy to do; however, it does mean building a methodology to secure each container with either a group ID (GID) or UID, as permissions checking is done on UID/GID numbers.
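In practice, a non-root container is started with the --user flag on “docker run”; the UID:GID pair, volume, and image below are only illustrative:

docker run -d --user 1001:1001 -v appdata:/var/lib/app myapp:latest

Inside that container, processes run as UID 1001, so any files they create on the volume are owned by that UID rather than by root.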

Here we run into another problem: Using non-root containers with local volumes doesn’t work, unless the UID used to run the container has permissions to the /var/lib/docker/volumes directory. Without this, data can’t be accessed or created. Opening up this directory would be a security risk; however, there’s no inherent method to set individual permissions on a per-volume basis without a lot of manual work.
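One piece of that manual work is changing the ownership of an individual volume’s backing directory to match the container’s UID. With the default local volume driver, the data for the hypothetical appdata volume above lives under /var/lib/docker/volumes/appdata/_data, so something like the following opens it up to UID 1001 only:

sudo chown -R 1001:1001 /var/lib/docker/volumes/appdata/_data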

If we look at how external storage has been mounted to a container, many solutions simply present a block device (a LUN) to the host running the container and format a file system onto it. This is then presented into the container as a mount point. At this point, the security on directories and files can be set from within the container itself, reducing some of the issues we’ve discussed. However, if this LUN/volume is reused elsewhere, there are no security controls over how it is mounted or used on other containers, as there is no security model built directly into the container/volume mapping relationship. Everything depends on trusting the commands run on the host.

This is where we have yet another issue: a lack of multi-tenancy. When we run containers, each container instance may run for a separate application. As in traditional storage deployments, storage assigned to containers should have a degree of separation to ensure data can’t be inadvertently or maliciously accessed cross-application. There’s currently no easy way to do this at the host level, other than to trust the orchestration tool running the container and mapping it to data.

Finding a solution

Obviously, some of the issues presented here are Linux/Unix specific. For example, the abstraction of the mount namespace provides different entry points for our data; however, there’s no abstraction of permissions: I can’t map user 1000 to user 1001 without physically updating the ACL (access control list) data associated with each file and directory. Making large-scale ACL changes could potentially impact performance. For local volumes, Docker could easily set the permissions of the directory on the host that represents a new volume to match the UID of the container being started.
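To illustrate the cost, granting one additional UID (say 1002) access to the hypothetical appdata volume from earlier means recursively rewriting the ACL on every file and directory it contains:

sudo setfacl -R -m u:1002:rwX /var/lib/docker/volumes/appdata/_data

On a volume holding millions of files, that walk is exactly the kind of large-scale ACL change that can hurt performance.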

External volumes provide a good opportunity to move away from the permissions structure on the host running containers. However, this means that a mechanism is required to map data on a volume to a known trusted application running in a specific container instance. Remember that containers have no inherent “identification” and can be started and stopped at will. This makes it hard to determine whether any individual container is the owner of a data volume.

Today the main solution is to rely on the orchestration platform that manages the running of the containers themselves. We put the trust into these systems to map volumes and containers accurately. In many respects, this isn’t unlike traditional SAN storage or the way virtual disks are mapped to virtual machines. However, the difference for containers is the level of portability they represent and the need to have a security mechanism that extends to the public cloud.

There’s still some work to be done here. For Docker, its acquisition of storage startup Infinit may spur ideas about how persistent data is secured. This should hopefully mean the development of an interface that all vendors can work towards — storage “batteries included” but optional.



