
ArangoDB: Three Databases In One





ArangoDB, a German database expanding its business in the United States, has released new capabilities in version 3.5 of its eponymous database management software to make it easier to query and search growing data sets across multiple data models. With ArangoDB, data can be stored as key-value pairs, graphs or documents and accessed with one declarative query language. And you can do both at the same time — a document query and a graph query. The combination offers flexibility and performance advantages, explained Claudius Weinberger, CEO. (Source: The New Stack)
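The multi-model idea can be illustrated with a toy Python sketch. This is not ArangoDB's API or its query language; it is a hypothetical example showing how one data set can be exposed as key-value pairs, documents, and a graph, with a single query mixing a document filter and a graph traversal. All names (`documents`, `edges`, `friends_in_city`) are invented for illustration.

```python
# Toy multi-model store: the same records accessed three ways.
documents = {
    "alice": {"name": "Alice", "city": "Cologne"},
    "bob":   {"name": "Bob",   "city": "Berlin"},
    "carol": {"name": "Carol", "city": "Cologne"},
}

# Graph model: directed, labeled edges between document keys.
edges = [("alice", "knows", "bob"), ("bob", "knows", "carol")]

def get(key):
    """Key-value access: fetch a record by its key."""
    return documents[key]

def neighbors(key, relation):
    """Graph access: vertices reachable from `key` via `relation` edges."""
    return [dst for src, rel, dst in edges if src == key and rel == relation]

def friends_in_city(key, city):
    """Combined query: a graph traversal plus a document filter,
    analogous to running a graph query and a document query at once."""
    return [n for n in neighbors(key, "knows") if documents[n]["city"] == city]

print(friends_in_city("alice", "Berlin"))  # ['bob']
```

In a real multi-model database the equivalent would be a single declarative query; the point of the sketch is only that one engine can serve all three access patterns over the same data.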





Swapnil Bhartiya has decades of experience covering emerging technologies and enterprise open source. His stories have appeared in a multitude of leading publications including CIO, InfoWorld, Network World, The New Stack, Linux Pro Magazine, ADMIN Magazine, HPE Insights, Raspberry Pi Geek Magazine, SweetCode, Linux For You, Electronics For You and more. He is also a science fiction writer and founder of TFiR.io.

Can You Hear Me Now? Staying Connected During a Cybersecurity Incident | Cybersecurity


We all know that communication is important. Anyone who’s ever been married, had a friend, or held a job knows that’s true. While good communication is pretty much universally beneficial, there are times when it’s more so than others. One such time? During a cybersecurity incident.

Incident responders know that communication is paramount. Even a few minutes might mean the difference between closing an issue (thereby minimizing damage) vs. allowing a risky situation to persist longer than it needs to. In fact, communication — both within the team and externally with different groups — is one of the most important tools at the disposal of the response team.

This is obvious within the response team itself. After all, there is a diversity of knowledge, perspective and background on the team, so the more eyes on the data and information you have, the more likely someone will find and highlight pivotal information. It’s also true with external groups.

For example, outside teams can help gather important data to assist in resolution: either technical information about the issue or information about business impacts. Likewise, a clear communication path with decision makers can help “clear the road” when additional budget, access to environments/personnel, or other intervention is required.

What happens when something goes wrong? That is, when communication is impacted during an incident? Things can get hairy very quickly. If you don’t think this is worrisome, consider the past few weeks: two large-scale disruptions impacting Cloudflare (rendering numerous sites inaccessible) and a disruption in Slack just occurred. If your team makes use of either cloud-based correspondence tools dependent on Cloudflare (of which there are a few) or Slack itself, the communication challenges are probably still fresh in your mind.

Now imagine that every communication channel you use for normal operations is unavailable. How effective do you think your communication would be under those circumstances?

Alternate Communication Streams

Keep in mind that the middle of an incident is exactly when communications are needed most — but it also is (not coincidentally) the point when they are most likely to be disrupted. A targeted event might render critical resources like email servers or ticketing applications unavailable. A wide-scale malware event might leave the network itself overburdened with traffic (impacting potentially both VoIP and other networked communications), etc.

The point? If you want to be effective, plan ahead for this. Plan for communication failure during an incident just like you would put time into preparedness for the business itself in response to something like a natural disaster. Think through how your incident response team will communicate with other geographic regions, distributed team members, and key resources if an incident should render normal channels nonviable.

In fact, it’s often a good idea to have a few different options for “alternate communication channels” that will allow team members to communicate with each other depending on what is impacted and to what degree.

The specifics of how and what you’ll do will obviously vary depending on the type of organization, your requirements, cultural factors, etc. However, a good way to approach the planning is to think through each of the mechanisms your team uses and come up with at least one backup plan for each.

If your team uses email to communicate, you might investigate external services that are not reliant on internal resources but maintain a reasonable security baseline. For example, you might consider external cloud-based providers like ProtonMail or Hushmail.

If you use VoIP normally, think through whether it makes sense to issue prepaid cellular or satellite phones to team members (or at least to have a few on hand) in the event that voice communications become impacted. Supplementing voice services with external cellular or satellite links can also provide an alternate network connectivity path, which could be useful if network connectivity is slow or unavailable.

Planning Routes to Resources and Key External Players

The next thing to think through is how responders will gain access to procedures, tools and data in the event of a disruption. For example, if you maintain documented response procedures and put them all on the network where everyone can find them in a pinch, that’s a great start… but what happens if the network is unavailable or the server they’re stored on is down? If it’s in the cloud, what happens if the cloud provider is impacted by the same problem or otherwise can’t be reached?

Just as you thought through and planned alternatives for how responders will communicate during an event, so too should you think through what they’ll need to communicate and how they’ll get to the important resources they’ll need.

In the case of documents, this might mean maintaining a printed book somewhere that they can physically access — in the case of software tools, it might mean keeping copies stored on physical media (a USB drive, CD, etc.) that they can get to should they need it. The specifics will vary, but think it through systematically and prepare a backup plan.

Extend this to key external resources and personnel your team members may need access to as well. This is particularly important when it comes to three things: access to key decision-makers, external PR, and legal.

In the first case, there are situations where you might need to bring in external resources to help support you (for example, law enforcement or forensic specialists). In doing that, waiting for approval from someone who is unavailable because of the outage, or otherwise difficult to reach, puts the organization at risk.

The approver either needs to be immediately reachable (potentially via an alternate communication pathway as described above) or, barring that, have provided approval in advance (for example, preapproval to spend money up to a given spending threshold) so that you’re not stuck waiting around during an event.

The same is true for external communications. You don’t want your key contact points and liaisons (for example, to the press) to be MIA when you need them most. Lastly, it is very important to have access to legal counsel, so make sure that your alternative communication strategy includes a mechanism to reach internal or external counsel should you require their input.

The upshot is that the natural human tendency is to overlook the fragility of dependencies unless we examine them systematically. Incident responders need to be able to continue to operate effectively and share information even under challenging conditions.

Putting the time into thinking these things through and coming up with workarounds is important to support these folks in doing their job in the midst of a cybersecurity event.


Ed Moyle is general manager and chief content officer at Prelude Institute. He has been an ECT News Network columnist since 2007. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development. Ed is co-author of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as author, public speaker and analyst.






Telegram Provides Nuclear Option to Erase Sent Messages | Developers


By Jack M. Germain

Mar 26, 2019 5:00 AM PT

Telegram Messaging on Sunday announced a new privacy rights feature that allows users to delete not only their own comments, but also those of all other participants in a message thread, on all devices that received the conversation. Although the move is meant to bolster privacy, it’s likely to spark some controversy.


Telegram, a cloud-based instant messaging and Voice over IP service, is similar to WhatsApp and Facebook Messenger. Telegram Messenger allows users to send free messages by using a WiFi connection or mobile data allowance with optional end-to-end encryption and encrypted local storage for Secret Chats.

Telegram’s new unsend feature does two things. First, it removes the previous 48-hour time limit for removing anything a user wrote from the devices of participants. Second, it lets users delete entire chats from the devices of all participating parties.





Telegram also changed a policy regarding how users can or cannot forward another’s conversation.

Privacy policies are critical to people who rely heavily on chat communications, noted Paul Bischoff, privacy advocate with Comparitech.

“Many people use chat apps under the assumption that their communications are private, so it is very important that chat apps meet those expectations of privacy,” he told LinuxInsider.

Obviously, if you’re a dissident in an autocratic country that cracks down on free speech, privacy is very important. However, it is also important to everyday people, said Bischoff, for “sending photos of their kids, organizing meetings, and exchanging Netflix passwords,” for example.

Potential Controversy

Telegram’s new unsend feature could stir controversy over the rights of parties to a message conversation. One user’s right to carry out a privacy purge could impact other participants’ rights to engage in discourse.

Regardless of who initiated the chat, any participant can delete some or all of the conversation. Criticisms voiced since the change in the company’s unsend policy suggest that the first participant to unsend effectively can remove control from everyone else. Telegram’s process allows deletion of messages in their entirety — not just the senders’ comments.

The chat history suddenly disappears. No notification indicates the message thread was deleted.

Privacy Treatments

Telegram Messenger, like its competitors, has had an “unsend” feature for the last two years. It allowed users to delete any messages they sent via the app within a 48-hour time limit. However, users could not delete conversations they did not send.

Facebook’s unsend feature differs in that it gives users the ability to recall a sent message — but only within 10 minutes of sending it.

“Telegram doesn’t enable end-to-end encryption by default, but you can get it by using the ‘Secret Chats’ feature,” said Comparitech’s Bischoff.

End-to-end encryption ensures that no one except the intended recipient — not even Telegram — can decrypt messages, he said. WhatsApp and Signal encrypt messages by default.

Telegram has an incredibly strong brand, according to Jamie Cambell, founder of Go Best VPN. It has a reputation for being the app of the people, since it has been banned in Russia for not providing its encryption keys to the government.

“Its founder, Pavel Durov, actively seeks to fight censorship and is widely considered the Mark Zuckerberg of Russia,” he told LinuxInsider.

Why the Change?

The new unsend feature gives millions of users complete control of any private conversation they have ever had, according to Telegram. Users can choose to delete any message they sent or received from both sides in any private chat.

“The messages will disappear for both you and the other person — without leaving a trace,” noted the Telegram Team in an online post.

The change was orchestrated “to improve the privacy of the Telegram messaging application,” the post continued. Its developers upgraded the Unsend feature “to allow users to remotely delete private chat sessions from all devices involved.”

The privacy changes are to protect users, according to the company. Old forgotten messages might be taken out of context and used against them decades later.

For example, a hasty text sent to a girlfriend in school can return to haunt the sender years later “when you decide to run for mayor,” the company suggested.

How It Works

Telegram users can delete any private chat entirely from both their device and the other person’s device with just two taps.

To delete a message from both ends, a user taps on the message and selects the delete button. A message window then asks the user to select whether to delete just his/her chat messages or those of the other participants as well.

Selecting the second choice deletes the message everywhere; selecting the first choice removes it only from the inbox of the user initiating the delete request. The new feature lets users delete messages in one-to-one or group private chats.

The privacy purge allows users to delete all traces of the conversation, even if the user did not send the original message or begin the thread.

Forwarding Controls Added

Telegram also added an Anonymous Forwarding feature to make privacy more complete. This feature gives users new controls to restrict who can forward their messages, according to Telegram.

When users enable the Anonymous Forwarding setting, their forwarded messages no longer will link back to their account. Instead, the message window will only display an unclickable name in the “from” field.

“This way people you chat with will have no verifiable proof you ever sent them anything,” according to Telegram’s announcement.

Telegram also introduced new message controls in the app’s Privacy and Security settings. A new feature called “Forwarded messages” lets users restrict who can view their profile photos and prevent any forwarded messages from being traced back to their account.

Open Source Prospects

The Telegram application programming interface is 100 percent open for all developers who want to build applications on the Telegram platform, according to the company.

“Open APIs allow third-party developers to create applications that integrate with Telegram and extend its capabilities,” Bischoff said.

Telegram may be venturing further into open source terrain. The company might release all of the messaging app’s code at some point, suggests a note on its website’s FAQ page. That could bode well for privacy rights enthusiasts.

“Releasing more of the code will have a positive effect on Telegram’s appeal, barring any unforeseen security issues. That allows security auditors to crack open the code to see if Telegram is doing anything unsafe or malicious,” Bischoff added.

Win-Win Proposition

Telegram’s new take on protecting users’ privacy rights is a positive step forward, said attorney David Reischer, CEO of LegalAdvice.com. It benefits both customers who want more control over how their data and communications are shared and privacy rights advocates who see privacy as an important cornerstone of society.

It is not uncommon for a person to send a message and then later regret it. There also can be legal reasons for a person to want to delete all copies of a previously sent message.

For example, “a person may send a message and then realize, even many months later, that the communication contained confidential information that should not be shared or entered into the public domain,” Reischer told LinuxInsider.

Allowing a person to prevent the communication from being forwarded is also an important advance for consumers who value their privacy, he added. It allows a user to prevent sharing of important confidential communications.

“Privacy rights advocates, such as myself, see these technology features as extremely important because the right to privacy entails that one’s personal communications should have a high standard of protection from public scrutiny,” Reischer said.

Still, there is a negative effect when private conversations are breached by malicious actors who find a way to circumvent the privacy features, he cautioned. Ultimately, the trust and confidence of senders could be misplaced if communications turn out to be not so private after all.

Privacy Concerns First Priority

Privacy is extremely important to those who use chat communications — at least those who are somewhat tech-savvy, noted Cambell. For Telegram, privacy is the most important feature for users.

Privacy is extremely important to many Americans who want to have private conversations even when the communications are just ordinary in nature, said Reischer. Many people like to know that their thoughts and ideas are to be read only by the intended recipient.

“A conversation taken out of context may appear damnable to others even when the original intent of the message was innocuous,” he said.

Additionally, many professionals of various trades and crafts may not want to share their confidential trade secrets and proprietary information, Reischer added. “Privacy is important to all business people, and there is typically an expectation of privacy in business when communicating with other coworkers, management, legal experts or external third parties.”

Other New Features

Telegram added new features that made the app more efficient to use. For example, the company added a search tool that allows users to find settings quickly. It also shows answers to any Telegram-related questions based on the FAQ.

The company also upgraded GIF and sticker search and appearance on all mobile platforms. Any GIF can be previewed by tapping and holding.

Sticker packs now have icons, which makes selecting the right pack easier. Large GIFs and video messages on Telegram are now streamed. This lets users start watching them without waiting for the download to complete.

VoiceOver and TalkBack accessibility support now includes gesture-based spoken feedback, making it possible to use Telegram without seeing the screen.


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.
Email Jack.






IT Resume Dos and Don’ts: Formatting for Readability | Developers


In my career as an IT resume writer, I’ve seen a lot of IT resumes cross my desk, and I’d like to share some of the most common formatting problems that I see regularly. Of course, an IT resume requires more than great formatting. It requires well-written, targeted content, and a clear story of career progression. It needs to communicate your unique brand and value proposition.

Still, if the formatting is off, it can derail the rest of the document and prevent your story from being read by the hiring authority.

I’ll start with a few IT resume formatting “don’ts.”

1. Don’t Use Headers

This is an easy fix. Headers and footers made a lot of sense when an IT resume was likely to be read as a printed sheet of paper.

In 2018, how likely is it that a busy hiring authority is going to take the time or the effort to print out the hundreds of resumes that are submitted for every position?

Not terribly.

Your IT resume is going to be read online.

That’s why using a header for your contact information is a bad idea.

It takes a few seconds to click on the header, copy and paste your email and phone number, and then click again in the body of the resume to read the text.

A few seconds doesn’t seem like much, but for someone who is looking through a lot of resumes, every second really does count. A hiring authority who is REALLY busy may just decide it’s too much trouble to get your contact information from the header.

That means your resume may well end up in the “read later” folder.

That’s not a good outcome.

There’s another problem with using the header, related to the one I just discussed.

Headers just look old fashioned. Out of date.

Old fashioned is not the brand you want to present if you’re looking for a job in technology — whether you’re a CIO, an IT director, or a senior developer.

Again, this is an easy fix. Just put your name and contact information in the body of the resume. I suggest using a larger font in bold caps for your name. You want to be certain that your name will stick in the memory of the reader.

2. Don’t Over-Bullet

This is probably the most common mistake I see in the IT resumes that cross my desk.

In my trade, we call it “death by bullets.” The job seeker has bulleted everything.

Everything.

That’s really hard to read. Beyond the fact that it’s just not clear, there’s another big problem with over-bulleting.

To paraphrase The Incredibles, if everything is bulleted, nothing is.

The goal of using bullets — sparingly — is to draw the reader’s eye and attention to your major accomplishments.

If you’ve bulleted everything, the reader doesn’t know what’s critical and what’s not, which defeats the purpose of using bullets in your resume.

In my own work as an IT resume writer, I make a clear distinction between duties and responsibilities and hard, quantifiable accomplishments. I write the duties in paragraph format, and bullet only the accomplishments that demonstrate what my IT resume clients really have delivered.

It’s a clear, straightforward approach that I recommend.

3. Don’t Get Colorful

Happily, this particular problem doesn’t seem as common as it was a few years ago, but every once in a while, I’ll still see a resume with lots of color.

The idea behind that, of course, is to make the resume “eye-catching.”

Rather than catching the reader’s eye, however, a lot of color is just confusing.

“Why is this section blue? Is blue telling me it’s really important? And yellow? Why is this person using yellow? Because it’s mighty hard to read…”

I’m sure you see my point. The colors, rather than giving the reader a map of what to look at first — what to prioritize — just end up looking, well, busy.

That makes your resume harder to read. And if it’s harder to read?

Yeah. As I mentioned above: It’s likely to go into the “read later” folder.

You really don’t want that to happen.

4. Don’t Lead With Education

This is another easy fix, but it’s important.

The only time you want to lead with education is when you’re a new grad. If you’re a professional — whether senior, mid-career or junior — you want to highlight your experience on page one, and not take up that valuable space with your degrees or certifications.

Of course, degrees, training and certifications are important, but they belong at the end of the resume, at the bottom of page two or three.

5. Don’t Use Arial or Times New Roman

I’ll end the “don’ts” with another simple one.

Arial and Times New Roman are, well, so 1990s. Yes, they’re good, clear, readable fonts, which is why they’ve become so popular.

Probably 90 percent of all IT resumes are written in these two fonts. There’s nothing negative in that, but it’s a little boring.

Now, I’m not suggesting you use Comic Sans or Magneto, but there are some great, clean fonts that aren’t as common in the IT resume world.

Personally? I like Calibri for body and Cambria for headings.

So, that gives you a number of things to avoid in formatting your IT resume. I’ll now suggest a few “dos” to concentrate on to ensure that your document is as readable as possible.

1. Keep Things Simple

I’m a strong believer that an IT resume needs to tell a story. The formatting of the document should serve only to clarify that story, and not get in the way.

When the document is finished, take a look. Does the formatting lead your eye to the most important points? Is the formatting clear and clean? Or does it distract from the story you’re trying to tell?

2. Think Mobile

This point gets more important with each passing year. These days, the odds are that the hiring authority will be reading your story on a phone, tablet, or other mobile device.

That’s changed the way I’ve formatted the IT resumes I write for my clients.

I’ve never gone beyond minimal design, but I’ve scaled things back. For example, I used to use shading to draw attention to critical sections of the document.

But now? I think that can be hard to read on a mobile — and readability, to repeat a theme, is the only goal of resume formatting.

3. Use Bold and Italics Sparingly

This point follows directly from the previous one. We don’t want to bold or italicize everything. Bold and italics, used consistently and sparingly, can help signal to the reader what is most important in your IT resume, and provide a framework for a quick read-through.

That enables the hiring authority to get the gist of your career fast, without distracting from a deeper second read.

4. Use Hard Page Breaks

This is pretty simple, but it is important. I always insert hard page breaks in every finished IT resume I write. That helps ensure that the document is going to look consistent across devices and across platforms.

It’s not 100 percent foolproof — Word is a less-than-perfect tool. With hard page breaks, though, the odds are very good that your resume will look the same to each reader — and to the same reader when reviewing the document on different devices. That consistency reinforces the sense of professionalism you’re striving to convey.

5. Write First, Format Later

Professional IT resume writers disagree on this, but I’m going to suggest what I’ve found effective in my practice.

I always write the resume first. I personally use a plain text editor, to make certain that Microsoft Word doesn’t add anything that I’ll have to fight to remove later.

It’s only when I’ve got the text completely finished that I copy and paste into Word, and then add the formatting that I think best supports the client story I’m trying to tell.

If I try to format as I’m writing, the formatting may take over. It’s tempting to insist on keeping the formatting consistent, even when it’s not best supporting the story.

So think about it. I’d strongly recommend writing first, and formatting later, when you’re completely clear on the story you’re trying to tell.

I know that many people struggle with formatting their IT resume, so I hope that these simple ideas will help make the process a little easier and less painful.

Stay tuned for future articles that will dig a bit deeper into the IT resume process, covering content structure, writing style, and branding.


J.M. Auron is a professional resume writer who focuses exclusively on crafting the best possible IT resume for clients from C-level leaders to hands-on IT professionals. When he’s not working, he practices Fujian Shaolin Kung Fu and Sun Style Tai Chi. He also writes detective fiction and the occasional metrical poem.






WhiteSource Rolls Out New Open Source Security Detector | Enterprise


By Jack M. Germain

May 24, 2018 10:24 AM PT

WhiteSource on Tuesday launched its next-generation software composition analysis (SCA) technology, dubbed “Effective Usage Analysis,” with the promise that it can reduce open source vulnerability alerts by 70 percent.

The newly developed technology provides details beyond which components are present in the application. It provides actionable insights into how components are being used. It also evaluates their impact on the security of the application.

The new solution shows which vulnerabilities are effective. For instance, it can identify which vulnerabilities get calls from the proprietary code.

It also underscores the impact of open source code on the overall security of the application and shows which vulnerabilities are ineffective. Effective Usage Analysis technology allows security and engineering teams to cut through the noise to enable correct prioritization of threats to the security of their products, according to WhiteSource CEO Rami Sass.

“Prioritization is key for managing time and limited resources. By showing security and engineering teams which vulnerable functionalities are the most critical and require their immediate attention, we are giving them the confidence to plan their operations and optimize remediation,” he said.

The company’s goal is to empower businesses to develop better software by harnessing the power of open source. In its 2017 Software Composition Analysis (SCA) Wave report, Forrester recognized the company as having the best current offering.

WhiteSource’s new Effective Usage Analysis offering addresses an ongoing challenge for open source developers: to identify and correct identifiable security vulnerabilities proactively, instead of watching or fixing problems after the fact, said Charles King, principal analyst at Pund-IT.

“That should result in applications that are more inherently secure and also improve the efficiency of developers and teams,” he told LinuxInsider. “Effective Usage Analysis appears to be a solid individual solution that is also complementary and additive to WhiteSource’s other open source security offerings.”

Open Source Imperative

As open source usage has increased, so has the number of alerts on open source components with known vulnerabilities. Security teams have become overloaded with security alerts, according to David Habusha, vice president of product at WhiteSource.

“We wanted to help security teams to prioritize the critical vulnerabilities they need to deal with first, and increase the developers’ confidence that the open source vulnerabilities they are being asked to fix are the most pressing issues that are exposing their applications to threats,” he told LinuxInsider.

The current technology in the market is limited to detecting which vulnerable open source components are in an application, he said. It cannot provide details on how those components are being used, or on the impact of each vulnerable functionality on the security of the application.

The new technology currently supports Java and JavaScript. The company plans to expand its capabilities to include additional programming languages. Effective Usage Analysis is currently in beta testing and will be fully available in June.

How It Works

Effective Usage Analysis promises to cut down open source vulnerabilities alerts dramatically by showing which vulnerabilities are effective (getting calls from the proprietary code that impact the security of the application) and which ones are ineffective.

A WhiteSource internal research study on Java applications found that only 30 percent of reported alerts on open source components with known vulnerabilities originated from effective vulnerabilities and required high prioritization for remediation.

Effective Usage Analysis also will provide actionable insights to developers for remediating a vulnerability by providing a full trace analysis to pinpoint the path to the vulnerability. It adds an innovative level of resolution for understanding which functionalities are effective.

This approach aims to reduce open source vulnerability alerts and provide actionable insights. It identifies the vulnerabilities’ exact locations in the code to enable faster, more efficient remediation.

A Better Mousetrap

Effective Usage Analysis represents a new approach to effectiveness analysis that may be applied to a variety of use cases, said WhiteSource's Habusha. Software composition analysis (SCA) tools traditionally identify security vulnerabilities associated with an open source component by matching its calculated digital signature with an entry stored in a specialized database maintained by the SCA vendor.

SCA tools retrieve data for that entry based on reported vulnerabilities in repositories such as the NVD, the U.S. government repository of standards-based vulnerabilities.

“While the traditional approach can identify open source components for which security vulnerabilities are reported, it does not establish if the customer’s proprietary code actually references — explicitly or implicitly — entities reported as vulnerable in such components,” said Habusha.
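The traditional matching step Habusha describes can be sketched as a fingerprint lookup. The digest scheme, the stand-in component file, and the database entry below are all invented for illustration; real SCA vendors maintain far richer databases:

```python
# Sketch of traditional SCA matching: fingerprint a component file and
# look it up in a vulnerability database. All entries here are made up.
import hashlib
import os
import tempfile

def fingerprint(path):
    """A component's 'digital signature': here, simply SHA-1 over its bytes."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a real component file (e.g. a .jar) on disk.
demo = tempfile.NamedTemporaryFile(delete=False, suffix=".jar")
demo.write(b"fake component bytes")
demo.close()

# Hypothetical vendor database: digest -> advisory.
VULN_DB = {fingerprint(demo.name): "CVE-2015-7501 (example entry)"}

advisory = VULN_DB.get(fingerprint(demo.name))
print(advisory or "no known vulnerabilities")
os.unlink(demo.name)
```

Note what this approach cannot tell you, which is Habusha's point: a match proves the vulnerable component is present, not that the application's code ever calls the vulnerable functionality.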

WhiteSource’s new product is an added component that targets both security professionals and developers. It helps application security professionals prioritize their security alerts and quickly detect the critical problems that demand their immediate attention.

It helps developers by mapping the path from their proprietary code to the vulnerable open source functionality, providing insights into how they are using the vulnerable functionality and how the issues can be fixed.
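That path-mapping idea can be illustrated with a breadth-first search that recovers a call chain from the application's entry point to the vulnerable function. This is a hypothetical sketch with invented module names, not the product's actual analysis, which works on real code rather than a hand-built graph:

```python
# Sketch of "full trace analysis": recover the shortest call path from
# proprietary code to the vulnerable open source function ("the sink").
from collections import deque

CALL_GRAPH = {
    "app.main": ["app.load_config", "app.serve"],
    "app.load_config": ["yaml_lib.parse"],
    "yaml_lib.parse": ["yaml_lib.unsafe_construct"],  # vulnerable sink
}

def trace(entry, sink, graph):
    """BFS that records parents so the call path can be reconstructed."""
    parents, queue = {entry: None}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == sink:
            path = []
            while fn is not None:
                path.append(fn)
                fn = parents[fn]
            return list(reversed(path))
        for callee in graph.get(fn, []):
            if callee not in parents:
                parents[callee] = fn
                queue.append(callee)
    return None  # sink unreachable: the alert is ineffective

print(" -> ".join(trace("app.main", "yaml_lib.unsafe_construct", CALL_GRAPH)))
```

A developer reading the resulting trace sees exactly which of their own calls ultimately reaches the vulnerable functionality, and therefore where a fix or workaround belongs.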

Different Bait

Effective Usage Analysis employs a new scanning process that includes the following steps:

  • Scanning customer code;
  • Analyzing how the code interacts with open source components;
  • Indicating if reported vulnerabilities are effectively referenced by such code; and
  • Identifying where that happens.

It employs a combination of advanced algorithms, a comprehensive knowledge base, and a new user interface to accomplish those tasks. Effective Usage Analysis enables customers to establish whether reported vulnerabilities constitute a real risk.

“That allows for a significant potential reduction in development efforts and higher development process efficiency,” said Habusha.

Potential Silver Bullet

WhiteSource's new solution has the potential to be a better detection tool for open source vulnerabilities, suggested Avi Chesla, CTO of Empow Cyber Security. The new detection tools will allow developers to understand the potential risk associated with the vulnerabilities.

The tools “will ultimately motivate developers to fix them before releasing a new version. Or at least release a version with known risks that will allow the users to effectively manage the risks through external security tools and controls,” he told LinuxInsider.

The new approach matters because long-standing vulnerabilities are, and should be, known to the industry, Chesla explained, which gives security tools a better chance of detecting exploitation attempts against them.

Effective Usage Analysis' prioritization is probably its most important feature, because developers are flooded with alerts, or noise. Separating the signal from that noise is time-consuming work that requires cybersecurity expertise, noted Chesla.

The “true” signals are the alerts that represent a vulnerability that actually can be exploited and lead to a real security breach. The cybersecurity market deals with this issue on a daily basis.

“Security analysts are flooded with logs and alerts coming from security tools and experience a similar challenge to identify which alerts represent a real attack intent in time,” Chesla pointed out.

Equifax Factor

The major vulnerability that compromised Equifax last year sent security experts and software devs scrambling for effective fixes. However, it is often a business decision, rather than a security solution, that most influences software decisions, suggested Ed Price, director of compliance and senior solution architect at Devbridge Group.

“Any tools that make it easier for the engineering team to react and make the code more secure are a value-add,” he told LinuxInsider.

In some cases, the upgrade of a single library, which then cascades down the dependency tree, will create a monumental task that cannot be fixed in a single sprint or a reasonable timeframe, Price added.

“In many cases, the decision is taken out of the hands of the engineering team and business takes on the risk of deploying code without the fixes and living with the risk,” Price said, adding that no tool — open source or otherwise — will change this business decision.

“Typically, this behavior will only change in an organization once an ‘Equifax event’ occurs and there is a penalty in some form to the business,” he noted.

Saving Code Writers’ Faces

WhiteSource's new tool is another market entry that aims to make sense of the interconnected technologies used in enterprise environments, suggested Chris Roberts, chief security architect at Acalvio.

“The simple fact of the matter is, we willingly use code that others have written, cobbling things together in an ever increasingly complex puzzle of collaborative code bases,” he told LinuxInsider, “and then we wonder why the researchers and criminals can find avenues in. It is good to see someone working hard to address these issues.”

The technologies will help if people both pay attention and learn from the mistakes being made. It is an if/and situation, Roberts said.

The logic is as follows: *If* I find a new tool that helps me understand the millions of lines of code that I have to manage or build as part of a project, *and* I accept that the number of errors per 100 lines is still unacceptable, then a technology that unravels those complexities, dependencies and libraries is going to help, he explained.

“We need to use it as a learning tool and not another crutch or Band-Aid to further mask the garbage we are selling to people,” Roberts said.

Necessary Path

Hackers love open source software security vulnerabilities because they are a road map for exploiting unpatched systems, observed Tae-Jin Kang, CEO of Insignary. Given that the number of vulnerabilities hit a record in 2017, according to the CVE database, finding the vulnerabilities is the best first line of defense.

“Once they are found in the code and patched, then it is appropriate to begin leveraging technologies to deal with higher-order, zero-day issues,” Kang told LinuxInsider.

Organizations for years have looked to push back the day of reckoning with regard to OSS security vulnerabilities. The vulnerabilities have been viewed as trivial, while engineering debt has piled up.

“Equifax has been the clearest illustration of what happens when these two trends meet,” said Kang. “With the implementation of GDPR rules, businesses need to get more aggressive about uncovering and patching security vulnerabilities, because the European Union’s penalties have teeth.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.