The Darwinian Accelerator Driving White-Box to Brite-Box in the Enterprise | IT Infrastructure Advice, Discussion, Community


Open, standards-based networking (a disaggregated Linux-based network operating system, or NOS, running atop bare-metal "white-box" ODM switches from multiple vendors) is, at this point, conceptually nothing new. A number of Silicon Valley start-ups began selling these solutions to the market back in early 2012. Individually, each offering looked to advance the cause of modernizing and innovating IP networking in one of the three main segments of the overall IP marketplace: the telco, data center, or large enterprise vertical.

As it turns out, not only are the software feature-set requirements needed to service these three verticals wildly divergent, but the logical and physical architectural realities of these network environments turned out to have quite an impact on the timing of overall open networking adoption. For example, the first vertical to fully embrace the white box open networking model was that of large data centers, aka web-scale networking or, often, multi-cloud networking. In this application of the technology, thousands of white box switches can be deployed at a very small number of locations where high-end Tier 3 resources are plentiful. So potential concerns about things like supply chains and global hardware service simply weren't issues at all for the data center market, where both switching hardware and IT talent are physically concentrated in the same places.

NOS vendors in all three open networking verticals (telco, data center, and large enterprise) initially entered their target markets via classic proof-of-concept (PoC) white box trials and deployments. This was done to validate economic and business value propositions and show commercial robustness. The data center open networking vertical was the first to take off. In fact, open white-box networking was so successful here that the web-scale companies themselves, such as Amazon and Google, essentially rolled their own white box switches for their data centers to displace classic legacy installations from the Ciscos and Aristas of the world.

Networking takes a different approach

But something quite different is afoot with the second wave of open networking. That wave involves disaggregated Linux NOS software running enterprise features on white box switches with open APIs. Basically, the impact of scale in large, Fortune 500-class open networking deployments is far more layered, and even more mission-critical, in the enterprise than it was for data center open networking solutions.

Here we're not talking about scale in the sense of configuring thousands of switches and automating the management of the network. Those are ubiquitous requirements for open networking in any of the three major market segments. No. We're talking about the differences in critical business considerations involved in deploying 5,000 or 10,000 or 20,000 switches at a small handful of data center locations versus deploying the same number of devices in hundreds, or even thousands, of disparate locations across the globe. For the operators of large enterprises, concerns about things like dependable supply chains and global service and support suddenly shoot right to the top of their requirements lists.

When the open networking data center market moved from its PoC phase to commercial deployment, the choice of hardware manufacturer barely registered as a consideration. Spares could be easily stockpiled locally, and there was plenty of resident, local IT talent to install and maintain the open switching white-box hardware infrastructure. 

Conversely, when open networking PoCs and production trials for modernizing large enterprise networks finally started reaching the same point last year (using hardware from the exact same white box manufacturers, such as Edge-Core and Delta, that had been used in the data center PoCs), the procurement arms of most of these companies immediately took notice. Before large-scale deployments would be funded, they demanded switches backed by a known and trustworthy source of global hardware support and sparing. In essence, they began to mandate the use of commercially branded white box switching hardware, aka "brite-box," for large-scale open networking deployments in their companies.

This is where the beauty of the open networking model makes itself known once again. The two leading brite-box switching vendors, Dell EMC and HP, both "brand" the identical white box hardware used in the enterprise networking PoCs and trials, selling it under their own names with backing from their extensive global service and support organizations. The Edge-Core 7618 and Dell EMC Z9264, for example, are identical 64-port 100G switches with open APIs that allow an open standards-based Linux NOS to run on them with a full enterprise feature set.

So, the Darwinian Accelerator driving the acceptance and deployment of open network solutions in large enterprises turns out to be scale, albeit with a “mutation” that favors the brite-box over the white-box trait.

 




Working with the Container Storage Library and Tools in Red Hat Enterprise Linux | Linux.com


How containers are stored on disk is often a mystery to users working with containers. In this post, we're going to look at how container images are stored and at some of the tools you can use to work with those images directly: Podman, Skopeo, and Buildah.

Evolution of Container Image Storage

When I first started working with containers, one of the things I did not like about Docker's architecture was that the daemon hid the information about the image store within itself. The only realistic way someone could use the images was through the daemon. We were working on the atomic tool and wanted a way to mount the container images so that we could scan them. After all, a container image was just a mount point under devicemapper or overlay.

The container runtime team at Red Hat created the atomic mount command to mount images under Docker, and this was used within atomic scan. The issue here was that the daemon did not know about this, so if someone attempted to remove the image while we had it mounted, the daemon would get confused. The locking and manipulation had to be done within the daemon. …
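The same kind of direct image access is possible today without going through a daemon. As a rough illustration (not from the original post), here is a minimal sketch using Podman's image-mount commands; the UBI image name is only an example, and under a rootless setup these commands would need to be wrapped in podman unshare:

    # Pull an image, mount its filesystem read-only, inspect it, then unmount.
    podman pull registry.access.redhat.com/ubi8/ubi
    mnt=$(podman image mount registry.access.redhat.com/ubi8/ubi)
    ls "$mnt/etc"        # browse or scan the image contents directly
    podman image unmount registry.access.redhat.com/ubi8/ubi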

Container storage configuration is defined in the storage.conf file. For container engines that run as root, the storage.conf file is stored in /etc/containers/storage.conf. If you are running rootless with a tool like Podman, then the storage.conf file is stored in $HOME/.config/containers/storage.conf.
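For reference, a minimal storage.conf looks roughly like the sketch below; the values shown are typical defaults rather than anything mandated, so check the file shipped by your distribution before copying them:

    [storage]
    driver = "overlay"                          # storage backend used for image layers
    runroot = "/run/containers/storage"         # runtime state (mounts, locks)
    graphroot = "/var/lib/containers/storage"   # where images and containers are stored

    [storage.options]
    additionalimagestores = []                  # optional extra read-only image stores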

Read more at Red Hat blog

Red Hat Enterprise Linux 8 Beta » Linux Magazine


Red Hat, soon to be owned by IBM, has announced the beta version of Red Hat Enterprise Linux 8. As the IT landscape changes and workloads move from traditional data centers to the cloud, leveraging emerging technologies like blockchain and machine learning, the expectations of the OS that runs those workloads are also changing.

To keep up with these changing times, RHEL 8 maintains a fine balance between past and future.

“Today, we’re offering a vision of a Linux foundation to power the innovations that can extend and transform business IT well into the future: Meet Red Hat Enterprise Linux 8 Beta,” Red Hat said in a press release.

One of the most notable highlights of this beta is the introduction of the concept of Application Streams to deliver userspace packages more simply and with greater flexibility.

“Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system,” said Red Hat.

What this means is that users don't have to worry about 'RPM hell' or package conflicts. "Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream," explained Red Hat.

It allows users to consume an agile and user-customized version of Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments.
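In practice, Application Streams are consumed through the module tooling in yum/dnf. A rough sketch, assuming the postgresql module shipped with the beta (stream names may differ in your repositories):

    yum module list postgresql          # list the streams (versions) available
    yum module install postgresql:10   # install a specific stream
    yum module list --enabled           # confirm which streams are active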

You can test the beta by downloading it from https://access.redhat.com/products/red-hat-enterprise-linux/beta




IT Resume Dos and Don’ts: Formatting for Readability | Developers


In my career as an IT resume writer, I've seen a lot of IT resumes cross my desk, and I'd like to share some of the most common formatting problems that I see regularly. Of course, an IT resume requires more than great formatting. It requires well-written, targeted content and a clear story of career progression. It needs to communicate your unique brand and value proposition.

Still, if the formatting is off, that can derail the rest of the document and prevent your story from being read by the hiring authority.

I’ll start with a few IT resume formatting “don’ts.”

1. Don’t Use Headers

This is an easy fix. Headers and footers made a lot of sense when an IT resume was likely to be read as a printed sheet of paper.

In 2018, how likely is it that a busy hiring authority is going to take the time or the effort to print out the hundreds of resumes that are submitted for every position?

Not terribly.

Your IT resume is going to be read online.

That’s why using a header for your contact information is a bad idea.

It takes a few seconds to click on the header, copy and paste your email and phone number, and then click again in the body of the resume to read the text.

A few seconds doesn’t seem like much, but for someone who is looking through a lot of resumes, every second really does count. A hiring authority who is REALLY busy may just decide it’s too much trouble to get your contact information from the header.

That means your resume may well end up in the “read later” folder.

That’s not a good outcome.

There’s another problem with using the header, related to the one I just discussed.

Headers just look old fashioned. Out of date.

Old fashioned is not the brand you want to present if you’re looking for a job in technology — whether you’re a CIO, an IT director, or a senior developer.

Again, this is an easy fix. Just put your name and contact information in the body of the resume. I suggest using a larger font in bold caps for your name. You want to be certain that your name will stick in the memory of the reader.

2. Don’t Over-Bullet

This is probably the most common mistake I see in the IT resumes that cross my desk.

In my trade, we call it “death by bullets.” The job seeker has bulleted everything.

Everything.

That’s really hard to read. Beyond the fact that it’s just not clear, there’s another big problem with over-bulleting.

To paraphrase The Incredibles, if everything is bulleted, nothing is.

The goal of using bullets — sparingly — is to draw the reader’s eye and attention to your major accomplishments.

If you’ve bulleted everything, the reader doesn’t know what’s critical and what’s not, which defeats the purpose of using bullets in your resume.

In my own work as an IT resume writer, I make a clear distinction between duties and responsibilities and hard, quantifiable accomplishments. I write the duties in paragraph format, and bullet only the accomplishments that demonstrate what my IT resume clients really have delivered.

It’s a clear, straightforward approach that I recommend.

3. Don’t Get Colorful

Happily, this particular problem doesn’t seem as common as it was a few years ago, but every once in a while, I’ll still see a resume with lots of color.

The idea behind that, of course, is to make the resume “eye-catching.”

Rather than catching the reader’s eye, however, a lot of color is just confusing.

“Why is this section blue? Is blue telling me it’s really important? And yellow? Why is this person using yellow? Because it’s mighty hard to read…”

I’m sure you see my point. The colors, rather than giving the reader a map of what to look at first — what to prioritize — just end up looking, well, busy.

That makes your resume harder to read. And if it’s harder to read?

Yeah. As I mentioned above: It’s likely to go into the “read later” folder.

You really don’t want that to happen.

4. Don’t Lead With Education

This is another easy fix, but it’s important.

The only time you want to lead with education is when you’re a new grad. If you’re a professional — whether senior, mid-career or junior — you want to highlight your experience on page one, and not take up that valuable space with your degrees or certifications.

Of course, degrees, training and certifications are important, but they belong at the end of the resume, at the bottom of page two or three.

5. Don’t Use Arial or Times New Roman

I’ll end the “don’ts” with another simple one.

Arial and Times New Roman are, well, so 1990s. Yes, they’re good, clear, readable fonts, which is why they’ve become so popular.

Probably 90 percent of all IT resumes are written in these two fonts. There’s nothing negative in that, but it’s a little boring.

Now, I’m not suggesting you use Comic Sans or Magneto, but there are some great, clean fonts that aren’t as common in the IT resume world.

Personally? I like Calibri for body and Cambria for headings.

So, that gives you a number of things to avoid in formatting your IT resume. I’ll now suggest a few “dos” to concentrate on to ensure that your document is as readable as possible.

1. Keep Things Simple

I’m a strong believer that an IT resume needs to tell a story. The formatting of the document should serve only to clarify that story, and not get in the way.

When the document is finished, take a look. Does the formatting lead your eye to the most important points? Is the formatting clear and clean? Or does it distract from the story you’re trying to tell?

2. Think Mobile

This point gets more important with each passing year. These days, the odds are that the hiring authority will be reading your story on a phone, tablet, or other mobile device.

That’s changed the way I’ve formatted the IT resumes I write for my clients.

I’ve never gone beyond minimal design, but I’ve scaled things back. For example, I used to use shading to draw attention to critical sections of the document.

But now? I think that can be hard to read on a mobile — and readability, to repeat a theme, is the only goal of resume formatting.

3. Use Bold and Italics Sparingly

This point follows directly from the previous one. We don’t want to bold or italicize everything. Bold and italics, used consistently and sparingly, can help signal to the reader what is most important in your IT resume, and provide a framework for a quick read-through.

That enables the hiring authority to get the gist of your career fast, without distracting from a deeper second read.

4. Use Hard Page Breaks

This is pretty simple, but it is important. I always insert hard page breaks in every finished IT resume I write. That helps ensure that the document is going to look consistent across devices and across platforms.

It’s not 100 percent foolproof — Word is a less-than-perfect tool. With hard page breaks, though, the odds are very good that your resume will look the same to each reader — and to the same reader when reviewing the document on different devices. That consistency reinforces the sense of professionalism you’re striving to convey.

5. Write First, Format Later

Professional IT resume writers disagree on this, but I’m going to suggest what I’ve found effective in my practice.

I always write the resume first. I personally use a plain text editor, to make certain that Microsoft Word doesn’t add anything that I’ll have to fight to remove later.

It’s only when I’ve got the text completely finished that I copy and paste into Word, and then add the formatting that I think best supports the client story I’m trying to tell.

If I try to format as I'm writing, the formatting may take over. It's tempting to insist on keeping the formatting consistent, even when it no longer best supports the story.

So think about it. I’d strongly recommend writing first, and formatting later, when you’re completely clear on the story you’re trying to tell.

I know that many people struggle with formatting their IT resume, so I hope that these simple ideas will help make the process a little easier and less painful.

Stay tuned for future articles that will dig a bit deeper into the IT resume process, covering content structure, writing style, and branding.


J.M. Auron is a professional resume writer who focuses exclusively on crafting the best possible IT resume for clients from C-level leaders to hands-on IT professionals. When he's not working, he practices Fujian Shaolin Kung Fu and Sun Style Tai Chi. He also writes detective fiction and the occasional metrical poem.






WhiteSource Rolls Out New Open Source Security Detector | Enterprise


By Jack M. Germain

May 24, 2018 10:24 AM PT

WhiteSource on Tuesday launched its next-generation software composition analysis (SCA) technology, dubbed “Effective Usage Analysis,” with the promise that it can reduce open source vulnerability alerts by 70 percent.

The newly developed technology goes beyond reporting which components are present in the application: it provides actionable insights into how those components are being used and evaluates their impact on the security of the application.

The new solution shows which vulnerabilities are effective. For instance, it can identify which vulnerabilities get calls from the proprietary code.

It also underscores the impact of open source code on the overall security of the application and shows which vulnerabilities are ineffective. Effective Usage Analysis technology allows security and engineering teams to cut through the noise to enable correct prioritization of threats to the security of their products, according to WhiteSource CEO Rami Sass.

“Prioritization is key for managing time and limited resources. By showing security and engineering teams which vulnerable functionalities are the most critical and require their immediate attention, we are giving them the confidence to plan their operations and optimize remediation,” he said.

The company's goal is to empower businesses to develop better software by harnessing the power of open source. In its 2017 Software Composition Analysis (SCA) Wave report, Forrester recognized the company as having the best current offering.

WhiteSource's new Effective Usage Analysis offering addresses an ongoing challenge for open source developers: identifying and correcting security vulnerabilities proactively, instead of watching for or fixing problems after the fact, said Charles King, principal analyst at Pund-IT.

“That should result in applications that are more inherently secure and also improve the efficiency of developers and teams,” he told LinuxInsider. “Effective Usage Analysis appears to be a solid individual solution that is also complementary and additive to WhiteSource’s other open source security offerings.”

Open Source Imperative

As open source usage has increased, so has the number of alerts on open source components with known vulnerabilities. Security teams have become overloaded with security alerts, according to David Habusha, vice president of product at WhiteSource.

“We wanted to help security teams to prioritize the critical vulnerabilities they need to deal with first, and increase the developers’ confidence that the open source vulnerabilities they are being asked to fix are the most pressing issues that are exposing their applications to threats,” he told LinuxInsider.

The current technologies in the market are limited to detecting which vulnerable open source components are in your application, he said. They cannot provide any details on how those components are being used, or on the impact of each vulnerable functionality on the security of the application.

The new technology currently supports Java and JavaScript. The company plans to expand its capabilities to include additional programming languages. Effective Usage Analysis is currently in beta testing and will be fully available in June.

How It Works

Effective Usage Analysis promises to cut down open source vulnerability alerts dramatically by showing which vulnerabilities are effective (getting calls from the proprietary code that impact the security of the application) and which ones are ineffective.
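To make that distinction concrete, here is a purely illustrative Java sketch (the class and method names are invented for this example and are not taken from WhiteSource's product or any real CVE). An alert on the library is "effective" only if application code actually reaches the vulnerable entity:

    // Stand-in for an open source dependency with a reported vulnerability
    // in unsafeParse(); safeFormat() is unaffected.
    class VulnerableParser {
        static String unsafeParse(String input) { return input.trim(); }
        static String safeFormat(String input)  { return input.toUpperCase(); }
    }

    public class App {
        public static void main(String[] args) {
            // Effective: proprietary code calls the vulnerable method, so the
            // alert deserves high priority and a full trace to this call site.
            System.out.println(VulnerableParser.unsafeParse(" order-42 "));

            // Ineffective: if only this unaffected path were ever reached, the
            // component would still trigger a traditional SCA alert, but the
            // vulnerability could not be exploited from this application.
            System.out.println(VulnerableParser.safeFormat("order-42"));
        }
    }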

A WhiteSource internal research study on Java applications found that only 30 percent of reported alerts on open source components with known vulnerabilities originated from effective vulnerabilities and required high prioritization for remediation.

Effective Usage Analysis will also give developers actionable insights for remediating a vulnerability, providing a full trace analysis that pinpoints the path to the vulnerable code. It adds a new level of resolution for understanding which functionalities are effective.

This approach aims to reduce open source vulnerability alerts and provide actionable insights. It identifies the vulnerabilities’ exact locations in the code to enable faster, more efficient remediation.

A Better Mousetrap

Effective Usage Analysis is an innovative technology representing a radical new approach to effectiveness analysis that may be applied to a variety of use cases, said WhiteSource’s Habusha. SCA tools traditionally identify security vulnerabilities associated with an open source component by matching its calculated digital signature with an entry stored in a specialized database maintained by the SCA vendor.

SCA tools retrieve data for that entry based on reported vulnerabilities in repositories such as the NVD, the U.S. government repository of standards-based vulnerabilities.

“While the traditional approach can identify open source components for which security vulnerabilities are reported, it does not establish if the customer’s proprietary code actually references — explicitly or implicitly — entities reported as vulnerable in such components,” said Habusha.

WhiteSource’s new product is an added component that targets both security professionals and developers. It helps application security professionals prioritize their security alerts and quickly detect the critical problems that demand their immediate attention.

It helps developers by mapping the path from their proprietary code to the vulnerable open source functionality, providing insights into how they are using the vulnerable functionality and how the issues can be fixed.

Different Bait

Effective Usage Analysis employs a new scanning process that includes the following steps:

  • Scanning customer code;
  • Analyzing how the code interacts with open source components;
  • Indicating if reported vulnerabilities are effectively referenced by such code; and
  • Identifying where that happens.

It employs a combination of advanced algorithms, a comprehensive knowledge base, and a fresh new user interface to accomplish those tasks. Effective Usage Analysis enables customers to establish whether reported vulnerabilities constitute a real risk.

“That allows for a significant potential reduction in development efforts and higher development process efficiency,” said Habusha.

Potential Silver Bullet

WhiteSource's new solution has the potential to be a better detection tool for open source vulnerabilities, suggested Avi Chesla, CTO of Empow Cyber Security. The new detection tools will allow developers to understand the potential risk associated with the vulnerabilities.

The tools “will ultimately motivate developers to fix them before releasing a new version. Or at least release a version with known risks that will allow the users to effectively manage the risks through external security tools and controls,” he told LinuxInsider.

The new approach matters because long-standing vulnerabilities are, and should be, known to the industry, Chesla explained, so security tools have a better chance of detecting exploitation attempts against them.

Effective Usage Analysis is probably the most important factor because developers are flooded with alerts, most of which are noise. Separating the signal from that noise is time-consuming and requires cybersecurity expertise, noted Chesla.

The “true” signals are the alerts that represent a vulnerability that actually can be exploited and lead to a real security breach. The cybersecurity market deals with this issue on a daily basis.

“Security analysts are flooded with logs and alerts coming from security tools and experience a similar challenge to identify which alerts represent a real attack intent in time,” Chesla pointed out.

Equifax Factor

The major vulnerability that compromised Equifax last year sent security experts and software devs scrambling for effective fixes. However, it is often a business decision, rather than a security solution, that most influences software decisions, suggested Ed Price, director of compliance and senior solution architect at Devbridge Group.

“Any tools that make it easier for the engineering team to react and make the code more secure are a value-add,” he told LinuxInsider.

In some cases, the upgrade of a single library, which then cascades down the dependency tree, will create a monumental task that cannot be fixed in a single sprint or a reasonable timeframe, Price added.

“In many cases, the decision is taken out of the hands of the engineering team and business takes on the risk of deploying code without the fixes and living with the risk,” Price said, adding that no tool — open source or otherwise — will change this business decision.

“Typically, this behavior will only change in an organization once an ‘Equifax event’ occurs and there is a penalty in some form to the business,” he noted.

Saving Code Writers’ Faces

WhiteSource's new tool is another market entry that aims to make sense of the interconnected technologies used in enterprise environments, suggested Chris Roberts, chief security architect at Acalvio.

“The simple fact of the matter is, we willingly use code that others have written, cobbling things together in an ever increasingly complex puzzle of collaborative code bases,” he told LinuxInsider, “and then we wonder why the researchers and criminals can find avenues in. It is good to see someone working hard to address these issues.”

The technologies will help if people both pay attention and learn from the mistakes being made. It is an if/and situation, Roberts said.

The logic is as follows: *If* I find a new tool that helps me understand the millions of lines of code I have to manage or build as part of a project, *and* I accept that the number of errors per 100 lines of code is still unacceptable, then a technology that unravels those complexities, dependencies, and libraries is going to help, he explained.

“We need to use it as a learning tool and not another crutch or Band-Aid to further mask the garbage we are selling to people,” Roberts said.

Necessary Path

Hackers love open source software security vulnerabilities because they are a road map for exploiting unpatched systems, observed Tae-Jin Kang, CEO of Insignary. Given that the number of vulnerabilities hit a record in 2017, according to the CVE database, finding the vulnerabilities is the best first line of defense.

“Once they are found in the code and patched, then it is appropriate to begin leveraging technologies to deal with higher-order, zero-day issues,” Kang told LinuxInsider.

Organizations for years have looked to push back the day of reckoning with regard to OSS security vulnerabilities. Those vulnerabilities have been viewed as trivial, while engineering debt has piled up.

“Equifax has been the clearest illustration of what happens when these two trends meet,” said Kang. “With the implementation of GDPR rules, businesses need to get more aggressive about uncovering and patching security vulnerabilities, because the European Union’s penalties have teeth.”


Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.




