
Stale Open Source Code Rampant in Commercial Software: Report


Organizations, regardless of industry, must do a better job maintaining open source components given their critical nature in software, according to this year’s risk analysis report by cybersecurity firm Synopsys.

Open source software is now the foundation for the vast majority of applications across all industries. But many of those industries are struggling to manage open source risk.

Synopsys released the 2021 Open Source Security and Risk Analysis (OSSRA) report on April 13. The report examines open source audit results, including usage trends and best practices across commercial applications.

Researchers analyzed more than 1,500 commercial codebases and found that open source security, license compliance, and maintenance issues are pervasive in every industry sector. The report highlights trends in open source usage within commercial applications and provides insights to help commercial and open source developers better understand the interconnected software ecosystem.

Consider that all the companies audited in the marketing tech industry sector had open source in their codebases. These include major software platforms used for lead generation, CRM, and social media. Ninety-five percent of those codebases contained open source vulnerabilities.

“That more than 90 percent of the codebases were using open source with no development activity in the past two years is not surprising,” said Tim Mackey, principal security strategist with the Synopsys Cybersecurity Research Center.

Risk Factors Widen

The Synopsys report details the pervasive risks posed by unmanaged open source code. These risks range from security vulnerabilities, to outdated or abandoned components, to license compliance issues.

“Unlike commercial software, where vendors can push information to their users, open source relies on community engagement to thrive. When an open source component is adopted into a commercial offering without that engagement, project vitality can easily wane,” Mackey explained.

Orphaned projects are not a new problem. When they occur, addressing security issues becomes that much more difficult. The solution is a simple one — invest in supporting those projects you depend upon for your success, he added.


Open source risk trends identified in the 2021 OSSRA report reveal that outdated open source components in commercial software are the norm. A hefty 85 percent of the codebases contained open source dependencies that were more than four years out of date.

One of the most significant takeaways from this year’s report was the predominant growth of orphaned open source code, according to Fred Bals, senior researcher, Synopsys Cybersecurity Research Center.

“An alarming 91 percent of the codebases we audited contained open source that had no development activity in the last two years — meaning no code improvements and no security fixes,” he told LinuxInsider. “Orphaned open source is a significant and growing problem.”

Differences Matter

Unlike abandoned projects, outdated open source components have active developer communities that publish updates and security patches that are not being applied by their downstream commercial consumers, according to Mackey.

Beyond the obvious security implications of neglecting to apply patches, the use of outdated open source components can contribute to unwieldy technical debt. That debt comes in the form of functionality and compatibility issues associated with future updates.

The prevalence of open source vulnerabilities is trending in the wrong direction, according to researchers. In 2020, the percentage of codebases containing vulnerable open source components rose to 84 percent, a nine percent increase from 2019.

Similarly, the percentage of codebases containing high-risk vulnerabilities jumped from 49 percent to 60 percent. Several of the top 10 open source vulnerabilities found in codebases in 2019 reappeared in the 2020 audits with significant percentage increases.

Over 90 percent of the audited codebases contained open source components with license conflicts, customized licenses, or no license at all. Another factor is that 65 percent of the codebases audited in 2020 contained open source software license conflicts, typically involving the GNU General Public License, according to the report.

Synopsys 2021 Open Source Security & Risk Analysis Report

At least 26 percent of the codebases were using open source with no license or a customized license. All three issues often need to be evaluated for potential intellectual property infringement and other legal concerns, especially in the context of merger and acquisition transactions, researchers noted.

Sector Breakouts

All of the companies audited in the marketing tech category — which includes lead-generation, CRM, and social media — contained open source in their codebases. Almost all of them (95 percent) had open source vulnerabilities.

Researchers found comparable figures in the audited codebases of the retail, financial services, and healthcare sectors, according to Bals.

In the healthcare sector, 98 percent of the codebases contained open source. Within those codebases 67 percent contained vulnerabilities.

In the financial services/fintech sector 97 percent of the codebases contained open source. Over 60 percent of those codebases contained vulnerabilities.

In the retail and e-commerce sector, 92 percent of codebases contained open source, and 71 percent of the codebases contained vulnerabilities.

Changing Times

In 2020 the percentage of codebases containing high-risk vulnerabilities jumped from 49 to 60 percent. What was more disturbing is that several of the top 10 open source vulnerabilities found in 2019 codebases reappeared in the 2020 audits, all with significant percentage increases, observed Bals.

“When you look at the industry breakdowns, there is an indication that the increase in vulnerabilities may be at least partly due to the pandemic and the significant increase in the use of marketing, retail, and customer relationship technologies,” he explained.

Open source is by-and-large safe, Bals insisted. It is the unmanaged use of open source that creates the issue.


“Developers and the businesses behind them need to treat the open source they use in the same way as the code they write themselves. That means creating and maintaining a comprehensive inventory of the open source their software uses, getting accurate information on vulnerability severity and exploitability, and having a clear direction on how to patch the affected open source,” he said.
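
In practice, the inventory step is the easiest of the three to automate. Below is a minimal sketch, assuming a Python project that pins its dependencies in a pip-style requirements.txt; a full software bill of materials would cover every ecosystem in the codebase, but even this much beats having no inventory at all.

```python
# Minimal open source inventory sketch. Assumes exact pins ("name==version")
# in a pip-style requirements.txt; a real SBOM tool would cover every
# package ecosystem used by the codebase.
import csv
import sys
from pathlib import Path


def read_requirements(path):
    """Yield (name, version) pairs for exact-pinned requirements."""
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if line and "==" in line:
            name, version = line.split("==", 1)
            yield name.strip(), version.strip()


def write_inventory(requirements_file, out_file="inventory.csv"):
    """Write a simple CSV inventory of the project's open source components."""
    with open(out_file, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["component", "version"])
        writer.writerows(read_requirements(requirements_file))


if __name__ == "__main__":
    write_inventory(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
```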

Not too long ago commercial vendors referred to open source as “snake oil” and even as a disease, noted Bals. Many commercial companies even banned their developers from using open source.

Happily, those days are over. You would be hard-pressed today to find an application that does not depend on open source, he countered.

“But open source management has not yet caught up with open source use. Many development teams are still using manual processes like spreadsheets to track open source. There is now much too much open source to track without automating the process,” he added.
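
To give a sense of what that automation can look like, here is a minimal sketch that flags components whose most recent release is more than two years old, echoing the report's orphaned-code threshold. It assumes Python dependencies and PyPI's public JSON metadata API; other ecosystems would need their own data sources.

```python
# Staleness check sketch: flag packages with no release in the last two years.
# Assumes PyPI's JSON API at https://pypi.org/pypi/<name>/json; adjust the
# package list (or feed it from the inventory above) as needed.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=2 * 365)  # mirrors the report's two-year window


def latest_release_date(package):
    """Return the upload time of the newest release on PyPI, or None."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    return max(uploads, default=None)


def flag_stale(packages):
    now = datetime.now(timezone.utc)
    for package in packages:
        last = latest_release_date(package)
        if last is None or now - last > STALE_AFTER:
            print(f"STALE: {package} (last release: {last})")


if __name__ == "__main__":
    flag_stale(["requests", "flask"])  # hypothetical inventory entries
```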




The Rise of Open Source: Pandemic, Economy, Efficiency, Trust


Those familiar with open source know that it works and comes with many benefits. A testament to the rising adoption of open source is the recent moves by software giants such as Microsoft, IBM, and Oracle into the open-source community.

This corporate migration to open source is continuing as many organizations, both large and small, turn to open source in tough economic times. Amid the continuing adjustments in staffing and operations the pandemic is causing, open source is helping enterprises and industries reduce costs and improve their ability to innovate.

A recent survey by Tidelift found that 68 percent of organizations recently turned to open source during the economic downturn to help them save time and money.

While Big Tech companies have the resources they need to succeed, this raises the question for many smaller organizations and development teams considering open source: how can they leverage its many benefits in order to be successful?

One way to manage a migration to open-source technology is using a management platform that monitors the various components in use. As open-source use continues to expand, so have software companies that focus on developing management platforms that offer a complete solution for maintaining open-source components backed by project maintainers.

Organizations are quickly learning that the developer community has a strong affinity for and loyalty to open source, according to Todd Moore, vice president of open tech at IBM. With that knowledge comes the realization that the more open they are to embracing open source in their own development, the better chances they’ll have of recruiting and retaining the top developer talent.

“We’ve seen large organizations come around to embracing open source more than ever in this last decade, and we expect that to increase as it becomes an even more pivotal part of software development,” he told LinuxInsider.

Growing Reputation

A new survey by O’Reilly Media and IBM reveals some accolades for open source that no doubt reflects its continuing adoption. The survey polled 3,400 developers and technology managers in the fall of 2020. The survey found:

  • Open-source software was rated equal to or better than proprietary software by 94 percent of respondents.
  • When choosing cloud providers, 70 percent of respondents prefer one based on open source.
  • 64.6 percent of respondents preferred skills related to the underlying open-source technologies (such as Linux and Kubernetes), while 35.4 percent preferred skills related to a specific cloud platform (i.e., AWS, Azure, or Google).
  • 65 percent of respondents agree completely that contributions to open source projects impress potential employers and result in better professional opportunities.

Organizations encourage the use of open source because they understand that they often get a lot of value at no cost compared with buying commercial solutions or developing something entirely in-house, according to Odysseas Lamtzidis, developer relations/advocate at Netdata.

“It is often possible that certain needs can be completely covered by open source solutions,” he told LinuxInsider.

Open Source by the Numbers

In June of 2020, Tidelift conducted its annual managed open-source survey of technologists. Over 600 people shared how they use open-source software today and what influenced the migration.

This survey confirmed what many open-source adopters already experienced. That is, in tough economic times, open source helps companies save money. Even in better economic times, open source contributes to better productivity. Clearly, the COVID-19 pandemic and ensuing recession are changing the way respondents’ organizations think about and use open source.

One key finding Tidelift found is that open-source use is rising during the COVID-19 recession. That finding seems to support a trend in which open-source software saves money on development costs and corporate purchasing expenses.


Forty-two percent of respondents said their organization’s application development budget was cut because of the economic downturn. Only 10 percent said spending had increased. That budgeting adjustment led to a commitment by 60 percent of the responding organizations to use more open-source applications.

Encouragement of open source is even more likely (60 percent) among organizations cutting budgets due to the economy. Interestingly, use of more open source is also being encouraged at organizations with rising app development spending, according to Tidelift.

More Results

The Tidelift report also solidified the recognized benefits of using open-source code instead of proprietary solutions. More than two-thirds of respondents (68 percent) said open source helps them save money and development time by using existing open-source components versus writing new code.

Efficiency was another key factor highlighted in the Tidelift survey results.

Forty-eight percent of respondents reported increased efficiency of application development and maintenance as a key reason for their open-source use. Yet, organizations with more than 1,000 employees were more likely to cite efficiency (61 percent vs. 41 percent for organizations under 1,000 employees) as a reason for encouraging the use of more open source.

Organization size also correlates with stronger support for open-source use overall.

Protection from vendor lock-in was a third prominent benefit of using more open source over more costly proprietary applications, according to 40 percent of respondents. The report noted that half of the responding organizations with over 1,000 employees cited protection from vendor lock-in, compared to 37 percent for organizations with under 1,000 employees.

In the Clouds

Enterprise users adopt open source either directly from community distributions or indirectly via commercial offerings. They contribute back to the communities to make improvements, drive enhancements, or improve skills, observed IBM’s Moore.

“Because so many companies are moving their workloads to the cloud, enterprise developers are embracing open-source container frameworks like Kubernetes and OpenShift, which has led to an explosion of open-source adoption in the past few years,” he said.

Additionally, many clouds run on a Linux operating system, so new adopters are often embracing Linux as well. A recent O’Reilly survey commissioned by IBM indicates that nearly 95 percent of the 3,400 developers and IT managers surveyed considered Linux important to their career, while 90 percent of them considered containers to be important to their careers, Moore explained.

Over the last year of the pandemic, organizations accelerated their move to the cloud. This move to the cloud is the bigger driver in the adoption of open source, particularly tooling and frameworks to manage these new cloud environments, according to John Kinsella, chief architect at Accurics.

“We are also seeing organizations get more sophisticated in how they run DevSecOps in cloud environments,” he told LinuxInsider.

Open Source in Demand

Companies view open-source software as a great way to be flexible and avoid possible costly vendor lock-in, noted Netdata’s Lamtzidis. He also sees some good arguments that usually make the case for a commercial project to use open-source technologies.

“Faster time-to-market along with increased security are important considerations. Open-source projects are usually audited by many different contributors, leading to increased code quality and no secret backdoors or vulnerabilities,” he said.


Open source can be a great cost optimizer for certain businesses, he continued. It is cheaper to have a business running on Raspberry Pi and Linux than on proprietary Windows machines.

“We are seeing this in a number of schools which have replaced their aging computers with cheap, open-source alternatives, such as the Raspberry Pi. Likewise, many companies are looking to use open source as a great way to decrease costs, which is critical given the unusually high uncertainty due to the pandemic,” said Lamtzidi.

Security Factors

In 2019, over 16,000 vulnerabilities were disclosed across proprietary and open-source software. Over 1,000 of those were scored critical, according to Jennifer Fernick, global head of research at NCC Group.

Computer security experts are quick to point out that all computing platforms are vulnerable in varying degrees. Linux and open source are nonetheless regarded as more rigorously reviewed and quicker to receive fixes when problems are discovered.

Vulnerabilities are not rare, and both CVE metrics and reasoning through the increased digitalization of our world give us strong reasons to believe that this problem is only going to get worse, Fernick reasoned.

“Open-source software is a significant part of the core infrastructure in most enterprises in most sectors around the world and is foundational to the Internet as we know it. Consequently, it represents a massive and profoundly valuable attack surface,” she told LinuxInsider.

Many of the best things about open-source development invite unique security challenges to overcome. Fernick noted that what is needed to make open source more secure than proprietary software includes:

  • Articulating a cohesive threat model of the open-source ecosystem;
  • A shared, data-driven identification of the world’s most critical open-source projects;
  • Funding for security improvements, audits, and research;
  • Interventions to prevent vulnerabilities in the first place;
  • Continued research and open-source tool development to scalably find as many vulnerabilities as possible in a codebase in a repeatable and automated way (see the sketch after this list).
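
As a heavily simplified illustration of that last point, the sketch below looks up known advisories for individual package versions. It assumes the public OSV.dev query endpoint (https://api.osv.dev/v1/query) and its JSON request format; the packages named are hypothetical examples, and a production pipeline would batch these lookups and wire the results into CI gates.

```python
# Repeatable, automated vulnerability lookup sketch using the OSV.dev API
# (an assumption noted above). One request per package version.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name, version, ecosystem="PyPI"):
    """Return the list of OSV advisories recorded for one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response).get("vulns", [])


if __name__ == "__main__":
    # Hypothetical inventory entries; in practice these come from the SBOM.
    for name, version in [("jinja2", "2.4.1"), ("requests", "2.25.1")]:
        advisories = known_vulnerabilities(name, version)
        ids = ", ".join(v["id"] for v in advisories) or "none found"
        print(f"{name}=={version}: {ids}")
```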

A good portion of continued open-source growth is based on trust in the modern open-source community, noted Accurics’s Kinsella. That includes, to a large degree, how the communities respond to security issues.

“In 2021, this definition of trust may change as we start to expect binaries to be signed and security of the software supply chain to become more commonplace,” he said.
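
Verifying what you download is the most visible piece of that shift. Full signing workflows rely on tools such as GPG or Sigstore; the minimal sketch below covers only the simpler half, checking a downloaded artifact against a SHA-256 digest published through a separate, trusted channel. The file path and digest are supplied by the user and are not tied to any real project.

```python
# Minimal supply-chain hygiene sketch: verify a downloaded artifact against a
# SHA-256 digest obtained out of band. Cryptographic signatures (GPG,
# Sigstore, etc.) sit on top of this and are not shown here.
import hashlib
import sys


def sha256_of(path):
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path, expected_digest):
    """Return True if the file's digest matches the published value."""
    return sha256_of(path) == expected_digest.lower().strip()


if __name__ == "__main__":
    # Usage: python verify_artifact.py <file> <expected-sha256>
    artifact, expected = sys.argv[1], sys.argv[2]
    print("OK" if verify(artifact, expected) else "MISMATCH")
```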




Once the Big Tech Battler, Open Source Is Now Big Tech’s Battleground


A sage guru by the name of Yogi Berra imparted some words of wisdom that I periodically revisit: “In theory there is no difference between theory and practice, while in practice there is.” Though intended for comedic effect, it rings true (as all good comedy does).

It’s easy to get so wrapped up in what we believe something to be, or should be, that we mistake it for reality. The more invested one is in a theory, the closer theory and practice can appear.

For its many devotees, open-source software is a paradigm of unparalleled beauty. Chief among its charms is that open-source software is of, by, and for the community. This is why large, traditional proprietary software companies assumed their battle stations when open source appeared on the radar: if users could collaborate to make their own software, who would pay money for theirs?

Roughly 30 years later, open source did not lay the tech giants low, as they feared. It played out that way because, after seeing what open source could do, rather than distancing themselves from it, many traditional tech powers lined up to grab a piece of the open-source pie. The cozying up didn’t happen all at once, but brick by brick, open source rose from a foundation to a towering edifice.

So why am I discussing this now? Not to dispense an open-source history lesson — there are plenty of those — but to discern the point open source has reached, and extrapolate its trajectory from here, in light of recent indicative developments.

Open Source Meets Open Arms

Let’s first tackle the now, and then address the future (seems sensible, right?).

Last month, Google and Microsoft led a cadre of tech companies in creating the Rust Foundation. Obviously, this is neither the first nor largest contribution to an open-source project by private tech vendors. The Linux kernel has been flush with cash from the most dominant tech companies out there for many years.

Still, the creation of this new body marks another noteworthy instance in which proprietary software companies took the initiative to found and steward a nonprofit project. It’s not groundbreaking, but it doesn’t happen every day.

The key difference between the birth of the predecessor organizations that would merge into The Linux Foundation and that of the nascent Rust Foundation is context. In essence, Big Tech is comfortable with open source now.

Today, dozens of open-source projects, such as FreeBSD and Chromium, enjoy the Linux treatment, running on donations from tech companies valued in the billions; and when companies want a closer relationship than patronage, they’re fine with buying up open-source companies, as IBM did with Red Hat a few years ago.


Big Tech companies not only fund open source, but actually develop open source. It’s common to see pages at the “opensource” subdomain of major tech company websites. Microsoft, Google, and Facebook, among many others, all have such pages.

Follow a few links and you can get from any of them to actual source code released by otherwise proprietary developers. In some cases, proprietary tech companies have gone as far as handing off their software completely. When Google announced in January that it was abandoning its Tilt Brush virtual reality painting software, it simultaneously handed it to the open-source community to keep alive.

Against this backdrop, you’d be hard-pressed to argue that the atmosphere between for-profit tech companies and open source is anything other than convivial.

We Need to Have the Relationship Talk

For open-source software users, robust independent cash flows mean they can enjoy a project’s work even if they can’t kick in money to fund it. But that’s not why corporations write checks. To avoid assuming that the reason is obvious, let’s take a minute to grasp the incentive dynamics at work, taking the Linux kernel as an example.

Google’s choice of the Linux kernel when designing Android and Chrome OS was pragmatic. By then, Linux was already able to run on a wide range of hardware. Moreover, it had proven to be a viable frame on which to build profitable products for other companies.

But Linux gave Google more than a solid base. It also yielded significant cost savings. Google could have amassed the talent to write a kernel in-house, but why do that when it could let the Linux devs write a kernel and contribute cash and code to it as needed?


Under this latter model, Google has all the benefits of a battle-tested kernel, but with Google devs freed up to add to preexisting work instead of banging out a kernel from scratch. The annual donation it sends to Linux probably funds more total developers (between the Linux project and its own kernel customization team) than if it spent the same amount completely internally.

Google is just one of many companies that recoup their investment in Linux. A similar cost-benefit calculus is likely at play in Microsoft et al. establishing and underwriting the Rust Foundation. Although Microsoft primarily writes its products in C and derivatives, the company is seriously experimenting with Rust. Cofounder Google is putting money toward writing components of the Apache Web server in Rust as well.

So just as Google did with Linux, these companies are literally betting on the future of Rust. The dollars of Google, Microsoft, and their cofounders will go further backing a project that checks in code from themselves, the Rust developers, and Rust community pull requests than if spent solely within their respective headquarters.

Where Do We Go Now?

The real question is: what does this deepening trend of for-profit investment in open-source nonprofits portend for open source generally? Making predictions isn’t my strong suit, but I’ve had practice at gaming out consequences.

First, now that it’s no secret that investing in open-source yields an n-fold return, companies may start jockeying to prevent each other from assuming too much control over a project. If company X invests in project A, company Y may not want to let X be the only big-dollar contributor, and in turn may increase its own contributions. It’s the same reason why your little sister bought the last property you needed to start building houses in Monopoly.

We may also see tech players compete to get more pull requests accepted by a project than their co-contributors. Returning to our example, if X has one view of how A should develop, and Y has another incongruent one, the company that advances its vision within the project would wield a considerable edge over the other. In the case of profound architectural considerations, committing the project to your preferred mode over your competitor’s could force them to restructure or even abandon their internal projects.


Finally, there are the subtle shifts in open-source development priorities that will unconsciously result from where the concentration of funding in aggregate settles.

Because corporate funding is now a dependable means of keeping an open-source project afloat, projects may more commonly bend their development decisions toward whatever makes them most attractive to private-sector actors.

These are just the possible paths forward I perceive from where we currently sit. If my read on these dynamics does indeed play out, it will be interesting to see if the open-source community embraces them, or if they’re viewed as a threat to the spirit and ethos of open-source. In this sense, the future of open-source will be up to the open-source community to decide — as it should be.

What do you think about Big Tech’s role in open-source projects and the formation of the Rust Foundation? Please use the Reader Comments feature below to provide your input!




A Linux Safari to Classify the Genus of This Penguin


Recently, I took an interest in poking at Gentoo a bit. In the eyes of many desktop Linux users, it’s considered a rite of passage to install this historically significant distribution. I’ve scaled Mt. Arch, so Gentoo Peak is next in my Linux mountaineering.

Before I started sinking time into it, though, I wanted to see just what I would gain from the formidable task of installing Gentoo. In other words, what does Gentoo bring to the table? A lot, it turns out, but we’ll get there in time.

This curiosity sent me on a much more interesting Linux safari to explore what truly differentiates distributions. What follows is the classification field guide I wish I had when I began my Linux journey.

Spots and Colors Don’t Make the Species

In constructing our taxonomy, there are outward trappings that are tempting to include, but which actually have no bearing on the substance of a distribution. Let’s identify a few of these so we can rule them out.

We’re going to skip the amorphous categorization on the spectrum from “beginner” to “power-user” distro. Regions on this spectrum reflect the rough user demographic that a distribution attracts and don’t necessarily or directly evidence its structural makeup.

For instance, I’ve seen Manjaro characterized as a beginner distro, and I appreciate its mission to break in Arch Linux. But the fact that there’s a chance Manjaro users might need to manually downgrade its rolling release packages places it beyond what I would personally recommend to beginners. On the other hand, the most competent computer user I know, a decades-long veteran in the “tech sector,” uses Linux Mint, which I actually do recommend to Linux neophytes.

Also irrelevant to our exploration are desktop environments. It’s a mistake nearly every desktop Linux user makes at first, me included. It’s natural to take the most visible component for the distribution itself. But with few exceptions, the desktop has little bearing on what the distro really consists of. With far more distributions than desktops, it’s common for radically different distributions to have the same desktop.

Let’s Get Taxonomic!

So what does distinguish one distribution from another? Each one has its own combination of structural properties. Some of these properties are binary — merely present or absent — while others fall along a continuum.

In general, however, any point on one continuum can coexist with any point on another. With enough continuums and points on them, unique combinations are bound to emerge, each creating a distinctive user experience.

Are updates released via a rolling release model or point release model?

This is a distinction I have referenced in past articles, but merits revisiting considering its direct relevance. Under a rolling release model, each package’s maintainers release a new version for installation when they, as a team, are ready to deploy it.

They don’t delay their update to release it in unison with another team and may not even do much to harmonize their package with its sibling packages. As a result, rolling release distributions yield a “bleeding edge” experience as the fastest way to get the newest version of software short of compiling from source. It’s the sharpest part of the metaphorical “edge.”


The alternative is the point release model. With point releases, all the distribution’s package maintainers coordinate to release their updates in scheduled waves. Those packages that have new versions since the last wave will all reach the user simultaneously, whether that’s weekly, monthly, or some other interval-ly.

While this means that users may not enjoy the latest software features right away, it usually means that software is more stable. The first teams to wrap up their work have to wait on the last teams to finish, so instead of twiddling their thumbs they can polish their work.

What kernel major and minor version does it lock in?

When a distribution’s newest major release drops, the developers usually state which major and minor Linux kernel version they will take as the basis for the distribution’s kernel.

Since the kernel is instrumental to any operating system, a distro’s developers may want to closely moderate how it changes. This is so that developers can focus on maintaining a stable experience with a finalized feature set instead of scurrying to include every new upstream kernel module and patch.

Locked-in kernel versions differ not only between distros, but between release tracks within a distro. For instance, the latest update to Ubuntu’s stable LTS track, Ubuntu 20.04.2, contains Linux 5.4. By contrast, the developer’s more dynamic testing track that came out in October, Ubuntu 20.10, flaunts the more current Linux 5.8.

The general pattern is that the older the kernel version, the more stable the distribution’s customized kernel, as more time has passed since that version first debuted from the Linux devs. As a tradeoff, though, there will be fewer modules, so the older the kernel version, the shakier new hardware compatibility is.
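
To make that tradeoff concrete, here is a minimal sketch that compares the running kernel’s major.minor version against a hypothetical minimum required by a new driver; it assumes a Linux-style release string such as “5.4.0-100-generic”.

```python
# Check the locked-in kernel against an assumed minimum version needed by a
# hypothetical driver. Assumes a Linux-style release string.
import platform

REQUIRED = (5, 8)  # hypothetical minimum major.minor for the driver


def running_kernel():
    """Parse major.minor from a release string such as '5.4.0-100-generic'."""
    major, minor, *_ = platform.release().split(".")
    return int(major), int(minor)


if __name__ == "__main__":
    kernel = running_kernel()
    verdict = "meets" if kernel >= REQUIRED else "predates"
    print(f"Kernel {kernel[0]}.{kernel[1]} {verdict} the assumed "
          f"{REQUIRED[0]}.{REQUIRED[1]} requirement")
```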

Is the distribution downstream of another, or an independent project?

Not every distribution stands on its own. There’s nothing wrong with building from a foundation laid by another project — in fact, numerous highly popular distributions do just that. But when classifying distros, this is useful to consider, because it shapes the user’s experience. Essentially, a distribution is either independent, created from scratch, or downstream, taking another distro as a starting point.

Who hosts (most of) the repositories?

The repositories (commonly called “repos”) make the distribution. They’re where all the software packages are maintained and offered for installation.

While just about every distro has at least one repo of its own for holding in-house packages, not every distro hosts all the repos it uses. Rather, they point their package managers at repos hosted by a distro they are downstream from.

For example, some distros make it their narrow, but laudable, focus to smooth out the installation process for an upstream distribution without tweaking the latter’s software — such a distro hosts its own repo for the installer, and everything else is passed up the chain. Unless they build every package from source, independent distributions by definition host their own repos.


Is installation performed via a GUI installer or an interactive command line process?

In ancient times, only shell-fu practitioners could install a Linux distribution. In the modern era, installation for dozens of distros is as smooth as any user could want. There are distros out there, though, that preserve the old ways where installation is a challenge, but a rewarding one. For the most part, a distro either has a guided graphical installer or it drops you into a shell and assumes you know what you’re doing.

Is software installed from maintained packages or from source code?

Finally, a less common but critical distinction is whether a distribution’s developers bundle programs and libraries up as packages, or whether the system compiles all to-be-installed software from source code.

Most distros you’ll run across stick with packages, but a handful go the other way. In that case, instead of repos it has up-to-date directory structures full of compilation instruction files, which download and compile source code on execution. Gentoo being one of these rare few, we return to where we started.
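
For illustration only, the toy recipe runner below captures the bare skeleton of that model: fetch a source tarball, unpack it, and run the classic configure/make/install steps. The URL and build commands are hypothetical, and real systems such as Gentoo's Portage add dependency resolution, patching, sandboxing, and USE flags on top of this.

```python
# Toy illustration of what a source-based distribution automates: fetch a
# source tarball, unpack it, and compile it in place. Recipe contents are
# hypothetical; run only with a user that may install to the given prefix.
import subprocess
import tarfile
import urllib.request
from pathlib import Path

RECIPE = {
    "name": "hello",
    "src": "https://example.org/src/hello-1.0.tar.gz",  # hypothetical URL
    "steps": [["./configure", "--prefix=/usr/local"], ["make"], ["make", "install"]],
}


def build(recipe, workdir="/tmp/build"):
    """Download, unpack, and compile one package from its recipe."""
    work = Path(workdir)
    work.mkdir(parents=True, exist_ok=True)
    tarball = work / f"{recipe['name']}.tar.gz"
    urllib.request.urlretrieve(recipe["src"], tarball)  # fetch source
    with tarfile.open(tarball) as archive:
        archive.extractall(work)                        # unpack
    source_dir = next(p for p in work.iterdir() if p.is_dir())
    for step in recipe["steps"]:                        # configure/compile/install
        subprocess.run(step, cwd=source_dir, check=True)


if __name__ == "__main__":
    build(RECIPE)
```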

By no means are these all the characteristics by which one can classify a Linux distribution, but they are some of the most important and easily spotted. When considering an unfamiliar distribution, if you take the time to score it on all these metrics, you can grasp the basics of what you can expect from the day-to-day experience of running it.




Open Source Joins Efforts to Create Gene Therapies for Rare Diseases


Some 400 million patients worldwide are affected by more than 7,000 rare diseases; yet treatments for rare genetic diseases remain an underserved area. More than 95 percent of rare diseases do not have an approved treatment, and new treatments are estimated to cost more than $1 billion.

Sanath Ramesh created the RareCamp project and the OpenTreatments Foundation to enable patients to create gene therapies for rare genetic diseases and then work with their doctors and nonprofit organizations to develop drugs. The Linux Foundation, the nonprofit organization enabling mass innovation through open source, is helping those efforts succeed.

Ramesh is the father of one such patient. His two-and-one-half-year-old son was born with a rare disease, Sedaghatian-type Spondylometaphyseal Dysplasia (SSMD), which is caused by a mutation in the GPX4 gene.

He has documented some of his personal story in a YouTube video.

Ramesh, a software developer, spearheaded a two-pronged attack to fight back using open-source software. He is the founder of the OpenTreatments Foundation and the creator of RareCamp.

The OpenTreatments Foundation enables treatments for rare genetic diseases regardless of rarity and geography. The RareCamp Project provides the source code and open governance for the OpenTreatments software platform to enable patients to create gene therapies for rare genetic diseases.

The Linux Foundation hosts the project to decentralize and accelerate drug development for rare genetic diseases. The project is supported by individual contributors, as well as collaborations from companies that include Baylor College of Medicine, Castle IRB, Charles River, Columbus Children’s Foundation, GlobalGenes, Odylia Therapeutics, RARE-X and Turing.

“OpenTreatments and RareCamp decentralize drug development and empower patients, families, and other motivated individuals to create treatments for diseases they care about. We will enable the hand off of these therapies to commercial, governmental and philanthropic entities to ensure patients around the world get access to the therapies for years to come,” said Ramesh.

Open Source for the Greater Good

The RareCamp open-source project provides open governance for the software and scientific community to collaborate and create the software tools to aid in the creation of treatments for rare diseases.

The project uses the open-source JavaScript framework Next.js for the frontend, and the Amazon Web Services serverless stack — including AWS Lambda, Amazon API Gateway, and Amazon DynamoDB — to power the backend. The project also uses the open-source toolchain Serverless Framework to develop and deploy the software. The project is licensed under Apache 2.0 and available for anyone to use.
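
As a rough sketch of that serverless pattern (not the project's actual code, which is JavaScript), a Python Lambda handler sitting behind API Gateway and writing to a hypothetical DynamoDB table might look like this:

```python
# Sketch of the Lambda + API Gateway + DynamoDB pattern described above.
# The "Programs" table name and record fields are hypothetical.
import json
import uuid

import boto3

table = boto3.resource("dynamodb").Table("Programs")  # hypothetical table


def handler(event, context):
    """Create one drug-development program record from an API Gateway request."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "programId": str(uuid.uuid4()),
        "disease": body.get("disease", "unknown"),
        "stage": body.get("stage", "discovery"),
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```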


“If it’s not yet commercially viable to create treatments for rare diseases, we will take this work into our own hands with open-source software and community collaboration, [this] is the way we can do it,” said Ramesh.

“OpenTreatments and RareCamp really demonstrate how technology and collaboration can have an impact on human life. Ramesh’s vision is fueled by love for his son, technical savvy, and the desire to share what he is learning with others who can benefit. Contributing to this project was an easy decision,” Brett Andrews, RareCamp contributor and software engineer at Vendia, told LinuxInsider.

The OpenTreatments Foundation and RareCamp really represent exactly why open source and collaboration are so powerful, added Mike Dolan, executive vice president and general manager of projects at The Linux Foundation.

Creator’s View

The stark reality of dealing with his then-infant son’s diagnosis of an ultra-rare genetic condition 18 months ago drove Ramesh to seek an open-source solution where none existed in the proprietary software world. Since then, he has done a lot of work to repurpose existing drugs that have been used in other diseases to help treat his son’s condition.

Ramesh also started working on this new technology called gene replacement therapy. That technology essentially replaces a faulty gene with a good one, he explained.

“During this process, I discovered that the process of building a gene replacement therapy is applicable across a lot of diseases, not specific to my son. It is something that a lot of patient foundations are trying to master by themselves; and quite honestly, they are failing,” he said. “Just like how I would have failed if I didn’t seek help.”

Ramesh realized a lot of people could benefit from this process. So he started thinking about ways to expand and share the knowledge and expertise with other patient foundations. That led Ramesh to create RareCamp and the Open Treatments Foundation.

What’s Involved

A lot more is going on, he continued. For instance, the big stack of the biotech world and all the activities that everyone is doing do not fully describe the extent of activity from the different players in the open-source collaborative space.

“What I am trying to do is sort of integrate all of that and provide a more streamlined simpler solution specifically focused on empowering individuals that typically are not already in the biotech space. That is something that has never been done before,” he said.

Until Ramesh started these two open-source projects, nothing comparable that provided the end game he seeks was available, he agreed.


It is also a different model because the activity is not based on a company structure. The endgame is not trying to sell a product. Instead, the nonprofit foundation is trying to create a sustainable ecosystem.

“So you are taking the open-source spirit and instilling it in an industry that has never seen this before. This is a completely different problem, a new problem,” said Ramesh.

That involves surmounting whatever other hurdles appear in order to get this project going to where it is now. Many of the hurdles exist because this is a new concept, he noted.

Empowering patients to build a treatment sometimes sounds ridiculous. But in reality, instances of success and progress are growing.

Anniversary of Conference That Sparked the Idea

Building the software platform and getting the right scientific expertise aligned for the new project team were essential. Meeting those challenges at times seemed insurmountable. Pushing beyond them was part of the larger challenge of helping the patient community, Ramesh recalled.

“A lot of those challenges were primarily due to my lack of understanding of the space and secondarily, due to the lack of an established pattern in this phase, like no one has ever done this before. Which means there is always going to be a lot of resistance,” he shared.

Ramesh started having the first conversations about bringing open source into the gene therapy research field the first week of April last year. Ramesh said that occurred during a conference that happened the same week and set the stage for those conversations. It’s been a year in the making — pandemic and all. That surprisingly helped rather than hindered his efforts, he noted. Everybody was at home.

“I could reach the people that I wanted quite effectively because prior to the pandemic I would have to be flying. They would have given me four-week lead times before I could meet with them,” he explained.

Instead, all of their conferences and busy schedules were canceled. So Ramesh actually got a lot more time with people. Now he is feeling the difference because the world is starting to go back to the old normal. But in this pandemic period, Ramesh brought openness and collaboration into the biotech world.

“It is now a topic that everybody is discussing from the FDA to the NIH to academic institutions and to the biotech industry itself. Everybody’s talking about how we can bring more open collaboration to the space,” Ramesh said.


