
CXL Bring-Up Continues – More Infrastructure For Linux 5.14, “More Meat” For Linux 5.15


Intel open-source engineers continue working on the bring-up of Compute Express Link (CXL), the new open-standard interconnect built atop PCIe that aims to empower next-generation servers.

Earlier this year, with Linux 5.12, the initial Compute Express Link 2.0 support was published, focused on enabling CXL 2.0 Type-3 memory device support. That CXL kernel infrastructure work has continued since and is still led by Intel engineers.

Linux 5.14 brings another batch of infrastructure work around Compute Express Link that is mostly about the fundamentals and not too exciting for end-users. With Linux 5.15, however, there should be “more meat” around the CXL device support landing.

Intel’s Dan Williams summed up the current CXL Linux happenings via the Linux kernel mailing list:

This subsystem is still in the build-out phase as the bulk of the update is improvements to enumeration and fleshing out the device model. In terms of new features, more mailbox commands have been added to the allowed-list in support of persistent memory provisioning support targeting v5.15. The critical update from an enumeration perspective is support for the CXL Fixed Memory Window Structure that indicates to Linux which system physical address ranges decode to the CXL Host Bridges in the system. This allows the driver to detect which address ranges have been mapped by firmware and what address ranges are available for future hotplug.

So, again, mostly skeleton this round, with more meat targeting v5.15.
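The enumeration problem Williams describes, working out which parts of a fixed memory window firmware has already mapped and which remain available for future hotplug, amounts to interval subtraction over the window. The Python sketch below is illustrative only (the actual driver logic is C code inside the kernel), but it captures the idea:

```python
def available_ranges(window, mapped):
    """Given a host physical address window (start, end) and a list of
    firmware-mapped (start, end) subranges, return the gaps left over
    for future hotplug. All ranges are half-open: [start, end)."""
    free = []
    cursor = window[0]
    for start, end in sorted(mapped):
        if start > cursor:
            free.append((cursor, start))  # gap before this mapping
        cursor = max(cursor, end)
    if cursor < window[1]:
        free.append((cursor, window[1]))  # tail of the window is free
    return free
```

For example, a window with two firmware mappings in the middle yields three free ranges: before, between, and after the mapped regions.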

Compute Express Link support will debut with Intel’s Xeon Scalable “Sapphire Rapids” processors and is also rumored to be supported by AMD’s EPYC 7004 “Genoa” processors.

The Pros and Cons of Hyperconverged Infrastructure

Hyperconverged solutions are software-defined systems with tightly integrated storage, networking, and compute resources that have seen an increase in usage recently, especially as a result of the COVID-19 pandemic. Hyperconverged infrastructure combines traditional data center hardware and locally attached storage with software, using standard building blocks to replace legacy infrastructure. Those legacy systems typically consist of individual servers and separate storage networks; in a hyperconverged system, they are unified in a single platform.

For decades, IT systems have typically been built on infrastructure with server-SAN and storage networks featuring independent modules that could be updated or changed without affecting other layers. In the era of hybrid cloud computing, however, this form of infrastructure can no longer keep up with the needs of many global businesses.

An HCI solution converges the entire data center stack, including storage, networking, and virtualization. Complex and expensive legacy infrastructure is replaced by a platform running on industry-standard servers that allows businesses to start small and then scale as needed, one node at a time. More and more organizations are now seeing the intrinsic benefits of hyperconverged infrastructure: the market is expected to grow from $7.8 billion this year to over $27 billion by 2025, a compound annual growth rate of 28.1%.

Currently, hyperconverged storage is the infrastructure of choice for companies that want to remain competitive during a critical period, see continued growth, and ensure that their data centers are ready to move to cloud computing. With an HCI solution being the next logical step for many businesses, it is important to understand exactly what the positive and negative effects of transitioning to hyperconverged infrastructure are.

Benefits of hyperconverged infrastructure

There are many clear benefits of having a highly integrated and tested system for your business, including:

Scalability: In a hyperconverged infrastructure, everything is seamlessly integrated, so scaling up simply means adding another node. This adds storage capacity without configuration hassles or hardware compatibility checks. It is essentially that simple: the more nodes you have, the more capacity you gain, and that extra capacity comes without ever having to alter any management requirements.

Simplicity: The traditional way of building a data center requires bringing together a huge number of separate pieces of hardware and software, including servers, storage, networking, and management software for each. A hyperconverged storage solution removes this problem: HCI solutions fully integrate hardware, software, and systems that are ready to operate the moment they are deployed. Because everything has been designed specifically to work together, most management headaches disappear.

Expense: Hyperconverged infrastructure usually uses commodity hardware rather than special-purpose components, which significantly reduces the cost of implementation and operation. Its simplicity also means there is no need to bring in additional IT staff to manage it. Finally, costs are entirely predictable: once you know the price of adding a node and the capacity it provides, the pay-as-you-scale model makes cost calculations simple.
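That pay-as-you-scale arithmetic is simple enough to sketch in a few lines of Python. The figures below are hypothetical, not quotes from any vendor:

```python
def hci_cost(nodes, cost_per_node, capacity_per_node_tb):
    """Pay-as-you-scale: total cost and usable capacity grow linearly
    with node count, so forecasting an expansion is plain arithmetic."""
    return {
        "total_cost": nodes * cost_per_node,
        "capacity_tb": nodes * capacity_per_node_tb,
    }

# Hypothetical figures: four nodes at $25,000 each, 40 TB per node.
plan = hci_cost(nodes=4, cost_per_node=25_000, capacity_per_node_tb=40)
```

Because both outputs are linear in the node count, budgeting the next expansion is just a matter of re-running the same multiplication with a larger node count.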

Negatives of hyperconverged infrastructure

While these benefits are significant, it’s still important to be aware of potential pain points that might arise due to switching to an HCI solution. Some of these include:

Vendor Lock-In: When you use a hyperconverged infrastructure, you become reliant on a single vendor. This is why it is critical to ensure that any vendor you work with has a track record in your industry, is well-regarded, and can support your needs effectively. It is important to consult with experts before making your final decision regarding an HCI solution.

Hidden Costs: If you have not carefully considered the vendor you work with, costs can quickly begin to add up. Hidden costs can appear as some vendors may charge a higher amount for their equipment or services, while cloud costs can also increase if you are not careful. One way to prevent this from happening is to do due diligence when selecting a vendor, as well as undertake a full cost analysis of the process before agreeing to it.

Final thoughts

A hyperconverged infrastructure should not be considered a panacea for all your IT and data problems. However, the positives of a hyperconverged solution typically far outweigh the negatives when it comes to streamlining your overall operational capabilities and productivity. With companies and organizations receiving more data on an almost daily basis and analytical workloads growing constantly, we are seeing hyperconverged infrastructure quickly become a part of the ‘new business normal.’

Greg Jehs is the Director of Enterprise Engagement at Meridian.


Why Companies Are Migrating Legacy Systems to Cloud Infrastructure

At no point in time have industry leaders in IT desired a secure, cost-effective way to access their data more than they have in today’s post-COVID location-distributed world of remote work. It’s no wonder, then, that enterprises are migrating their legacy systems to the cloud with virtualization to reduce infrastructure costs and increase security while allowing their users to connect to business applications from any device at any time.

The migration of legacy systems to the cloud infrastructure typically occurs alongside specific events which, more often than not, relate to the optimization of storage resources and the acceleration of a business’s digital transformation. As the pandemic continues, though, more enterprises are likely to realize just how constrained they are with an on-premise IT infrastructure that can’t accommodate a remote workforce.

As more companies continue to migrate their legacy systems to cloud infrastructure, it’s important to examine the reasons behind their migrations as well as what they should expect once their systems are on the cloud. To that end, let’s dive into the biggest challenges that companies face as they make the move, as well as the most important tips they should bear in mind to make their transition as stress-free and seamless as they can.

Development is easier to coordinate on the cloud

It’s impossible to run successful software development without version control. These days, it’s necessary to use a version control application that provides high-view snapshots for each of your developer’s changes. As you may have already guessed, the fastest and most intuitive way to manage version control is on the cloud.

Organizations that are migrating their legacy systems to the cloud to accommodate new remote working conditions should remember to look for collaboration software that also automatically stores updates in the cloud. This is essential in the event that development teams want to easily access new changes and assign specific team members to new tasks related to their updates.

High-view snapshots can make it much easier for an entire development team to quickly understand the changes that one of their developers has recently committed. To maximize their communication regarding version control, software development teams working on the cloud need to use quality cloud-based collaboration tools that come with crucial features, including custom access that controls which team members can access what as well as tools that track project deliverables and their due dates.

Without collaboration software, even the most rigorous of version control processes can leave team members in the dark as to what’s going on with the code that they’re working on.

Above all, the most crucial consideration when choosing collaboration software is ensuring your team members can seamlessly keep each other updated on how each stage of a project is progressing.

More flexible software application testing

Third-party cloud testing has all of the infrastructure you need to start testing software in one place. It is a series of testing processes that occur in a computing environment entirely separate from your own, which makes it possible to test software applications without being beholden to a business’s budgetary and location-related constraints.

The flexibility that comes with cloud testing extends beyond cost-effective processes for software testing. Cloud testing makes it easier to constantly monitor for security vulnerabilities in the running software applications of legacy systems with the use of dynamic application security testing (DAST).

According to the security analysts from Cloud Defense, “DAST is a type of black-box application testing that can test applications while they are running. When testing an application with DAST, you don’t need to have access to the source code to find vulnerabilities.” With DAST, businesses can automatically analyze the code of their web-based applications for vulnerabilities without ever needing to access their source code on their own machines and connected to their own servers.

Additionally, cybersecurity teams can ensure that they mitigate risks of security breaches by using cloud testing to better harden applications against SQL injections, which have claimed 65% of businesses worldwide as their unfortunate victims. Mitigate threats to the application layer of your legacy system with flexible cloud testing that always targets application layer security in a carefully controlled environment.
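To make the black-box idea concrete, here is a toy Python sketch of one thing a DAST scanner does: sending injection probe strings and checking whether a response leaks a database error signature, which hints at an unsanitized input path. The probe strings and signatures below are illustrative assumptions; a real scanner such as OWASP ZAP covers far more cases and payloads.

```python
import re

# Error signatures that commonly leak into responses when unsanitized
# input reaches the database layer (illustrative, not exhaustive).
SQL_ERROR_SIGNATURES = [
    r"you have an error in your sql syntax",   # MySQL
    r"unclosed quotation mark",                # SQL Server
    r"pg::syntaxerror",                        # PostgreSQL
    r"sqlite3?\.operationalerror",             # SQLite
]

# Classic probe inputs a black-box scanner might append to parameters.
SQLI_PROBES = ["'", "' OR '1'='1", '"; DROP TABLE users; --']

def looks_injectable(response_body: str) -> bool:
    """Return True if a response body contains a database error
    signature, a common black-box hint of an injection point."""
    body = response_body.lower()
    return any(re.search(sig, body) for sig in SQL_ERROR_SIGNATURES)
```

The key point matches the Cloud Defense quote above: nothing here ever looks at source code. The scanner only observes how the running application responds to hostile input.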

Cloud storage is accessible to remote workforces

The optimization of limited storage resources is undoubtedly one of the biggest desires driving businesses’ decisions to migrate their legacy systems to the cloud. Storage that becomes exclusively cloud-based for large legacy systems makes it feasible for a distributed workforce to securely access system data when and where they need it. Not only that, but cloud-native storage solutions are pretty much mandatory in order to properly maintain security frameworks of businesses that are now working remotely.

For businesses that need the right tools to evolve security frameworks built around legacy systems in the cloud, cloud-native storage can point them in the right direction. These businesses should search for providers whose cloud-native storage solutions include permission-based rules and KMS key rotation for cloud resources, the implementation of which is essential to securely running legacy systems in the cloud in 2021.

Enforcing a key rotation policy on all cloud users provides even greater protection, and implementing role-based access is better still. Because cloud storage for legacy systems is off-site and virtual, allowing files to be accessed anywhere, day or night, some people inevitably feel that their data is vulnerable, just floating around in cyberspace, and that this compromises the integrity of their security framework.
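As a sketch of what enforcing such a rotation policy can look like, the hypothetical Python helper below flags keys older than a chosen window. The key names and the 90-day default are assumptions for illustration, not any provider's rule:

```python
from datetime import datetime, timedelta, timezone

def keys_due_for_rotation(keys, max_age_days=90, now=None):
    """Given {key_id: created_at} with timezone-aware datetimes, return
    the sorted IDs of keys older than the rotation window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return sorted(k for k, created in keys.items() if created < cutoff)
```

A scheduled job running a check like this, paired with role-based access so only the rotation service can mint replacement keys, is the kind of automation cloud-native storage providers expose.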


Legacy-to-cloud migrations are happening for a wide range of reasons and are likely to only continue in earnest as businesses adjust to post-COVID location-distributed work. The cloud offers multiple benefits for businesses that have limited resources to secure and maintain data on-premises, and especially for those whose employees may need remote access to legacy applications from different devices at different times of the day.

Migrating legacy systems to cloud infrastructure makes software application testing more flexible and cost-effective with dynamic application code testing that catches vulnerabilities in applications even while they’re in the middle of running.

This flexibility in testing positively affects software development teams, who will likely have a relatively easier time developing code and coordinating projects on the cloud in real-time. It’s essential that businesses invest in cloud storage that’s accessible to remote workforces developing code from potentially numerous and disparate locations as well.


Linux Foundation Launches Open Source Digital Infrastructure Project for Agriculture, Enables Global Collaboration Among Industry, Government and Academia

AgStack Foundation will build and sustain the global data infrastructure for food and agriculture to help scale digital transformation and address climate change, rural engagement and food and water security

SAN FRANCISCO, Calif., May 5, 2021 –  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the launch of the AgStack Foundation, the open source digital infrastructure project for the world’s agriculture ecosystem. AgStack Foundation will improve global agriculture efficiency through the creation, maintenance and enhancement of free, reusable, open and specialized digital infrastructure for data and applications.

Founding members and contributors include leaders from both the technology and agriculture industries, as well as across sectors and geographies. Members and partners include Agralogics, Call for Code, Centricity Global, Digital Green, Farm Foundation, farmOS, HPE, IBM, Mixing Bowl & Better Food Ventures, NIAB, OpenTeam, Our Sci, Produce Marketing Association, Purdue University / OATS & Agricultural Informatics Lab, the University of California Agriculture and Natural Resources (UC-ANR) and University of California Santa Barbara SmartFarm Project.

“The global Agriculture ecosystem desperately needs a digital makeover. There is too much loss of productivity and innovation due to the absence of re-usable tools and data. I’m excited to lead this community of leaders, contributors and members – from across sectors and countries – to help build this common and re-usable resource – AgStack – that will help every stakeholder in global agriculture with free and open digital tools and data,” said Sumer Johal, Executive Director of AgStack.

Thirty-three percent of all food produced is wasted, while nine percent of the people in the world are hungry or undernourished. These societal drivers are compounded by legacy technology systems that are too slow and inefficient and can’t work across the growing, more complex agricultural supply chain. AgStack will use collaboration and open source software to build the 21st-century digital infrastructure that will be a catalyst for innovation in new applications, efficiencies and scale.

AgStack consists of an open repository to create and publish models, free and easy access to public data, interoperable frameworks for cross-project use and topic-specific extensions and toolboxes. It will leverage existing technologies such as agriculture standards (AgGateway, UN-FAO, CAFA, USDA and NASA-AR); public data (Landsat, Sentinel, NOAA and Soilgrids); models (UC-ANR IPM); and open source projects like Hyperledger, Kubernetes, Open Horizon, Postgres, Django and more.

“We’re pleased to provide the forum for AgStack to be built and to grow,” said Mike Dolan, general manager and senior vice president of projects at the Linux Foundation. “It’s clear that by using open source software to standardize the digital infrastructure for agriculture, that AgStack can reduce cost, accelerate integration and enable innovation. It’s amazing to see industries like agriculture use open source principles to innovate.”

For more information about AgStack, please visit: http://www.agstack.org

Member/Partner Statements

Call for Code

“Through Call for Code and IBM’s tech-for-good programs, we’ve seen amazing grassroots innovation created by developers who build solutions to address local farming issues that affect them personally,” said Daniel Krook, IBM CTO for Call for Code. “As thriving, sustainable open source projects hosted at the Linux Foundation, applications like Agrolly and Liquid Prep have access to a strong ecosystem of partners and will be able to accelerate their impact through a shared framework of open machine learning models, data sets, libraries, message formats, and APIs such as those provided by AgStack.”

Centricity Global

“Interoperability means working together and open source has proven to be the most practical means of doing so. Centricity Global looks forward to bringing our teams, tools and applications to the AgStack community and to propelling projects that deliver meaningful value long-term,” said Drew Zabrocki, Centricity Global. “Now is the time to get things done. The docking concept at AgStack is a novel way to bring people and technology together under a common, yet sovereign framework; I see great potential for facilitating interoperability and data sovereignty in a way that delivers tangible value on the farm forward across the supply value chain.”

Digital Green

“The explosion of agri-tech innovations from large companies to startups to governments to non-profits represents a game changer for farmers in both the Global South and North.  At the same time, it’s critical that we build digital infrastructure that ensures that the impact of these changes enables the aspirations of those most marginalized and builds their resilience, particularly in the midst of climate change. We’re excited about joining hands with AgStack with videos produced by & for farmers and FarmStack, a secure data sharing protocol, that fosters community and trust and puts farmers back in the center of our food & agricultural system,” said Rikin Gandhi, Co-founder and Executive Director.

Farm Foundation

“The advancements in digital agriculture over the past 10 years have led to more data than ever before—data that can be used to inform business decisions, improve supply and demand planning and increase efficiencies across stakeholders. However, the true potential of all that data won’t be fully realized without achieving interoperability via an open source environment. Interoperable data is more valuable data, and that will lead to benefits for farmers and others throughout the food and ag value chain,” said Martha King, Vice President of Programs and Projects, Farm Foundation.


farmOS

“AgStack’s goal of creating a shared community infrastructure for agricultural datasets, models, frameworks, and tools fills a much-needed gap in the current agtech software landscape. Making these freely available to other software projects allows them to focus on their unique value and build upon the work of others. We in the farmOS community are eager to leverage these shared resources in the open source record keeping tools we are building together,” said Michael Stenta, founder and lead developer, farmOS.


Hewlett Packard Enterprise

“The world’s food supply needs digital innovation that currently faces challenges of adoption due to the lack of a common, secure, community-maintained digital infrastructure. AgStack – A Linux Foundation’s Project, is creating this much needed open source digital infrastructure for accelerating innovation. We at Hewlett Packard Enterprise are excited about contributing actionable insights and learnings to solve data challenges that this initiative can provide and we’re committed to its success!” said Janice Zdankus, VP, Innovation for Social Impact, Office of the CTO, Hewlett Packard Enterprise.

Mixing Bowl & Better Food Ventures

“There are a lot of people talking about interoperability; it is encouraging to see people jump in to develop functional tools to make it happen. We share the AgStack vision and look forward to collaborating with the community to enable interoperability at scale,” said Rob Trice, Partner, The Mixing Bowl & Better Food Ventures.


NIAB

“Climate change is a global problem and agriculture needs to do its part to reduce greenhouse gas emissions during all stages of primary production. This requires digital innovation and a common, global, community-maintained digital infrastructure to create the efficient, resilient, biodiverse and low-emissions food production systems that the world needs. These systems must draw on the best that precision agriculture has to offer and aligned innovations in crop science, linked together through open data solutions. AgStack – A Linux Foundation Project, is creating this much needed open-source digital infrastructure for accelerating innovation. NIAB are excited to join this initiative and will work to develop a platform that brings together crop and data science at scale. As the UK’s fastest growing, independent crop research organization, NIAB provides crop science, agronomy and data science expertise across a broad range of arable and horticultural crops,” said Dr Richard Harrison, Director of NIAB Cambridge Crop Research.


OpenTEAM

“Agriculture is a shared human endeavor, and global collaboration is necessary to translate our available knowledge into solutions that work on the ground to adapt to and mitigate climate change, improve livelihoods and biodiversity, and produce abundant food, fiber and energy. Agriculture is at the foundation of manufacture and commerce, and AgStack represents a collaborative effort at a scale necessary to meet the urgency of the moment and unlock our shared innovative capacity through free, reusable, open digital infrastructure. OpenTEAM is honored to join with the mission to equip producers with tools that both support data sovereignty for trusted transactions and democratize site-specific agricultural knowledge regardless of scale, culture or geography,” said Dr. Dorn Cox, project lead and founder of Open Technology Ecosystem for Agricultural Management and research director for Wolfe’s Neck Center for Agriculture & the Environment.

Our Sci

“AgStack provides a framework for a scalable base of open source software, and the shared commitment to keep it alive and growing.  We’re excited to see it succeed!” said Greg Austic, owner, Our Sci.

Produce Marketing Association

“The digitization of data will have tremendous benefits for the Fresh Produce and Floral industry in the areas of traceability, quality management, quality prediction and other efficiencies through supply chain visibility. The key challenge to adoption is interoperability and the development of a common, community-maintained digital infrastructure. I am confident that AgStack – A Linux Foundation’s Project, can create this much needed open-source digital infrastructure for accelerating innovation. We at Produce Marketing Association are excited about this initiative and we are committed to its success,” said Ed Treacy, VP of Supply Chain and Sustainability.

Purdue University

“We need fundamental technical infrastructure to enable open innovation in agriculture, including ontologies, models, and tools. Through the AgStack Project, the Linux Foundation will provide valuable cohesion and development capacity to support shared, community-maintained infrastructure. At the Agricultural Informatics Lab, we’re committed to enabling resilient food and agricultural systems through deliberate design and development of such infrastructure,” said Ankita Raturi, Assistant Professor, Agricultural Informatics Lab, Purdue University.

“True interoperability requires a big community and we’re excited to see the tools that we’ve brought to the open-source ag community benefiting new audiences.  OATS Center at Purdue University looks forward to docking the Trellis Framework for supply chain, market access and regulatory compliance through AgStack for the benefit of all,” said Aaron Ault, Co-Founder OATS Center at Purdue University.

UC Davis

“Translating 100+ years of UC agricultural research into usable digital software and applications is a critical goal in the UC partnership with the AgStack open source community. We are excited about innovators globally using UC research and applying it to their local crops through novel digital technologies,” said Glenda Humiston, VP of Agriculture and Natural Resources, University of California.

“Artificial Intelligence and Machine Learning are critical to food and agriculture transformation, and will require new computational models and massive data sets to create working technology solutions from seed to shelf. The AI Institute for Next Generation Food Systems is excited to partner with the AgStack open source community to make our work globally available to accelerate the transformation,” said Ilias Tagkopoulos, Professor, Computer Science at UC Davis and Director, AI Institute of Next Generation Food Systems.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.


The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for Linux Foundation


Investing at the Intersection of Cloud Infrastructure and Cybersecurity

Innovation precedes security. We witnessed it with automobiles, which were introduced long before seatbelts. We witnessed it with the Internet, as credit card theft has run rampant ahead of encryption and 2FA (two-factor authentication) techniques. Elon Musk might even argue that we will see it unfold yet again with the continued development of AI, or artificial intelligence.

Human tendencies can be hard to change. But we’re seeing some startups make the leap—to innovation with security rather than innovation before security. Some are focusing on issues related to cloud infrastructure, others on cybersecurity, and still others are making a splash intersecting the two.

Cloud infrastructure

The cloud continues to evolve and what an exciting transformation it is. We’re seeing a few trends here—namely:

  • The shift to hybrid cloud and multi-cloud. Organizations continue to make the jump—albeit more slowly than anticipated—to AWS (Amazon Web Services), Azure (Microsoft), and GCP (Google Cloud Platform). Granted, the complexity and difficulty inherent in this shift isn’t for everyone. As a result, many organizations have deployed hybrid cloud approaches, employing a mix of both cloud and legacy on-prem infrastructure. More sophisticated organizations have moved on to step two—a multi-cloud approach, eradicating vendor lock-in and enabling developers to use the most optimal cloud for their specific application. We will continue to see adoption here.

  • Adoption of microservices and service mesh architecture. The savviest of customers have moved beyond thinking about just the cloud and onto Google’s open-source spinout, Kubernetes (also known as k8s). Thanks to lackluster strategy and execution on monetization, folks have said goodbye to Docker and Mesosphere (now called D2iQ), branding k8s as the winner in container orchestration. But deploying k8s is no small feat. You need a provisioning tool (like HashiCorp Terraform). You need a multi-cluster management solution (like Rancher). Just like Databricks and Confluent created large and successful platforms on top of open-source Apache Spark and Apache Kafka, respectively, emerging startups are likewise looking to build simple, easy-to-use solutions on top of k8s.

  • Focus on the developer. If you went to KubeCon, you noticed many companies’ offerings are at least partially, if not entirely, open-source and often free. Developers want to move fast, stay nimble, and typically don’t have a ton of money, so the premise makes sense. Get user base and mindshare up first; achieve network effects and pricing power via an ecosystem of happy users second. However, it’s hard to make money when things are free. As a result, umbrella companies continue to form around initially open-source projects. Keep an eye on Grafana Labs (monitoring for open-source tools, such as Prometheus) and Tetrate (container-native service mesh spun out of open-source Google Istio).


Cybersecurity

We’ll say it again: when innovation progresses on the infrastructure side, security follows. An illustrative recent example is the plethora of container security vendors that emerged once containerization increased in popularity. Some trends in security we’re focused on:

  • Next-gen firewalls, or east-west security. North-south security has been a thing for as long as we can remember. Set your hard iron and firewall at the edge, procure endpoint security for your devices, and you’re good to go, right? Historically, yes. Today, no. That perimeter no longer exists. Visibility across hybrid and multi-cloud environments, policy automation and orchestration, and microsegmentation to contain attacks are direly needed as attacks inevitably make it through the front door. N/S + E/W will become the new standard.

  • Application security. We are seeing a shift of focus to the developer on the infrastructure side. And security is following as expected. Workloads are being spun up across virtual machines, containers, and clouds, and applications are being developed faster than ever, in line with the industry-wide CI/CD push. Developers want to write code faster, not fix bugs. Security teams want to fix bugs and slow down the process. A natural tension exists therein, and we are fond of the emerging solutions working to ameliorate this.

  • Brand protection. As brick and mortar continues to die out, e-commerce and online buying have proliferated. But it’s not only marketplaces and online retailers reaping the rewards; fraudsters and counterfeiters are taking a larger and larger piece of this growing pie as well. In one test by the US Government Accountability Office, two out of every five products purchased from third-party sellers turned out to be counterfeit. This is a growing problem across brands like Nike—which just broke up with Amazon—and Louis Vuitton as well as in industries like pharma. Companies that can automate detection of fake sites and products, block these avenues, and stop account takeovers should prove to be market winners.

The intersection

The last area here is the intersection of the two spaces we’ve covered in security and cloud infrastructure. We’d categorize this combination set into three buckets of companies:

Networking. The incumbents in this space have always made this intersection a notable part of their businesses. Companies like Cisco, Mellanox, Dell EMC, Arista, and Juniper Networks may ring a bell. But the more nascent entrants are following in their footsteps—and doing quite well. The commoditization of hardware, abstraction of software from hardware, pooling of resources, and enforcement of app security and policy are draws for many customers, from both a cost-savings and a capacity-gains perspective.

Storage. We’ve all heard of AWS, Azure, and GCP as the big storage vendors. But you can’t win forever, and they’ll all be disrupted sooner or later. Ideas around distributed storage and automatically identifying the best compute resource for any workload at any given time in any location are compelling. Although typically capital intensive, maniacal execution will result in large companies here. We’ve got an eye on this space.

Other. Broadcom first purchasing CA for $19 billion and then buying Symantec’s enterprise security division for $11 billion was a game changer. If you thought combining infra and security was fascinating, coupling semiconductors with security is a whole different ballgame. Cloudflare, which just went public, provides web performance management and ensures application availability, but also stops malicious bot abuse and DDoS attacks. HashiCorp secures and controls access to tokens, passwords, certificates, and encryption keys in addition to provisioning cloud infrastructure (including k8s). Although these companies are attacking disparate markets, they are similar in their attempt to build security into their offerings from the ground up.


Although many companies continue to execute within cloud infrastructure and security separately, some have decided to marry the two broader markets. In these cases, companies are introducing security from the get-go rather than as an afterthought. There’s no right or wrong, and it will always depend on the company’s overall strategy, go-to-market motion, and engineering capacity. However, the combination has proven effective in many instances and is worth being aware of nonetheless.


