On the Road to a Fabric Infrastructure Reality | IT Infrastructure Advice, Discussion, Community


Although the industry aspires to build an NVMe over Fabrics (NVMe-oF) infrastructure – one built on “a set of compute, NVMe flash storage, memory, and I/O components joined through a fabric interconnect and the software to configure and manage them” – organizations are just starting to shift their IT efforts toward this transition. In 2019 we should see the first step on the trajectory toward fabric infrastructure, including fabric-attached memory, with the widespread adoption of fabric-attached storage. This may seem like a small step, but the commitment to fabric-attached storage means we are taking the necessary steps, as an industry, to ensure all components are connected with one another, allowing compute to move closer to where data is stored rather than data remaining several steps removed from compute.

A Fork in the Road, Architecturally Speaking

Essentially, we’re at a fork in the road: general-purpose processors and infrastructures are failing to meet the demands of data-intensive applications and data-driven environments because of their fixed ratio of compute, storage, and network bandwidth resources. IT teams are trying to build flexible infrastructures out of these traditional, rigid building blocks.

To deliver the flexibility and predictable performance today’s data centers require, a new architectural approach has emerged in which compute, storage, and network are disaggregated into shared resource pools and treated as services. The trend toward ‘composable’ architectures refers to the ability to make these resources available on the fly and compose a virtual application environment with the performance required to support workload demands.

At the same time, companies need to analyze their workloads more closely and identify where there are inefficiencies. How can they implement or best optimize their resources to unlock the potential of their data? In workloads such as AI, for example, there may be less number crunching and more data analysis, which calls for a very different architecture than a standard general-purpose processor with memory and storage attached. As companies think about how to optimize the tasks at hand, different architectures and ideas come into play, and IT is moving away from solving problems the way it did in the past.

Green Light on a New Approach

As big data and fast data applications create ever more extreme workloads, purpose-built architectures will be required to pick up where today’s general-purpose architectures reach their limits. Applications that require analytics, machine learning, artificial intelligence, and smart systems demand purpose-built architectures. Key to making this evolution happen is embracing open standard interfaces for both the disaggregated hardware elements and the software required to orchestrate them.

The first step in achieving this composability is the disaggregation of storage, compute, and networking resources. NVMe-oF allows flash storage to be disaggregated from the server, making that storage widely available to multiple applications and servers. Connecting storage nodes over a fabric is important because it provides multiple paths to a given storage resource, and exposing hardware at a finer granularity enables higher utilization.
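The utilization benefit can be made concrete with a small sketch. The capacities and workload sizes below are invented for illustration: with direct-attached storage, a workload larger than any single server’s local flash cannot be placed even though the total capacity across servers is more than sufficient, while a fabric-attached pool can serve it.

```python
# Hypothetical comparison of direct-attached vs. fabric-attached (pooled) flash.
# All capacities are illustrative, in terabytes.

SERVER_LOCAL_TB = 10          # each server has 10 TB of direct-attached flash
SERVERS = 4                   # four servers, 40 TB in total
workloads_tb = [2, 4, 12, 6]  # storage demand per workload

# Direct-attached: each workload must fit within one server's local storage.
placeable_locally = [w for w in workloads_tb if w <= SERVER_LOCAL_TB]

# Fabric-attached: workloads draw from one shared pool, so only the
# remaining pool capacity matters, not any single server's limit.
def place_on_pool(workloads, pool_tb):
    placed = []
    for w in workloads:
        if w <= pool_tb:
            pool_tb -= w
            placed.append(w)
    return placed

placeable_on_fabric = place_on_pool(workloads_tb, SERVER_LOCAL_TB * SERVERS)

print(placeable_locally)    # the 12 TB workload is stranded: [2, 4, 6]
print(placeable_on_fabric)  # the shared pool serves all four: [2, 4, 12, 6]
```

The same total capacity goes further when it is pooled, which is the utilization argument for disaggregation in miniature.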

The second step in delivering a Composable Disaggregated Infrastructure (CDI) is the adoption of standard APIs – such as DMTF’s Redfish® and SNIA’s Swordfish™ – to dynamically assign resources when needed.
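As a rough illustration of what such APIs look like on the wire, the sketch below parses a Redfish-style JSON payload. The shape loosely follows the Redfish Composition Service’s resource-block collection, but the member names and values are invented for illustration; consult the DMTF Redfish specification for the real schema.

```python
import json

# Hypothetical Redfish-style response from a composable-infrastructure manager,
# loosely modeled on GET /redfish/v1/CompositionService/ResourceBlocks.
# The member names and values below are invented for illustration.
sample_response = json.loads("""
{
  "@odata.id": "/redfish/v1/CompositionService/ResourceBlocks",
  "Name": "Resource Block Collection",
  "Members@odata.count": 3,
  "Members": [
    {"@odata.id": "/redfish/v1/CompositionService/ResourceBlocks/ComputeBlock1"},
    {"@odata.id": "/redfish/v1/CompositionService/ResourceBlocks/StorageBlock1"},
    {"@odata.id": "/redfish/v1/CompositionService/ResourceBlocks/StorageBlock2"}
  ]
}
""")

# A composition client walks the collection to find blocks it can bind into a
# logical server; here we simply extract each member's URI and filter storage.
block_uris = [m["@odata.id"] for m in sample_response["Members"]]
storage_blocks = [u for u in block_uris if "StorageBlock" in u]

print(len(block_uris))   # 3
print(storage_blocks)
```

In a live system the payload would come from an HTTP GET against the manager’s Redfish service rather than an inline string, but the client-side pattern of walking a collection of resource blocks is the same.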

The new architecture enables customers to adapt to changing workloads. Capacity and performance can be added independently, reducing cost and complexity. Multiple applications can be served with a common storage pool, which improves capacity utilization and reduces isolated silos of storage.

Looking Ahead: The Future of Data Infrastructure

Innovative companies are leveraging open frameworks such as composable infrastructure to forge a path toward making fabric-based infrastructures a reality.

Steps are being taken today to develop frameworks in which storage, compute, and networking resources can scale independently. Software orchestrates these resource pools into logical application servers on the fly. This allows storage to be disaggregated from compute so that applications can share a common pool of storage capacity: data can easily be shared between applications, and needed capacity can be allocated to an application regardless of physical location, making these environments highly configurable.

Change doesn’t happen overnight. It’s an evolution, not a revolution, and it will take some time for these functions and architectures to take shape. However, these early innovations are paving the way toward making fabric-based infrastructures a reality.


