Thursday, July 30, 2020

Gen-Z to Advance Memory-Driven Interconnect Fabric

Not to be confused with the demographic cohort that succeeds millennials, Gen-Z is a memory-semantic fabric architecture that’s at a point where it must better define how it fits within the greater scheme of specifications and standards, including the somewhat mature NVM Express and the emerging Compute Express Link (CXL) protocol that’s gaining traction in data centers.

Gen-Z uses memory-semantic communications to move data between memories on different components with minimal overhead. It interconnects not only memory devices but also processors and accelerators, the latter of which are becoming increasingly popular for specific use cases such as storage and artificial intelligence while taking pressure off the CPU. Ultimately, Gen-Z is about more flexibility and responsiveness in resource provisioning and sharing, allowing systems to be reconfigured as applications’ demands for different resources change.

Like many fabric architectures, Gen-Z is trying to balance the need to support and enhance existing systems with enabling the creation of new architectures. It also underscores the need to clear up any confusion as to who is doing what, said Thomas Coughlin, president of research firm Coughlin and Associates. “It is getting confusing. There are getting to be so many things that seem like they possibly could be doing some of the same stuff, but it’d be nice if they actually ironed out who’s doing what and how they’re doing it and do it together.”

Interoperability with different types of standardization activity is an important part of Gen-Z becoming mainstream, and proponents of other architectures, such as CXL, have acknowledged the need to work together.

“We’re seeing a much-needed discussion between all these different ways in which people would like to create networks and fabrics of elements like networking and storage and compute, but also accelerators,” said Coughlin. “How do we get all these things to work together? How do we create optimal networks and fabrics that allow them to work both locally and remotely in the most effective way possible?” Differentiating NVMe over Fabrics from Gen-Z is just one example of the delineations that must be made, he said.

The Gen-Z fabric was developed with a focus on accommodating continuous performance increases through the transparent aggregation of next-generation devices such as persistent memory and accelerators, as well as leveraging DRAM through composable memory. (Source: Gen-Z Consortium)

For its part, the key technical advantage being touted for Gen-Z is the ability to mix both DRAM and non-volatile memories, as well as any future persistent-memory technologies, while also reducing solution cost and complexity by using a high-bandwidth, low-latency, efficient protocol that simplifies hardware and software designs. As with any new architecture, the aim is for systems to be able to scale up without sacrificing performance for flexibility, while maintaining the mechanical compatibility that allows Gen-Z to be integrated into existing platforms, along with any necessary software compatibility.

In order for any of these architectures to gain traction, everyone must work together, which is a key driver for the Gen-Z Consortium’s memorandum of understanding (MOU) with the OpenFabrics Alliance (OFA). Its chair, Paul Grun, said the collaboration reflects the shared interests of the two groups. Gen-Z is motivated by a requirement to implement memory-like semantics across fabric topologies to support its vision for a distributed memory architecture, while the goal of OFA is to accelerate the development and adoption of new-generation fabrics for the benefit of the advanced networks ecosystem. “Clearly, Gen-Z is a next-generation fabric.” However, he said, OFA isn’t a standards body, but an enabler. “We are accelerating the development and adoption of fabrics by providing the software that’s needed to make them go.”

Advanced software for fabrics boils down to high-performance APIs and associated software for current and future high-performance computing, cloud, and enterprise data centers. Target applications and deployments are those that need efficient networking, ultra-low latencies, faster storage connectivity, scalable parallel computing, and the cloud. Grun said OFA is fabric- and vendor-agnostic. Its current focus areas are user-mode APIs, known as the libfabric APIs, which form part of its OpenFabrics Interfaces (OFI), and network management for composable networks that are heterogeneous and must be managed through a common management framework.
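Because libfabric is an existing open-source C library, a short sketch can illustrate what these user-mode APIs look like in practice. The example below is only an illustrative probe that asks libfabric to enumerate whichever fabric providers happen to be installed; it does not assume a Gen-Z-specific provider exists, and the build command reflects an assumed typical Linux setup (cc probe.c -lfabric):

    /* Illustrative sketch: enumerate available libfabric (OFI) providers.
     * No Gen-Z-specific provider is assumed to be present. */
    #include <stdio.h>
    #include <rdma/fabric.h>
    #include <rdma/fi_errno.h>

    int main(void)
    {
        struct fi_info *hints = fi_allocinfo();
        struct fi_info *info, *cur;

        /* Ask for reliable, connectionless endpoints with message support,
         * the style of interface a libfabric-enabled application would use. */
        hints->ep_attr->type = FI_EP_RDM;
        hints->caps = FI_MSG;

        int ret = fi_getinfo(FI_VERSION(1, 9), NULL, NULL, 0, hints, &info);
        if (ret) {
            fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
            return 1;
        }

        /* Print each provider and the fabric it exposes. */
        for (cur = info; cur; cur = cur->next)
            printf("provider: %s, fabric: %s\n",
                   cur->fabric_attr->prov_name, cur->fabric_attr->name);

        fi_freeinfo(info);
        fi_freeinfo(hints);
        return 0;
    }

A libfabric provider for Gen-Z, as contemplated under the MOU, would simply show up in this kind of enumeration alongside existing providers, letting applications written against the libfabric APIs use it without code changes.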

Gen-Z Consortium president and chairman Kurtis Bowman said his organization and the OFA share a mutual interest in advancing high-performance interconnects. Gen-Z’s efforts to implement memory-like semantics across fabric topologies, as part of a broader vision for a distributed memory architecture, require a sophisticated interconnect and extremely low end-to-end latencies, he said. OFA’s efforts to facilitate the development and adoption of new-generation fabrics for the benefit of the advanced networks ecosystem include Gen-Z.

The recently announced MOU will see OFA create a libfabric provider for Gen-Z to enable easy access to Gen-Z features for any libfabric-enabled application or middleware, as well as explore possible enhancements to the libfabric APIs, said Grun. Gen-Z will also be the first target for creating a solution for managing composable networks. The proposed solution will use DMTF’s Redfish standard and consist of a management framework, an “abstract” fabric manager, and fabric-specific plug-ins.
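To make the Redfish piece concrete, the following rough sketch (not part of the consortium’s announced work) shows how a client might query a Redfish service’s standard Fabrics collection over HTTPS using libcurl. The host name and credentials are hypothetical, and how Gen-Z resources would actually be exposed under that collection depends on the management framework still being defined. Build with an assumed cc fabrics.c -lcurl:

    /* Illustrative sketch: fetch the Redfish Fabrics collection as JSON.
     * The endpoint and credentials below are hypothetical. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Hypothetical fabric-manager endpoint; /redfish/v1/Fabrics is the
         * standard DMTF Redfish collection for fabric resources. */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://fabric-manager.example.com/redfish/v1/Fabrics");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:password");

        /* Skip certificate verification for this lab-style sketch only. */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);

        /* With no write callback set, the JSON response goes to stdout. */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }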

Gen-Z can be integrated into a processor without impacting the traditional memory controller. For example, a DDR memory controller would continue to independently service a portion of the processor’s address space, while Gen-Z would independently service a different portion. (Source: Gen-Z Consortium)

As a fabric, Gen-Z reflects the need for an industry-standard architecture to support things like memory, high-speed GPUs, and other devices that don’t fit well on any of the existing fabrics but need their own high-speed, low-latency, secure fabric, Bowman said. “What we’ve seen over time is that there are just too many pins associated with a DDR interface,” he said. “We actually want some democratization amongst the devices, so not everything has to go through your host CPU.”

As a memory-semantic protocol, said Bowman, Gen-Z can do simple reads and writes to a memory space and get that information back. Rather than going through the CPU, a request can go through an accelerator such as a GPU, specialty AI device, or FPGA, and can reach both local memory and memory that sits out on the Gen-Z fabric. “Memory then can be shared among the devices, either allocated to them or actually shared amongst multiple devices.”
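A conceptual sketch of what “memory semantics” means for software: once a window of fabric-attached memory is mapped into a process’s address space, it is accessed with ordinary loads and stores rather than block reads and writes. The device node below is purely hypothetical and does not reflect any real Gen-Z driver interface:

    /* Conceptual sketch of memory-semantic access, assuming a hypothetical
     * character device that exposes a window of fabric-attached memory. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical device node; no real Gen-Z driver is implied. */
        int fd = open("/dev/genz_mem0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        size_t len = 4096;
        uint64_t *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Memory semantics: a plain store and load, not a block read/write. */
        mem[0] = 0xdeadbeef;
        printf("read back: 0x%lx\n", (unsigned long)mem[0]);

        munmap(mem, len);
        close(fd);
        return 0;
    }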

So far, the Gen-Z Consortium has demonstrated that it can connect devices and shared memory while achieving extremely low latency, at rates not quite as fast as directly attached memory, said Bowman. “One of the demonstrations we did showed five times lower latency going to a Gen-Z connected memory device than going out to some of the fastest NVMe devices.” Right now, he said, there are two ways to connect to Gen-Z: one is to have a native interface in the endpoint device, and the other is using FPGAs that go directly to a Gen-Z interface.

Of course, Gen-Z not only needs to fit with OFA efforts, but also with other efforts such as the fledgling CXL and the maturing NVMe over Fabrics. But as Grun notes, it would be too cost-prohibitive for a single company to do any of these fabrics on its own. Just as memory and networking can no longer be treated as standalone concerns, all these fabrics need to be threaded together. “I see it kind of as this big tapestry with a lot of important threads.”

