00:00 Kurtis Bowman: All right, I want to go ahead and get started. I'd like to welcome you all first to this session. My name is Kurtis Bowman, I'm the president of Gen-Z Consortium. I'm also the secretary on the CXL Consortium board. And today we're going to be talking about these new high-speed interfaces -- where they fit and how they work together.
And I am delighted to have four very distinguished panelists joining me today, where we're going to talk about the way these two consortiums have come together to really make a memory-centric architecture work. And so, with that, let me have the four panelists introduce themselves. Larrie, let's start with you.
00:42 Larrie Carr: Hi, my name's Larrie Carr, I'm a fellow at Microchip in the data center solutions group. I'm also a board member and treasurer of the Gen-Z Consortium, as well as a board member of the CXL Consortium.
00:58 KB: All right, how about Matt? Let's have you go next.
01:02 Matt Burns: Good evening everybody, I'm Matt Burns, technical marketing manager at Samtec. And I'm also a member of the Gen-Z Marketing Working Group Committee.
01:10 KB: All right, thank you. Hiren?
01:14 Hiren Patel: Yeah, hi. I'm Hiren Patel, CEO of IntelliProp. I'm a VP on the Gen-Z board of directors, and I'm happy to be here to talk to you about new high-speed interfaces.
01:25 KB: And then last, but not least, Scott.
01:28 Scott Knowlton: Yeah, I'm Scott Knowlton, I work in the Solutions Group inside of Synopsys. This is the group that does the IP and Services, and I'm a member of the marketing workgroup for CXL Consortium.
01:42 KB: Excellent. And Scott, let me actually direct the first question to you. Provide us with an update on Compute Express Link, or CXL, and what's the latest news from the consortium?
01:55 SK: Yeah, so the latest news is that yesterday we released the CXL 2.0 specification. Of course, this is backwards compatible with CXL 1.1 and 1.0, and we're making great progress on the spec. This is a little over a year since the 1.1 specification was released, so back in July of last year. So, just a couple of the highlights here for CXL 2.0. I'll just highlight the three major things. So, we added support for a CXL switch, support for persistent memory and some enhancements to support security.
So, as you can imagine, the addition of the switch enables us to drive fan-out to more devices, and the focus has been on enabling memory scaling and expansion. And with memory expansion in CXL 2.0, we're enabling servers to pool the memory within the servers amongst the various devices and assign it to the different workloads that are available.
02:56 SK: And, of course, if we do memory expansion we want to have a device that manages all the memory, so we can pool it together and maximize its utilization to stop overprovisioning in every server in the rack. And, of course, we have additions to the protocol to standardize that management. Again, support for persistent memory: we want to enable simultaneous operation of persistent memory alongside DDR, freeing DDR up for other uses. And, of course, from a security perspective, we're adding link-level integrity and data encryption to the protocol. So, that's what's new with 2.0 in a nutshell.
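The pooling model Scott describes -- one shared reservoir of capacity that hosts borrow from on demand instead of each server being overprovisioned -- can be sketched in a few lines. This is a hypothetical illustration of the idea, not CXL API code; the class and host names are invented for the example.

```python
# Hypothetical sketch of fabric-attached memory pooling: hosts borrow
# capacity from one shared pool instead of each being overprovisioned.

class MemoryPool:
    """A shared pool of memory capacity, assigned to hosts on demand."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # host -> GB currently assigned

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host, gb):
        """Assign `gb` of pooled memory to `host`, if capacity remains."""
        if self.free_gb() < gb:
            raise MemoryError(f"pool exhausted: only {self.free_gb()} GB free")
        self.allocations[host] = self.allocations.get(host, 0) + gb

    def release(self, host):
        """Return a host's memory to the pool when its workload finishes."""
        return self.allocations.pop(host, 0)


pool = MemoryPool(capacity_gb=1024)
pool.allocate("host-a", 256)   # e.g. an in-memory database spike on host A
pool.allocate("host-b", 128)
pool.release("host-a")         # capacity flows back for other hosts
print(pool.free_gb())          # -> 896
```

The point of the sketch is the provisioning dance: capacity moves to whichever host's workload needs it and returns afterward, so no host has to be sized for its own worst case.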
03:39 KB: Oh, congratulations. I know there's a lot of work that goes into releasing any spec at that level, so congratulations to you and to the consortium. Hiren, let me go to you next. Tell us about Gen-Z. What's the latest news from the Gen-Z Consortium?
03:55 HP: Yeah, thanks Kurtis. Yeah, there are many new exciting developments within the Gen-Z Consortium. First, there was an MOU agreement between OFA, Open Fabrics Alliance, and Gen-Z in May to advance the industry standardization of open source fabric management.
Second, Gen-Z recently released its management specification 1.0.
Third, the consortium has bolstered its proof-of-concept demos -- there are multiple companies showing various demos. I invite everyone to check those out at the Gen-Z virtual booth.
And then, finally, a development kit is now available for purchase. It includes an ARM-based Linux host that can perform native Gen-Z load/store accesses to a Gen-Z memory module, or ZMM for short. The kit enables development of Gen-Z in-band management for fabric managers. It's also perfect for those looking to just get started with Gen-Z.
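"Load/store access" here means the host touches the ZMM as ordinary byte-addressable memory rather than through an I/O driver. A minimal sketch of that semantic, using Python's `mmap` with a temporary file standing in for the device aperture (on real hardware you would map a platform-specific device node instead; the file here is purely illustrative):

```python
# Illustration of load/store semantics: map a byte-addressable region and
# read/write it directly, the way a host touches a Gen-Z memory module.
# A temp file stands in for the device aperture in this sketch.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)                  # a 4 KiB stand-in "memory module"

with mmap.mmap(fd, 4096) as zmm:
    zmm[0:8] = (0xDEADBEEF).to_bytes(8, "little")    # a store
    value = int.from_bytes(zmm[0:8], "little")       # a load
print(hex(value))   # -> 0xdeadbeef

os.close(fd)
os.remove(path)
```

No read()/write() calls, no command queues: the CPU's own load and store instructions hit the mapped region, which is what makes media behind such an interface look like memory rather than storage.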
04:55 KB: Excellent. Well, thanks. Yeah, getting dev kits out I know is also a bunch of work, so congratulations to you and the consortium on that . . . Kind of next question. I'm going to push this out, maybe get Matt and Larrie to tag team on this one. The two consortiums announced an MOU where they're going to be able to leverage the complementary aspects of both technologies. What are you doing to ensure the collaboration is there and to make sure users see the benefits of these complementary interfaces?
05:28 MB: Well, Kurtis, that's a great question. One of the things I see, just from a personal perspective, is when I look at the membership body of CXL and I look at the membership of Gen-Z, there's a lot of crossover between the two organizations, there's a lot of synergies. I think about the folks on this call, you've got . . . companies, you've got IP providers, you've got EDA, you've got equipment manufacturers, you've got connector companies. So, there's a lot of collaboration, there's a lot of industry buy-in for both CXL as a technology and Gen-Z as a technology, and we see that collaboration between the consortiums with the MOU. You mentioned earlier the MOU that was launched earlier here in 2020, and that's really served as a foundation to establish a working group to allow CXL and Gen-Z to leverage the synergies. Not only the technologies, but also of the membership companies between the two organizations.
06:29 LC: Yeah, and if I can elaborate a little bit on that working group. Right now, there's a large group of people within that working group defining the use cases that they see for the CXL, Gen-Z ecosystem. They've been working on a bridging function and identifying what that bridging function will be used for. It's a little more complicated than just memory operations being passed back and forth; it's looking at a holistic view of management, scale, error handling and topologies. There are a lot of different use cases being brought forth by the various members, and because this activity is so popular, we'll have to pick the most popular ones and turn them into a spec for the bridge.
07:33 KB: Now that sounds really exciting. Let me follow that up, Larrie. How do you see the technologies benefiting storage and persistent memory? Do you see user benefits coming from the collaboration between these two consortiums?
07:53 LC: There is a lot of overlap between these two protocols, but fundamentally, it's load/store connectivity, and persistent memory and these new memory technologies definitely benefit from load/store behavior, as well as from regular system memory being expanded for increased bandwidth and composability between systems. So, the work is somewhat self-aligning. It's a matter of getting the management and the system definitions defined in a way that people know what to expect when they actually fire up the system.
08:42 KB: I like it. Now, Hiren, you and I have talked a little bit about this. Do you have anything you want to add to that?
08:48 HP: Yeah, Kurtis. I mean, I think Gen-Z allows the media to be abstracted away from the host, allowing for innovation that's decoupled from processor roadmaps. I think Gen-Z also provides the ability to share memory or storage. Solution providers can create pools of memory that drive up utilization efficiency and maybe drive down total solution cost. And then, finally, the ability to share specific regions of memory between multiple host nodes could allow for node-to-node messaging. So, instead of traversing one host memory hierarchy to another host memory hierarchy, one can access the shared memory, simplifying the transfer of information between host nodes and improving total solution performance.
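The node-to-node messaging pattern Hiren describes -- one node storing a message into a shared region, another node loading it back, with no copy through either side's private memory hierarchy -- can be sketched with Python's `multiprocessing.shared_memory` standing in for a fabric-shared region. The framing (a one-byte length header) is an invented convention for the example, not anything from the Gen-Z spec.

```python
# Sketch of node-to-node messaging through a shared memory region.
# multiprocessing.shared_memory stands in for a fabric-shared region
# that two hosts can both map by name.
from multiprocessing import shared_memory

region = shared_memory.SharedMemory(create=True, size=64)
try:
    # "Node A" stores a message: a 1-byte length header, then the payload.
    msg = b"hello from node A"
    region.buf[0] = len(msg)
    region.buf[1:1 + len(msg)] = msg

    # "Node B" attaches to the same region by name and loads the message.
    peer = shared_memory.SharedMemory(name=region.name)
    length = peer.buf[0]
    received = bytes(peer.buf[1:1 + length])
    peer.close()
finally:
    region.close()
    region.unlink()

print(received)   # -> b'hello from node A'
```

In a real fabric the "attach by name" step would be mediated by the fabric manager granting a host access to a region, but the data path is the same idea: plain loads and stores against shared memory instead of a network transfer.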
09:36 KB: All right. Well, thank you. One of the things that I've heard a lot is, these are both memory-based protocols. That's why they're complementary. CXL is kind of focused locally or in the node as an interconnect. Gen-Z is focused more at the rack or row level and is fabric-based in that way. We've heard a lot of talk about inside-the-box, outside-the-box, how do you see these solutions working together within a system? A data center, a rack. Matt, why don't you kick that off?
10:13 MB: Yeah. I mean, you make some of the key points in the question that you're posing, Kurtis. When you look at CXL, it's really a point-to-point protocol, going from the processor to the memory, or the processor to the device. It scales nicely. It's tightly coupled. There are a lot of performance benefits that CXL has that we'll talk about throughout the roundtable here. So, it fits really nicely in the box; it's very concise, a very clean system. When we look at Gen-Z, we see the same type of advantages in terms of performance enhancements across the system, but at the rack level, or beyond the rack -- at row scale.
And when you look at the collaboration that we see between CXL and Gen-Z -- I think Larrie mentioned it earlier -- we're really trying to bridge the one protocol to the other, so that there are synergies and we get lower latency, higher performance and disaggregation, both inside the box and inside the data center at the rack or row scale. We also see opportunities to scale into larger fabrics. We all know that data centers are being taxed by demands, whether it's from AI or analytics or cloud connectivity. Gen-Z allows for scaling and for memory pooling at the rack level or at the data center level.
11:48 MB: CXL gives you the same capabilities of leveraging those resources within the node, and again, leveraging memory, whether it's at the processor or at the accelerator within the system. From a connector standpoint, a lot of work has gone into both CXL and Gen-Z to make sure that the signal integrity performs at leading-edge rates. We all know CXL is leveraging PCI Express for its physical layer -- PCI Express 5.0 infrastructure. So, that gives us performance at 32 gigatransfers per second (GT/s) in the node. And then when you look at Gen-Z, that provides us a pathway up to 112 gigabits per second PAM4. So, for that high performance, really at the signal level, that's something that's also contributing to the advantages that both CXL and Gen-Z have, and the MOU is really designed to bring those together.
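The per-lane rates Matt quotes translate into per-link bandwidth with simple arithmetic. The sketch below is a back-of-the-envelope only: the lane counts are illustrative assumptions, and encoding/protocol overhead is ignored for clarity, so real delivered bandwidth is somewhat lower.

```python
# Back-of-the-envelope raw link bandwidth from the quoted signaling rates.
# Lane counts are illustrative; encoding overhead is ignored for clarity.

def link_bandwidth_gbytes(rate_gbits_per_lane, lanes):
    """Raw unidirectional bandwidth in gigabytes per second."""
    return rate_gbits_per_lane * lanes / 8  # 8 bits per byte

# CXL over a PCIe 5.0 x16 link: 32 GT/s per lane (1 bit per transfer).
print(link_bandwidth_gbytes(32, 16))    # -> 64.0 GB/s raw

# A Gen-Z link at 112 Gb/s PAM4 per lane, assuming 4 lanes.
print(link_bandwidth_gbytes(112, 4))    # -> 56.0 GB/s raw
```

The takeaway is that even a narrow link at the higher Gen-Z signaling rate is in the same raw-bandwidth class as a wide in-node PCIe 5.0 link, which is what makes rack-scale memory fabrics plausible.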
12:48 KB: Excellent, and Scott, I don't want to leave you out here. Why don't you kind of fill us in on your view here?
12:55 SK: Yeah, I was just going to expand on what was already said. I mean, really, if you look across all of this, the common theme you're hearing is that, in the end, we're trying to work on large data sets, and we don't want moving data to be a bottleneck in our systems. So, putting in higher speed connections, using PCIe locally, and leveraging cache coherency across the heterogeneous components that are tied together with CXL and across the racks, in order to reduce the amount of time we spend moving data around, all result in higher speeds and increased efficiency.
I mean, if we look at CXL 2.0, we're disaggregating the memory from the processor, expanding and scaling the memory to create bigger pools, and then provisioning that memory amongst the various hosts. So, now we can share those resources, we don't overprovision the servers with additional memory, and then, as mentioned earlier, the protocol gives us a way to manage all these resources. And so, in the relationship between CXL and Gen-Z, we're providing context and focusing on the features necessary for our top three use cases as we go forward in linking these two. We're looking at in-memory databases; composability, where we have memory in multiple places; and accelerators across servers, where we want to make it really easy to add acceleration to the various systems.
14:26 KB: Great use cases. I can see where the industry will really benefit from the launch of CXL and Gen-Z into their environments, and hopefully cut down on the amount of overprovisioning. I like the way you put that, Scott. Now, in deploying CXL and Gen-Z in these heterogeneous computer environments that you all have been talking about, how do you see that working? What applications do you see them enabling? I'll kind of throw this out to the panel. Why don't I have . . . Matt, why don't I start with you?
15:01 MB: Yeah, I think one of the advantages we see both with CXL and Gen-Z, Kurtis, and with the collaboration between the two synergistic buses, is that they're not dependent on any one compute architecture. So, when you look at what's available on the market, and also at the key members both within CXL and Gen-Z, both of those buses can attach to any number of popular compute architectures -- processors, FPGAs, GPUs, et cetera. We see that as an especially great value-add when it comes to flexibility, using acceleration engines at the node, and also when you think about Gen-Z being a next-generation memory fabric that really enables system flexibility and data center design flexibility across the industry. So, I know a lot of folks within the data center community are really geeked to see how that rolls out over the coming years.
16:09 KB: Yeah, I agree with you. Larrie, what's your take on it?
16:14 LC: First of all, I think the focus on persistent memory is key. In CXL 2.0, there was a working group formed to work specifically on enabling fabric- or serially-attached memory, including persistent memory. And based on that working group's output, there is now a standard to enable pretty much any persistent memory in a standards-based specification, such that when newer technologies come online, they can plug into an ecosystem and potentially lead with very standard drivers and infrastructure. The other one is cache coherency. In both cases, with CXL and Gen-Z, there are cache control and atomics that really simplify the system design. So, instead of moving data around and processing it, accelerators can now reach into a coherent domain and access the data at a very fine grain, as well as share data between accelerators and processors, and the combination of both technologies will really simplify the system design for enabling much more complicated computing.
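The fine-grained access Larrie describes hinges on atomics: multiple agents can read-modify-write one shared location directly instead of copying whole buffers back and forth. A software sketch of the semantic, with threads standing in for accelerators and a lock standing in for the hardware coherence the fabric provides (the class is invented for illustration; real fabric atomics are hardware operations, not Python):

```python
# Sketch of a fabric atomic: multiple "agents" (threads here) perform
# fetch-and-add on one shared counter instead of shipping data around.
# A lock stands in for the hardware coherence the fabric provides.
import threading

class AtomicCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_add(self, delta):
        """Atomically add `delta`, returning the value before the add."""
        with self._lock:
            old = self._value
            self._value += delta
            return old

counter = AtomicCounter()

def agent():
    # Each agent makes 10,000 fine-grained updates to the shared location.
    for _ in range(10_000):
        counter.fetch_add(1)

threads = [threading.Thread(target=agent) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter._value)   # -> 40000
```

Without atomicity the agents' updates would race and lose increments; with it, every agent can touch the shared datum directly, which is the system-design simplification being described.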
17:50 KB: Interesting. That just seems huge as far as getting quickly to answers, and with all the data that we've heard is coming our way that's a key piece to any environment, whether it be in the data center or at the edge. Hiren, give us thoughts as well.
18:13 HP: Sure, I'd say that both organizations have members that are helping to make progress on next-generation computing architectures. One application -- I think Scott touched on this as well -- that can be enabled is composable infrastructure. I envision this can be done with Gen-Z switches, endpoints and CXL connection points. Resource managers can dynamically share memory, accelerators and computational storage. All of this probably can be done outside of the standard box with disaggregation.
18:53 KB: And let me follow up a little bit with you, Hiren. And so, I know that your company itself has done some work in this space of enabling some of the Gen-Z work and in working through CXL. What do you see as one of the bigger challenges as we go forward?
19:11 HP: Well, I think, from my perspective, the biggest challenge we're facing from the Gen-Z perspective might just be software development: getting Linux drivers up and running, getting resource managers written, et cetera. On the CXL, Gen-Z combination front -- again, I think Larrie mentioned this -- there's already a working group looking at how to bridge those two protocols. So, I see some challenges there as well, but yeah, those are the short-term, near-term challenges I see.
19:47 KB: All right, and let me go to Scott. Question that we get a lot is "These all sound great; when can I have it?" So, when do you see this going to market?
20:01 SK: Yeah. So, the CXL and Gen-Z Consortiums, as has been mentioned, have a joint working group that's exploring the use cases, some of which I mentioned above, and they'll look at the features necessary to move things forward. At this point, there are no features that we're tying to a specific release; they'll be released as they become available. Each organization, both CXL and Gen-Z, has the freedom to release those features and make progress at its discretion in the context of its respective specification.
20:37 KB: Excellent. Yeah, and because of these large industry consortiums, I do expect that we'll see a lot of companies supporting one or the other, or potentially even both specifications, as we go into the future. Really looking forward to seeing this stuff come out.
20:58 SK: Yeah, and just to add on to what you're saying, Kurtis, as I mentioned at the beginning of this, CXL 2.0 followed quickly on the heels of 1.1. So, it's not for lack of trying on either standards body's part to get this stuff out. There's a lot of interest, and the teams are working incredibly hard to make sure we're making quick progress on all of these.
21:22 KB: Yeah, yeah. Well, it was great having you all here, talking about what the consortiums are doing. There's just a lot of energy behind these. Really looking forward to a memory-centric environment coming out. So, yeah, that'd be great. Now, I appreciate all the input you've provided. I was going to open it up and see if there were any questions from the audience.
21:43 MB: How do they ask questions? Do we know that?
21:52 KB: That's one of the things I was just looking at, was I'm not sure I know how they . . . If they could get through to ask questions or not.
21:57 MB: Do they chat it? Chat to us to . . .
21:58 Elizabeth Leventhal: They can in the Q & A or the chat.
22:01 KB: OK. So, if you have any questions, put them in the Q & A or the chat, and while I wait for those, I'm going to pick on you guys with some of my own questions, and Larrie, I'll start with you.
22:19 LC: Oh good.
22:22 KB: Yeah. I do that just because you and I have worked together for a long time.
22:25 LC: Yep.
22:25 KB: And you're somebody that's on both boards, and what do you see as the real benefit of these two groups working together versus if they were trying to go it alone?
22:40 LC: Well, going it alone is a lonely path, and for just about any technology -- whether it be DDR, or SAS or SATA for storage -- we generally end up having two sorts of protocols. It's just one of those things that happens, where it's very hard to make one thing work for all possible applications. So, a number of people have said: CXL, short reach, inside the box; Gen-Z, larger scale. And in some ways, by keeping the two consortiums going, they can each focus on their own interesting, important problems. So, when we talk about the scale of Gen-Z -- things like multi-path and reliable failover, more fabric problems -- you bring the fabric experts there, while in CXL, the latency and the coherency become more interesting, and those experts work there. If we meshed it all together, it'd be a lot busier, I would think.
23:52 KB: Yeah, I hear what you're saying. All right. And let me ask Matt. As a member of the marketing working group, you get to talk to a lot of people about what CXL . . . or what Gen-Z is doing. What do you see as that common bond between the two groups that makes it better together?
24:17 MB: Well, again, that's a great question, Kurtis. I was thinking, while listening to Larrie and some of the other panelists, that one of the great things about Gen-Z and CXL is that they're open systems, defined by the membership -- by those members that choose to participate and help influence the technology. From Samtec's perspective, we're a small fish in a bigger pond compared to some of the folks that are on this call, but that also gives us the opportunity to provide input from our perspective, to influence the protocols and the standards based on our expertise in signal integrity and things like that.
Now, that's not necessarily a marketing-related function, Kurtis, but we see benefit from that. And one of the things that I've discovered working within Gen-Z over the last couple of years, and with a little bit of exposure to CXL, is that the protocols are only as good as the signal underneath them, and there's a ton of work that goes into the signal integrity: going from the IC to the PCB, from the PCB to the connector, and if there are cables involved -- Gen-Z, I think about . . . connectivity. That's not a problem just one company solves. There's collaboration between the semiconductor companies and the connector companies, et cetera, et cetera.
25:47 MB: So, I think that's one of the advantages. There's a spirit of collaboration within Gen-Z, there's a spirit of collaboration within CXL, and when you harness both of those, the sky's really the limit. I mean, I don't want to sound like a hokey marketing guy with a one-liner like that, but it's true. There are industry needs at the node level, and there are industry needs for the next-generation memory fabric that Gen-Z provides. And to second what Larrie said, without the collaboration between the two consortiums, who knows where things go.
26:30 KB: I've got a question in from one of the attendees. It says, "What are some of the considerations for cables and connectors of CXL interconnects, such as physical cable stiffness, quick connect or . . . to maintain EMI and also good economics?"
26:48 MB: I guess that's a connector-related question, does that mean I have to take that one? [laughter] Well, that's an interesting question, and I think one of the things, especially for CXL, is that CXL leverages a large infrastructure from PCIe 5.0. So, those types of questions, those types of details are being handled within -- not to mention PCI Express too much, but a lot of the folks on this call are involved with those efforts as well. So, there's been a ton of effort, a ton of work that's gone into making sure of the EMI, SI and electrical performance of those systems, whether it's at the connector level or the card connectivity; that data's there. And it's just a matter of getting involved with the right specifications and the right suppliers, and getting the SI support that you need for a customer-specific application.
27:45 KB: Excellent. Scott, any last thoughts before we wrap this up?
27:51 SK: Yeah, I think going back to one of the earlier questions here: one of the benefits, of course, is having focus within Gen-Z and focus within CXL. There's been tremendous interest in both, and it's really generated by having a standards body -- by decoupling the technology from any given company and being able to drive it forward, creating heterogeneous systems that know how to share cache coherency across all the components and get from rack to rack. There's clearly a lot of demand, and having these organizations available and making great progress beyond the focus of any one member company, I think, speaks well to the industry and to solving the problems that we have to solve these days.
28:38 KB: All right, and it looks like we're out of time here, and I didn't give you a last shot, I apologize, but I want to thank all of our fine panelists -- Larrie, Matt, Hiren and Scott -- for putting your time in and talking to us about these two very interesting and exciting new interfaces. And then, finally, thank you to the audience for coming in, thank you for your questions, and if you have any other questions both consortiums have websites, feel free to hit those and they will get back to you.