
Hyperscale Storage in 2025 and How We Got There

Hyperscale storage in 2025 is likely to take up an ever-increasing percentage of the overall storage market. Here is a look at where the market is heading and how we got here.

Download the presentation: Hyperscale Storage in 2025

00:00 Jonathan Hinkle: So, welcome everybody, thank you for joining us. Welcome to Flash Memory Summit. This is one of the first panel discussions we've had here, and hopefully you caught our hyperscale session just a little while ago, one of the first in our great virtual program we have here at Flash Memory Summit this year.

00:17 Jonathan: So, today on this panel, we have our speakers from the last session, and I just want to give you a quick introduction of those folks. We have Matt Singer, who is a senior staff hardware engineer with Twitter; Vineet Parekh, who is a hardware systems engineer with Facebook; Ross Stenfort, hardware systems for storage at Facebook; Lee Prewitt, principal program manager at Microsoft; and also Paul Kaler, advanced storage technologist at HPE.

So, I have listed a little bit of the different topics we spoke on just a little while ago. And I think the first step is really to explain that these guys have a lot of deep experience. I have some details on that, but I can't quite access them right now because I'm sharing my screen. But basically, Ross was a long-time storage engineer doing SSDs and controllers and hard disk drives at small and large companies -- CNEX, SandForce, LSI and Adaptec. Matt is the technical leader for the hardware team at Twitter and an expert in performance-sensitive architecture. Vineet is with Facebook. He's managing their flash across the hardware fleet and their data centers, and has over a decade of experience at Intel in their SSD and storage group.

01:46 Jonathan: Lee is a principal hardware program manager with 25-plus years of experience in the storage industry, from magneto-optical drives and spinning rust up through flash, and also a broad range from mobile to data center, NVMe and UFS. He works a lot on standards, so deep background there.

And, also, Paul's a storage architect in the future server architecture team at HPE. He was responsible for researching and evaluating future storage technologies, defining the server storage strategy for their . . . servers, and defining standards; he helped drive the E3 drive form factor, as well as working with Lee and Ross on the OCP NVMe SSD spec updates.

02:30 Jonathan: So, a very esteemed panel here. We're going to open it up to questions here; that's one of the main reasons we have a panel. I wanted to start us off though, just to get oriented, with a couple of questions. I was watching the presentations myself and I wanted to get a little bit deeper into some of what was presented.

So, the first question is really for Ross and Lee and also Paul, especially. I have a couple of questions -- there are two of you, three of you now -- and I'll let you choose which ones you want to answer. So, first one: You mentioned in your presentation the new EDSFF E1.S as the form factor of choice for hyperscale systems, and the industry has now developed a large array of options with E1.S drives in production. What are your expectations for actual system usage and industry volume ramp over the next few years?

03:25 Lee Prewitt: Who wants to start?

[chuckle]

03:26 Ross Stenfort: Lee, do you want to start, and I can follow you?

03:31 LP: Sure. So, yeah, E1.S, it's a great variant, especially for our compute fleet. It allows us to be very fine-grained in how we apportion storage to the compute nodes, with the ability to not take up large portions of the server with the longer E1.L like we do for storage. With that, of course, you can never just have one standard for these things, and so, as people have seen, there is a plethora of widths, and specifically the widths are tuned for different use cases. We find the 15-millimeter variant very useful for the power envelopes that we want to see for the E1.S drives, so we are pretty bullish on that size and we'll be pushing that one across our compute fleet as we move forward. Ross?

04:31 RS: Thanks. So, let me add to what Lee had to say. What's really great about the E1.S is the mounting holes for the heat sinks are standardized, so you can have one PCB, one firmware, one ASIC, and put different sizes of heat sinks on that to meet everybody's needs. For Facebook, we really like the 25-millimeter version of the E1.S. The reason for that is it requires very low airflow while delivering high performance, and it drives very dense solutions. If you look at my presentation, you'll see a whole bunch of pictures of boxes that support E1.S in a variety of flavors, and so we see this as the next generation for where hyperscale storage is headed.

05:21 Jonathan: Very good. So, do you guys have any feeling for how the industry volume is going to ramp to E1.S for your use case? Where do you see the industry? Should we see some uptick in the next year or two of E1.S usage?

05:33 LP: Yes, absolutely. Our Generation 8 servers will be transitioning from M.2 to E1.S over that Generation 8 lifetime, and then we want to be exclusively on E1.S in our Gen 9 time frame.

05:55 Jonathan: OK, very good, thank you. So, second question -- the OCP NVMe Cloud SSD spec. With future revisions of the spec, do you see any opportunity for coordination? You really have a set of selected requirements, largely based on industry standards such as NVMe and SNIA. Do you see opportunity for coordination with other industry standards groups, or do you expect the spec to follow a similar model to how it was first developed, selecting the optimal features and configurations from the standards developed in those groups?

06:34 LP: How do I say this? So, yeah, I think one of the great things about the OCP spec that we've been able to put together so far is that it's really . . . We call it a specification, but it's really a requirements document. The idea is taking all those other industry specs -- PCI-SIG specs, NVMe specs, SNIA, that sort of thing -- and distilling them down: if something is optional, maybe we need to require it for our use cases, so we're going to put in our requirement and say that that optional command actually has to be implemented. If there are multiple ways of doing things, we would say, "Hey, let's do it this way." So, it's putting guardrails on a series of these specs, where obviously there are lots of different variants and different ways of putting these things together, and it regularizes what the SSD vendors have to do for our use cases.

07:32 RS: Let me add to that, Lee. This is really a collection of all the standards out there, and then putting together all the puzzle pieces. So, what do you actually need to build a drive? Many things are optional in many standards, and some things we need, some things we don't. This is a matter of pulling all of the different standards together to explain how you put together the puzzle pieces, and stuff that's missing in standards, adding in the glue to hold it all together so that you can have a great device out there to deploy in the fleet.

08:08 Jonathan: Got it.

08:10 Paul Kaler: And I think I would just add on to that, that some of the best practices we've all learned from deploying flash at such large scale are other things that we try to put into that. So, definitely pointing to all the industry standards that are out there, trying to pick those mandatory commands that we need, and then also giving some hints to suppliers to say these are other things that we've seen and have had issues with in the past. And, so, really trying to make sure that they understand some of the vaguenesses in the industry specs, and how we specify some of that vagueness out.

08:42 Jonathan: OK, very good. All right, thank you very much. So, question for Matt. Matt, in your presentation, Using Persistent Memory to Accelerate Tweet Search, you showed many different performance improvements and interesting value for . . . hyperscale applications in your experiments with persistent memory. What do you think are the most significant characteristics of persistent memory providing that benefit? Is it the persistence, the lower latency, the higher bandwidth, the lower power? What are some of the most significant contributors, you think, to that value?

09:20 Matt Singer: Thanks, Jonathan. All of those are important, but I think one of the big differences is, for applications that need the really high bandwidth that you would get from DRAM, it's the difference in the capacity that you can now put on the memory bus. That's one very important difference. There are going to be, I think, a subset of apps that really benefit from persistence, where you want to be able to restart an application quickly that would otherwise have a very long hydration time, but primarily, I think it's all around latency and capacity, if you have a need for one or both of those on your memory bus.

10:14 Jonathan: Very good. Yeah, it has so many different characteristics; I think it will be interesting to see which applications fit to which characteristics.

10:22 MS: Yeah, it's really interesting that for a lot of applications, you really need to rethink how the application is going to work in this space, because as soon as you remove the . . . If your bottleneck is capacity, as soon as you remove that, you're going to hit another bottleneck, so you really have to start thinking about the whole way that your application's going to be reshaped once you try to move to . . . a different system memory.
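To make the restart-and-hydration point above concrete, here is a minimal Python sketch, not from the panel: an index kept in a memory-mapped file, which on a real system could live on a DAX-mounted persistent-memory filesystem, can simply be reattached after a process restart instead of being rebuilt from slower storage. The file path, record layout and sizes are all hypothetical.

```python
# Illustrative sketch only: reattach to a persisted, memory-mapped structure
# instead of re-hydrating it from slower storage after a restart.
# The path is hypothetical; on Linux it would sit on a filesystem mounted
# with DAX over persistent memory.
import mmap
import os
import struct

PMEM_FILE = "/mnt/pmem0/index.bin"   # hypothetical location
RECORD = struct.Struct("<QQ")        # (key, offset) pairs, 16 bytes each
SIZE = 1 << 20                       # 1 MiB is enough for the sketch

def attach(path: str = PMEM_FILE, size: int = SIZE) -> mmap.mmap:
    """Map the persisted index, creating it only on first run."""
    new = not os.path.exists(path)
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    if new:
        os.ftruncate(fd, size)
    return mmap.mmap(fd, size)       # loads/stores go straight to the mapping

def put(m: mmap.mmap, slot: int, key: int, offset: int) -> None:
    RECORD.pack_into(m, slot * RECORD.size, key, offset)

def get(m: mmap.mmap, slot: int) -> tuple[int, int]:
    return RECORD.unpack_from(m, slot * RECORD.size)

if __name__ == "__main__":
    m = attach()
    put(m, 0, key=42, offset=4096)
    m.flush()                        # on real pmem, flush/fence semantics matter
    print(get(m, 0))                 # after a restart, attach() finds the data in place
```

The point of the sketch is only the reattach-versus-rebuild contrast Matt describes; a production design would also have to deal with the new bottleneck he mentions once capacity is no longer the limit.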

10:48 Jonathan: Very good, very good. All right. And we're going to get to everyone's questions in just a little bit. So, Vineet, I really want to ask you, though: in your talk, you mentioned several different pieces of information that could be very useful when doing debug at scale, but they're not in current standards. So, which of these do you think are the highest-priority pieces of information that you expect to be specified in future revisions? And are those . . . Is that going to be specified in the OCP NVMe Cloud SSD spec?

11:18 Vineet Parekh: Yeah. Thanks, Jonathan, for the question. Yeah, that's a pretty interesting question -- that's something which we have been continuously working on: what's the right set of information to include to determine whether a device is in a healthy state or not.

So, in a nutshell, today we are heavily reliant on SMART, and we all on this call probably know that it really doesn't give us much information about what's going on inside the drive with respect to health. Telemetry, being mostly encoded and encrypted, has a long debug turnaround time and just doesn't work at scale. So adding data which is much more easily visible to both sides, the host as well as the device, and which can determine if the device is healthy or not, is very important.

So, one of the examples which I highlighted in the presentation, and which we've added as part of the OCP spec going forward, is latency monitoring. That's one of the key problems which we see in the fleet: when certain devices have a read, a write or a trim which takes extremely high latency, we would love to capture that, see what the occurrence of those is, and then determine what to do with the device. So, yeah.
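As a rough illustration of the latency-monitoring idea, here is a small Python sketch that buckets command completion latencies and counts outliers per command type. The thresholds, bucket names and report format are assumptions for the example, not the OCP specification's actual latency monitor log layout.

```python
# Conceptual sketch: count high-latency reads/writes/trims so their
# occurrence rate can be inspected. Thresholds are illustrative only.
from collections import Counter

# Hypothetical per-command latency thresholds, in milliseconds.
THRESHOLDS_MS = {"read": 10.0, "write": 50.0, "trim": 100.0}

class LatencyMonitor:
    def __init__(self) -> None:
        self.outliers = Counter()   # command type -> slow completions
        self.total = Counter()      # command type -> all completions

    def record(self, op: str, latency_ms: float) -> None:
        self.total[op] += 1
        if latency_ms > THRESHOLDS_MS.get(op, float("inf")):
            self.outliers[op] += 1

    def report(self) -> dict:
        """Occurrence rate of high-latency commands, per command type."""
        return {op: {"outliers": self.outliers[op],
                     "rate": self.outliers[op] / self.total[op]}
                for op in self.total}

mon = LatencyMonitor()
mon.record("read", 2.1)
mon.record("read", 35.0)     # exceeds the hypothetical 10 ms read threshold
mon.record("trim", 450.0)    # exceeds the hypothetical 100 ms trim threshold
print(mon.report())
```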

12:36 Jonathan: Very good. Thank you, yeah, that's excellent. So, now I want to turn it over to more of the attendees, if I can figure out how to grab everybody back in. Here we go. So . . . All right, any questions from the participants? You can write it in the Q&A or . . . you can do this here in Zoom. Let me see if I can . . . For some reason it's minimized and won't come back up now.

13:24 LP: So, Jonathan, there is one in the Q&A right now.

13:28 Jonathan: OK, let's see. So, the Q&A, do you see that one? I can't see that one yet. So, can you read that one for me?

13:37 LP: From our favorite attendee, anonymous: "With heavy reliance on HDD for bulk storage in hyperscale data centers, what is your opinion of the future of the spinning disk, where track and bit dimensions are approaching sub-10 nanometer and below, especially with next-gen HAMR, knowing that eventually they will be at a dimension that is physically not feasible?" Anybody want to . . .

14:05 LP: I can certainly talk a lot about that if nobody else wants to start.

14:10 Jonathan: Yeah, Lee maybe you start us off and then others have some time to think about as well.

14:14 LP: So, if you see my presentation this afternoon, I basically have a slide that shows that the pros for hard drives are cost, and the cons are pretty much everything else, with flash being the other way around. And, so, we use hard drives because we have to, because of just the sheer amount of data that needs to be stored. As the drives are getting denser and denser, the issue becomes one of IOPS scaling, because we're talking about a mechanical device; they just can't get the IOPS to scale along with the capacity, and at some point you hit an IOPS wall where you can't really service the data on a drive of that capacity. And, so, with that, there are things like dual actuator -- multi-actuator in the future -- that may help with that, but then of course that adds complexity and cost to the device.

So, going forward, we would really love to get to some sort of fancy . . . I don't know, "fancy" is not quite the right word, but basically: how do we get the cost of flash down to where we can start to get into the TCO realm that works to replace hard drives? Then we can start to move flash down into cooler and cooler data realms because the cost is coming down. And it doesn't actually have to match the actual cost of the HDD; it just has to be in that kind of TCO realm. Maybe 2x to 3x the price of an HDD would be very interesting, to be able to start to reduce the number of spinning devices that we need to buy.
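A back-of-the-envelope sketch of the two points Lee makes, using purely hypothetical numbers chosen for illustration: random IOPS per terabyte fall as HDD capacity grows while the drive's IOPS stay roughly flat, and flash does not need cost parity with HDD, only a workable TCO multiple.

```python
# Hypothetical figures only, to illustrate the IOPS wall and the TCO band.
HDD_RANDOM_IOPS = 200            # roughly fixed for a single-actuator drive

for capacity_tb in (4, 10, 20, 30):
    print(f"{capacity_tb:2d} TB HDD -> "
          f"{HDD_RANDOM_IOPS / capacity_tb:5.1f} random IOPS per TB")

hdd_per_tb = 15.0                # made-up $/TB
flash_per_tb = 40.0              # made-up $/TB
print(f"flash/HDD price ratio: {flash_per_tb / hdd_per_tb:.1f}x "
      "(inside the roughly 2x-3x band discussed above)")
```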

16:04 Jonathan: Yeah, that's great, thank you Lee. Anyone else?

16:13 MS: I'll add that I think that we're seeing more and more applications where we wouldn't be willing to tolerate the access time of HDDs, so I think there's going to be a natural progression towards some . . . Towards more flash and less HDD for a lot of our use cases, except for the coldest of storage units, so I think there's going to be a natural . . . just a natural flow towards more SSD storage.

16:47 Jonathan: Got it, yeah, makes sense. Yeah, there's a growing demand for that performance. All right, any other questions? Let's see, any others in the Q&A? I've been able to pull it up now. My screen is gone. So . . . Let's see, I can try to unmute the line here. I'll try to unmute it for just a second here. No, I can't do that. Can you raise a hand if you would like to ask a question, and I'll call on you. The participants window. OK, I have a question from Ahmad Dinesh. Ahmad, hey, this is Jonathan. Let's see, I'll unmute you now. Go ahead.

17:36 Ahmad Dinesh: Hi, Jonathan how are you? Can you hear me?

17:40 Jonathan: Yes.

17:41 AD: Perfect. So, a multipart question actually. So, is the long-term goal of the NVMe Cloud SSD spec to ultimately have NVMe SSD suppliers that can assure compliance to that spec? And if so, how does one actually confirm that they're compliant aside from making self-validating claims of their own compliance?

18:03 LP: Awesome question. You want to start on that one, Ross?

18:06 RS: Sure. So, at a high level . . . The goal is to reduce industry fragmentation, because all the drive vendors, as you know, are struggling with resources and time and aligning things, and the question is: lots of people make lots of claims, and what's reality versus what's not? And I think that's also a great opportunity for the market. I know there are a number of people -- test vendors -- who are actually working on things right now. For example, and I'm thinking of a lot of what's been said, UNH has publicly posted their test plan for validating these. Austin Data Labs has also mentioned that they are working on things, and I'm sure there are others along those lines, and I think there's a great opportunity for the industry to innovate to test drives better.

19:13 PK: Yeah, I'll kind of echo that. I think the point of getting to an open specification, or a requirements document, is that the industry can see every single requirement that we're putting in there. So, the third-party test labs don't even have to engage with us if they want to go figure out, "How do we make a test suite that can go and validate that somebody meets all these requirements?" They can just look at the actual requirements spec that we have in OCP for NVMe, and they can go through it and create a test spec, so that then vendors can submit to them, or they can come back and say, "Hey, we ran it across this test suite and everything is passing," and so we can validate that very quickly. And it's an open test spec, so it makes it easy to make sure that everything that we want tested is in there in an open way.

So, I think doing it this way, it makes the testing and validation much easier for the industry because it's all known exactly what we're asking for.

20:09 LP: Right, and then maybe to add to that a little bit: obviously vendors have their own testing that they do internally. Now, they can start to leverage the stuff that's being brought up by UNH and OakGate. And, of course, we ourselves have test suites that we're expanding on and developing all the time. So, you could think of that as defense in depth, to ensure that over the qualification lifecycle for that device we're really, really testing it out against all of those different requirements as we go.
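To illustrate why an open requirements document makes third-party validation straightforward, here is a small Python sketch of a compliance checker: each requirement is a predicate evaluated against a drive's reported capabilities. The requirement IDs, the capability report and the thresholds are hypothetical, not actual clauses of the OCP spec; a real tool would populate the report from Identify data and log pages.

```python
# Sketch: check a drive's (hypothetical) capability report against a list
# of (hypothetical) requirements pulled from an open requirements document.
from typing import Callable, Dict

drive_caps = {
    "supports_telemetry_log": True,
    "supports_namespace_management": True,
    "power_max_watts": 14.5,
}

requirements: Dict[str, Callable[[dict], bool]] = {
    "REQ-TEL-1: telemetry log page supported":
        lambda c: c.get("supports_telemetry_log", False),
    "REQ-NS-1: namespace management supported":
        lambda c: c.get("supports_namespace_management", False),
    "REQ-PWR-1: max power within form-factor envelope":
        lambda c: c.get("power_max_watts", float("inf")) <= 25.0,
}

results = {req: check(drive_caps) for req, check in requirements.items()}
for req, passed in results.items():
    print("PASS" if passed else "FAIL", req)
print("compliant:", all(results.values()))
```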

20:43 Jonathan: All right, good, thank you. OK, so I see another question: "Can you comment on the role of tape, such as LTO, for data centers of today and especially in the future?" That's from Turguy Goker. So, what is the future of tape, I guess?

21:09 PK: Well, I'll just kick off. Everybody thought tape would be dead by now. That prediction has been made over and over again, and we still see tape being used. So, following that, we certainly see the need for tape in the future. But I think, again, some of the hard drive technologies that were mentioned earlier, like HAMR and other future things, are going to drive that cost down, and I think that's going to make the role of tape even smaller, just like flash, as its cost goes down, makes the role of hard drives smaller. So, I think it's a diminishing role, but I still see it as a significant role.

21:49 LP: Yeah, agreed. I think it's one of those . . . The amount of data and the temperature of that data have to align with the technology used to store it. And, so, tape obviously is excellent for very large bulk, very, very cold data. And a lot of the stuff that we see around that is, it's all those quarterly reports that are now being archived because they got worked on really hard for a week and the report got made, and now it just needs to be stored away because there are retention policies that are required for that sort of data.

And, so, tape is a good candidate for those sorts of things that we expect have to be there just in case, but that we expect we'll never actually access, that sort of thing. And over time, as tape's role grows, or as it shrinks with some of the things that are coming up, tape does have issues around its . . . You have to be very careful about how you store it and the controlled environments that it has to live in, that sort of thing. And, so, you've heard some things: Microsoft has been working very hard on DNA storage. So, again, something that's very cold, but has a high durability factor, that sort of thing.

23:11 Jonathan: All right, very good. So, let's see, I have a question from Chuck. Hey, Chuck. Let me take you off mute here. There you go. Hey Chuck, go ahead.

23:30 Chuck: Hey there, Jonathan, and thank you for the great session. Actually, I did not cue up a question, so I don't know how that one got there.

23:44 Jonathan: OK, your hand is raised, that's . . . [chuckle]

23:46 Chuck: I must have accidentally pushed a button.

23:49 Jonathan: OK, no worries.

23:49 Chuck: Thanks for what you're doing.

23:51 Jonathan: Yeah, no worries. So, anyone else have a question? Put you back on mute. All right, any other questions? Oh, here we go. OK, this is from Turguy again, go ahead. Turguy, are you there? OK, Turguy asked a question earlier about the tape. Turguy, can you hear us? OK, I can't hear you. Unfortunately, I can't hear you, Turguy, so please, maybe you can type in the Q&A again and we'll try there.

I have another question from William Chung. "With the adoption of OCP Cloud SSD Spec, do you see the need for SSD customization reducing in the future?"

25:00 LP: That's the main goal.

25:01 Unknown Speaker: Right, exactly.

25:04 Unknown Speaker: That's exactly what we'd like to have happen, is have a high-quality device and have everybody aligned to what they need.

25:15 PK: We still think that -- especially as HPE has joined the fray here to work with the cloud aspects and bring in some of the enterprise aspects -- what we've definitely seen is that there are different requirements between enterprise and cloud, but the vast majority of them are shared. And, so, that's really why we saw the value in joining this: we get to, we think, a unified spec where it would be a similar . . . one device that may have a couple of settings that change based on the use case it's going into, but it's still extremely highly leveraged, and maybe it's just a couple of settings that change even across the same firmware.

So, it's definitely our goal to minimize the amount of customization and change so that . . . To the point earlier about debug and validation testing, the more testing that gets done by multiple people that's actually testing all the same hardware and the same firmware, the more value there is in that.

26:11 Jonathan: Very good. All right . . .

26:12 RS: Paul, we're looking forward to you testing drives and finding problems before we do, that we can leverage.

26:21 Unknown Speaker: And vice versa, yeah.

26:23 Jonathan: All right, I have another question from Ahmad. Let's see, OK, Ahmad, you're on, go ahead.

26:30 AD: Hi, everyone, again. Paul, I think you might have already answered this as you were answering William's question, actually; it was tied to Azure. As HPE and Dell are joining in driving the next generation of that specification -- hyperscalers and OEMs typically do have diverging features, in addition to even some firmware features and diagnostic capabilities -- I understand your goal, where we can get to some sort of superset; there'll be configuration options, it sounds like, from what you're describing, to configure it differently depending on the use case. The form factors typically differ quite a bit as well, though, E1.S being driven a lot by hyperscale, some OEMs also driving E1.S. But do you see a convergence in the actual form factors as well? Or is this spec actually -- having not read the spec, perhaps you guys can expand -- does it drive to very specific form factors and capacity targets for those? Or is it more of a firmware-level spec and diagnostics, and less about the actual end form factor?

27:33 PK: That's a good question, especially for folks that haven't . . . are not familiar with it. It is more of a firmware-level spec, there are more things than just the firmware requirements in there, but I would say that's the predominant bulk of the spec or requirements document, so it's what does the firmware look like, and can we get to a unified firmware? So, I think that's the majority of it. There are lots of other requirements in there. There are some specific requirements around form factors, so there's some power and performance and other things around form factors as well, so it is a pretty comprehensive spec, it covers lots of different things.

28:08 PK: But I think, from a convergence perspective -- form factor versus just firmware -- I think it's easier to get to a firmware convergence, where we can maybe have a couple of configuration options that are very quick and easy to change based on the use case. From a form-factor perspective, that will likely continue to be divergent. So, for instance, we've got to be able to handle 2U servers much more than I think the hyperscale space does; they're very focused on 1U servers, which I think drives the interest in E1, whereas we're much more focused on E3 as a future form factor. But we still get to leverage a lot of that firmware code, because really, when you look at the form factor, it's the same SoC, the same ASIC, and it would be the same firmware across different form factors, so there are still highly leveraged capabilities between them.

29:01 Jonathan: Yeah, I think there is . . . clout for E1.S. Remember also the earlier stage of this, where it started with Microsoft and Facebook especially aligning on, "Here's a common set." And, so, there's that common set, and then there's an ability to branch out only where it needs to branch out, right? So, Ross or Lee, you can comment on that as well. And it's not just form factor maybe, but firmware plus form factor plus other things as well.

29:27 LP: Right.

29:28 RS: Right.

29:28 RS: There's . . .

29:29 LP: Go ahead.

29:29 RS: There's security; there are many topics that are in common. For form factors, it does list requirements around form factors, but that doesn't mean those are the only form factors you can use, as the majority of the other content still leverages across them. The goal here is to find commonality among everybody and drive towards common sets of requirements.

29:54 LP: Yeah.

29:55 Jonathan: All right, good. So, a related question from an anonymous attendee again: "What are the key differences between an OCP Cloud SSD and a generic SSD?"

30:08 PK: Well, I think the way I would put it is, generic SSDs -- if what you're talking about in the question is generic firmware -- that's each vendor's own implementation of what they think their customers want for firmware. And what we've seen is, if you pick any drive, what you wind up getting across the industry is everybody implementing it slightly differently. So generic firmware is not implemented the same way across different suppliers; you wind up getting some features implemented and some features not implemented, and so you wind up with a hodgepodge of compatibility issues between those SSDs, depending on what the supplier decided to put in their own generic firmware. Whereas with the OCP NVMe firmware spec, we're really trying to guide everybody to say, "Here is what we need," so that across multiple suppliers they'll implement it the same way and we can have great multisourcing between suppliers; it's good for assurance of supply. So, that's really the biggest difference: it would enable us to use an SSD from multiple suppliers and not have to worry about compatibility differences between them.

31:11 Unknown Speaker: The thing . . . Kind of to riff on Paul's take there, to go even deeper: generic firmware is almost an oxymoron, unfortunately, because, as he says, each vendor does it slightly differently. But then, today -- before the OCP spec, at least -- all of us customers each had our own purchase specs, which then called out completely different things for things like, say, telemetry or whatever. So, we're all asking for the same things, but we all did it in different ways, and so it becomes this multidimensional problem that the vendors have to handle in their firmware and all their different firmware builds for every single customer, that sort of thing. We want to get away from that and actually truly have the OCP variant be that generic firmware.

32:10 Jonathan: Yeah, it makes sense. OK. And, so, it looks like we're getting close to time, we're a little bit over . . . So, I want to make sure we hit . . . We have two more questions. I'm going to ask these last two as our final questions.

32:22 Jonathan: One question from Jonathan Hughes: "Question regarding the human-readable telemetry. With each vendor requiring unique telemetry data to debug implementation-specific issues, do you foresee a requirement to standardize human-readable telemetry? If yes, then does it make more sense to drive the standard through OCP or NVMe? And any other thoughts on human-readable telemetry?" I think, Vineet, this question might be for you.

32:49 VP: Yeah, I'll take the first part of the question and I'll defer the last part to Ross to answer in more detail. Just to highlight a bit, there is a two-fold problem with today's telemetry being encrypted. The first one is just the scale: when tens of thousands of drives fail in the data center, it is just impossible to capture the telemetry information today, which is encrypted, send it to the vendor, and wait while the vendor takes its own time before it tells us the first-level debug of what's wrong with the drive. So, we need to speed that whole thing up, and the goal is, by having human-readable log information, we as hyperscalers should be able to debug what's going wrong with the drive, at least at the first level.

The second is a privacy problem. As we keep focusing more and more on privacy, at some point it is just going to be impossible to share a lot of this encrypted information back with the vendors. So that's the two-fold problem which we need to solve.

So, that kind of answers the question: yes, we need to get to standardization of the human-readable log, which can help us debug at least the first-level information of whether the drive is in the . . . or not. Now, whether it should be done in OCP or NVMe, I'll let Ross answer that. Ross, go ahead.

34:04 RS: Well, Lee, you might have some opinions on this topic. Do you want to offer any opinions?

34:09 LP: Sure. So, we take security very, very seriously at Azure, because obviously it's customer data. And, so, with that, there is a mandate that no binary data can leave the data center, ever. And so, to be able to debug the device -- as Vineet says, debug at scale -- it doesn't have to be English text, but at least we need the decoder ring, so that we know exactly what it is before we can send it on to the vendor externally. And, obviously, there is certain low-hanging fruit around how we can standardize some of these things, like, "Hey, what about a trace log? Can we standardize how a trace log is put together, so that when you dump that in telemetry, we know what's there and we can say, 'Hey, it's good to go ahead and give that to you guys to debug what the issue is.'"

Panic IDs and very fine-grained assert messages, things like that, are all great ways to be able to do this in a way that gets around that whole binary-data issue. Where it gets standardized -- NVMe, OCP -- doesn't really strongly matter to me; it's just that we need to have a decoder ring for all of that stuff.
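As a rough illustration of the "decoder ring" idea, here is a Python sketch that turns opaque panic IDs pulled from a telemetry dump into human-readable strings using a vendor-published mapping, so the host can triage without shipping binary data off-site. The IDs, messages and 16-bit encoding are invented for the example; real telemetry layouts are vendor-specific.

```python
# Sketch: decode hypothetical panic/assert IDs from a telemetry blob using
# a vendor-supplied "decoder ring" mapping.
import struct

DECODER_RING = {
    0x0001: "NAND program failure, block retired",
    0x0002: "DRAM ECC uncorrectable error",
    0x0010: "firmware assert: write path timeout",
}

def decode_panics(telemetry_blob: bytes) -> list[str]:
    """Treat the blob as a sequence of little-endian 16-bit panic IDs."""
    ids = struct.unpack(f"<{len(telemetry_blob) // 2}H", telemetry_blob)
    return [DECODER_RING.get(i, f"unknown panic 0x{i:04x}") for i in ids]

# A fabricated example dump: two known IDs and one unknown one.
blob = struct.pack("<3H", 0x0001, 0x0010, 0x00FF)
for line in decode_panics(blob):
    print(line)
```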

35:37 Jonathan: Very good, thank you. All right, one last question: "With the emergence of persistent memory and fast storage devices, memory-mapped I/O is used to access such devices. Memory-mapped I/O in the past was used for read-heavy workloads. Do you believe memory-mapped I/O will need to adapt for other kinds of workloads?" We'll start off with Matt here.

36:02 MS: Yeah, I definitely think that there are going to be some adaptations. I don't think it's necessarily going to be in the storage device, because I, for one, would not trust a storage device based on NAND to adapt to memory-mapped writes; the application's going to have to deal with that sort of thing. Now, with something like 3D XPoint, the media is able to adapt very well to taking random writes in a memory-mapped mode, so there's going to be a variety of things, I believe, that have to happen. But I definitely think that we're going to see more and more workloads start to move into this kind of semi-memory-mapped operation.

36:49 Jonathan: And to your point, I think in your presentation too, CXL and some of those standards will help with some of that as well. Very good. Anyone else want to take that question?

37:00 LP: Yeah. Persistent memory in the memory channel is awesome for doing memory-mapped I/O, of course, right? Because, especially in the systems we put together, with the DAX ability within Windows, it's a true zero-copy: you're doing loads and stores directly to that physical memory address in that persistent memory. As you start to abstract that -- going back to, hey, it's actually doing something over NVMe and over the PCIe bus or other buses -- the paradigm gets a little bit weaker. But especially if you're using a very fast device -- i.e., a 3D XPoint-type media device -- it does have some very interesting wins to help out using memory-mapped I/O, and it also then starts to harden your software as you move forward into a persistent-memory-type environment.
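To show the zero-copy contrast in miniature, here is a Python sketch comparing the buffered syscall path with plain loads and stores into a memory mapping. The mount point is a hypothetical DAX-capable location; on ordinary storage the same code runs but goes through the page cache rather than straight to persistent media.

```python
# Sketch: explicit read syscalls versus direct load/store into a mapping.
# The path is hypothetical; DAX semantics depend on the filesystem and media.
import mmap
import os

path = "/mnt/pmem0/scratch.bin"     # hypothetical DAX-mounted location
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, 4096)

# Buffered path: an explicit syscall, with data copied into a bytes object.
os.pread(fd, 16, 0)

# Mapped path: ordinary stores and loads, no per-access syscall.
m = mmap.mmap(fd, 4096)
view = memoryview(m)
view[0:5] = b"hello"                # store directly into the mapping
print(bytes(view[0:5]))             # load directly from the mapping
```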

38:01 Jonathan: Very good. Well, I think we're at our time here. Thank you so much, everybody, for joining us. Thank you, panelists -- esteemed panelists -- appreciate it. Thank you for the time, thank you for the great presentations this morning, and everyone, enjoy Flash Memory Summit. Thank you very much. Talk to you later.

38:17 LP: Thanks Jonathan.

38:17 Unknown Speaker: Thanks everyone.

38:18 Unknown Speaker: Thank you.

38:20 Unknown Speaker: Thanks Jonathan.

38:20 Jonathan: Thank you.

38:21 Unknown Speaker: Thank you.
