
Top Ten Things You Need to Know About Big Memory Management Today

As persistent memory becomes a more common form of system storage, big memory management will continue to grow to be a very popular topic. But how are implementations going? What are some key issues that have become apparent? Here are ten things you should know.

Download this presentation: Top Ten Things You Need to Know about Big Memory Management Today

00:00 Chuck Sobey: Hello, and welcome to Flash Memory Summit's panel discussion on the "Top Ten Things You Need to Know About Big Memory Management Today."

I'm Chuck Sobey with ChannelScience, and I'm also your FMS conference chair. We're glad you're able to join us for this 15th annual Flash Memory Summit, if virtually. My thanks go to Frank Berry, VP of marketing at MemVerge for organizing this great panel and providing us some challenging questions to get started. Frank structured the panel to go all the way from the medium itself up to the end user.

I'll briefly introduce our panel to you. Steve Scargall from Intel will give us the perspective of the persistent memory manufacturer; Charles Fan from MemVerge gives us the view from the software that manages big memory effectively; Kevin Tubbs from Penguin knows what it takes to create and sell big memory systems; and Hank Driskill from Cinesite will bring the perspective of someone who actually uses big memory to create new worlds on video.

01:05 CS: The way this panel discussion is structured is, we'll have an individual question selected for one panelist, and then we have some plenary questions to pose to the group. I encourage the audience to enter questions into the Q&A and we'll gather those. About five minutes before the end of this panel session, we'll stop and go to our lightning round, where we'll give everybody a chance to give their top two things we need to know about big memory management, and that will fulfill our title of the top 10.

So, our first question goes to Steve from Intel. Steve, you authored a popular book for programming persistent memory. Why was it needed? Why wasn't programming as simple as making Optane persistent memory just look like storage and DRAM, so it's plug and play?

02:05 Steve Scargall: Great question, yeah. So, persistent memory programming has some interesting challenges that are new and different from standard memory programming, such as the ability to persistently store our data directly from the application, directly to the media. Our CPUs only atomically store 8 bytes at a time, and we also need to think about cache-line granularity versus the blocks that you would typically use with storage. So, our programming model states that applications using our Persistent Memory Development Kit can take simple libraries, integrate them into their applications and take advantage of persistent memory programming and the features that come with that.
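For readers who want to see what that programming model looks like in practice, here is a minimal sketch (not from the panel) using PMDK's low-level libpmem library. The file path /mnt/pmem0/example and the pool size are assumptions, and error handling is trimmed for brevity.

```c
/* Minimal App Direct sketch using PMDK's libpmem; path and size are assumptions. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE (64 * 1024 * 1024)  /* 64 MiB backing file on a DAX-mounted filesystem */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on persistent memory directly into the address space. */
    char *addr = pmem_map_file("/mnt/pmem0/example", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Stores land in CPU caches first; only 8-byte stores are atomic,
     * so larger updates must be flushed explicitly, cache line by cache line. */
    strcpy(addr, "hello, persistent world");

    if (is_pmem)
        pmem_persist(addr, strlen(addr) + 1);   /* flush the written bytes to the media */
    else
        pmem_msync(addr, strlen(addr) + 1);     /* fall back to msync if not real pmem */

    pmem_unmap(addr, mapped_len);
    return 0;
}
```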

02:51 CS: Thank you. So, our next question goes to Charles Fan from MemVerge. Charles, MemVerge's Memory Machine lets applications use Intel Optane persistent memory. How do you make that possible, differently than Intel?

03:08 Charles Fan: That's a great question. As Steve mentioned, Intel provided a number of ways to access and manage the Optane persistent memory. There's App Direct Mode, where you can program it with the API, as Steve's book explains. It also offers a couple of compatibility modes, namely Intel Memory Mode and the Storage over App Direct mode, where it can be backward-compatible with DRAM and with storage.

MemVerge provides an additional method that is software-defined. How it differs from the native modes Intel provided is that the modes Intel provided come with the hardware and firmware and are configured for the entire system, whereas our software-defined memory machines are created for each running application and each running process. So, you can have different ratios between DRAM and PMem created for each process, and you can have different policies and algorithms configured for them. We work somewhat like Intel Memory Mode, in that we provide a DRAM-compatible service to the application, but we can do this in a more application-specific way. And we can allow more flexibility in terms of performance-cost tradeoffs, as well as better isolation between the different applications running on the same server. So, this provides an additional way that applications can consume Intel Optane persistent memory without rewriting the application.
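To make the contrast with the system-wide modes concrete -- without claiming anything about MemVerge's internals -- here is a minimal sketch of per-allocation DRAM versus PMem placement using Intel's open source memkind library. The /mnt/pmem0 mount point and the sizes are assumptions; a middleware layer could make the same choice on the application's behalf.

```c
/* Per-allocation DRAM vs. PMem placement with memkind; the mount point is an assumption. */
#include <memkind.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    memkind_t pmem_kind = NULL;

    /* Create a file-backed PMem kind on an fsdax mount; 0 means "limited by the filesystem". */
    if (memkind_create_pmem("/mnt/pmem0", 0, &pmem_kind) != 0) {
        fprintf(stderr, "memkind_create_pmem failed\n");
        return 1;
    }

    /* Hot, latency-sensitive data stays in DRAM ... */
    char *hot = memkind_malloc(MEMKIND_DEFAULT, 4096);
    /* ... while colder, capacity-hungry data goes to PMem. */
    char *cold = memkind_malloc(pmem_kind, 64 * 1024 * 1024);
    if (hot == NULL || cold == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    strcpy(hot, "index");
    memset(cold, 0, 64 * 1024 * 1024);

    memkind_free(MEMKIND_DEFAULT, hot);
    memkind_free(pmem_kind, cold);
    memkind_destroy_kind(pmem_kind);
    return 0;
}
```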

04:53 CS: Great, thank you. So, next on our panel, Kevin Tubbs from Penguin Computing. Kevin, congratulations on Penguin's being the first server supplier to offer big memory solutions. Why is it that Penguin took the risk of being on the bleeding edge of this market?

05:13 Kevin Tubbs: Thanks, Chuck. That's a very good question. We didn't really view it as a risk; it was more thought leadership. We've been tracking some trends in the market where customers are really focusing on data-driven workloads and on an abstraction away from the hardware layer, where they're looking more at capabilities. So, you're seeing this heterogeneous compute environment that is working on multiple, different workloads, a lot of it driven by data, AI and ML.

As we saw that, we knew that memory technology would be very important for that, and more importantly, software-defined architectures would be how we would deliver that capability to our end customer. So, we started off very early partnering with Intel and MemVerge, specifically, as one of those key software-defined architecture providers.

06:08 KT: And we feel that by investing in that early, we understand how to deliver this emerging technology in a way that's consumable and can be adopted by more end users faster. By doing that, we're able to deliver our platform and our capability with, as Charles said, no rewrite of the code, in a flexible manner. That allows us to deliver an optimized hardware platform to an AI/ML workload or a traditional HPC workload, all the way out to VDI. And we felt like this combination of cutting-edge memory technology and hardware, coupled with software-defined architecture, was something we should invest in.

06:56 CS: Great. And I like thought leadership rather than risk. That's a good way to look at it. The next question is for Hank from Cinesite. In your creative field, I understand that people want to use the latest tools. It's said that bleeding-edge animation tools make crash recovery especially important. Why is that?

07:16 Hank Driskill: Filmmakers often want to create imagery that's never been seen before. So, when you talk about the latest tools, it's not so much vendor tools, although those occasionally have bugs. It's more often the case that you're talking about tools that are being developed in-house to enable something that you couldn't do otherwise. The upside of that is exciting: The artists are capable of creating something they couldn't do before. The downside is you're building those tools while making the film, so they're at best a beta, and your crashes are going to be more frequent. Because the story is being iterated on all the way to the finish line, and because the art has this really high bar they're reaching for, you end up with really compressed schedules, with artists working late nights and weekends, and the cost of crashes becomes very significant to delivering the film on time.

08:10 CS: Thank you. It's great to get that insider's view. Next is Tim Stammers from 451 Research. Tim, what impact will big memory have on the industry and when do you predict it'll have it? You're on mute. That's probably 2020, you're on mute. Thanks.

08:35 Tim Stammers: Yeah. Great question. The impact is going to be large. We suspect it's going to be large -- we're pretty confident it's going to be large. How large is not quite clear, and it's more about how long it will take. We are at the very beginning of this, because Optane memory is the first mainstream storage-class memory, or persistent memory, on the market. So, we're at the very beginning of a change, and it's going to be a big change. The immediate impact, at the top level, is that we will see faster processing -- but not just faster processing, also reduced costs, greater availability and greater flexibility in the way server resources are used.

09:23 TS: It's going to involve a long-term change in database architectures. That's where companies like Charles' MemVerge come in, trying to make that transition much easier for companies by not requiring heavy rewrites of applications. And, basically, this is going to allow a huge increase in the amount of memory you can put on a motherboard or share across a cluster of servers, and it's persistent.

The key here is it's . . . There are two things here. It's not just the addition of an extra layer of memory. It's an extra layer of memory that is persistent, which of course affects availability by allowing restarts. We've already seen strong endorsements for PMem from third-party software vendors, so they're very confident this is going to deliver their customers a big boost. I think that's enough to say for now.

10:18 CS: Great. Thank you. We're going to move into the next part of our panel discussion. We call these plenary questions. I would like to open it to the floor. Maybe we can go in sequence here and after that chime in as you like. Gartner recently wrote that data is growing tremendously and the urgency to transform data into value in real time is growing at an equally rapid pace. How does that affect memory infrastructure? Steve, can we start with you?

10:50 SS: Sure. Yeah. I think this is one of my top points that we'll talk about later, but it's all about moving that data as close to the CPU as possible, where it can be processed in real time at low latency, high bandwidth. It's really where persistent memory comes in.

11:11 CS: Charles, would you like to expand on that?

11:14 CF: Sure. So, I think the trouble is that both of these are happening at the same time: The size of data is growing, and the speed required to process it is increasing as well. When you only have one without the other, there are solutions today. There are good scale-out solutions on the storage side to handle the capacity, and there are good memory-centric software frameworks on DRAM that can handle the speed. But when both are happening, there isn't a good solution.

So, what we are predicting is that there will be a more heterogeneous memory market. Today, it's dominated by DRAM, but tomorrow it will have DRAM, PMem and maybe additional types of memory, and together they can solve this problem of big and fast.

12:03 CS: Thank you. Kevin, your comments? You're on mute.

12:11 KT: Yeah. I got it. Sorry. Yeah, I totally agree with that. I think one of the key things with memory infrastructure is related to real-time or low-latency workloads, and to areas where the data is larger than memory and there's a cost to getting that workload close to the memory. We're just starting to see that. To Charles' point, in one case we thought we needed more types of compute, or specialty compute like ASICs, to deliver the kind of performance we needed, but it was actually having the right level of compute with more memory near it that allowed certain workloads, like graph analytics, to accelerate -- and to accelerate at a more cost-effective and scalable rate.

So, I think as this big memory matures, we'll learn that certain techniques and approaches that we took to accelerate workloads may not have been the only way. Having larger memory footprints and memory infrastructure, as well as data services on top delivered by persistent memory, just really gives a lot more flexibility for leveraging and optimizing workflows and applications. And I think as the data continues to grow, that need to push to real time will become really apparent with big memory computing.

13:56 CS: Great, thank you. Hank, you want to give us your take on this?

14:03 HD: Yeah, I've been working in the film industry for 26 years now, so we've gone through generations of evolution in the hardware. It's really exciting to work side by side with artists as a technical person because, no matter how much it grows, they will fill it to capacity. Every time a new advance comes along, it enables some new level of art to be created, which is just extremely exciting to be riding along with, and it creates challenges from a technical perspective in how to achieve that new, higher level.

So, on the first film I worked on, we were working on boxes with 128 megabytes of RAM. Now, the PMem box that we got for testing had 3 terabytes in it, and we filled it with high-end simulations, creating these really amazing images. So, no matter how much you give them, they'll always fill it. This has that combination of not only letting us take a big step forward in how much memory we can put in a box, but the persistence opens up all kinds of new workflows that are really exciting.

15:21 CS: Sounds exciting. I can't wait to see those films. Tim? What's your view?

15:26 TS: My immediate answer would be that the Gartner statement is a demonstration that this is not a solution looking for a problem. This is a solution that has a strong prospect of meeting a problem we already have. To echo Hank, I would say that the IT industry has never stopped finding ways of using more processing power or more processing performance, and Hank talked about graphics -- animation is a classic example of that.

What else was I going to say? Memory has long been the bottleneck for performance, and this helps solve it. So, the short answer is, the problem already exists, and this is a promising solution.

16:14 CS: Thank you. Yeah, I like that perspective. When I go to HPC conferences, the physicists there don't just talk about solving problems faster -- that's always key -- but what they're excited about is that they get to solve bigger problems than they were ever able to before, and that's, I think, what you guys are all . . . OK, great, let's move on to the next question, and I'll open it up to whoever would like to start off. The question is: The expensive, volatile and scarce DRAM user experience hasn't changed since the invention of DRAM in 1969. What took so long?

16:53 TS: I raise my hand for this one.

16:55 CS: OK.

16:57 TS: I have written about this subject a few times, and I have trotted out the same cliche again and again: The road to new memories is littered with broken promises and failed dreams. It's damn hard to do. The biggest problem is that you can make a new type of memory work in the laboratory, but manufacturing it at scale is the big challenge. There is one very large memory maker that cracked it, put a memory into service in Nokia phones, and it was taken off the market within a year. That very large memory maker fully acknowledged the failure of that project, unfortunately, and never explained why. It's such a classic example. This stuff is not easy -- not all technical challenges are going to be overcome, and that's why.

17:48 CS: Thanks. Who else would like to comment?

17:53 CF: I would have given the same answer. The physics is hard, and then the manufacturing process is hard -- there are many hoops you have to jump through to create a successful memory. In fact, if you broaden the memory type, there are only really three successful memories so far: DRAM; SRAM, which runs as part of the CPU cache; and NAND flash -- you know, the storage SSD type of non-volatile memory.

So, those are the only three that really achieved commercial success, and there are probably a hundred types of memory that have been researched, as Tim alluded to. So, it's very difficult. That's probably the most important answer, but I also think, in addition to that, how you get the software ecosystem to embrace the new memory type is also not easy, and that's something I believe Intel is trying to do.

Intel, in collaboration with Micron early on, really was the first to introduce its Optane PMem into the market, and I think what Steve and his team are working on is providing various ways for applications to use it easily. We are kind of joining the team here as a third-party ISV, providing a virtualization or middleware layer to make it easier for applications to adopt it. And I think the key to the success of the technology is how fast the applications and software are going to move on top of it.

19:32 HD: I would like to add, because you touched on it a little bit, that the advent of SSDs was a complete sea change for our industry. The move from simple illumination to global illumination -- the idea that you would actually calculate the physics of light transport around the scene as it bounced around umpteen times -- meant that you had to keep the entire scene in memory in some form or other, and that was prohibitive until the advent of SSDs, where you could swap parts of your geometry in and out of something that, while not as fast as RAM, was fast enough that it enabled it. You saw the entire industry shift in the years that followed; you saw the quality of imagery take this huge leap forward in the early 2010s, and most of the major companies in my industry started writing their own renderers to take advantage of it.

And so I'm excited about PMem and this new thing, because I was there for that -- I got to watch that all happen -- and what a huge impact that had on the quality of imagery the artists could create. So, I'm just super-excited to see what will come from this.

20:51 SS: Yeah, and just to tie some of that together, Intel was working on persistent memory -- at least what we know as Optane persistent memory today -- for almost 10 years. It took us almost a decade to get to this point, and what we learned, talking to people like Hank and people that we've worked with, is that persistent memory programming is not easy. There are a lot of hard intricacies that you have to understand, and that's why we have the book and the Persistent Memory Development Kit to abstract all that stuff away. We don't want people reinventing the wheel every time they want to adapt an application to use persistent memory -- that's not what we want developers to do. We want developers to work on development, right?

Like Hank was saying earlier, artists should be working on things that matter to artists like creating worlds, creating characters, not worrying about "I'm going to go make a cup of tea while my system reboots or my application restarts," right? That's my take on things.

21:50 KT: Yeah, and I think that also speaks to . . . We agree that the hardware is hard, but it speaks to the end user and application, and the demand driving from that end. And I think there are two different aspects: the data-driven and simulation-driven needs that Hank spoke of.

But also, similar to what we talked about with SSDs, there was this transition to software-defined architectures that really opened up the ability for hardware providers and software providers to be first-class citizens and push each other on the complexity of the hardware, which required SDKs and the ability for programmers to use it effectively. That afforded us greater, better tools and middleware. So, as we started seeing that demand, both sides were actually pushing each other faster, and we'll see that continue to grow the adoption.

22:52 CS: Thank you. I want to remind the audience that if they have a question, they can type it in the Q&A because we could go on and on. We are going to get to our lightning round in a minute or two, so to give anybody a chance to type, I'll just add my take to what we're talking about right now.

I've consulted for several memory startups, and the challenges of any new physics are quite daunting. A couple of things: You've got to scale to billions upon billions of cells, and so I do probability analysis on these types of things. If you're trying to go after memory, memory just has to work. Nobody wants to know about the errors you're hiding under the layers, and the margins you have to drive to get things like 10^16 or 10^18 types of performance numbers are very, very challenging. It's easy to get the average; it's really tough to get the distribution.
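To put rough numbers on that point, here is a back-of-the-envelope sketch (mine, not the panel's) of how even a very low uncorrectable bit error rate adds up at fleet scale. The read rate, fleet size and error rate below are all illustrative assumptions.

```c
/* Illustrative arithmetic: expected uncorrectable errors per year at fleet scale. */
#include <stdio.h>

int main(void)
{
    const double bytes_per_sec  = 20e9;       /* assumed sustained read rate per server */
    const double servers        = 10000.0;    /* assumed fleet size */
    const double seconds_per_yr = 365.0 * 24 * 3600;
    const double uber           = 1e-17;      /* assumed uncorrectable bit error rate */

    double bits_per_year   = bytes_per_sec * 8.0 * servers * seconds_per_yr;
    double errors_per_year = bits_per_year * uber;

    printf("bits read per year: %.3e\n", bits_per_year);
    printf("expected uncorrectable errors per year: %.1f\n", errors_per_year);
    return 0;
}
```

Even at one error per 10^17 bits, the sketch works out to hundreds of thousands of expected errors per year across such a fleet, which is why the tails of the distribution matter so much more than the average.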

23:57 CS: So, I'll move on to our next plenary question. Actually, we're now five minutes from the end, so it's time for us to go to the lightning round.

So, we're looking for the top two big ideas for memory management from each of you so that our audience can walk away with their top 10 list, so we'll go right down our list again and begin with Steve, please.

24:24 SS: Yeah, thanks. No. 1 on my list is, like I said before, getting data closer to the CPU. In the current memory-storage architecture, we have the CPU and DRAM, and then we have to drop down into NAND SSDs, which leaves us with a considerable memory capacity gap and a storage performance gap. By filling those two tiers with persistent memory and Optane technology in SSD form, we're pushing a lot of the data directly to the CPU, where applications can consume it directly in user space. No. 2 is similar: It's that memory now delivers storage capabilities at memory speeds.

25:06 SS: So, by keeping data in persistent memory, we can eliminate a lot of the need to page data from slower storage tiers into memory before applications can consume it. And because it is persistent, applications can now restart in seconds versus the minutes or hours it might take today, again going back to the example that Hank gave us earlier.

So, we're improving the availability of our environment significantly. We've got examples where applications can go from three nines to four nines to five nines. With the memory allocator and virtual subsystems here, we're able to intelligently place data in the most optimal tier, whether that's DRAM, PMem or storage. And then we've also got new features such as snapshots, clones, replication, deduplication, etcetera, that are all possible. And I think in the future, particularly working with Charles and MemVerge, we're now able to start using these building blocks to create pools of memory. We are no longer constrained by what we can fit in a desktop or a server -- we'll be able to use memory in the cloud at cloud scale.
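As an illustration of the "restart in seconds" point -- a sketch of mine, not Intel's or MemVerge's code -- here is what picking up application state directly from a persistent memory pool can look like with PMDK's libpmemobj. The layout name, pool path and state struct are hypothetical, and a real application would wrap multi-field updates in a libpmemobj transaction.

```c
/* Warm restart from a persistent pool; layout, path and struct are hypothetical. */
#include <libpmemobj.h>
#include <stdint.h>
#include <stdio.h>

#define LAYOUT "app_state"
#define PATH   "/mnt/pmem0/app.pool"

struct app_state {
    uint64_t records_processed;
    char     checkpoint_tag[64];
};

int main(void)
{
    /* Try to reopen an existing pool (warm restart); otherwise create a fresh one. */
    PMEMobjpool *pop = pmemobj_open(PATH, LAYOUT);
    if (pop == NULL)
        pop = pmemobj_create(PATH, LAYOUT, PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL) {
        perror("pmemobj_open/create");
        return 1;
    }

    /* The root object persists across restarts -- no reload from a storage tier. */
    PMEMoid root = pmemobj_root(pop, sizeof(struct app_state));
    struct app_state *state = pmemobj_direct(root);

    printf("resuming at record %lu (tag: %s)\n",
           (unsigned long)state->records_processed, state->checkpoint_tag);

    /* Sketch only: persist the whole struct; production code would use a transaction. */
    state->records_processed += 1;
    snprintf(state->checkpoint_tag, sizeof(state->checkpoint_tag), "run-%lu",
             (unsigned long)state->records_processed);
    pmemobj_persist(pop, state, sizeof(*state));

    pmemobj_close(pop);
    return 0;
}
```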

26:22 CS: Thank you. Charles, you're up next.

26:24 CF: Sure. My first point is that, thanks to Intel and others who are developing groundbreaking new memory hardware, we are going to see a more heterogeneous memory ecosystem, and I think there will emerge a need for a layer of software that allows applications to consume memory in a more software-defined way. So, this is my first point: There will emerge a big memory software layer that really helps applications consume the various types of memory hardware together, building intelligence and capability into that layer.

My second point is related to persistence. We believe that Optane persistent memory, by being able to persist data without requiring it to be moved to a separate storage unit, actually provides interesting ways of making applications more mobile, especially for a cloud-centric world.

27:36 CF: So, we are living in a world that's quickly moving to cloud. And there's the cloud-native movement, where stateless applications can be easily deployed anywhere and moved anywhere. But it's not so easy for stateful applications, because stateful applications use storage to hold their state, and it's not so easy to move storage. Having persistent memory and big memory software together, especially with data services such as snapshots, gives you another encapsulation of the application that makes it more mobile: You can deploy it anywhere, on any cloud, and you can move it anywhere between clouds. And I think this is going to provide another mechanism to make the cloud easier for the enterprise to adopt as well, so that's my second point.

28:21 CS: Thank you. Your top two, Kevin.

28:24 KT: Yeah, so my first one, following along with Charles and what Steve said, is that I think this is largely driven by the end-user application. We're seeing this abstraction away from the hardware layer -- not that the hardware isn't important -- but customers are more interested in "How do I accelerate my workload?" from a capability perspective. And as they do that with the data-driven workloads we're seeing this explosion in, that's going to rely on memory-centric technologies. And I think as that technology continues to grow, the key to leveraging it is software-defined architectures, similar to what Charles is saying. So, having the right ecosystem of hardware and software together, delivered in a way that allows a customer to consume it and adopt it faster, is going to be a very key point.

29:16 KT: On the second point, similar to what Charles said, it really hinges on workload portability. As I start accelerating these workloads and understanding what big memory computing can do to accelerate and add more data services and capabilities around my workflows, I'm going to want that to be pervasive across my workloads and workflows. And we're moving into a computing continuum that goes from the cloud to the data center, where we have on-premises needs, all the way out to the edge. Your ability to deliver workflow and workload portability in a software stack that takes advantage of the hardware as it moves through and operates on the data where it is, is going to be very important. And I think memory-centric computing and big memory computing are going to be at the center of real-time and accelerated workloads.

30:16 CS: Thank you. Hank, your top two.

30:21 HD: Yeah, in the animation industry, maximizing the amount of time the artist is spending creating art is absolutely essential. There's this old adage: High quality, on time, on budget -- choose two. And that adage is affected by technology. There are always going to be budget constraints, there are always going to be schedule constraints -- the movie has to finish and get into theaters, or wherever it's landing nowadays. But technology enables you to create higher quality imagery, and new imagery you couldn't create before, in the existing time, or enables you to create imagery in less time.

This particular technology is so exciting because, by minimizing downtime -- there's a huge psychological component to improving artist morale -- and enabling artists to stay productive more frequently for longer periods, it means they're going to create higher quality art within those budget and time constraints.

My second one is that the advent of PMem -- these large memories and persistent memory -- is going to enable us to think more about keeping our compute resources active.

31:31 HD: Artists right now are very reluctant to shut down an application or to step away from their desk and leave room for other things to run on the piece of hardware they're using. This is going to enable us to swap things out more easily and more quickly, so that you can free up those resources when they leave at the end of the day, whenever we allow them to do that, or even when they go to a meeting, opening up the compute resources to be used for other things, which again increases our efficiency and increases the quality of the artwork and everything.

32:06 CS: Thank you. And Tim, you want to round out with our . . .

32:08 TS: You know, once again, once again, I find myself echoing Hank. Thanks, Hank.

Yeah, I would say that although we've talked a lot about performance -- and performance is going to be really important, and I've said that myself -- I think it also needs to be remembered that the advantage of PMem will also be about flexibility and cost reduction -- cost reduction for the same level of performance is equally possible -- and availability.

And then the other lightning point I would make is that we have a lot of work to do. Companies like Charles' are trying to fill the gap and make it easier, but this is a major rework of applications, so there is going to be a huge virtualization layer and there is going to be a lot of restructuring of applications. And it can be about reducing cost, and it is also definitely about availability and flexibility. That's it.

33:07 CS: Thank you. So, we have gone over our time, and there actually are a few questions queued up. We're not up against any deadline as far as I know -- this is the last session of the day. Can you gentlemen hang on for a few minutes to field these last questions?

33:23 KT: Sure.

33:24 CS: OK great. Hank, I know you wanted to answer one. Let's see.

33:30 HD: Yeah, there was a question that came in over text. As I mentioned earlier, SSDs allowed new visual effects to emerge -- in particular, I was talking about the advent of global illumination -- and the question asks, "What type of 3D animation or visual effects will persistent memory enable?" That's part of the fun of my job: We never know the answer to that. That's what's exciting. Like I mentioned earlier, I've been doing this for 26 years, and part of why I get so excited about what I do for a living is that every new film has a new set of challenges.

Over the years, we wrote a renderer on Big Hero 6 while making the movie that needed that renderer. We rendered the entire city of San Fransokyo -- hundreds of thousands of buildings and trees, and thousands of crowd characters and all that -- putting that amount of geometry on screen for the first time ever, so that when Baymax flies over the city of San Fransokyo, you feel it, you feel that entire living, breathing city.

34:30 HD: When I supervised the film Moana, we wrote a whole new way to solve water, because we knew we had this really high artistic bar we wanted to hit with the quality of water, but we also wanted to make a thousand shots of water in that movie. So, we needed to make it extremely efficient for the artists -- we were pushing over 100 terabytes of data a day and petabytes of storage, with a massive compute farm: We had 70,000 cores cranking, rendering that movie. Every film is like that; every film has a whole new set of challenges. This is the groundwork. The exciting part is that then we can go, "What could we do with this?" And as each film comes along, it's going to push up against those boundaries, and it's going to be really, really fun.

35:19 CS: So, is there a capability that you are just pining for that you don't have yet?

35:26 HD: I would say right now, I don't think there's anything we couldn't put on screen. It's just . . . A lot of it hurts a lot. And I've described this in other conversations, but I don't think I've touched on it too much here: The story iterates and iterates and iterates as long as possible. They turn that crank on evolving the story they're creating, and you want them to do that as long as they can, because each turn of that crank makes it a better movie. And the whole goal is, you make something that means something to the world. You make something that 20 years later people are going to be showing their kids. And 20 years later from there, they're going to be showing their kids. That's the goal when you create these things.

36:14 HD: So, you want it to be the most amazing thing you can create. But turning that crank on story means -- because there's this fixed date where they yank it out of your hands to put it in theaters -- you're running out of time to actually make the thing. And so you want to make the artist experience as efficient as possible, and that is in direct tension with the high-quality bar that the individual artists are setting for themselves and the director is setting for the film as a whole. New technologies change that dynamic and let us try harder to hit a higher bar. So, I'm just excited to see what this is going to do.

36:51 CS: Thank you. Charles, you kind of wowed people in the D3 session with some of the performance that you showed. Specifically, it's nearly magic that you can do a mix of PMem and DRAM and actually have better benchmark performance than DRAM only. You touched on it in your talk; it's something to do with management. Would you flesh that out for us, please?

37:12 CF: Sure, I also typed a little answer there. So, our software bypasses the OS kernel and we do the memory management ourselves. We are more aggressive with some algorithms for memory allocation, and we are pretty optimized and configurable in the various ways we can do tiering. So, in this particular case, we achieved performance higher than DRAM. There are some other cases where we can also do that, but not all cases. In all cases, we are getting really close to the DRAM level, and this is also thanks to the various ratios we can configure through our software, live, so that we can always find the ratio that gives us DRAM-level performance. I think that's the power of being software: You can be more configurable, more agile, and you can have the right configuration to get better performance for each application. So, for this type of application -- MySQL in this case -- we were able to get some pretty good performance.

38:19 CS: I know another question that people have had; they've asked me about endurance. You mentioned that you're aggressive about the management, and typically these persistent memory devices have wear that is much more of a concern than with DRAM. What are you doing inside your Memory Machine for that?

38:38 CF: Yeah. So, DRAM has something like 10^15 write cycles you can have, and SSDs have like 10^4 or 10^3, or even down to 10^2, and Optane I think is somewhere in the range of 10^6 to 10^8. I think in the earlier generations, it was more toward 10^6, which is 1 million overwrites. And I do believe Intel probably has optimizations at their level to improve it. At our level, because we are managing various types of memory, we do have this in mind in our placement and allocation -- what data goes where -- so that we keep the more actively updated and refreshed data in DRAM, which makes sense both from a performance and from a number-of-overwrites point of view, and we can have the less-accessed memory pages placed on PMem. This way, combined together, they can achieve the write cycles required by the application.
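Here is a deliberately simplified sketch of the kind of access-frequency-based placement Charles describes: write-hot pages kept in DRAM, colder pages demoted to PMem. The tiers, counters and threshold below are all illustrative -- this is not MemVerge's actual placement logic, which is not public.

```c
/* Illustrative hot/cold placement policy; data structures and threshold are hypothetical. */
#include <stdint.h>
#include <stdio.h>

enum tier { TIER_DRAM, TIER_PMEM };

struct page_info {
    uint64_t write_count;   /* writes observed in the last sampling window */
    uint64_t read_count;    /* reads observed in the last sampling window */
    enum tier placement;
};

/* Assumed policy knob for this sketch. */
#define HOT_WRITE_THRESHOLD 64   /* writes per window that mark a page "hot" */

/* Write-hot pages stay in DRAM (better latency, and it spares PMem write cycles);
 * everything else can live on PMem capacity. */
static enum tier choose_tier(const struct page_info *p)
{
    if (p->write_count >= HOT_WRITE_THRESHOLD)
        return TIER_DRAM;
    return TIER_PMEM;
}

int main(void)
{
    struct page_info pages[3] = {
        { .write_count = 500, .read_count = 900 },  /* hot, frequently rewritten */
        { .write_count = 2,   .read_count = 300 },  /* read-mostly */
        { .write_count = 0,   .read_count = 1   },  /* cold */
    };

    for (int i = 0; i < 3; i++) {
        pages[i].placement = choose_tier(&pages[i]);
        printf("page %d -> %s\n", i,
               pages[i].placement == TIER_DRAM ? "DRAM" : "PMem");
    }
    return 0;
}
```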

39:49 CS: Great, thank you. Gentlemen, does anyone have any other comments that you wanted to get out before we say goodbye?

All right, well, I want to thank you all, and thank Flash Memory Summit, and thank the audience. This is a fascinating new field, and I'm glad we could bring it to FMS. And I know there are new entrepreneurs out there who are saying, "You know, there's something new I can do about this." And that's one of the happiest things, I think, at FMS. There have been a lot of companies, a lot of groups that have been formed from the camaraderie, and we're trying to keep that going this virtual year, and we'll be back at the Hyatt, down in the lobby, as soon as we can. Thank you all so much.
