Guest Post

Next Great Breakthrough in Flash Memory

Breakthroughs in flash memory keep occurring, and each advance raises density and speed while decreasing costs. What will come next in flash products? Will it be multiple levels beyond QLC? Here, a panel of five leaders digs into flash technology and its future.

00:02 Leah Schoeb: Hi, my name is Leah Schoeb, and I am here with FMS. I have a great panel here to talk about breakthroughs in flash, and we're going to spend about half an hour giving you some tidbits and ideas on where we are moving forward in the industry around flash and its future. First, I would like to introduce the panel. Jung, would you like to go first and introduce yourself, and then we'll get started?

00:35 Jung Yoon: Yes, my name is Jung Yoon. I'm a Distinguished Engineer in the IBM Systems Group. My technical focus is memory technology, including flash, DRAM and storage class memory, and my responsibility at IBM has been focused on supplier technology enablement for IBM enterprise storage, compute and cloud.

01:11 LS: Sounds good. Rory?

01:14 Rory Bolt: Hi. I'm Rory Bolt. I am a principal architect and a senior fellow at Kioxia. I work in the Memory and Storage Strategy Division, and my primary focus right now is software-enabled flash and hyperscale applications of flash.

01:31 LS: Excellent. Luca?

01:32 Luca Fasoli: Good morning, everybody. My name is Luca Fasoli, and I'm at Western Digital. Our team takes care of bringing flash products all the way from development into production, and that's what I do.

01:51 LS: Sounds wonderful. Sebastien?

01:54 Sebastien Jean: Hi. My name is Sebastien, and I'm the senior director of system architecture at Phison Electronics. My focus is on identifying technology trends and working with key customers to build the products they need, and I also guide our engineering teams and help them get innovative products out to the market.

02:10 LS: Excellent, excellent. Well, it sounds like we've got a good, well-qualified team to answer the hard questions I'm about to ask everybody. So, let's start off with the first question. If we look through a crystal ball, say five to 10 years into the future, how do we see NAND architecture and technology evolving to satisfy the needs of future applications? For example, will 3D NAND scaling be able to continue by means of layer count increases and lithography scaling, driving cost reduction, higher density, better performance and lower power consumption? And will SSDs ever replace HDDs in the market with these advances?

03:12 LF: So, it's a great question. It's difficult to use our crystal ball -- I could use an 8-Ball right here -- but in reality, if we look at past history, NAND flash has been scaling for 20 years, and here at Western Digital, we can see scaling continuing for the foreseeable future. Scaling really happens along five different dimensions. One is the normal X and Y scaling, which is what we did in the 2D era. Then, of course, we have the Z dimension, which is the layer count, and that continues to increase; you need all three of them to make sure you're able to achieve the right cost structure to satisfy all of the future requirements. But there's also an aspect of logical scaling, and we have been seeing the number of bits per cell steadily increasing.

04:04 LF: In the 2D era, we had two bits per cell; three bits per cell was used in certain specific applications, but now we see three bits per cell being widespread, including in enterprise applications. And we all know very well that X4, four bits per cell, is coming, and the industry is starting to enable five bits per cell, either through special algorithms, special processing or special cell development, so that we can control all of the distributions and all of the . . . And have very tight control over what they do. So, those are the critical things.
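To make that bits-per-cell progression concrete, here is a minimal sketch (illustrative only, not from the panel) of how each extra bit doubles the number of charge levels a cell must resolve while delivering a shrinking capacity gain -- which is why X4 and X5 need the tighter distribution control Fasoli describes:

```python
# Illustrative only: each extra bit per cell doubles the charge levels to
# resolve, but the incremental capacity gain keeps shrinking.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC/X4", 4), ("PLC/X5", 5)]:
    levels = 2 ** bits
    gain = "baseline" if bits == 1 else f"+{bits / (bits - 1) - 1:.0%} capacity over {bits - 1} bits/cell"
    print(f"{name}: {bits} bits/cell -> {levels} charge levels, {gain}")
```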

The fifth element, which is only now starting, is adding intelligence as close as possible to the NAND to satisfy the needs of the application. One of the clear bottlenecks we see in systems right now is the movement of data. Moving data is expensive and takes a lot of power, so we need to create intelligence close to the memory. So, if I look five to 10 years out from today, we will definitely see very good scaling in the physical and logical dimensions, but also scaling of computation very close to the memory. This concept of computational storage is something that is starting to take hold right now and will only increase in the future.
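A rough sketch of that data-movement argument (the dataset size and selectivity below are hypothetical numbers, not a real computational storage API): filtering on the host drags every byte across the bus, while pushing the filter next to the NAND moves only the results.

```python
# Hypothetical illustration of why near-data compute cuts data movement.
DATASET_BYTES = 10**12     # assume 1 TB of raw records on the drive
SELECTIVITY = 0.001        # assume 0.1% of records match the query

host_side = DATASET_BYTES                # host filter: everything crosses the bus
near_data = DATASET_BYTES * SELECTIVITY  # on-drive filter: only matches cross

print(f"Host-side filter moves: {host_side / 1e9:,.0f} GB")
print(f"Near-data filter moves: {near_data / 1e9:,.0f} GB")
print(f"Bus traffic reduced {host_side / near_data:,.0f}x")
```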

05:29 LS: Agreed. I totally agree. Anybody else?

05:34 JY: I'd like to jump in here. As we look at the last five to seven years, it's really been amazing in terms of the exponential, explosive growth that flash has been able to enable for the industry. And as we look at technology in general, we recognize that DRAM scaling has been very painful, and logic scaling down to 7 nanometers, 5 nanometers and below faces a lot of barriers, but it's been amazing in the flash arena with 3D NAND technology and the scalability it has been offering. And as we look at the next five years, people are talking about a market size of $80 billion to $100 billion.

So, it's been a really amazing ride that we have been able to share. And as I look at some of the questions, Leah, that you have given us here, I see some boundaries that may be shaping up to define the capability for continued exponential growth of flash technology.

07:05 JY: And I'd like to think of it as a box, right? A box with four walls: the reliability wall; the physical form factor wall; third, the cost boundary; and fourth, obviously, the performance boundary. If we quickly think through a few of the aspects here, starting with reliability, a lot of the endurance and data retention issues, things of that nature, become very important. From a physical form factor standpoint, with the layer count -- we're talking about 300-plus layers in the next four to five years -- the ability to keep the die within a given Z-height, people talk about 20 microns, 25 microns, becomes an issue.

08:06 JY: There's also the need to jam in more terabytes per cubic millimeter, and some of the new form factors coming on board, such as the S1, clearly need a smaller silicon size. The third element is cost. The driving force has really been dollars per gigabyte, which flash scaling enables, along with the operational cost of the lower power consumption flash brings to the data center. And lastly, on performance, there's a strong need for low-latency flash, even more so as real-time AI workloads penetrate the marketplace, really needing high-throughput, low-latency NAND because of the large data ingestion required. So, that's my view: the box with the four walls.
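As a back-of-the-envelope illustration of the "terabytes per cubic millimeter" wall, here is a minimal sketch; the die area and capacity below are assumptions for the sake of arithmetic, not any vendor's figures, and only the 25-micron Z-height comes from the panel:

```python
# Rough volumetric density math; every number here is an assumption.
die_area_mm2 = 70.0       # hypothetical NAND die footprint
die_thickness_mm = 0.025  # the 25-micron Z-height mentioned on the panel
die_capacity_gb = 128.0   # hypothetical 1 Tb (128 GB) die

volume_mm3 = die_area_mm2 * die_thickness_mm
print(f"Die volume: {volume_mm3:.2f} mm^3")
print(f"Volumetric density: ~{die_capacity_gb / volume_mm3:.0f} GB/mm^3")
# Stacking 16 such dies multiplies capacity at the same footprint, which is
# why thinning each die (smaller Z-height) matters so much.
```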

09:24 LS: Sounds fair.

09:24 RB: And if I could . . . Sorry.

09:27 LS: Go ahead, Rory.

09:27 RB: If I could build on that. Sorry, if I could build on that: Within the box, there's a lot of flexibility in the design. You've heard the previous speakers talk about some fairly straightforward growth areas in cell density and the like, but there's also the ability to change the structure of the devices themselves, and you can always trade off between the control logic and the actual storage logic. You can make page sizes larger to reduce the cost per bit, at some sacrifice, perhaps, in performance; there have already been moves toward ultra-low-latency flash.

10:14 RB: There are a lot of variables coming into play here, and I actually see more segmentation -- more different types of flash deployed in the future to address a wide spectrum of needs in the marketplace. And the end of that question was: Will flash replace hard disks? Well, it already has in some places, and it continues to make inroads against hard disks. But it's really going to take some creative solutions to truly displace hard disks, and it's going to come not just from bit cost, but also at the TCO level, where you have to factor in the power savings, the rack space savings and the reliability savings -- no small feat. This is not exactly news for the panelists here, but people aren't aware that companies like Microsoft are actually deploying data centers in the ocean, under the water, where there is no replacing a drive, right? So, reliability and the ability to degrade capacity logically over time is a phenomenal area for us to explore in driving flash forward, and it could be yet another reason why flash memory would, in fact, continue its inroads in replacing hard disks.

11:48 SJ: I'd like to jump in, if that's OK. I've got a very similar view, which is that it is clear, I think, to everyone on the panel and the industry at large, that there's definitely room for that scaling to continue. And as SSDs and their costs continue to go down, they will displace more hard drives, but hard drives will be around for a long, long time. What we're seeing as an interesting trend is the need for drives to get smarter. Up until now, we've talked a lot about the NAND and how it's changing, but the very property that makes NAND cheaper, which is higher density, also makes data harder to retain, because there's actually less room in the individual cells to hold electrons. With electron leakage, when you're holding 10 electrons and you lose one electron -- it's significant.
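The arithmetic behind that remark can be sketched as follows; the "10 electrons" figure is the panel's own example, while the bits-per-cell window math is an illustrative simplification:

```python
# Why losing 1 of 10 electrons matters: a fixed charge window is divided
# into 2**bits levels, so level-to-level separation shrinks quickly.
electrons_stored = 10
electrons_lost = 1
shift = electrons_lost / electrons_stored
print(f"Charge shift from leakage: {shift:.0%} of the stored charge")

for bits in (3, 4, 5):  # TLC, QLC, PLC
    separation = 1 / (2**bits - 1)  # fraction of window between adjacent levels
    print(f"{bits} bits/cell: adjacent levels ~{separation:.1%} of window apart")
# At 4-5 bits per cell, a 10% charge shift can push a cell across a level
# boundary, which is exactly the retention challenge described above.
```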

12:37 SJ: So, the challenge for reliability and retention will keep going up, and when you put that into the context of, for example, a data center under the water, it becomes even more relevant. Traditionally, for the last 10 to 15 years, the way we've done data correction on the data coming off the NAND has been very iterative and algorithmic -- very straightforward in a traditional programming sense. What we're seeing at Phison is the benefit of using machine learning to identify the optimal data correction flow, and we're actually seeing an improvement over traditional algorithms. With a traditional algorithm, you will see a degradation of 50% of the performance once you are outside of the guaranteed envelope provided by the NAND manufacturer.

13:22 SJ: What we're seeing is that the performance degradation goes from losing 50% to only losing 20%, so we're able to recapture 30% of that performance. This is a trend that I think will permeate the industry as we move forward -- drives that are fundamentally smarter, to work with NAND that has more and more challenges retaining data. And that's not a slant against the NAND; it's just a reality that as things get smaller, it's harder to work with, and so the drives themselves will have to get smarter.
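The claim is straightforward arithmetic, sketched here with an arbitrary baseline:

```python
# Recapturing performance outside the NAND's guaranteed envelope.
baseline = 100.0                     # in-spec throughput, arbitrary units
traditional = baseline * (1 - 0.50)  # classic iterative correction: -50%
ml_assisted = baseline * (1 - 0.20)  # ML-selected correction flow: -20%

print(f"Traditional: {traditional:.0f} of {baseline:.0f}")
print(f"ML-assisted: {ml_assisted:.0f} of {baseline:.0f}")
print(f"Recaptured:  {ml_assisted - traditional:.0f} points (30% of baseline)")
```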

13:53 LS: I totally agree with that. And just to build on what Sebastien said: I worked at AMD, and I happened to work on the client side. There's a lot of that work going on -- on the server side with AI and data centers, but also on the enterprise client side and in the gaming industry. We're also seeing that the drive needs to get smarter and the Z-height needs to get smaller, because everybody loves their ultra-thin laptops, but we also need those drives to get smarter. So, totally agree.

I think everybody's got a really good perspective here on where they see NAND technology going in the future. So, let's move on to our next question, which this kind of leads into. What do you think are the most important market drivers that will define flash technology requirements? We're talking about density, cost, power and performance like we did before, but let's apply it not just to AI, but also to what the cloud providers and hyperscalers are doing. We still have server storage -- that has not gone away; it has increased -- plus mobile, gaming, edge compute, even IoT and industries we haven't even thought about yet. What do you think are the important market drivers?

15:27 LF: Yeah, actually, I want to build on what Sebastien just said, which is that the NAND is extremely flexible. And the answer to which market is going to drive the NAND architecture is that every single one of those markets will drive different requirements on the NAND. The 3D flash cell, which is actually quite large and quite solid -- it's a circular cell, and a lot of good comes from how it contains the electrons -- is extremely versatile. So, you can think about adapting it at the cell level, the technology level or the architectural level: for data centers and large-scale storage, X4 and X5 with high sequential performance enabled by, for example, the ZNS work we're doing; but also at the edge, on your mobile phone or a mobile gaming platform, where you are really crazy for latency -- there, what you need is super-fast memory and a super-fast cell that gives you very, very short latency. And the beauty of NAND technology is that it's very, very flexible; it's the ingenuity of our architectures that allows us to build whatever we want, and then you can build the intelligence in the NAND, or right next to the NAND, and do a lot of the processing extremely close to the NAND.

17:11 LF: That removes the bottleneck of transferring data through the CPU, by putting the intelligence in dedicated engines for processing the data. So, different markets will drive different requirements, but the beauty of NAND technology is that it's extremely flexible. We specifically see 3D NAND expanding into all of those applications, with customized solutions through either the NAND itself or the system in which the NAND is built. So, I'm really, really excited about what's going to happen over the next five to 10 years in 3D NAND.

17:50 RB: Right. And if I can add on to that: As I mentioned, there are a lot of things we can trade off to create innovative storage solutions -- a lot of flexibility, as Luca was just stating. What I think is going to become critical in the next five to 10 years is actually ensuring consistency and predictability, and coupled with that, reliability, right? It is very much the case, particularly in large distributed systems, that you're often at the mercy of the weakest link. So, you don't want any outliers; you want consistent quality of service, tight control over latencies and responses, and, frankly, tight control over failure mechanisms and how the system degrades. So, I see a lot more interaction between the host and the NAND storage devices, specifying really what the requirements are for the application -- because it's a flexible medium, but you need to tune it for your application if you're going to extract maximum value.

19:07 JY: From my perspective, being more on the enterprise side of things, with a high-reliability, high-resilience, security-centric customer base at IBM, it's very clear that for the enterprise compute, storage and hybrid multi-cloud markets we're all focused on from a growth standpoint, flash is going to be the driver and the technology enabler -- for reasons of performance, with its low-latency capability and high throughput, and for some of the reliability and cost factors we've been discussing. And I find the flexibility very interesting.

20:19 JY: The current flash breakout is TLC-dominant, but with the cost advantages of QLC, and the potential use of five-bits-per-cell PLC-type technology down the road, these are clearly very important growth drivers. Also, from a form factor standpoint, with the small form factor SSDs the industry is looking at for the cloud, the X and Y dimensions become very important -- being able to jam-pack as many terabytes per cubic millimeter as possible -- along with a clear focus on power consumption and cooling, which becomes one of the brick walls, right? So, I think those are the important things. And from an IoT and edge standpoint, again, with the large data ingestion that's needed, the big data driven by all the sensors and monitors proliferating in the industry will clearly define the shape and requirements of future flash.

21:50 JY: But, personally, I hope flash will be able to continue scaling; that really is critical from both a technology scaling and a cost scaling standpoint.

22:11 SJ: From my perspective -- and this very much echoes what the other panelists have said -- I see continued innovation coming from optimizing for specific segments of the market. Five or 10 years ago, we had client and enterprise drives. As we continue to move forward, each of these different segments has different requirements. For example, mobile, which means today's modern cell phones, is very focused on having a very tiny package, 11 millimeters by 13 millimeters, but they still want high density -- 512 gigabytes, 1 terabyte.

22:43 SJ: But the most important thing in that particular market is power, because it's all about battery life. When we talk about hyperscalers, they also care about power, but they're focused not on battery life, but rather on heating and cooling. Data centers can actually run out of cooling capacity before they run out of physical space, so they push a lot for security and low power, and they tend to focus on medium densities. Enterprise is becoming more and more interested in ultra-high density: Instead of those 4 TB and 8 TB drives, they want 8 TB, 16 TB, 32 TB and 64 TB drives, and that's also where hard drives come into play. Now, I know we're talking mostly about SSDs, but in general, enterprise is very focused on high density.

23:29 SJ: Automotive is really interesting; we've got a couple of projects in that area. Cars are basically becoming mobile data centers. They have fairly significant compute capabilities -- particularly cars that can apply brakes automatically or do different levels of driving automation -- and they require features that we've only ever seen in data centers, which are now being enabled for automotive SSDs. Then there's yet another class of product where we're actually adding compute capabilities in the SSDs themselves, to do real-time analysis on I/O patterns to enhance both performance and security. And security is permeating every aspect of the industry.

24:14 SJ: Firmware is encrypted, and signatures are checked on boot. The old days of desoldering a NOR flash and just replacing the firmware with whatever you want are gone: If it's not signed, it won't run, and that security is built right into the actual chip running the firmware. And lastly -- this is a very recent innovation, or change of direction -- is the gaming-class SSD. Traditionally, games have treated storage as cold storage and used it for bulk loads, but as gaming textures get larger and larger, there's a new workload evolving called texture streaming.
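A greatly simplified sketch of that boot-time gate: real secure boot uses asymmetric signatures (e.g., RSA or ECDSA) verified from ROM, so the hypothetical HMAC key below merely stands in for the device secret to show the refuse-to-run behavior.

```python
# Simplified stand-in for a boot-time firmware signature check.
import hashlib
import hmac

DEVICE_KEY = b"example-key-burned-into-hardware"  # hypothetical secret

def sign(firmware: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

def boot(firmware: bytes, signature: bytes) -> None:
    # Unsigned or modified firmware fails the check and does not run.
    if not hmac.compare_digest(sign(firmware), signature):
        raise RuntimeError("Signature check failed: refusing to boot")
    print("Signature OK, booting firmware")

image = b"\x7fFW-IMAGE"
boot(image, sign(image))                  # boots normally
try:
    boot(image + b"tamper", sign(image))  # the desoldered-and-patched case
except RuntimeError as err:
    print(err)
```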

24:52 SJ: That data needs to be loaded at runtime, while the game is playing, and that's where the SSD really shines, because it can guarantee gigabytes per second of bandwidth. But in all of these applications, latency -- and that came up earlier in this panel -- is key. What our customers are asking for, and of course what the workloads themselves are dictating, is I/O consistency out to more nines: Whereas before we might have looked at three or four nines, now we're looking at five nines from most segments. Five nines means one in a million -- we care about the millionth read command and making sure it doesn't have a latency spike. So, all of these things are coming together and pushing both quality and specialization per segment. That's what we're seeing on our side.
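To make "five nines" concrete, here is a small sketch that pulls tail percentiles out of a synthetic latency trace; the distribution is made up, and a real trace would come from the device:

```python
# Tail-latency check on one million synthetic read latencies: p99.999 is
# set by roughly the 10 slowest commands -- "one in a million" territory.
import random

random.seed(0)
latencies_us = sorted(random.gauss(100, 10) for _ in range(1_000_000))

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    idx = min(int(len(latencies_us) * pct / 100), len(latencies_us) - 1)
    print(f"p{pct} ({label}): {latencies_us[idx]:.1f} us")
```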

25:33 LS: Excellent, excellent. And just to build on what you were saying about gaming: It's not just texture streaming, but also how compression and decompression are being used. Most creators use compression when creating the product, but when you're receiving that product, the decompression algorithm makes all the difference in the world, and it has a huge impact on how storage handles those workloads. So, there's just so much out there, and it looks like a whole new world over the next five to 10 years.
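A rough sketch of why the decompression path matters for streamed assets -- timing how fast Python's zlib can inflate a block against a frame budget. The data, compression level and the 16 ms budget are all illustrative assumptions, and game engines typically use faster codecs than zlib:

```python
# Measure decompression throughput for a highly compressible test block.
import time
import zlib

raw = b"texture-block " * 4_000_000        # ~56 MB of compressible data
blob = zlib.compress(raw, level=6)

start = time.perf_counter()
out = zlib.decompress(blob)
elapsed = time.perf_counter() - start

rate_gbps = len(out) / elapsed / 1e9
print(f"Inflated {len(out) / 1e6:.0f} MB in {elapsed * 1000:.1f} ms "
      f"(~{rate_gbps:.2f} GB/s)")
# At 60 frames per second, a streamed texture must decompress well inside a
# ~16 ms frame budget -- the decompressor, not the SSD, can become the limit.
```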

Well, thank you very much, everyone, for participating and giving us these great insights on what's to come and how things are going to get better in every area of our lives that requires storage. And stay tuned, because we'll come back next year, think about what we said this year and see if it's still true. So, thanks to everyone on the panel and everybody out there. Have a great day, and we'll talk to you next year.
