IDC Real-World Applications and Solutions for Persistent Memory NVMe at Scale: A Radical New Approach to Improve Performance and Utilization

Introduction to Enterprise/Cloud Storage

Looking for a recap of the flash market over the years? You will find it here as well as some insights on the impact of newer technologies -- such as NVMe over Fabrics -- on system architectures.

00:01 Eric Burgener: Good morning and welcome to IDC's event, "Flash Revolutionizing the Cloud and Enterprise Storage." I'm Eric Burgener, a research analyst in the infrastructure systems group at IDC focusing on solid-state storage technologies in the enterprise. We've got a pretty full schedule for you today with participation from a number of IDC analysts and about three and a half to four hours of content. Now, before we jump into the schedule, I'd like to make some preliminary comments that'll help set the stage for some of the content that you'll see for the rest of the day. First, let's take a quick look at the evolution of flash in the enterprise.

00:39 EB: Now, all-flash arrays were first introduced back in the late 2011 time frame, and within a year or two, already a couple of the major established storage providers, IBM and EMC, had entered the space through acquisition. Now, these vendors realized this was going to be an important market and wanted to get into that space relatively early. Customers began to see some of the benefits associated with flash, despite the fact that the media itself at the time was significantly more expensive than that of hard disk drives.

01:14 EB: By 2016, it was pretty commonly known that for latency-sensitive primary workloads, the total cost of ownership of all-flash arrays was actually compellingly better than that of HDD-based systems for a number of these kinds of primary workloads. The rest of the vendors had gotten on board by that time frame, and by 2017, there were roughly 30 all-flash arrays available in the market. A number of vendors actually had multiple systems in that space.

Now, in 2017, we saw the very first NVMe-based all-flash arrays, or what IDC refers to as NAFAs. And by 2018, we started to see the major vendors jumping into this space as well. With one exception, actually: one vendor, Pure Storage, had entered the space in 2017 with an NVMe-based version of its flagship array. But in 2018, we saw the rest of the vendors begin to jump in, and that market began to grow relatively rapidly.

02:20 EB: By 2019, this was a $2 billion market, and it had extremely high growth rates -- in fact, some of the highest growth rates in the enterprise storage segment overall. Now, I think it's interesting when you think about what's happening in the market today and what happened in the past, that the introduction of NVMe and many of the technologies it enables -- persistent memory, storage-class memory, NVMe over Fabrics -- really opens up the opportunity to use these new technologies in enterprise environments.

And at IDC, we really believe that we're at the very beginning of a transition just like the one we saw with all-flash arrays starting back in 2011: within a short seven to eight years, over 80% of the revenues in the primary external storage arena were being generated by AFAs, and we think we're going to see a similarly rapid transition from SCSI-based to NVMe-based systems in the enterprise storage markets.

03:24 EB: Why flash? What have you done for me lately? (So to speak.) So from an administrative point of view, there were a lot of significant advantages. If you think about how much time was spent on performance tuning, that went basically to zero for a lot of the traditional workloads with all-flash arrays, freeing up significant time for administrators. We saw a significant reduction in the size of the infrastructure that was required to support any given level of performance and/or capacity.

This also enabled new application types because of the much lower latency, so obviously the performance advantage was a significant difference between what we saw with flash and what we saw with hard disk drives in the past. It also enabled the use of inline data services, which allowed some significant capacity savings. In fact, for mixed workloads, it was pretty typical to get data reduction ratios in the 3:1 to 5:1 range, and for secondary workloads like backup, that number could be significantly higher.

04:27 EB: There were a number of other efficiency advantages to flash as well: lower energy consumption, lower floor space consumption because you needed fewer storage devices. The servers were actually used more effectively, CPU utilization went up significantly. That meant you needed fewer servers to meet a certain level of performance, which meant you had to buy less application software on the server side, so there were savings there as well. And obviously there was a reliability advantage with the solid-state media compared to the mechanical hard disk drives or spinning disk technologies.

05:00 EB: Now, NVMe provides an order-of-magnitude improvement across almost all of these original TCO advantages. That's not necessarily true for power consumption, but for many of the others, it is clearly true, and this is another one of the contributing factors to the growth of NVMe that we see happening over the course of the next three to four years. Now, I wanted to comment on a couple of the storage trends that you'll hear analysts talking about today, and this sort of helps to set the stage for understanding the overall market.

05:33 EB: So, number one: memory-driven computing. Now, this is clearly an area where NVMe is enabling the use of persistent memory and storage-class memory products to create pools of storage that can be shared across a number of different servers. So, you can get what is basically increased main memory capacity, and you can spread that performance across a large number of applications and servers. And this will drive an ability to support a much wider array of real-time applications.

06:04 EB: Now, one of the things that we've seen as part of digital transformation is an increasingly real-time orientation. And in IDC's own DataSphere forecasts, we forecast that by 2024, about 25% of all of the data captured will be real-time data. So, we are clearly moving in that direction, and that is one of the reasons that NVMe performance is becoming so attractive in the enterprise. As enterprises deploy those real-time workloads, they need the kind of performance that they can only get from NVMe.

06:37 EB: Now, solid state is also moving in another direction. I mentioned media costs a little bit earlier. We're seeing the introduction of new media types, like quad-level cell flash media and ultimately PLC, which will allow storing of five bits of data per flash cell. This is lowering the dollar-per-gigabyte cost of that media, which then opens up the ability to use those kinds of systems with a broader array of workloads: more secondary workloads, backup, archive, tier 2, VDI, virtual machine installations. We've already seen introductions from two established vendors with QLC-based systems this year, and we expect to see more going forward. You'll hear a little bit more about how flash is being deployed in these types of secondary markets as well from Phil Goodwin a little bit later in our agenda.

07:30 EB: And finally, I wanted to mention the impact of NVMe over Fabrics on changing system architectures in the storage arena. So, if you take a look at system architectures to date, a lot of the design decisions, particularly around the shared-nothing arena, were driven by data locality considerations. If the data was not a very short distance away, in terms of the latency hop, from the compute that would be operating on it, that caused some issues. And this really limited some of the architectures -- hyper-converged infrastructure, for example, may have been limited to just the amount of storage you could get in a particular server before you had to incur that latency hop. With NVMe over Fabrics, that latency hop is being significantly reduced, and we're already starting to see startups introducing new types of systems that take advantage of that removal of the data locality barriers. We'll talk a little bit about those later in my session.

08:33 EB: Now, of course, there's cloud, and cloud has become ubiquitous. Pretty much every enterprise at this point has got hybrid cloud in place, and we're going to talk a little bit about how we see that evolving over time and how flash is used not only in on-premises IT infrastructure, but also in private cloud infrastructure and in public cloud infrastructure. So, we have a couple of sessions later today that will be delving into more detail along those lines. And finally, I just wanted to do a quick overview of the agenda.

09:04 EB: So, once I wrap my comments up, we'll have our keynote, where you'll hear from Ashish Nadkarni talking about "Flash in the Era of Digital Computing: What CIOs Need to Know." After that, we'll have two tracks of three breakout sessions each. Track one will be focusing on cloud infrastructure strategies, and track two will be focusing on solid-state storage in the enterprise.

There'll be a 5-minute break in between each one of those sessions, and those sessions will run concurrently. After the tracks are complete, we'll get back together for a 30-minute panel discussion that includes three end users who will be talking about their experiences using storage-class memory in production environments. At that point, I'll do a quick wrap-up of the sessions for the day, and we will let everybody adjourn. So, we're glad to have you with us. And at this point, I'd like to pass it off to Ashish Nadkarni.
