Dell EMC storage technologists predict the top trends in 2019 will be the growth of storage class memory, NVMe-oF, multi-cloud deployments, autonomous storage and container-based technologies.
Sudhir Srinivasan, senior vice president and CTO of the Dell EMC storage business, and Danny Cobb, a vice president and corporate fellow at Dell EMC, discussed those upcoming trends during a recent podcast interview with TechTarget.
We looked at the impact that ultralow-latency storage technologies, such as 3D XPoint from Intel and Micron, Samsung's Z-NAND and Toshiba's XL-Flash, as well as end-to-end NVMe, will have on Dell EMC storage. We also talked about how Dell EMC storage will support deploying hybrid and multi-cloud infrastructure and containerized applications.
What is the premier prediction you'd like to make for 2019 in the enterprise data storage industry?
Danny Cobb: For me, the premier item is the long-awaited arrival of enterprise-grade storage class memory into our customers' data centers. We've seen over the years the slow, steady progress of storage class memory. We've watched it begin to mature, initially shipping in the client and the consumer space. Then we've seen it begin to move its way into the enterprise space in single-node and low-availability situations and things like that. And so for 2019, it finally takes that third step into the enterprise as a completely reliable, multiported, enterprise-worthy storage device that gives us as designers a whole new performance level to deal with. Relative to the 100-microsecond flash world, storage class memory now brings us down into the 10-microsecond world. And those extra 90 microseconds matter a lot for customers who have real-time storage demands and are trying to run new, advanced, high-frequency I/O workloads.
I was going to ask you for what types of customers you thought this would matter the most. Is it going to be a significant difference for them?
Cobb: There's going to be a diversity of workloads that this applies to. And so for a traditional OLTP [online transaction processing] workload, any time you can reduce latency, any time you can reduce overall server and CPU utilization, any time you can analyze more transactions per second, close more business per day or whatever that happens to be, you're providing value to the business. In the new emerging workload space, obviously people love to point at the needs of the high-frequency traders who are analyzing click or tick data in real time and then wanting to react and transact based on that real-time information. And so I think they will see tremendous benefit because you're essentially giving them a 10 times improvement in the latency for accessing storage media. And any time you can provide a 10x improvement to someone who has a real-time information need, you're giving them some real business benefit.
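Cobb's 10x figure applies to the media itself; how much of it a workload actually sees depends on how much serialized I/O sits on its critical path. A back-of-the-envelope sketch, using purely illustrative numbers rather than measured figures:

```python
# Rough sketch of why the jump from ~100-microsecond flash to
# ~10-microsecond storage class memory matters for latency-bound work.
# All numbers here are illustrative assumptions, not benchmarks.

def txns_per_second(io_per_txn, media_latency_us, software_overhead_us=20):
    """Throughput of a workload whose I/Os are serially dependent."""
    txn_latency_us = io_per_txn * (media_latency_us + software_overhead_us)
    return 1_000_000 / txn_latency_us

flash = txns_per_second(io_per_txn=4, media_latency_us=100)  # ~2,083 txn/s
scm = txns_per_second(io_per_txn=4, media_latency_us=10)     # ~8,333 txn/s

print(f"flash: {flash:,.0f} txn/s, SCM: {scm:,.0f} txn/s, "
      f"speedup: {scm / flash:.1f}x")
```

Note that with an assumed 20 microseconds of software and transport overhead per I/O, the 10x media improvement becomes roughly a 4x end-to-end gain, which is one reason the rest of the stack gets optimized alongside the media.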
What is the type of storage class memory that you think you're going to be using this year?
Cobb: I think we're going to see a variety of things available to us. The one that's on the top of everyone's mind right now is, of course, 3D XPoint from Intel and Micron. We've been discussing that quite a bit since its launch in 2015. And that's really created the definition or the requirements for what this new tier of sort of 10- or 20-microsecond class storage is going to be able to deliver. Not standing still are companies like Samsung and Toshiba who have taken a different approach. They're taking their flash technology and optimizing it further and further to reduce its latency and bring its performance into the same ballpark as 3D XPoint. And so, as storage designers, we're going to have a nice variety of choices for that fastest tier of storage media available to us. But the one that's on the tip of everyone's tongue right now is certainly 3D XPoint.
Is this going to mean that storage systems cost more when you're using storage class memory?
Cobb: There's certainly a cost difference between 3D XPoint and the traditional flash NAND that we're using. And so one aspect of it is if you're simply moving a workload from flash to 3D XPoint, that storage footprint does cost more. But it also delivers up to a 10x benefit. And so for many customers, that's worth it. But in an overall, end-to-end systems design, when you do intelligent data management, when you do tiering as we do in our storage platforms, we're able to take advantage of 3D XPoint or of low-latency flash both as the fastest tier of storage and as a way of displacing the cost of [dynamic] RAM (DRAM) in our storage arrays. So, if you just compare 3D XPoint or low-latency NAND against traditional NAND, you see a cost delta there. But if you look at the overall system cost, we also believe we can displace some of the cost of DRAM in our systems and essentially start to amortize the cost of the new media of 3D XPoint at a system level to deliver better price/performance than the predecessor could.
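Cobb's amortization argument can be made concrete with a toy cost model. All prices below are made-up placeholders chosen only to illustrate the shape of the trade-off, not real component costs:

```python
# Hypothetical system-level cost model: storage class memory (SCM) costs
# more per GB than NAND, but letting it absorb part of the DRAM cache's
# role can shrink the far more expensive DRAM footprint.
# Per-GB prices are invented placeholders, not real figures.

def system_media_cost(dram_gb, scm_gb, nand_gb,
                      dram_per_gb=8.0, scm_per_gb=2.0, nand_per_gb=0.25):
    return dram_gb * dram_per_gb + scm_gb * scm_per_gb + nand_gb * nand_per_gb

# Traditional design: large DRAM cache in front of NAND capacity.
before = system_media_cost(dram_gb=512, scm_gb=0, nand_gb=100_000)
# SCM design: smaller DRAM, an SCM tier, slightly less NAND.
after = system_media_cost(dram_gb=128, scm_gb=1_024, nand_gb=98_000)

print(f"traditional: ${before:,.0f}, SCM design: ${after:,.0f}")
```

Under these assumed prices, the SCM design's total media bill comes in below the traditional one even though SCM is pricier per gigabyte than NAND, which is the system-level amortization Cobb describes.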
That sounds interesting. Sudhir, what is your top prediction for the coming year?
Sudhir Srinivasan: The biggest one on my mind is cloud. I think cloud is certainly emerging as a big aspect of every customer's infrastructure plans. And I think in 2019, what we'll see is the mainstreaming of the hybrid, multi-cloud world -- a world where it's neither all cloud nor all on prem but a mixture of all of those, including multiple clouds. And I think we're seeing that across the board. Every customer I've talked to is thinking in that direction. And the basic idea is that you want to use the right kind of infrastructure for each kind of application or data set. So, enabling that vision is going to be key for us.
Are customers putting themselves in a situation where they're going to have a lot of silos if they use multiple clouds?
Srinivasan: I think it's sort of like the cloud version of the multivendor strategy. Every customer fears being locked into a particular vendor, or a particular cloud in this case. And so they would like the ability, in theory, to be able to move across clouds. And whether they actually leverage it or not remains to be seen. But I think what I'm hearing a lot of customers do is they will spread their bets. They will not put everything in one cloud or the other. And certain clouds are better at certain aspects or certain services than others perhaps. And so that's the basis they will be using, I think, to place their different workloads in the different clouds.
Are there additional technologies they're going to need to invest in to deal with this situation where they're using multiple clouds?
Srinivasan: Absolutely. That's a great point. Thanks for bringing that up. Part two of the answer is that we have an opportunity to enable our customers to move data and applications and workloads across all of these clouds. And because we already store the data, we understand the data, and we have technologies that allow us to move data from one location to another, that's going to be key.
Are there any other major trends you envision starting in 2019?
Cobb: Maybe one thing I'd add onto that, and it goes a bit hand in glove with the storage class memory trend, is the move to NVMe over Fabrics [NVMe-oF] and really what some in the industry will now call end-to-end NVMe. You and I have talked in the past about the emergence of NVMe and how, as a local implementation, it helps optimize the hardware-software boundary between the CPUs and the storage. But, truth be told, NVMe was created because storage class memory was on the horizon. We needed a more optimized CPU and PCIe storage implementation to take advantage of the very low latency and very high performance of storage class memory. And so NVMe came into existence with that goal in mind, first deployed for flash, but now we'll really see it hit its stride in a world of storage class memory and 10-microsecond media devices.
Then we bring in NVMe over Fabrics, and that now extends those optimizations to the network itself, whether it's the incumbent Fibre Channel SAN that's so popular among the high-reliability, high-availability enterprise storage customers or the next-generation data center network where you're using RDMA-capable Ethernet networks as a way of connecting fabrics of systems together. In any event, the fact that NVMe over Fabrics can layer seamlessly on top of those types of networks and deliver the latency and throughput advantages unlocked by storage class memory really is just another step in this end-to-end system optimization that we're seeing on the technology front right now. And for 2019, NVMe over Fabrics goes from proof of concept to production. And that's a big step. If you think about all the parts of the data center that have to be touched to deploy a new enterprise class of storage area network technology, all of those things are now lined up and ready to go and ready for production in 2019, and we'll be hearing a lot of success stories about that.
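One way to see why the protocol stack matters at these speeds is a simple latency budget. The overhead figures below are rough assumptions for illustration, not benchmarks of any particular product or transport:

```python
# Illustrative latency budgets showing why an end-to-end NVMe path
# matters once the media itself is a ~10-microsecond device.
# Overhead numbers are assumed round figures, not measurements.

MEDIA_FLASH_US = 100  # traditional flash media latency
MEDIA_SCM_US = 10     # storage class memory media latency

stacks = {
    "legacy SCSI stack": 100,  # software + transport overhead per I/O
    "NVMe-oF (RDMA)": 10,      # streamlined queueing + fabric overhead
}

for name, overhead in stacks.items():
    for media_name, media_us in [("flash", MEDIA_FLASH_US),
                                 ("SCM", MEDIA_SCM_US)]:
        total = media_us + overhead
        print(f"{name} + {media_name}: {total} us total, "
              f"{overhead / total:.0%} spent in the stack")
```

With 100-microsecond flash, a heavyweight stack's overhead was tolerable; with 10-microsecond media it would consume roughly 90% of every I/O, which is why a lean fabric protocol is the step that actually delivers storage class memory's latency over a network.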
I know we talked about some of the use cases where this technology will be particularly important. But how pervasive do you envision this technology becoming in enterprises? Do end users really need this level of speed that we're talking about with their general workloads? Or is this really going to be more of a niche technology?
Cobb: As we've seen with the adoption of any of these new technologies, we look for the areas where the business benefit is most valuable. And particularly with technologies that are driven around performance, they often show up in areas where the additional performance is an absolute requirement, because for those use cases, you sort of write your business value proposition on the back of an envelope -- you know, time equals money. Technologies like these follow the adoption trajectory of flash itself, which in 2008 went to the top of the pyramid at the most information-intensive enterprises and now, 10 or 11 years later, is pervasive across the entire industry. I don't know that these other technologies will take a full 10 years to mature and become deployed widely, but I do know that where there is demand for additional performance -- to consider larger data sets in less time to make business decisions, make predictions and have an impact -- these technologies will be deployed very, very rapidly. For areas where the technology pull and the business pull isn't quite as fast, it will take a little bit longer until it just becomes the de facto mainstream way these technologies are deployed. And there will always be some legacy around, but to a large extent, the new systems being purchased and rolled out will be based on these new technologies.
Srinivasan: I think there's a confluence of two trends here that will propel this forward even faster: NVMe over Fabrics and software-defined storage, which as we know has been growing very fast. The thing with software-defined storage is it's all Ethernet-based. And NVMe over Fabrics, especially over Ethernet, obviously allows that level of performance and reliability and so on to come into the software-defined storage world as well. So, I think the combination of those two is going to make this go even faster, although that particular aspect of it is probably a bit further out -- probably 2020 and beyond. But it's definitely a big trend.
Is there anything else that's going to be driving some of the trends we've talked about today? I hear a lot about edge devices these days. Is computing going to be done a lot more at the edge moving forward then?
Cobb: There's certainly a trend that I call the real-time edge, and that is the first place where data that's required to make some type of enterprise decision is touched by IT. And so if you think about high-speed telemetry or high-speed ingest or scenarios like a connected car doing collision avoidance or a financial trader or a credit card provider doing real-time risk analysis on purchases and things like that, there's tremendous benefit to being able to make that decision or make that inference in an AI sense at the closest point where the data comes into existence. So that means the data is there at the edge. The compute is there at the edge. And many times the decision or prediction happens right there at the edge without taking the time or even having the time to transit a network back to a core enterprise data center or certainly off into a cloud. So, the ability to get the data and the compute together and make the decision right at the edge in real time has tremendous value in these new emerging edge and internet of things deployments.
Srinivasan: We have a great example of this already happening today. A lot of these edge use cases are still really emerging and early. But the one that's actually very advanced is in video surveillance, where all of the facial recognition or license plate recognition and that kind of compute already happen at the edge. And a lot of the data is processed right there, and only the relevant pieces of data are propagated upstream into the core or the cloud.
Are there any other short- or long-term trends that you envision happening that we haven't touched on yet?
Srinivasan: The one that's really dear to my heart is what I call autonomous storage or smart storage or intelligent storage. And the idea here is that our customers want our storage devices to be more self-driving. The joke that I make all the time is if we are building self-driving cars, surely by now we can build a self-driving storage system. And we have definitely been on a mission to do that. And it consists of two pieces. Just like in the automobile world or the self-driving autonomous vehicle world, the vehicle itself, in our case the storage system, needs to have a fairly sophisticated machine learning/inference engine type of capability so it can make those real-time decisions. So, that's really in a sense your edge already. It needs to be able to make those decisions in real time.
But a second component is sort of a global brain, in the cloud perhaps. In the autonomous vehicle world, that would be your weather system or your traffic navigation system that would guide you on what the best routes are at this present moment in time or based on your driving history. So, it does the deep learning across a fleet of devices that are out there in the field under different operating conditions, and it informs the models that are running in the devices at the edge to make them more efficient and self-driving. So, we have started to see the emergence of these technologies in both locations in 2018 in our own product portfolio and other vendors as well. And I think you will see a lot more of that in 2019.
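The two-loop design Srinivasan describes, a fast local inference loop plus a slower fleet-wide learning loop, can be sketched in a few lines. The class names and the toy anomaly model below are hypothetical illustrations, not an actual product interface:

```python
# Sketch of "self-driving storage" as two loops: each array runs a
# cheap real-time check (the edge), while a cloud service learns a
# better model from the whole fleet's telemetry and pushes it back.
# The anomaly model (mean + 3 sigma) is a deliberately simple stand-in.

from statistics import mean, pstdev

class ArrayBrain:
    """Local loop: flags anomalous I/O latency samples in real time."""
    def __init__(self, threshold_us):
        self.threshold_us = threshold_us

    def is_anomalous(self, latency_us):
        return latency_us > self.threshold_us

def fleet_learn(all_samples, sigmas=3.0):
    """Cloud loop: derive a new threshold from fleet-wide telemetry."""
    return mean(all_samples) + sigmas * pstdev(all_samples)

# Latency telemetry (microseconds) from three arrays in the field.
fleet = [[110, 120, 115], [95, 100, 105], [130, 500, 125]]

# Cloud learns across the fleet, then updates every array's local model.
new_threshold = fleet_learn([s for array in fleet for s in array])
brains = [ArrayBrain(new_threshold) for _ in fleet]
print(f"pushed threshold: {new_threshold:.1f} us")
```

The division of labor mirrors the automotive analogy: the array makes millisecond-scale decisions on its own, while the fleet-wide "global brain" periodically retrains and redistributes the model.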
Are there any other major areas in storage where we expect to see new and different trends?
Srinivasan: There is one more area. It's not entirely new, in the sense that we've already seen the early stages of it in 2018, and it has to do with containerized applications and cloud-native applications -- specifically, persistent storage for those applications, including not just the storage but also data protection for that storage. So, this is all about how you provide enterprise-grade, reliable storage for applications that are built in a cloud-native, born-in-the-cloud way, whether that's containerized microservices or something else. And it has to do with having the right integration into the modern DevOps frameworks and ecosystems. So that, I think, is going to be a big topic in 2019.
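As a concrete illustration of what persistent storage for containerized applications means in practice, here is the kind of request a cloud-native app makes of its platform: a Kubernetes PersistentVolumeClaim, expressed as a Python dict mirroring the manifest. The claim name and storage class name are made-up examples:

```python
# A Kubernetes PersistentVolumeClaim: the contract by which a
# containerized app asks the platform for durable storage.
# "orders-db-data" and "enterprise-block" are hypothetical names.

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],        # single-node read/write
        "storageClassName": "enterprise-block",  # maps to backing storage
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

print(pvc["kind"], pvc["spec"]["resources"]["requests"]["storage"])
```

The DevOps integration Srinivasan mentions largely comes down to storage vendors supplying drivers, increasingly via the Container Storage Interface, that satisfy claims like this from enterprise arrays so the container platform can provision and attach volumes automatically.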
To what degree are enterprises doing this? Is it just really the largest enterprises that need this storage for containerized environments, or is it seeping down beyond that, because it involves a lot of expertise on the organization's part to take this approach? I'm wondering to what degree you expect to see this happen across all types of users.
Srinivasan: That's a great question. I think it's a bit of a sandwich, in the sense that it's definitely happening at the high-end enterprises. And that's driven primarily by them having to react to the threat of the cloud. The agility you get from these kinds of environments is what's driving them to do it. At the bottom end, though, if you're a startup today, this is how you develop your product from day one. You go into the cloud, and you start developing with a containerized, microservice-style architecture. So, you're just born that way. Those are the two things driving the adoption.