Guest Post

Relevant New Storage Technologies for Digitally Transforming Enterprises

Eric Burgener, research VP in the infrastructure systems group at IDC, explores digital transformation trends from the storage perspective and new storage technologies involved in today's digital transformation movement.

00:01 Eric Burgener: Hi, I'm Eric Burgener, a research vice president in the infrastructure systems group. Today, I'd like to talk to you about relevant new storage technologies for digitally transforming enterprises. We're going to spend the next 25 or so minutes talking about these technologies. But first, I'd like to make a few comments about the backdrop of what's happening in the industry as a whole.

So, if we look at the trends -- we've talked about a number of the trends in the events today -- digital transformation is really taking the industry by storm. And a quick definition of digital transformation is the movement towards much more data-centric business models on the part of businesses. And so this not only impacts what kind of data they capture about customers, markets and their own processes, but how they analyze and use that data to drive business change, to explore new markets, to improve customer satisfaction, things of that nature.

01:02 EB: The digitization of business models is really driving the need for some infrastructure refresh on the IT side, and we're going to be talking about storage, obviously, and flash, but there's major change over the course of the next couple of years -- a high percentage of enterprises are expecting to have to refresh their storage infrastructure as part of their digital transformation efforts.

01:33 EB: Now, against the backdrop of digital transformation, I'd like to talk about a couple of other trends that are happening in enterprise storage. So, No. 1 is the slow migration towards server-based, software-defined architectures. And this is in contrast to the more traditional SAN and NAS architectures that people have been buying over the last 30 years or so. Now, it doesn't mean that the SAN and NAS markets are going away, it just means that we've got very rapid growth in server-based, software-defined. An example of that is hyperconverged infrastructure, but there are obviously non-hyperconverged, scale-out-type solutions that are server-based and software-defined. Both of those markets, at this point, are still multibillion-dollar markets, and the traditional SAN and NAS market combined is still larger than server-based, software-defined, but we're clearly seeing a migration in that direction. And what we see a lot of CIOs asking themselves as they look to refresh storage infrastructure is, "Can I run this set of workloads on server-based, software-defined or do I need to buy another SAN or NAS system with more traditional architecture?"

02:45 EB: Now, why are people interested in server-based, software-defined? Well, there are three reasons. No. 1 is the hardware flexibility of these environments. So, you can basically run this software on different types of underlying hardware, which gives customers an opportunity to choose servers that they may already have qualified for other purposes and use those for their storage platforms as well. It also makes it easy to shift to different hardware if and when that might become a requirement, and it more easily allows the introduction of new hardware types during the lifecycle of that platform. So, it's that flexibility and agility; that's one reason.

03:26 EB: Another reason is the ease of use. So, many of these systems have been designed in the last 10 years, and they use different design principles when putting together the processes used for management. They're much more self-managing; they're much more composable than the more traditional hardware-defined architectures have been. And this is particularly interesting because we've also seen a move away from dedicated storage management resources towards more generalized IT resources managing storage: virtualization administrators, Linux and Windows administrators, taking on storage in addition to what has been their primary area. And these types of personnel are less familiar with storage -- they've been doing it for less time than dedicated administrators have -- so that ease of use becomes extremely critical in enabling those types of resources.

04:21 EB: Now, the third area is economics. And just plainly stated, a server-based, software-defined design is typically going to be less expensive than the more traditional SAN and NAS. So, as CIOs ask themselves this question, clearly, there's cost savings with server-based, software-defined, but do they have the capabilities in those platforms to be able to meet workload requirements? That's sort of what needs to be weighed as they look at that decision.

04:56 EB: Now, the other trend I want to talk about is a migration towards much more real-time data, a much more real-time orientation in business processes and IT infrastructure processes in general. Now, in IDC's DataSphere forecast, we've predicted that by 2024, about 25% of all the data captured will be real-time data, and when you're dealing with real-time data, you need a lot more performance in the storage infrastructure. That's what's driving the need for NVMe and some of the other technologies that are based on that, like persistent memory, storage class memory, the need for faster storage networks with NVMe over Fabrics, that type of thing.

05:38 EB: And one other critical issue that's important here is that while these more scale-out-type workloads have tended to need a lot more capacity and much higher throughput and bandwidth, they have generally not needed low latency or extremely high levels of availability. Well, that's changing during digital transformation, because as these new artificial intelligence and machine learning-driven workloads that run best on scale-out-type architectures are being used, they're extremely critical to the success of the organization, which makes them mission-critical, and they have to be highly available.

So, what you've seen is a mixture of some of the capabilities from the traditional enterprise side -- low latency and extremely high availability, five nines plus -- being married with capabilities that support petabyte-class capacity and extremely high levels of throughput and bandwidth. So, that's another thing to watch out for, this need for high availability. And, in fact, by 2021, we think that 60% to 70% of the Global 2000 will have real-time workloads that are also mission-critical -- there will be at least one of those, and many organizations will have more than one.

07:00 EB: Now, a little earlier, I referred to technology refresh, and the fact that a number of organizations that are undergoing digital transformation are actually planning that. We did a survey earlier this year in 2020 to take a look at the infrastructure refresh plans on the part of organizations, not only for storage, but for other areas as well: servers, data protection, network. And you can see from the results shown here that roughly 70% of all organizations are planning at least one of these types of infrastructure refresh driven directly by their digital transformation requirements -- in other words, by the workloads they're putting in place as part of DX and the requirements of those workloads: higher performance, massive scalability, high availability, et cetera. So, a lot of this will be happening in the coming two-year period.

08:00 EB: Now, as part of that refresh, what technologies are organizations most interested in? So, that was another set of questions that we asked in that same survey that happened earlier this year. The top response on the storage side was access to new types of solid-state media, and specifically what they're looking for here is persistent memory and storage class memory, and also QLC media -- quad-level cell flash media that provides a much lower dollar-per-gigabyte cost and makes all-flash platforms cost-effective in less latency-sensitive environments.

08:43 EB: So, let's talk a little bit about the new types of solid-state media here. The way IDC is defining persistent memory is media like Intel's 3D XPoint that's accessed as a byte-addressable store through a DDR4- or DDR5-type interface. A storage class memory-type device uses that same media type, 3D XPoint, but accesses it through a block interface, for example, NVMe, and there's clearly a performance difference between the two. Persistent memory is about an order of magnitude lower latency than what you get from storage class memory. Now, obviously, there's a corresponding price difference -- persistent memory is more expensive, but it's less expensive than DRAM, despite the fact that it's quite close to DRAM in terms of the latencies that it provides. People are obviously looking at these two technologies because of their performance capabilities, low latency and their ability to handle much higher degrees of throughput.
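To make the byte-addressable versus block distinction concrete, here's a minimal sketch in Python. The paths are hypothetical -- the persistent-memory case assumes a file on a DAX-mounted pmem filesystem that already exists and is at least 4 KiB -- and this illustrates the two access models, not any vendor's API.

```python
import mmap
import os

# Byte-addressable access (persistent memory): map the file into the address
# space and update individual bytes in place; no block I/O in the data path.
with open("/mnt/pmem/datafile", "r+b") as f:     # hypothetical DAX-backed file
    pm = mmap.mmap(f.fileno(), 4096)
    pm[128:136] = b"newvalue"   # an 8-byte store directly into the mapping
    pm.flush()                  # make the update persistent
    pm.close()

# Block access (storage class memory behind NVMe): every update pays for a
# whole logical block, typically 512 B or 4 KiB. The device name is
# illustrative (and opening it would need root); buffered I/O keeps it simple.
fd = os.open("/dev/nvme0n1", os.O_RDWR)
block = bytearray(os.pread(fd, 4096, 0))   # read the whole 4 KiB block ...
block[128:136] = b"newvalue"               # ... modify the same 8 bytes ...
os.pwrite(fd, bytes(block), 0)             # ... and write the whole block back
os.close(fd)
```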

09:45 EB: Now, software-defined storage -- I mentioned a little bit earlier some of the benefits around this, and it popped up as one of the top new technologies that customers undertaking storage infrastructure refresh are most interested in bringing on board. And, again, our survey data showed that CIOs are really making that decision based on, "Can I move these pre-existing workloads onto this new type of server-based, software-defined design or do I have to buy another SAN or NAS system?"

10:16 EB: The other thing that's of interest in the storage space are scale-out architectures. And what's really critical here is just the ability to easily expand capacity and performance, however you might define that, whether that's throughput, bandwidth or it's low latency across extremely large data sets. So, that's definitely one area of interest.

The other nice thing about scale-out architectures is that they provide a more graceful, nondisruptive technology growth path. So, when it comes time to implement multigenerational technology upgrades -- new hardware devices, new storage devices, things of that nature -- scale-out architectures enable that to occur without having to shut down the environment. Applications can continue to run, and rolling upgrades can be used to move the entire platform over time to later-generation technology without having to do a forklift upgrade or incur other types of disruptive operations.
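As a sketch of what that rolling-upgrade pattern looks like, here's a minimal, runnable Python version against a stubbed-out, hypothetical cluster API -- the class and method names are illustrative, not any vendor's interface:

```python
class Node:
    def __init__(self, name): self.name, self.version = name, "v1"
    def install(self, v): self.version = v   # swap software (or the node itself)
    def reboot(self): pass

class Cluster:
    def __init__(self, nodes): self._nodes = nodes
    def nodes(self): return list(self._nodes)
    def drain(self, node): pass            # migrate data and sessions off the node
    def wait_until_redundant(self): pass   # block until a fault can't lose data
    def rejoin(self, node): pass           # node starts taking traffic again
    def rebalance(self): pass              # spread data back across all nodes

def rolling_upgrade(cluster, new_version):
    """Upgrade every node in turn while the cluster keeps serving I/O."""
    for node in cluster.nodes():
        cluster.drain(node)
        cluster.wait_until_redundant()
        node.install(new_version)
        node.reboot()
        cluster.rejoin(node)
        cluster.rebalance()
    # The platform as a whole was never offline -- no forklift upgrade needed.

rolling_upgrade(Cluster([Node(f"n{i}") for i in range(4)]), "v2")
```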

11:22 EB: OK, so turbocharging the storage hierarchy. Actually, I made some of these comments on the prior slide. Quick definitions here: persistent memory and storage class memory -- we talked about those -- performance is really the differentiator, with a corresponding cost difference. Obviously, the most latency-sensitive workloads are the ones that are going to be most interested in PM, but SCM is still no slouch when it comes to latency, and there are a lot of extremely performance-sensitive workloads that'll be run on that.

11:50 EB: Now, I mentioned the NVMe-based NAND flash SSDs; these provide higher performance than SCSI-based SSDs. And we're not necessarily talking about QLC media types here -- these are made from the more traditional, performance-oriented NAND; they're not necessarily targeted towards lowering the dollar-per-gigabyte cost. So, these are the options that people require NVMe to access. And as I mentioned in my opening comments, we are seeing a pretty rapid transition in the industry from SCSI to NVMe, and these are the technologies that customers are implementing NVMe to get access to.

12:35 EB: OK, so leveraging some of these newer technologies, I wanted to just point out a couple of companies in our industry that have introduced what I think are some really interesting new products based around these technologies. So, first off, the ability to create memory pools that can be shared across large numbers of servers. There's a startup called MemVerge, and what they've basically done is create a software-based memory virtualization layer that allows customers to combine DRAM and persistent memory into a single pool, and then to connect that pool up to servers over an NVMe over Fabrics connection. So, you get much higher-capacity memory, and you'll actually get a lower cost of memory because you're blending the dollar-per-gig cost of traditional DRAM and persistent memory. So, very interesting -- it's going to enable in-memory computing at a much higher level.
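The blending arithmetic is simple enough to sketch. The capacities and per-gigabyte prices below are placeholder assumptions for illustration, not quoted figures:

```python
# Back-of-the-envelope cost of a blended DRAM + persistent memory pool.
dram_gb, dram_cost_per_gb = 512, 8.00    # assumed server DRAM price, $/GB
pmem_gb, pmem_cost_per_gb = 2048, 3.00   # assumed persistent memory price, $/GB

pool_gb = dram_gb + pmem_gb
blended = (dram_gb * dram_cost_per_gb + pmem_gb * pmem_cost_per_gb) / pool_gb
print(f"{pool_gb} GB pool at ~${blended:.2f}/GB "
      f"vs ${dram_cost_per_gb:.2f}/GB for DRAM alone")
# -> 2560 GB pool at ~$4.00/GB vs $8.00/GB for DRAM alone:
#    four times the capacity at half the per-gigabyte cost.
```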

13:32 EB: A couple of other key things about this type of computing, I think, are going to be critical to its success. No. 1 is that this memory virtualization layer enables existing applications to access that higher-performance data store without any changes, so that's going to be a key area. And then there's also the ability to apply enterprise-class data services -- things like snapshots, replication, encryption, et cetera -- to data that's stored in that memory pool. So, a very interesting company, just getting started; they've been shipping their product since earlier this year.

14:13 EB: Now, a couple of other companies have basically taken storage class memory and quad-level cell media and combined them into a single-tier system that can effectively deliver extremely low latencies based on the storage class memory's performance characteristics, yet can store large amounts of data -- multiple petabytes, tens of petabytes -- very cost-effectively, because the overall data store represents a blended dollar-per-gigabyte cost between the little bit of SCM and the QLC. And these new architectures have also done some very interesting things with data services that optimize them for use in petabyte-scale-type environments and really drive much higher levels of efficiency.

So, if you're taking a look at block, file or object-based workloads that you may want to move to these architectures, these are a couple of companies that you could potentially look at: Vast Data right now is supporting file and object, and StorOne will enable block, file and object. Really interesting architectures, and ultimately, going forward, I think this is going to become an extremely important architecture type for the industry.
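A crude way to see why the single-tier SCM + QLC combination works is to model both latency and cost as weighted averages. Every number below is an assumption for illustration, not a measured or quoted figure:

```python
# Effective latency: a small SCM layer absorbs most accesses, QLC holds the bulk.
scm_latency_us, qlc_latency_us = 10, 120   # assumed device-level read latencies
scm_hit_rate = 0.90                        # assumed share of I/Os served from SCM
effective_us = scm_hit_rate * scm_latency_us + (1 - scm_hit_rate) * qlc_latency_us
print(f"effective read latency ~{effective_us:.0f} us")    # ~21 us

# Blended cost: a little expensive SCM in front of a lot of cheap QLC.
scm_tb, qlc_tb = 50, 950                   # assumed capacity mix, TB
scm_cost, qlc_cost = 1.00, 0.08            # assumed media cost, $/GB
blended = (scm_tb * scm_cost + qlc_tb * qlc_cost) / (scm_tb + qlc_tb)
print(f"blended media cost ~${blended:.3f}/GB")            # ~$0.126/GB
```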

15:36 EB: OK, I also want to make some comments about the use of quad-level cell media on its own -- so, not necessarily with SCM. There are two established storage vendors, Pure Storage and NetApp, that have both introduced platforms based entirely around QLC media. And basically the market that they're going after with these systems is the HDD-based market -- hybrid flash arrays or pure HDD systems -- being used for secondary workloads: workloads that don't require the same latency sensitivity as maybe the traditional primaries, but that still might benefit from some of the other capabilities that solid-state media brings to the table: higher density, lower energy consumption and fewer devices required to create the infrastructure -- infrastructure efficiency advantages.

16:29 EB: On the throughput side, there's a really interesting play here. Obviously, flash provides much higher levels of throughput with smaller numbers of devices than you can get out of HDDs. So, workloads in the backup arena, archives sometimes, data analytics that may be more batch-oriented -- workloads where you're moving large amounts of data -- this extremely high-throughput capability of flash media relative to HDD-based media is a very interesting thing here. And the point I want to make is that while these systems clearly are narrowing the dollar-per-gigabyte cost difference between pure HDD-based systems and flash-based systems, they're not there yet on the raw media side. In fact, if you take a look at the raw dollar-per-gigabyte costs, QLC media is still four to five times as expensive.

17:26 EB: However, remember, you can use data reduction capabilities like compression and deduplication inline with these systems without causing any impact to the applications. And that means you can leverage 4-to-1, 5-to-1 and, in backup environments, sometimes 10-to-1 or 20-to-1 data reduction ratios without any kind of performance impact. So, that really brings the cost of these systems down very close to the benchmark on the HDD side, which is no longer the 15K rpm HDDs. What we're talking about here are the nearline SAS drives, 7,200 rpm devices, that are around 2 to 4 cents a gigabyte if you take a look at not just the device, but the overall cost of a system built out of those -- so you have to add the controller costs and all those things in as well. And, obviously, you're adding those costs in on the QLC-based systems as you take a look at that dollar-per-gig cost, but I can tell you that they're getting very close.
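Here's that arithmetic worked through. The raw prices are assumptions chosen to be consistent with the ratios above (QLC at roughly four times the nearline figure):

```python
hdd_system_cost_per_gb = 0.03   # assumed nearline 7,200 rpm system cost, $/GB
qlc_raw_cost_per_gb = 0.12      # assumed ~4x the HDD figure, $/GB

for reduction in (4, 5, 10):
    effective = qlc_raw_cost_per_gb / reduction
    print(f"{reduction}-to-1 reduction: ~${effective:.3f}/GB effective "
          f"(vs ~${hdd_system_cost_per_gb:.2f}/GB nearline HDD)")
# 4-to-1  -> ~$0.030/GB, already at the HDD benchmark
# 5-to-1  -> ~$0.024/GB
# 10-to-1 -> ~$0.012/GB, backup-style reduction ratios
```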

18:29 EB: And these systems don't have to be lower in dollar-per-gigabyte cost than the pure HDD-based systems to make them very good for some workloads. A lot of workloads in this tier 2, secondary arena really need that additional throughput capability, and they can benefit from the higher reliability of these environments. So, as you get close, you start to overcome some of those concerns. Yes, there will always be some workloads that are just better run in a pure HDD-based environment, but there's no doubt that these two companies are narrowing that gap, and we expect to see other companies introducing similar systems going forward. So, the migration of all-flash arrays into the secondary storage arena -- we think that's definitely going to be a trend going forward.

19:21 EB: OK, let me make a few comments about why you should care about NVMe. Well, No. 1, unlike SCSI, the NVMe protocol was developed specifically for solid-state storage media. SCSI was a protocol designed in the hard disk era that was later adapted for flash -- it can be used with flash media, but it's clearly not as efficient because it just wasn't created for that particular environment like NVMe was. So, there are clear efficiency advantages with NVMe, and NVMe provides access to other types of media that SCSI can't support. You can't do persistent memory or storage class memory with SCSI. You can do NAND flash media with it, but not these higher-performance types. And, obviously, the efficiency capabilities of NVMe just mean that you need less infrastructure to meet any given performance requirement. It lowers the energy consumption, and it lowers the amount of floor space or rack space that you need to build a system that meets a given performance or capacity requirement.
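As a rough illustration of that "less infrastructure" point -- the per-device IOPS ceilings below are assumptions for the sake of the arithmetic, not measured figures for any particular drive:

```python
import math

target_iops = 1_000_000
sas_ssd_iops = 100_000    # assumed per-device ceiling through a SCSI/SAS stack
nvme_ssd_iops = 500_000   # assumed per-device ceiling through NVMe

sas_devices = math.ceil(target_iops / sas_ssd_iops)     # 10 devices
nvme_devices = math.ceil(target_iops / nvme_ssd_iops)   # 2 devices
print(f"SAS: {sas_devices} devices, NVMe: {nvme_devices} devices")
# Fewer devices means fewer enclosures, less rack space and lower energy draw
# for the same performance target.
```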

20:29 EB: Another interesting thing here is that the solid-state devices that are out there tend to be higher capacity than what you get from traditional HDDs. So, there's not only the data reduction benefit, but you also get the benefit that, on a straight basis, there's higher media density available with these solid-state devices.

20:55 EB: Now, the other thing that's interesting about NVMe is that you shouldn't necessarily assume NVMe will cost you more at the system level, and let me tell the story of a couple of vendors in this space as an example. Pure Storage was the first of the larger enterprise storage vendors to introduce one of these systems. They did so in 2017. When they did that, they introduced that system at the same price point as the SCSI-based version of that array. So, basically, that took price out of the equation and said to customers, "Which technology do you want? Do you want NVMe or do you want SCSI?"

As other vendors have introduced NVMe-based versions of their own flagship systems, many, although not all of them, have taken that same approach. So, IBM basically sells their NVMe systems at a slight discount relative to their SCSI-based systems. NetApp also has price parity between the SCSI- and NVMe-based versions of their systems. And so, again, the issue there is that it just removes price as a problem.

22:08 EB: Now, there is still a small price premium with offerings from some of the vendors that's likely to go away over time, but I just caution everyone to not assume that NVMe at the storage system level will be more expensive because there are a number of vendors who have committed to price parity, and that just makes it much easier to choose an NVMe-based system. Why is that important? Even if you don't need the latency capabilities of NVMe right now, but you're going through a storage infrastructure refresh with digital transformation and the workloads you'll be deploying there over time, it's very likely that you will need those kind of capabilities within the lifetime of the array that you're buying today. So, if it doesn't cost any more, why not set yourself up for the future instead of having to incur a forklift upgrade to move from a SCSI-based system to an NVMe-based system when you actually do need that performance? So, this is another consideration that I think is helping a lot of customers make the decision to go NVMe today instead of buy another SCSI platform, and then deal with the potential of having to upgrade that to NVMe within the next several years.

23:25 EB: The other thing that's interesting about NVMe is that it provides a growth path to a much higher-performance storage network, and what we're talking about here is NVMe over Fabrics. So, we take the same efficiencies in the NVMe protocol and we basically apply them to storage networking. What you'll see with these implementations is that they're much more efficient than things like iSCSI, as an example. Now, just briefly about NVMe over Fabrics: there are three different types of transports. You can run this over Fibre Channel, over Ethernet -- and there are several flavors there -- or over InfiniBand. Most of the enterprises today are choosing either Fibre Channel or Ethernet, and what we've seen is that if you've already got a SAN in place, many of those customers are looking to just upgrade to NVMe over Fibre Channel. Most of them already have the Gen 6 Fibre Channel hardware in their systems that will support that, so it's just a software upgrade.

24:23 EB: For many of the greenfield deployments -- and this has been with a lot of the startups that are out there, like Pavilion Data, Excelero, et cetera -- customers have gone the Ethernet route. And in Ethernet, there are a couple of options. You can do RoCE, which is RDMA over Converged Ethernet; you can do iWARP; or you can do NVMe over TCP, which is not an RDMA-based version, but clearly provides a lot more performance than what you get from, for example, iSCSI. So, this is going to be a new area. The NVMe over TCP implementations are really just starting to become available in mid-2020, but ultimately, because of the cost and the ease of deployment, we think that's going to be the volume NVMe over Fabrics model that gets deployed much more widely in the enterprise. But again, keep in mind that a high percentage of customers that already have SANs will just go to NVMe over Fibre Channel because it's so easy.
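The transport choice described in the last two paragraphs boils down to a short decision rule. This sketch is a simplification of that reasoning -- the function and its inputs are hypothetical, not a real recommendation engine:

```python
def pick_nvme_of_transport(has_fc_san: bool, fc_gen: int, rdma_nics: bool) -> str:
    if has_fc_san and fc_gen >= 6:
        # Existing Gen 6 Fibre Channel hardware: NVMe/FC is just a software upgrade.
        return "NVMe over Fibre Channel"
    if rdma_nics:
        # Greenfield Ethernet with RDMA-capable NICs: RoCE (or iWARP).
        return "NVMe over RoCE"
    # Plain Ethernet -- lowest cost, easiest deployment, the expected volume play.
    return "NVMe over TCP"

print(pick_nvme_of_transport(has_fc_san=True, fc_gen=6, rdma_nics=False))
# -> NVMe over Fibre Channel
```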

25:30 EB: OK, so I want to make a few additional comments about server-based, software-defined, scale-out designs, which we're seeing more and more enterprises implement, not only in their traditional IT infrastructure, but also as an on-prem private cloud infrastructure. I talked a little earlier about some of the benefits here -- the economic benefits, that they basically cost less than traditional SAN and NAS. However, they don't necessarily provide all of the same performance, availability and functionality that you might get from traditional SAN and NAS. But every year they're getting closer, they're becoming more mature, and many customers are moving highly available workloads to these environments.

26:14 EB: So, there's the economic aspect there. Then there's the ease of management -- we talked about that a little bit earlier. They're also much easier to scale: you just add another node, the software reconfigures, and you're off to the races. And they're also easier to upgrade with new types of hardware that might become available, new storage devices. You have the ability to mix and match different types of storage devices in these environments, and you can also generally mix and match older nodes with newer nodes, which enables the rolling upgrades. So, there's a better growth path here that ultimately should allow you to extend the lifecycle of a storage platform built on the scale-out model to much more than the traditional four to five years that we tend to see with SAN and NAS systems.
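One reason "just add another node" is graceful is the way scale-out systems typically place data. A minimal consistent-hashing sketch -- illustrative, not any particular product's placement scheme -- shows that growing a cluster moves only a fraction of the data, which a background rebalance can handle while applications keep running:

```python
import hashlib
from bisect import bisect

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """A consistent-hash ring with virtual nodes for even placement."""
    def __init__(self, nodes, vnodes=128):
        points = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.hashes = [p for p, _ in points]
        self.owners = [n for _, n in points]

    def owner(self, key: str) -> str:
        i = bisect(self.hashes, h(key)) % len(self.hashes)
        return self.owners[i]

keys = [f"object-{i}" for i in range(20000)]
old = Ring(["node1", "node2", "node3"])
new = Ring(["node1", "node2", "node3", "node4"])
moved = sum(old.owner(k) != new.owner(k) for k in keys)
print(f"{moved / len(keys):.0%} of objects move when growing 3 -> 4 nodes")
# -> roughly 25%; the rest of the data never has to move at all
```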

27:02 EB: And the one other point that I didn't comment on earlier, but I'll make here, is for customers that are building hybrid cloud environments -- and, again, this is basically most enterprises today, so they've got some workloads that are running in the public cloud. If they're in the public cloud, it's highly likely -- although not necessarily true -- that they're running on some kind of server-based, software-defined, scale-out infrastructure. And if you'll be moving workloads back and forth from the public cloud onto some kind of on-premises infrastructure, whether that's traditional or more of a private cloud, it does make it easier to move them back if you've got similar architectures.

If you're going to take a workload that you've developed in the public cloud using a scale-out architecture and move that back to a traditional SAN, that's going to be a harder play than if what you're moving it back to on-prem is built out of a similar underlying architecture. And we do know from surveys that we've done this year that hybrid cloud capabilities -- integration capabilities, a unified dashboard, et cetera -- are becoming an increasingly important purchase criterion as customers look at and decide between the offerings from different storage vendors.

28:25 EB: OK, so I wanted to make a couple of comments also about storage workload consolidation. Basically, the idea here is that you've got an older system, or maybe a couple of older systems, and you're looking at newer systems that are more powerful -- they've got more performance, they can support more scalability. Is there an opportunity to migrate workloads from two, three or maybe even four older systems onto one newer system and gain some of the advantages of centralized management? You've got less infrastructure, so there are some advantages there. There's easier data sharing: we're seeing more workloads where the same data set is operated on by different applications, some of which may want to access that data through a block interface, and others may want to access it through a file or an object interface.

29:16 EB: So, in the past, you've had to move that data set around to different platforms to enable that access. With storage workload consolidation, if you buy a platform that supports multiple access methods, that completely negates the need for data migration. You also potentially have fewer storage vendors to manage, so this helps to simplify purchasing, and it also helps cement the relationship between you and your designated primary vendor a little better, which could benefit both you and them. And the ultimate belief around all of this is that if I perform this consolidation, it lowers cost.

30:01 EB: However, there are risks, and these risks are based on, in my opinion, people's views of historical limitations in storage architectures. What are those? First, there's performance -- the noisy neighbor problem: some app running in this environment has a problem, and it causes problems in other applications that are completely unrelated to that app, just because the system can't handle the aggregate performance requirement. A clear issue. There's the idea of a failure in one of the applications potentially impacting other applications in that environment and bringing them down. And there are considerations about fault domain size: if I'm running these workloads on three smaller systems and I lose one, I've only lost a third of my workloads; if they're all on one system and I happen to lose it in a catastrophic disaster, now I've lost three times as many workloads -- another critical concern.

31:01 EB: Another area of concern is whether you've got the right multi-tenant management capabilities to be able to implement security and various data services -- encryption, quality of service, inline data reduction, replication, et cetera -- at the application level, because some workloads running on a platform might need all of those, others might need none and others might need some. So, the ability to apply those data services granularly is another consideration. And then, obviously, there are sometimes organizational limitations that no feature capabilities on the part of the storage platform will resolve, and those tend to involve political ownership of applications in an organization, things of that nature.
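To make "granular data services" concrete, here's a sketch of per-workload policies. The policy fields and workload names are hypothetical, not any platform's actual configuration schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TenantPolicy:
    encryption: bool = False
    inline_reduction: bool = False          # compression + deduplication
    replication_target: Optional[str] = None
    qos_min_iops: Optional[int] = None      # floor for latency-sensitive apps
    qos_max_iops: Optional[int] = None      # cap to contain noisy neighbors

# Some workloads need all of the services, some none, some a subset --
# applied per application rather than across the whole array.
policies = {
    "oltp-db":  TenantPolicy(encryption=True, replication_target="dr-site",
                             qos_min_iops=50_000),
    "backup":   TenantPolicy(inline_reduction=True),   # reduction, no QoS needed
    "dev-test": TenantPolicy(qos_max_iops=5_000),      # capped, nothing else
}
```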

31:47 EB: For the first three areas, new technology advancements have really improved the ability to resolve these problems. On performance: NVMe can deliver latencies at scale that are at least one, if not two, orders of magnitude lower than what you get from HDDs; you've got bandwidth and throughput that can be two to three times what you get on the older types of architectures; and you're also seeing a much greater ability to do multi-tenant management at scale -- applying services granularly, things of that nature.

32:30 EB: Now, availability -- let's talk a little bit about that. If you look at 10 years ago, it was a rare storage system that would basically say, "Hey, we can deliver five-nines availability, no problem." Today, most enterprise storage platforms deliver at least that, and in fact, there are a number of vendors -- Hitachi, NetApp, HP -- that offer a 100% data availability guarantee associated with their platforms. So, vendors' confidence in, and the ability of, these systems to deliver extremely high availability has clearly gotten better, and many of those vendors have the data they've culled from their cloud-based predictive analytics telemetry systems to prove to prospective customers the level of availability they have delivered across their entire installed base over the last several years.
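For reference, here's what those "nines" translate to in allowable downtime per year -- straightforward arithmetic on the availability figures quoted above:

```python
minutes_per_year = 365.25 * 24 * 60

for nines in (4, 5, 6):
    unavailability = 10 ** -nines
    downtime_min = minutes_per_year * unavailability
    print(f"{nines} nines: ~{downtime_min:.1f} minutes of downtime per year")
# 4 nines -> ~52.6 min/year
# 5 nines -> ~5.3 min/year
# 6 nines -> ~0.5 min/year
```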

33:26 EB: So, with the availability of these systems becoming much greater, the added functionality and the added performance with NVMe and derivative technologies really open up the opportunity to perform storage workload consolidation, and more and more customers are considering this. Now, certain workloads, obviously, will never get into that arena, but there's a large percentage of them that will, and there's no doubt that if you can do it effectively, you can save some serious bucks.

33:58 EB: OK, so in closing, I'd like to provide some essential guidance to CIOs and others that'll be making storage purchase decisions going forward, whether those are because of technology refresh requirements or for greenfield opportunities. So, No. 1: if you've got SCSI-based systems and they are working for you, there's no need to upgrade those immediately to, for example, NVMe. If they're working, I would stay with them. There's a lot of capability in SCSI systems; they do a lot of good things in terms of availability and functionality, so there's no need to move.

34:34 EB: However, if you're looking at buying new systems, please consider what workloads you might be deploying in your environment as a result of digital transformation in the near term, the next 18 to 24 months. Will you require any of the capabilities of NVMe: low latency, extremely high throughput and bandwidth, the efficiency advantages of the NVMe protocol? And if you do, please consider NVMe as part of your infrastructure strategy. Keep in mind that there's no price premium to be paid for NVMe capabilities from many vendors, which means that you can get those capabilities essentially for free relative to what you might pay for a SCSI-based system. It's my considered opinion that NVMe should be part of every storage infrastructure strategy in major enterprises today. So, please keep that in mind, and understand how you'll be deploying NVMe over time in your own environment, and why.

35:34 EB: And the other thing I would urge is that you consider server-based, software-defined platforms as these systems come up for refresh. Now, many of those server-based, software-defined platforms also support NVMe, so that provides an opportunity to move to a more agile architecture while at the same time improving performance for many environments. But I will tell you that for people that continue to buy traditional SAN and NAS, there are a couple of very good reasons. No. 1: performance. If you have very large data sets that you need to rack up behind a single controller pair and deliver extremely low latency, there's nothing better than the traditional SAN and NAS designs to deliver that capability. Server-based, software-defined isn't quite there yet.

36:25 EB: However, I made some comments in the opening about data locality and how NVMe over Fabrics will be affecting how people think about data locality. As NVMe over Fabrics implementations start to become more widely available and more widely used, that may change some of those considerations around performance, because going across a network to gain access to data on another node in the cluster -- which is what you don't have to do with a traditional SAN or NAS -- can be done at much higher performance. So, keep that in mind.

37:03 EB: OK, that was performance. Second: availability. Many of these server-based, software-defined systems can support four-nines availability, and in fact, have been doing so in production environments for the last several years. But there are workloads that clearly require more: five nines, six nines. You've got vendors that are giving 100% data availability guarantees, but those are all on their traditional SAN and NAS platforms, not on their server-based, software-defined ones. So, when workloads require that level of availability and you absolutely need to lock that in, that's another great reason for buying a traditional SAN or NAS platform.

And the third area is data services functionality. Clearly, the implementations of things like snapshots and replication are much more mature in those traditional SAN and NAS environments. They certainly exist on server-based, software-defined, but that's another thing I've seen CIOs think about as they're asking, "Do I need to buy traditional SAN or NAS again, or can I move this to a different platform type?"

38:11 EB: So, please keep those three things in mind, but understand that the capabilities of server-based, software-defined have really grown up in the last several years. They can meet higher availability requirements, they have a lot of the same functionality that you get out of traditional SAN and NAS, and NVMe allows them to deliver different levels of performance than what you may traditionally associate with those kinds of scale-out, software-based architectures.

38:38 EB: Now, the other thing I'd like to say is that as you move through storage infrastructure refresh, please do consider storage workload consolidation to help drive better economics for you. There have been significant changes in platforms that address many of the performance-at-scale, noisy neighbor, availability and multi-tenant management issues that were clearly concerns in the past but shouldn't be any longer. Many of the new technologies available today -- again, NVMe in particular -- are really enabling these types of platforms to consolidate at much denser levels and help you reap the benefits of centralized administration, potentially dealing with fewer vendors, et cetera.

39:24 EB: There are a couple of vendors out there that are really focusing on dense workload consolidation and have a great story when it comes to the performance, availability and multi-tenant management concerns. So, I'd encourage you, if you're considering that, to look at vendors like Infinidat and Vast Data -- they're two that really focus on that workload consolidation aspect, although these capabilities do exist with many other platform vendors that offer similar technologies. So, that basically brings my comments to an end. I'd like to thank you for your attention.
