Download the presentation: The State of NVMe Interoperability
00:03 David Woolf: Welcome to our presentation today about NVMe interoperability. I'm David Woolf from the University of New Hampshire InterOperability Lab. And we work very closely with the NVM Express organization on compliance and interop, so we hope to have a unique and informative perspective here.
A little bit about what we'll cover today in our presentation. First, we'll talk about the architecture of the NVM Express specification and how that's really been designed to foster interoperability. Next, we'll talk about how the NVM Express organization and the community have prioritized interoperability via policy. And, finally, we'll talk about some resources that are available today to help enable interop, and really what we're talking about there is how foundations have been laid to enable NVMe interop, and what kinds of tools are available.
00:54 DW: Then we'll talk about some observations we've seen here at UNH-IOL, in our lab and in our Plugfests, with respect to interoperability. And, finally, we'll take a look to the future and how we'll make sure that interoperability is preserved. There are a lot of new features being added to the specification, and we want to make sure that we've got the foundation in place to preserve interoperability and to continue to drive adoption.
01:19 DW: Now, NVMe is unique in that it enables many different types of storage applications and use cases, while still maintaining interoperability. And that's really special and it kind of speaks to the effort and the attention to detail that's been put into the specification. And when trying to think of an illustration to demonstrate that versatility of NVM Express, initially I thought of a Swiss Army knife or a multitool. And we all love a good multitool -- they're great and, in a certain sense, the best tool for the job is the one that you have with you at a certain moment. And a Swiss Army knife certainly wins in the portability category, but if you had a big project you were going to do, you wouldn't want to tackle that with just a Swiss Army knife. If you were renovating a house or a building, you wouldn't want this single, very versatile tool. You'd always want to have the right tool for the job, because as great as the Swiss Army knife is, it's not really the best at any one thing.
02:19 DW: So, if you were doing a big project, you'd much rather have a comprehensive tool box that has all the right tools for the job. And a set of tools that works together. And I think really that tool box that has all the right tools is a much better description of NVMe, because the spec architects have really done a great job in giving implementers all the tools they need to create a wide variety of storage solutions while maintaining interoperability.
02:48 DW: Now, we're going to talk today about how that's achieved. And really one thing that you may have heard before regarding NVMe, especially with this move to flash storage, is that NVMe is the language of storage. It can support a wide variety of underlying media types, and here we've just got a nice illustration of some of the more common media types that we see. Certainly, there's vendors out there deploying NVMe drives with unique or boutique types of flash. And they can do that because NVMe is agnostic to the underlying media.
03:22 DW: And this is really important for NVMe, because even as advancements are made at the physics level, at the physical implementation of memory, NVMe solutions can take advantage of that. And, so, we can see a wide variety of NVMe solutions are all using the same protocol, despite using different media. We see some of the same characteristics on the transport side.
We're going to talk about NVMe over Fabrics. Now, here we have illustrated some common transports for NVMe-oF. They all use the NVMe protocol. And the beauty here is that as those underlying transports go through speed bumps or speed iterations, as they increase their throughput, again, the NVMe protocol can take advantage of that. And we don't want to do too much foreshadowing for the rest of the presentation, but a good example of that is the transition from PCIe 3.0 to PCIe 4.0. Now, from an NVMe perspective, there was no change. It was a seamless switch. Now, of course, for the PCI-SIG and folks that spend their days working on PCIe, that was a big jump, right? And they put a lot of work into making sure that that would work.
04:33 DW: But, again, from a protocol perspective, because NVMe protocol is, to a certain extent, transport agnostic, that transition was relatively easy. And, so, we see that kind of played out across all the different transports that NVMe uses today. Ethernet transports, Fibre Channel transports, and so forth. As those increase in speed, NVMe gets to take advantage of that. So, what do we get out of this? This NVMe tool box? Well, different flash types, different transports, soon even different command sets within NVMe. All these things enable a wide variety of applications. But the core functionality is the same and it makes it that much easier to maintain interoperability from a specification perspective, and also from a product perspective. And, so, we see that NVMe has found a home in a variety of different applications today.
So, we've talked about how the spec was architected for interop. Now, we want to dig a little bit into how the NVMe community and the NVMe organization have prioritized interop through policy. Now, first, a lot has been written about new features that are being added to the NVMe specification.
05:50 DW: There's a lot of talk now around zoned namespaces and certainly other features. And if we look to the past, the NVMe specification has grown. There have been other features added in the past, like support for reservations, support for the Sanitize operation, support for power management, and lots of new types of Identify data structures. And one thing the NVMe community has done is that every time an idea for a new feature is introduced, the technical working group there asks itself, "Is this backwards compatible? Is this going to cause a problem for interoperability?" So, that question is baked right into the process of creating the NVMe specification. So, always, they're kind of asking themselves that question about interoperability because they want to make sure that that is preserved.
06:37 DW: Now, if any kind of change to the spec or new feature is ratified and added into the specification, then it kind of comes across my desk, in a sense, at UNH-IOL, where we'll work with the Interop and Compliance Committee within NVMe to build tests around these new features, and that's kind of what I spend my days doing, because any change to the specification is a potential vector for an interoperability issue. If you have drives that support new features being connected to hosts that don't yet support those features or vice versa, then those things need to be handled, and so, we design tests to check out those corner cases and make sure that interoperability is preserved.
07:20 DW: Now, speaking about that test program that we work with the Interop and Compliance Committee within NVMe on, it has both interop and compliance components. Now, that's an important distinction. On the compliance front, we're checking specification requirements, we're building very specific test cases to make sure the devices follow those requirements, and we'll update those compliance checks about two times a year in order to stay current with features that have been added to the NVMe specification.
07:53 DW: Alongside that, that testing has an interop component, and that's where we check basic functionality with different operating systems. We'll take an SSD and make sure that, yes, you can connect it to this OS, you can connect it to this host, it's going to come up, you can do I/O to it. And, so, those two things kind of complement each other. The compliance makes sure that folks are following the specification, and then the interop makes sure that at a fundamental level, those SSDs are functioning properly, that they're working, that there is interoperability. And after we go through that process, we do have a public-facing Integrators List for products that pass compliance and interop tests in order to provide some recognition to products that have proven that they are interoperable and compliant to the spec.
08:41 DW: So you can see that NVMe has been architected for interoperability in the way the spec itself is designed, the policies that they use for adding new features, how it leverages other specifications, different media types, and all of that contributes to interoperability and preserving that, which is so important for adoption, and so important for anyone that's going to be consuming NVMe drives.
Now, we're going to talk about what's happening now, and what resources are available to enable interoperability, and one of the most important things we can talk about there is the common toolsets that are available for NVMe drives. One of those is NVMe-CLI, a command line management tool for NVMe SSDs; you can also use it for NVMe-oF targets in Linux, and we link here to a nice blog post from the NVM Express organization about how to use it. Lots of folks have done some pretty neat things with NVMe-CLI. Another common tool set is the UNH-IOL INTERACT tools.
09:49 DW: So, these are specifically designed for running those compliance tests, and they were designed in a way that they can be run on a simple host, on basically any PC that you can get your hands on. We've tried to keep the complexity of the hardware there relatively low, and that makes the compliance tests relatively portable and easier to run, and we've found that with that, testing happens more often. We make it easier for people to run these compliance tests, they run them more often, and so we've reduced that friction, or tried to reduce that friction, in getting testing done, and what we've found is that helps the tests get run more and, of course, bugs get found before things go into the field. Another thing that's worth noting about NVMe is that the specification process is tightly integrated with driver development. Some of the folks who are key contributors to the NVMe driver, the open source driver that's available in the Linux kernel, work very closely with the committee that's writing the NVMe specification, so there's a really tight feedback loop there, and lots of benefits there as well.
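To make the NVMe-CLI mention above a bit more concrete, here's a minimal Python sketch that parses the JSON output of the tool's `smart-log` command. The device path `/dev/nvme0` and the exact JSON key names (`data_units_written`, `percentage_used`) are assumptions based on common nvme-cli builds; check your version's output before relying on them.

```python
import json

def smart_summary(smart_json: str) -> dict:
    """Pull a few health fields out of `nvme smart-log <dev> -o json` output.

    The key names here are assumptions based on common nvme-cli builds;
    verify them against your installed version.
    """
    log = json.loads(smart_json)
    return {
        "data_units_written": log["data_units_written"],
        "percent_used": log["percentage_used"],  # drive wear estimate
    }

# On a real system you might obtain the JSON like this (assumes nvme-cli
# is installed and an NVMe SSD is present at /dev/nvme0):
#
#   import subprocess
#   raw = subprocess.run(["nvme", "smart-log", "/dev/nvme0", "-o", "json"],
#                        capture_output=True, text=True, check=True).stdout
#   print(smart_summary(raw))

# A canned sample so the parsing logic can be exercised without hardware:
sample = '{"data_units_written": 123456, "percentage_used": 3}'
print(smart_summary(sample))
```

The point isn't the parsing itself, but that a common, scriptable management interface across every vendor's drive is itself an interoperability win.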
11:05 DW: So, now we'll get into a little bit of that process that I talked about before, of updating test plans to address new NVMe features, and we'll pull back the curtain a little bit on exactly what we're adding into the test program currently in order to stay current with the NVMe specification. I'm going to go into a little bit of detail here, but if you want further details on any of the items listed on these next few slides, I'd really encourage you to read the test plans that we've published on the UNH-IOL site, and also take a look at the specification itself. All we're doing here is really trying to give you an overview of the current updates that are happening to the test plans.
So, you can see here we're aligning to the NVMe-MI 1.1 specification, and we're aligning the test plans to the NVMe-oF 1.1 specification. We actually did our alignment to the NVMe 1.4 specification in the past year, so right now we're really just addressing ECNs, which are errata against the specification, and TPs, or technical proposals, which are new features that have been added to the specification.
12:15 DW: So, in the next couple of slides, I'll show the details of what's happening there. Another thing that we do every time we update the test program is update test status from FYI to mandatory. Any time we introduce a new test procedure or a new test case -- when we're looking at a new feature -- we introduce those tests as FYI. They're not yet a compliance requirement, and so that gives us some time to work out and make sure our test procedure is correct, make sure that the understanding of the specification in the community is correct, and make sure that people are actually implementing these new features. As time goes on and we start to see that, hey, our test procedures are correct and we're seeing people implementing these features correctly, then we'll move those tests from an FYI status to a mandatory status.
13:08 DW: So, let's get into some of the updates that we have this time around, with respect first to NVMe-MI. Again, we're aligning to the NVMe-MI 1.1 specification. So, if you were to go look at that test plan, you would see some new tests around the Management Endpoint Buffer read/write and SCSI Enclosure Services (SES) send/receive commands, and some new tests around the auto-pause requirements. If you look down a little bit there -- new tests to address VPD read/write requirements -- that's new in NVMe-MI 1.1.
And then also, these next two bullets are pretty important; we added test cases to address how to handle Identify, Get Log Page and Get Features commands -- core NVMe admin commands -- when they're sent over NVMe-MI to an NVMe storage device, so how can those be handled properly? And then the other side of that coin is how to handle NVMe admin commands which are not allowed for NVMe enclosures. So, we added some test cases for that, and then also how to handle NVMe-MI commands that occur during a sanitize operation, making sure that it doesn't get interrupted.
14:25 DW: With respect to the NVMe-oF specification, again, we're aligning to the 1.1 spec, and we had to add some test cases, a lot of that to address errata against the NVMe-oF 1.0 specification. And there are some tests we added to make sure that associations are preserved between a host and controller -- think about it: if there's any kind of disruption to a connection between a host and a controller, is the host going to have to go through that whole enumeration and discovery process all over again? Well, if we preserve associations for a short amount of time -- the spec says two minutes -- we can save ourselves some headaches there, and so that's what those tests are checking. If you look further down, you see some tests for requirements around a controller ID of all Fs, so we want to make sure that the case of a controller ID of all Fs is handled properly; there are some clarifications in the specification there, and we've added some tests for that.
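The all-Fs controller ID case is easier to picture in code. As I understand the Fabrics spec's dynamic controller model, a host's Connect command can name a specific controller ID or pass FFFFh to ask the subsystem to allocate any available controller. This Python sketch is purely illustrative: the function name, the list-based bookkeeping, and the "first available wins" policy are my inventions, not spec text.

```python
DYNAMIC_CNTLID = 0xFFFF  # all Fs: "allocate me any available controller"

def resolve_controller(requested_cntlid, available):
    """Toy model of Connect-command controller resolution.

    A specific controller ID must match an existing controller; the
    all-Fs value requests a dynamic controller instead. Returns the
    allocated controller ID, or None if the request can't be satisfied.
    (Illustrative only -- a real subsystem's allocation policy differs.)
    """
    if requested_cntlid == DYNAMIC_CNTLID:
        return available[0] if available else None
    return requested_cntlid if requested_cntlid in available else None

print(resolve_controller(0xFFFF, [5, 7]))  # dynamic request: any controller
print(resolve_controller(7, [5, 7]))       # specific match
print(resolve_controller(9, [5, 7]))       # no such controller
```

The interop concern the tests target is exactly the boundary this sketch shows: a host and a target must agree on what all Fs means, or connections fail in confusing ways.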
15:27 DW: Some tests around the disconnect command, some tests around Keep Alive support. So Keep Alive, of course, it's kind of like this heartbeat command that gets sent to say, "Hey, yes, this fabrics connection over the network is still alive." But there's some special cases there when explicit persistent connections are supported, so we added some test cases for that. And then what to do if a persistent connection is requested but the device doesn't support it, how do we handle that, and then some other tests as well.
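The heartbeat behavior described above can be modeled in a few lines. This is a hedged sketch of the controller-side view: each Keep Alive command restarts a timer, and if the timer expires the association is torn down. The class name, the millisecond bookkeeping, and the 120-second default are illustrative choices, not values taken from the spec.

```python
class KeepAliveTimer:
    """Toy model of the controller-side Keep Alive timer: the host sends
    periodic Keep Alive commands, and if none arrives within the Keep
    Alive Timeout (KATO), the controller treats the association as dead.
    Times are in milliseconds; the default timeout is illustrative."""

    def __init__(self, kato_ms: int = 120_000, now_ms: int = 0):
        self.kato_ms = kato_ms
        self.last_heard_ms = now_ms

    def keep_alive_received(self, now_ms: int) -> None:
        # Each Keep Alive command restarts the timer.
        self.last_heard_ms = now_ms

    def association_alive(self, now_ms: int) -> bool:
        # The association survives as long as the timer hasn't expired.
        return (now_ms - self.last_heard_ms) < self.kato_ms

t = KeepAliveTimer(kato_ms=120_000)
t.keep_alive_received(now_ms=60_000)
print(t.association_alive(now_ms=150_000))  # heartbeat 90 s ago: still alive
print(t.association_alive(now_ms=200_000))  # 140 s of silence: torn down
```

The corner cases the test plan targets -- explicit persistent connections, and a persistent-connection request the device doesn't support -- are exactly the places where two implementations of a timer like this can disagree.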
Again, this is a high-level review of some of the test cases. If you're interested in any one of those features in particular, or any of those test cases, definitely dig into the test plan.
And now to NVMe 1.4. So, there are a couple of errata that we addressed here in our updates, and some new features as well: new tests around the THINP bit and how that affects the use of the Namespace Utilization field, and around the Telemetry Host-Initiated Data Generation Number incrementing. There were some clarifications there, and we wrote some test cases for that.
16:37 DW: And now some tests around the Data Units Written value. So, Data Units Written is essentially a number that allows you to understand how much data has been written to this drive: how much life does it have left, how much has it been used? And it's important to understand what kinds of events cause that number to increment and which do not, right? So, it makes sense that a write command makes the Data Units Written value increment. Write Uncorrectable and Write Zeroes commands do not impact it, so we created some test cases to make sure that those aren't incorrectly incrementing that Data Units Written value and giving the wrong idea of the drive's use or life or health or things like that. We added some tests around the Sanitize Config command, and then also about when blocks are marked as deallocated -- that they should be marked as allocated when a write command occurs -- and there are some special cases around, again, Write Uncorrectable and Write Zeroes. So, we added some tests for that, as well as around endurance groups.
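The bookkeeping above can be sketched in a few lines. The conversion factor comes from the SMART / Health log definition, where one "data unit" is 1,000 units of 512 bytes; the function name and the command sets are illustrative, with the increment/no-increment split taken from the talk.

```python
# Per the SMART / Health Information log definition, one "data unit"
# is 1,000 units of 512 bytes, i.e. 512,000 bytes.
BYTES_PER_DATA_UNIT = 512 * 1000

# Which commands bump the counter, per the talk: ordinary writes do,
# while Write Uncorrectable and Write Zeroes do not.
INCREMENTS_COUNTER = {"Write"}
DOES_NOT_INCREMENT = {"Write Uncorrectable", "Write Zeroes"}

def tb_written(data_units_written: int) -> float:
    """Convert the raw Data Units Written counter to decimal terabytes."""
    return data_units_written * BYTES_PER_DATA_UNIT / 1e12

# A counter reading of 2,000,000 data units is about 1.024 TB written.
print(tb_written(2_000_000))
```

A drive that wrongly bumps the counter on Write Zeroes would over-report its own wear, which is why the test cases single those commands out.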
17:44 DW: So, lots of changes there, and again, we do those changes about twice a year to align to the specification. Again, if you're interested in specifics there, I'd really encourage you to dig into the test plan itself. So, now we'll get into a couple of observations we've had at the lab with respect to interoperability. I mentioned this earlier, with respect to PCIe 4.0: we've seen pretty solid interoperability among drives that have come in supporting PCIe 4.0 and going through testing there. And one thing that's worth talking about here: the NVMe compliance program focuses on NVMe, on the NVMe protocol. We don't want to, and we don't, go and do a bunch of PCIe tests. We'll leave that up to the PCI-SIG; they've done a great job over the years in defining and running a bunch of tests for PCIe and maintaining compliance and interoperability there. At NVMe, we don't want to replicate that.
18:45 DW: Once in a while, we see some PCIe issues, and what we've seen there generally is that those issues stem from a misconfiguration of purchased IP. So, think of a company that goes out and purchases a PCIe PHY; maybe they don't have experts in-house to figure out how to configure that PHY properly, and there's some misconfiguration there, and we might see some PCIe interoperability problems. But those are always things that are actually relatively easy to fix, and also relatively easy to detect, because you connect something into a system and it doesn't show up. We look at layer zero first. Is it on? Is it connecting? And, again, once we see that, we can go back and reconfigure, so that's a relatively minor thing that we've seen once in a while. Again, it tends to happen when there's not a PCIe expert in-house.
Another thing that we've seen is NVMe boot. If you go back seven, eight, nine years, back to the 2012 time frame, booting off of NVMe drives could be difficult. There were a lot of very specific things you had to do depending on the host system you were using, but since then, UEFI support has become pretty much ubiquitous, and we hardly ever see problems with NVMe boot interoperability today.
20:10 DW: Also, with respect to hot plug: hot plug was defined in the PCIe spec, but it wasn't a widely implemented feature, because if you think about a PCIe card, a NIC, or something else being added into a system, it would be very unusual for it to be hot plugged. But on the storage side, you think about a drive being added to or removed from an array, and that hot plug capability is very important. And so in the beginning, we saw, again, that hot plug could cause some problems, but especially with the adoption of the U.2 form factor and the proliferation of NVMe storage arrays and NVMe servers that need hot plug, we're seeing pretty solid support for NVMe hot plug and excellent interoperability there. It's worth mentioning that we do that hot plug testing only on the U.2 form factor, because frankly hot plug doesn't really apply to an M.2 form factor or an AIC form factor.
21:14 DW: And since the only thing changing in that transition from U.2 to E1.L or E1.S drives is the connector, we're really expecting that there'll be solid interoperability there as well. And this November, when we do our Plugfest, November 2020, we'll be adding support for hot plug for EDSFF drives. And then one final note on the open source driver: very rarely have we found interoperability problems there, where changes to the driver -- perhaps a new kernel or a new release of the driver -- introduce an interoperability problem. It's very rare that we've seen that over the last almost 10 years. And, honestly, even when we have found those problems, that team is really on top of things. We see those fixes in a matter of days or weeks. So, excellent support for interoperability on that front.
22:12 DW: So, now I want to talk a little bit about what's coming down the pipe. As I mentioned before, there's a lot of buzz and excitement around new features that are being added to the specification, a lot of talk about zoned namespaces and the like. And if there's one thing I would take out of this presentation, it's that NVMe has really put an excellent foundation in place, and excellent processes in place, that have given us interoperability this far. And so even as new features get added to the spec, I think we're going to see interoperability continue. We're adding interop tests for them. We're adding compliance tests for them. So, that's something that's really going to protect interoperability and preserve it for the future, even as the specification itself grows and enables more use cases.
23:02 DW: So, in summary, quite a bit of effort has been put in to ensure that NVMe is interoperable and that interoperability has been a key driver of adoption. So, think of NVMe like a tool box that always has the right tool for the job and that has enabled the creation of an incredible variety of storage solutions, and it does that while ensuring interoperability.
23:27 DW: I want to thank you all for your time. Thank you for listening. It was a real privilege for me to be able to participate in this program, so thank you for that. If there's questions about interoperability or compliance or you'd like to learn more, certainly reach out to me at UNH-IOL. I'm pretty easy to find on Google. As well, you can look to nvmexpress.org. They've put out a lot of great material about NVMe technology and interoperability. So, thank you again.