A primer on SSD response time, other performance benchmarks
Consider several SSD performance benchmarks, including SSD response time as a key unit of measurement. Also, note the difference between benchmarks and load generators.
It's no longer big news when a vendor claims its solid-state storage system has reached a performance of 1 million IOPS. Still, it can be tough for a user to understand what these numbers mean and how published SSD performance benchmarks relate to enterprise storage performance issues.
Data storage vendors go to great lengths to demonstrate their products' high performance and to prove to storage managers that their products can handle large volumes of data center activity. Solid-state vendors started publishing SSD performance benchmarks to demonstrate how a 1U or 2U solid-state storage device could outperform a large enterprise-class storage system tricked out with thousands of drives. Vendors also wanted to demonstrate that they could not only achieve 1 million IOPS in such a small, efficient footprint, but they could do it at a fraction of the cost of a high-end storage array.
But the playing field is far from level; for example, high-end storage systems boast data protection and data management facilities that other solid-state storage offerings can't match. Many factors go into real-world performance. Clients, hosts, network connectivity, storage controllers and motherboards affect speed because the drives run in storage arrays and servers that are often networked.
Vendors that offer controller-based storage have been redesigning their storage controllers to handle the increased performance capacity offered by today's SSDs. The addition of dynamic tiering has enabled the solid-state storage layer to automatically service highly active data. When configured and tuned properly, this can greatly increase the performance of a workload. Less frequently accessed data is still stored on rotating media to minimize cost.
An understanding of SSD performance -- and how it is measured -- sheds light on how the technology can potentially boost the performance of mission-critical applications and play an important role in IT infrastructure.
Benchmarks versus load generators
To understand SSD performance benchmarks, it's important to first know the difference between a benchmark and a load generator. Often, load generators are mistaken for benchmarks because admins may use load generators to create a benchmark. There are, however, distinct differences between the two.
A benchmark is a fixed workload that has reporting rules and a fixed measurement methodology, so its characteristics can't be changed. Industry-standard benchmarks impose further restrictions, often with an independent reviewer who ensures the results comply. This gives users an apples-to-apples comparison between similar products. Two standards bodies currently offer industry-standard benchmarks for storage: the Storage Performance Council (SPC) and the Standard Performance Evaluation Corporation (SPEC). SPC measures block storage performance, and SPEC measures file storage performance.
A load generator simulates a desired load for performance characterization and helps reveal performance issues in a system or product. These generators have "knobs" to adjust the desired workload characteristics. Performance professionals and testing organizations use them to validate a product's established specifications. Results often can't be compared with those from other vendors because there's no guarantee that the test conditions were equal while measuring the system under test.
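As a rough illustration (not any particular vendor's tool), a load generator can be sketched in a few lines of Python. The "knobs" here are the block size and run duration, and the output is measured IOPS and average latency. The sketch uses the POSIX `os.pread` call, so it assumes a Unix-like host, and the function name is purely illustrative:

```python
import os
import random
import time

def run_random_read_load(path, block_size=4096, duration_s=1.0):
    """Minimal load generator: single-threaded random reads against
    a file, reporting measured IOPS and average latency."""
    blocks = os.path.getsize(path) // block_size
    fd = os.open(path, os.O_RDONLY)  # raw POSIX file descriptor
    ops = 0
    start = time.perf_counter()
    try:
        while time.perf_counter() - start < duration_s:
            # Pick a random aligned offset and read one block from it
            offset = random.randrange(blocks) * block_size
            os.pread(fd, block_size, offset)
            ops += 1
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    return {"iops": ops / elapsed, "avg_latency_us": elapsed / ops * 1e6}
```

Because the file is read through the operating system's page cache, this mostly measures the host rather than the drive; production tools such as fio bypass caching and add knobs for queue depth, threading and read/write mixes.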
It's important to be aware of these differences since vendors likely measure their IOPS results under different conditions.
Types of SSD benchmarks
Common SSD benchmarks measure the following:
- IOPS. Short for input/output operations per second, this metric measures how many reads and writes an SSD can handle each second. The higher the IOPS, the better.
- Throughput. An SSD's data transfer rate, measured in bytes per second. The higher the throughput, the better, although throughput is affected by factors such as I/O size and whether the reads and writes are random or sequential.
- Latency. Measures how long it takes to process a single I/O operation. Latency translates directly to SSD response time and is measured in microseconds or milliseconds. The lower the latency, the better.
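These three metrics are arithmetically linked, which helps when reading vendor numbers: throughput is IOPS times I/O size, and Little's law ties sustained IOPS to queue depth and latency. The function names below are illustrative, not a standard API:

```python
def throughput_mbps(iops, block_size_bytes):
    """Throughput follows directly from IOPS and the I/O size."""
    return iops * block_size_bytes / 1e6

def iops_from_queue(outstanding_ios, avg_latency_s):
    """Little's law: concurrency = rate x latency, so sustained
    IOPS = outstanding I/Os / average latency."""
    return outstanding_ios / avg_latency_s

# 1 million IOPS at 4 KB per I/O is about 4.1 GBps of throughput
print(throughput_mbps(1_000_000, 4096))  # -> 4096.0 MBps
# 32 outstanding I/Os at ~100 microseconds each sustains ~320,000 IOPS
print(iops_from_queue(32, 100e-6))
```

The second relationship is why a reported IOPS number means little without the queue depth and average response time alongside it.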
Price can also be a factor when buying SSDs. Cost considerations include dollars per IOPS, dollars per watt and dollars per rack unit. Published SPC benchmark results include price per performance numbers.
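Price-per-performance arithmetic is simple division; the figures below are hypothetical, used only to show the calculation:

```python
def dollars_per_iops(system_price_usd, sustained_iops):
    """Cost efficiency metric used in published price/performance results."""
    return system_price_usd / sustained_iops

# Hypothetical example: a $250,000 system delivering 500,000 sustained IOPS
print(dollars_per_iops(250_000, 500_000))  # -> 0.5, i.e., $0.50 per IOPS
```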
Factors that affect SSD response time
Solid-state storage has unique behavioral characteristics. Because SSDs have no moving parts, HDD metrics such as rotational latency and seek time don't apply. With those mechanical delays eliminated, SSD response times are usually measured in microseconds, compared to milliseconds for HDDs. It's important for users to understand how these measurements are performed to ensure that reported results represent a verifiable and sustainable level of performance. On average, SSD response times are 10 times faster than HDD response times for random writes.
Single-level cell (SLC) SSDs have faster access times than multi-level cell (MLC) SSDs. DRAM-based solid-state storage is currently the fastest, with average response times around 10 microseconds, compared with roughly 100 microseconds for NAND flash SSDs. NAND memory has progressed from SLC to MLC to triple-level cell (TLC) and quad-level cell (QLC). Each step up in bits per cell has lowered the price of those SSDs but also reduced performance and endurance. SLC SSDs have the best access times and QLC the worst.
Two SSDs with similar performance numbers won't always perform the same in the data center. A drive's interface, bits per cell, software and storage protocol also affect overall storage system or server performance.
NVMe is the fastest interface for SSDs because it communicates over the PCIe bus instead of the slower SATA bus. A typical NVMe SSD transfers data over four PCIe lanes, each of which carries far more bandwidth than SATA's single 6 Gbps link. NVMe SSDs were also designed to reduce flash latencies and SSD response time.
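The size of that gap can be estimated from the published line rates and encodings (PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding; SATA III runs at 6 Gbps with 8b/10b encoding). The arithmetic is a back-of-the-envelope sketch, ignoring protocol overhead:

```python
# Usable bandwidth per PCIe 4.0 lane: 16 GT/s with 128b/130b encoding
pcie4_lane_gbps = 16 * 128 / 130      # ~15.75 Gbps per lane
nvme_x4_gbps = 4 * pcie4_lane_gbps    # typical NVMe SSD uses 4 lanes

# SATA III: a single 6 Gbps link with 8b/10b encoding
sata3_gbps = 6 * 8 / 10               # 4.8 Gbps usable

print(nvme_x4_gbps / sata3_gbps)      # roughly a 13x bandwidth advantage
```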
Fibre Channel is still the highest-performing protocol, but SAS isn't far behind. Most SSD products built around iSCSI and SATA won't produce 1 million IOPS results unless they have other caching features to assist performance.
The location of solid-state storage in the I/O path can also be a factor in producing a million IOPS result. Microsecond SSD response times are easier to achieve if the drive is located closer to the host. Many vendors have taken advantage of this fact with PCIe SSDs and flash cards that plug in to a host like internal HDDs.
Solid-state storage performance measurement
Here are four main steps to demonstrate sustained solid-state performance:
- Create a common starting point. Solid-state storage needs to be in a known, repeatable state. Popular starting points are a new SSD that has never been written to, or a low-level format that wipes an SSD's contents and restores it to its original state.
- Conditioning. Solid-state storage must be put into a "used" state. During initial measurements, solid-state storage shows artificially high performance that is temporary and not sustainable, so those numbers shouldn't be reported as its true sustained performance. For example, running random 4 KB writes against the storage for approximately 90 minutes should put it into a "used" state. Depending on the manufacturer, the transfer size or amount of conditioning time may vary.
- Steady state. Performance levels will settle down to a sustainable rate; that's the performance level that should be reported.
- Reporting. The level of reporting matters. If a standard benchmark requiring full disclosure wasn't used, a minimum amount of information is still required. The type of I/O is important to know: most results are reported as 100% random reads because random writes diminish performance, and on solid-state storage many random write workloads perform little better than they would on HDD systems. Some reports also disclose the number of outstanding I/Os, which is useful information when coupled with a reported average response time.
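The steady-state step above can be sketched as a simple windowed check: keep sampling a metric such as IOPS during conditioning and report results only once recent samples stop drifting. This is a simplified illustration of the idea, not the SNIA methodology itself:

```python
def is_steady(samples, window=5, tolerance=0.20):
    """Simplified steady-state check: the last `window` measurements
    must all fall within +/- tolerance of their own mean."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    mean = sum(recent) / window
    return all(abs(s - mean) <= tolerance * mean for s in recent)

# IOPS still falling during conditioning: not yet steady
print(is_steady([5000, 3000, 2000, 1500, 1200]))  # False
# IOPS settled into a narrow band: steady, safe to report
print(is_steady([900, 950, 1000, 980, 1010]))     # True
```

Formal specifications tighten this considerably (fixed round lengths, a slope criterion on a best-fit line and a maximum excursion), but the principle is the same: report the settled rate, not the fresh-drive burst.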
Even after following these steps to measure solid-state storage performance, it's still hard to compare results without common comparison criteria and fair-use rules. More information about these four steps is available through the Storage Networking Industry Association's Solid State Storage Initiative.
SSDs vs. other storage
SSDs are considerably faster and more expensive than storage media such as tape and HDDs. While SSDs have replaced HDDs for most enterprise performance needs, there are still faster storage technologies. DRAM provides the highest performance and is also the most expensive. Storage class memory (SCM), including Intel Optane drives, sits between NVMe SSD and DRAM in performance and price.
SCM can be three to five times faster than NVMe SSDs and doesn't have the same wear issues as NAND flash. However, an SCM drive may cost five times as much as a comparable NVMe drive, which relegates SCM mainly to memory-intensive workloads such as AI and machine learning. So, while NVMe SSDs are seen as a transition to SCM, they remain far more common in enterprise use today.
Editor's note: This article was originally written by Leah Schoeb in 2012 and then updated and expanded by Dave Raffo in 2022.