Three trends causing the rapid commoditization of AFA storage

Advancing technology and changing market forces have shifted the dynamics in the all-flash array market, opening it up to new challengers and benefiting customers.

The all-flash array market has seen a precipitous price decline over the past few years. AFA prices of $15 or $16 per gigabyte of raw capacity have fallen to $1 or less. Discounts of 80% or more are frequently offered. Storage buyers often believe they're getting a good price from a big-name vendor, but then another vendor comes along at the last minute hungry for a deal and offers a better price.

The AFA storage system price war is all the more surprising considering that the NAND chip shortage of 2017 and early 2018 pushed SSD prices up during this same timeframe. What's the root cause of the rapid commoditization of AFAs? A better understanding requires some context.

Skimming the cream

High-end products and services have always been subject to commoditization. The first to market in a category typically carries a premium price; that's known as cream skimming. As competition increases, the price gradually declines. These products and services use feature and brand differentiation to maintain higher pricing. But, eventually, prices decline as those features and functions show up in lower-priced competitors.

Differentiation is the perceived market value that comes from a product's ability to solve a specific problem, as well as its attributes, features, quality, attractiveness and brand. Commoditization happens when products cease to be differentiated from their competitors. Once products and services become commodities, price is the key differentiator. More sophisticated IT buyers may differentiate on TCO rather than price, although that, too, is the exception, not the rule.

Data storage has been hit by this process. The storage pricing comparison metric has generally been price per gigabyte of raw capacity, and, though imperfect, it's still the principal tool used to evaluate storage systems (see "Storage comparison metrics"). One positive about this flawed metric is that it has steadily declined industrywide through the years, mostly because of the continuous capacity gains of both hard disk and solid-state drives. HDD capacity gains have slowed during the past few years; however, SSD gains have accelerated as NAND fabrication has gone to 3D. Still, the steady decline in the price-per-gigabyte-of-raw-capacity metric doesn't explain how AFAs got commoditized so fast.

Storage comparison metrics

Price per gigabyte of raw capacity is the main metric storage buyers use to evaluate storage systems, and it's a big part of why the storage buy has become commoditized. Understanding it starts with three capacity measures (a worked example follows this list):

  • Raw capacity is the sum of all of the drives' rated capacity.
  • Usable capacity is the amount of capacity left after the raw capacity has been formatted, file system imposed and RAID established.
  • Effective usable capacity is the amount of capacity available to be written to after thin provisioning, deduplication, compression, snapshots and clones are considered.
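
To make those definitions concrete, here's a minimal back-of-the-envelope sketch. Every figure in it -- drive count, RAID and formatting overhead, data reduction ratio and system price -- is a hypothetical assumption, chosen only to show how the same system yields very different price-per-gigabyte numbers depending on which capacity you divide by.

    # Illustrative sketch only -- all inputs are assumptions, not vendor figures.
    raw_tb = 24 * 3.84                 # 24 SSDs rated at 3.84 TB each -> raw capacity
    usable_tb = raw_tb * (1 - 0.25)    # assume ~25% lost to formatting, file system and RAID
    effective_tb = usable_tb * 3.0     # assume a 3:1 deduplication/compression ratio

    price = 150_000                    # hypothetical system price in dollars
    print(f"$/GB raw:       {price / (raw_tb * 1000):.2f}")       # ~$1.63
    print(f"$/GB usable:    {price / (usable_tb * 1000):.2f}")    # ~$2.17
    print(f"$/GB effective: {price / (effective_tb * 1000):.2f}") # ~$0.72

Comparing vendors on the raw figure alone can easily produce a different ranking than the effective figure, which is the capacity that actually holds data.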

It's a flawed metric for several reasons. It doesn't consider the following:

  • the value of performance;
  • capacity minimization, such as storage software efficiency, thin provisioning, deduplication, compression and zero-capacity snapshots;
  • performance and capacity scalability;
  • data protection capabilities, such as snapshots, replication, mirroring, continuous data protection and high availability;
  • management software; or
  • analytics software.

Price per raw gigabyte also fails to consider TCO, including the costs of supporting infrastructure, maintenance, operations, upgrades, operating personnel and the data migration that comes with tech refreshes. It erroneously assumes all these values and costs are equal.

More reliable metrics are TCO per effective usable gigabyte of capacity combined with TCO per random IOPS and TCO per sustained throughput. These metrics include actual costs that are substantially different among competitive AFA products. They also include hardware minimization effectiveness, which again varies by vendor product.
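
As a rough illustration of those TCO-based metrics, the sketch below folds hypothetical acquisition, maintenance and operations costs into a five-year TCO and divides it by effective capacity and measured performance. All of the inputs are assumptions for illustration only, not figures from any product.

    # Hypothetical five-year TCO sketch -- every input is an illustrative assumption.
    acquisition = 150_000
    maintenance_per_year = 18_000
    ops_power_personnel_per_year = 12_000
    years = 5

    tco = acquisition + years * (maintenance_per_year + ops_power_personnel_per_year)

    effective_gb = 207_360        # effective usable capacity from the earlier sketch, in GB
    random_iops = 400_000         # assumed measured random IOPS
    throughput_gbps = 10          # assumed sustained throughput in GB/s

    print(f"TCO per effective usable GB: ${tco / effective_gb:.2f}")         # ~$1.45
    print(f"TCO per 1,000 random IOPS:   ${tco / (random_iops / 1000):.2f}") # ~$750.00
    print(f"TCO per GB/s of throughput:  ${tco / throughput_gbps:,.0f}")     # ~$30,000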

One area of note when assessing vendors: They vary in how much storage hardware minimization they're willing to guarantee, because deduplication and compression ratios vary by data type. It's prudent to get each vendor's guarantee in writing.

As established, commoditization results from increased competition and an inability to differentiate value between competitive products. This is part of what's happened in the AFA market. But there's more, a lot more. Three market trends have effectively caused the unprecedented commoditization of AFA storage:

  • a shrinking total available target market;
  • appreciably increased direct competition; and
  • growing indirect competition.

Shrinking total available target market

Public cloud has been around for a decade. It's had the biggest effect on the small-to-medium business and midtier markets. New businesses rarely have their own data center. IT in the public cloud is simpler to set up and less expensive upfront. It lets businesses take advantage of prepackaged software-as-a-service applications, such as Microsoft Office 365, G Suite, Salesforce, Oracle and Magento, as well as infrastructure as a service, platform as a service and other as-a-service offerings.

When organizations run IT in the cloud, they don't need in-house storage or an AFA. And even though running IT in the cloud costs more in the long term, it converts capital expenditures into operating ones while eliminating the need to own and operate one or more data centers. This has huge appeal to the midtier market and has caused many businesses to move more applications to the public cloud. The result is a shrinking total available market. 

Appreciably increased direct competition

Why is there significantly more direct competition with all-flash array storage than there was previously with other storage system products? A lot of it is the result of substantial innovation in underlying storage technologies.

The traditional gap between high-end storage systems -- typically classified as enterprise or midtier -- and mass-market or lower-end storage system counterparts was once large in all respects. Lower-end systems couldn't match high-end ones in performance, reliability, scalability, functionality and data protection, partly because high-end systems frequently had custom ASICs and highly engineered, complicated caching hardware to provide unmatched performance. And high performance was typically required in shared storage environments. As a result, there was also a large cost and price gap, preventing the commoditization of high-end systems. High capacity, too, was exclusive to high-end systems.

Substantial technological advances have changed the old paradigm, narrowing and even eliminating the performance and capacity gaps. The first advance to transform storage was the increasingly capable Intel x86 microprocessor. Moore's law may have slowed, but it hasn't stopped. The x86 CPU continues to become more powerful, adding transistors and cores, reducing power consumption and gaining storage capabilities every couple of years. The latest iterations include several storage functions, such as XOR for parity calculations. These rapid CPU improvements make custom ASICs less attractive, offering too little advantage to justify their upfront and ongoing costs, and most storage systems have now standardized on x86 storage controller architectures.

The x86 platform architecture comes with loads of standard off-the-shelf tools, making software development faster and cheaper. It also enables the decoupling of storage software stacks from storage system hardware -- a paradigm shift. This decoupling has allowed for the development of storage software separate from hardware, enabling a new class of competitor under the software-defined storage, or SDS, banner. SDS enables the storage software stack to run on commercial off-the-shelf (COTS) servers, also known as white box hardware. 

Commodity hardware is a huge market shift. COTS server hardware eliminates the hardware premium pricing once so prevalent in storage systems. Hard disk and solid-state drives often had list prices 10 times higher in an enterprise storage system than the equivalent drive in a server. That drive in midtier storage could be six times higher than the server equivalent. Even after all discounts, the drive in a server is seldom more than one-third the price of the same drive in an AFA storage system. Also, maintenance costs for those drives after warranty are based on the higher manufacturer's suggested retail price, or list price, not the net discounted price.
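
To see why basing maintenance on list price matters, here's a minimal sketch with made-up numbers; the drive list price, discount, maintenance rate and term are all assumptions. Even with a deep discount on acquisition, post-warranty maintenance calculated as a percentage of list price can end up costing more than the drives themselves did.

    # Illustrative only -- list price, discount, maintenance rate and term are assumptions.
    list_price_per_drive = 6_000
    discount = 0.80                     # assumed 80% discount off list at purchase
    maintenance_rate = 0.15             # assumed annual post-warranty maintenance, % of list
    drives = 24
    post_warranty_years = 3

    net_drive_cost = drives * list_price_per_drive * (1 - discount)
    maintenance = drives * list_price_per_drive * maintenance_rate * post_warranty_years

    print(f"Net drive cost after discount:       ${net_drive_cost:,.0f}")  # $28,800
    print(f"Post-warranty maintenance (on list): ${maintenance:,.0f}")     # $64,800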

SDS running on COTS server hardware means much lower storage costs. Server manufacturers have been aware of these technology changes and have developed a series of servers optimized for many drives, including all-flash designs.

The open source advantage

Another major advancement has been open source storage software, such as CentOS, Ceph, Docker, FreeBSD, Linux, Swift and ZFS. It lowers the barriers to entry for new SDS or storage systems, because storage software developers don't have to start from scratch. Open source provides storage software features that are already developed and available at minimal costs. Even advanced functionality is available; for instance, deduplication is now part of the Red Hat Enterprise Linux kernel. The storage system functionality barrier to entry has disappeared.

Perhaps the biggest technology advancement and an essential element of all-flash array storage has been the flash drive itself. SSDs have changed the storage performance envelope and leveled the playing field. Just about any AFA can now match or surpass the massive IOPS and throughput of high-end enterprise storage systems.

Latency, like IOPS and throughput, is important to many transactional applications. It, too, has seen significant technological improvements with the development of nonvolatile memory express SSDs and NVMe over Fabrics (NVMe-oF). Nonvolatile memory express is an open standard interface, with standard drivers, that has significantly reduced latency between a server or controller and NVMe drives, eliminating proprietary vendor driver-stack lock-in in the process. NVMe-oF takes advantage of the NVMe driver and remote direct memory access to provide latencies over fabrics -- Fibre Channel, Ethernet and InfiniBand -- that approach those of NVMe SSDs within a server or storage system. Both are published standards embraced by the open source community, as well as by network adapter and SDS vendors.

Using these advancements and standards, several new AFA vendors have become leaders in delivering much lower latency, extreme IOPS and higher throughput. As newcomers with less storage baggage, they're more nimble and can take advantage of even newer storage advances such as storage-class memory drives -- Intel Optane and Micron QuantX 3D XPoint technology -- which are faster and have lower latency than flash SSDs. Rapid adoption of the latest storage technologies has enabled these new competitors to surpass the fastest of the established enterprise storage systems, taking away their crown as the pre-eminent performance storage. (See "New, high-performance AFA competitors" for a list of new companies that have turned AFA performance into a competitive advantage.)

New, high-performance AFA competitors

Another innovation driving more AFA competition is tiering, where cool or cold AFA data is archived to lower-cost secondary systems or cloud storage. Moving cool and cold data off high-performance AFAs reduces the capacity required for primary data by 80% or more. That in turn lowers the cost of primary storage by an equivalent amount. Many legacy AFA storage vendors have adopted this capability, but they frequently charge a fee to manage cool and cold data no longer on their arrays to make up for the loss of chargeable capacity. New and hungry competitors don't, giving them a competitive edge.
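
A quick sketch of the tiering math, using assumed numbers: if 80% of the data held on a primary AFA is cool or cold and can be tiered to cheaper secondary or cloud storage, the amount of premium flash capacity that has to be purchased shrinks accordingly. The dataset size, tiering fraction and per-gigabyte prices below are hypothetical.

    # Illustrative only -- dataset size, tiering fraction and $/GB figures are assumptions.
    total_data_tb = 500
    cold_fraction = 0.80                # share of data that is cool or cold
    afa_cost_per_gb = 1.00              # assumed all-flash $/GB
    secondary_cost_per_gb = 0.10        # assumed secondary/cloud $/GB

    all_on_afa = total_data_tb * 1000 * afa_cost_per_gb
    tiered = (total_data_tb * 1000 * (1 - cold_fraction) * afa_cost_per_gb
              + total_data_tb * 1000 * cold_fraction * secondary_cost_per_gb)

    print(f"All data on the AFA: ${all_on_afa:,.0f}")  # $500,000
    print(f"With 80% tiered off: ${tiered:,.0f}")      # $140,000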

The technological innovations and trends cited here have lowered the barriers to AFA market entry, creating a slew of new competitors. These nimble competitors have increased pricing pressure on big name brands, forcing their prices down.

Growing indirect competition

The age of the IT specialist is in decline. With convergence and hyper-convergence, the IT market pendulum has swung back toward integrating what had been the separate disciplines of storage, networking and compute.

Converged infrastructure is the integration of the server, hypervisor, networks and storage hardware systems into a single SKU and management infrastructure. Hyper-converged infrastructure (HCI) goes deeper with the integration. It combines the server, hypervisor and storage within the server at a software level. Of the two types of convergence, HCI is growing faster because of lower costs and simpler management. Nutanix is the HCI industry's prime mover and sales leader, although most server and white box vendors now sell HCI systems. 

HCI doesn't need stand-alone AFAs. It provides its own redundant storage, and the all-flash SSDs in HCI cost a lot less. As with SDS, the drives cost roughly two-thirds less than the same drives in AFAs. HCI, in effect, reduces the need for stand-alone, shared AFAs.

Need for differentiation

Commoditization occurs when there's no perceived differentiation in problems solved, value, functionality or brand. This has led several AFA vendors to attempt to differentiate their products.

Some, such as IBM, Nimbus Data, Pavilion Data Systems, Pure Storage and Western Digital, make unique SSD form factors. This lets them increase their storage density per rack unit. Another group has tried to differentiate their AFA storage through performance, as mentioned earlier. Others have developed unique hardware architectures. Infinidat, for example, isn't an AFA, but a hybrid array of SSDs and HDDs that claims AFA performance with up to 3 TB of dynamic RAM caching.

Hewlett Packard Enterprise now relies primarily on analytics (Nimble) and less so on a unique ASIC (3PAR). StorOne's approach is based on a complete rewrite of the storage stack, collapsing its many layers into a single layer that requires much less hardware to provide the same IOPS and throughput.

Dell EMC bundles additional software and services to differentiate itself. It also covers its bases by selling multiple products in each category of AFA, SDS, converged infrastructure and HCI. NetApp also sells products in each category and guarantees quality of service for its SolidFire array and HCI, as well as the scalability and snapshot services of its ONTAP OS.

All that differentiation helps vendors stand out and solve different problems while slowing commoditization of their products -- but only to a point. Differentiation doesn't overcome the shrinking market and increased competition pressures.

What this all means

A shrinking market due to the rapid growth of the cloud and HCI plus the explosion of competitors will continue to force all-flash array pricing down and commoditization up. Differentiation acts as a speed bump barely slowing this race to the pricing bottom. This is good news for storage buyers, but not storage vendors. Margins will continue to shrink, competitors will disappear and new leaders will emerge. 

The AFA storage vendors that survive will solve urgent and costly user problems that others don't, or they'll solve them better than their competitors. Differentiation purely on price or cost is a race to the bottom that benefits no one in the long term.
