Storage is still recovering from COVID-19. This year, vendors expanded partnerships and focused on developing software but released little in the way of innovative new data storage hardware.
Industry experts said this is both to be expected and not necessarily bad given the room for storage software to grow within existing hardware.
Experts pointed to the pandemic as one culprit. It forced companies to invest in more cloud services and refresh their hardware off cycle to address hybrid work demands. This resulted in lower demand for new hardware in the last couple of years.
"There were a lot of refreshes that happened during the pandemic that basically changed the product refresh cycle a little bit," said Brent Ellis, an analyst at Forrester Research.
The change in cycle will continue to affect on-premises hardware purchases in unpredictable ways, he said.
Experts also said the lack of innovation may stem from the focus on generative AI and the shift in venture capital (VC) funding and capital to that technology. Plus, existing data storage hardware can now support more software improvements, which provides incentive for vendors to continue developing storage OSes.
Different market, fewer new products
Not only was 2020 a time for a forced refresh, but it also marked a shift in funding and investing, according to Marc Staimer, founder and president of Dragon Slayer Consulting. VC investment in storage startups began dropping off significantly, and it continues to decline, thanks to the rise of hybrid work and AI technologies.
"If you're not seeing a lot of money going into startups in the storage space, you're not going to see a lot of innovation," he said.
Established vendors aren't where innovation lies, particularly for hardware innovation, Staimer said. Startups come in with some new way of doing things, and once they begin to take away market share from established companies, they become acquisition targets.
Now VC funds for storage are headed elsewhere, particularly toward generative AI. Enterprises are currently focused on compute to meet the demand of AI use cases, but not all IT budgets increased to make room for the new technology, according to Joseph Unsworth, an analyst at Gartner. AI needs expensive components such as GPUs and the high-bandwidth memory they rely on.
"You got to pull back somewhere. … Do you need all that storage?" he asked. If less money goes into the storage market, less money will go into research, he added.
Recent market conditions, such as the sharp drop in the price of NAND, have affected innovation on the media side of storage as well, Unsworth said. Technology development hasn't stopped but has been delayed due to lack of funding.
Aside from lower funds, there is a limit to how far some of the technology can go, Ellis said. All of the array vendors began offering quad-level cell (QLC) versions of their products this year, which stores more bits per cell than triple-level cell (TLC). The first enterprise QLC drives were introduced in 2018 and are now seeing mainstream adoption. But getting customers to make the jump from QLC to an even higher bits-per-cell offering -- penta-level cell (PLC) -- is more problematic, given that it has yet to be proven in the enterprise.
"They start to get a little bit less stability from the different cells, and [the vendors will] have to build in more redundancy," he said.
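The diminishing returns Ellis describes follow directly from the arithmetic of NAND cell types. A quick sketch (the figures below are standard NAND cell definitions, not vendor data; the function name is illustrative) shows that each added bit per cell yields a smaller relative capacity gain while doubling the number of voltage states a drive must reliably distinguish:

```python
# Standard NAND bits-per-cell tiers: SLC, MLC, TLC, QLC, PLC.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

def cell_summary(bits, prev_bits=None):
    """Return (voltage states per cell, % capacity gain vs. prior tier).

    Each extra bit doubles the voltage states that must be sensed
    reliably, while the capacity gain shrinks with each step.
    """
    states = 2 ** bits  # distinct charge levels to distinguish
    gain = round((bits / prev_bits - 1) * 100) if prev_bits else None
    return states, gain

prev = None
for name, bits in CELL_TYPES.items():
    states, gain = cell_summary(bits, prev)
    label = f"+{gain}%" if gain is not None else "baseline"
    print(f"{name}: {bits} bits/cell, {states} voltage states, {label}")
    prev = bits
```

Moving from TLC to QLC adds 33% more capacity at the cost of doubling the voltage states from 8 to 16; PLC adds only 25% more capacity while doubling them again to 32, which is why vendors expect to compensate with more redundancy.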
This introduces a plateau for storage technology in the enterprise that will most likely lead to a further waning of innovation in NVMe-based storage devices, Ellis said. He pointed to CPUs, which have experienced a similar decline in innovation: they once doubled processing speed every year or two, but that is no longer the case.
Ellis does see more innovation in compute than in storage hardware, given the growing popularity of smartNICs, data processing units and GPUs as they are put to use for AI.
Still room to grow
While hardware innovation may have plateaued, more can be done with existing hardware, especially on the media side, according to Dave Pearson, an analyst at IDC.
"There is still some headroom in terms of efficiency and consolidation in the data center, just with what is in existence," he said.
There is also unrealized potential in terms of storage management and efficiency for software before vendors consider a need to change underlying hardware, Pearson said.
For SSDs in large storage arrays, NVMe hasn't been used in an efficient manner until recently, according to Morgan Littlewood, senior vice president of product management and business development at iXsystems and formerly of Violin Systems, an early all-flash pioneer.
But NVMe is becoming the de facto interface in the enterprise, which changes how flash can be utilized. NVMe allows for large-capacity drives, up to 60 TB in some cases, and higher performance, he said.
"Now, it's all about managing that capacity, and that is really a job for software," Littlewood said.
While there is room to grow in existing systems, hardware innovation is still necessary to keep pace with innovations happening in other parts of the stack and to keep up with changing technology demands, Staimer said.
"Someone will innovate and make obsolete storage systems in every field in every phase," he said.
Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware, and private clouds. He previously worked at StorageReview.com.