
Dell EMC storage CTOs predict 2020 trends

Dell Technologies' storage president and GM offers an exclusive glimpse into the CTO Council's yearlong research project, with predictions for 2020 and beyond.

Based on a "crystal ball" research project conducted by Dell Technologies' Storage CTO Council, we can expect compute and storage to become more intertwined, flash-based object stores to be in demand, and faster data management at exabyte scale in 2020 and beyond.

Those are a sampling of the trends the Dell EMC storage council identified in 2019. The 14-member council's senior engineers and fellows -- including Ph.D.s and patent holders -- examined more than 90 technologies to prioritize the most important future trends and determine how Dell EMC storage should respond. The team categorized the technologies into three buckets: next-generation storage, data and security, and new deployment models. The council's research will factor heavily into Dell EMC storage product strategy, investment and development decisions in the new decade.

"The big theme is the move from storage of bits to managing data," said Dan Inbar, Dell EMC storage president and general manager. "The reality is, as the edge grows, there's a lot more data created. You have to analyze it sooner rather than later. You have to control where it goes. So, it's all about looking at this as data, versus the way we used to look at it as bits and bytes."


Inbar became Dell EMC storage president and GM in September, when Jeff Boudreau moved up to president and general manager of the Dell Technologies infrastructure solutions group. Inbar was previously a senior vice president and Israel-based R&D site lead for next-generation midrange Dell EMC storage products.

Here are five predictions Inbar shared with TechTarget in a phone interview and by email, based on the research the Dell EMC Storage CTO Council conducted.

1—Expect to see new storage designs in 2020 that will further blur the line between storage and compute.

"We hear from customers who are looking for flexibility in their traditional SANs," Inbar said. "In most cases, keeping compute separate from storage, as is typical, works fine. Other times, it makes sense to have compute as close to storage as possible to support data-centric workloads, such as analytics and intense databases. Traditional arrays are evolving based on software-defined and container-based architectures to let customers selectively apply data services across different workloads and even allow running applications natively on the storage system itself.

"We're not talking about hyper-converged infrastructure, which combines compute, storage and software-defined networking. HCI is good for scalable applications, but there are latency-driven applications where it would make sense for compute and storage to be closer together. It could open up new use cases, such as AI, ML [machine learning] and analytics at edge locations and/or private cloud. It could also lead to lower cost of ownership and simplification for IT teams and application owners who don't always have to rely on a storage admin to provision or manage the underlying storage. This is becoming more important as new IT professionals come into the workforce who don't necessarily have the specialized expertise of a storage admin in the data center."
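The economics behind moving compute closer to storage can be shown with a back-of-envelope calculation. The sketch below uses illustrative numbers (not measurements from any Dell EMC system): when an analytics query scans far more data than it returns, shipping the small result off the storage system beats shipping the whole data set across the network.

```python
# Back-of-envelope comparison: move data to compute vs. run compute
# near the data. All figures are illustrative assumptions.
data_gb = 1000          # data set an analytics query must scan
result_gb = 0.1         # size of the query's result
net_gbps = 10 / 8       # a 10 Gb/s link, expressed in GB/s

# Option A: traditional SAN path -- pull the data set to a compute host.
ship_data_s = data_gb / net_gbps

# Option B: run the query on (or next to) the storage system and
# transfer only the result.
ship_result_s = result_gb / net_gbps

print(f"move data to compute: {ship_data_s:.0f} s on the wire")
print(f"run compute near data: {ship_result_s:.2f} s on the wire")
```

With these assumed numbers, the wire time drops from roughly 800 seconds to under a tenth of a second, which is the intuition behind the latency-driven use cases Inbar describes.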

2—More customers will shift to software-defined infrastructure (SDI) in 2020 to augment traditional SANs and HCI.

"Hyperscalers and enterprises running high-performance or complex multi-hypervisor workloads have been using software-defined storage and other forms of software-defined infrastructure for years," Inbar said. "Traditional enterprises have been slower to adopt. We've seen gradual movement of enterprise customers redeploying certain workloads that have different requirements for capacity and compute than what traditional three-layer SANs can provide. These are customers who want the flexibility of scaling not just different capacities of storage and compute but also platforms that SDI can support uniquely. It's for the customer that needs to consolidate multiple high-performance or complex workloads, such as databases that need consistent submillisecond latencies and asymmetric scaling of compute and storage. Some SDI solutions can support any combination of compute platforms -- bare metal, multi-hypervisor, containers -- without needing to physically partition them across separate clusters."

3—Data set management comes of age in 2020 for storage objects and application data distributed across silos of on-premises and cloud data stores.

"Data set management is about storing data transparently and making any part of it rapidly discoverable through metadata, so customers can instantly find the data they want and apply new ways to make it actionable," Inbar said. "New advances will make the technology superfast and superefficient for indexing at exabyte scale. This is the natural evolution of indexing technology tied to machine learning technologies. Compute horsepower was lacking previously. Prior versions were never going to scale to this level. Enterprises now have far too much data for yesterday's data management tools to be effective."
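The core mechanism Inbar describes -- making any part of a data set "rapidly discoverable through metadata" -- can be sketched as an inverted index over metadata tags. The example below is a hypothetical toy, not a Dell EMC API: each stored object is registered under its tags, so lookups intersect small tag sets instead of scanning the data itself, which is what makes the approach plausible at very large scale.

```python
# Toy sketch of metadata-driven data set discovery (hypothetical
# example, not any vendor's API). An inverted index maps each
# metadata tag to the set of object IDs carrying that tag.
from collections import defaultdict


class MetadataIndex:
    def __init__(self):
        self._index = defaultdict(set)  # tag -> set of object IDs

    def add(self, object_id, tags):
        """Register an object under each of its metadata tags."""
        for tag in tags:
            self._index[tag].add(object_id)

    def find(self, *tags):
        """Return IDs of objects matching ALL the given tags."""
        sets = [self._index[t] for t in tags]
        return set.intersection(*sets) if sets else set()


idx = MetadataIndex()
idx.add("scan-001", ["mri", "2019", "site-a"])
idx.add("scan-002", ["mri", "2020", "site-b"])
idx.add("log-001", ["telemetry", "2020", "site-a"])

print(idx.find("mri", "2020"))      # {'scan-002'}
print(idx.find("2020", "site-a"))   # {'log-001'}
```

Production systems layer sharding, machine-learning-assisted tagging and distributed query execution on top of this idea, but the discoverability principle is the same.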

4—Delivering storage arrays as a service will expand as the use of multiple clouds increases to manage economics, obtain access to best-in-class services and protect against lock-in.

"Applications will continue to evolve, adopting cloud-native microservice architectures to allow for greater scale and portability," Inbar said. "With IT resources becoming more distributed, the need will increase for managing applications and data in a geo-distributed world. Storage endpoint deployment models have been expanding to deliver storage arrays as a service, directly connected to one or more public clouds. This model enables data placement where customers can efficiently share data sets across applications, running in different clouds. Plus, they get all of the data management functions of the array, including snapshots, replication, multi-protocol data access, encryption and authentication. It's existing storage products at the core but reimagined for cloud application deployment and consumption. New products will also emerge that span the endpoints and deliver higher-level and unifying data management functions."

5—High-performance object storage is no longer a laughing matter in 2020.

"When we began talking about a flash-based object storage appliance several years ago, people in the industry would laugh at us," Inbar said. "They said no one needs it. Now everyone is asking for it. One reason is the demand from application developers. Another is analytics. The added performance of flash and NVMe is opening up tremendous opportunity for object-based platforms to support applications that require speed and near-limitless scale, such as analytics, advanced driver-assistance systems, IoT and cloud-native app development. Flash-based object with automated tiering to disk offers a cost-effective option, particularly when a customer is talking about petabyte or exabyte scale. It allows you to move the data you need up to the flash tier to run analytics and high-performance applications and then move the data off to a cold or archive tier when you're done with it.
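The automated tiering Inbar mentions can be sketched as a simple access-recency policy. The code below is an illustrative toy under assumed rules (promote on access, demote after an idle threshold); real systems use richer heat metrics, but the flash-to-archive mechanic is the same.

```python
# Toy sketch of automated flash-to-archive tiering (illustrative
# policy only; not any vendor's implementation). Objects touched
# recently stay on flash; idle objects are demoted to archive.
import time

FLASH, ARCHIVE = "flash", "archive"


class TieringStore:
    def __init__(self, cold_after_s=3600):
        self.cold_after_s = cold_after_s
        self.objects = {}  # name -> {"tier": ..., "last_access": ...}

    def put(self, name, now=None):
        now = time.time() if now is None else now
        self.objects[name] = {"tier": FLASH, "last_access": now}

    def get(self, name, now=None):
        now = time.time() if now is None else now
        obj = self.objects[name]
        obj["tier"] = FLASH          # promote back to flash on access
        obj["last_access"] = now
        return obj

    def demote_cold(self, now=None):
        """Move objects idle longer than the threshold to archive."""
        now = time.time() if now is None else now
        for obj in self.objects.values():
            idle = now - obj["last_access"]
            if obj["tier"] == FLASH and idle > self.cold_after_s:
                obj["tier"] = ARCHIVE


store = TieringStore(cold_after_s=3600)
store.put("hot.parquet", now=0)
store.put("old-results.parquet", now=0)
store.get("hot.parquet", now=5000)   # touched recently: stays on flash
store.demote_cold(now=5000)

print(store.objects["hot.parquet"]["tier"])          # flash
print(store.objects["old-results.parquet"]["tier"])  # archive
```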

"There used to be a prediction in our industry about how some day object will replace file. It's hard to believe in something so absolute. However, as object becomes tuned for flash and NVMe, I think we'll see an uptick in the adoption of object and a shift of certain workloads away from file, especially for things like images, log data and machine-generated data."
