
HPE GreenLake exec: It's always the year of storage

In this Q&A, HPE's Omer Asad talks about HPE GreenLake's transformation into an on-premises hyperscaler and the importance of both storage hardware and the abstraction of it.

Hewlett Packard Enterprise is showcasing its ever-expanding GreenLake, which brings cloudlike features to customers with a focus on expanding services, partnerships and AI. One executive points out that while storage isn't front and center in messaging, it is vital to the company going forward.

At HPE's Storage Day media event in April 2023, CEO Antonio Neri christened this the "year of storage," as reported in Forbes. At HPE Discover, Neri clarified that this was an internal message to his storage team, celebrating the innovations it had delivered around the new HPE Alletra Storage MP and the storage foundation it laid for GreenLake.

In this Q&A, Omer Asad, vice president and general manager of data infrastructure and SaaS platforms at HPE, talks about why different hardware is still needed, but also abstraction from the hardware; whether speeds and feeds matter still; and whether GreenLake can cause storage needs to shrink.

Editor's note: This Q&A has been edited for clarity and conciseness.

In the day one keynote at HPE Discover, Antonio Neri proclaimed 2023 as the 'year of AI.' Do you still see this as the year of storage?

Omer Asad, vice president and general manager of data infrastructure and SaaS platforms, HPE

Omer Asad: It's always the year of storage. Storage is the highest-margin product, not just for HPE, but generally for every other vendor as well. I had presented arguments around data protection, private cloud and data protection as a service, and the culmination of that was increased sales coverage and increased quota retirement on aggressive sales compensation plans for storage.

The message is targeted toward how aggressively we are going to go after storage sales. And the numbers speak for themselves: Alletra is the fastest-growing product we have ever introduced to market. For sales, it is the year of storage.

How do you see GreenLake playing out?

Asad: The strategy I am executing toward is that I want [GreenLake] to become the on-premises hyperscaler. We came up with the concept of financing your infrastructure with HPE Financial Services. Does that simplify [the customer's] life? Not really -- it simplifies [their] way to pay. The next step is building the cloud console, which is the equivalent of the AWS console.

I want [HPE GreenLake] to become the on-premises hyperscaler.
Omer AsadVice president and general manager of data infrastructure and SaaS platforms, HPE

AWS came along and said, 'Our infrastructure is hosted in our regions. Here is a cloud console to consume it.' What they did was a SaaS-ification of infrastructure -- they simplified [customers'] lives. I want to provide customers with a platform through which they can buy different services.

The difference is that a hyperscaler like AWS hosts the hardware for its services in its own data centers. For HPE, the hardware will sit at a managed service provider or in a customer's data center. The consumption and operational model will be just like the cloud.

The Alletra MP allows for configuration for different types of storage. Is there a plan for a more consolidated SKU to run all storage on?

Asad: No. We have simplified supply chain and go-to-market operations into a single box, Alletra MP. The engineers designed it to be configurable to take on an IOPS controller personality for performance or a JBOF [just a bunch of flash] personality for capacity. These can grow independently of each other, unlike traditional storage arrays. This is the disaggregated, modern architecture of Alletra MP.

There are now two planes: a cloud control plane and a standard hardware plane, on different SKUs for charging purposes. There are different services on top of Alletra MP to satisfy different customers, be it file, block or [GreenLake Private Cloud Business Edition]. The hardware team executes independently, maybe adding storage-class memory to accelerate block performance or SLC NAND with metadata tagging to optimize object performance.

Different SKUs allow for optimizing performance.
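
To make that disaggregated idea concrete, here is a minimal sketch -- with made-up node counts and performance figures, not HPE's actual software or specifications -- of a cluster in which performance nodes (controllers) and capacity shelves (JBOF) grow independently of each other:

```python
# Hypothetical illustration of a disaggregated storage cluster in which
# performance nodes (controllers) and capacity nodes (JBOF shelves) scale
# independently. All names and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DisaggregatedCluster:
    iops_per_controller: int = 500_000   # assumed per-controller throughput
    tb_per_jbof_shelf: int = 300         # assumed usable capacity per shelf
    controllers: int = 2
    jbof_shelves: int = 1

    def add_performance(self, count: int = 1) -> None:
        """Grow IOPS without buying more capacity."""
        self.controllers += count

    def add_capacity(self, count: int = 1) -> None:
        """Grow capacity without buying more controllers."""
        self.jbof_shelves += count

    @property
    def total_iops(self) -> int:
        return self.controllers * self.iops_per_controller

    @property
    def total_tb(self) -> int:
        return self.jbof_shelves * self.tb_per_jbof_shelf


cluster = DisaggregatedCluster()
cluster.add_capacity(3)   # a capacity-hungry workload: only shelves grow
print(cluster.total_iops, "IOPS,", cluster.total_tb, "TB usable")
```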

The Alletra portfolio has several models -- is it important for customers to have this choice in array?

Asad: I want to completely abstract the hardware decision for customers for primary storage. The challenge is with block storage. SAN administrators are 'married' to their SANs. The transition [for complete abstraction] will happen when the last SAN administrator turns 65 and retires.

The functionality we built into the software can exploit different pieces of the hardware, and that is where we want the world to go. Alletra MP will not care if file, object, block or a key-value store with a database is running on top of it. With the Private Cloud Business Edition that launched yesterday, there will be a stack of ProLiant [servers] and MPs. From the GreenLake cloud console, we can deploy any piece of software on top of that.

My desire is to abstract the hardware, but the value needs to be in the storage operating system for the use case being solved.

There is a push to expand sales into the SMB market. Will HPE execute this strategy with hardware or with GreenLake?

Asad: 100% private cloud GreenLake. Customers should not worry about the hardware that is going to be sold to them in a subscription. The value is in the abstraction for the customer. Customers specify the performance of their workload, and our software plus hardware delivers that.

In a sustainability breakout session, Monica Batchelder, chief sustainability officer at HPE, said, 'We no longer live in a world of speeds and feeds.' At last year's Discover, you said speeds and feeds are important in the context of how hardware is developed -- has your stance changed?

Asad: It depends on the workload. A customer running SQL doesn't care, but an SAP HANA customer's first question will be, 'Can you do 2 million IOPS from this controller?' Customers running modern distributed applications are more relaxed; they are more interested in how many web transactions can be pushed through.

There are two ways to look at this. If a customer has a massive data center and is worried about their footprint, they will ask for efficient, consolidated hardware. A customer just starting out, or one that has freshly moved away from the cloud and has never worked with hardware limitations before, may just throw hardware at the problem. If you keep adding nodes, performance will scale, but eventually it will run up a massive hardware bill. This is where they discuss consolidation and how many IOPS can be met on less hardware.

A larger customer will always say what Monica said: We are no longer in the world of speeds and feeds. You need to consolidate, consolidate, consolidate.
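
As a back-of-envelope illustration of that consolidation conversation -- using assumed IOPS figures, not HPE benchmarks -- the sketch below compares how many small nodes versus consolidated controllers a 2 million IOPS target would take:

```python
# Rough comparison, with made-up numbers, of "keep adding nodes"
# versus consolidating onto fewer, denser controllers.
import math

required_iops = 2_000_000

# Option A: keep adding small nodes until the target is met.
iops_per_small_node = 150_000
small_nodes = math.ceil(required_iops / iops_per_small_node)          # 14 nodes

# Option B: consolidate onto fewer high-end controllers.
iops_per_big_controller = 1_000_000
big_controllers = math.ceil(required_iops / iops_per_big_controller)  # 2 controllers

print(f"Scale-out: {small_nodes} nodes  vs  consolidated: {big_controllers} controllers")
```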

A customer on the expo floor said GreenLake can cause storage needs to go down, as HPE will host the application or service needed, and therefore HPE will handle its storage. Can customers see lower storage needs and costs this way?

Asad: GreenLake allows customers to start with lower commit levels. If your application is not successful, or if your application doesn't consume the way it is supposed to consume, you will never pay HPE that money.

In the pre-as-a-service world, you bought your storage frame and 3x the storage to allow for growth. With all storage, you never know your growth rate. Now, your expenditure will actually mimic your storage consumption -- you never pay upfront, which makes the expenditure go down. If your application never sustained the growth rates you had imagined, you're not going to spend on that storage, and storage is 60% of your workload costs.
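
For a rough sense of the economics Asad describes, the following sketch compares buying 3x capacity upfront with paying monthly for what is actually consumed; the prices and growth rate are assumptions for illustration, not HPE or GreenLake pricing:

```python
# Rough illustration, with assumed prices and growth, of buying 3x capacity
# upfront versus paying only for what is consumed each month.
price_per_tb_upfront = 500    # assumed $/TB purchase price
price_per_tb_month = 25       # assumed $/TB-month subscription rate
months = 12

current_need_tb = 100
upfront_cost = current_need_tb * 3 * price_per_tb_upfront   # buy 3x for growth

# Suppose the application grows only 2% per month instead of the forecast.
consumed = current_need_tb
subscription_cost = 0.0
for _ in range(months):
    subscription_cost += consumed * price_per_tb_month
    consumed *= 1.02

print(f"Upfront 3x purchase: ${upfront_cost:,.0f}")
print(f"Pay-per-use over {months} months: ${subscription_cost:,.0f}")
```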

Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware and private clouds. He previously worked at StorageReview.com.

