
S3 Express One Zone set to power generative AI workloads

Andy Warfield of AWS Storage breaks down the new AWS S3 tier of fast object storage and discusses future S3 development in a session at the vendor's re:Invent 2023 conference.

AWS' S3 object storage is shifting from being a data repository to a vital part of the stack for generative AI and machine learning development.

That's the idea behind S3 Express One Zone (EOZ), the public cloud's latest S3 object storage service, according to Andy Warfield, vice president and distinguished engineer at AWS Storage.

Andy Warfield, vice president and distinguished engineer, AWS Storage

Ideal storage performance is when customers are not thinking about storage, he said during a breakout session at AWS re:Invent 2023 in Las Vegas on Tuesday. "Storage improvements are really like efficiency improvements."

S3 Express teardown

EOZ operates differently from the S3 Standard tier, using a new type of S3 bucket to deliver the cloud's fastest object storage available, Warfield said. That bucket type is available only to EOZ users.

Block and file services still outpace EOZ under ideal circumstances, but the new tier delivers performance comparable to the company's file services, such as Amazon Elastic File System (EFS), for most workloads. Warfield said Express One Zone targets about 3 to 10 milliseconds of latency, while EFS comes in around 2 to 5 ms. Amazon Elastic Block Store (EBS) General Purpose volumes lead the trio at 1 to 2 ms of latency.

Although still behind file and block, EOZ offers lower latency than S3 Standard, which comes in anywhere from 10 to 200 ms.

The lower latency comes from working around S3's original design and business assumptions, as well as several years of experimentation by the Express One Zone team, Warfield said.

A chart of expected latencies for workloads across AWS Storage services.

The S3 object storage service is built around regional redundancy, operating across the vendor's Availability Zones (AZs), the data center locations where customers purchase services and store their data. This design assumption helps maintain consistent performance and availability, but it can also act as a throttle.

"There are a lot of internal round trips in S3 to make sure the system is functional at a regional level," Warfield said.

Express One Zone operates only within the single AZ chosen by the customer. The service can still accept and send requests across zones, but keeping data in one zone eliminates the slowdown that comes with cross-zone redundancy.

The S3 team also developed the Directory Bucket, a new type of storage resource for S3 objects under the cloud's branding. Directory buckets offer higher transactions per second and provide what Warfield called session-based authentication, which eliminates many of the metadata checks that can slow down performance.
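To make the concept concrete, the following is a minimal sketch, not taken from Warfield's session, of creating and using a directory bucket from Python with boto3. It assumes a recent boto3 release with directory bucket support; the bucket name, Region and Availability Zone ID are placeholders.

```python
# Minimal sketch: creating an S3 Express One Zone directory bucket and reading/writing
# an object with boto3. Bucket name, Region and AZ ID below are illustrative placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Directory bucket names embed the Availability Zone ID and end in "--x-s3".
bucket = "demo-express--use1-az5--x-s3"

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "use1-az5"},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)

# Reads and writes look the same as S3 Standard; boto3 obtains the short-lived
# session credentials behind what Warfield described as session-based authentication.
s3.put_object(Bucket=bucket, Key="training/shard-0000.bin", Body=b"example bytes")
obj = s3.get_object(Bucket=bucket, Key="training/shard-0000.bin")
print(obj["Body"].read())
```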

The launch of S3 EOZ this week was one of numerous storage service and product launches, including new file and block services. Warfield's presentation highlighted other products already released in November or just now entering general availability, such as the Mountpoint for Amazon S3 Container Storage Interface (CSI) driver and the Amazon S3 Connector for PyTorch, an open source machine learning framework used in deep learning research.

The Mountpoint for Amazon S3 CSI driver adds the ability to access S3 objects through a file system interface for container storage, primarily to support Kubernetes applications. The Amazon S3 Connector for PyTorch lets PyTorch training jobs load data from and write model checkpoints directly to S3 at high throughput, avoiding the checkpointing bottlenecks that homegrown connectors can stumble on.
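As an illustration of where the connector fits, here is a minimal sketch assuming the s3torchconnector package's S3IterableDataset and S3Checkpoint interfaces as published at launch; the bucket names, prefix and Region are placeholders, not examples from AWS.

```python
# Minimal sketch: streaming training data from S3 and checkpointing a model directly
# to S3 with the Amazon S3 Connector for PyTorch. Buckets and Region are placeholders.
import torch
from s3torchconnector import S3IterableDataset, S3Checkpoint

REGION = "us-east-1"

# Stream objects under a prefix without copying them to local disk first.
dataset = S3IterableDataset.from_prefix(
    "s3://demo-training-data/images/", region=REGION
)
for item in dataset:
    print(item.key, len(item.read()))  # each item is a readable S3 object
    break

# Write a model checkpoint straight to S3 instead of staging it on local storage.
checkpoint = S3Checkpoint(region=REGION)
model = torch.nn.Linear(4, 2)
with checkpoint.writer("s3://demo-checkpoints/model-epoch0.pt") as writer:
    torch.save(model.state_dict(), writer)
```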

Learning with S3

The development of a faster S3 storage tier evolved out of how S3 customers had altered their usage of S3 storage over the years, Warfield said in a follow-up interview with TechTarget Editorial.

"At this point, [customers] want to treat [S3] effectively like random access storage," he said. "[They] want to write an app that is directly working with data in S3 instead of having to copy stuff out."

The development and growth of data analytics frameworks and services almost a decade ago started the trend of using S3 storage as part of the application workflow and stack. For example, users of Hadoop, an open source data processing framework, and its associated Hadoop Distributed File System (HDFS) created connectors to use the S3 storage directly, Warfield said.

"The S3 team historically looked at the service being one side of a REST API," he said. "We saw the analytics folks like the Hadoop community shift and write S3A, which was a connector to allow you to use S3 the way people were using HDFS in enterprise clusters -- so you could run these big analytics jobs against S3."

The Hadoop community's move might have pushed the S3 team to reexamine the product lineup, but customer demand for cost-effective, high-performance AWS services in the wake of generative AI and machine learning has driven AWS decisions in the past few years.

Warfield said he'd like AWS' storage services to further develop interoperability among storage types, enabling customers to pick and choose what storage can work best for an application as needed.

"I want to make sure [that a] decision to work with data in one modality, whether it's file or object or whatever, doesn't preclude other uses down the line," he said.

Tim McCarthy is a journalist from the Merrimack Valley of Massachusetts. He covers cloud and data storage news.
