
Red Hat Ceph, Gluster roadmaps focus on HCI, containers

Red Hat's Ceph and Gluster storage roadmaps reflect increased focus on hyper-converged infrastructure, containers and big data, in addition to performance improvements.

Roadmaps for Red Hat Ceph and Gluster storage reflect increasing focus on hyper-converged infrastructure, containers and big data, alongside the usual performance and feature improvements.

Red Hat roadmaps are no secret, because they follow the development work done in the open source Ceph and Gluster communities. But Red Hat ultimately decides when new capabilities are sufficiently tested and ready for general release in its commercially supported products, and the vendor often adds enhancements and packages, as well as performance and sizing guides, to the mix.

For instance, Red Hat recently launched an open source software-based hyper-converged infrastructure product that combines Red Hat Gluster Storage and Red Hat Virtualization and targets remote offices and branch offices (ROBOs).

The long-term roadmap that Gluster product managers recently unveiled calls for expanding scale beyond the current maximum and enabling smaller configurations that start at a single node. The Red Hat Hyperconverged Infrastructure product currently requires a minimum of three nodes and tops out at nine physical servers.

"We're looking at scaling up for customers who need to go more than nine," said Sayan Saha, a director of product management for Red Hat's storage business. "And we have found that in ROBO environments, a lot of people want to start at one node. They understand they won't get the redundancy, but they want to start at one because that's all the space they have."

Red Hat Hyperconverged Infrastructure users supply their own hardware. But Saha said Red Hat hopes to find a validated appliance partner to bundle the software beginning in early 2018.           

Major back-end change for Ceph

Red Hat tends to recommend Ceph for highly scalable cloud and object storage deployments and Red Hat Gluster for hyper-converged, containerized and scale-out NAS use cases.

A major change in store for Red Hat Ceph is the new BlueStore back end for the physical or logical storage units, known as object storage devices (OSDs). BlueStore is designed to overcome the limitations of the XFS file system that Ceph currently uses to write data to disk.

The Ceph open source community has released BlueStore, but Red Hat won't make available a technical preview of BlueStore until its Ceph Storage 3.0 release, which is "hopefully toward the end of [2017]," according to Neil Levine, a director of product management at Red Hat. The general availability date for the final BlueStore release is uncertain.

"BlueStore is one of the biggest changes to the Ceph architecture in many years," Levine said. "BlueStore does native writes to the block device. The obvious benefit here is you remove a major abstraction, so you get a huge performance boost."

Levine said removing the XFS layer would increase performance without having to change hardware. He said BlueStore would also allow customers to mix and match hardware. They can choose fast solid-state drives for metadata and slower hard disk drives for block data.

"You can optimize the I/O that you're doing for the individual devices, whereas XFS sort of treats everything the same," Levine said.
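As an illustrative sketch of what Levine describes: in upstream Ceph (the `ceph-volume` tool, available from the Luminous release onward), provisioning a BlueStore OSD writes directly to the block device, and the metadata database can be placed on a separate, faster device. The device paths below are examples, not Red Hat-documented values.

```shell
# Provision a BlueStore OSD on a raw device (device names illustrative).
ceph-volume lvm create --bluestore --data /dev/sdb

# Mix and match hardware: keep the RocksDB metadata on a fast SSD/NVMe
# partition while block data lands on a slower hard disk.
ceph-volume lvm create --bluestore \
    --data /dev/sdc \
    --block.db /dev/nvme0n1p1
```

Separating `--block.db` from `--data` is what enables the SSD-for-metadata, HDD-for-data split Levine mentions.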

Levine called BlueStore technology "a very, very critical piece of the stack, because this is the bit that actually writes the data."

Levine said BlueStore would not require a major forklift upgrade for current Red Hat Ceph users. He said customers would need to reinitialize individual disks -- taking them out, formatting them and putting them back in -- a task that could be time-consuming for customers with 1,000 or more disks. The Red Hat roadmap includes a tool designed to advise users on the best approach with large clusters, according to Levine.

"BlueStore has huge promise in terms of performance, but it's also very complex and a hard thing to deliver. That's why we're taking our time to do it," Levine said.

CephFS and iSCSI support

Additional highly requested features due in Red Hat Ceph 3.0 include general availability of the CephFS file system and iSCSI support for block storage. Levine said the iSCSI implementation is sufficient to enable tier-two or secondary storage on Ceph. The primary use cases or workloads for CephFS are OpenStack cloud deployments, according to Levine.

The Ceph RADOS Block Device (RBD) is the leading back end in use with OpenStack Cinder block storage. Red Hat ships its own OpenStack Platform (OSP) distribution. Levine noted this year's OSP 11 release would support hyper-converged deployments colocating compute and Ceph block or object storage on the same node, dedicated monitor nodes, and Cinder replication with Ceph RBD.
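For context on how Cinder uses RBD as a back end: in upstream OpenStack, it comes down to pointing Cinder's RBD driver at a Ceph pool in `cinder.conf`. The pool, user and section names below are illustrative, not values from Red Hat's OSP documentation.

```ini
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>
```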

Levine said Red Hat's OSP 12 release, loosely targeted for the end of 2017, would support the OpenStack Manila file-share service with CephFS, containerized Ceph and encrypted RBD volumes.

The current Red Hat Ceph 2.3 release, which became generally available last month, added a lightweight NFS interface to enable users to access the same object storage data via NFS or the Simple Storage Service (S3) API; compatibility with the Hadoop S3A file-system client to enable customers to use big data analytics tools and store data in Ceph; and support for Ceph in a containerized format, as Red Hat is doing with all of its software products.
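To sketch the dual-access idea: the same bucket held in the Ceph RADOS Gateway (RGW) can be reached over the S3 API or over an NFS export (upstream RGW uses an NFS-Ganesha gateway for the latter). Endpoint, bucket and mount names here are illustrative.

```shell
# 1. S3 API access via the AWS CLI pointed at the RADOS Gateway:
aws --endpoint-url http://rgw.example.com:8080 s3 ls s3://mybucket

# 2. NFS access to the same data through the RGW NFS gateway:
mount -t nfs rgw.example.com:/mybucket /mnt/mybucket
ls /mnt/mybucket
```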

Levine said big data presents an interesting growth opportunity for object storage. He said Red Hat would make available performance and sizing guides to enable customers to tune Ceph for use with big data applications.

Supporting Ceph in a containerized format is a prelude to using Kubernetes to manage Ceph, a direction Red Hat would like to go next year, according to Levine. He said, "Containerization is a huge foundation for a lot of big changes we can do later."
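As a rough illustration of what running Ceph in a containerized format looks like, the upstream ceph-container project publishes a `ceph/daemon` image that can start individual Ceph daemons; the IP address and network below are placeholders, and this is the community pattern rather than Red Hat's supported procedure.

```shell
# Run a Ceph monitor as a container (upstream ceph/daemon image;
# addresses are examples only).
docker run -d --net=host \
    -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
    -e MON_IP=192.168.0.10 \
    -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
    ceph/daemon mon
```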

Upcoming Red Hat Gluster features

Red Hat is working to enable the Gluster distributed file store -- not Ceph -- to become the default persistent storage platform for the Red Hat OpenShift container management platform.

"Ceph will probably continue to be a peer of OpenShift, although it's not going to be running inside OpenShift, because that's for application developers. We're for infrastructure builders," Levine said. "Gluster is more for the OpenShift crowd. They don't really want to worry about all of the storage details. They're not trying to make a massively scalable 10 PB storage environment and tune it. They just need storage services for an application."

Red Hat Gluster Storage (RHGS) can run inside OpenShift in containers, or outside OpenShift in a network-accessible dedicated cluster, in virtual machines that front-end NAS or SAN arrays, or in Amazon Web Services, Microsoft Azure or Google Cloud.

New and upcoming container-focused features in RHGS this year include iSCSI support; S3 API access to data in a Gluster volume; and brick multiplexing to lower CPU, memory and port consumption and increase the number of volumes per cluster.
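Brick multiplexing, for reference, is exposed in upstream Gluster (3.10 and later) as a cluster-wide volume option that packs multiple bricks into a single glusterfsd process, which is where the CPU, memory and port savings come from:

```shell
# Enable brick multiplexing cluster-wide (upstream Gluster option).
gluster volume set all cluster.brick-multiplex on
```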

Additional new capabilities on the way courtesy of development work done by Facebook, a major Red Hat Gluster user, include active-active georeplication and GF Proxy.

Active-active georeplication, also known as multimaster replication, enables a single Gluster volume to span multiple sites. Reads are local and writes are propagated using self-heal to remote sites. Updates to a Gluster volume can happen from multiple geographic locations. Active-active georeplication is due in RHGS 3.4 late this year or early next year.

The Facebook-developed GF Proxy is designed to address the client-side churn that happens with near-simultaneous upgrades of Gluster servers and clients. Saha said Facebook replaced the client-side logic with a thin proxy and moved that logic to the server to smooth the update process, so clients change less frequently. GF Proxy is expected next year with RHGS 4.0.

Another key feature due with RHGS 4.0 is a next-generation management plane, called GlusterD 2.0, to expand the number of nodes per cluster and strengthen consistent state management.

Saha said one much-requested feature due later this year or in early 2018 in RHGS 3.4 is the ability to back up data to S3-compatible object stores in the public cloud.

"We'll have Gluster's georeplication, which is [Secure Shell] SSH-based, to now start using HTTPS-based transfer and talk S3 to write data," Saha said. "We hope we can do it pretty soon. It's not two years out."

