Noted skeptic and pragmatist George Santayana once quipped, "Those who cannot remember the past are condemned to repeat it." This often-quoted aphorism came to mind recently as I listened to a marketing VP caution me about a chat I was scheduled to have with his boss, the CEO of an up-and-coming storage vendor. He told me I would be smart to refrain from mentioning IBM or Sun Microsystems or virtually any vendor with innovations that predated the year 2000.
The CEO apparently believed the advent of Google at the beginning of the millennium was such a game-changer that it rendered all prior tech obsolete. Cloud, virtualization and software-defined technology had changed everything we knew about IT, the VP said, and his boss would shut down any discussion that referenced the "Dark Ages."
This wasn't the first time I had heard this sort of ageism in the tech industry. A couple of years ago, I did a deep dive into the question of what would keep older IT careerists relevant in the brave new world of clouds and virtualization. My research uncovered quite a few glaring threads of what I regarded as ignorant and ageist thinking in the proclamations of tech movers and shakers: venture capitalists claiming good ideas came only from firms with management teams under 30 years of age -- i.e., not set in their ways by old technology assumptions -- IT recruiters encouraging older IT practitioners to get plastic surgery to appear younger and land lucrative positions, and disparaging quotes about the work of innovators and developers in the pre-cloud era of computing. Ageism in the tech industry was alive and well.
I edited down a fairly voluminous and damning testament to tech-related stupidity into five or six recommendations about using "the wisdom of age" as a mentor and leader. I put the subject aside because it was too disturbing to contemplate for long. Then came this interview. I balked, of course.
Noting to the marketing VP that today's technology is thoroughly grounded in decades-old engineering innovations, I wondered whether his CEO's perspective might take the company's development efforts down previously charted dead ends. He shrugged and said the boss's view was his view, one shaped by his own experience of entering the tech industry at the beginning of the "Google revolution."
The CEO turned out to be a bright fellow, not the supercilious schmuck I expected. We held some common views, including the notion that the file system had become an impediment to realizing some of the scaling and optimization potential of architectures such as virtual SANs.
I have long believed the file system was due for a refresh, especially the self-destructive attributes of common file systems that reflected pragmatic decisions made in the context of the 1960s, when the price of disk-based storage was exorbitant and the idea of versioning or journaling was viewed as wasteful. The attributes of storage media, such as cost and functionality, have changed.
The CEO and I agreed: it's time to reconsider the design decisions that gave us file systems that overwrite the last good version of a file whenever you save a new one.
History repeats itself
The deconstruction of monolithic storage is also long overdue. By the late 1990s, everyone was selling the same box of Seagate hard disks, but inflating its price with value-add software on the array controller. That's how EMC Data Domain managed to charge a manufacturer's suggested retail price of $410,000 for a box of disk drives costing roughly $13,000: it added deduplication software. It was getting ridiculous.
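For readers who haven't looked under the hood, deduplication itself is conceptually simple, which is part of what made the markup so galling. Here's a minimal sketch, assuming fixed-size chunks and SHA-256 fingerprints (real products use variable-size chunking and other refinements): identical chunks are stored once, and a "recipe" of hashes rebuilds the original data.

```python
# Minimal block-level deduplication sketch. Fixed-size chunking and
# SHA-256 fingerprints are simplifying assumptions for illustration.
import hashlib


def dedup_store(data, chunk_size=4):
    """Split data into chunks; store each unique chunk once.
    Returns (chunk store, recipe of hashes to rebuild the data)."""
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks stored only once
        recipe.append(digest)
    return store, recipe


def rehydrate(store, recipe):
    """Reassemble the original data from the stored chunks."""
    return b"".join(store[digest] for digest in recipe)
```

Feed it `b"abcdabcdabcd"` with a chunk size of 4 and the store holds a single chunk while the recipe lists it three times: three chunks' worth of data, one chunk's worth of storage.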
The software-defined storage (SDS) revolution has been framed as redress for storage vendor greed. Moving value-added software off array controllers and into a centralized -- or federated -- server-side stack isn't a new idea, however. IBM did this with System Managed Storage (SMS) on mainframes in 1993. Some would argue Big Blue developed SMS in part to own the entire data center -- from processor to direct access storage device. Forgetting this bit of history, we watched VMware, and arguably Microsoft, pursue the same objective of data center hegemony with their proprietary SDS stacks and the hyper-converged infrastructure appliance silos built upon them.
The idea of SDS deconstruction may have been a good one, built on the shoulders of innovators and engineers who predated Google by decades. But the industry's collective failure to remember what happened the last time we pursued this objective led to an SDS architecture that didn't reduce the cost or complexity of the monolithic storage era. It merely changed masters.
To date, there hasn't been a serious debate over what belongs in the SDS stack. That's changing, however, as certain vendors broaden the stack's functionality. VMware introduced its version, mostly unchallenged, a couple of years ago with vSAN. But it didn't take long for others to realize VMware's vSANs were effectively expensive silos, locking consumers in and competitors out.
DataCore Software, which argued for SDS long before it became a term, was among the first companies to challenge this model by adding storage virtualization and then parallel I/O handling to the SDS stack. In addition, companies like Nutanix and Cohesity are refining software-defined stacks and the hyper-converged infrastructure built upon them. (I'll drill down into Cohesity's SpanFS technology in a future column.)
For now, my argument stands. We need to beware of ageism in the tech industry and acknowledge that even old-timers -- like me -- may have insights to offer this new world of storage in the Google and cloud era. There's no room for ageism in IT. Respect your elders!