Guest Post

Flash in the Era of Digital Infrastructure: What CIOs Need to Know

Explore the changing nature of applications and workloads, the shift from hyper-converged to composable and disaggregated infrastructure, edge computing and more.

00:00 Speaker 1: My presentation today is divided into six parts. I will start off with some thoughts on how the future of digital infrastructure is distributed -- you can think of it as a core-edge-endpoint continuum. We'll then shift to a brief discussion on the changing nature of applications and workloads. After that, we will look at infrastructure for time-to-value workloads like AI and massively parallel computing. Later, we'll talk about the shift from hyper-converged to composable and disaggregated infrastructure. We'll then talk about edge computing and conclude today's presentation with essential guidance for CIOs and IT decision-makers.

00:43 S1: So how has flash ushered in an era of distributed computing? Let us take a quick journey down memory lane. Flash has started to make its presence felt in the data center only in the last five to six years. We started with an era of mainframes, when flash did not exist; we then moved into a client-server model and then ushered in cloud computing. It is in this era of cloud computing that flash started making its way into the data center, and it is now the foundation of distributed computing.

01:15 S1: With cloud computing becoming mainstream, I believe that it is leading to a new era of distributed computing that stitches together various cloud and traditional IT environments with a common orchestration, automation and operations layer. By 2022, IDC estimates that 65% of global GDP will be digitized, increasing the reliance of businesses on infrastructure to support much more than traditional business applications. The digital services being developed to deliver modern, increasingly automated customer and work experiences and intelligent business operational systems depend upon having early access to innovative yet resilient and trusted technology, at the physical and logical level, anywhere in the world.

02:05 S1: A large percentage of a digital enterprise's revenue depends on the responsiveness, scalability and resiliency of the infrastructure deployed within its own facilities, as well as the ability to take advantage of third-party provided and operated infrastructure resources delivered as a service. The emerging digital infrastructure ecosystem, increasingly built upon a cloud foundation, focuses on ensuring ever faster delivery of innovative infrastructure hardware, software, resource abstraction and process technologies to support the development and continual refinement of resilient digital infrastructure and digital experiences.

02:47 S1: Digital infrastructure does not just reside in central enterprise or cloud data centers. It includes the assets and resources at edge locations that enable the shifting of applications and code for enhancing customer experiences, embedding intelligence and automation into business operations, and supporting ongoing industry innovation.

Given these requirements, the most significant development over the next several years is the acknowledgement that the future of infrastructure, when it comes to a CIO's priority, is cloud everywhere. The transition to cloud-centric digital infrastructure, which is already underway and will accelerate following the pandemic, depends on the commitment to a digital strategy. It enables timely access to and consumption of innovative infrastructure technologies to support digital business models. It also aligns technology adoption and IT operational governance with business outcomes.

03:50 S1: Let us now take a look at the changing nature of applications and workloads. For some of you, the term workload may be confusing. IDC refers to workloads as applications along with the corresponding data sets. So a database workload, for example, is the combination of the database application and the database data sets, which include the application binaries and all the data files that hold the data. Applications are very dynamic and their architectures change all the time. This slide depicts the general trend in the evolution of applications. This evolution model may not apply to specialized applications such as voice recognition, video rendering and high-performance computing. These specialized applications may follow the general trend, but not the exact sequence.

04:42 S1: Legacy applications are centralized and stateful and are dependent on the underlying infrastructure for their resiliency, availability and security. They follow a scale-up model to support increasing load or performance. Enterprise business applications, email servers, web servers and so forth fall under this category. This includes monolithic applications.

Next, we have modern applications. These are distributed, mostly stateless, less dependent or not at all dependent on supporting infrastructure for resiliency, security and reliability, and they follow scale-out models to support higher loads and better performance. They consist of cloud-based applications and cloud-native applications, and lately, serverless applications.

05:30 S1: Finally, we come to next-generation applications. These refer to the upcoming category of applications that exhibit the traits of modern applications while pushing the limits of modern infrastructure. They are characterized by an extreme scale of computation, data storage and/or communication technologies.

05:51 S1: Having talked about applications, let us quickly chat about the infrastructure that supports them. It is shifting from centralized to distributed, from stateful to stateless, from rigid to elastic, and finally from scale-up to scale-out. The data sets for these applications are changing too. They are shifting from structured data to a combination of structured, semi-structured and unstructured data. Access mechanisms are changing as well -- going from block to file to object -- and flash is underpinning this transformation. Without flash, it would be nearly impossible to adopt any next-generation applications.
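To make that access-mechanism shift concrete, here is a minimal sketch contrasting file access with object access. The path, bucket and key are hypothetical, and the object call assumes the boto3 library is installed with credentials configured for an S3-compatible endpoint.

```python
# Minimal sketch: file access vs. object access (hypothetical names).
# Assumes boto3 is installed and credentials are configured.
import boto3

# File access: the application addresses data by path inside a filesystem.
with open("/mnt/shared/reports/q1.csv", "r") as f:   # hypothetical path
    file_data = f.read()

# Object access: the application addresses data by bucket and key over HTTP.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="analytics-archive", Key="reports/q1.csv")  # hypothetical bucket/key
object_data = obj["Body"].read().decode("utf-8")

# Block access (not shown) would address raw device sectors, typically via a
# filesystem or database engine rather than directly from application code.
```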

06:31 S1: This slide depicts the evolution of application design through generations of IT infrastructure. Application design patterns revolved around specific IT infrastructure patterns for some time. However, they gained critical velocity once they started pushing the limits of IT infrastructure. They escape from that infrastructure's boundaries, if you will, and latch on to the next generation of infrastructure. This behavior is like the orbit-raising maneuvers of satellites such as Cassini jumping into another planet's orbit -- slingshots, as they are called.

07:08 S1: Our discussion on application transformation would not be complete without mentioning service mesh infrastructure. Service mesh infrastructure provides a scalable, policy-driven, declarative and developer-centric approach to securing and managing service-to-service communications. While this appears to be a redux of SDN, service mesh infrastructure is distinctly application-developer-centric. Service mesh also moves the control, for the most part, to Layer 7 of the networking stack, much closer to the applications, unlike SDN platforms that operate mostly at Layer 4 and below.
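Service mesh policies are normally expressed declaratively and enforced by sidecar proxies, but a small imperative sketch can show the kind of Layer 7 behavior being described. This is an illustration only, using the Python standard library; the service URL, retry count and timeout values are assumptions, not any particular mesh's API.

```python
# Illustrative only: the retry/timeout policy a mesh sidecar might apply to a
# service-to-service call, written imperatively. Real meshes configure this
# declaratively and also handle mTLS, routing and telemetry.
import time
import urllib.error
import urllib.request

def call_service(url: str, retries: int = 3, timeout_s: float = 2.0) -> bytes:
    """Call a peer service with a simple retry-with-backoff policy."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            time.sleep(0.2 * (2 ** attempt))  # exponential backoff between attempts
    raise RuntimeError(f"service at {url} unavailable after {retries} attempts")

# payload = call_service("http://orders.internal/api/v1/orders")  # hypothetical service
```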

07:48 S1: Business applications are becoming largely microservices-based, attracting more services toward them. Enterprise IT environments are also becoming heterogeneous, with applications and services communicating across environments. With the widespread adoption of microservices-based architecture, the number of applications and services in enterprise IT environments is expected to grow exponentially due to the influence of service mesh.

08:20 S1: Now let us look at infrastructure for time-to-value workloads. These include AI as well as technical, scientific and engineering applications. Many of the time-to-value workloads on this slide are ones that businesses rely on to gain deep insights, insights that they leverage for competitive differentiation.

Artificial intelligence, or AI, is a key emerging workload. AI capabilities help accelerate digital transformation in enterprises. AI and machine learning capabilities provide competitive advantage to enterprises through new business models and digitally enabled products and services, thereby enabling businesses to improve user experience, increase productivity and innovate. IDC predicts that the worldwide AI market will grow from roughly $28.1 billion in 2018 to roughly $98.4 billion in 2023, at a CAGR of 28.5%.
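As a quick sanity check on the arithmetic behind those figures, the growth rate implied by the 2018 and 2023 numbers can be recomputed in a couple of lines:

```python
# Quick check: CAGR implied by the quoted 2018 and 2023 AI market figures.
start, end, years = 28.1, 98.4, 5          # $B in 2018, $B in 2023, 2018 -> 2023
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                       # prints 28.5%, matching the quoted CAGR
```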

09:24 S1: Flash underpins compute for AI workloads. Every one of these market segments is accelerated by flash. Without flash, AI workloads would be I/O- and latency-bound, creating a challenge for companies looking to compress the time to value from the data they create.

09:42 S1: As building AI capabilities becomes increasingly urgent, IDC sees that businesses are confused about the process of building their own AI infrastructure stack. IDC is seeing a growing number of AI server, storage and processor vendors develop AI stacks that consist of abstraction layers, orchestration layers, development layers and data science layers that are intended to operate seamlessly together.

These stacks typically combine open source software, proprietary software and non-monetized commercial software layers that are intended to help customers, IT infrastructure teams, developers and data scientists collaborate in a pre-designed stack without having to build it all by themselves. IDC believes that AI infrastructure stacks provide a clear advantage to customers and that their variety, while confusing, is not particularly a disadvantage. IDC does not expect vendors to collaboratively develop a common standard AI stack. This would defeat the advantage for customers of having multiple flavors to choose from to begin with.

10:50 S1: By offering an AI framework, IDC hopes to provide a guide for IT vendors, encouraging them to improve the versatility of their stacks and thereby increase adoption. Needless to say, flash is an important building block of this AI stack. Without flash, compute cannot perform the way it otherwise would.

11:13 S1: The demand for computational power worldwide is driven by an insatiable appetite to reduce the time to value from data sets that are increasing in size and complexity with each passing day. Newer use cases are constantly being born, and with each new use case, the demand for computational approaches goes up. Computational platform architectures are constantly evolving so that organizations can take better, timely actions based on deep insights gleaned from weaving together data sets from diverse sources.

Computing platform architectures that overlay parallelization on top of serial processors running preemptive multitasking operating systems are nothing new. They are based on the premise that any computational job is best executed by logically chopping the job up into smaller joblets and running these joblets in parallel on independent processor systems or subsystems. On the data side, this means sharding data sets to preserve compute locality.
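As a simple illustration of that chop-and-parallelize premise, here is a minimal Python sketch that shards a data set, runs one joblet per shard on independent worker processes, and combines the partial results. The shard size and the work done per joblet are arbitrary placeholders.

```python
# Minimal sketch of the "chop into joblets" premise: shard the data set,
# run one joblet per shard on independent worker processes, then combine.
from concurrent.futures import ProcessPoolExecutor

def joblet(shard):
    """Placeholder work: sum one shard (stands in for any per-shard computation)."""
    return sum(shard)

def run_parallel(data, shard_size=1000):
    shards = [data[i:i + shard_size] for i in range(0, len(data), shard_size)]
    with ProcessPoolExecutor() as pool:          # one worker per CPU core by default
        partials = list(pool.map(joblet, shards))
    return sum(partials)                         # combine the partial results

if __name__ == "__main__":
    print(run_parallel(list(range(1_000_000))))  # same answer as sum(range(1_000_000))
```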

12:20 S1: Many of the early approaches to parallelization had scaling limitations because of the limitations of the underlying transport mechanisms or the technologies used to connect systems or subsystems together in a coherent fashion, RISC being one of them. The lack of low-latency and high-bandwidth fabrics also meant that certain tasks could not be parallelized because of the lack of coherency between these systems. These limitations are slowly going away, leading to the increasing use of large-scale clustered or parallelized computational systems that span the data center and even the cloud.

12:57 S1: Massively parallel computing is an emerging computational platform and data management architecture that relies on massive parallelization for processing large volumes of data or executing complex instruction sets in the fastest way possible. Today and in the near future, MPC, or massively parallel computing, is being adopted as an approach across three workload and use case groups: artificial intelligence, big data and analytics, and modeling and simulation. Other use case groups may emerge should this approach be rendered economically scalable vis-à-vis scale-up approaches.

13:36 S1: IDC distinguishes four levels of parallelization. This slide illustrates these levels, from the smallest within the processor to the largest across geographically dispersed localities. IDC views parallelization as a multi-step process. At the lowest level, parallel processing can take place inside a microprocessor, whereby the process scheduler distributes parts of the instruction among the processor cores. However, IDC does include in the definition of MPC processors with many more cores -- hundreds, for example -- that are designed and can be scheduled to perform parts of a complex instruction set, or a simple instruction set on a large volume of data, in parallel.

14:19 S1: At the next level, such parallel processing can take place between two or more microprocessors within a compute node. Here, the process scheduler will allot an instruction to the available microprocessor or distribute an instruction between the available microprocessors according to an operating-system-specific priority scheme, either to achieve efficiency or to give a certain task priority to meet a deadline.

At the next level, compute nodes are clustered for the specific purpose of speeding up a compute task, such as performing a very complex instruction set or performing a relatively simple instruction set on massive amounts of data, by parallelizing the task. The cluster can be built with memory and flash that are shared among the nodes, or each node can have its own memory and flash, with a task scheduler distributing the instruction components and data amongst the nodes based on a specific algorithm.
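One concrete example of the kind of placement algorithm such a task scheduler might use is hash-based shard assignment; the node names and shard keys below are hypothetical, and real schedulers weigh many more factors such as load and locality.

```python
# Illustrative shard-placement algorithm: assign each data shard to a node by
# hashing its key, so every scheduler instance computes the same placement.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical cluster members

def assign_node(shard_key: str, nodes=NODES) -> str:
    digest = hashlib.sha256(shard_key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]      # deterministic node choice

for key in ["customers/000", "customers/001", "orders/042"]:  # hypothetical shard keys
    print(key, "->", assign_node(key))
```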

15:17 S1: At the highest level, coordinated parallelization is achieved by linking and scheduling clusters within one location, such as a data center, or between dispersed locations that are linked by a wide area network. This slide illustrates IDC's view on composable and disaggregated systems, which are an evolution of what we call traditional converged infrastructure. While the new technology is a significant leap forward, the gist of this evolution is as follows. The initial convergence was at the provisioning and management layer, leading to a breed of solutions called integrated or converged infrastructure. The next generation delivered composability via an operating platform layer, also known as a hypervisor, leading to a breed of solutions called hyper-converged infrastructure, the HCI we know of today. This started a transition toward industry-standard, API-based, software-defined infrastructure, or SDI.

16:18 S1: The new breed of infrastructure now takes this concept further. The hardware side is moving toward disaggregation, while the software side is moving toward composability via a unified API-based provisioning, orchestration and automation layer. Needless to say, flash has enabled the journey from converged to composable.
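To show what composability via a unified API might look like from an operator's point of view, here is a purely hypothetical sketch; the endpoint, field names and resource units are invented for illustration and do not refer to any real product's API.

```python
# Purely hypothetical sketch of composing a logical server from disaggregated
# pools through a unified provisioning API. Endpoint and field names are invented.
import json
import urllib.request

COMPOSER_URL = "https://composer.example.local/api/v1/systems"  # hypothetical endpoint

def compose_system(cpus: int, memory_gib: int, flash_tib: int) -> dict:
    """Request a logical system assembled from pooled CPU, memory and flash."""
    spec = {"cpus": cpus, "memory_gib": memory_gib, "flash_tib": flash_tib}
    req = urllib.request.Request(
        COMPOSER_URL,
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)                      # composed-system description

# system = compose_system(cpus=32, memory_gib=512, flash_tib=16)  # hypothetical call
```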

16:43 S1: So why are people replacing legacy infrastructure with HCI? The number one reason for replacing legacy infrastructure with hyper-converged is performance; flash underpins performance. Nowhere is this more prominent than in the case of hyper-converged infrastructure. For example, in the appliance category, all-flash appliances are estimated to grow at a five-year compound annual growth rate of nearly 20%. In the case of software plus certified servers, all-flash servers are estimated to grow at a five-year CAGR of 39.1%, and as a total market, the all-flash hyper-converged market is estimated to grow at a five-year CAGR of 24.5%. These are staggering numbers.

17:34 S1: Newer computing platform architectures offer an alternative to the monolithic computing platform architecture that we know of today. Indeed, a decentralized and distributed computing approach is aimed at reducing the burden that is placed on a central processing unit, or CPU, for executing privileged operations. Decentralized architectures borrow an approach known as "accelerated computing" that is gaining traction in the industry. We know of these because of the GPUs, FPGAs and ASICs that are used for workloads like AI today, for example. Specially designated processors, also known as accelerators, of which coprocessors are a variant, are used to offload specific portions of a user space payload -- for example, GPUs for mathematically intensive functions to accelerate outcomes of the workload.
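A small sketch of that offload idea: the same mathematically intensive function run on the CPU with NumPy and, assuming a CUDA-capable GPU and the CuPy library are available, offloaded to the accelerator. The matrix size is an arbitrary placeholder.

```python
# Sketch of accelerator offload: the same matrix multiply on the CPU (NumPy)
# and, assuming a CUDA GPU and CuPy are available, offloaded to the GPU.
import numpy as np

def matmul_cpu(n=2048):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    return a @ b                      # executed on the CPU

def matmul_gpu(n=2048):
    import cupy as cp                 # assumes CuPy is installed and a GPU is present
    a = cp.random.rand(n, n)
    b = cp.random.rand(n, n)
    c = a @ b                         # executed on the GPU
    return cp.asnumpy(c)              # copy the result back to host memory

result = matmul_cpu()                 # the GPU path is only called when hardware allows
```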

18:25 S1: However, even in these accelerated computing deployments, the CPU is in command of the platform, and it runs all the embedded and management payloads on it. Along with general-purpose workload accelerators, IDC sees function accelerators creating a three-spoke distributed computing model that we expect to be the foundation of distributed, composable and disaggregated infrastructure. Function accelerators offer an intermediate level between full hardware disaggregation and software composability. This slide shows how the journey will take place. It starts with a software-only approach with no fabric assist. This is plain composability without any assumption of underlying hardware capabilities.

19:13 S1: Next, we have software with an enabling platform and fabric assist. This is the next level, where the vendor relies on an enabling hardware platform and proprietary high-speed connectivity to boost the software. With an accelerator or coprocessor offload, we can achieve partial disaggregation by offloading privileged embedded functions onto dedicated hardware that sits outside the control of a monolithic CPU and can perform functions autonomously. At this level, the fabric offload is accomplished by a combination of software-defined and high-bandwidth networking that is offloaded to the function accelerator itself.

19:56 S1: Finally, we arrive at full disaggregation. This is enabled by a modification of the CPU subsystem and a full fabric offload for embedded management and user space functions beyond what can be offloaded to a coprocessor today. We are not there yet, but standards like Gen-Z and CXL promise to take us there. Computational storage expands upon this journey, with the accelerator placed right near the data persistence device. In other words, it takes compute to the storage or flash layer. In recent times, vendors have started to offer converged file and object systems -- some call it fast file; the idea is the same -- and that is to offer a single storage system that can support file, block, object and streaming media access, as well as container persistence. This is important as it provides a level of software-defined infrastructure that can be integrated with cloud-native applications and modern cloud-native application development processes.

21:02 S1: Now, let us look at edge computing and the role of flash in it. Edge computing describes all computing, storage, networking and connectivity processes that occur outside the organization's core, but not on the actual endpoints themselves. There are several reasons why investing in edge computing makes sense for a modern enterprise. Many of these objectives are only met when flash is used for data persistence and caching functions at the edge. Flash-powered edge computing platforms enable firms to embrace a common compute reference architecture. No longer do companies need to compromise between form factors, performance and storage; it is all available regardless of the functions serviced by the platform. In other words, one can deploy a software-defined and converged IT, OT and CT infrastructure stack.

22:00 S1: This slide illustrates my previous point. Edge functions can be deployed on either a heavy edge tier or a light edge tier. The heavy edge tier comprises systems adapted from data-center-grade servers meant to perform heavier tasks such as analytics. These are usually general-purpose servers that are built with data-center-grade, industry-standard computing hardware but are ruggedized for the edge and designed to integrate OT and CT functions such as control and data acquisition systems. Because many of the IT functions such as data analytics require significant computing resources, heavy edge systems must be built with space- and power-efficient hardware.

22:42 S1: The light edge includes devices that are based on the concept of gateways, the initial design of which was based on Intel's reference design using low-power Atom chips. Intel initially enlisted 15 or so vendors to build them for various use cases with a broad set of objectives, such as embracing standardized compute form factors at the edge for running IT, OT and CT apps, enabling efficient data capture, and supporting limited analytics at the edge to minimize data transfer back to the core.

I would like to conclude my presentation with some guidance for vendors and IT decision-makers. For vendors, enabling their clients to transition from traditional infrastructure to digital infrastructure means faster delivery of reliable digital services and experiences. It comprises three parts: first, the use of innovative cloud-native technologies; second, greater resilience and reach via ubiquitous deployment; and third, self-regulation via autonomous operations.

23:49 S1: For CIOs, this means four key goals. First, resource optimization. This involves implementing solutions that can do more for less and creating additional value through investments that are bite-sized and achievable.

Second, consistent resiliency. This involves new and modernized applications designed for failure and graceful degradation versus five nines and vital service recovery chains.

Third, continual enhancement. This includes fast access to new technologies, capabilities and new services based on business needs. It also includes automated upgrades of software and hardware assets that minimize the accumulation of technical debt and security risk and progressively reduce stranded capacity associated with legacy applications.

And finally, a digital strategy that ensures consistent delivery of IT across the entire enterprise. With that, I would like to end this presentation by thanking you for your time and attention. Please do reach out to us should you have any questions or need further details. Have a great day. Thanks.
