Flash Memory Summit 2020

Flash Memory Summit was virtual this year, spanning three days. Here, you have unlimited access to the exclusive Flash Memory Summit keynote presentations and sessions.

In previous years, the annual Flash Memory Summit conference took place in person, but with travel out of the question in a COVID-19 world, IT pros attended the conference virtually to learn the latest storage trends and techniques and to network with other pros in the industry. Taking place from Tuesday, November 10 through Thursday, November 12, Flash Memory Summit 2020 included keynote presentations -- some with live Q&As -- as well as three days of breakout sessions and an expo floor.

TechTarget's SearchStorage.com partnered with Flash Memory Summit to connect you with industry leaders, strategies and insights to help you meet your organization's unique storage needs.

Here, you will have unlimited access to the keynote and session presentations, all of which are available for download. To help you get the most out of the virtual conference experience and focus on the content that matters to you, the session presentations are organized by topic.

The keynote presentations are below.

Keynote 1: IDC

Matt Eastwood, Senior Vice President at IDC

Today's Trends in Cloud and the Future Enterprise

Emerging technologies continue to disrupt the way business is done all around the world. Enterprise and government entities looking to participate in today's marketplace are fast realizing the necessity of a comprehensive digital transformation blueprint that integrates emerging third-platform technologies across a continuum that stretches from edge to core to cloud. Customers are doubling down on hybrid cloud to deliver the resiliency and agility their digital businesses require. That requires a focus on digital innovation spanning insights, edge, customer experience, application development and trust, and it is significantly changing computing and data storage ecosystems as customers look for partners to deliver new and more sophisticated solutions to the challenges of a more distributed operation. In this keynote session, analyst Matt Eastwood, senior vice president at IDC, discusses the challenges and opportunities facing today's infrastructure market given the changes to global dynamics and customer priorities. Link to Presentation

Keynote 2: Numem

Jack Guedj

DNN Accelerator for the High-Performing Space Computing Program

Like most applications, space systems are migrating from centralized to distributed processing for more autonomous decision making, lower latency, less dependence on communications systems, and lower power. NASA has launched a DNN (deep neural network) Radiation Hardened Coprocessor as a companion chip to its upcoming High-Performance Spaceflight Computing Processor. The new combination provides more than 100 times the computing power of current systems. MRAM makes a big difference in the AI coprocessor because it offers both high density and low power. Moreover, MRAM is inherently radiation-tolerant, which is essential for space applications. Link to Presentation

Keynote 3: Western Digital

Siva Sivaram

Storage as the Driver of Change: Rethinking Data Infrastructure

This year has seen a dramatic acceleration of digitization across all spheres of our lives. Driving this change are data architectures built to be cloud-native, tied to mobility and intelligent endpoints, and connected by high-performance networks. Extracting value from the accelerating influx of data is the key to the success of any enterprise. Link to Presentation

Keynote 4: Marvell

Thad Omura, Vice President of Marketing at Marvell Semiconductor

Using Hardware Acceleration to Increase NVMe Storage Performance

High-speed NVMe connections coupled with 3D NAND flash technology have produced much larger and more capable SSDs. When networked, they can provide huge amounts of high-speed storage at low cost to data-driven applications. Unfortunately, they also stress system processors and software support. New hardware acceleration methods can restore balance to these systems. They seamlessly virtualize, protect and scale native NVMe and NVMe-oF to reach their full potential, maximizing the capabilities of each SSD. Solutions deployed today in advanced controllers improve the performance of NVMe storage infrastructures in both cloud and enterprise systems. They provide both disaggregated flash storage and storage class memory at the right cost and power levels to address the next generation of customer applications, such as intelligent data analytics, AI/ML, HPC, high-resolution video and other demanding workloads. Link to Presentation

Keynote 5: Fungible

Pradeep Sindhu, Co-Founder and CEO of Fungible

The Evolution of Data Centers in the Data-Centric Era

Big data and fast storage have led to tremendous strain on computing resources. General-purpose CPUs cannot cope with the petabyte-class data flows coming from NVMe SSDs and persistent memory. The result is delays and underutilization of new storage technology. We need solutions that can handle storage tasks much more efficiently, can operate within current architectures, are scalable, and do not greatly increase costs, power consumption or space requirements. Data processing units (DPUs) can break through these roadblocks. They can be packaged readily into both compute and storage servers, offloading CPUs and GPUs from handling data-centric storage, network and security functions. The Fungible DPU not only executes these functions exceptionally well, it also enables highly effective disaggregation of compute and storage resources at massive scale. These disaggregated pools of resources can then be composed on demand into specific combinations that meet application requirements, realizing what we are calling Fungible Data Centers. Link to Presentation

Keynote 6: Nvidia

Kevin Deierling, VP of Marketing at Mellanox, and Manuvir Das, Head of Enterprise Computing at NVIDIA

GPUs, CPUs and Storage: Bringing AI and ML to Data Centers Everywhere

Artificial intelligence and machine learning are rapidly becoming essential for almost all businesses. However, AI applications are highly compute-intensive and require accelerated computing to perform well. Fortunately, the GPU and new parallel programming models create a powerful accelerated computing platform to solve complex AI problems across a broad range of disciplines. GPUs do the computational job, but they introduce a new problem -- namely, getting data to the GPUs fast enough to keep them busy. Enter the DPU, or data processing unit, which is ideally suited to offload, accelerate and isolate data center workloads and keep the CPU and GPU running efficiently. New storage architectures that take advantage of the DPU are essential to make this happen. Storage must deliver huge capacity, extremely fast access, low latency, high throughput, easy and flexible management, and high levels of scalability. And, of course, it must come at a reasonable cost to permit use in both edge and central computing for all data centers (ranging from small local installations to huge clouds and mega centers). Flash-based local and networked storage is the obvious solution and is integral to capitalizing on GPUs and DPUs to make AI/ML available everywhere. Link to Presentation
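
The data-feeding problem Deierling and Das describe shows up in software as well as hardware. As a rough illustration only -- it assumes PyTorch and a CUDA-capable GPU, neither of which the keynote prescribes -- the sketch below keeps a GPU busy by pulling batches with parallel workers, pinning host memory for fast DMA and overlapping copies with compute, the same balancing act the DPU is meant to perform at data center scale.

```python
# Minimal sketch (assumes PyTorch is installed): keeping a GPU fed with data.
# The GPU stalls whenever the data pipeline cannot keep up, which is the
# software-side analog of the storage/DPU bottleneck described above.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic dataset standing in for data read from flash-backed storage.
dataset = TensorDataset(torch.randn(10_000, 1024), torch.randint(0, 10, (10_000,)))

loader = DataLoader(
    dataset,
    batch_size=256,
    num_workers=4,       # parallel workers pull data off storage
    pin_memory=True,     # page-locked buffers enable fast DMA to the GPU
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for features, labels in loader:
    # non_blocking=True lets the host-to-device copy overlap with GPU compute
    features = features.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... model forward/backward pass would run here ...
```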

Keynote 7: Xilinx

Gopi Jandhyala, Vice President, Data Center Engineering at Xilinx

The Future Adaptable Data Center

From the disaggregation of compute, networking and storage to security, the modern data center must evolve to address the demands of changing workloads. Technology megatrends are driving the evolution of performance improvements in the data center, new state-of-the-art computational storage solutions, and the architectures needed to support them. Link to Presentation

Keynote 8: Pliops

Steve Fingerhut, President & Chief Business Officer at Pliops, and Uri Beitler, Founder & CEO of Pliops

Storage Processors to Unleash the Full Potential of Flash Storage

With data storage and the computational requirements for processing that data growing at an exponential rate, the status quo approaches aren't cutting it anymore. SSDs have become mainstream for high-performance cloud and enterprise applications, but their potential hasn't resulted in the expected system performance gains. And the lowest-cost ZNS- and QLC-enabled SSDs and the most advanced storage class memory (SCM) products are limited to specialized environments and require extensive software rework. Applications are suffering, reliability is faltering, and data center sprawl is costing us. This is because server architectures aren't balanced and software workarounds magnify the inefficiencies. It's time to simplify infrastructure, stop the over-investment in storage, and scale more efficiently. A new class of storage processors can, in fact, rebalance server architectures, increase computational output with a reduced footprint, and eliminate the need for software changes, unleashing the full potential of your storage investment. Link to Presentation

Keynote 9: Intel

Alper Ilkbahar, Vice President & General Manager, Data Center Memory and Storage Solutions at Intel

Intel Optane Persistent Memory: From Vision to Reality

Digital transformation fuels accelerating demand for compute, and with it an exponential demand for the memory to support that compute, but traditional DRAM is not scaling to meet this demand. Intel Optane persistent memory (PMEM) introduces a unique memory technology that creates a new tier in the memory and storage hierarchy, allowing data centers to architect two-tier memory or two-tier storage systems that keep compute fed with data. This persistent memory tier, with combined memory and storage capability, lets architects match the right tools to their workloads. The result is reduced wait times and more efficient use of compute resources, allowing enterprises to drive cost savings and massive performance increases that help them achieve business results. Enterprises will also benefit from innovations and new uses emerging from the large software ecosystem that already supports PMEM. This presentation highlights the technology and showcases examples where PMEM is used to achieve high memory capacity at a lower cost without compromising performance. The future of memory and storage is here today with Intel Optane persistent memory. Link to Presentation
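
To make the two-tier idea concrete, here is a minimal sketch of app-direct-style access to persistent memory: a file on a DAX-mounted PMEM filesystem is memory-mapped so loads and stores reach the media byte-addressably instead of going through block I/O. The mount point /mnt/pmem is hypothetical, and production code would normally use a library such as PMDK rather than raw mmap; this only shows the shape of the programming model.

```python
# Minimal sketch of the "memory-mapped persistence" idea behind a PMEM tier.
# Assumes a file on a DAX-mounted persistent memory filesystem (the path
# /mnt/pmem/cache.bin is hypothetical); real deployments would typically use
# PMDK instead of raw mmap.
import mmap
import os

PATH = "/mnt/pmem/cache.bin"   # hypothetical DAX mount point
SIZE = 64 * 1024 * 1024        # 64 MiB region

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    # Loads and stores go to the persistent media byte-addressably rather than
    # through block I/O -- this is what lets PMEM act as a larger, slower
    # memory tier or a much faster storage tier.
    pmem[0:11] = b"hello pmem\n"
    pmem.flush()               # flush the mapping to make the write durable

os.close(fd)
```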

Keynote 10: IBM Storage

Alistair Symon

Breakthroughs Enabling Enterprise QLC SSDs

QLC flash offers the density required to replace hard drives in many storage systems, leading to lower costs, lower power usage and higher speed. However, it has so far displayed serious performance and endurance issues, limiting its use to low-activity and archiving systems. Innovations now enable all-flash arrays based on QLC to offer the same endurance and performance as TLC-based arrays. Enhancements in systems and SSD controllers overcome QLC's write endurance limitations, and new techniques can also reduce QLC program times. The end result is enterprise storage systems with substantial cost savings and stunning capacity increases. All-flash can replace hard drives in an ever-increasing range of applications, including AI/ML, analytics, databases, and content management and production systems. Link to Presentation

Keynote 11: NEO Semiconductor

Andy Hsu, Founder & CEO of NEO Semiconductor

New Flash Architecture Combines QLC Density With SLC Speed

QLC NAND flash has already found many applications due to its higher density and lower price than other NAND types. However, a serious limitation is that QLC has relatively low performance (especially write speed). A new architecture called X-NAND gives both TLC and QLC NAND read/write performance comparable to SLC. Such an architecture can produce the high-speed, low-cost solution required by emerging applications such as AI/ML, 5G, real-time analysis, VR/AR and cybersecurity. The X-NAND architecture can be applied to different types of SSDs, from SLC to PLC flash. Simulation shows that X-NAND in SLC can deliver an astonishing read throughput of 85 GB/s. The architecture enables NAND flash to be readily integrated into ultra-high-bandwidth 3D integrated chips.

Keynote 12: NVM Express

Amber Huffman, Chief Technologist at Intel

NVM Express Technology: Powering the Connected Universe

The future surely belongs to speed- and data-hungry applications such as AI, real-time analytics, VR/AR, high-resolution video, amazing games, 5G and IoT. NVM Express technology offers a proven road to making them widely available at low cost. Recent NVMe advances such as zoned namespaces, persistent memory regions, and higher-performance interfaces such as PCIe 4.0 and 5.0 are just the beginning. NVMe technology will be extended to support hard disk drives and computational storage. Computational storage will supercharge performance by moving program execution to NVMe devices, increasing parallelism and eliminating the need to move huge data sets. Please join us in developing and implementing the storage architectures that will power the connected universe. Link to Presentation
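
Zoned namespaces are one of the advances mentioned above, and the core rule is simple to show: each zone exposes a write pointer and accepts only sequential writes, which is what lets the drive avoid device-side garbage collection. The toy model below is purely illustrative -- it is not the NVMe command set, and the zone size is made up -- but it captures the constraint host software has to respect.

```python
# Toy model of the NVMe Zoned Namespaces (ZNS) write rule: each zone has a
# write pointer and only accepts sequential writes. Illustrative sketch only,
# not the actual NVMe command set.
class Zone:
    def __init__(self, start_lba: int, size: int):
        self.start_lba = start_lba
        self.size = size
        self.write_pointer = start_lba   # next LBA that may be written

    def write(self, lba: int, nblocks: int) -> None:
        # ZNS rejects writes that do not start exactly at the write pointer.
        if lba != self.write_pointer:
            raise ValueError(f"unaligned write: expected LBA {self.write_pointer}, got {lba}")
        if self.write_pointer + nblocks > self.start_lba + self.size:
            raise ValueError("write crosses zone boundary")
        self.write_pointer += nblocks

    def reset(self) -> None:
        # A zone reset rewinds the write pointer, invalidating the zone's data.
        self.write_pointer = self.start_lba


zone = Zone(start_lba=0, size=4096)
zone.write(0, 128)     # ok: starts at the write pointer
zone.write(128, 128)   # ok: continues sequentially
# zone.write(1024, 8)  # would raise: not at the write pointer
```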

Keynote 13: IBM Almaden Research Center

Geoffrey Burr, Distinguished Research Staff Member at IBM Almaden Research Center

Accelerating Deep Neural Networks With Analog Nonvolatile Memory Devices

Deep neural networks (DNNs) are large artificial neural networks used to solve enormous AI problems. However, current CPUs and GPUs and even proposed AI chips may not be able to provide all the computing power these networks require. One solution is to use neuromorphic computing based on analog memory devices. Such devices could provide extremely high performance and greatly improved energy efficiency for inference and training of DNNs. The origin of this opportunity and the challenges inherent in delivering on it will be discussed, including analog memory materials and devices, circuit and architecture choices and challenges, and the current research status and prospects. Link to Presentation
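
The central idea is that a crossbar of analog nonvolatile memory cells can store a weight matrix as conductances and perform an entire matrix-vector multiply in one step, with Ohm's law doing the multiplications and Kirchhoff's current law doing the sums. The numpy sketch below is a toy simulation of that idea under assumed values -- the conductance range, noise level and layer sizes are illustrative, not from the talk -- and shows how device noise perturbs the otherwise exact result.

```python
# Toy simulation of an analog crossbar multiply-accumulate: weights are stored
# as conductances G, inputs are applied as voltages V, and the output currents
# are I = G @ V in a single analog step. The Gaussian term stands in for
# device noise and conductance drift; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(256, 784))          # trained layer weights
g_max = 1e-6                                   # siemens, illustrative max conductance
conductances = g_max * weights / np.abs(weights).max()

def crossbar_matvec(conductances, voltages, noise_std=0.01):
    """One analog multiply-accumulate pass through the crossbar."""
    ideal_currents = conductances @ voltages
    noise = rng.normal(scale=noise_std * np.abs(ideal_currents).max(),
                       size=ideal_currents.shape)
    return ideal_currents + noise

x = rng.normal(size=784)                        # input activations as voltages
digital = conductances @ x                      # exact result for comparison
analog = crossbar_matvec(conductances, x)
print("relative error:", np.linalg.norm(analog - digital) / np.linalg.norm(digital))
```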

Keynote 14: Microsoft

Karin Strauss, Senior Principal Research Manager at Microsoft

What You Need to Know About DNA Data Storage Today

The idea of DNA storage is fascinating. Why not use nature's method of storing information in DNA molecules to read and write ordinary digital information? It combines the amazing advances in biotechnology with the continued rapid progress in computer science and engineering. It can provide mind-boggling densities (an exabyte per cubic inch), long lifetimes (thousands of years), and widely available storage and retrieval technologies (already used in biotechnology and ramping up for clinical applications). Of course, major challenges remain on the road to production systems, including throughput, scaling, automation, data integrity, manufacturability and cost. The Microsoft project has demonstrated the feasibility of gigabyte-scale storage with more to come. It has also shown the use of DNA to perform special-purpose computations such as large-scale similarity search. Link to Presentation
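
The density argument is easy to see at the encoding level: each nucleotide (A, C, G or T) can carry two bits. The sketch below shows only that bare mapping; it is illustrative rather than a description of the Microsoft system, which adds addressing, error-correcting codes and sequence constraints such as avoiding long homopolymer runs.

```python
# Bare-bones illustration of the DNA storage density idea: two bits per base
# (A, C, G, T = 00, 01, 10, 11). Real systems add addressing, error correction
# and sequence constraints; this is only the core mapping.
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides, two bits per base."""
    strand = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            strand.append(BASES[(byte >> shift) & 0b11])
    return "".join(strand)

def decode(strand: str) -> bytes:
    """Inverse mapping: four nucleotides back to one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"hello"
strand = encode(message)
assert decode(strand) == message
print(strand)   # -> 'CGGACGCCCGTACGTACGTT'
```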
