
von Neumann bottleneck

What is the von Neumann bottleneck?

The von Neumann bottleneck is a limitation on throughput caused by the standard computer architecture, in which the processor and memory exchange all data over a single shared path. The term is named for John von Neumann, who developed the theory behind the architecture of modern computers. Earlier computers had their programs hardwired or fed in from external media while they ran, rather than storing them in memory.

In 1945, von Neumann proposed a computer design that was based on the concept of the stored program computer, in which program instructions and data are held in memory. Known as the von Neumann architecture -- or sometimes the Princeton architecture -- this model became the standard used for many of the computers to follow and continues to be used for a large number of today's systems.

In the von Neumann architecture, a computer's main components include the central processing unit (CPU), the memory unit, and the input and output devices. The CPU contains the control unit, arithmetic logic unit and registers. The processor and memory are separate components, with data moving between them via the system bus. The memory unit, often referred to as main memory or primary memory, stores both the program instructions and data.

[Diagram illustrating the von Neumann computer architecture.]
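The stored-program idea is easy to see in miniature. The following Python sketch models a hypothetical machine (the opcodes, memory layout and addresses are invented for illustration, not any real instruction set): instructions and data sit in the same memory, and every instruction fetch, load and store crosses the same processor-memory path.

```python
# A minimal sketch of a stored-program machine (hypothetical opcodes):
# program instructions and data live in one memory, and the CPU fetches
# everything over the same path.

MEMORY = [
    ("LOAD", 8),    # address 0: load the word at address 8 into the accumulator
    ("ADD", 9),     # address 1: add the word at address 9
    ("STORE", 10),  # address 2: store the accumulator at address 10
    ("HALT", None), # address 3: stop
    None, None, None, None,  # addresses 4-7: unused
    2,              # address 8: data
    3,              # address 9: data
    0,              # address 10: result goes here
]

def run(memory):
    pc = 0   # program counter
    acc = 0  # accumulator register
    while True:
        opcode, operand = memory[pc]  # fetch: instructions come from the same memory as data
        pc += 1
        if opcode == "LOAD":
            acc = memory[operand]     # every data access also crosses the CPU-memory path
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return

run(MEMORY)
print(MEMORY[10])  # prints 5
```

Running the sketch stores 5 at address 10. Note that even this three-instruction program touches memory seven times, once per instruction fetch and once per data access, all over the one shared path.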

The system bus is used to transfer all data between the components that make up the von Neumann architecture, creating what has become an increasing bottleneck as workloads have changed and data sets have grown larger. Over the years, computer components have evolved to try to meet the needs of these changing workloads. For example, processor speeds are significantly faster, and memory supports greater densities, making it possible to store more data in less space.

In contrast to these improvements, transfer rates between the CPU and memory have made only modest gains. As a result, the processor spends more of its time sitting idle, waiting for data to be fetched from memory. No matter how fast a given processor can work, it is limited by the rate of transfer the system bus allows; making the processor faster only increases the share of time it sits idle.
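A back-of-envelope calculation makes the imbalance concrete. The numbers below are hypothetical, chosen only for illustration: a core that could complete 100 billion operations per second, fed by a bus moving 25 GB/s of 8-byte operands.

```python
# Back-of-envelope sketch (hypothetical numbers) of why the bus, not the
# core, sets the ceiling for memory-bound work.

peak_ops = 100e9          # core could do 100 billion additions per second
bus_bandwidth = 25e9      # bus moves 25 GB per second
bytes_per_operand = 8     # one double-precision value

# A streaming sum reads one fresh operand per addition, so the bus caps
# throughput at bandwidth / operand size:
fed_ops = bus_bandwidth / bytes_per_operand   # ~3.1e9 additions/second

print(f"core peak: {peak_ops:.1e} ops/s")
print(f"bus-fed:   {fed_ops:.1e} ops/s")
print(f"core idle: {1 - fed_ops / peak_ops:.0%} of the time")  # 97%
```

With these made-up figures, the bus can feed only about 3 billion operands per second, so the core idles roughly 97% of the time on streaming work; a faster core would change nothing but the idle percentage.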

Overcoming the von Neumann bottleneck

The von Neumann bottleneck has often been considered a problem that can be overcome only through significant changes to computer or processor architectures. Even so, there have been numerous attempts to address the limitations of the existing structure:

  • Caching. A common method for addressing the bottleneck has been to add caches to the CPU. In a typical cache configuration, the L1, L2 and L3 cache levels sit between the processor core and the main memory to help speed up operations. The L1 cache is the smallest, fastest and most expensive. The L3 cache, which is shared among multiple processor cores, is the largest, slowest and least expensive. The L2 cache falls somewhere between the two. A short sketch of why caches reward predictable access patterns follows this list.
[Diagram illustrating how cache memory sits between the CPU and main memory and is checked first. Caching is a common method used to overcome the von Neumann bottleneck.]
  • Prefetching. Instructions and data that are expected to be used first are fetched into the cache in advance so they're immediately available when needed.
  • Speculative execution. The processor performs specific tasks before it is prompted to, so the results are ready when needed. Speculative execution uses branch prediction to estimate which instructions will likely be needed first; a toy predictor sketch also follows this list.
  • Multithreading. The processor interleaves multiple instruction streams, switching execution between threads. The switching usually happens so quickly that the threads appear to run simultaneously.
  • New types of RAM. Current developments in RAM technologies promise to address at least part of the bottleneck by moving data onto the bus more quickly. Emerging areas of development include resistive RAM, magnetic RAM, ferroelectric RAM and spin-transfer torque RAM.
  • Near-data processing. With NDP, memory and storage are augmented with processing capabilities that help improve performance, while reducing dependency on the system bus. One type of NDP is processing in memory, which integrates a processor and memory in a single microchip.
  • Hardware acceleration. Processing is shifted to other hardware devices to reduce the load on the CPU and dependency on the system bus. Common types of hardware acceleration include GPUs, application-specific integrated circuits and field-programmable gate arrays.
  • System-on-a-chip. A single chip contains processing, memory and other system resources, eliminating much of the data transfer on the system bus. Mobile devices and embedded systems use SoC technology extensively. However, the technology is now making its way into the computer industry, with Apple silicon leading the way.
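As promised in the caching item above, here is a minimal locality sketch in Python (it assumes NumPy is installed; absolute timings vary by machine). Both passes sum the same matrix, but the row-by-row pass reads memory in layout order, so every cache line pulled over the bus is fully used, while the column-by-column pass wastes most of each line and returns to main memory far more often.

```python
# A minimal cache-locality sketch (illustrative only): same data, same
# number of additions, very different memory traffic.

import time
import numpy as np

n = 4000
matrix = np.random.rand(n, n)  # NumPy stores this row-major (C order)

def sum_rows(m):
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()   # contiguous, cache-friendly stride
    return total

def sum_cols(m):
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()   # strided, cache-hostile access
    return total

for fn in (sum_rows, sum_cols):
    start = time.perf_counter()
    fn(matrix)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```

On typical hardware the column pass runs several times slower, even though both passes perform exactly the same number of additions.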
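And for the speculative-execution item, here is a toy 2-bit saturating-counter branch predictor, the textbook scheme (an illustrative model, not any particular CPU's design). The counter drifts toward "taken" or "not taken" and only flips its prediction after two consecutive surprises, which suits loop branches well.

```python
# A toy 2-bit saturating-counter branch predictor.

def predict_run(outcomes):
    state = 2          # 0-1 predict not taken, 2-3 predict taken
    correct = 0
    for taken in outcomes:
        prediction = state >= 2
        correct += (prediction == taken)
        # saturate at 0 and 3 so one odd outcome doesn't flip the prediction
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(outcomes)

# A loop branch that is taken 9 times, then falls through, repeated:
loop_branch = ([True] * 9 + [False]) * 100
print(f"accuracy: {predict_run(loop_branch):.0%}")  # prints 90%
```

On a branch taken nine times out of ten, the counter mispredicts only the loop exit, giving about 90% accuracy, which is what lets the processor keep fetching useful work instead of waiting on each branch to resolve.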

See also: instruction, input/output (I/O), read-only memory (ROM), neuromorphic chip, the singularity, neuromorphic computing.

This was last updated in September 2022
