What is pipelining?

Pipelining is the process of storing and prioritizing computer instructions that the processor executes. The pipeline is a "logical pipeline" that lets the processor perform an instruction in multiple steps. The processing happens in a continuous, orderly, somewhat overlapped manner.

In computing, pipelining is also known as pipeline processing. It is sometimes compared to a manufacturing assembly line in which different parts of a product are assembled simultaneously, even though some parts may have to be assembled before others. Even if there is some sequential dependency, many operations can proceed concurrently, which facilitates overall time savings.

Pipelining creates and organizes a pipeline of instructions the processor can execute in parallel.

[Image: Creating parallel operators to process events improves efficiency.]

The pipeline is divided into logical stages connected to each other to form a pipelike structure. Instructions enter from one end and exit from the other. Pipelining is an ongoing, continuous process in which new instructions, or tasks, are added to the pipeline and completed tasks are removed at a specified time after processing completes. The processor executes all the tasks in the pipeline in parallel, giving them the appropriate time based on their complexity and priority. Any tasks or instructions that require processor time or power due to their size or complexity can be added to the pipeline to speed up processing.

How pipelining works

Without a pipeline, the processor would get the first instruction from memory and perform the operation it calls for. It would then get the next instruction from memory and so on. While fetching the instruction, the arithmetic part of the processor is idle, which means it must wait until it gets the next instruction. This delays processing and introduces latency.

With pipelining, the next instructions can be fetched even while the processor is performing arithmetic operations. These instructions are held in a buffer close to the processor until the operation for each instruction is performed. This staging of instruction fetching happens continuously, increasing the number of instructions that can be performed in a given period.
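The time savings from this overlap can be put into a simple cycle-count model. This is an illustrative sketch, not a real CPU simulator: it assumes an ideal pipeline with k equal-length stages and no stalls, in which the first instruction takes k cycles to fill the pipeline and every later instruction completes one cycle after its predecessor.

```python
# Illustrative cycle-count model (idealized, no stalls):
# a non-pipelined processor needs n * k cycles for n instructions,
# while a k-stage pipeline needs only k + (n - 1) cycles.

def cycles_nonpipelined(n_instructions: int, n_stages: int) -> int:
    # Each instruction runs all k steps before the next one starts.
    return n_instructions * n_stages

def cycles_pipelined(n_instructions: int, n_stages: int) -> int:
    # The first instruction fills the pipeline (k cycles); after that,
    # one instruction completes per cycle.
    return n_stages + (n_instructions - 1)

if __name__ == "__main__":
    n, k = 100, 5
    print(cycles_nonpipelined(n, k))  # 500
    print(cycles_pipelined(n, k))     # 104
```

With 100 instructions and five stages, the pipelined count of 104 cycles versus 500 shows how close an ideal pipeline gets to completing one instruction per cycle.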

Within the pipeline, each task is subdivided into multiple successive subtasks. A pipeline phase is defined for each subtask to execute its operations. Like a manufacturing assembly line, each stage or segment receives its input from the previous stage and then transfers its output to the next stage. The process continues until the processor has executed all the instructions and all subtasks are completed.

In the pipeline, each segment consists of an input register that holds data and a combinational circuit that performs operations. The output of the circuit is then applied to the input register of the next segment of the pipeline. Here are the steps in the process:

  1. Fetch instructions from memory.
  2. Read the input register, and decode the instruction.
  3. Execute the instruction.
  4. Access an operand in data memory.
  5. Write the result of the operation into the input register of the next segment.
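The five steps above can be visualized as a timing diagram showing which stage each instruction occupies in each clock cycle. The stage names and the assumption of an ideal pipeline with no stalls are illustrative choices, not a description of any particular processor.

```python
# Hypothetical sketch: which of the five stages each instruction
# occupies in each clock cycle, assuming an ideal pipeline (no stalls).

STAGES = ["Fetch", "Decode", "Execute", "Memory", "Writeback"]

def pipeline_diagram(n_instructions: int) -> list[list[str]]:
    total_cycles = len(STAGES) + n_instructions - 1
    rows = []
    for i in range(n_instructions):
        row = []
        for cycle in range(total_cycles):
            stage = cycle - i  # instruction i enters the pipeline at cycle i
            row.append(STAGES[stage] if 0 <= stage < len(STAGES) else "-")
        rows.append(row)
    return rows

for i, row in enumerate(pipeline_diagram(3)):
    print(f"I{i}: " + " ".join(f"{s:9}" for s in row))
```

Reading the diagram column by column shows the assembly-line behavior: in any one cycle, up to five different instructions are each in a different stage.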

Types of pipelines

There are two types of pipelines in computer processing.

Instruction pipeline

The instruction pipeline represents the stages in which an instruction is moved through the various segments of the processor, starting from fetching and then buffering, decoding and executing. One segment reads instructions from the memory, while, simultaneously, previous instructions are executed in other segments. Since these processes happen in an overlapping manner, the throughput of the entire system increases. The pipeline's efficiency can be further increased by dividing the instruction cycle into equal-duration segments.

Arithmetic pipeline

The arithmetic pipeline represents the parts of an arithmetic operation that can be broken down and overlapped as they are performed. It can be used for arithmetic operations such as floating-point operations and the multiplication of fixed-point numbers. Registers store any intermediate results, which are then passed on to the next stage for further processing.

Advantages of pipelining

The biggest advantage of pipelining is that it reduces the processor's cycle time, because it can process more instructions simultaneously while reducing the delay between completed instructions. Although pipelining doesn't reduce the time taken to perform an individual instruction -- that still depends on its size, priority and complexity -- it does increase the processor's overall throughput.

Furthermore, because each stage performs only a fraction of an instruction's work, pipelined processors can operate at a clock frequency higher than the RAM clock frequency, with fetched instructions buffered close to the processor to keep the pipeline fed.

Possible issues in pipelines

Although processor pipelines are useful, they are prone to certain problems that can affect system performance and throughput. Two such issues are data dependencies and branching.

Data dependencies

A data dependency happens when an instruction in one stage depends on the results of a previous instruction but that result is not yet available. This can happen when the needed data has not yet been stored in a register by a preceding instruction because that instruction has not yet reached that step in the pipeline.

Since the required data has not been written to the register yet, the following instruction must wait until it is. This waiting causes the pipeline to stall. At the same time, several empty instructions, or bubbles, go into the pipeline, slowing it down even more.

The data dependency problem can affect any pipeline. However, it affects long pipelines more than shorter ones because, in the former, it takes longer for an instruction to reach the register-writing stage.
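The cost of such a stall can be sketched with a toy model. The assumptions here are illustrative: a five-stage pipeline with no forwarding, operands read in the Decode stage (stage 2), and results written in the Writeback stage (stage 5), so a dependent instruction must stall until its producer's result is written back.

```python
# Toy model (assumed 5-stage pipeline, no forwarding): a consumer cannot
# read its operand in Decode until the producer has finished Writeback,
# so the pipeline must insert bubble cycles.

PIPELINE_DEPTH = 5
WRITE_STAGE = 5   # result available after Writeback
READ_STAGE = 2    # operands read in Decode

def stall_cycles(distance: int) -> int:
    """Bubbles needed when a consumer follows its producer by
    `distance` instructions (distance 1 = immediately after)."""
    return max(0, (WRITE_STAGE - READ_STAGE) - (distance - 1))

def total_cycles(deps: list[int]) -> int:
    """deps[i] is the distance back to the instruction that instruction i
    depends on, or 0 for no dependency."""
    n = len(deps)
    bubbles = sum(stall_cycles(d) for d in deps if d > 0)
    return PIPELINE_DEPTH + (n - 1) + bubbles
```

In this model, an instruction that immediately follows its producer costs three bubbles, while one four or more instructions later costs none -- which is why longer gaps between dependent instructions (or compiler scheduling that creates them) reduce stalls.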


Branching

Branch instructions can be problematic in a pipeline if a branch is conditional on the result of an instruction that has not yet completed its path through the pipeline. Because the branch's outcome determines which instruction should be fetched next, the processor cannot decide which path to take until the branch is resolved -- the required values have not yet been written into the registers.
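The branch problem can be added to the same kind of toy model. The numbers are assumptions for illustration: a five-stage pipeline in which branches are resolved in the Execute stage (stage 3), so every taken branch flushes the two wrong-path instructions fetched behind it.

```python
# Illustrative model (assumed values): branch direction is known only
# after the Execute stage, so instructions fetched after a taken branch
# must be discarded, costing flush cycles.

BRANCH_RESOLVE_STAGE = 3  # Execute, in the assumed 5-stage pipeline
FETCH_STAGE = 1

def branch_penalty() -> int:
    # Cycles of wrong-path fetching that must be flushed per taken branch.
    return BRANCH_RESOLVE_STAGE - FETCH_STAGE

def cycles_with_branches(n_instructions: int, n_taken_branches: int,
                         n_stages: int = 5) -> int:
    return n_stages + (n_instructions - 1) + n_taken_branches * branch_penalty()
```

For 100 instructions of which 10 are taken branches, the model gives 124 cycles instead of the ideal 104 -- a reminder of why real processors invest heavily in branch prediction.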

Other possible issues during pipelining

In addition to data dependencies and branching, pipelines may also suffer from problems related to timing variations and data hazards. Delays can occur due to timing variations among the various pipeline stages. This is because different instructions have different processing times. Data-related problems arise when multiple instructions are in partial execution and they all reference the same data, leading to incorrect results. A third problem in pipelining relates to interrupts, which affect the execution of instructions by adding unwanted instruction into the instruction stream.

Superpipelining and superscalar pipelining

Superpipelining and superscalar pipelining are ways to increase processing speed and throughput.

Superpipelining means dividing the pipeline into a larger number of shorter stages, which increases its clock speed. Instructions advance at the rate at which each stage completes. In a pipeline with seven stages, each stage takes about one-seventh of the time required by an instruction in a nonpipelined processor or single-stage pipeline. In theory, it could be seven times faster than a single-stage pipeline, and it is definitely faster than a nonpipelined processor.
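The "seven times faster in theory" claim can be checked with the standard speedup ratio. This sketch assumes the same idealized model as before: k equal stages, no stalls, so n instructions take k + (n - 1) cycles against n * k single-stage cycles.

```python
# Theoretical speedup of a k-stage pipeline over a single-stage design:
# n*k / (k + n - 1), which approaches k as n grows large.

def speedup(n_instructions: int, n_stages: int) -> float:
    return (n_instructions * n_stages) / (n_stages + n_instructions - 1)

print(round(speedup(10, 7), 3))       # well below 7 for short runs
print(round(speedup(100_000, 7), 3))  # approaches 7
```

For a short run of 10 instructions the speedup is only about 4.4, because filling the pipeline dominates; only over long instruction streams does the ratio approach the stage count.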

Superscalar pipelining means multiple pipelines work in parallel. This can be done by replicating the internal components of the processor, which enables it to launch multiple instructions in some or all its pipeline stages.

Learn about parallel processing; explore how CPUs, GPUs and DPUs differ; and understand multicore processors.

This was last updated in December 2022
