What is hierarchy in computing?
Generally speaking, hierarchy refers to an organizational structure in which items are ranked in a specific manner, usually according to levels of importance. In computing, there are various types of hierarchical systems. For example, in most file systems, files are placed in specific places based on a hierarchical tree model.
Similarly, computer memory hierarchy ranks memory components in terms of access and response times. Typically, this hierarchy contains several levels of memory with different access speeds and performance rates.
Computer hierarchy explained
The word hierarchy comes from the Greek words hieros, meaning "sacred," and archos, meaning "ruler." It's likely that the word first entered the English lexicon in the 14th century, referring to a system in which things or people are arranged in some order, usually according to their importance.
In computing devices, file systems are usually hierarchical. In such systems, a file is placed in a directory (folder in Windows) or subdirectory at a desired place within the tree structure. Memory is also hierarchical based on the speed and use. Memory hierarchy is essentially employed to organize memory in such a way that data access time can be minimized, thus improving system performance.
In hierarchical memory systems, processor (CPU) registers sit at the top of a pyramid-like structure (level 0), while optical disks and tape backup devices sit at the bottom (level 4). This arrangement exploits a pattern of program behavior known as "locality of reference": programs tend to access the same memory locations, or locations near one another, repeatedly over short periods of time. By keeping recently and frequently used data in the faster levels of the hierarchy, the system speeds up access and improves performance.
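The effect of locality can be seen in how traversal order changes performance. The sketch below sums a grid in two orders; in row-major order, consecutive accesses touch neighboring addresses, while column-major order jumps around memory. The function names are illustrative, and in Python the gap is muted compared with lower-level languages (where the cache dominates), but the principle is the same.

```python
import time

# Build a 1,000 x 1,000 grid stored row by row in nested lists.
N = 1000
grid = [[1] * N for _ in range(N)]

def sum_row_major(g):
    # Visits elements in the order they are laid out:
    # consecutive accesses are neighbors (good locality).
    total = 0
    for row in g:
        for value in row:
            total += value
    return total

def sum_column_major(g):
    # Jumps between rows on every access, so neighboring
    # accesses touch locations far apart (poor locality).
    total = 0
    for col in range(N):
        for row in range(N):
            total += g[row][col]
    return total

start = time.perf_counter()
row_total = sum_row_major(grid)
row_time = time.perf_counter() - start

start = time.perf_counter()
col_total = sum_column_major(grid)
col_time = time.perf_counter() - start

print(row_total == col_total)   # same answer either way
print(row_time, col_time)       # row-major is usually faster
```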
Memory hierarchical pyramid
The five levels in a memory hierarchy are categorized based on speed and usage and form a pyramid. The levels in a memory hierarchical pyramid are the following:
- Level 0: CPU registers
- Level 1: Cache memory
- Level 2: Primary memory or main memory
- Level 3: Secondary memory or magnetic disks or solid-state storage
- Level 4: Tertiary memory or optical disks or magnetic tapes
The primary memory is known as the internal memory. It is directly accessible by the computer's processor. The secondary memory, also known as external memory, can be accessed by the processor through the I/O module. It consists of peripheral storage devices.
As a general rule, capacity varies inversely with speed across the memory levels, while cost per bit varies directly with it. Thus, CPU registers are the fastest and most expensive per bit but hold the least, while tertiary memory devices are the slowest and cheapest but hold the most.
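The speed/capacity trade-off can be summarized as data. The figures below are rough orders of magnitude, not exact specifications; real values vary widely by hardware generation.

```python
# Rough order-of-magnitude figures for each level of the hierarchy.
# These are illustrative only; real values depend on the hardware.
memory_hierarchy = [
    # (level, name, typical access time, typical capacity)
    (0, "CPU registers",           "~1 ns",           "bytes to a few KB"),
    (1, "Cache (SRAM)",            "~1-10 ns",        "KB to tens of MB"),
    (2, "Main memory (DRAM)",      "~100 ns",         "GB"),
    (3, "Secondary (disk/SSD)",    "~0.1-10 ms",      "hundreds of GB to TB"),
    (4, "Tertiary (tape/optical)", "seconds or more", "TB and beyond"),
]

for level, name, speed, size in memory_hierarchy:
    print(f"Level {level}: {name:26s} access {speed:16s} capacity {size}")
```

Reading down the list, each step trades access speed for capacity and lower cost per bit.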
Level 0: CPU registers. A CPU register is a small, very fast storage location inside the CPU that holds the small amounts of data needed to perform an operation: instruction operands, intermediate results and the addresses of memory locations being read or written.
Registers are present inside the CPU and therefore have the quickest access time. Since they are the fastest memory type, they are also the most expensive per bit. They are the smallest in capacity as well: an individual register typically holds only a few bytes (for example, 64 bits), and a processor's entire register set amounts to at most a few kilobytes.
A CPU register is implemented using digital logic circuits called flip-flops, effectively a form of static RAM (SRAM) within the processor. Most processors include a program counter register, a status word register that holds condition flags used for decision-making, and an accumulator that stores operands and the results of arithmetic operations.
Level 1: cache memory. Cache memory is required to store segments of programs or chunks of data that are frequently accessed by the processor. When the CPU needs to access program code or data, it first checks the cache memory. If it finds the data, it reads it quickly. If it doesn't, it looks into the main memory to find the required data.
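The check-cache-first lookup described above can be sketched as a toy model. The class and names here are hypothetical, and the eviction policy (drop the oldest entry) is a simplification of what real caches do.

```python
class SimpleCache:
    """Toy model of cache lookup: check the fast cache first, fall back
    to slower main memory on a miss, and keep a copy of what was fetched
    so the next access to the same address is a hit."""

    def __init__(self, main_memory, capacity=4):
        self.main_memory = main_memory   # address -> data (the slow level)
        self.capacity = capacity
        self.cache = {}                  # address -> data (the fast level)
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: fast path
            self.hits += 1
            return self.cache[address]
        self.misses += 1                 # cache miss: go to main memory
        data = self.main_memory[address]
        if len(self.cache) >= self.capacity:
            # Evict the oldest entry to make room (simplified policy).
            self.cache.pop(next(iter(self.cache)))
        self.cache[address] = data
        return data

ram = {addr: addr * 10 for addr in range(100)}
cpu_cache = SimpleCache(ram)

for addr in [1, 2, 1, 1, 3]:
    cpu_cache.read(addr)

print(cpu_cache.hits, cpu_cache.misses)  # 2 hits (repeated 1s), 3 misses
```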
Cache memory is larger than the register file but much smaller than main memory, with capacities typically measured in megabytes (MB). It is implemented using SRAM. Usually, the cache sits inside the processor, although it may also be implemented as a separate integrated circuit (IC).
Level 2: primary/main memory. The primary memory communicates with the CPU and with the peripheral or auxiliary memory devices through the I/O processor. It is the primary storage unit of a computer system; it's often referred to as random access memory (RAM) and is implemented using dynamic RAM (DRAM) components. However, main memory may also include read-only memory (ROM).
Any program or data that is not currently required in the main memory is transferred into the auxiliary memory to create space for programs and data that are currently active. Main memory is less expensive than CPU registers and cache memory, and is also larger in size (typically measured in gigabytes).
Level 3: secondary storage. Secondary storage devices such as magnetic disks occupy level 3 of the memory hierarchy. Usually, both faces of a magnetic disk are utilized to store programs and data. Further, multiple disks may be stacked on a spindle to provide a larger memory ecosystem. In many systems, magnetic disks are being replaced by non-mechanical solid-state storage devices.
Secondary storage devices retain programs and data persistently and are much cheaper per bit than main memory and cache. They are also large: individual drives commonly offer capacities of up to about 20 terabytes (TB).
Level 4: tertiary storage. Tertiary storage devices are usually magnetic tapes or optical disks. These devices are typically used to store duplicate or archive copies of data. Also known as auxiliary storage, tertiary memory devices are usually used to store programs and data for the long term or when not required for immediate use.
Tertiary devices are suitable for data archiving and backup. They are the cheapest and slowest memory type; they typically have capacities of 1 TB to 20 TB.
Necessity and benefits of memory hierarchy
Memory hierarchy is about arranging different kinds of storage devices in a computer based on their size, cost and access speed, and the roles they play in application processing. The main purpose is efficient operation: organizing memory so that frequently used data can be accessed quickly while large volumes of data can still be stored cheaply.
Since CPU registers are the fastest to read and write, they are placed at the top of the hierarchy. On the other hand, mass storage devices like optical drives and magnetic tapes are the slowest and largest and therefore occupy the last level in the pyramid.
Creating a memory hierarchy simplifies memory distribution. It also allows data to be spread among different memory types to maintain its security, reduce access times and ensure its availability. In addition, a hierarchical structure permits demand paging and pre-paging and decreases the per-bit cost of the computing system.
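Demand paging, mentioned above, loads a page from secondary storage only when it is first referenced, evicting an existing page when physical memory is full. A minimal sketch, using a least-recently-used (LRU) eviction policy and hypothetical function names:

```python
from collections import OrderedDict

def simulate_demand_paging(references, num_frames):
    """Load a page only when it is referenced (demand paging), evicting
    the least recently used page when all physical frames are full.
    Returns the number of page faults, i.e. fetches from disk."""
    frames = OrderedDict()   # resident pages, kept in LRU order
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as recently used
        else:
            faults += 1                     # page fault: fetch from disk
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

# A short reference string with 3 physical frames:
# pages 1, 2, 3 fault on first use, 1 hits, then 4 and 2 fault again.
print(simulate_demand_paging([1, 2, 3, 1, 4, 2], num_frames=3))  # 5
```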
Hardware designers try to improve processor memory performance by increasing the size of the cache memory, which allows processors to access required programs or data faster. Designers also try to reduce the processor's dependency on main memory, which is slower and limits the performance of the processor and the overall computer system.
Characteristics of memory hierarchy
The key characteristics of a memory hierarchy include the following:
Capacity. Capacity is the volume of information that a memory device can store. As we move down the memory pyramid, the capacity or memory size increases.
Access time. Access time is the time interval between when a read/write request is made and when the data actually becomes available. It increases as we move from the top to the bottom of the memory hierarchy. Registers, which are present inside the CPU, have the shortest access time, meaning they are the fastest. At the bottom of the pyramid, magnetic tapes and similar storage devices have the greatest access time.
Performance. Without a memory hierarchy, the speed gap between CPU registers and main memory would increase access time and directly hurt the system's performance. Performance improves when requested data can be found without descending through many levels of the hierarchy.
Cost per bit. The cost per bit is calculated by dividing the total cost of a memory device by the total number of bits it can store. As we move from the top of the memory hierarchy to the bottom, the cost per bit decreases, because the fast internal memory technologies cost more per bit than external storage.
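The calculation is simple division. The prices below are illustrative, not current market figures; they are chosen only to show the trend down the pyramid.

```python
def cost_per_bit(total_cost_dollars, capacity_bytes):
    # Cost per bit = total cost of the device / number of bits it stores.
    return total_cost_dollars / (capacity_bytes * 8)

# Illustrative (not real market) prices for three levels of the hierarchy.
dram = cost_per_bit(40.0, 16 * 2**30)   # 16 GB of DRAM for $40
ssd  = cost_per_bit(80.0, 1 * 2**40)    # 1 TB SSD for $80
tape = cost_per_bit(60.0, 12 * 2**40)   # 12 TB tape cartridge for $60

print(dram > ssd > tape)  # cost per bit falls down the hierarchy: True
```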