https://www.techtarget.com/whatis/definition/memory

What is computer memory and what are the different types?

By Stephen J. Bigelow

Memory is the electronic holding place for the instructions and data a computer needs to access quickly. It's where information is stored for immediate use. Memory is one of the basic components of a computer; without it, a computer can't function properly. Memory is used by a computer's operating system (OS), hardware and software.

There are technically two types of computer memory: primary and secondary. The term memory is used as a synonym for primary memory or as an abbreviation for a specific type of primary memory called random access memory (RAM). This type of memory is located on microchips that are physically close to a computer's microprocessor.

If a computer's central processing unit (CPU) had to rely solely on secondary storage, computer systems would be much slower. In general, the more primary memory a computing device has, the less frequently it must fetch instructions and data from slower -- secondary -- forms of storage.

What is random access memory?

Solid-state memory is an electronic device organized as a two-dimensional matrix of single-bit storage cells. Each row of storage cells is identified by an address, and the number of storage cells at each address represents the data depth. For example, an extremely simple memory device might offer 1,024 addresses with 16 bits at each address. This would give the memory device a total storage capacity of 1,024 x 16, or 16,384 bits.
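
To make the address-and-depth model concrete, here's a minimal sketch in C of the hypothetical device described above. The constant names and values are illustrative only -- they model the 1,024-address, 16-bit example, not any real memory part.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_ADDRESSES 1024   /* rows in the matrix: the address space */
    #define DATA_DEPTH    16     /* bits stored at each address */

    /* One 16-bit word per address, mirroring the device described above. */
    static uint16_t memory[NUM_ADDRESSES];

    int main(void) {
        /* Write then read an arbitrary address, as a CPU would. */
        memory[42] = 0xBEEF;
        printf("word at address 42: 0x%04X\n", memory[42]);

        /* Total capacity = addresses x data depth = 1,024 x 16 = 16,384 bits. */
        printf("total capacity: %d bits\n", NUM_ADDRESSES * DATA_DEPTH);
        return 0;
    }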

RAM embodies the overarching concept of random access: a CPU can read or write data at any memory address on demand, and it typically references memory content in widely varying orders, depending on the needs of the application being executed.

This random access behavior differs from that of classical storage devices, such as magnetic tape, where data must be physically located on the media before it can be read or written. It's this rapid, random access that makes solid-state memory useful for all modern computing.

Many types of RAM report performance specifications against two traditional metrics:

  1. Random access read/write performance. This is where addresses are referenced in random order.
  2. Sequential access read/write performance. This is where addresses are referenced in sequential order.
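
As a rough illustration of the difference between these two patterns, the sketch below (plain C, a toy measurement rather than a rigorous benchmark) walks the same buffer first in sequential order and then in a scattered order produced by a multiplicative hash. In DRAM both loops finish quickly; on a medium such as tape, the scattered pattern would be dramatically slower because of physical repositioning. The buffer size and hashing constant are arbitrary choices for this example.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define WORDS (1u << 24)   /* 16M 32-bit words, about 64 MB */

    int main(void) {
        uint32_t *buf = calloc(WORDS, sizeof *buf);
        if (!buf) return 1;

        volatile uint64_t sum = 0;   /* volatile keeps the loops from being optimized away */

        clock_t t0 = clock();
        for (uint32_t i = 0; i < WORDS; i++)                 /* sequential order */
            sum += buf[i];
        clock_t t1 = clock();
        for (uint32_t i = 0; i < WORDS; i++)                 /* scattered order */
            sum += buf[(i * 2654435761u) & (WORDS - 1)];
        clock_t t2 = clock();

        printf("sequential: %.3f s, scattered: %.3f s\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(buf);
        return 0;
    }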

Primary vs. secondary memory

Memory is broadly classified as primary and secondary memory, though the practical distinction between the two has largely fallen out of everyday use.

Primary memory refers to the technologies and devices capable of supporting short-term, rapidly changing data. This mainly encompasses cache memory and RAM located close to -- and accessed frequently by -- the main CPU.

Secondary memory refers to the technologies and devices primarily used to support long-term data storage where data is accessed and changed far less frequently. This typically includes memory devices, such as solid-state flash memory, as well as the complete range of magnetic hard disk drives (HDDs) and solid-state drives (SSDs).

In most cases, data is moved from secondary memory into primary memory, where the CPU can work on it. It's then written back from primary memory to secondary memory when the file is saved or the application is terminated.
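
The sketch below traces that cycle under simple assumptions: a hypothetical file named notes.txt is read from secondary storage into a buffer allocated in primary memory, the CPU works on the in-memory copy, and saving writes the result back to storage.

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        FILE *f = fopen("notes.txt", "rb");      /* hypothetical file on secondary storage */
        if (!f) { perror("open"); return 1; }

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        char *buf = malloc(size);                /* working copy lives in RAM */
        if (!buf || fread(buf, 1, size, f) != (size_t)size) return 1;
        fclose(f);

        for (long i = 0; i < size; i++)          /* the CPU works on data in RAM */
            buf[i] = (char)toupper((unsigned char)buf[i]);

        f = fopen("notes.txt", "wb");            /* "saving" copies it back to storage */
        if (!f) { perror("save"); return 1; }
        fwrite(buf, 1, size, f);
        fclose(f);
        free(buf);
        return 0;
    }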

It's possible to use secondary memory as if it were primary memory. The most common example is virtual memory, which the Windows OS uses to accommodate more applications and data than physical RAM can hold. However, virtual memory has higher latency and lower performance than solid-state primary memory because drives take longer to read and write data, so applications relying on virtual memory run more slowly.
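
A related way to see a drive accessed through a memory-like interface is POSIX memory mapping. This sketch -- assuming a Linux or other POSIX system and a hypothetical file named example.dat -- maps the file into the process's address space so its contents can be read through ordinary pointers. Pages are pulled in from the drive only when they're touched, which is also why such accesses are slower than data already resident in RAM. It illustrates the general idea of backing memory with storage; it isn't the Windows paging mechanism itself.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("example.dat", O_RDONLY);  /* hypothetical data file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the file read-only into this process's address space. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* The page holding byte 0 is faulted in from the drive on first access. */
        printf("first byte: %d\n", data[0]);

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }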

Volatile vs. non-volatile memory

Memory can also be classified as volatile or non-volatile. Volatile memory, such as RAM, holds data only while the computer is powered on; its contents are lost when the power is turned off. Non-volatile memory, such as flash, retains its contents even after the computer is turned off and back on.

Memory vs. storage

Memory and storage are easily conflated as the same concept; however, there are important differences. Put succinctly, memory is primary memory, while storage is secondary memory. Memory refers to the location of short-term data, while storage refers to the location of data stored on a long-term basis.

Memory, such as RAM, is often referred to as a computer's primary storage. It's also where information is processed, and it lets users access data that's held for only a short time. The data is held briefly because primary memory is volatile and its contents aren't retained when the computer is turned off.

The term storage refers to secondary memory where data in a computer is kept. An example of storage is an HDD. Storage is non-volatile, meaning the information is still there after the computer is turned off and then back on. A running program might be in a computer's primary memory when in use -- for fast retrieval of information -- but when that program is closed, it resides in secondary memory or storage.

The amount of space available in memory and storage differs as well. In general, a computer will have more storage space than memory. For example, a laptop might have 16 gigabytes (GB) of RAM but 1 terabyte (TB) or more of storage. The difference exists because a computer doesn't need to access all the information stored on it at once, so allocating a few GB of memory space to run programs suffices for most modern applications.
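
For a concrete comparison on a real machine, the short program below reports installed physical RAM alongside the size of the filesystem mounted at /. It assumes a Linux system with glibc; the sysconf values used here aren't available on every platform. On a typical laptop, the two figures differ by roughly the margin described above.

    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    int main(void) {
        /* Physical RAM = number of pages x page size. */
        long pages = sysconf(_SC_PHYS_PAGES);
        long page_size = sysconf(_SC_PAGE_SIZE);
        double ram_gb = (double)pages * page_size / 1e9;

        /* Storage capacity of the root filesystem. */
        struct statvfs fs;
        if (statvfs("/", &fs) != 0) { perror("statvfs"); return 1; }
        double disk_gb = (double)fs.f_blocks * fs.f_frsize / 1e9;

        printf("installed RAM:   %.1f GB\n", ram_gb);
        printf("root filesystem: %.1f GB\n", disk_gb);
        return 0;
    }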

The terms memory and storage can be confusing because their use today is inconsistent. For example, RAM is referred to as primary storage and types of secondary storage can include flash memory. To avoid confusion, it's easier to talk about memory in terms of whether it's volatile or non-volatile and storage in terms of whether it is primary or secondary.

How does computer memory work?

When an OS launches a program, it's loaded from secondary memory into primary memory -- for example, moved from an SSD into RAM. Because primary memory is accessed faster, the opened program can exchange instructions and data with the computer's processor at much higher speeds, and the processor can reach that data directly rather than waiting on slower storage.

Primary memory is volatile, which means data held in it is stored only temporarily. Once a computing device is turned off, data stored in volatile memory is automatically deleted. When a file is saved, it's sent to secondary memory for storage.

There are several types of computer memory, and computers operate differently depending on the type of primary memory used. However, computer memory is typically semiconductor-based: integrated circuits built from silicon-based metal-oxide-semiconductor (MOS) transistors.

Types of computer memory

Memory can be divided into primary and secondary memory. There are many types of primary memory, including cache memory, RAM -- in its dynamic (DRAM) and static (SRAM) forms -- and read-only memory (ROM) and its programmable variants, such as PROM, EPROM and EEPROM.

Advanced memory technologies

Beyond the common memory types, electronic device manufacturers are constantly developing new and innovative memory technologies to meet enterprise and consumer needs. Advanced and emerging memory technologies include the following:

Memory specifications

Memory devices are described in technical specifications that define their operational characteristics. Common memory specifications include the following:

Memory optimization and management

Memory optimization involves a variety of techniques to improve the use and lifespan of computer memory, such as the following:

Timeline of the history and early evolution of computer memory

In the early 1940s, memory was only available up to a few bytes of space. One of the more significant advances during this time was the invention of acoustic delay line memory, which stored bits as sound waves traveling through mercury, with quartz crystals acting as transducers to read and write the bits. This process could store a few hundred thousand bits.

In the late 1940s, non-volatile memory began to be researched, and magnetic-core memory -- which could retain its contents after a loss of power -- was created. By the 1950s, this technology had been improved and commercialized, and it led to the invention of PROM in 1956. Magnetic-core memory became so widespread that it remained the main form of memory through the 1960s.

The metal-oxide-semiconductor field-effect transistor (MOSFET), the building block of MOS semiconductor memory, was invented in 1959. It enabled the use of MOS transistors as storage elements in memory cells. MOS memory was cheaper and needed less power than magnetic-core memory. Bipolar memory, which used bipolar transistors, came into use in the early 1960s.

In 1961, Bob Norman proposed the concept of solid-state memory on an integrated circuit (IC) chip. IBM brought memory into the mainstream in 1965. However, users found solid-state memory to be too expensive compared to other memory types.

Other advancements during the early to mid-1960s were the invention of bipolar SRAM, Toshiba's introduction of DRAM in 1965 and the commercial use of SRAM in 1965. The single-transistor DRAM cell was developed in 1966, followed by a MOS semiconductor device used to create ROM in 1967. From 1968 to the early 1970s, N-type MOS, or NMOS, memory also became popular.

By the early 1970s, MOS-based memory had become much more widely used. In 1970, Intel released the first commercial DRAM IC chip. Erasable PROM (EPROM) was developed one year later, and EEPROM was invented in 1972.


06 Mar 2025
