supercomputer

What is a supercomputer?

A supercomputer is a computer that performs at or near the highest operational rate for computers.

Traditionally, supercomputers have been used for scientific and engineering applications that must handle massive databases, do a great amount of computation or both. Advances like multicore processors and general-purpose graphics processing units have enabled powerful machines that could be called desktop supercomputers or GPU supercomputers.

By definition, a supercomputer is exceptional in terms of performance. At any time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term supercomputer is sometimes applied to far slower -- but still impressively fast -- computers.

How do supercomputers work?

Supercomputer architectures are made up of multiple central processing units (CPUs). These CPUs are grouped into compute nodes, each containing processors and a block of memory. Supercomputers can contain thousands of nodes that communicate with one another, working in parallel to solve problems.

The largest, most powerful supercomputers are multiple parallel computers that perform parallel processing. There are two parallel processing approaches: symmetric multiprocessing and massively parallel processing. In some cases, supercomputers are distributed, meaning they draw processing power from many individual computers in different locations instead of housing all the CPUs in one location.
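As an illustration of the divide-and-combine pattern behind parallel processing, the sketch below (a single-machine analogy using Python's multiprocessing module, not code from any real supercomputer) splits one computation across several worker processes, each standing in for a compute node:

```python
from multiprocessing import Pool

def partial_sum_of_squares(bounds):
    # One worker's share of the job -- a stand-in for a compute node.
    start, end = bounds
    return sum(i * i for i in range(start, end))

def parallel_sum_of_squares(n, workers=4):
    # Split the range [0, n) into independent chunks, one per worker.
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        # Workers compute in parallel; partial results are combined at the end.
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(10_000) == sum(i * i for i in range(10_000)))
```

Real supercomputers apply the same idea at vastly larger scale, with dedicated interconnects carrying the communication between nodes.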

Supercomputer processing speed is measured in floating point operations per second (FLOPS). The fastest systems are rated in petaflops, or PFLOPS -- quadrillions (10^15) of floating point operations per second.
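To put these units in perspective, the small illustrative calculation below (not from the original article) shows how long machines at different ratings would take to complete a fixed number of operations:

```python
# FLOPS unit ladder: each named unit is 1,000x the one before it.
FLOPS_UNITS = {
    "gigaFLOPS": 10**9,
    "teraFLOPS": 10**12,
    "petaFLOPS": 10**15,   # 1 quadrillion operations per second
    "exaFLOPS":  10**18,   # the exascale threshold
}

def seconds_needed(operations, rating_flops):
    # Time a machine sustaining `rating_flops` needs for `operations` ops.
    return operations / rating_flops

# A job of 10^18 floating point operations:
print(seconds_needed(10**18, FLOPS_UNITS["petaFLOPS"]))  # 1000.0 seconds
print(seconds_needed(10**18, FLOPS_UNITS["exaFLOPS"]))   # 1.0 second
```

The same job that keeps a 1-PFLOPS machine busy for nearly 17 minutes finishes in one second at exascale.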

Differences between general-purpose computers and supercomputers

Supercomputers are general-purpose computers that function at the highest operational rate or peak performance for computers. Processing power is the main difference between supercomputers and general-purpose computer systems: a leading supercomputer can perform more than 100 PFLOPS, while a typical general-purpose computer can only perform hundreds of gigaflops to tens of teraflops.

Supercomputers consume large amounts of power. As a result, they generate so much heat that they must be housed in facilities with dedicated cooling systems.

Both supercomputers and general-purpose computers differ from quantum computers, which operate based on the principles of quantum physics.

What are supercomputers used for?

Supercomputers perform resource-intensive calculations that general-purpose computers can't handle. They often run engineering and computational sciences applications, such as the following:

  • weather forecasting to predict the impact of extreme storms and floods;
  • oil and gas exploration to collect huge quantities of geophysical seismic data to aid in finding and developing oil reserves;
  • molecular modeling for calculating and analyzing the structures and properties of chemical compounds and crystals;
  • physical simulations like modeling supernovas and the birth of the universe;
  • aerodynamics such as designing a car with the lowest air drag coefficient;
  • nuclear fusion research to build a nuclear fusion reactor that derives energy from plasma reactions;
  • medical research to develop new cancer drugs, understand the genetic factors that contribute to opioid addiction and find treatments for COVID-19;
  • next-gen materials identification to find new materials for manufacturing; and
  • cryptanalysis to analyze ciphertext, ciphers and cryptosystems to understand how they work and identify ways of defeating them.

Like any computer, a supercomputer can be used to simulate reality, but on a far larger scale. Some supercomputer workloads can also be handled with cloud computing, which similarly combines many processors to achieve performance that is impossible on a single PC.

list of ways supercomputers are used
Scientists and engineers use supercomputers to simulate reality and make projections.

Notable supercomputers throughout history

Seymour Cray designed the first commercially successful supercomputer, the Control Data Corporation (CDC) 6600, released in 1964. It had a single CPU, cost $8 million -- the equivalent of roughly $60 million today -- and could handle 3 megaFLOPS, or 3 million floating point operations per second.

Cray went on to found a supercomputer company named Cray Research in 1972. The business changed hands several times over the years and has been part of Hewlett Packard Enterprise since 2019. In September 2008, Cray Inc. and Microsoft launched CX1, a $25,000 personal supercomputer aimed at the aerospace, automotive, academic, financial services and life sciences markets.

IBM has been a keen competitor. IBM Roadrunner was the top-ranked supercomputer when it was launched in 2008. It was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at that time. IBM Watson is famous for having adopted cognitive computing to beat champion Ken Jennings on the popular quiz show Jeopardy!

Top supercomputers of recent years

Sunway's Oceanlite supercomputer is reported to have been completed in 2021. It is thought to be an exascale supercomputer, meaning one that can calculate at least 10^18 FLOPS (1 exaFLOPS).

In the United States, some supercomputer centers are interconnected on an internet backbone known as the very high-speed Backbone Network Service, or vBNS, which is part of the National Science Foundation Network (NSFNET). NSFNET is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2, a university-led project, is part of this initiative.

At the lower end of supercomputing, data center administrators can use clustering for a build-it-yourself approach. The Beowulf Project offers guidance on clustering off-the-shelf PCs running Linux and interconnecting them with Fast Ethernet. Applications must be written specifically to manage the parallel processing.
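Cluster applications of this kind are typically built around explicit message passing between nodes (for example, with MPI). The sketch below is a single-machine analogy using Python's multiprocessing module rather than a real cluster framework: each process stands in for one node and sends its partial result back over a pipe:

```python
from multiprocessing import Pipe, Process

def node_task(conn, chunk):
    # Each process stands in for one cluster node: compute a partial
    # result locally, then send it back as a message.
    conn.send(sum(chunk))
    conn.close()

def run_cluster(values, nodes=3):
    # Partition the data across "nodes", launch one process per node,
    # then gather the partial results by message passing.
    chunks = [values[i::nodes] for i in range(nodes)]
    procs, parents = [], []
    for chunk in chunks:
        parent_conn, child_conn = Pipe()
        proc = Process(target=node_task, args=(child_conn, chunk))
        proc.start()
        parents.append(parent_conn)
        procs.append(proc)
    total = sum(conn.recv() for conn in parents)
    for proc in procs:
        proc.join()
    return total

if __name__ == "__main__":
    print(run_cluster(list(range(100))))  # 0 + 1 + ... + 99 = 4950
```

On a Beowulf cluster the messages would travel over Ethernet between physical machines, but the program structure -- partition, compute locally, exchange results -- is the same.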

photo of Berzelius supercomputer
Berzelius is a Swedish supercomputer designed for AI research.

Countries around the world use supercomputers for research. One example is Sweden's Berzelius, which began operation in the summer of 2021 and is used primarily for AI research in Sweden.

Some top supercomputers of the last two decades

Year | Supercomputer      | Peak speed (Rmax)          | Location
2021 | Sunway Oceanlite   | 1.05 exaFLOPS (unofficial) | Qingdao, China
2021 | Fujitsu Fugaku     | 442 PFLOPS                 | Kobe, Japan
2018 | IBM Summit         | 148.6 PFLOPS               | Oak Ridge, Tenn.
2018 | IBM Sierra         | 94.6 PFLOPS                | Livermore, Calif.
2016 | Sunway TaihuLight  | 93.01 PFLOPS               | Wuxi, China
2013 | NUDT Tianhe-2      | 33.86 PFLOPS               | Guangzhou, China
2012 | Cray Titan         | 17.59 PFLOPS               | Oak Ridge, Tenn.
2012 | IBM Sequoia        | 17.17 PFLOPS               | Livermore, Calif.
2011 | Fujitsu K computer | 10.51 PFLOPS               | Kobe, Japan
2010 | NUDT Tianhe-1A     | 2.566 PFLOPS               | Tianjin, China
2009 | Cray Jaguar        | 1.759 PFLOPS               | Oak Ridge, Tenn.
2008 | IBM Roadrunner     | 1.105 PFLOPS               | Los Alamos, N.M.

Supercomputers and artificial intelligence

Supercomputers often run artificial intelligence (AI) programs because training and running AI models typically demands supercomputing-caliber performance and processing power. Supercomputers can handle the large volumes of data that AI and machine learning applications require.

Some supercomputers are engineered specifically with AI in mind. For example, Microsoft custom built a supercomputer to train large AI models that work with its Azure cloud platform. The goal is to provide developers, data scientists and business users with supercomputing resources through Azure's AI services. One such tool is Microsoft's Turing Natural Language Generation, which is a natural language processing model.

Another example of a supercomputer engineered specifically for AI workloads is Perlmutter, built with 6,144 Nvidia GPUs at the National Energy Research Scientific Computing Center (NERSC). It debuted at No. 5 on the TOP500 list of the world's fastest supercomputers. Among its first tasks is assembling the largest-ever 3D map of the visible universe, processing data from the Dark Energy Spectroscopic Instrument, which captures dozens of exposures per night containing thousands of galaxies.

Photo of the Perlmutter supercomputer
The Perlmutter supercomputer at NERSC was launched in 2021 and is being used to solve problems in astrophysics and climate science.

The future of supercomputers

The supercomputer and high-performance computing (HPC) market is growing as more vendors like Amazon Web Services, Microsoft and Nvidia develop their own supercomputers. HPC is becoming more important as AI capabilities gain traction in all industries from predictive medicine to manufacturing. Hyperion Research predicted in 2020 that the supercomputer market will be worth $46 billion by 2024.

The current focus in the supercomputer market is the race toward exascale processing capabilities. Exascale computing could bring about new possibilities that transcend those of even the most modern supercomputers. Exascale supercomputers are expected to be able to generate an accurate model of the human brain, including neurons and synapses. This would have a huge impact on the field of neuromorphic computing.

As computing power continues to grow exponentially, supercomputers with hundreds of exaflops could become a reality.


This was last updated in March 2022
