FLOPS (floating-point operations per second)
In computers, FLOPS are floating-point operations per second. Floating-point is, according to IBM, "a method of encoding real numbers within the limits of finite precision available on computers." Using floating-point encoding, very large and very small numbers can be handled relatively easily. A floating-point number is expressed as a basic number or mantissa, an exponent, and a number base or radix (which is often assumed). The number base is usually ten but may also be two. Floating-point operations require computers with floating-point registers. The computation of floating-point numbers is often required in scientific and real-time processing applications, and FLOPS is a common measure for any computer that runs these applications.
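For example, the number 123,456.789 can be written with a mantissa of 1.23456789, an exponent of 5, and a radix of ten (1.23456789 x 10^5). As a minimal illustrative sketch (not part of the original definition), the Python standard-library function math.frexp shows the same decomposition with a radix of two:

    import math

    x = 123456.789
    # math.frexp splits x into a base-2 mantissa m and exponent e
    # such that x == m * 2**e and 0.5 <= abs(m) < 1.
    mantissa, exponent = math.frexp(x)
    print(mantissa, exponent)                # roughly 0.9419 and 17
    print(mantissa * 2 ** exponent == x)     # True: the split is exact

Because only a mantissa and an exponent are stored, a fixed number of bits can represent values across an enormous range of magnitudes, which is what makes the encoding convenient.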
In larger computers and parallel processing, computer operations can be measured in megaflops (millions of FLOPS), gigaflops (billions of FLOPS), and teraflops (trillions of FLOPS). Some computer scientists have at least begun to think about petaflops (quadrillions of FLOPS).
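As a rough sketch of how the unit is obtained (an illustration only, not a benchmark from the original text), the loop below times repeated multiply-adds and reports the rate in megaflops and gigaflops; an interpreted Python loop vastly understates real hardware, so the numbers only convey the scale of the units:

    import time

    n = 10_000_000
    x = 1.000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0            # one multiply and one add per pass
    elapsed = time.perf_counter() - start

    flops = 2 * n / elapsed            # two floating-point operations per pass
    print(f"approx. {flops / 1e6:.1f} megaflops ({flops / 1e9:.3f} gigaflops)")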