Multi-touch, in a computing context, is an interface technology that enables input through touch and gestures at multiple points on the surface of a device simultaneously. Although most commonly used with touch screens on handheld devices, such as smartphones and tablets, multi-touch has been adapted for other surfaces as well, including touch pads and mice, whiteboards, tables and walls.

Gestures for multi-touch interfaces are often selected to be similar to real-life movements, so that the actions are intuitive and easily learned. (See: natural user interface)

Examples of multi-touch include:

  • Typing on a software keyboard as you would on a hardware one, using keyboard shortcuts, capitalization and other actions that require pressing multiple keys simultaneously.
  • Bringing fingers together in a pinching movement on an image to zoom out or opening them from a pinched position to zoom in.
  • Holding fingers apart and moving them in a clockwise motion to rotate an image in that direction.
  • Reshaping an object on a touch screen as you would a real-life object.
  • Flicking a finger on the corner of a display to turn a page in an e-reader.
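The pinch-to-zoom gesture in the list above comes down to comparing the distance between two fingers over time. A minimal sketch (the function name and coordinate convention are illustrative, not any platform's actual API):

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Return the zoom factor implied by a two-finger pinch.

    Each argument is an (x, y) touch coordinate. A factor greater than 1
    means the fingers moved apart (zoom in); less than 1 means they were
    pinched together (zoom out).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    start = dist(p1_start, p2_start)
    end = dist(p1_end, p2_end)
    return end / start if start else 1.0

# Fingers move from 100 px apart to 200 px apart: zoom in by 2x.
print(pinch_scale((0, 0), (100, 0), (0, 0), (200, 0)))  # 2.0
```

An application would typically apply this factor directly as the new scale of the image under the fingers, which is why the gesture feels like stretching a physical object.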

Here’s a very basic explanation of how the iPhone’s multi-touch screen works:

Capacitors in the screen's outer layer are arranged in a grid and identified by their coordinates. When a finger touches the screen, each affected capacitor sends a signal to the processor. Interpretive software takes the raw data and calculates the location, size and shape or pattern of any touches on the screen. A gesture recognition program then combines that data with information about the application the user is running to match the touch information to a particular gesture. If a match is found, the result is relayed to the application as a command; if no match is found, the touch is considered unintentional and is ignored.
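The matching step described above can be sketched as a function that takes raw touch coordinates plus the set of gestures the current application understands, and either returns a gesture or ignores the touch. All names and thresholds here are illustrative assumptions, not Apple's actual implementation:

```python
import math

def recognize(touches_start, touches_end, app_gestures):
    """Match raw touch data to a gesture, mimicking the pipeline above.

    touches_start / touches_end: lists of (x, y) points reported by the
    touch hardware at the start and end of the movement.
    app_gestures: the gesture names the current application understands.
    """
    # Two simultaneous touch points suggest a pinch; compare the
    # finger-to-finger distance at the start and end of the movement.
    if len(touches_start) == 2 and len(touches_end) == 2:
        d0 = math.dist(touches_start[0], touches_start[1])
        d1 = math.dist(touches_end[0], touches_end[1])
        if d1 > d0 * 1.2 and "zoom_in" in app_gestures:
            return "zoom_in"   # fingers spread apart
        if d1 < d0 * 0.8 and "zoom_out" in app_gestures:
            return "zoom_out"  # fingers pinched together
    # No match: treat the touch as unintentional and ignore it.
    return None

print(recognize([(0, 0), (100, 0)], [(0, 0), (200, 0)],
                {"zoom_in", "zoom_out"}))  # zoom_in
```

Note how the application's own gesture set gates the result: the same finger movement is a command in one app and an ignored touch in another, which matches the behavior described above.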

This was last updated in November 2011
