
Spatial computing

What is spatial computing?

Spatial computing broadly describes the processes and tools used to capture, process and interact with three-dimensional (3D) data. It is defined by computers blending digital data with the physical world around them in a natural way.

Spatial computing in action could be as simple as controlling the lights when a person walks into a room or as complex as using a network of 3D cameras to model a factory process. Spatial computing concepts also play a role in orchestrating autonomous computing applications in warehouse automation, self-driving cars and supply chain automation.

Users commonly interact with spatial computing applications through virtual reality (VR) headsets that mirror the physical world, or mixed reality devices that overlay data onto a view of the physical world.

Components of spatial computing can include camera sensors, internet of things (IoT) devices, digital twins, ambient computing, augmented reality (AR), VR, artificial intelligence (AI) and physical controls. Substantial advances in these technologies are making spatial computing more practical.

The term spatial computing was coined by Simon Greenwold, who described the concept's importance in his 2003 master's thesis at MIT as "an essential component for making our machines fuller partners in our work and play."

Companies including Apple, Google, Magic Leap, Meta and Microsoft offer spatial computing devices for consumer audiences.

How does spatial computing work?

Spatial computing mirrors how humans interact with objects in the real world. People translate the two-dimensional images that they see into a 3D model of the world, make sense of the objects in it and then direct their hands to act. For example, when we pour a cup of tea, we watch the cup as we pour, determine when it is full and then stop pouring. Spatial computing does the same but with computers, sensors and actuators.

Spatial computing involves the following steps:

  1. Gather data. Spatial mapping techniques are used to gather data about the user and the device's surroundings. Techniques such as photogrammetry, lidar and radar capture a 3D model of the world. Lidar and radar measure the reflection of a laser or radio signal off objects around a scanner to automatically capture a point cloud, a set of data points in space that represents 3D shapes and distances. Photogrammetry, often described as the art and science of creating 3D models from photographs, combines overlapping imagery from multiple images or cameras. Newer AI techniques can also capture a richer representation from just a handful of images.
  2. Analyze the data. Techniques such as machine vision analyze this data to make sense of the imagery. AI techniques help identify individual objects in a scene, their movement and their interactions with other objects. For example, they can look for product defects, analyze gait patterns or compare how different workers perform a process.
  3. Take action. Handheld controllers, motion sensors and speech recognition help users interact with the device and their surrounding environment. For example, analysis of the 3D imagery captured from a physical scene enables a self-driving car to detect a pedestrian in front of it and stop in real time. A building control system can adjust the heat or the lights when someone walks into a room, based on preferences stored in a database.
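The three steps above can be sketched as a single loop. This is a minimal illustration, not a real sensor pipeline: the lidar readings, threshold and actions are all hypothetical placeholders.

```python
import math

def lidar_distance(time_of_flight_s: float) -> float:
    """Convert a lidar pulse's round-trip time to distance in meters."""
    SPEED_OF_LIGHT = 299_792_458.0  # m/s
    return (time_of_flight_s * SPEED_OF_LIGHT) / 2

def gather(pulse_times):
    """Step 1: turn raw time-of-flight readings into a crude 2D point cloud.
    Each reading pairs a scan angle (radians) with a round-trip time."""
    return [
        (lidar_distance(t) * math.cos(a), lidar_distance(t) * math.sin(a))
        for a, t in pulse_times
    ]

def analyze(points, threshold=1.0):
    """Step 2: flag any point closer than the safety threshold (meters)."""
    return [p for p in points if math.hypot(*p) < threshold]

def act(obstacles):
    """Step 3: choose an action based on the analysis."""
    return "stop" if obstacles else "proceed"

# One pulse straight ahead that returned in ~3.3 ns (about 0.5 m away)
# and one that returned in 20 ns (about 3 m away).
readings = [(0.0, 3.336e-9), (math.pi / 2, 2e-8)]
print(act(analyze(gather(readings))))  # obstacle inside 1 m -> "stop"
```

A self-driving car's pedestrian stop or a room's occupancy-triggered lighting both follow this same gather-analyze-act shape, only with far richer sensing and models.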

Spatial computing experienced through a headset uses technology such as AR and VR, internal and external camera sensors, controllers and motion tracking. These headsets gather data about the user, their movements and their surroundings, then analyze and interpret the incoming data to respond accordingly.

Key features and benefits of spatial computing

Spatial computing can improve enterprise processes in the following ways:

  • Aligning computer programming with how humans think of the world.
  • Enabling the creation of new physical workflows.
  • Combining data from multiple types of sensors to streamline user experience.
  • Automating the process of creating digital twins.
  • Connecting the dots between robotic process automation and physical automation.
  • Providing new ways of interacting between people, robots and products in physical space.
  • Helping companies measure the performance of physical process variations.
  • Enabling the orchestration of multiple physical processes.
  • Improving the design of physical facilities and processes.

Industry use cases for spatial computing

Spatial computing is being used in the following ways:

  • Manufacturing facilities can monitor the production line during each step of the process. This can help identify the different steps involved in making a product. It can also determine when and why different teams might take different approaches and the impact that has on time and quality.
  • Warehouses can combine data about the physical location of products with the movement of robots and humans that pack these goods. This can help guide employees and robots to the products. This can also be used to simulate different warehouse layouts to improve overall efficiency and reduce worker burnout.
  • Property management firms can use spatial computing to build a model of an office overlaid with different layouts to optimize the use of space.
  • Facilities management can program automated lighting and environmental controls to adjust lighting, heating and cooling to worker preferences.
  • Hospitals can use location tags to help teams automate key patient details or procure special equipment in an emergency.
  • Organizations can use 3D visualizations of physical products to view a product or model at each stage of development. For product design, an organization can test a project's format, ergonomics and predicted use in a 3D space.
  • Employees can use headsets for remote collaboration. This lets them work remotely but still collaborate with one another in a shared space.
  • Human resources departments can use spatial computing to reduce the time needed to train new employees or to improve training results and experiences.
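The facilities management use case above amounts to a preference lookup triggered by room entry. Here is a hedged sketch of that idea; the preference store and control dictionary are hypothetical stand-ins for a real building-control system.

```python
# Stored worker preferences, keyed by a hypothetical badge ID.
PREFERENCES = {
    "alice": {"light_level": 70, "temp_c": 21.0},
    "bob": {"light_level": 40, "temp_c": 23.5},
}

def on_room_entry(badge_id: str, room_controls: dict) -> dict:
    """Apply a worker's stored preferences when they enter a room."""
    prefs = PREFERENCES.get(badge_id)
    if prefs is None:
        return room_controls  # unknown occupant: leave settings alone
    updated = dict(room_controls)
    updated["light_level"] = prefs["light_level"]
    updated["temp_c"] = prefs["temp_c"]
    return updated

room = {"light_level": 0, "temp_c": 19.0}
print(on_room_entry("alice", room))  # lights to 70%, heat to 21 C
```

In a real deployment the trigger would come from occupancy sensing (a camera, badge reader or motion sensor) rather than a direct function call.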

Examples of spatial computing

The following are examples of spatial computing:

  • A mixed reality headset that overlays a repair manual to guide a technician.
  • A network of cameras that automatically models a car production process.
  • A spatial computing analytics program that coaches employees on how to reduce harmful movements.
  • A spatial model of the production process that lets managers simulate variations to optimize the process.
  • Occupancy analytics programs that automate elderly safety checks for relatives and caregivers.
  • Offices that dynamically tailor office lighting and environmental controls to individual workers.


Spatial computing vs. VR

Spatial computing can be used as a general term that extends to technologies such as AR, VR and mixed reality.

VR simulates a 3D environment that lets users explore and interact with a virtual surrounding. The environment is created using computer hardware and software delivered through a wearable headset. AR works similarly, but instead of simulating a different environment, it overlays simulations on top of real-world environments.

Spatial computing differs from AR and VR in that its digital simulations can interact with, or appear to modify, the physical environment.

For example, a digital object rendered in a headset can appear to rest on a real-world table. The user could walk around the table in real space to see the back of the object, pick it up and place it on a real pedestal. To do this, the headset must be able to represent the digital object, understand the real-world environment and the objects in it, and respond to the user and those nearby physical objects.
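The table example boils down to anchoring a virtual object to a detected real-world surface. The sketch below shows only the anchoring step, under the assumption that plane detection has already happened; all names are illustrative, not any headset's actual API.

```python
from dataclasses import dataclass

@dataclass
class Plane:
    """A detected horizontal surface, e.g. a tabletop."""
    height: float  # y-coordinate of the surface, in meters

@dataclass
class VirtualObject:
    """A digital object positioned in the room's coordinate space."""
    x: float
    y: float
    z: float

def rest_on(obj: VirtualObject, surface: Plane) -> VirtualObject:
    """Snap the object's vertical position onto the surface so it
    appears to sit on the real table rather than float in space."""
    return VirtualObject(obj.x, surface.height, obj.z)

table = Plane(height=0.75)           # a table detected 75 cm off the floor
vase = VirtualObject(0.2, 1.4, 0.5)  # currently floating in mid-air
anchored = rest_on(vase, table)      # same x/z, y snapped to 0.75
```

Real systems track the surface continuously, so the anchored object stays put as the user walks around it and views it from new angles.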

Spatial computing vs. edge computing

While spatial computing and edge computing sound similar, they refer to different general ideas. Spatial computing blends digital and real-world data in a natural way, whereas edge computing moves data processing closer to the user.

Edge computing is a distributed IT architecture where client data is processed at the periphery of the network, as close to the originating source as possible.

In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Instead of transmitting raw data to a central data center for processing and analysis, that work is performed where the data is actively generated.

A spatial computing headset could be considered an edge computing device, for example, if the data the sensors pick up is then processed in the headset instead of being sent to a separate device.
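That on-device-versus-offload choice can be framed as a simple latency comparison. The numbers below are illustrative assumptions, not measured values for any real headset.

```python
def choose_processing_site(frame_size_mb: float,
                           on_device_ms_per_mb: float = 8.0,
                           uplink_ms_per_mb: float = 20.0,
                           server_ms_per_mb: float = 1.0) -> str:
    """Pick wherever the total latency for this frame is lower:
    processing on the headset (edge) or sending it to a server."""
    on_device = frame_size_mb * on_device_ms_per_mb
    offloaded = frame_size_mb * (uplink_ms_per_mb + server_ms_per_mb)
    return "headset" if on_device <= offloaded else "server"

# 16 ms on-device beats ~42 ms of uplink plus server time.
print(choose_processing_site(2.0))  # -> "headset"
```

With a slower on-device processor or a faster network link, the same comparison tips toward the server, which is why many systems split the work between the two.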

Future of spatial computing

Despite the potential benefits of spatial computing, the technology has had limited success. Spatial computing headsets typically have the following three issues:

  • The cost of the hardware. The required sensors, processing power and materials for some spatial headsets can cost up to a few thousand dollars.
  • The weight of the hardware. This directly affects how long a headset feels comfortable to wear while moving around or sitting down.
  • Mobility and battery life. A wireless headset gives the user more mobility but might have a short battery life. Conversely, a wired headset can run longer, but the cord tethers the user.

However, recent advances have made the experience more practical. For example, the Apple Vision Pro, which some claim has the potential to revolutionize spatial computing, packs in an array of sensors that make moving around with it and using hand gestures smoother and more responsive. The Apple Vision Pro uses two main cameras, four downward-facing cameras, two infrared cameras, two TrueDepth cameras, two side cameras and five other sensors.


This was last updated in February 2024
