Computational photography is the use of computer processing in cameras to produce an enhanced image beyond what the lens and sensor pick up in a single shot.
Computational photography is common in digital cameras and especially in smartphones, where it automates many settings to improve point-and-shoot results. Using image processing algorithms, it improves images by reducing motion blur and adding simulated depth of field, as well as enhancing color, contrast and dynamic range.
While these enhancements are often added to digital cameras, they are even more common in smartphones, which lack the space for a large lens that might otherwise improve pictures. Smartphones also typically have far more processing power than dedicated digital cameras.
In computational photography, several pictures are often taken and cross-referenced. After cross-referencing the images, the software automates many of the settings a photographer might otherwise set carefully to produce artful shots. The images are sometimes cut into tiles, and sections from individual frames may be blended or dropped to eliminate blur from random movement or to capture the best detail and balance of light and dark. Some features, such as image stabilization, are implemented in both hardware and software. For example, the camera's lens might move to compensate for small shakes, while software cross-references the picture with data from the gyroscope to stabilize broader movement.
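The core idea of cross-referencing several frames can be sketched with a minimal example: averaging an aligned burst of shots suppresses random sensor noise, since the noise varies between frames while the scene does not. This is a simplified illustration only; `merge_frames` is a hypothetical helper, and real pipelines align and select tiles per frame rather than averaging whole images naively.

```python
import numpy as np

def merge_frames(frames):
    """Average a burst of already-aligned frames to reduce random noise.

    frames: list of equally sized NumPy arrays, e.g. (H, W) grayscale.
    Assumes alignment has already happened; production pipelines
    register tiles between frames before merging.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst of 16 noisy captures of the same flat-gray scene.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128.0)
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]

merged = merge_frames(burst)
single_err = float(np.abs(burst[0] - scene).mean())
merged_err = float(np.abs(merged - scene).mean())
# Averaging N frames shrinks uncorrelated noise by roughly 1/sqrt(N),
# so the merged frame should sit much closer to the true scene.
```

The same principle underlies burst-based low-light modes: each individual frame can use a short, blur-free exposure, and the merge recovers the signal that any single frame would lose to noise.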
Computational photography became a hot topic with the 2017 release of the Pixel 2 smartphone, which uses machine learning to merge several images of the same location, taken at different angles, into a larger scene. As the technology improves, smartphones let non-professional photographers produce images of ever-increasing quality. This poses a challenge for photography as an art form: is the artist or the smartphone responsible for the quality of a photograph?