
A Timeline of Machine Learning History
Machine learning, a branch of artificial intelligence (AI), has some impressive capabilities. A machine learning algorithm enables software to learn from data: without being explicitly programmed, it can seemingly grow "smarter," becoming more accurate at predicting outcomes as it is fed historical data.
The History and Future of Machine Learning
Machine learning grew out of the mathematical modeling of neural networks. A paper by logician Walter Pitts and neuroscientist Warren McCulloch, published in 1943, attempted to mathematically map out thought processes and decision-making in human cognition.
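The McCulloch-Pitts model treated a neuron as a simple unit that sums binary inputs and fires only when the sum reaches a threshold. The Python sketch below is an illustrative reconstruction of that idea rather than anything taken from the paper itself; the function name and threshold values are assumptions chosen to show how such units can compute logical operations like AND and OR.

def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (return 1) if the number of active binary inputs
    meets or exceeds the threshold; otherwise stay silent (return 0)."""
    return 1 if sum(inputs) >= threshold else 0

# A two-input AND gate: both inputs must be active for the neuron to fire.
assert mcculloch_pitts_neuron([1, 1], threshold=2) == 1
assert mcculloch_pitts_neuron([1, 0], threshold=2) == 0

# A two-input OR gate: a single active input is enough.
assert mcculloch_pitts_neuron([0, 1], threshold=1) == 1

Chaining units like these together is what lets networks of simple threshold neurons, in principle, compute arbitrary logical functions, which is the insight the 1943 paper formalized.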
In 1950, Alan Turing proposed the Turing Test, which became the litmus test for whether a machine could be deemed "intelligent." To pass, a machine had to be able to convince a human being that it, too, was a human being. Soon after, the 1956 summer research project at Dartmouth College became the official birthplace of AI.
From this point on, "intelligent" machine learning algorithms and computer programs began to appear, doing everything from planning travel routes for salespeople to playing board games such as checkers and tic-tac-toe against humans.
Intelligent machines went on to do everything from recognizing speech, to learning to pronounce words the way a baby would, to defeating a world chess champion at his own game. The infographic below shows the history of machine learning and how it grew from mathematical models into sophisticated technology.
