
Timeline of AI winters casts a shadow over today's applications

Past generations have had to deal with rounds of inflated expectations and crushing disappointment when it comes to AI. This timeline charts the history of AI's rises and falls.

We've seen this story before, and it hasn't had a happy ending in the past.

This isn't the first time there have been high levels of excitement around AI. In at least two previous eras, people were sure AI was about to change the world. And then a harsh AI winter ensued, quashing those hopes.

As we think about today's AI technology and where it might go from here, it's useful to keep in mind how trends in AI research and development have played out in the past. Right now, businesses are betting big that AI will transform their industries and are rushing to be the first among their competitors to attain that AI edge. But perhaps a little caution is in order. This AI winter timeline explains the reasons behind the collapses of AI development efforts in the past.

AI was, in fact, present for the earliest days of computing. Researchers like Alan Turing and Marvin Minsky explored the possibilities of AI as far back as the 1940s and 1950s. A proposal written ahead of the Dartmouth Summer Research Project on Artificial Intelligence -- the 1956 event generally regarded as the founding of AI as a formal discipline -- stated that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

That statement was a bit ahead of its time, though, and it contributed to inflated expectations that were impossible to meet. As researchers and funders saw disappointing early results from neural networks, support for AI work collapsed in the 1970s, bringing on the first AI winter.

The cycle repeated itself in the 1980s, as the initial success of expert systems led some to see renewed strong potential for AI. If a computer could simulate human reasoning through if-then logic, what other elements of intelligence could be replicated? But, again, the technology of the day could only accomplish so much, and the high hopes were eventually dashed, ushering in a second AI winter.

Today, we are in a much different era of computing. Compared to past generations that tried to push AI forward, we have distributed computing systems -- which dwarf the processing power of the past -- and vast troves of training data on which AI systems can cut their teeth. These are distinct advantages that AI developers lacked in the past and are two of the primary drivers behind today's AI advances.

But it's still an open question how far the technology can go. Carnegie Mellon researcher Zachary Lipton has pointed out that most of what we call AI today is simply machine learning algorithms engaging in some form of pattern matching. While developers have found interesting and useful ways to apply this approach, it's hard to see it ever delivering a machine with any kind of broad or general intelligence on its own.

This may be a problem for science fiction fans who dream of a future where they can converse with intelligent robots, but it may not be as much of a concern for enterprises. Machine learning is delivering value today, and most businesses are only scratching the surface of what their AI implementations can do. Whether or not a machine learning application can truly be said to be intelligent is beside the point for now.

So, is the AI winter timeline bound to repeat itself again? Probably not in the near term. But if the industry starts seeing hype outpace real advances, we could find ourselves again in a place where AI researchers and their funders are disappointed and pull back. Keeping AI excitement grounded in real business value may help stave off another winter as more enterprises roll out machine learning technology and other forms of AI.