The future of AI: What to expect in the next 5 years
AI's impact in the next five years? Human life will speed up, behaviors will change and industries will be transformed -- and that's just what can be predicted with certainty.
For the first half of the 20th century, the concept of artificial intelligence held meaning almost exclusively for science fiction fans. In literature and cinema, androids, sentient machines and other forms of AI sat at the center of many of science fiction's high-water marks -- from Metropolis to I, Robot. In the second half of the last century, scientists and technologists began earnestly attempting to realize AI.
Brief history of AI's impact on society
At the 1956 Dartmouth Summer Research Project on Artificial Intelligence, co-organizer John McCarthy introduced the term artificial intelligence and helped incubate an organized community of AI researchers.
Often AI hype outpaced the actual capabilities of anything those researchers could create. But in the last moments of the 20th century, significant AI advances started to rattle society at large. When IBM's Deep Blue defeated reigning world chess champion Garry Kasparov in 1997, the event seemed to signal not only a historic and singular defeat in chess history -- the first time a computer had beaten a reigning world champion in a match -- but also that a threshold had been crossed. Thinking machines had left the realm of sci-fi and entered the real world.
The era of big data and the exponential growth of computational power in accord with Moore's Law enabled AI to sift through gargantuan amounts of data and learn to perform tasks that had previously been accomplished exclusively by humans. The release of ChatGPT in 2022, a generative AI system built on large language models, along with its subsequent iterations and related tools, served as proof of concept that machine learning could produce technologies far more powerful and captivating than earlier chatbots.
The effects of this machine renaissance have permeated society: Voice assistants such as Alexa, recommendation engines like those Netflix uses to suggest your next movie based on viewing history, and the modest steps taken by driverless cars and other autonomous vehicles are emblematic of a rudimentary stage of 21st century AI. GPT-4, DALL-E, Midjourney and other contemporary generative AI systems are now disrupting most business sectors. But the next five years of AI development will likely lead to major societal changes that go well beyond what we've seen to date.
How will AI impact the future?
Speed of life. The most obvious change many people will feel across society is an increase in the tempo of engagements with large institutions. Any organization that regularly engages large numbers of users -- businesses, government units, nonprofits -- will be compelled to implement AI in its decision-making processes and in its public- and consumer-facing activities. AI will let these organizations make many of those decisions far more quickly. As a result, we will all feel life speeding up.
Broad efficiency gains. Business enterprises will almost certainly be compelled to integrate and exploit generative AI to improve efficacy, profitability and, most immediately, efficiency. Corporations' duty to increase shareholder value and fear of falling behind competitors that integrate and deploy AI more aggressively will make for a virtually irresistible imperative: Fully embrace AI or see your investors turn bearish as peers pull ahead.
End of privacy. Society will also see its ethical commitments tested by powerful AI systems, especially privacy. AI systems will likely become much more knowledgeable about each of us than we are about ourselves. Our commitment to protecting privacy has already been severely tested by emerging technologies over the last 50 years. As the cost of peering deeply into our personal data drops and more powerful algorithms capable of assessing massive amounts of data become more widespread, we will probably find that it was a technological barrier more than an ethical commitment that led society to enshrine privacy.
Thicket of AI law. We can also expect the regulatory environment to become much trickier for organizations using AI. Across the planet, governments at every level, from local to national to transnational, are seeking to regulate the deployment of AI. In the U.S. alone, expect a thicket of AI law as city, state and federal government units draft, implement and begin to enforce new AI rules. In Europe, the EU AI Act, the world's first major AI regulatory scheme, cleared its final votes in the spring of 2024, and its provisions will take effect in stages over the next several years. Many observers expect it to set a standard for clear and effective legal enforcement, but large multinationals are working hard to water down its provisions and otherwise defang the regulation. In effect, considerable uncertainty defines the AI regulatory arena on both sides of the Atlantic and will probably continue to do so for at least several more years. The legal complexity of doing business will grow considerably over the next five years as a result.
Human-AI teaming. Much of society will expect businesses and government to use AI as an augmentation of human intelligence and expertise, or as a partner to one or more humans working toward a goal, rather than as a replacement for human workers. Because artificial intelligence was born as an idea in century-old science fiction tales, the tropes of the genre, chief among them dramatic depictions of AI as an existential threat to humans, are buried deep in our collective psyche. Human-AI teaming -- keeping humans in any process that is substantially influenced by artificial intelligence -- will be key to managing the fear of AI that permeates society.
Which industries will AI have a big impact on?
The following industries will be affected most by AI:
- Education. At all levels of education, AI will likely be transformative. Students will receive educational content and training tailored to their specific needs, and AI will determine optimal teaching strategies based on students' individual learning styles. By 2028, the education system could be barely recognizable.
- Healthcare. AI will likely become a standard tool for doctors and physician assistants tasked with diagnostic work, and society should expect the rate of accurate medical diagnosis to increase. But the sensitivity of patient data and the complexity of navigating the laws that protect it are also likely to lead to an even more complicated medical-legal environment, shifting patient expectations regarding ownership of and access to their medical data, and increased costs of doing business.
- Finance. Natural language processing combined with machine learning will allow banks and financial advisors as well as sophisticated chatbots to efficiently engage with clients across a range of typical interactions: credit score monitoring, fraud detection, financial planning, insurance policy matters and customer service. AI systems will also be used to develop more complex and rapidly executed investment strategies for large investors.
- Law. We can expect the number of small and medium-sized law firms to fall over the next five years, as small teams of one to three humans working with AI systems do work that would once have required 10 to 20 lawyers, and do it more quickly and more cost-effectively. Given the proper prompts, generative AI is already able to provide rudimentary summaries of applicable laws and draft contract clause language. If the last few years of AI development continue apace, by 2028 the number of human lawyers in the U.S. could shrink by 25% or more.
- Transportation. The near-term future will see more autonomous vehicles in private and commercial use. From the cars many of us drive to work, to the trucks carrying goods along the highway, to the spacecraft ferrying humans and cargo to the moon, transport by autonomous vehicles will probably be the most dramatic sign that we have arrived in the age of AI.
Examining AI's long-term dangers
The notion that AI poses an existential risk to humans has existed almost as long as the concept of AI itself. But in the last two years, as generative AI has become a hot topic of public discussion and debate, fear of AI has taken on new undertones.
Arguably the most realistic form of this AI anxiety is a fear of human societies losing control to AI-enabled systems. We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry. The whole point of such implementations is to exploit the capacities of synthetic minds to operate at speeds that outpace the quickest human brains by many orders of magnitude.
However, the existential threats posited by Elon Musk, Geoffrey Hinton and other prominent figures in the field seem at best like science fiction, and far less hopeful than much of the AI fiction created a century ago.
The more likely long-term risk of today's AI anxiety is missed opportunity. To the extent that organizations take these existential claims seriously and underinvest out of fear, human societies will miss out on significant efficiency gains, the innovations that flow from human-AI teaming, and possibly even new forms of technological innovation, scientific knowledge production and other modes of societal innovation that powerful AI systems could indirectly catalyze.
Michael Bennett is director of educational curriculum and business lead for responsible AI in The Institute for Experiential Artificial Intelligence at Northeastern University in Boston. Previously, he served as Discovery Partners Institute's director of student experiential immersion learning programs at the University of Illinois. He holds a J.D. from Harvard Law School.