MIT Technology Review's Amy Nordrum and Niall Firth unpack what trends to watch in AI, from LLMs+ and the China-spurred open source AI boom to weaponized deepfakes and AI malaise.
CAMBRIDGE, Mass. – AI moves fast. From advancements in agentic technology and robotics to evolving regulatory landscapes and shifting public perceptions, it can be tricky for leaders to stay on top of AI and track its progress.
MIT Technology Review is no stranger to emerging technologies. At its EmTech AI conference this week, the publication unveiled, for the first time, a list of the 10 things that matter in AI in 2026, compiling a mix of AI advancements, trending topics and emerging sentiments.
In the session "Exclusive First Look: The 10 Things That Matter in AI Right Now," MIT Technology Review executive editors Amy Nordrum and Niall Firth explored each of the 10 topics to keep an eye on, briefly unpacking what they mean in the larger AI context. The 10 topics to watch are as follows:
Humanoid data. By training humanoid robots on LLM-inspired data, companies are betting that future humanoids will be better than humans at certain tasks.
LLMs+. Everyone's asking: What's next after LLMs? Well, it might be more LLMs -- or LLMs+, as MIT Technology Review calls it. Future LLMs might use mixture-of-experts models and context window advancements to enable engagement with more complex, multipart problems.
Supercharged scams. From phishing emails to software security bugs, it's becoming increasingly easy to use AI to commit scams.
World models. These will continue gaining traction as a viable way to train robots and agents on real-world environments.
The new war room. Militaries are increasingly using AI for tasks such as evaluating political sentiment and generating threat intelligence reports. Forward-looking use cases could include chatbots trained on classified data that advise on military strategy.
Weaponized deepfakes. AI-based deepfakes, often targeted at minority groups or used for political purposes, are getting more realistic and dangerous. Many experts hypothesize that the societal effect could be permanent.
Agent orchestration. Recent advances in agentic AI point to orchestration as the next big thing, with multi-agent ecosystems and networks working together on complex tasks.
China's open source bet. The wave of excitement from DeepSeek might have worn off, but it set off a chain of open source AI products. This is a fundamentally different way of doing things from Silicon Valley's AI vendor scene in the U.S.
Artificial scientists. AI for science is taking off, with agents potentially producing hypotheses and experiments, and acting as scientists themselves. OpenAI recently told MIT Technology Review that this is its new "North Star."
Resistance. Humans are tired of AI taking their jobs, weary of data centers and their impacts, and scared of AI's ethical implications. This is leading to growing AI resistance and protest.
At EmTech AI, an interactive wall let attendees share their thoughts about the top 10 things to watch in AI. The wall was quickly filled with excitement, theories on what's next and anxiety about how these topics might affect society.
The following interview took place at MIT Technology Review's EmTech AI conference. Nordrum and Firth discussed how the top 10 list came about and which trends or advancements they're especially focused on. They also explored some topics with significant societal effects: AI malaise, weaponized deepfakes and the lack of guardrails keeping up with innovation.
Editor's note: The following interview was edited for length and clarity.
Identifying "top 10s" is daunting for anything, let alone AI. How did MIT Technology Review compile the leading developments in AI that are worth watching in 2026?
Amy Nordrum: We produce a number of lists of 10 things, so it's something that we're in the habit of doing. We have a list of 10 breakthrough technologies. We also do a list of 10 climate tech companies each year. I like doing these because it's a different approach to our editorial coverage. It forces us to think about all the technologies we cover in a different way, step back from the day-to-day news coverage and consider what's really going to have the highest impact.
There's so much happening in AI that we knew it would be helpful to do the same exercise here. We have a really strong team of AI reporters and editors who are deep in this coverage, day in and day out. We collectively just started brainstorming: What are some of the things that are happening right now in AI? The big-picture trends, the most recent advances and the most important ways the technology is evolving that people should know about.
Are there any specific trends or developments that stick out to you as being especially pertinent to keep an eye on?
Niall Firth: For me, it's LLMs+. We coined the term because the thing everyone wants to know about is what's coming after LLMs. And three of the editors and reporters were all circling around these different approaches, what they think the next thing is going to be. And as we talked about it, it seemed to be more and more the case that it was bolted on to LLMs. That's going to be the next evolution. So, we kind of honed it down and came up with our own take on what's next.
Nordrum: One of my favorites on the list was China's open source bet, because everybody heard about DeepSeek and its big moment. But with that item, our China reporter Caiwei [Chen] was talking about everything that's happened since and bringing people up to date on what the state of the industry is and how it's not just DeepSeek and that one model. It's a whole wave of companies that are building open source in China and also exporting their approaches to many other parts of the world.
[AI malaise] reflects how a lot of people are feeling and have maybe been struggling to put into words about AI.
Amy Nordrum, Executive Editor, MIT Technology Review
These are becoming the most popular models. Model families are increasingly open source and Chinese-built, which is so significant for AI development in many other parts of the world, and such an interesting, different strategy from what American tech companies are doing. And so I found that item -- the way that it encapsulated this entire trend and a big development that people may have heard of but would like a lot more context on -- to be my favorite.
I also think the term -- it's not on the list, but it's in Mat Honan's essay that talks about this concept -- AI malaise just captures something in this moment. It reflects how a lot of people are feeling and have maybe been struggling to put into words about AI. [People are] sort of overwhelmed and also underwhelmed at the same time and in some ways frustrated. There's something more significant going on there.
And [the term] resistance captures the acts that a minority of people have been taking to speak out and demonstrate against AI. But I think malaise is more the mass feeling that many more people are probably experiencing. It's not necessarily causing them to protest, but they're just checking out. It's almost the opposite reaction. That was an interesting conversation we had that resonated with the way that I feel, like I've been hearing people talk about AI and this just general zeitgeist mood that we have right now.
I'm so glad you brought up AI malaise, because it resonated with me, too. I'm curious, how much should business leaders be keeping an eye on both AI resistance and AI malaise? How do you see it possibly impacting enterprise settings?
Nordrum: There's this big wave of enthusiasm, and a lot of people were trying stuff out or felt like they had to be trying stuff out at work. And some people in your workforce, if you're leading a company, might be getting a bit of this feeling now. Where they have tried it, they've put some good faith effort toward making it work and it's just not working for them, or it's not doing the thing they wanted, or it's not as easy. Acknowledging that is helpful.
I was talking to an [EmTech AI] attendee earlier who was saying it's become more common for people that are working with AI and developers -- software developers especially -- to talk about the trouble that they're running into. It used to be almost like you couldn't talk about that because it might reflect poorly on your own skills or like you're not with it if you're talking about the limitations. So I think that's becoming more of a broad feeling.
Acknowledging it and saying, this might just be part of the technology's lifecycle. We might have this moment where we all fall a little bit out of love with it, and then it gets better and we find better ways to work with it. And then in the end, it actually does help us do our work and helps us get stuff done. I don't really know, but I think just acknowledging that that might be where people are at, and that not everything has worked, and that it's not immediately solving all of our problems -- I think that's a healthy, good thing to do.
Amy Nordrum and Niall Firth presented 'The 10 Things That Matter in AI Right Now,' including resistance, in which people grow increasingly averse to AI and protest its use.
One of the things that I wanted to ask about was the supercharged scams and the weaponized deepfakes. What distinctions between these two trends warrant them each holding their own space in such a thoughtfully curated list?
Nordrum: That's interesting. One's kind of a subset of the other, I suppose.
Firth: I feel like deepfakes are aimed directly at people -- particularly women and for political reasons. A deepfake is creating something for an end goal, whereas a scam is more just making it easier to do crappy, low-level scamming of people for money.
We were told for years that this was going to happen if we didn't do something, and that if we didn't prepare for it, this is what life's going to be like for lots of people.
Niall Firth, Executive Editor, Newsroom, MIT Technology Review
Nordrum: Yeah, and some of the supercharged scams involve deepfakes, like the one I mentioned with the CFO whose company got cheated out of money. Or the ones I've heard about, where people are getting calls and their loved one sounds distressed on the other end of the line, and it's like, I'm in jail, send me money. They're hearing that kind of story in their loved one's voice, but it's a spoof -- an audio deepfake in that case.
But then there's other kinds of supercharged scams that don't involve those kinds of deepfakes, like better phishing emails or using some of these models to look for bugs in software code and then exploiting those vulnerabilities. You're right that there's overlap, and then there are also parts of each one that don't have as much to do with the other. It's just direct harassment in the case of deepfakes or other kinds of scamming.
Firth: We've covered it for years, before this particular AI cycle. We talked about deepfakes, particularly of women, made by ex-boyfriends. Obviously, now it's incredibly easy to do, and they're far more realistic, and they're everywhere. So it's just targeted harassment. We were told for years that this was going to happen if we didn't do something, and that if we didn't prepare for it, this is what life's going to be like for lots of people. And just as we were told, it is happening now.
Many of the 10 developments on this list need guardrails to mitigate adverse effects. Do you think AI is advancing faster than the guardrails it needs?
Nordrum: Yes, that's so often the case with technologies.
Firth: It seems relatively easy to put guardrails on lots of these things. It's just more a case of political will. The tech companies have a lot of say, in terms of lobbying and political pressure. Europe is doing quite a lot of regulation around some of these things, particularly around chatbots and deepfakes, but that's sort of meaningless in global terms. And most of the companies doing this are American. So yeah, it's not that hard to solve, but no one's really going to want to do the things that need to be done.
Olivia Wisbey is a site editor for Informa TechTarget's AI & Emerging Tech group. She has experience covering AI, machine learning and other emerging technologies.