
Experts talk impact, challenges of AI and emerging tech

Companies must consider many risks as AI and other emerging technologies evolve at a rapid pace. In this interview excerpt, experts discuss the societal and environmental effects.

Organizational leaders are among those racing to find generative AI's business value. The potential downsides and need for regulation are arguably getting less attention.

Business and IT leaders are facing a host of questions and complex decisions as they grapple with the growing impact, challenges and responsibilities related to AI and emerging tech.

In a wide-ranging BrightTALK conversation, three MIT researchers discussed those issues and shared in-depth analysis and guidance. Below is an excerpt of that talk where the tech experts analyzed the potential of generative AI and other emerging technologies as well as sustainability and societal implications. They also discussed the need for responsible AI development, why regulation will be complex and some new issues leaders will face.

Editor's note: This interview was edited for length and clarity.

Generative AI has a number of sustainability implications. How can we ensure that emerging tech and AI development are sustainable and responsible?

Irving Wladawsky-Berger: We need to be careful, because these are powerful technologies with huge potential for good, but which can be badly misused.

Society has realized Web2 didn't work so well. Social media, which we were all so excited about, has not been used entirely for good. Social media has been misused … to polarize.

We need to understand that a lot of what you're seeing is less an existential threat to humanity … and more that powerful technologies need to be carefully managed, including in their potential impact on the climate. It takes lots of electricity to use this technology. We need to figure out how to make their usage much more efficient. And that will happen if we spend the time [working on] it.

Madhav Kumar: I echo the concerns about sustainability implications, especially long-term implications. I'm proceeding a bit more cautiously, while remaining optimistic that solutions will be developed.

If this [tech] is to be widely adopted -- and by widely, I mean globally -- the costs of running these models have to come down. I think that's an area that's ripe for transformation, as is the question of how we make these models run in a more sustainable and eco-friendly way. I think that's absolutely of paramount importance.

The big breakthrough will happen when we try to solve the problem from the crux [of sustainability] itself and not necessarily relying on policy spillover for good to happen by chance. I don't think we are there yet, but there are a lot of smart people working on this. We will see something optimistic about this soon.

Robert Mahari: I'm very optimistic about the positive uses.

I'm fairly concerned about our ability to really regulate AI, for many of the same reasons I'm optimistic about it. The barriers to entry are so low and they're only getting lower, and to Madhav's point, energy usage is going to come down. The sophistication required to train a model, to run a model, to host it locally: All of those things are going to come down.

Then we converge on a world where proposals for shutting AI down are just unrealistic. It will be distributed all over the place. Anyone with a relatively small budget and some technical expertise will be able to run their own models.

The things that I'm really worried about are things like misinformation campaigns, weaponizing AI in those kinds of ways. I think more research is going to be needed to figure out how to regulate that, how to protect against that, in part, because we can only imagine so many misuses.

Actually being able to get jurisdiction over players is going to be challenging. There'll be some need for international cooperation. There's going to be some need for regulatory innovation. And, in general, there's a need for thinking and research about which harms we should really be concerned about. We all have our vivid imaginations, but what's really going to be going on? How do we guard against it?

That's also partially where things like digital identity are going to be really important: being able to certify that something was created by a human, and to establish the provenance of a piece of information or a piece of art.

All of those things will start becoming really important. Research is needed, and good research is going on. So, we'll see.
