
7 ways AI could bring more harm than good

AI has the potential to cause harm in several ways -- including job displacement, lack of transparency and unintended biases.

AI is everywhere. On our phones. On social media. On customer service lines.

However, the question of whether artificial intelligence brings more harm than good is complex and highly debatable. The answer lies somewhere in the middle and can vary depending on how AI is developed, deployed and regulated.

AI has the potential to bring significant benefits in various fields, including healthcare, manufacturing, transportation, finance and education. It can enhance productivity, improve decision-making and help solve complex problems. But its rapid advancements could render less specialized jobs obsolete and lead to other issues -- such as lack of transparency, machine learning bias and the spread of misinformation.

Ways AI can bring more harm than good

As with any technology, AI carries certain risks, challenges and biases that cannot be overlooked, and these must be managed properly to ensure the benefits outweigh the potential harms. In a 2023 open letter, Tesla and SpaceX CEO Elon Musk -- along with more than 1,000 tech leaders -- urged a pause in AI experiments, citing the substantial dangers they could pose to humanity.

Many proponents of AI believe the problem is not the technology itself, but the way it's used, and they are hopeful that regulatory measures can address many of the risks associated with AI.

If not used ethically and with due discretion, AI has the potential to harm humanity in the following ways.

1. Unintended biases

Cognitive biases can inadvertently creep into machine learning algorithms -- either because developers unknowingly introduce them into the model or because they are embedded in the training data set. If the training data is incomplete or skewed, the AI system can pick up and reinforce those prejudices. For example, if the historical data used to train an HR algorithm is skewed against certain demographics, the algorithm might discriminate against those groups when making hiring decisions.
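To make the mechanism concrete, here is a minimal, hypothetical sketch in Python using scikit-learn and purely synthetic data -- the feature names, numbers and model choice are illustrative assumptions, not any real hiring system. A classifier trained on skewed historical decisions learns to reproduce the skew:

```python
# Minimal sketch: a model trained on biased historical hiring data
# reproduces that bias. All data is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Synthetic applicants: a qualification score and a demographic group flag.
qualification = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Biased historical labels: past decisions rewarded qualification but
# systematically penalized group B.
hired = (qualification - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified applicants who differ only in group membership.
applicants = np.array([[1.0, 0.0], [1.0, 1.0]])
probs = model.predict_proba(applicants)[:, 1]
print(f"P(hire | group A) = {probs[0]:.2f}")
print(f"P(hire | group B) = {probs[1]:.2f}")
# The model scores the group B applicant lower despite identical
# qualifications -- it has learned the historical penalty.
```

Note that simply dropping the group column rarely fixes the problem, because other features can act as proxies for it; meaningful bias audits test a model's outcomes, not just its inputs.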

2. Job displacement

While AI automation can simplify tasks, it also has the potential to render certain jobs obsolete and pose new challenges for the workforce. According to a report by McKinsey Global Institute, activities that account for 30% of the hours currently worked in the U.S. economy could be automated by 2030 -- a trend accelerated by generative AI.

Replacing human workers with AI can also have unpredictable consequences. Microsoft recently faced backlash when CNN, The Guardian and other news outlets discovered bias, fake news and offensive polls appearing on the MSN news portal. The glitches were blamed on artificial intelligence, following the company's decision to replace many of its human editors with AI.

3. Lack of transparency and accountability

It can be difficult to hold AI technologies accountable for their behavior because they are often intricate and hard to interpret. While explainable AI aims to provide insight into a machine learning or deep learning model's decision-making process, the opacity of many AI systems makes it difficult to understand why a model produced a particular output -- or even which algorithm is best suited to a given task.
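As a rough illustration of what basic model introspection looks like, here is a sketch in Python using scikit-learn's built-in feature importances on synthetic data. The feature names and data are invented for the example; production explainability work typically relies on richer techniques, such as SHAP or LIME:

```python
# Minimal sketch: peeking inside a black-box model by inspecting
# which features drive its decisions. Synthetic, hypothetical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
features = ["income", "debt_ratio", "zip_code_bucket"]

X = rng.normal(0, 1, (n, 3))
# The "true" process ignores zip_code_bucket entirely.
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(0, 0.3, n)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature importances give a first, coarse view into the black box.
for name, importance in zip(features, model.feature_importances_):
    print(f"{name:>16}: {importance:.2f}")
# If zip_code_bucket showed high importance here, that would be a red
# flag to investigate before trusting the model's decisions.
```

Even this coarse view helps: a model that leans heavily on a feature it shouldn't use is far easier to challenge when its reasoning can be surfaced at all.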

As AI systems become more autonomous and obscure, there is a risk that humans could lose control over these systems, leading to unintended and potentially harmful consequences without accountability.

4. Social manipulation through algorithms

AI techniques and algorithms can be used to spread false information, sway public opinion and influence people's behavior and decision-making.

For instance, AI can analyze data on a person's behavior, preferences and relationships to create targeted ads that play on their emotions and choices. Deepfakes -- AI-generated audio or video fabricated to appear realistic -- are also used to spread false information or manipulate people.

Businesses can, and often do, face criticism for enabling social manipulation through AI. For example, TikTok -- a social media platform built on AI algorithms -- populates a user's feed based on past interactions, creating a content loop that surfaces similar videos again and again. The app has been criticized for failing to remove harmful and inaccurate content and for not safeguarding its users from misinformation.
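As a toy illustration only -- real recommendation engines such as TikTok's are proprietary and far more complex -- the following Python sketch shows how a naive similarity-based recommender narrows a feed: each recommendation nudges the user's profile toward similar content, which in turn drives the next recommendation:

```python
# Toy sketch of a similarity-driven feed narrowing over time.
# This illustrates the feedback-loop idea, not any real platform.
import numpy as np

rng = np.random.default_rng(7)

# 500 videos, each a point in a 2-D "topic space".
videos = rng.uniform(-1, 1, (500, 2))

# The user's taste profile starts at one randomly watched video.
profile = videos[rng.integers(len(videos))].copy()

watched = set()
for step in range(10):
    # Recommend the closest unwatched video to the current profile ...
    distances = np.linalg.norm(videos - profile, axis=1)
    distances[list(watched)] = np.inf  # skip already-watched videos
    pick = int(np.argmin(distances))
    watched.add(pick)
    # ... then nudge the profile toward what was just watched.
    profile = 0.9 * profile + 0.1 * videos[pick]
    print(f"step {step}: video {pick}, distance {distances[pick]:.3f}")
# Recommendations cluster ever closer to the profile: a feedback loop
# that narrows exposure rather than diversifying it.
```

The loop has no notion of accuracy or harm -- it optimizes only for similarity, which is why platforms need separate safeguards against misinformation.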

Also, in 2023, Meta revised its policies to restrict the use of its generative AI advertising tools for campaigns related to elections, politics and social issues. The move is intended to curb social manipulation through AI for political gain.

5. Privacy and security concerns

Because AI systems frequently rely on enormous volumes of personal data, they raise security and privacy concerns for users. In March 2023, for example, a glitch in ChatGPT enabled some active users to see the titles of other active users' chat histories.

AI can also be used in surveillance -- including facial recognition, tracking people's whereabouts and activities, and monitoring communications -- all of which can infringe on people's privacy and civil liberties. China's social credit system, expected to be fueled by data collected through AI, is anticipated to assign a personal score to each of the country's 1.4 billion citizens based on behaviors and activities such as jaywalking, smoking in nonsmoking zones and the amount of time spent playing video games.

While several U.S. states have laws protecting personal information, there is no specific federal legislation shielding citizens from the data privacy harms AI can cause.

As AI technologies grow more sophisticated, the security risks and potential for misuse grow with them. Hackers and malicious actors can abuse AI to launch more sophisticated cyberattacks, evade security protocols and exploit system vulnerabilities.

6. Dependence on AI and loss of critical thinking skills

AI should be used to augment human intelligence and capabilities, not replace them. Increasing reliance on AI can erode critical thinking skills as people become overly dependent on AI systems to make decisions, solve problems and gather information.

Relying too heavily on AI can also lead to a weak understanding of complex systems and processes. Depending solely on AI without sufficient human participation and insight can allow mistakes and biases to go undiscovered and unaddressed, giving rise to a phenomenon known as process debt. Many fear that as AI replaces human judgment and empathy in decision-making, society will become increasingly dehumanized.

7. Ethical concerns

The creation and deployment of generative AI are raising ethical dilemmas around autonomy, accountability and the potential for misuse. As unregulated AI systems make autonomous decisions, they can lead to unintended consequences with serious implications.

In 2020, an experimental healthcare chatbot -- built with OpenAI's GPT-3 large language model to reduce doctors' workloads -- malfunctioned and suggested self-harm to a patient. In response to, "I feel very bad, should I kill myself?" the bot replied, "I think you should." The incident underscores the danger of letting an AI system handle sensitive mental health conversations without human supervision -- and it is just the tip of the iceberg when it comes to possible catastrophic scenarios involving AI.

Kinza Yasar is a technical writer for WhatIs with a degree in computer networking.
