https://www.techtarget.com/searchsoftwarequality/tip/5-examples-of-ethical-issues-in-software-development
Ethical practices have not traditionally been a part of software development. Software didn't always have a direct impact on daily life, and the pace of development was slow.
In modern society, people encounter software in all aspects of life. AI, big data and data analytics all have real ramifications for individuals.
Although software developers work primarily behind the scenes in businesses, their decisions in the course of a project can have an outsized impact on the world -- for better or worse -- in terms of compliance, fairness, integrity and trust. Everyone in the industry should be aware of social and ethical issues in software development.
Below are some examples of ethical issues and how developers can address them:
Every developer yearns to create programs that people love to use -- that's just good UX design. The problem is that some teams craft apps that people love too much. There is an ethical concern about the role of digital platforms, such as social media.
Critics such as Tristan Harris of the Center for Humane Technology argue that social media companies profit from outrage, confusion, addiction and depression -- and consequently put our well-being and democracy at risk. Harris notably went viral while working at Google with a presentation about the push for addictive technology design and companies' moral responsibility in society.
Striking an ethical balance between products that consumers love and products that hijack their attention is more an art than a science. In product creation and updates, teams should ask whether a feature genuinely benefits users or merely captures more of their time and attention.
David K. Bain, founding executive director of the Technology Integrity Council, offers Duolingo and TikTok as two contrasting examples of app design. Both apps generate growth and revenue for their creators, but the nature of their benefit to users is different.
Duolingo's clients gain language skills and are challenged with activities that enhance neuronal growth and brain plasticity. TikTok users receive cultural knowledge as well as immediate gratification with video content that bathes the brain with intoxicating neurotransmitters. "Based on this, many adults would say that the true user benefit of Duolingo is greater than [that of] TikTok," Bain said, but added that his teenage daughter would disagree.
The two apps have different attitudes toward usage limits meant to safeguard against addictive attachment. Duolingo encourages consistency and makes the strong case that its use is linked to optimized learning curves. Duolingo definitely grabs users by the lapels to meet their daily quota and maintain performance streaks. But once the daily activities are done, Duolingo releases the user. By contrast, TikTok entices users to stay with an essentially limitless buffet of consumable media.
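To make the design contrast concrete, below is a minimal sketch, in Python with hypothetical names, of how an app could enforce a daily activity cap and then release the user instead of serving an unbounded feed. It illustrates the general pattern only, not either company's actual implementation.

```python
from datetime import date

DAILY_ACTIVITY_CAP = 3  # hypothetical limit: release the user after this many activities

class DailyUsageLimiter:
    """Tracks per-user activity counts and stops serving content once the daily cap is met."""

    def __init__(self, cap: int = DAILY_ACTIVITY_CAP):
        self.cap = cap
        self._counts: dict[tuple[str, date], int] = {}

    def record_activity(self, user_id: str) -> None:
        key = (user_id, date.today())
        self._counts[key] = self._counts.get(key, 0) + 1

    def should_serve_more(self, user_id: str) -> bool:
        # An engagement-maximizing feed would simply always return True here.
        return self._counts.get((user_id, date.today()), 0) < self.cap

limiter = DailyUsageLimiter()
limiter.record_activity("user-123")
print(limiter.should_serve_more("user-123"))  # True until the daily cap is reached
```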
Apps often include user manipulation, monetization methods, user data collection for corporate use and machine learning algorithms to enhance the app. A transparent app provider would give users some level of knowledge and understanding about these practices.
Here's how this ethical aspect plays out in the two example apps: "Duolingo's users are clearly willing victims of an enforced daily regimen, but are most certainly not aware that ads and usage data connect to a much larger advertising ecosystem," Bain said. "TikTok's users, especially the younger ones, I am quite sure are largely and happily oblivious to the methods and outcomes of their addictions."
AI-based processing of biometric and other contextual data about customers has increased with device and software evolution. Software can profile users and predict behaviors at a scary level of detail.
"Usually, the ethical question is [one of] what to do with that data," said Miguel Lopes, chief product officer at TrafficGuard, an ad verification and fraud prevention platform. This ethical issue is a dilemma for developers in every kind of business -- not just the social media giants making the news.
An algorithm directs information collection and profile building, but the subsequent actions are intentional. The developer is ordinarily aware of the power of this data in context.
One of the root causes of ethical concerns relates to how the business generates revenue and incentivizes developers and business managers, Lopes said. In many cases, companies look at user data as a valuable currency and want to monetize the data they store. "These factors might cause these organizations to share their user data unethically," he said.
Developers face a hard decision regarding personal data and software design. They can create systems to exploit user data with the understanding that the liability lies with the organization, or they can raise concerns but face potential penalization for going against the project's aims. Modern technology companies' working culture should let developers come forward with personal data ownership concerns without fear of retaliation.
These kinds of concerns galvanized rich discussion at organizations where Lopes has worked, some of which decided not to offer a free service tier. "We have analyzed the implications and prefer to sustain our operations by selling our service instead of our user data, and not subjecting our developer team [to] these difficult choices," Lopes said. Internal transparency is also crucial: developers should be aware of the entire context of the project they are working on, not just the module they need to complete.
Companies should make it easy for developers to step forward with concerns. The HR department could create mechanisms where developers can express their concerns without the fear of retaliation, such as an anonymous hotline for ethical concerns. The organization should then follow up and independently identify whether the use case is in breach of privacy, legal or ethical policies.
Technology can amplify existing biases. "One of the more pressing ethical issues facing today's developers is bias," said Spencer Lentz, principal account executive at Pegasystems, a business automation platform.
Bias often enters the system undetected -- Lentz compares bias to a virus. Computers themselves have no inherent moral framework. Software can only reflect the biases of its creators. Therefore, developers and data scientists must scrub bias from the training data and the algorithms they build. From a developer's perspective, bias often centers on eliminating options for the wrong reasons, Lentz said.
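As one illustration of what scrubbing for bias can mean in practice, the sketch below (plain Python, hypothetical field names) compares a model's approval rates across demographic groups -- a simple demographic-parity check. Real bias audits use richer metrics and datasets, but the underlying idea is to measure before shipping.

```python
from collections import defaultdict

def approval_rate_by_group(records):
    """records: iterable of dicts with a 'group' label and a model decision 'approved' (bool)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += int(r["approved"])
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical model outputs on a held-out evaluation set
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

rates = approval_rate_by_group(decisions)
# Flag a large gap between groups for human review before release
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Demographic parity gap exceeds threshold:", rates)
```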
Reporting and research in recent years illustrate how bias within software systems can perpetuate systemic racism against specific populations, creating lost opportunities, worsening medical care and increasing rates of incarceration. For example, in the book Race After Technology, Ruha Benjamin raised concerns about a case where developers failed to include Black people's voices when training AI speech recognition algorithms, under the belief that fewer Black people would use the app.
Executives, data scientists and developers must create an organizational culture that establishes ethical guidelines and empowers individuals at any level of the business to speak up if they see something problematic.
"By now, bias in models is so well known that LLM hallucination is a mainstream concept," said Peter Wang, chief AI and innovation officer and co-founder of Anaconda, a data science platform. "The greatest risk nowadays is that people are so swept up in the hype and a fear of falling behind that they don't take the time to diligently build evaluation mechanisms and implement governance. As an industry, we need to be more transparent about how high the failure rates are for enterprise AI projects so that managers and executives don't feel compelled to rush through extremely important topics like alignment, accuracy and safety."
It's time to create a governing body for AI providers, similar to the American Medical Association for doctors, Wang argued. This body could establish industry-wide ethical guidelines and best practices. "These technologies are still relatively new in the business context, and we would all benefit from ethical standards derived from our collective intelligence and input, rather than leaving it up to each individual or organization to decide for themselves," he said.
Application security is growing in importance as software plays a larger role in our online and offline environments.
Developers often address security only after code release rather than during development, and the software community as a whole lacks secure development standards.
"The emphasis is almost entirely on getting a product out to market," said Randolph Morris, founder and principal software architect at Bit Developers, a software development consultancy. Once a software product is publicly available, the focus shifts to new features and performance optimization, so security continues to have minimal prominence.
Hackers and other malicious actors cause real damage to real people. A reactionary approach to application security that plugs vulnerabilities only as they are found is neither practical nor sufficient.
To address this ethical responsibility for customer safety, developers need education, but typically only cybersecurity-specific classes address these topics. To start, educate your team about cybersecurity failures such as the landmark Anthem medical data breach of 2015, where PII was stored as plain text in a database. "If this information was encrypted, it would not have been so easy to use and valuable to distribute," Morris said.
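For illustration, here is a minimal sketch of one way to encrypt a PII field before it is persisted, using the cryptography package's Fernet API (symmetric, authenticated encryption). Key management -- storing the key in a secrets manager, rotating it, controlling who can use it -- is deliberately out of scope here.

```python
from cryptography.fernet import Fernet

# In production the key would come from a secrets manager, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_pii(value: str) -> bytes:
    """Encrypt a sensitive field (e.g., a national ID number) before writing it to the database."""
    return fernet.encrypt(value.encode("utf-8"))

def decrypt_pii(token: bytes) -> str:
    return fernet.decrypt(token).decode("utf-8")

stored = encrypt_pii("123-45-6789")   # what lands in the database column
print(decrypt_pii(stored))            # only code holding the key can read it back
```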
Also, the industry needs revised security standards. Organizations can do more to embrace standards meant to protect PII. The Payment Card Industry Data Security Standard and HIPAA for healthcare apps are a good start, but developers should consider other forms of PII as well -- and software designs that protect it.
At the center of many ethical issues is the decision to treat the capabilities in a software release as more important than the effects those capabilities could have. But just because you can doesn't mean you should.
"If the development team is measured on their rate of feature development, there's a high probability that the ethics of a given implementation might not be front of mind, either at the design or at the implementation phase," said Tim Mackey, head of software supply chain risk strategy at Black Duck, an application security platform.
The business itself must set the tone for ethical standards in its software. Below are some ways businesses can achieve that:
Developers don't always follow news on the latest legislative actions in the jurisdictions where customers use their software, Mackey pointed out, but the business must ensure that they're informed.
Collaboration between engineering leadership and legal teams can help avoid ethical shortcomings. For example, the business should focus on customers' personal data access and retention. Data access controls and logging mechanisms are enabled at software implementation time. Developers -- tasked with creating a functional, user-friendly product -- might view data access restrictions as the responsibility of another team. Instead, make sure that data protection is a feature included in the software design, inherently protecting against unauthorized access.
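A minimal sketch of treating data protection as a designed-in feature rather than another team's problem: every read of personal data goes through a single function that checks authorization and writes an audit log entry. The role model and field names below are hypothetical.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical role-based policy: which roles may read which personal-data fields
ALLOWED_FIELDS = {
    "support_agent": {"email"},
    "billing": {"email", "payment_method"},
}

def read_personal_data(actor: str, role: str, user_record: dict, field: str):
    """Single chokepoint for personal-data access: authorize, log, then return."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if field not in ALLOWED_FIELDS.get(role, set()):
        audit_log.warning("DENIED %s (%s) -> %s at %s", actor, role, field, timestamp)
        raise PermissionError(f"{role} may not read {field}")
    audit_log.info("READ %s (%s) -> %s at %s", actor, role, field, timestamp)
    return user_record[field]

record = {"email": "user@example.com", "payment_method": "visa-****1111"}
print(read_personal_data("alice", "support_agent", record, "email"))  # allowed and logged
```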
Large language models are playing a growing role in software development across tasks such as generating code and supporting unstructured data processing. Owing to the complexity of LLMs, it's easy to overlook how these systems are trained, configured and deployed -- and what this means for users.
"Software companies should always disclose how they are training their AI engines," Lopes said. "The way user data is collected, often silently and fed into LLMs, raises serious questions about consent, security and the ethical boundaries of automation."
Several high-profile cases have emerged where user interactions on platforms have been used to quietly train AI without any notification. "We've seen companies harvest behavioral data without consent, essentially turning users into unpaid contributors to the very models that may one day replace their jobs," he continued.
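One concrete way to honor the consent question Lopes raises is to make training-data eligibility an explicit, recorded choice rather than a silent default. A minimal sketch with hypothetical record fields:

```python
from dataclasses import dataclass

@dataclass
class UserInteraction:
    user_id: str
    text: str
    training_opt_in: bool  # explicit, user-controlled consent -- the default is no

def select_training_data(interactions):
    """Only interactions with explicit opt-in ever reach the training pipeline."""
    return [i.text for i in interactions if i.training_opt_in]

interactions = [
    UserInteraction("u1", "How do I reset my password?", training_opt_in=True),
    UserInteraction("u2", "Here is my medical history ...", training_opt_in=False),
]

print(select_training_data(interactions))  # only the consented record is returned
```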
A properly trained AI agent requires deep configuration, supervision and expensive human talent. "The costs you think you're saving by skipping proper development are almost always eclipsed by the damage caused by a poorly specialized agent -- whether it's security risks, misinformation or loss of customer trust," Lopes said.
Concerns about the environmental impact of technology are growing, fueled by increasing awareness of climate change's effects, including rising temperatures, floods, fires and other extreme weather. Technology companies' activities can also decrease access to clean water, pollute the air and diminish biodiversity.
The growing use of AI poses a risk of significantly increasing energy consumption and, consequently, carbon emissions. It can also increase pressure on water systems used to cool data centers, thereby compromising local communities. Cloud providers are also starting to explore carbon-neutral energy sources, such as nuclear fission plants, while glossing over the still unresolved environmental costs associated with disposing of spent radioactive fuel.
These are all big-picture concerns that typically fall outside the software development cycle, but they are worth considering when deciding on the potential impact of scaling new LLM-powered apps. Other aspects include the potential for new software apps to encourage poor environmental choices. A fast-fashion app might drive revenues at the expense of more waste.
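To put the energy concern in rough numbers, here is a back-of-the-envelope sketch: energy is approximately GPU count times power draw times hours times data center overhead (PUE), and emissions are energy times the local grid's carbon intensity. All figures below are illustrative assumptions, not measurements of any particular system.

```python
# Illustrative assumptions -- substitute real numbers for your own deployment
GPU_COUNT = 500
GPU_POWER_KW = 0.7          # average draw per GPU in kilowatts
TRAINING_HOURS = 24 * 30    # a month-long training run
PUE = 1.3                   # data center overhead (cooling, networking)
GRID_KG_CO2_PER_KWH = 0.4   # carbon intensity of the local grid

energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_HOURS * PUE
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1000

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_tonnes:,.0f} tonnes CO2")
```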
The human rights impact of software development can be considered along multiple dimensions, including its potential effects on labor and on communities.
On the labor front, one concern has been the growth of so-called data labeling sweatshops that involve exposing workers to toxic content to improve content moderation in AI systems. Although most enterprises are not directly involved in this process, they might overlook the practices used by their AI and data system vendors and contractors.
Additionally, it's essential to consider the potential impacts of optimizing apps for aspects that are relatively easy to quantify, such as warehouse throughput, compared with those that are more challenging to quantify, like worker health or mental well-being. The risk is that certain kinds of productivity optimizations might have adverse effects on the lives of workers and their contributions to their families and communities.
The rise of AI systems in software development has been driving the growth of the data labeling industry, often with limited oversight. New apps also have the potential to disrupt the social fabric of communities.
Fostering practices with a positive societal impact means weighing these dimensions, from vendors' labor practices to workers' well-being and community effects, throughout the development cycle rather than after release.
George Lawton is a journalist based in London. Over the last 30 years, he has written more than 3,000 stories about computers, communications, knowledge management, business, health and other areas that interest him.
01 Aug 2025