
Reduce these forms of AI bias from devs and testers

Watch out for cognitive biases in every development and testing decision -- especially AI biases that affect software users. Here are the ways to address common issues.

Cognitive bias in software development negatively affects quality. Developers, testers, product owners -- virtually all team members -- have biases that influence how they develop and test software. Software developers even transfer their biases to AI systems as they build, train and test products. The result of cognitive bias in software development is an application that gives users poor information or poor performance.

Cognitive bias means that individuals think subjectively, rather than objectively, and therefore influence the design of the product they're creating. Humans filter information through their unique experience, knowledge and opinions.

Development teams cannot eliminate cognitive bias in software, but they can manage it. Let's look at the biases that most frequently affect quality, and where they appear in the software development lifecycle. Use the suggested approaches to overcome cognitive biases, including AI bias, and limit their effect on software users.

How to manage cognitive bias

A person who is knowledgeable about a topic finds it difficult to discuss that topic from a neutral perspective. The more the person knows, the harder neutrality becomes. That bias manifests within software development teams when experienced or exceptional team members believe they have the best solution. Infuse the team with new members to offset some of the bias that comes with subject matter experts.

Cognitive bias often begins in backlog refinement. Preconceived notions about application design can affect team members' critical thinking. During sprint planning, teams can fall into the planning fallacy: underestimating the actual time necessary to complete a user story.

Planning poker: A way to manage the planning fallacy

Also called Scrum poker, planning poker is a gamified Agile technique in which members of the development team form groups, then speak with the product owner about an item in the backlog. Here are the general rules:

  • Each group selects a card with a number on it that represents the expected number of days the work requires, story points or another agreed criterion. Each group's representative then flips the card. If all groups' cards match, that number becomes the estimate for the backlog item.

  • If the groups' cards don't match, the groups discuss and hold subsequent rounds until everyone reaches a consensus or near consensus -- a simple popular vote is not enough to determine an estimate.
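
The consensus rule itself is mechanical enough to sketch in code. The Python snippet below is a minimal illustration, not part of any Scrum tooling; the group names, card values and the all-cards-match rule are hypothetical stand-ins for whatever a real team agrees on.

# A minimal sketch of the consensus rule, assuming hypothetical group names and cards.
def estimate_backlog_item(rounds_of_votes):
    """Return (estimate, round_number) once a round of votes converges.

    rounds_of_votes is a list of rounds; each round maps a group name to the
    card that group flipped. Discussion between rounds happens outside the code.
    """
    for round_number, votes in enumerate(rounds_of_votes, start=1):
        distinct_cards = set(votes.values())
        if len(distinct_cards) == 1:  # every group flipped the same card
            return distinct_cards.pop(), round_number
    return None, len(rounds_of_votes)  # no consensus yet; keep discussing

# Example: three groups converge on 8 story points in the second round.
rounds = [
    {"group_a": 5, "group_b": 8, "group_c": 13},  # no match; discuss and revote
    {"group_a": 8, "group_b": 8, "group_c": 8},
]
print(estimate_backlog_item(rounds))  # -> (8, 2)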

Confirmation bias and representative bias. Confirmation bias means people interpret new information in a way that supports their initial perception. Developers and architects might seek data to show that their preferred approach is better than new technologies and architectures. Representative bias is the tendency to analyze new situations based on similar experience. Prioritize skill development to reduce biases, and share ideas via online training, collaboration tools, meetups and conferences.

Congruence bias. Beware this type of cognitive bias in software development. Similar to confirmation bias, congruence bias leads a person to plan and execute tests based on personal hypotheses without considering alternatives. Congruence bias is often the root cause of missed negative test cases. Testers write test cases to validate that app functionality works according to specifications, but neglect to verify that the functionality doesn't behave in ways it should not. For example, you might test that the name field allows alpha characters, but did you also test that it rejects numerals and symbols? Manage congruence bias with peer reviews of the test cases.
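
To make the gap concrete, here is a sketch in Python with pytest, built around a hypothetical is_valid_name() validator that is not from any real codebase. The first test is the positive case that testing to the specification produces; the parametrized test adds the negative cases that congruence bias tends to leave out.

import re

import pytest

def is_valid_name(value: str) -> bool:
    """Hypothetical name-field validator: letters, spaces, apostrophes and hyphens only."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z' -]*", value))

def test_name_accepts_alpha_characters():
    # The positive case that testing to the specification tends to produce.
    assert is_valid_name("Ada Lovelace")

@pytest.mark.parametrize("bad_input", ["1234", "Ada99", "@lice", ""])
def test_name_rejects_numerals_and_symbols(bad_input):
    # The negative cases that congruence bias often leaves out: the field
    # must also refuse values it should not accept.
    assert not is_valid_name(bad_input)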

Reduce cognitive bias in AI

Cognitive bias in AI can wreak havoc on application quality. The three main sources of AI bias are:

  • data selection
  • latent bias
  • human interactions

Data selection bias. With this kind of systematic error, the data scientist, developer or tester selects training data that represents only part of the problem domain, so the data misrepresents the domain as a whole. Data selection is a common source of bias in AI systems, but this systematic error is difficult to identify. The AI's output seems to reflect real human decisions rather than an accurate view of the problem space. To mitigate data selection bias in AI, testers must quantify the problem domain, then map out how the planned training data covers that entire area.
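
One lightweight way to start that mapping is to compare the training sample's makeup against the expected makeup of the problem domain. The Python sketch below assumes a single categorical feature and invented domain shares; a real coverage analysis spans many features, but the idea is the same.

from collections import Counter

# Assumed domain makeup for a single categorical feature -- invented numbers.
EXPECTED_DOMAIN_SHARE = {"mobile": 0.45, "desktop": 0.40, "kiosk": 0.15}

def coverage_report(training_records, feature, tolerance=0.10):
    """Flag categories whose share in the training sample strays from the domain."""
    counts = Counter(record[feature] for record in training_records)
    total = sum(counts.values())
    report = {}
    for category, expected in EXPECTED_DOMAIN_SHARE.items():
        actual = counts.get(category, 0) / total if total else 0.0
        report[category] = {
            "expected": expected,
            "actual": round(actual, 2),
            "flagged": abs(actual - expected) > tolerance,  # possible selection bias
        }
    return report

# 80% mobile, 20% desktop and no kiosk records at all -- every category gets flagged.
sample = [{"channel": "mobile"}] * 80 + [{"channel": "desktop"}] * 20
print(coverage_report(sample, "channel"))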

Latent bias. This phenomenon occurs when people incorrectly correlate concepts. Correlations are relationships between variables that AI systems use in predictions, and those statistical associations are at the heart of machine learning technology.

However, correlation doesn't mean causation. For example, a correlation between human intelligence and income level doesn't mean that higher intelligence causes higher income. Other factors can contribute to higher income, which might or might not relate to higher intelligence. Without intense analysis, causation is difficult or even impossible to prove. To mitigate latent AI bias, testers must ask whether a particular data collection accurately reflects the causation behind a particular outcome.
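
A small synthetic example shows how easily a misleading correlation appears. In the Python sketch below, x and y never influence each other; both are driven by a hidden confounder, yet they correlate strongly. The variables and numbers are invented purely for illustration.

import random
import statistics  # statistics.correlation requires Python 3.10 or later

random.seed(7)

# A hidden confounder drives both variables; neither causes the other.
confounder = [random.gauss(0, 1) for _ in range(1000)]
x = [c + random.gauss(0, 0.3) for c in confounder]
y = [c + random.gauss(0, 0.3) for c in confounder]

print(round(statistics.correlation(x, y), 2))  # strong correlation, no causation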

Human interactions. AI can misinterpret human interactions when someone or some system uses it in an unintended or malicious way.

Testers must specifically assess data selection, look for unintended correlations and evaluate ways that human interaction can incorrectly train the AI. Data that is seemingly innocuous might influence the results significantly. For example, depending on the choice of data, the AI might learn that the salutation "Dr." is associated only with male names. 
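
A quick tally of the training data can surface that kind of skew before the model learns it. The Python sketch below uses invented records with an assumed gender label; the point is only that every "Dr." record happens to be male, which is exactly the association described above.

from collections import defaultdict

# Invented training records with an assumed gender label.
training_records = [
    {"salutation": "Dr.", "gender": "male"},
    {"salutation": "Dr.", "gender": "male"},
    {"salutation": "Dr.", "gender": "male"},
    {"salutation": "Ms.", "gender": "female"},
    {"salutation": "Mr.", "gender": "male"},
]

def salutation_gender_breakdown(records):
    """Count how each salutation co-occurs with each gender label."""
    tallies = defaultdict(lambda: defaultdict(int))
    for record in records:
        tallies[record["salutation"]][record["gender"]] += 1
    return {salutation: dict(genders) for salutation, genders in tallies.items()}

print(salutation_gender_breakdown(training_records))
# {'Dr.': {'male': 3}, 'Ms.': {'female': 1}, 'Mr.': {'male': 1}}
# Every "Dr." record is male, so a model trained on this data would learn
# exactly the skewed association described above.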

All software is biased in some way. Humans design and write software, and humans have biases. Some biases are trivial and unimportant; others have serious consequences. Define which cognitive biases are unacceptable for your product, and mitigate the types described here where possible.

