A flaky test is a software test that fails to produce the same result each time it is run against the same code. Whenever new code is written to develop or update computer software, a web page or an app, it needs to be tested throughout the development process to make sure the application does what it's supposed to do when it's released for use. Logically, when put through the same test over and over, the code should produce the same result -- the application will either work properly every time, thus passing the test, or fail to work properly every time, thus failing the test.
However, seemingly at random, occasionally the same test of the same code will produce different results. Sometimes it will show that the code passed the test and the application worked as planned, and sometimes it will show that the code failed the test and didn't work as planned. When the test fails to produce a consistent result, the test is deemed flaky.
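A hidden dependency on something outside the test -- the current time, for instance -- is a classic way this happens. Here is a minimal, hypothetical Python sketch (the `greeting` function and its test are invented for illustration) of a test that passes or fails depending on when it runs:

```python
import datetime

def greeting():
    # Hypothetical function with a hidden time dependency:
    # its output changes with the wall clock.
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

def test_greeting():
    # Same test, same code, yet it passes before noon and fails after:
    # run repeatedly, it produces inconsistent results.
    assert greeting() == "Good morning"
```

Nothing about the code under test changes between runs; only the environment does, which is what makes the failures look random.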
Flaky tests can be caused by various factors:
- an issue with the newly-written code
- an issue with the test itself
- some external factor compromising the test results
Once a test is deemed flaky, there are different approaches to dealing with the muddled results. Some developers will ignore the flakiness entirely, assuming that the issue is with the test and not with the newly written code. Others will rerun the test multiple times and only go back to investigate further if it fails a certain number of times in a row, which to them indicates a true failure.
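The rerun-and-count approach can be sketched as a small helper. The retry budget and the flaky test below are illustrative assumptions, not any real framework's API:

```python
def run_with_retries(test_fn, max_attempts=3):
    """Run test_fn up to max_attempts times; report an overall
    failure only if every attempt fails in a row."""
    for attempt in range(max_attempts):
        try:
            test_fn()
            return True  # a single pass is treated as an overall pass
        except AssertionError:
            continue     # retry instead of investigating
    return False         # failed max_attempts times in a row

# Hypothetical flaky test: fails on its first call, passes afterwards.
calls = {"n": 0}
def sometimes_fails():
    calls["n"] += 1
    assert calls["n"] > 1

print(run_with_retries(sometimes_fails))  # prints True
```

Note the trade-off this encodes: one late pass masks the flake entirely, which is exactly why the approach can let a real bug slip through.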
However, the safest approach -- the only way to truly find out whether there is a bug in the code -- is to halt development of the application, fully investigate the cause of the flaky test and resolve it. If the flakiness is left unresolved and there truly is an issue with the code, one problem can lead to another and another as more is built onto the faulty code.
When investigating the cause of a flaky test, the developer will need to gather data on the seemingly random results in order to isolate what the failed runs have in common. The code should be re-examined, as should the test itself, and if no issues are found then external factors will need to be looked at to see if they might be at the core of the problem. For example, the developer might look at:
- whether the tests that passed were run at a certain time of day while the ones that failed were run at a different time
- whether certain programs were running on the developer's computer during failed runs that weren't running when the tests passed
- whether the failed runs all stopped at the same point in the test or at different points
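One way to gather that data is to record context alongside every run and compare the records later. The sketch below assumes a simple dictionary-based log; a real project would more likely use its test framework's reporting hooks:

```python
import datetime
import platform
import traceback

def record_run(test_fn):
    """Run a single test and capture context that might distinguish
    passing runs from failing ones: time of day, host machine, and
    the point in the test where a failure occurred."""
    record = {
        "started": datetime.datetime.now().isoformat(),
        "host": platform.node(),
    }
    try:
        test_fn()
        record["outcome"] = "pass"
    except AssertionError:
        record["outcome"] = "fail"
        # Keep the tail of the traceback so runs can be grouped
        # by where in the test they failed.
        record["failure_point"] = traceback.format_exc().splitlines()[-2].strip()
    return record
```

Comparing a handful of these records from passing and failing runs is often enough to surface a pattern, such as failures clustering at a certain time of day or on a particular machine.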
Sometimes, the cause of the flaky test is simple to diagnose and can be quickly fixed. That's the best-case scenario. Other times, there is no easy fix, and though potentially costly and time-consuming, the developer may need to delete the test and rewrite it from scratch in order to ensure the accuracy of the test results.
Unfortunately, flaky tests are not uncommon -- Google, for example, reports that 16 percent of its tests show some level of flakiness. They can bring production to a temporary standstill, but they can be dealt with, and they can be resolved.