
7 questions to ask before you select software testing tools

Before you select software testing tools, you need to know how to evaluate them. Explore trial versions, research the vendors and assess your organization's needs and capabilities.

Software testing tools speed up the test process in major ways. Automated software testing performs time-intensive tasks, compares the actual results with the expected results and reports the outcome. Bug tracking software enables teams to log, search and check the status of issues, as well as track updates and merges. Coverage tools give organizations insight into which code or requirements are exercised by which tests, and what needs more work.
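
As a minimal sketch of that pattern -- with a stand-in function and made-up values, not any particular tool -- an automated check drives the code, compares the actual result with the expected result and lets the test runner report pass or fail:

import unittest


def calculate_total(price, tax_rate):
    """Toy stand-in for the application code under test (hypothetical)."""
    return round(price * (1 + tax_rate), 2)


class TestPriceCalculator(unittest.TestCase):
    def test_total_includes_tax(self):
        # Exercise the time-intensive task automatically...
        actual = calculate_total(price=100.00, tax_rate=0.08)
        # ...compare the actual result with the expected result...
        self.assertAlmostEqual(actual, 108.00, places=2)
        # ...and the runner reports pass or fail for the team.


if __name__ == "__main__":
    unittest.main()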

It might be overwhelming to evaluate software testing tools against your organization's needs. With limited budgets, teams might pursue open source options, which are likely to be web-based and require installation, support and periodic upgrades. Some open source software testing tools offer commercial support and might involve less training time, but that support typically comes with a high price tag. Commercial SaaS is a third option with a low monthly cost, often in a freemium model.

When you evaluate software testing tools, consider the following common problems and solutions -- and ask a few key questions -- before you write the purchase order.

How will a testing tool help solve your problems?

When people say "We are going to start using Bugzilla," what they really mean is "We have a bug tracking problem, and we've picked a tool to solve it." Often, the problem itself is fuzzy. It's impossible to evaluate software testing tools to see if they will specifically solve vague or insufficiently understood problems.

Most software testing tools don't help you eliminate problems; in reality, tools create different problems. Driving a car from point A to point B is faster than walking, for example, but it also contributes to pollution, which is a different problem. The trick, to paraphrase renowned software developer and writer Eric Sink, is to trade the problems you have for the problems you want.

Select your testing tool to solve a specific problem. Don't select a tool and then figure out how it might fit your team.

Where will the software be deployed, and who are the testers?

Some software testing tools reside on devices that employees will use to test an application. Some, like crash reporters, might even run on customers' devices. For desktop apps, this means IT must install client software and grant the user permission to perform the install. In most cases, users download an install file they can execute themselves; commercial software typically requires a license key and a credit card.

Consider who will use the testing software, where and how it will be installed, and ensure the tool selection fits that user base. Some automation tools are designed for programmers and are embedded in the IDE; testers-turned-automators, on the other hand, may find the IDE intimidating. If testers lack the required administrative privileges, that will be a problem. If the software under test is web-based, you might want a test tool that is itself web-based.
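
For a web application, for example, a browser-driving check might look like the sketch below. It assumes Selenium WebDriver (version 4 or later) with a local Chrome install; the URL, element names and credentials are placeholders, not a real system:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumes Selenium 4+ and a local Chrome install; URL and field names are placeholders.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("test_user")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.NAME, "submit").click()
    # The assertion encodes the expected result the tool checks for.
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
finally:
    driver.quit()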

Is the testing tool compatible with existing OSes and apps?

Testing software vendors typically claim their tools support functional testing, GUI testing, test management and test automation on all platforms. Buyer beware. The claim of "all platforms" suggests a tool that tries to be everything for everyone.

In practice, we've found one such tool didn't support Java running in the browser -- it could not recognize the objects -- while another ran only on Windows. A third tool required the team to set up a web server. Yet another tool had incredibly good support for the current browser … approximately four months after each major version's release.

Do some digging to find out what the tools are designed to support. For example, many test automation tools must run as part of a build or continuous integration server. Getting a Windows tool to work with a Linux-based build server or a Unix tool to function when the build server is Azure DevOps Server can be problematic.
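
As a rough illustration of that integration point, a build or CI server usually just runs a command and reads its exit code. The wrapper below is a generic, cross-platform sketch that assumes pytest and a tests/ directory -- substitute whatever command line your tool provides:

import subprocess
import sys


def run_tests() -> int:
    """Run the test suite the way a CI server would: from the command line."""
    # pytest and the tests/ directory are assumptions -- swap in your tool's command.
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode  # nonzero tells the CI server to fail the build


if __name__ == "__main__":
    sys.exit(run_tests())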

When evaluating software testing tools, see if they line up with your current app and test how the new process would work. At the same time, design tests not only for the platforms supported today, but for those the team and product development group plan to support over the next year or two.

How well does the tool integrate with your platform and workflow?

As you evaluate a testing tool, consider the platform and tools already in use to increase the likelihood of smoother integration, easier adoption and better test results. A QA team that uses Jira will likely succeed with tools built to support Jira. The same is true for Visual Studio users who adopt tools designed to fit into the Microsoft platform. This is also the case with Java, Oracle and most ERP systems.

When you select tools that align with your overall application platforms, you'll enjoy a more seamless workflow. If you don't, you might end up with yet another silo of information that is essentially duplicated on the way in and forgotten.

Additionally, different teams often rely on separate version control systems; keeping code and test automation in sync across them is likely to cause problems. Some test tools embed directly into the IDE and offer plugins to extend their capabilities. Test management applications can integrate with bug tracking and code coverage tools. Research and understand the capabilities of all tools before integrating them with your platforms.
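
To make that concrete, here is a rough sketch of a test harness filing a defect through a bug tracker's REST API when a check fails. The endpoint and field names follow Jira's issue-creation API, but the instance URL, project key and credentials are placeholders you would replace with your own:

import requests  # third-party HTTP library

# Placeholders: substitute your Jira instance URL, project key and an API token.
JIRA_URL = "https://your-company.atlassian.net"
AUTH = ("automation@example.com", "api-token-placeholder")


def file_bug(summary: str, description: str) -> str:
    """Create a Jira issue for a failed test and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": "QA"},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    response = requests.post(
        f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30
    )
    response.raise_for_status()
    return response.json()["key"]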

This advice also goes back to knowing who the user is. Programmers live in the IDE; the closer you can get to their native environment, the more likely it is they will use the tool. That is a critical step. When failures happen because of changes programmers have made, you want those programmers to be the ones running the tests and responding to the failures.

Does the tool require training or knowledge of special languages?

Some software testing tools have proprietary languages and syntax. Most record and playback tools require some amount of training. If programmers can use the tool in their native, or preferred, language -- Ruby, Python, .NET and Java are popular for tools -- training might not be required.

Then again, testers or analysts who don't know the language might use the tools. A proprietary language can also limit a tool's ability to integrate with other tools, whereas a readily available, well-known language makes it possible to export scripts to other tools.

How do open source versus commercial tools compare?

By the time you determine the problem to be solved, you should have identified a few tools to help achieve that goal. Open source software testing tools often require no upfront costs, unlike proprietary tools, but they might not meet the needs of the organization.

Be sure to answer three main questions.

  • How effective can the team be with open source software testing tools?
  • Are open source tools a good fit?
  • How will the team get support and training?

Open source application testing tools can work exceptionally well on the platforms and technologies for which they were designed. Change the underlying database, OS or programming language, though, and the time invested in the tools might eat up the budget you saved by going open source in the first place. Some organizations require customizations that go beyond what the open source project can provide. In many cases, you will have to support yourself when it comes to installation, patches and maintenance for these open source tools.

Commercial tools have similar problems, but vendors have a strong motivation to make things work, as well as to provide support and training. If the product has a list of supported platforms, ask the vendor how it supports new browsers and OSes as they are released.

Should you try before you buy?

Many proprietary commercial tools offer trial or community versions to download and run for free. Other web-based tools enable you to create and use an account for free for 30, 60 or 90 days. Often, these trial versions have limited features or don't let you save work for repeat use, but a programmer and tester can experiment to see if the tool's features and flexibility meet the organization's needs.

Give an employee permission to try the application for two weeks to reveal any problems with the install. If the software must be bundled with end-user software, such as a crash reporter, get a trial and walk through the entire process with a beta user to see if any problems pop up. For example, the tool might be falsely flagged as spyware when it "phones home" with usage statistics.

Most open source software testing tools are available for free if you download them directly and install them yourself. When in doubt, demo the software and run it through its paces. It is too easy to push through a proof of concept, spend a large sum of money and then find out the "solvable problems" with the proof of concept are either difficult or impossible to fix.

Don't invest a large amount of time until you are sure the product will be fit for use. Will a trial version suffice, at least temporarily? If so, determine how long it will take to get purchase approval to prevent lost productivity when the trial expires.

Next Steps

  • Compare the top automated functional testing tools
  • Find the right automation test cases
  • How to evaluate test automation languages

 
