Automation testing is a professional discipline, performed much like software development. Accordingly, industry standards and best practices for automating tests have evolved, just as they have for software development itself. In this article, we present best practices and recommendations that are crucial to the development and execution of automated tests. These practices, refined by experts over the years, are aimed at helping testers avoid the mistakes that consume test engineers' time and lead to duplicated test effort.
1. Test Tool Compatibility Checks
It is desirable to check out candidate test tools within an isolated environment before they are installed within the target test environment. If part of the target application exists in some form at the time of test tool consideration, then the test team should install the test tool or tools along with the application and determine whether the two are compatible. If the compatibility check turns up problems, the test team will need to investigate whether workaround solutions are possible.
When third-party controls (widgets, OCX, or ActiveX) are used to develop the application, the development team needs to know in advance whether the automated tool is compatible with each third-party control. It is a good idea to ask the tool vendor to provide a list of third-party controls with which the tool is compatible and to hand this list to the developers who are planning on implementing these various controls.
Also, note that test tools need to be compatible with both the current software application and future development environments. The primary test tool adopted by the test organization should be compatible with the leading development tools in the market.
2. Test Tool Upgrades
The test team may have extensively tested a test tool in an isolated environment and then applied it on one or more projects. As a result, the test team may be familiar and proficient with the particular test tool and confident that it works as specified within the product literature. Suppose that the test tool vendor then announces a bigger and better version of the product as part of an upgrade. Should the test team immediately adopt the new version and apply it to the project at hand?
The test team will want to make sure that the scripts created in the old version of the tool still work with the new version of the tool. Most vendors promise backward compatibility, meaning that scripts created in the current version of the tool can be reused in the new version of the tool, without any changes to the existing scripts.
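Before committing to an upgrade, a lightweight smoke check of the existing script library under the new tool version can confirm the vendor's backward-compatibility claim. A minimal sketch follows; the `runner` command is a placeholder for whatever command-line interface the vendor's tool uses to execute a script, and the `.py` pattern is an assumption about the script format:

```python
"""Sketch: smoke-check legacy scripts under a new tool version.

The runner command list is a stand-in for the vendor tool's actual
script-execution CLI; substitute the real command for your tool.
"""
import subprocess
from pathlib import Path


def check_scripts(runner, script_dir: Path, pattern: str = "*.py"):
    """Run every legacy script via the new tool version and collect the
    ones that no longer execute cleanly (nonzero exit status)."""
    broken = []
    for script in sorted(script_dir.glob(pattern)):
        result = subprocess.run(
            runner + [str(script)], capture_output=True, text=True
        )
        if result.returncode != 0:
            broken.append((script.name, result.returncode))
    return broken
```

Running this against the full script library before the upgrade is adopted gives the team concrete evidence, rather than relying on the vendor's compatibility promise alone.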
Any test team needs to approach the use of a new product release with caution. Some tests should be carried out to ensure that test team activities can still be performed with the new version in the same manner as with the previous test tool release.
3. Baselined System Setup and Configuration
To ensure the integrity of the current system environment, it is a good practice for the test team to back up the current system setup/configuration baseline before installing any new automated test tool or a new version of an automated test tool. Specifically, it is beneficial to back up .dll files prior to new installations to ensure that these files are not overwritten. This precautionary activity should be performed even when the test team has successfully tested the new test tool software within an isolated test environment.
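The .dll backup described above can be as simple as a timestamped copy of the shared libraries in the baseline. A minimal sketch is below; the directory paths and the focus on `.dll` files are illustrative assumptions to adapt to the actual test environment:

```python
"""Sketch: snapshot shared libraries before installing a new test tool,
so an overwritten .dll can be restored from the baseline.

Paths and the .dll extension are illustrative assumptions.
"""
import shutil
import time
from pathlib import Path


def backup_dlls(system_dir: Path, backup_root: Path) -> Path:
    """Copy every .dll under system_dir into a timestamped backup
    folder, preserving the relative directory layout."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    backup_dir = backup_root / f"baseline-{stamp}"
    backup_dir.mkdir(parents=True, exist_ok=True)
    for dll in system_dir.rglob("*.dll"):
        target = backup_dir / dll.relative_to(system_dir)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(dll, target)  # copy2 preserves timestamps
    return backup_dir
```

The timestamped folder name lets the team keep several baselines side by side, one per tool installation.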
4. Software Installations in the Test Environment Baseline
It is a good practice to avoid the installation of unnecessary new software within the target test environment once the test environment becomes operational and has been baselined.
5. Overall Test Program Objectives
The test team needs to be careful to avoid the trap of becoming so consumed with the development of test scripts that the test engineer is distracted from his or her primary mission: finding errors. Specifically, the test engineer should not over-automate tests and should resist the impulse to automate a test that is more effectively performed manually.
6. Keep Automation Simple
The most elaborate testing script is not always the most useful or cost-effective way of conducting automated testing. When considering a table-driven (data-driven) approach, the test team should weigh the size of the application, the size of the test budget, and the return on investment that the approach can reasonably be expected to deliver.
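The table-driven idea above can stay very simple: the test logic is written once, and coverage grows by adding rows to a data table rather than writing new script code. A minimal sketch follows, where `discount_price` and its discount rules are hypothetical stand-ins for the application under test:

```python
"""Sketch of a simple data-driven test: one generic driver, many rows.

discount_price and its rules are hypothetical examples of the
application logic under test.
"""


def discount_price(price: float, code: str) -> float:
    """Hypothetical application function under test."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(code, 0.0)), 2)


# The data table: each row is one test case. Extending coverage means
# adding rows here, not writing new script code.
CASES = [
    # (price, code, expected)
    (100.0, "SAVE10", 90.0),
    (100.0, "SAVE25", 75.0),
    (100.0, "BOGUS", 100.0),  # unknown code: no discount
]


def run_table(cases):
    """Generic driver: run every row and return the failing ones."""
    failures = []
    for price, code, expected in cases:
        actual = discount_price(price, code)
        if actual != expected:
            failures.append((price, code, expected, actual))
    return failures
```

An empty list from `run_table(CASES)` means every row passed; the same driver serves however many rows the budget and the application warrant.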
7. Test Procedure Design and Development Standards
Test engineers need to take a disciplined approach to test procedure design and development. Specifically, they need to adhere rigorously to design and development standards so as to promote maximum reusability and maintainability of the resulting automated testing scripts. Test engineers should also flag test procedures that are repetitive in nature and therefore lend themselves well to automation.
8. Automated vs. Manual Testing
The test team should realize that not everything should be automated immediately. Instead, it should take a step-by-step approach to automation. It is wise to base the automation effort on the test procedure execution schedule, with one goal being to not duplicate the development effort.
9. Reuse Analysis
The test engineer needs to conduct a reuse analysis of already-existing testing scripts, in an effort to avoid duplication of automation efforts. Test resources are limited, yet expectations of test team support may exceed what was budgeted for. As a result, the test team cannot afford to squander precious time and energy duplicating the test effort. Measures that help avoid duplication include disciplined test design.
10. Schedule Compatibility
The project schedule should include enough time to accommodate the introduction and use of automated test tools. An automated test tool is best introduced at the beginning of the development lifecycle. Early introduction ensures that the test team has adequate lead time to become familiar with the particular automated test tool and its advanced features. Sufficient lead time is also necessary so that system requirements can be loaded into a test management tool, test design activities can adequately incorporate test tool capabilities, and test procedures and scripts can be generated in time for scheduled test execution.