
Regression Testing: Contextualizing regression testing in evolving software security practices


Regression Testing

The act of retesting modified software is known as regression testing. Regression testing accounts for a large share of the testing effort in commercial software development and is an essential part of any realistic software development life cycle. Larger systems and components generally require larger regression test sets. Regression testing is typically incorporated into continuous integration services.

This article explains what types of tests should be included in a typical regression test set, how to handle the failure of a regression test, and how to identify which regression tests should run.

Identifying and Running Tests in Regression Testing

Testers face the issue of identifying which tests to include in the regression test set. Blindly including every possible test produces a set too large to manage. The constraint with a large test set is that it cannot be run as frequently as modifications are made to the software. In conventional development processes, the cycle is usually a full day: the regression tests run overnight to evaluate the software modified that day, and the developers review the outcomes the next morning. If the regression tests do not finish in time, the development process is disrupted. It is well worth throwing money at this problem in the form of additional computational resources to execute the tests, but, at some point, the marginal advantage of adding a given test is not worth the marginal expenditure of the resources needed to execute it. On the other hand, a set that is too small will not cover the functionality of the software well enough, and too many faults will be passed on to the users. It is also possible to restructure tests with an eye to efficient execution.
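As a rough illustration of that cost/benefit trade-off, the sketch below greedily fills a fixed nightly time budget with the tests that have historically detected the most faults per second of runtime. The test names, runtimes, and the fault-detection metric are all hypothetical; a real pipeline would derive them from its own coverage and failure history.

```python
from dataclasses import dataclass

@dataclass
class RegressionTest:
    name: str
    runtime_s: float     # measured execution time in seconds
    faults_found: int    # hypothetical historical fault-detection count

def select_within_budget(tests: list[RegressionTest], budget_s: float) -> list[RegressionTest]:
    """Greedily pick tests with the best fault-detection value per second
    until the nightly time budget is exhausted."""
    ranked = sorted(tests, key=lambda t: t.faults_found / t.runtime_s, reverse=True)
    chosen, used = [], 0.0
    for t in ranked:
        if used + t.runtime_s <= budget_s:
            chosen.append(t)
            used += t.runtime_s
    return chosen

# Example: fill an 8-hour overnight window.
suite = [
    RegressionTest("login_flow", 120, 9),
    RegressionTest("report_export", 900, 2),
    RegressionTest("checkout", 300, 7),
]
nightly = select_within_budget(suite, budget_s=8 * 3600)
print([t.name for t in nightly])
```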

Thus, to avoid these problems, a regression test set has to be the right size. Some organizations have a policy that for each problem report that has come in from the field, a regression test must exist that, in principle, detects the problem. The rationale is that customers are more willing to put up with new problems than with the same problem over and over. This approach also supports traceability, because each test chosen in this way has a concrete rationale. For example, if node coverage in the form of method call coverage shows that some methods are never invoked, it is a good idea either to decide that the method is dead code with respect to that particular application, or to include a test that results in a call to the method.
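One lightweight way to apply that method-call-coverage check is to diff the set of methods defined in the codebase against the set of methods a coverage run actually invoked; anything left over is either dead code for this application or a gap that needs a new test. The sketch below assumes both sets have already been extracted (for instance, from a coverage.py or JaCoCo report), so the inputs shown are hypothetical.

```python
def uncovered_methods(defined: set[str], invoked: set[str]) -> set[str]:
    """Methods that exist in the codebase but were never called by any
    regression test: candidates for dead code, or for a new test."""
    return defined - invoked

# Hypothetical data, e.g. scraped from a coverage report.
defined = {"Cart.add", "Cart.remove", "Cart.apply_coupon", "Cart.total"}
invoked = {"Cart.add", "Cart.remove", "Cart.total"}

for method in sorted(uncovered_methods(defined, invoked)):
    print(f"never invoked: {method}")   # here: Cart.apply_coupon
```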

Responding to Regression Test Failures

If one or more regression tests fail, the first step is to determine whether the change to the software is faulty or the regression test set itself is broken. In either case, additional work is required. Even if no regression tests fail, there is still work to do, because a regression test set that is satisfactory for a given version of the software is not necessarily satisfactory for a subsequent version.

Changes to software are often classified as corrective (a defect is corrected), perfective (some quality aspect of the software is improved without changing its behaviour), adaptive (the software is changed to work in a different environment), and preventive (the software is changed to avoid future problems without changing its behaviour). All of these changes require regression testing. Even when the (desired) external functionality of the software does not change, the regression test set still needs to be reanalyzed to see if it is adequate. For example, preventive maintenance may result in a wholesale internal restructuring of some components. If the criteria used to select the original regression tests were derived from the structure of the implementation, then it is unlikely that the test set will adequately cover the new implementation.


Evolving a regression test set as the associated software changes is a challenge. Changes to the external interface are particularly painful since such a change can cause all tests to fail. For example, suppose that a particular input moves from one drop-down menu to another. The result is that the capture/replay aspect of executing each test case needs an update. Or suppose that the new version of the software generates an additional output. All of the expected results are now out of date and need to be augmented. Clearly, automated support for maintaining test sets is just as crucial as automated support for generating and executing the tests.
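One common defense against that kind of breakage is to keep every test's knowledge of the interface behind a single abstraction, so a relocated control means one edit rather than a failure in every test. The page-object-style sketch below illustrates the idea; the class, locators, and driver methods are invented for illustration, not drawn from any particular test framework.

```python
class CheckoutPage:
    """Single place that knows where inputs live. Tests call methods,
    never raw locators, so a moved drop-down means one edit here."""

    # Hypothetical locator; update it here when the UI changes.
    SHIPPING_MENU = "#shipping-options"   # e.g. was "#delivery-options" in v1

    def __init__(self, driver):
        self.driver = driver              # hypothetical UI-automation driver

    def choose_shipping(self, option: str) -> None:
        self.driver.select(self.SHIPPING_MENU, option)

# A test written against the page object survives the menu move unchanged:
#   page = CheckoutPage(driver)
#   page.choose_shipping("express")
```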

Selecting a Subset of Regression Tests

A different approach to limiting the time needed to execute regression tests is to select only a subset of the regression tests. For example, if the execution of a given test case does not visit anything that was modified, then the test case must behave the same before and after the modification, and hence can be safely omitted. Selection techniques include linear equations, symbolic execution, path analysis, data flow analysis, program dependence graphs, system dependence graphs, modification analysis, firewall definition, cluster identification, slicing, graph walks, and modified entity analysis.
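A minimal version of that idea, in the spirit of modified entity analysis, intersects each test's recorded coverage with the set of changed entities and reruns only the tests whose intersection is non-empty. The per-test coverage map and changed-file set below are hypothetical stand-ins for what a coverage tool and a diff would produce.

```python
def select_tests(coverage: dict[str, set[str]], modified: set[str]) -> set[str]:
    """Keep a test only if it executes at least one modified entity;
    a test that visits nothing that changed must behave identically."""
    return {test for test, entities in coverage.items() if entities & modified}

# Hypothetical per-test coverage (test -> source files it executes).
coverage = {
    "test_login":    {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_reports":  {"reports.py"},
}
modified = {"payment.py"}          # e.g. taken from `git diff --name-only`

print(select_tests(coverage, modified))   # {'test_checkout'}
```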

Want to accelerate your product's time to market by outsourcing software testing? For all your software testing needs, contact us.
