Performance testing is an information-gathering and analysis process in which measured data are collected to predict when load levels will exhaust system resources. It is during this process that you will collect your benchmark values. To evaluate multi-user support capabilities, three types of tests are commonly conducted: (1) performance, (2) load, and (3) stress tests. Although these terms are often used interchangeably, each represents a test designed to address a different objective.
One of the key objectives in performance testing is to enhance the ability to predict when future load levels will exhaust the web system's resources, so that effective enhancement strategies can be developed to maintain an acceptable user experience. Performance testing is usually done to answer the following questions:
- Can the system handle the expected load while maintaining acceptable response time?
- If not, at what point does system performance begin to deteriorate?
- Which components cause the degradation?
- Is the current system scalable enough to accommodate future growth?
- When performance fails to meet acceptable customer-experience levels, what effect will this have on business objectives such as company sales and technical support costs?
Performance testing involves the evaluation of three primary elements:
- Workload – Workload is the amount of processing and traffic management demanded of a system. To evaluate the system workload, three elements must be considered: (1) the users, (2) the application, and (3) the resources. With an understanding of the number of users (along with their common activities), the processing those activities demand of the application (such as HTTP requests), and the system’s resource requirements, you can calculate a system’s workload.
- System environment and available resources – There are three primary elements that represent the resources involved in any online transaction: (1) a browser on the client side, (2) a network, and (3) the server side. Web applications typically consist of many interacting hardware and software components, and a failure or deficiency in any of these components can affect performance drastically.
- System response time – Web applications may consist of both static and dynamic content, in pages of varying sizes. When a user submits a form or clicks a link, the resulting page might be a simple static HTML file containing a few lines of text, or it might be an order confirmation page that is displayed after a purchase transaction is processed and a credit card number is verified through a third-party service. Each of these types of content will have a different acceptable response time. For example, an acceptable response time for the static HTML page might be two seconds, whereas the acceptable response time for the purchase transaction might be eight seconds.
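As a rough illustration of the workload element described above, the request rate implied by a user population can be estimated from the number of concurrent users and their activity. All of the figures in this sketch are hypothetical:

```python
# Rough workload estimate: average requests per second implied by a
# simulated user population. All figures are hypothetical examples.

def requests_per_second(concurrent_users, actions_per_session,
                        session_minutes, requests_per_action):
    """Average HTTP request rate generated by the user population."""
    requests_per_session = actions_per_session * requests_per_action
    sessions_per_second = concurrent_users / (session_minutes * 60)
    return sessions_per_second * requests_per_session

# 500 concurrent users, 10 actions per 5-minute session,
# each action issuing about 3 HTTP requests:
rps = requests_per_second(500, 10, 5, 3)
print(f"Estimated workload: {rps:.1f} requests/second")
```

An estimate like this ties the users, the application, and the resources together, and feeds directly into sizing both the load test and the server side.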
Tool Options for Performance Testing
To decide which testing tools would best assist the testing effort, you must identify the operating environment that the testing tool must support. This includes the operating system, hardware platform, network infrastructure (WAN or LAN), network protocols, and so forth. Be aware, too, that the tool might have to work on multiple platforms.
As far as test-script generation and execution are concerned, determine whether a tool that provides script recording (as opposed to manual scripting) will be needed. Make sure that the tool can log all discrepancies, and that it can simulate multiple versions of browsers and network connections. The tool should also support user think time. Finally, look for support of HTTPS, Java, ActiveX, scripts, and cookies.
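To make the requirements above concrete, here is a minimal sketch of a software-based load generator that runs several virtual users with randomized think time. The target URL, user counts, and think-time range are placeholder assumptions; a real tool would add browser and protocol simulation, connection-speed throttling, and much richer logging:

```python
# Minimal virtual-user load generator with think time (illustrative only).
import random
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8000/"  # hypothetical system under test
VIRTUAL_USERS = 3
REQUESTS_PER_USER = 3

results = []  # response times of successful requests

def virtual_user(user_id):
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
                resp.read()
            results.append(time.perf_counter() - start)
        except OSError as exc:
            # Log discrepancies instead of silently dropping them.
            print(f"user {user_id}: request failed: {exc}")
        time.sleep(random.uniform(0.1, 0.3))  # simulated user think time

threads = [threading.Thread(target=virtual_user, args=(i,))
           for i in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if results:
    print(f"mean response time: {sum(results) / len(results):.3f}s")
```

This is the software-intensive style of simulation discussed later: many virtual users per load-generating host rather than one physical workstation per user.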
The best solution for performance testing tools may be a combination of both off-the-shelf and homegrown tools. Further, when evaluating tools used to gather and analyze data, consider whether a tool provides the result analysis and publishing features that will be needed.
Writing a Performance Test Plan
A performance test plan should document test objectives, test requirements, test designs, test procedures, and other project-management information. All development and testing deliverables should be documented in the test plan. Typically, the testing phase of your project will consist of the following activities:
- Generating test data.
- Setting up a test bed for data.
- Setting up the test suite parameters.
- Running the tests.
- Tuning the tests.
- Rerunning the tests.
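The third activity above, setting up the test-suite parameters, can be captured as a simple structured record. The field names and values here are illustrative rather than any standard schema; the 2- and 8-second pass criteria echo the response-time examples given earlier:

```python
# Illustrative parameters for one load-test run (not a standard schema).
test_suite = {
    "name": "checkout_load_test",
    "virtual_users": 200,           # concurrent simulated users
    "ramp_up_seconds": 120,         # time to reach full load
    "duration_minutes": 30,         # steady-state measurement window
    "think_time_seconds": (2, 8),   # min/max pause between user actions
    "pass_criteria": {
        "max_static_page_s": 2.0,   # e.g., a simple static HTML page
        "max_transaction_s": 8.0,   # e.g., a purchase transaction
        "max_error_rate": 0.01,     # at most 1% failed requests
    },
}
print(test_suite["name"], test_suite["virtual_users"])
```

Recording the parameters this way makes the tuning and rerunning steps reproducible: each rerun references an explicit, versioned configuration.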
Keep the following factors in mind while writing a test plan:
- Identifying Baseline Configuration and Performance Requirements – In defining baseline configuration and performance requirements for the system under test, it is important to identify system requirements for the client, server, and network. Consider hardware and software configurations, network bandwidth, memory requirements, disk space, connectivity technologies, and so on. To determine system workload, you will also have to evaluate the system’s users and their respective activities.
- Determining Whether the Testing Process Will Be Hardware-Intensive or Software-Intensive – The hardware-intensive approach uses multiple physical client workstations to simulate real-world activity. The software-intensive approach instead simulates numerous virtual workstations over multiple connection types.
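The baseline-identification step can likewise be recorded as structured data, so that every test run is tied to a known client, server, and network configuration. Every value below is a made-up example, not a recommendation:

```python
# Illustrative baseline record for the system under test; all values
# are hypothetical examples, not recommendations.
baseline_config = {
    "client": {"browsers": ["Browser A", "Browser B"], "memory_gb": 4},
    "server": {"cpus": 8, "memory_gb": 32, "disk_gb": 500},
    "network": {"type": "LAN", "bandwidth_mbps": 100},
    "workload": {"concurrent_users": 500, "peak_requests_per_s": 50},
}
print(sorted(baseline_config))
```

When results later differ between runs, a record like this makes it possible to tell whether the configuration changed or the system did.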
Deciding Which Tests to Run and Conducting Performance Tests
As with other forms of testing, performance testing should be started as early as possible and should be repeated as often as possible. If the tests are ready early in the development cycle, they can be run as part of a regression suite against each new build. The effects of changes made in each build on the overall performance can then be measured by running a load or performance test. New performance issues can be correlated to changes made in the build and addressed appropriately.
Load tests are also useful for finding rare and often nonreproducible problems.
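One way to fold performance tests into a regression suite, as described above, is to compare each new build's measured response times against the previous baseline. The 10% tolerance and the per-page figures below are assumptions for illustration:

```python
# Flag pages whose mean response time regressed versus a stored baseline.
def find_regressions(baseline, current, tolerance=0.10):
    """Return pages whose mean response time grew more than `tolerance`."""
    regressions = {}
    for page, base_time in baseline.items():
        new_time = current.get(page)
        if new_time is not None and new_time > base_time * (1 + tolerance):
            regressions[page] = (base_time, new_time)
    return regressions

# Hypothetical per-page mean response times (seconds) from two builds:
baseline_times = {"/home": 0.80, "/search": 1.50, "/checkout": 6.00}
build_42_times = {"/home": 0.82, "/search": 2.40, "/checkout": 6.20}
print(find_regressions(baseline_times, build_42_times))
# Only /search is flagged: 2.40s against a 1.50s baseline.
```

A report like this lets new performance issues be correlated with the changes made in that specific build.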
Some tips to make performance testing successful are:
- Specify which tests you will run.
- Estimate how many cycles of each test you will run.
- Schedule your tests ahead of time.
- Specify the criteria by which you will consider the system under test (SUT) ready for testing.
- Plan ahead: determine and communicate which tests are planned and how they will be scheduled.