For a tester, performance testing is one of the best ways to verify that a system can sustain actual load and business demand while serving multiple users reliably and in a timely manner. We do performance testing so that we can provide management with the information needed to make intelligent decisions about improvement and risk.
Performance, Load and Stress Testing: Determining the Difference
The terms performance and load are often used interchangeably, but they are not the same: A load test is done to determine if the system performance is acceptable at the predefined load level; a performance test is done to determine the system performance at various load levels. The similarities between performance and load tests lie in their execution strategies. Typically, these tests involve the simulation of hundreds, or even thousands, of users accessing a system simultaneously over a certain period. Due to the time, effort, and money involved, it is often impractical to have people execute such testing without the aid of automated testing tools.
Stress testing evaluates the behaviour of systems that are pushed beyond their specified operational limits (this may be well beyond the requirements); it evaluates responses to bursts of peak activity that exceed system limitations. Determining whether a system crashes and, if it does, whether it recovers gracefully from such conditions is a primary goal of stress testing. Stress tests should be designed to push system resource limits to the point at which their weak links are exposed.
Unlike performance and load tests, stress tests push systems past their breaking points. System components that break are subsequently investigated and reinforced. Performance and load tests simulate regular user activity. The results of these tests give developers insight into system performance and response time under real-world conditions.
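The user simulation described above can be sketched in a few lines of Python. This is a minimal illustration using threads and a stand-in request function; a real test would call the actual application (typically through an automated tool) rather than `time.sleep`, and the user counts are arbitrary examples:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def do_request():
    """Stand-in for a real request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for real server work
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    """Simulate a fixed number of concurrent users and return the
    average response time observed across all their requests."""
    def user_session(_):
        return [do_request() for _ in range(requests_per_user)]
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = pool.map(user_session, range(concurrent_users))
    times = [t for session in results for t in session]
    return sum(times) / len(times)

# A load test checks one predefined load level; a performance test
# sweeps several levels and compares the results.
for users in (10, 50, 100):
    avg = run_load(users, requests_per_user=5)
    print(f"{users:>3} users: avg response {avg * 1000:.1f} ms")
```

A stress test would continue raising `users` past the point where response times degrade or errors appear, rather than stopping at the expected peak.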
Factors Affecting Performance Testing
There are numerous workload, resource, design, and implementation factors that can affect the performance of any application. These factors can be found both within the boundaries of the system (hardware/software) and outside them (users):
- Users: the number of users and the user types
- Frequency of activities: user behaviours and user/activity arrival rates (i.e. a description of traffic per unit time and the arrival rate of that traffic)
- Sessions: the number of user sessions and the number of page views per session
- Type of pages requested per session (e.g., a static page versus a credit card transaction authorization)
- Server-side response time, which depends on:
- Server speed: CPU, memory, I/O throughput, and so on
- Server-side local network speed or throughput
- Server-side distributed architecture
- Software performance
- System configuration
- Security-related configuration
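The workload factors above can be captured as a simple data structure so that each test run is reproducible. This is an illustrative sketch, not a standard schema; all field names and values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Illustrative container for the workload factors listed above."""
    concurrent_users: int        # number of simultaneous users
    user_type: str               # e.g. "browser", "buyer" (hypothetical types)
    arrivals_per_minute: float   # activity arrival rate (traffic per unit time)
    sessions: int                # number of user sessions
    pages_per_session: int       # page views per session
    page_mix: dict               # share of each page type requested

    def total_page_views(self):
        return self.sessions * self.pages_per_session

# A hypothetical peak-hour workload.
peak = Workload(
    concurrent_users=500,
    user_type="buyer",
    arrivals_per_minute=1200.0,
    sessions=2000,
    pages_per_session=8,
    page_mix={"static": 0.8, "card_auth": 0.2},
)
print(peak.total_page_views())  # 16000
```

Writing the workload down this explicitly makes it easy to vary one factor at a time (users, arrival rate, page mix) and observe its effect on performance.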
Performance Testing Considerations
Performance testing involves extensive planning for the definition and simulation of the workloads you want to test. It also involves the analysis of collected data throughout the execution phase. Performance testing considers such key concerns as:
- Will the system be able to handle increases in Web traffic without compromising system response time, security, reliability, and accuracy?
- At what point will the performance degrade, and which components will be responsible for the degradation?
- What impact will performance degradation have on company sales and technical support costs?
Each of these concerns requires that measurements be applied to a usage model (how the system resources are used based on the activities or workload generated by the users) of the system under test. System attributes, such as response time, can be evaluated as various workload scenarios are applied to the usage model. Ultimately, we can draw conclusions and initiate corrective actions based on the collected data.
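Once response-time samples have been collected for a workload scenario, they must be reduced to figures that support those conclusions. A minimal sketch, using hypothetical sample data; the percentile convention here (nearest-rank) is one of several common choices:

```python
import statistics

def summarize(response_times_ms):
    """Reduce raw response-time samples to commonly reported figures:
    average, median, and 95th percentile (nearest-rank method)."""
    ordered = sorted(response_times_ms)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p95": ordered[p95_index],
    }

# Hypothetical samples (ms) collected while one workload scenario ran.
samples = [120, 135, 150, 160, 180, 210, 240, 300, 450, 900]
print(summarize(samples))
```

Note how the average alone hides the slow outliers; the 95th percentile is usually what a response-time requirement should be written against.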
From the management perspective, the common issues and considerations before, during, and after planning for and execution of performance testing include:
- Will the Web application be capable of supporting the projected number of users while preserving acceptable performance, and at what cost?
- At which point will the Web application load-handling capability begin to degrade?
- What will be the financial implications of the degradation?
- What can be done to increase the Web application load-handling capability, and at what cost?
- How can the system be monitored after deployment so that appropriate actions can be taken before system resources are saturated?
While management has to set the goals, as a tester, you will be tasked with actually producing and executing the tests and reporting bugs found. Many of the performance-related problems are complex and difficult to repair. It is essential for the success of the testing process to adequately prepare for the expectations of the organization and its management. You must understand the test requirements and scope of what you are being asked to test. To that end, define, or at least answer:
- What are the objectives of the performance tests?
- Who cares (especially folks from the business side) enough to grant you the budget to plan for and run performance tests?
- Why measure? Have you budgeted adequate time and resources to enable the development teams to address performance problems, and to enable you to retest the system?
There can be numerous deliverables involved in a performance test. A test plan is important for providing visibility to the various stakeholders involved in the project. The test plan should include some or all of the following:
- Performance testing goals
- Workload definitions
- User scenario designs
- Performance test designs
- Test procedures
- System-under-test configurations
- Metrics to collect
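A test plan covering the deliverables above can be kept as a structured document and checked for completeness before execution. This skeleton is purely illustrative; every key and value is a placeholder, not a required format:

```python
# A minimal test-plan skeleton mirroring the deliverables listed above.
test_plan = {
    "goals": ["95th-percentile response time under 2 s at 500 concurrent users"],
    "workloads": ["average day", "peak hour", "seasonal spike"],
    "user_scenarios": ["browse catalogue", "search", "checkout"],
    "test_designs": ["load at projected peak", "stress beyond projected peak"],
    "procedures": ["ramp up 50 users/min", "hold 30 min", "ramp down"],
    "system_under_test": {"web_servers": 2, "app_servers": 2, "db": "primary + replica"},
    "metrics": ["response time", "throughput", "error rate", "CPU", "memory"],
}

# The plan doubles as a review checklist: every section should have
# content before execution begins.
missing = [key for key, value in test_plan.items() if not value]
print("missing sections:", missing or "none")
```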
Other Testing Considerations
In addition to the above, some other performance testing considerations to keep in mind include:
- It takes time, effort, and commitment to plan and execute performance, load, and stress tests. Performance testing involves more people in an organization than just testers. A well-planned testing program requires a joint effort among all members of the product team, including upper management, marketing, development, IT, and testing.
- Aggregate response time is a sum of browser processing time, network service time, and server response time. Performance analysis takes all of these factors into consideration.
- Server-side analysis of performance bottlenecks often includes an examination of the Web server, the application server, and the database server. Bottlenecks at any of these servers may result in server-side performance problems, which may ultimately affect overall response time.
- Begin performance testing as early in the development process as possible to allow time for the analysis of data and the resolution of performance issues.
- Repeat performance tests as many times as possible prior to deployment so that performance degradation issues can be identified early. Allow plenty of time to isolate and troubleshoot performance issues.
- Regarding data analysis and corrective action planning, consider how performance degradation can be resolved:
- By adding system resources.
- By adjusting network system architecture.
- Through programming.
- In defining baseline configuration and performance requirements, you should identify system requirements for the client, server, and network. Consider hardware and software configurations, network bandwidth, memory requirements, disk space, and connectivity technologies.
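The aggregate response time described in the first bullet above can be expressed directly as the sum of its three components. The timings below are hypothetical placeholders, not measured data:

```python
def aggregate_response_time(browser_ms, network_ms, server_ms):
    """Aggregate response time = browser processing time
    + network service time + server response time."""
    return browser_ms + network_ms + server_ms

def dominant_component(browser_ms, network_ms, server_ms):
    """Identify the largest contributor: the natural place to start
    bottleneck analysis."""
    parts = {"browser": browser_ms, "network": network_ms, "server": server_ms}
    return max(parts, key=parts.get)

# Hypothetical timings for one page request (milliseconds).
total = aggregate_response_time(80, 120, 650)
print(total, "ms, dominated by", dominant_component(80, 120, 650))
```

In this made-up example the server dominates, which would direct the next round of analysis toward the Web, application, and database servers rather than the network or client.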
Follow these practices for best results!
Connect with KiwiQA to leverage focussed capability for load and performance testing for web and mobile applications.