
An Eye On Quality: Testing In the Absence Of Performance Benchmarks


Testing Without Requirements: Is It Possible?

Is testing without requirements really feasible? Aren't documented requirements a cornerstone of quality assurance? Functional validation typically relies heavily on documented specifications: testing teams need clear pass/fail criteria when evaluating features and functionality. When it comes to system and application behavior under load or stress, however, the criteria are often more subjective, which makes it harder to judge whether an application is performing well or poorly.

Usually, software is tested against specified goals. But performance benchmarks often get overlooked: project deadlines, feature additions, and bug fixes take precedence, leaving little time to focus on system performance. This gap is often what prompts teams to bring in external performance specialists.

The Need for Clear Benchmarks

Test plans should always be tied to clear objectives, and benchmarks should be part of agile sprints whenever system performance is evaluated. These plans should define measurable pass/fail criteria (a minimal sketch of such criteria follows the list below). Before starting performance evaluations, it's important to have the following in place:

  • Goals (which may be subjective)
  • Expectations
  • Objective-based criteria
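
Where these are in place, it helps to capture the pass/fail criteria as data rather than prose, so that every test run can be checked automatically. Here is a minimal sketch in Python; every threshold and field name is a hypothetical placeholder, not a recommendation:

```python
# Hypothetical pass/fail criteria expressed as data, so a load-test run
# can be evaluated automatically. All thresholds are placeholders.
CRITERIA = {
    "p95_response_ms": 2000,  # 95th-percentile response time under 2 s
    "error_rate_pct": 1.0,    # at most 1% failed requests
    "throughput_rps": 50,     # at least 50 requests per second sustained
}

def evaluate(run: dict) -> list:
    """Return human-readable failures; an empty list means the run passed."""
    failures = []
    if run["p95_response_ms"] > CRITERIA["p95_response_ms"]:
        failures.append(f"p95 response time {run['p95_response_ms']} ms over limit")
    if run["error_rate_pct"] > CRITERIA["error_rate_pct"]:
        failures.append(f"error rate {run['error_rate_pct']}% over limit")
    if run["throughput_rps"] < CRITERIA["throughput_rps"]:
        failures.append(f"throughput {run['throughput_rps']} rps below target")
    return failures

# A hypothetical run summary that passes all three checks:
print(evaluate({"p95_response_ms": 1800, "error_rate_pct": 0.4, "throughput_rps": 62}))
```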

Facing the Challenge of Undefined Requirements

When testers are assigned a project without clear performance goals, they have two choices: decline the assignment, or push back and ask the project manager to define success criteria. If no clear answer is forthcoming, testers are in a difficult position. The first challenge is identifying who is responsible for defining performance goals and user expectations.

If testers still don't receive these benchmarks, they should document that fact in their test plans to protect themselves later. They can then proceed on the understanding that they are working in exploratory mode, with no formal specifications to follow. In that case, it is essential that testers set their own benchmarks and goals.

Creating Your Own Benchmarks

In the absence of formal criteria, testers should document their approach to evaluating system behavior. Once test execution is complete, they should report the results without premature interpretation; jumping to conclusions at this stage can bias the findings. The main task is to assess the system's behavior under realistic conditions, taking into account the available tools, the test environment, and the state of the code.

Test Reporting Basics

When no formal guidelines are provided, testers need to create their own set of performance evaluation goals. The following details should be included in the final report to assess the system’s capabilities:

  • Concurrent User Support: How well does the system handle simultaneous users?
  • Speed and Response Times: What are the system’s response times during various tasks?
  • Throughput: What is the work throughput under different conditions?
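
If the load-testing tool only emits raw per-request timings, the response-time and throughput figures can be derived directly from them. A minimal sketch, assuming a hypothetical sample format of (start second, response milliseconds):

```python
# Deriving response-time percentiles and throughput from raw timings.
# The sample format (start second, response ms) is an assumption; adapt
# it to whatever your load tool actually records.
from statistics import quantiles

samples = [(0.0, 180), (0.2, 210), (0.4, 950), (0.5, 160), (1.1, 340),
           (1.3, 220), (2.0, 1900), (2.2, 240), (2.8, 200), (3.5, 310)]

durations = [ms for _, ms in samples]
window_s = max(t for t, _ in samples) - min(t for t, _ in samples)

cuts = quantiles(durations, n=100)    # 99 percentile cut points
p50, p95 = cuts[49], cuts[94]         # 50th and 95th percentiles
throughput = len(samples) / window_s  # requests completed per second

print(f"median {p50:.0f} ms, p95 {p95:.0f} ms, throughput {throughput:.1f} req/s")
```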

Testers should ensure that their goals address:

  • How the test environment compares to the baseline environment, including the number of systems involved
  • The speed of work processing
  • How the system responds to changes in load during the evaluation

The rate of change in response times and system resource consumption should be recorded throughout the testing process; it is an important metric when characterizing the system's capabilities.
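
One way to quantify that rate of change is a rough slope of response time against load. A minimal sketch with hypothetical observations; a linear fit is only a first approximation, and plotting the full curve is still needed to spot where performance falls off:

```python
# Estimating how fast response times degrade as load grows, using a
# least-squares slope over (concurrent users, p95 ms) observations.
# The data points are hypothetical.
points = [(10, 220), (20, 260), (40, 390), (80, 750), (160, 2100)]

n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))

print(f"p95 grows by roughly {slope:.1f} ms per additional concurrent user")
```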

Volume and Load Analysis

To better understand the system's capacity, testers should consider the volume of tasks the system can handle: for example, how many insurance policies can be processed in an hour? From that volume and an estimate of how long an average user needs to complete a specific task, testers can work out how many concurrent users the system serves, then derive peak load by multiplying the average by a peak coefficient.
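
As a worked example of that arithmetic (all numbers hypothetical), the average concurrency follows from task volume and task duration in the style of Little's law, and the peak is the average scaled by an assumed coefficient:

```python
# Estimating concurrent users from task volume and task duration.
# All numbers are hypothetical.
policies_per_hour = 600  # observed or estimated processing volume
avg_task_minutes = 5     # time an average user needs per policy

# Little's law: items in flight = arrival rate * time per item
avg_concurrent_users = (policies_per_hour / 60) * avg_task_minutes  # 50

peak_coefficient = 3     # assumed ratio of peak load to average load
peak_concurrent_users = avg_concurrent_users * peak_coefficient     # 150

print(f"average load: {avg_concurrent_users:.0f} users, "
      f"peak load: {peak_concurrent_users:.0f} users")
```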

Comparing with Industry Standards

Response-time expectations and resource-consumption benchmarks can often be derived by researching similar systems. Unfortunately, there are no industry-wide standards for acceptable resource consumption (e.g., CPU usage), so testers will need to work with system engineers and architects to decide whether the levels observed during testing are acceptable.

Setting Response-Time Benchmarks

Well-established response-time limits can help set benchmarks. For instance, the Nielsen Norman Group describes three response-time thresholds to guide performance evaluation:

  • 0.1 second: The limit for a response to feel instantaneous. Critical for direct manipulation in user interfaces, where users must feel they are acting on objects directly.
  • 1 second: The limit for keeping the user's flow of thought uninterrupted. The delay is noticeable, but users still feel in control of the interaction.
  • 10 seconds: The limit for keeping the user's attention on the task. Beyond this threshold users grow frustrated and switch to other tasks, so progress feedback becomes essential.
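
These limits translate naturally into a simple classifier for measured response times. A minimal sketch:

```python
# Bucketing a measured response time against the Nielsen Norman Group
# thresholds listed above.
def perceived_quality(response_s: float) -> str:
    if response_s <= 0.1:
        return "instantaneous - fine for direct manipulation"
    if response_s <= 1.0:
        return "delay noticed, but flow of thought preserved"
    if response_s <= 10.0:
        return "attention holds, but progress feedback is needed"
    return "attention lost - users will switch tasks"

for t in (0.05, 0.8, 4.0, 15.0):
    print(f"{t:>5.2f} s: {perceived_quality(t)}")
```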

Conclusion

When performance goals are absent, testers face multiple challenges. They must take initiative and establish their own benchmarks to provide meaningful insights into the system’s behavior. Performance consultants and QA teams should be prepared to handle such situations. By following the outlined steps, testers can create their own standards even when formal requirements are missing, ensuring their efforts add value to the overall quality of the product.

Happy testing!