What is a software testing strategy and what does it include?
The software testing strategy describes the chosen approach for testing in the individual testing phases, as well as how to proceed within each of these phases. The software testing strategy defines the decisions necessary to achieve the project goals: the test design procedures are defined, and the entry and exit criteria are specified. How the planned tests will be executed and which types of tests will be applied are also part of developing the testing strategy.
Creating a Test Strategy
Once defined, the test strategy is not set in stone. It is a living document and is usually adjusted to the scenarios that arise during the project. The test strategy is documented in a test strategy document, which is written by the test manager.
The established test strategy is directed not only at software testers and quality assurance but at the entire project team. This includes, for example, developers, the system architect, the release manager, and the product manager. They are all involved in the software testing strategy.
Test Objectives
The first step of a test strategy is to define the test objectives of the project, which establish the reason and purpose for developing and executing the tests. The test objectives are defined by the test manager together with the stakeholders or project leaders and may vary depending on the project and its requirements. Example test objectives in the test strategy might include:
- Complete test coverage of high-priority requirements with test cases
- Full representation of key work scenarios
- Successful execution of test cases and test scenarios defined as "high" and "critical" in significance or priority classes A and B
- Ensuring data protection and related security measures. Real data will not be used or disclosed
- Bugs and errors with "high" and "critical" priority are to be fixed promptly
- Confirmation that all documents generated during testing are complete and accurate and are stored centrally
Test Processes
Once the test objectives have been defined, the next step is to identify which test processes will be applied to the test object and how they influence one another. The goal is to ensure high quality and reliability. The following test levels may be applied to a test object:
- Component Testing (Whitebox): Testing the program code, for example, using JUnit. The source code is analyzed and checked for correctness. These tests are conducted by the developers (see the JUnit sketch after this list).
- Integration Testing: Testing the system architecture, i.e., the interfaces, components, or systems that are linked to the test object. For example, this could involve testing the interaction between the system and a database interface in which the created personal data is stored and must remain accessible at all times.
- System Testing: Analyzing the requirement specification and testing the functionalities of the test object. System tests can be manual, where the application under test is exercised "by hand". In addition to manual tests, there are automated system tests. The development effort for automated system tests can be considerable, but they offer the great advantage of covering many more functionalities and of being executable at any time, which speeds up test execution. They do, however, need to be maintained regularly and require experience and software knowledge to develop. An example of an automated system test is testing the entire web interface of the application under test, for instance checking whether all buttons, identified by their IDs, are clickable and whether the desired functionalities meet the requirements. The testing framework Selenium is highly popular and reliable in test automation (see the Selenium sketch after this list).
- Acceptance Testing (E2E Testing): Typically conducted by the business units together with the software testers of the application under test. The focus is on verifying whether the developed functions meet the needs of the customers and stakeholders.
- Smoke Test: After a product version (release) has been deployed, smoke tests are performed. These tests check the correctness of the most essential (basic) functions of the software being tested.
- Usability Testing: Conducted in the final phase of the project, where users without prior experience with the application test it in everyday use.
- Load and Performance Testing: These tests measure processing speed. A distinction is made between performance tests, which measure, for example, response time or throughput for specific use cases, and load tests, which examine system behavior under increasing load. Additionally, stress tests observe how the system reacts to load beyond normal levels and anticipated peak loads.
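For the component testing level mentioned above, a minimal JUnit 5 sketch might look as follows. The class under test (PremiumCalculator) and its discount rules are purely illustrative assumptions, not part of any specific project:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit under test, used only for illustration.
class PremiumCalculator {
    double yearlyPremium(double basePremium, int claimFreeYears) {
        if (basePremium < 0 || claimFreeYears < 0) {
            throw new IllegalArgumentException("Inputs must not be negative");
        }
        // Assumed rule: 5% discount per claim-free year, capped at 30%.
        double discount = Math.min(claimFreeYears * 0.05, 0.30);
        return basePremium * (1.0 - discount);
    }
}

class PremiumCalculatorTest {

    private final PremiumCalculator calculator = new PremiumCalculator();

    @Test
    void discountIsAppliedPerClaimFreeYear() {
        assertEquals(95.0, calculator.yearlyPremium(100.0, 1), 0.001);
    }

    @Test
    void discountIsCappedAtThirtyPercent() {
        assertEquals(70.0, calculator.yearlyPremium(100.0, 10), 0.001);
    }

    @Test
    void negativeInputIsRejected() {
        assertThrows(IllegalArgumentException.class,
                () -> calculator.yearlyPremium(-1.0, 0));
    }
}
```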
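For the automated system tests described above, a minimal Selenium sketch (in Java) could check that elements identified by IDs are present and clickable. The URL and the element IDs used here are assumptions for illustration only:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class LoginButtonSystemTest {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical test environment URL.
            driver.get("https://test.example.com/login");

            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));

            // Elements are located by their IDs, as described in the text.
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");

            // Wait until the login button is clickable, then click it.
            WebElement loginButton = wait.until(
                    ExpectedConditions.elementToBeClickable(By.id("loginButton")));
            loginButton.click();

            // Simple functional check: after login, the dashboard headline should appear.
            wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("dashboardHeadline")));
            System.out.println("Login system test passed.");
        } finally {
            driver.quit();
        }
    }
}
```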
The number of test stages applied depends on the project and budget. It is recommended to adjust the type of tests to the functions developed in the release or sprint. This means that for a technical function, such as a database test, a functional integration test of the connected interfaces should be performed. For a completed function in the GUI, a system test and an acceptance test of the implemented user story are conducted.
It is important to consider how the test processes impact the overall testing costs of the project. The additional effort, and especially the cost of correcting errors late, can exceed the budget. Therefore, the test strategy should include provisions for updating and adjusting the test processes throughout the project.
Testing Tools
Once the test processes to be applied for the planned software product are known, the next step in the test strategy is to identify the appropriate tools needed for each test process. A targeted selection and efficient use of testing tools are essential to enable a reliable system environment. Costs for acquiring the tools and their licensing scopes must also be calculated. Many tools are offered for free but are often limited in functionality or maximum number of users. Therefore, selecting the right tools requires a significant time investment, which is necessary for the test strategy.
The test manager evaluates which tools provide the greatest benefit for each test stage. In addition to the costs of the tools, maintenance needs to be considered. If updates for the testing tools are required, they are often carried out centrally and can lead to downtimes. Employees also need to be trained in using the tools. Many tool providers offer training as well as technical and professional support for their customers.
For requirements management, test case documentation, test execution logging, and defect management, an ALM (Application Lifecycle Management) tool is recommended. An ALM tool centralizes development and quality assurance, allowing real-time tracking of project progress and ensuring improved transparency. Test cases are created based on requirements and are traceable. Test runs can be managed, planned, and structured. Test result documentation must be integrated into the test strategy, allowing for the traceability of past test executions when researching previously conducted tests.
For test automation of system tests, there are many different testing tools available, including free applications like Selenium, Cypress, and SoapUI (for interface testing), as well as paid tools. One example is Ranorex, which includes an object identification feature and a built-in recorder that captures and provides all interactions.
For load and performance testing, there are both free and paid applications available.
As can be seen, the selection of testing tools is extensive. It is therefore very important for the software testing strategy that the tools are evaluated, tried out, and selected in a targeted and comprehensive manner.
Test Documentation
The development of the test strategy should also include the documentation of tests. Documenting test cases, bugs, test processes, and other elements is a crucial part of the test strategy and should not be neglected. In addition to the test concept, which is the written plan for the test object, there are other test documents and reporting aspects that should be integrated into the software test strategy.
- Test Case Specification: This is the collection of all defined test cases. Each test case should include a priority, a descriptive name, the test case objective, preconditions, postconditions, the specific steps, and acceptance criteria. Traceability to the relevant requirements is also helpful (see the sketch after this list).
- Test Execution Specification: The sequence of tests is recorded in the form of test scenarios. These scenarios order test cases and may take into account, for example, the priority of the test cases. Regression tests are often incorporated into test scenarios.
- Test Progress Report: Published or provided at regular intervals, this report includes the number of executed and planned test cases within the test process.
- Test Completion Report: After the completion of test activities, a test completion report is created. It should include a summary of the tests conducted, the requirements covered, open and blocking factors, an evaluation of the test results, and remaining issues.
- Bug Reports: Bug reports are essential when deviations from the expected results of a test case occur. These should be documented and made available for traceability.
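To make the structure of such a test case entry more tangible, the following sketch models the fields listed above as a simple data structure; the class name, enum values, and example ID are illustrative assumptions:

```java
import java.util.List;

// Illustrative model of a single entry in the test case specification.
public record TestCaseSpecification(
        String id,                    // e.g. "TC-0815" (hypothetical ID scheme)
        String name,                  // descriptive name
        Priority priority,            // significance of the test case
        String objective,             // what the test case is meant to verify
        List<String> preconditions,
        List<String> steps,           // the specific test steps
        List<String> acceptanceCriteria,
        List<String> postconditions,
        String linkedRequirement      // traceability to the relevant requirement
) {
    public enum Priority { CRITICAL, HIGH, MEDIUM, LOW }
}
```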
Test Design
To avoid obstacles and unresolved questions during test case creation in the test design phase, the software test strategy should outline how the framework for test cases is structured and how they are developed. In general, test cases should be created for test conditions in descending order of priority, starting with the most critical ones first.
For abstract test cases, all mandatory fields should be correctly filled out, and the description should include preconditions, actions, and acceptance criteria.
Concrete test cases, in addition to these elements, should also include specific test steps and test data (e.g., “Open the person with code 47110815” instead of “Open a person”). In test automation, test cases should be described clearly and kept reusable: in the actual test case, only the reusable methods are called with the necessary test data (see the sketch below).
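As a minimal sketch of this idea, and assuming hypothetical helper methods, an automated test case could look like this: the test itself only calls reusable actions and passes in the concrete test data (here the person code 47110815 from the example above):

```java
// Sketch: the concrete test case only calls reusable methods and supplies the test data.
public class EditPersonTest {

    public static void main(String[] args) {
        PersonPage personPage = new PersonPage();

        personPage.openPerson("47110815");           // concrete test data from the specification
        personPage.changePhoneNumber("0123 456789");
        personPage.save();

        // The actual check would compare the displayed value against the expected test data.
        System.out.println("Phone number after save: " + personPage.phoneNumber("47110815"));
    }
}

// Hypothetical reusable actions, e.g. implemented as a Selenium page object.
class PersonPage {
    void openPerson(String personCode) { System.out.println("Opening person " + personCode); }
    void changePhoneNumber(String number) { System.out.println("Changing phone number to " + number); }
    void save() { System.out.println("Saving"); }
    String phoneNumber(String personCode) { return "0123 456789"; } // stubbed for the sketch
}
```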
Test Criteria
The test strategy must define the test criteria, specifically the entry criteria and exit criteria, also known as the start and end of testing.
Test Entry Criteria
Test entry criteria allow for an assessment of the test readiness of the test object and specify the requirements for the test infrastructure and test tools that must be met before testing begins. Example test entry criteria include:
- The requirements for the components of the software solution have been approved by stakeholders or clients.
- The test cases have been implemented, reviewed, and approved for testing. The completed test cases are marked with the status "Completed."
- The test cases have been added to the test scenarios of the corresponding test stages and sorted by priority and dependency.
- The test environment is available to all testers involved.
- The rights for all planned technical users in the test are set up in the test environment.
- Errors/bugs from previous test stages and sprints have been fixed or accepted by project management or stakeholders.
Test Exit Criteria
Test exit criteria specify the conditions agreed upon by all parties for the completion of a test stage or the entire testing process. Example test exit criteria include:
- The test cases and features planned for the current sprint or cycle have been successfully tested.
- All requirements have been covered by test cases.
- There are no open issues with "critical" or "high" priority.
- All identified bugs/errors have been documented (including reference to the test case and requirement).
- All discovered issues during the testing phase have been resolved or discussed with stakeholders.
- All functions have been successfully executed in a test environment.
- The application being tested has passed performance tests.
Software Test Metrics and Measurements
The progress of testing activities is measured by the number of created and executed test cases, as well as by the number of identified defects in relation to the remaining project time. The performance of the systems is evaluated using various metrics generated during testing, for example the execution progress or the pass rate of test cases.
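As a small illustration, two such metrics could be calculated as sketched below; the figures and the choice of metrics are illustrative assumptions, since the concrete metrics depend on the project:

```java
// Minimal sketch of two common test metrics; the numbers are made up for illustration.
public class TestMetrics {

    static double executionProgress(int executed, int planned) {
        return planned == 0 ? 0.0 : (double) executed / planned;
    }

    static double passRate(int passed, int executed) {
        return executed == 0 ? 0.0 : (double) passed / executed;
    }

    public static void main(String[] args) {
        int planned = 120, executed = 90, passed = 81, openDefects = 7;

        System.out.printf("Execution progress: %.0f%%%n", executionProgress(executed, planned) * 100); // 75%
        System.out.printf("Pass rate: %.0f%%%n", passRate(passed, executed) * 100);                     // 90%
        System.out.println("Open defects: " + openDefects);
    }
}
```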
Test Data Management
In the software testing strategy, it's essential to identify and account for the appropriate test data. Before the application is ready for testing, it's necessary to determine what types of test data can be used. Along with data protection, the strategy must also address how test data will be (automatically) provided. This includes resetting the data after use and ensuring that all test data is deleted from systems once the agreed usage period has expired.
Productive test data is rarely used because it consists of real data from the production environment, such as personal data. This data comes from actual individuals and is traceable to them.
In addition to productive test data, there is synthetic test data. Synthetic data is similar in structure to productive data but has no one-to-one correspondence with actual individuals from the original dataset; it matches the logic and structure of the real data. It is realistic, rather than consisting of obviously fabricated records such as "Mickey Mouse".
Another option is to use anonymized test data. As the term "anonymized" suggests, these are test data that cannot be traced back to individuals. The test data is generated by an anonymization script and then provided. Testing the application with anonymized test data must be conducted in a restricted access environment, not in a "free" test environment, to comply with data protection regulations.
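As a minimal sketch of what such an anonymization step might look like, the example below replaces the identifying fields of a hypothetical person record with neutral values; the field names and replacement rules are assumptions and do not represent a complete anonymization concept:

```java
import java.util.UUID;

// Sketch: replaces identifying fields of a person record with neutral values
// so that the data can no longer be traced back to a real individual.
public class TestDataAnonymizer {

    record Person(String customerId, String name, String email, String city) {}

    static Person anonymize(Person original) {
        String pseudoId = UUID.randomUUID().toString();    // breaks the link to the real customer ID
        return new Person(
                pseudoId,
                "Test Person " + pseudoId.substring(0, 8),  // neutral placeholder name
                "test+" + pseudoId.substring(0, 8) + "@example.com",
                original.city()                             // non-identifying fields may be kept
        );
    }

    public static void main(String[] args) {
        Person real = new Person("C-000123", "Erika Mustermann", "erika@example.com", "Berlin");
        System.out.println(anonymize(real));
    }
}
```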
Error Management
Error management should also be included in the strategy. The basic principle is that when an error is discovered, it should be documented. It is important to use neutral, factual, and professional language: the focus should be on the issue itself, not on discussing the involvement of individuals or their responsibility for the error. Without a defined strategy, error handling cannot be implemented properly. This leads to errors being inadequately described or even completely omitted, with fixes applied informally. The result is that, sooner or later, the application will be plagued by severe issues.
The test strategy document should also define how errors are to be addressed. This may include, for example, an internal review to verify the correctness of the fix, ideally conducted by the tester who reported the error. High-priority errors should be addressed first. A sketch of one possible defect workflow follows below.
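The sketch below shows one possible defect workflow with its allowed state transitions; the states and transitions are assumptions for illustration, since the actual workflow depends on the project and the ALM tool in use:

```java
import java.util.Map;
import java.util.Set;

// One possible defect lifecycle: new -> in progress -> fixed -> retest -> closed,
// with a reopened path back to "in progress" if the retest fails.
public class DefectWorkflow {

    enum State { NEW, IN_PROGRESS, FIXED, RETEST, CLOSED, REOPENED }

    private static final Map<State, Set<State>> ALLOWED = Map.of(
            State.NEW, Set.of(State.IN_PROGRESS),
            State.IN_PROGRESS, Set.of(State.FIXED),
            State.FIXED, Set.of(State.RETEST),
            State.RETEST, Set.of(State.CLOSED, State.REOPENED), // retest passes or fails
            State.REOPENED, Set.of(State.IN_PROGRESS),
            State.CLOSED, Set.of()
    );

    static boolean isAllowed(State from, State to) {
        return ALLOWED.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed(State.FIXED, State.RETEST)); // true: a fix always goes to retest
        System.out.println(isAllowed(State.NEW, State.CLOSED));   // false: no silent closing
    }
}
```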
Interruption Criteria
The test strategy must also account for situations where testing needs to be paused due to certain events. These are known as interruption criteria. Examples of such criteria include:
- Test data is not available.
- The test environment cannot be accessed or is not operational.
- Critical errors have not yet been fixed, which prevents meaningful testing.
Resumption Criteria
Once the issues preventing testing are resolved, test execution can be resumed. The resumption criteria include:
- The test environment is accessible again.
- Maintenance activities are conducted outside of testing activities.
- Testing resources and capacities are available.
Risks and Their Management
The test strategy document must also address risk management. Risks need to be considered in the planning phase to mitigate them in a timely manner. Both stakeholders and developers, as well as software testers or test managers, should develop an understanding of the risks associated with deploying the application.
Several scenarios can arise in a project where planned tests cannot be conducted as scheduled by the department. It is crucial to clarify resource availability early and adjust the schedule accordingly.
In project execution, a situation might occur where stakeholders lack the capacity to support the project. The strategy could define that stakeholders will be interviewed on short notice to explore support options. Similarly, the department might be understaffed or engaged in routine business, making them unavailable for quality assurance. Pre-planning is needed to determine when they will be available for the testing phase.
Technical errors in the application being tested or in interfaces (both internal and external) can also occur and have severe consequences, such as critical processing issues or unintended data deletions. The test strategy should address such cases by promoting close and open collaboration within the team or with external service providers to prevent and manage these risks effectively.
Test Strategy Example
Here’s an example of creating a software test strategy based on a real scenario. The initial situation is:
An insurance provider wants to develop a portal for customers to check their contract status, manage their contracts, and print them. Each customer will have a single set of login credentials, and after the initial login, customers will create their own passwords according to specified security criteria. They will be able to access, print, and export their contracts in PDF or Word format. Personal details such as address and phone number should be editable at any time.
The project is new and approved by the company. Besides the requirements for the portal, no other documents are available. An experienced test manager has been hired to develop and implement the complete quality cycle. An ALM tool is provided for creating defects, requirements, and manual test cases, and it integrates with one or more test automation tools. The test automation tool will be used to automate web interface tests and regression tests.
In addition to the test plan, the test manager develops the testing strategy. The testing strategy is documented in the testing strategy document and is meant to be a living document. The software testing strategy can be updated throughout the duration of the project.
First, determine which objectives should be achieved in the project. Important test objectives for the project would include:
- Complete coverage of high-priority requirements with test cases.
- Successful execution of the test cases classified with the highest significance.
- Confirmation that all documents generated during testing are complete and accurate.
- Critical and severe errors as well as bugs must not be present in the production environment. If they are, they will be addressed promptly. Depending on their importance and in agreement with the stakeholders, a hotfix will be implemented.
Next, the test processes necessary for the portal must be identified and defined for the testing strategy.
- Component testing is conducted by the developers. They check their code for correctness. An experienced test automation specialist or a software tester with technical expertise can assist the developers during the code review.
- Integration testing is conducted to test the connected interfaces, since endpoints such as the storage of documents and contracts are provided by a separate interface, which is then accessed. This also includes components that are connected to the test object.
- System testing is conducted to test the functionalities of the test object. System tests are implemented both manually and automatically. At the beginning of the project, when the application is still in development and not many functionalities are available, the tests are conducted manually. Once the basic framework (functionalities, buttons, etc.) of the application is in place, the first automated test cases can be developed. For extensive workflows, such as creating a customer → the customer submits an application → the customer edits the application → the customer accesses documents, automated regression tests are created. These tests are executed daily in the Jenkins pipeline on the TEST environment. The test reports are stored automatically and are accessible at any time.
- Acceptance testing (E2E testing) is typically conducted by the business units and the software testers of the tested application.
Load and performance testing is postponed at the start of the project. Once the application is ready and an internal analysis has been conducted, for example to determine how many customers want to access the portal, load and performance tests will be carried out (a minimal sketch follows below).
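A first load measurement could be sketched as below, assuming a simple HTTP endpoint of the portal; the URL, the number of parallel users, and the measured quantity are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: fires a number of parallel requests against a (hypothetical) portal URL
// and reports the average response time.
public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        String url = "https://test.example.com/portal/contracts"; // assumed test environment URL
        int parallelUsers = 20;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(parallelUsers);
        List<Callable<Long>> tasks = new ArrayList<>();
        for (int i = 0; i < parallelUsers; i++) {
            tasks.add(() -> {
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                return (System.nanoTime() - start) / 1_000_000; // milliseconds
            });
        }

        long total = 0;
        for (Future<Long> result : pool.invokeAll(tasks)) {
            total += result.get();
        }
        pool.shutdown();

        System.out.printf("Average response time over %d parallel requests: %d ms%n",
                parallelUsers, total / parallelUsers);
    }
}
```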
Now the conditions necessary for designing test cases can be specified. These include:
- Preconditions for the tests must be documented. These can be existing contracts, user groups, or scans within the test environments. If a precondition for executing the test is not met, the test cannot be started.
- Each test case should be linked to the requirement it verifies and to all other test cases with which it has dependencies.
- Test cases must generally include the specific test steps, concrete test data, and test documents.
Before testing begins, it must be ensured that all requirements the test cases place on the test environment, testing tools, and test data are fulfilled prior to execution. The test readiness of the test object must be given.
Once the tests have been executed, certain postconditions must also be fulfilled. One example would be that the tests have run completely. In the project, exit criteria are defined that specify these conditions. One of these criteria would be that all test cases planned for the testing phase, as well as the retests of resolved bugs, have been executed successfully.
If errors or bugs are identified during testing, they should be addressed promptly, no later than the next release (depending on the severity of the error). A defect management process is established in the project for the testing strategy, which describes the workflow for handling defects. It is important that no blame is assigned during discussions. The defect should be reproducible.
Once the defect is resolved, retesting is conducted, and it is then prepared for the next release. High-priority defects take precedence in this process. Depending on the severity and importance of the bug, a hotfix will be implemented.
The test results are stored centrally and made available for audits, for example, by regulatory authorities. For each release, a release report (release version) is published to the stakeholders, detailing all completed tasks, user stories, and resolved bugs from the last release.
All these points and scenarios are documented in the testing strategy document. This document has been reviewed and approved by stakeholders and relevant departments. It is kept in the project directory and can be accessed by all project members. If there are suggestions for improvements or adjustments, the test manager should be contacted. The test manager oversees the testing strategy and intervenes if the implementation deviates from it.
Conclusion
In software quality assurance, relying on a software testing strategy is essential. It is comparable to team sports: without a strategy, success cannot be achieved; instead, the lack of one leads to problems and even chaos. With a strategy and a plan, the quality level of the application being tested is maintained. This requires intensive preparation and a lot of patience, but in the end the effort pays off: for the development team, for the testers in quality assurance, and for the end users, who should encounter no errors or obstacles in the application. In quality assurance, a strategy brings order and structure. It is also crucial that the developed software testing strategy is actually adhered to. It is not just a loose document; it should be actively implemented in the project.