Testing Strategy —1/3— explained
Software Quality in mind
The purpose
A Testing Strategy is meant to define what matters and what to do to reach a certain degree of quality. We don’t want to put every bit of our effort into ensuring the software is of very high quality, simply because perfection comes at a high cost. Rather, we want to get the maximum out of every single effort we put into the software’s development.
Reader’s guide
This article is part of a series:
Testing Strategy
A testing strategy usually has a mission statement which could be focused on the software being built or may be related to the wider business goals and objectives. A typical mission statement could be:
To constantly deliver Working Software that meets customer’s requirements by preaching the concept of ‘providing fast feedback’ and ‘favouring defect prevention’ rather than ‘defect detection’.
That statement reflects a few, but not all, of the values and principles of the agile manifesto.
To support that statement, we may apply some best practices and use these four rules.
Rules
- No code may be written for a story until we first define its acceptance criteria/tests. This is the entry condition of the developer’s space, as part of the definition of ready (DoR).
- A story may not be considered completed until all its unit tests and integration tests pass and the code has been reviewed by peers. This is the exit condition of the developer’s space, as part of the definition of done (DoD).
- A story may not be considered completed until all its acceptance tests pass. This is the exit condition of the development team’s space, as part of the definition of done (DoD). This includes automated and manual tests.
- A product increment may not be considered delivered until all tests pass in the continuous integration pipeline. This shall also include manual exploratory tests, as part of the acceptance criteria. This is the definition of release and deployable (DoRD).
There are many other things we may want to add to the DoR, DoD, and DoRD; here we are not going to detail them all.
Besides, there is one thing that quite a few people misunderstand: what are the roles and responsibilities of a Quality Assurance Engineer?
Quality Assurance
By all means, Quality Assurance is not just about testing done by testers:
- QA is the set of activities intended to ensure that products satisfy customer requirements, with a certain level of quality, at every stage of the process of delivering software to production.
- QA is the responsibility of everyone, not only the testers. QA is all the activities we do to ensure correct quality during the development of new products.
- There may be a dedicated member of the team who assumes the QA role to ensure that the Testing Strategy is executed accordingly.
If we look at a higher level, a QA role should be involved at every phase of the Software Development Life Cycle (SDLC). From the QA point of view, we could talk about a Software Quality Life Cycle (SQLC).
Software Quality Life Cycle

The SQLC is another way of looking at the Software Development Life Cycle (SDLC), but with a quality focus. Here we notice that many testing types are executed at every phase of the SDLC.
Every test starts with an idea of how to execute it. Then we may want to automate it to save time on upcoming changes, because we will re-test to ensure there is no regression. One way to see testing as a whole process of automating tests is to represent the tests in Cohn’s pyramid.
The Test Pyramid
When testing the software being built, we want to ensure the things that matter are tested before delivering it to end users, then prioritize them for automation.

It provides a graphical representation of the proportion of:
- High level end-to-end (around 10%)
- Mid-level integration tests (around 20%)
- Low-level unit tests (around 70%)

On the left-hand side, and that’s in the past, we used to try to automate only at the highest level, the end-to-end level.
Define the 5WHs

Unit Testing
Every development team member must write unit tests to ensure that pieces of code work correctly. They may — and it’s highly recommended — use TDD to drive the development of business functions by first writing test code.
At this level we may want to outline these 5WH questions:
- WHY: To ensure code is developed correctly
- WHO: Developers
- WHAT: Business Code
- WHEN: new code is written
- WHERE: Local Dev + CI
- HOW: Automated
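As a minimal sketch of the TDD flow described above, the tests below would be written first (and fail), then drive the business code. The function `apply_discount` and its rules are invented for illustration; they are not from the article.

```python
# Hypothetical business function, driven into existence by the tests below.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# In TDD these tests exist first, fail, and then drive the implementation.
def test_discount_reduces_price():
    assert apply_discount(100.0, 20) == 80.0


def test_invalid_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError")
```

Run locally and in CI (for example with pytest), these tests give the fast feedback the strategy calls for.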
Integration Testing
At this level we may want to outline these 5WH questions:
- WHY: To ensure a group of components is working together
- WHO: Developers/Tech Leaders/Architects
- WHAT: web services, components, controllers, database, message bus, etc.
- WHEN: a new component is developed
- WHERE: Local Dev + CI
- HOW: Automated
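To make the integration level concrete, here is a hedged sketch of a test that exercises a repository component against a real (in-memory) SQLite database instead of a mock. The `OrderRepository` class and its schema are illustrative assumptions, not part of the article.

```python
import sqlite3


class OrderRepository:
    """Illustrative data-access component backed by a SQLite connection."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)"
        )

    def add(self, item: str) -> int:
        cur = self.conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        self.conn.commit()
        return cur.lastrowid

    def find(self, order_id: int):
        row = self.conn.execute(
            "SELECT item FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0] if row else None


def test_repository_round_trip():
    # A real database engine is involved, so this is integration, not unit, testing.
    conn = sqlite3.connect(":memory:")
    repo = OrderRepository(conn)
    order_id = repo.add("book")
    assert repo.find(order_id) == "book"
```

The point is the boundary being crossed (component plus database), which a pure unit test would stub out.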
System Testing
At this level we may want to outline these 5WH questions:
- WHY: To ensure the whole system works
- WHO: QA/BA/PO
- WHAT: Scenario Testing, User flows and typical User Journeys, Performance and security testing
- WHEN: the codebase is changed
- WHERE: CI + Staging Environment
- HOW: Mostly automated, a few manual tests
Acceptance Testing
At this level we may want to outline these 5WH questions:
- WHY: To ensure customer’s expectations are met
- WHO: QA/PO/End-users
- WHAT: Verifying acceptance tests on the stories, verification of features, performance, sanity
- WHEN: the feature is ready
- WHERE: CI + Staging Environment
- HOW: Manual, some automated tests
In order to define a testing strategy covering these four levels, we have to know what types of tests we need.
Development
When development starts, new production code and/or modifications to legacy code should be backed by unit tests written by developers and peer-reviewed by another developer or a technical leader. Any commit to the code repository should trigger an execution of the tests in the CI pipeline. This provides a fast feedback mechanism to the development team. Unit tests ensure that the system works at a technical level and that there are no errors in the logic.
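The commit-triggered pipeline above could be wired up as follows. This is a minimal sketch assuming GitHub Actions; the workflow name and the `make test` command are placeholders for whatever CI system and test runner the team actually uses.

```yaml
# Assumed GitHub Actions syntax: run the test suite on every push,
# so each commit gets fast feedback.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test   # placeholder for the project's real test command
```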
Developer Testing
As developers, we should behave as if we don’t have any QA in the team or organisation. It is true that QAs have a different mindset, but we should test to the best of our ability. “Ping-pong” is a common phenomenon in software development: we tend to quickly move on to the next story, but in reality, when a defect is found, it takes longer to have the user story completed as per the done criteria. When there is a defect, we usually involve more people in fixing the issue, and frustration comes out of it. Any new code or modification to existing code should have appropriate unit tests that will become part of the unit regression suite.
Automated Acceptance Tests and Non-functional Testing
The automated acceptance tests include integration tests, service tests, and UI tests, which aim to prove the software works at a functional level and meets the user’s requirements and specifications. Automated acceptance tests are usually written in the Gherkin language and executed using a BDD tool such as Cucumber.
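For readers unfamiliar with Gherkin, an acceptance test written that way could look like the sketch below. The feature, product name, and steps are invented examples; a BDD tool such as Cucumber would bind each step to code.

```gherkin
Feature: Product search
  Scenario: A shopper finds a product by name
    Given the catalogue contains a product named "Blue Mug"
    When the shopper searches for "Blue Mug"
    Then the search results include "Blue Mug"
```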
Remember: Not all tests need to be automated! Because these tests typically require communication over HTTP, they need to be executed on a deployed application, rather than run as part of the build.
Non-functional tests (performance and security) are just as important as functional tests, therefore they need to be executed on each deploy.
Performance Tests should check performance metrics on each deploy to ensure no performance degradation.
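A coarse way to guard those metrics is an automated latency check that fails the pipeline when a critical operation exceeds its budget. This is a hedged sketch; the operation, the 0.5-second budget, and the function names are illustrative placeholders, not real project values.

```python
import time

LATENCY_BUDGET_SECONDS = 0.5  # assumed budget for this example


def critical_operation():
    # Stand-in for a real request against the deployed application.
    time.sleep(0.01)


def test_critical_operation_within_budget():
    start = time.perf_counter()
    critical_operation()
    elapsed = time.perf_counter() - start
    assert elapsed < LATENCY_BUDGET_SECONDS, f"too slow: {elapsed:.3f}s"
```

Real performance tests would measure against the deployed environment and track trends across deploys, not a single threshold.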
Security Tests should check for basic security vulnerabilities derived from OWASP. It is vital that this be a completely automated process with very little maintenance, to get the most benefit out of automated deployments. This means there should be no intermittent test failures, test script issues, or broken environments. Failures should only be due to genuine code defects rather than script issues; therefore any failing test that is not due to a genuine defect should be fixed immediately or removed from the automation pack, in order to get consistent results.
Regression Testing
We do not expect regression tests to find many defects; their purpose is only to provide feedback that we haven’t broken major functionality. There should be very little manual regression testing.
Smoke Testing — should be no more than 15 mins
This pack contains only high-level functionality to make sure the application is stable enough for further development or testing. For example, for an eCommerce website, tests included in this pack could be:
- Product Search
- Product Review
- Purchase Item
- Account Creation / Account Login
Full regression pack — should be no more than 1 hour
This pack contains the full regression suite of tests and contains everything else which is not included in the smoke pack. Here, the goal is to get quick feedback with a larger set of tests. If the feedback takes more than 1 hour, it is not quick: either reduce the number of tests by using the pairwise testing technique, create test packs based on risk, or run the tests in parallel.
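The pairwise technique mentioned above can shrink a suite dramatically. Below is a hedged sketch of a naive greedy all-pairs generator (not an optimal algorithm, and the browser/OS/payment parameters are invented): instead of testing all full combinations, it keeps only enough cases so that every pair of parameter values appears together at least once.

```python
from itertools import combinations, product


def pairwise_suite(params: dict) -> list:
    """Greedily pick full combinations until every value pair is covered."""
    names = list(params)
    # Every pair of parameter values that must co-occur in at least one case.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))
    suite = []
    while uncovered:
        # Pick the full combination covering the most still-uncovered pairs.
        best_case, best_covered = None, set()
        for combo in product(*(params[n] for n in names)):
            case = dict(zip(names, combo))
            covered = {((a, case[a]), (b, case[b]))
                       for a, b in combinations(names, 2)} & uncovered
            if len(covered) > len(best_covered):
                best_case, best_covered = case, covered
        suite.append(best_case)
        uncovered -= best_covered
    return suite


params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["linux", "macos", "windows"],
    "payment": ["card", "paypal", "wallet"],
}
suite = pairwise_suite(params)
# Far fewer cases than the 27 full combinations, with all value pairs covered.
print(len(suite), "cases instead of", 3 * 3 * 3)
```

Production teams would more likely use an established pairwise tool than hand-roll this, but the size reduction it illustrates is the point.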
UAT and Exploratory Testing
There is no reason why UAT and exploratory testing cannot run in parallel with the automated acceptance tests. After all, they are different activities and aim to find different issues. The aim of UAT is to ensure that the developed features make business sense and are helpful to customers. The PO (Product Owner) should run User Acceptance Tests or Business Acceptance Tests to confirm the built product is what was expected and that it meets users’ expectations. Exploratory testing should focus on user scenarios and should find bugs that automation misses. Exploratory testing should not find trivial bugs; rather, it should find subtle issues.