Testing Strategy — 3/3 — The Agile way
Fast feedback and defect prevention
This article is part of a series.
Nowadays, the majority of teams work in an agile environment: short sprints, iterations of 2 to 4 weeks, each focused on only a few user stories. One drawback, though arguably also a benefit, is having less extensive documentation and less extensive test plans. Still, we do require a high-level agile testing strategy as a guideline for the agile teams. Its purpose is to provide a structure that the teams can follow.
Agile Testing Quadrants
The Agile Testing Quadrants (introduced by Brian Marick and further developed by Lisa Crispin) describe the relationship between the various forms of testing using four quadrants.
The quadrants help us reach goals related to supporting the team, critiquing the product, meeting business needs, and meeting technological needs. We use appropriate tools, testing levels, and testing types to serve the purpose of each quadrant.
- Quadrant 1: Technology facing tests that support the team
The quality of the code and its components is the main concern of this quadrant. It requires the involvement of developers, along with the use of technology and automation. Here, the team uses technology to drive test cases, covering unit tests and component tests at the unit level.
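As a minimal sketch of a Quadrant 1 test, a developer-written unit test might look like the following. The `calculate_discount` function and its pricing rule are hypothetical, purely for illustration:

```python
# Unit under test: a hypothetical pricing rule.
def calculate_discount(order_total: float) -> float:
    """Return the discount for an order: 10% above 100, otherwise none."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return order_total * 0.10 if order_total > 100 else 0.0


# Technology-facing tests that support the team: fast, isolated, automated.
def test_no_discount_below_threshold():
    assert calculate_discount(50.0) == 0.0


def test_discount_above_threshold():
    assert calculate_discount(200.0) == 20.0


if __name__ == "__main__":
    test_no_discount_below_threshold()
    test_discount_above_threshold()
    print("all unit tests passed")
```

Tests like these run in milliseconds, so developers get the fast feedback this quadrant is about.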
- Quadrant 2: Business facing tests that support the team
In this quadrant, business-driven test cases are implemented to support the team.
The main focus of this quadrant is the business requirements. Test cases associated with the requirements are implemented, usually by testers, to support the team. It generally involves both manual and automated testing, and its testing types (such as functional tests, examples, story tests, prototypes, and simulations) are executed as part of the definition of done for a story.
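To illustrate what a Quadrant 2 story test could look like, here is a sketch for an invented story, "As a shopper I want a cart total so that I know what I will pay". The `Cart` class and the acceptance criterion are hypothetical:

```python
# A hypothetical implementation backing the shopper story; names are
# illustrative only, not from any real codebase.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float, qty: int = 1):
        self.items.append((name, price, qty))

    def total(self) -> float:
        return sum(price * qty for _, price, qty in self.items)


def test_cart_total_matches_acceptance_criteria():
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 2.00, qty=3)
    # Acceptance criterion: the total is the sum of price * quantity.
    assert cart.total() == 18.50


if __name__ == "__main__":
    test_cart_total_matches_acceptance_criteria()
    print("story test passed")
```

The point is that the test is phrased in terms of the story's acceptance criterion, not the internals of the code.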
- Quadrant 3: Business facing tests that critique the product
Here, feedback on the user experience and feedback from other testing phases are gathered to improve the software. Tests are often manual and user-oriented: mostly exploratory tests, usability tests, user acceptance tests, alpha tests, and beta tests, at the system or user acceptance level. This quadrant is mainly used to build confidence in the usability of the product.
- Quadrant 4: Technology facing tests that critique the product
This quadrant involves using technology to automatically evaluate the software. It mainly focuses on non-functional requirements at the operational acceptance level, such as reliability, security, compatibility, maintainability, interoperability, resiliency, and robustness.
In this quadrant, tests are often automated and include performance, load, stress, scalability, security, maintainability, memory management, data migration, infrastructure, and recovery testing.
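As a hedged sketch of a Quadrant 4 check, a simple automated performance test can assert a latency budget on an operation. The sorting workload and the one-second budget below are arbitrary placeholders, not recommendations:

```python
import time


def timed(fn, *args):
    """Run fn on args and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start


def test_sort_meets_latency_budget():
    data = list(range(100_000, 0, -1))
    result, elapsed = timed(sorted, data)
    assert result[0] == 1   # correctness first
    assert elapsed < 1.0    # then the non-functional budget


if __name__ == "__main__":
    test_sort_meets_latency_budget()
    print("performance check passed")
```

Real load and stress testing would use dedicated tooling, but even a check like this turns a non-functional requirement into something the pipeline can fail on.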
Testing Levels vs Testing Quadrants
The four testing levels are not so different from the four testing quadrants, except for the order. It is natural to start with unit testing at the lowest level, then climb up to integration testing, then system testing, and finally acceptance testing. By fitting the testing levels into the quadrants, we can draw an 'N'-shaped flow.
Another thing to notice is that the left side lends itself to the 'Test-First' approach. Here we are typically referring to XP, TDD, and BDD.
To help ourselves define the strategy for testing our product, we can answer the why, who, what, when, where, and how questions.
Testing Strategy in SCRUM
Once we have an idea of how to test our product, we may want to fit it into an Agile way of working. For instance, let us fit it into the most popular Agile methodology: the SCRUM framework. We are not aiming to dive into the details of SCRUM here, so we will get right to the point.
If we look at a higher level, a QA role should be involved at every phase of the Software Development Life Cycle (SDLC). From the QA point of view, we could speak of a Software Quality Life Cycle (SQLC).
During each Sprint Planning, everyone in the team learns the details of the stories, so developers and QA know the scope of the work. Everybody should have the same understanding of what each story is about. Developers should have a good understanding of the technical details involved in delivering the story, and QA should know how the story will be tested and whether there are any impediments to testing it.
The most common cause of software development failure is unclear requirements and differing interpretations of requirements by different members of the team. User stories should be simple, concise, and unambiguous. As a good guideline, it is best to follow the INVEST model for writing user stories.
A good user story should be:
- Independent (of all others)
- Negotiable (not a specific contract for features)
- Valuable (or vertical)
- Estimable (to a good approximation)
- Small (so as to fit within an iteration)
- Testable (in principle, even if there isn’t a test for it yet)
The following format should be used to write user stories:
As a [role]
I want [feature]
So that [benefit]
It is important not to forget the “Benefit” part, as everyone should be aware of what value they are adding by developing the story.
Each user story must contain acceptance criteria. This is possibly the most important element, as it encourages communication between the different members of the team. When a user story is written, the acceptance criteria should be written at the same time, as part of it. All acceptance criteria should be verifiable, and each acceptance criterion should have a number of acceptance tests, presented as scenarios written in Gherkin format.
Scenario 1: Title
Given [context]
And [some more context]…
When [event]
Then [outcome]
And [another outcome]…
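To make the format concrete, here is a filled-in scenario for a hypothetical login story; the feature, steps, and email address are invented purely for illustration:

```gherkin
Feature: User login

  Scenario 1: Successful login with valid credentials
    Given a registered user with the email "user@example.com"
    And the user is on the login page
    When the user submits the correct password
    Then the user is redirected to the dashboard
    And a welcome message is displayed
```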
In order to prevent defects, the PO, BA, developers, and QA must be involved in user story refinement (during Sprint Planning, Planning Poker, or story workshops), where user stories are described and acceptance criteria are explained. Scenarios should be written down in feature files, and they should be testable. Any corner cases should be thought through; here we can take advantage of QA's expertise to write scenarios that will reveal defects when testing the product. More importantly, scenarios are behaviour-focused, so the more effort and time spent on this activity, the better the results in the end, because, as we know, the majority of defects are due to unclear and vague requirements.
Likewise, user story estimation should include the testing effort (manual or automated) as well, not just the coding effort.
Once all the above activities are completed and no issues are found, the story is Done! The above are some guidelines on what can be included in an Agile Test Strategy Document.