Testing Strategy — 1/3 — explained

The purpose

A Testing Strategy is meant to define what matters and what must be done to reach a certain degree of quality. We don’t want to pour every bit of our effort into making the software perfect, simply because perfection comes at a high cost. Rather, we want to get the maximum out of every single effort we put into the software development.

Reader’s guide

This article is part of a series:

Testing Strategy

A testing strategy usually has a mission statement which could be focused on the software being built or may be related to the wider business goals and objectives. A typical mission statement could be:

Rules

  • No code may be written for a story until we first define its acceptance criteria/tests. This is the entry condition of the developer’s space, as part of the definition of ready (DoR).
  • A story may not be considered completed until all its unit tests and integration tests pass and the code has been reviewed by peers. This is the exit condition of the developer’s space, as part of the definition of done (DoD).
  • A story may not be considered completed until all its acceptance tests pass. This is the exit condition of the development team’s space, as part of the definition of done (DoD). This includes automated and manual tests.
  • A product increment may not be considered delivered until all tests pass in the continuous integration pipeline. This shall also include manual exploratory tests, as part of the acceptance criteria. This is the definition of release and deployable (DoRD).

Quality Assurance

By all means, Quality Assurance is not just about testing by testers:

  • QA is the responsibility of everyone, not only the testers. QA is all the activities we do to ensure correct quality during the development of new products.
  • There may be a dedicated team member who assumes the QA role and ensures that the Testing Strategy is executed accordingly.

Software Quality Life Cycle

The Test Pyramid

When testing the software being built, we want to ensure that the things that matter are tested before delivering it to end users, then prioritize them for automation.

The “Test Pyramid” is a concept developed by Mike Cohn. It recommends a distribution of automated tests along these lines:
  • High-level UI/end-to-end tests (around 10%)
  • Mid-level integration tests (around 20%)
  • Low-level unit tests (around 70%)
The anti-pattern vs the ideal automation pyramid

Define the 5WHs

Unit Testing

Every development team member must write unit tests to ensure that individual pieces of code work correctly. Using TDD to drive the development of business functions, by writing the test code first, is highly recommended (a sketch follows the list below).

  • WHO: Developers
  • WHAT: Business Code
  • WHEN: new code is written
  • WHERE: Local Dev + CI
  • HOW: Automated
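
As a minimal sketch of this TDD flow, the example below uses pytest; the calculate_discount function, its module name and its business rules are hypothetical, invented only for this illustration:

    # test_discount.py -- written first, before the production code (TDD)
    import pytest

    from pricing import calculate_discount  # hypothetical module under test

    def test_no_discount_below_threshold():
        # orders under 100 earn no discount
        assert calculate_discount(order_total=99.0) == 0.0

    def test_ten_percent_discount_at_threshold():
        # orders of 100 or more earn a 10% discount
        assert calculate_discount(order_total=100.0) == 10.0

    def test_negative_total_is_rejected():
        with pytest.raises(ValueError):
            calculate_discount(order_total=-1.0)

    # pricing.py -- the simplest production code that makes the tests pass
    def calculate_discount(order_total: float) -> float:
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.10 if order_total >= 100 else 0.0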

Integration Testing

At this level we may want to outline these 5WH questions:

  • WHO: Developers/Tech Leaders/Architects
  • WHAT: web services, components, controllers, database, message bus, etc.
  • WHEN: a new component is developed
  • WHERE: Local Dev + CI
  • HOW: Automated
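
As an illustration of a test at this level, the sketch below exercises a hypothetical OrderRepository component against a real, in-memory SQLite database rather than a mock; all names are invented for the example:

    # test_order_repository.py -- integration test against a real database
    import sqlite3

    import pytest

    class OrderRepository:
        # hypothetical component under test: persists orders in SQL
        def __init__(self, conn):
            self.conn = conn
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")

        def save(self, total):
            cur = self.conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
            return cur.lastrowid

        def find(self, order_id):
            row = self.conn.execute(
                "SELECT total FROM orders WHERE id = ?", (order_id,)).fetchone()
            return row[0] if row else None

    @pytest.fixture
    def repository():
        # in-memory database: a real SQL engine with no external setup
        conn = sqlite3.connect(":memory:")
        yield OrderRepository(conn)
        conn.close()

    def test_saved_order_can_be_read_back(repository):
        order_id = repository.save(42.0)
        assert repository.find(order_id) == 42.0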

System Testing

At this level we may want to outline these 5WH questions:

  • WHO: QA/BA/PO
  • WHAT: Scenario Testing, User flows and typical User Journeys, Performance and security testing
  • WHEN: the codebase is changed
  • WHERE: CI + Staging Environment
  • HOW: Mostly automated, with a few manual tests
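
As an illustration, a system-level scenario test might drive a typical user journey end to end against the staging environment. The sketch below uses Python with the requests library; the base URL, the endpoints and the payloads are assumptions made for the example:

    # test_purchase_journey.py -- scenario test for a typical user journey
    import os

    import requests

    # hypothetical staging environment; real projects inject this via CI config
    BASE_URL = os.environ.get("STAGING_URL", "https://staging.example.com")

    def test_login_browse_and_checkout_journey():
        session = requests.Session()

        # step 1: the user logs in (endpoint and payload are illustrative)
        resp = session.post(BASE_URL + "/api/login",
                            json={"email": "test@example.com", "password": "secret"})
        assert resp.status_code == 200

        # step 2: the user browses the catalogue and picks a product
        resp = session.get(BASE_URL + "/api/products")
        assert resp.status_code == 200
        product_id = resp.json()[0]["id"]

        # step 3: the user adds the product to the basket and checks out
        resp = session.post(BASE_URL + "/api/basket", json={"product_id": product_id})
        assert resp.status_code == 200
        resp = session.post(BASE_URL + "/api/checkout")
        assert resp.status_code == 200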

Acceptance Testing

At this level we may want to outline these 5WH questions:

  • WHO: QA/PO/End-users
  • WHAT: Verifying acceptance tests on the stories, verification of features, performance, sanity
  • WHEN: the feature is ready
  • WHERE: CI + Staging Environment
  • HOW: Manual, some automated tests

Development

When development starts, new production code and/or modifications to legacy code should be backed by unit tests written by developers and peer-reviewed by another developer or a technical leader. Any commit to the code repository should trigger a run of the tests in the CI pipeline. This provides a fast feedback mechanism to the development team. Unit tests ensure that the system works at a technical level and that there are no errors in the logic.

Developer Testing

As developers, we should behave as if we don’t have any QA in the team or organisation. It is true that QAs have a different mindset, but we should still test to the best of our ability. “Ping-pong” is a common phenomenon in software development: we tend to move on quickly to the next story, but when a defect is found it takes longer to get the user story completed as per the Done criteria. When there is a defect, more people usually get involved in fixing the issue, and frustration comes out of it. Any new code or modification to the existing code should have appropriate unit tests that will become part of the unit regression suite.

Automated Acceptance Tests and Non-functional Testing

The automated acceptance tests include integration tests, service tests and UI tests, which aim to prove that the software works at a functional level and that it meets the users’ requirements and specifications. Automated acceptance tests are usually written in the Gherkin language and executed with a BDD tool such as Cucumber.
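
For example, a login feature might be specified in Gherkin and wired to step definitions with behave, a Cucumber-style tool for Python; the scenario, the step wording and the helper functions below are illustrative assumptions:

    # features/login.feature -- Gherkin specification, readable by the whole team
    #
    #   Feature: Account login
    #     Scenario: A registered user logs in
    #       Given a registered user "alice"
    #       When she logs in with a valid password
    #       Then she sees her account dashboard

    # features/steps/login_steps.py -- glue code executed by behave
    from behave import given, when, then

    from test_helpers import create_test_user, log_in  # hypothetical helpers

    @given('a registered user "{name}"')
    def step_registered_user(context, name):
        context.user = create_test_user(name)

    @when('she logs in with a valid password')
    def step_log_in(context):
        context.page = log_in(context.user, password="valid-password")

    @then('she sees her account dashboard')
    def step_sees_dashboard(context):
        assert "dashboard" in context.page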

Regression Testing

We do not expect regression tests to find many defects. Their purpose is to provide feedback that we haven’t broken major functionality. There should be very little manual regression testing.

Sanity Testing — should be no more than 15 mins

This pack contains only high-level functionality to make sure the application is stable enough for further development or testing. For example, for an eCommerce website, tests included in this pack could be:

  • Product Review
  • Purchase Item
  • Account Creation / Account Login
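
One lightweight way to carve out such a pack, assuming a pytest suite, is to tag the high-level tests with a custom marker and run only that selection; the marker name and the test bodies below are illustrative:

    # tag the high-level tests (register the marker in pytest.ini to avoid warnings)
    import pytest

    @pytest.mark.sanity
    def test_account_creation_and_login():
        ...  # exercise sign-up and login end to end

    @pytest.mark.sanity
    def test_purchase_item():
        ...  # exercise a minimal purchase flow

    @pytest.mark.sanity
    def test_product_review():
        ...  # post and read back a product review

    # run only the sanity pack, failing fast to stay under the 15-minute budget:
    #   pytest -m sanity -x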

Full regression pack — should be no more than 1 hour

This pack contains the full regression suite of tests: everything that is not included in the sanity pack. Here, the goal is to get quick feedback from a larger set of tests. If the feedback takes more than one hour, it is not quick: either reduce the number of tests using the pairwise testing technique, create test packs based on risk, or run the tests in parallel (a sketch follows below).
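
As a sketch of the last two options, assuming a pytest suite: a library such as allpairspy can generate a reduced pairwise set of parameter combinations, and the pytest-xdist plugin can spread the run across CPU cores. Both library choices are assumptions, not prescriptions:

    # pairwise reduction: cover every PAIR of values without the full
    # cartesian product (3 x 2 x 2 = 12 combinations shrink to about 6)
    import pytest
    from allpairspy import AllPairs

    PARAMETERS = [
        ["chrome", "firefox", "safari"],   # browser
        ["guest", "registered"],           # account type
        ["card", "paypal"],                # payment method
    ]

    @pytest.mark.parametrize("browser,account,payment",
                             [tuple(p) for p in AllPairs(PARAMETERS)])
    def test_checkout(browser, account, payment):
        ...  # drive the checkout flow for this combination

    # run the whole pack in parallel across all CPU cores:
    #   pytest -n auto        (requires the pytest-xdist plugin)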

UAT and Exploratory Testing

There is no reason why UAT and exploratory testing cannot run in parallel with the automated acceptance tests. After all, they are different activities and aim to find different issues. The aim of UAT is to ensure that the developed features make business sense and are helpful to customers. The PO (Product Owner) should run user acceptance tests or business acceptance tests to confirm that the built product is what was expected and that it meets users’ expectations. Exploratory testing should focus on user scenarios and find the bugs that automation misses. Exploratory testing should not find trivial bugs; rather, it should find subtle issues.
