Modern TDD - System Level

Teams that adopt automated System Tests tend to write them at the end of the Sprint. Developers & QA Engineers discover (late) that they misunderstood the requirements! This problem can be solved with ATDD.

Valentina Jemuović
Jun 27, 2025

Welcome to the premium edition of Optivem Journal. I write about the Modern Test Pyramid & Modern TDD practice to help you escape the nightmare of Legacy Code. I know you're stressed, short on time, and don't want to wait several years. If you want support on your TDD journey, join our paid community of 160+ senior engineers & engineering leaders. Get instant access to Group Chat & Live Q&A:


Previously, we covered the Modern Test Pyramid - System Level, where we showed the importance of System Level tests, including Smoke Tests, E2E Tests, and, most importantly, Acceptance Tests & External System Contract Tests.

Acceptance Tests are executable Acceptance Criteria. We learnt that Acceptance Tests & External System Contract Tests, together, are a replacement for Manual System Regression Testing.
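For illustration, here's a minimal sketch of an Acceptance Criterion expressed as an executable test, in Java with JUnit 5. The OrderApiClient and OrderResult types (and their methods) are hypothetical stand-ins for whatever driver your System exposes:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Acceptance Criterion, written as an executable System-level test:
// "Placing an order for an in-stock product confirms the order and reduces stock."
class PlaceOrderAcceptanceTest {

    // OrderApiClient is a hypothetical test client that drives the deployed
    // System through its public API (black-box, from the outside).
    private final OrderApiClient client = new OrderApiClient("http://localhost:8080");

    @Test
    void placingAnOrderForAnInStockProductConfirmsTheOrder() {
        // Given: a product with 10 units in stock
        String productId = client.createProduct("Laptop", 10);

        // When: the customer orders 2 units
        OrderResult result = client.placeOrder(productId, 2);

        // Then: the order is confirmed and stock is reduced to 8
        assertEquals("CONFIRMED", result.status());
        assertEquals(8, client.getStock(productId));
    }
}
```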

Does it matter whether we write Acceptance Tests last… or write Acceptance Tests first?


📢 On Wed 9th July 17:00 CET, I’m hosting TDD & Unit Tests - Part 4 (Live Q&A):

Register now

P.S. If you’re a Paid Member, you’ll receive a 100% discount (see the Event Description for instructions on how to redeem it).


Approach 1. Writing & Executing Manual Tests Last

Most companies are NOT agile; rather, they work in a Waterfall way. They might be practicing Scrum in a non-agile way, whereby each Sprint is a mini-waterfall. (Or perhaps the whole release is a waterfall, whereby each phase is executed over multiple Sprints.)

  1. PO specifies User Stories

  2. PO hands over the User Stories to Developers

  3. Developers try to implement the User Stories (based on their interpretation)

  4. Developers hand over the Software to QA Engineers

  5. QA Engineers test the User Stories (based on their interpretation)

Here’s a more detailed breakdown of the mini-waterfall:

Requirement Phase

  • PO specifies User Stories

Development Phase

  1. Developers try to implement the User Stories

  2. Developers manually test the User Stories on their local machine

QA Phase

  1. Deploy System to QA Environment

  2. User Story Testing

    1. QA Engineers manually test the User Stories (from the Sprint Backlog)

    2. Developers fix any bugs reported

    3. QA Engineers manually test the fixes

    4. If QA Engineers find further bugs, then repeat the process (of manual testing & developer rework)

  3. Regression Testing

    1. QA Engineers perform manual regression testing

    2. Developers fix any regression bugs reported

    3. QA Engineers manually test the fixes

    4. If QA Engineers find further bugs, then repeat the process (of manual testing & developer rework)

Problems of Manual Tests Last

The following are the problems faced (at the end of the Sprint):

  • Regression Bugs - QA Engineers discover lots of Regression Bugs. Time Wasted: Developers fixing & QA Engineers re-testing regression bugs.

  • Misunderstood Requirements - Developers discover that QA Engineers had a different interpretation of requirements. Time Wasted: Late Developer rework due to misinterpretations & QA Engineers re-testing.

  • Missing Scenarios - QA Engineers discover new scenarios that Developers didn’t think of. Time Wasted: Late Developer rework due to missed scenarios & QA Engineers re-testing.

Approach 2. Writing Acceptance Tests Last

When companies adopt Acceptance Tests to replace Manual Regression Tests, they tend to keep the same way of working: at the end of the Sprint, QA Engineers now write (automated) Acceptance Tests rather than Manual Tests.

Problems solved

Writing Acceptance Tests solves one problem:

  • Regression Bugs - Regression Bugs are no longer discovered late by QA Engineers; they show up as failing Acceptance Tests, visible to Developers, who fix them immediately (so QA Engineers have time to report new bugs rather than regression bugs).

Note: Acceptance Tests help reduce regression bugs associated with User Story Acceptance Criteria. For example, if we implement some Acceptance Criteria in Sprint N, then the corresponding Acceptance Tests protect us in all future Sprints (Sprint N+1, Sprint N+2, etc.).

Problems NOT solved

However, since Acceptance Tests are written at the end of the Sprint, this approach does not solve the requirement-related problems, where there’s a gap in requirement understanding between Developers & QA Engineers (and also the PO):

  • Misunderstood Requirements - This problem is still not solved!

  • Missing Scenarios - This problem is still not solved!

There’s also a new problem:

  • Invalid Tests - When we write the test at the end, the functionality has already been implemented, so we tend to write just enough of the test to see it GREEN. It can happen, by accident, that we write an Acceptance Test that executes the System but does NOT verify anything (ZERO assertions!): we see the test passing and move on… but the test is not testing anything at all! Alternatively, the test could have an assertion, but the wrong one, so it does not verify what should be verified. Either way, it’s an always-passing test. Such a test is worse than useless, because it gives us a false sense of assurance while offering no protection against regression bugs! (See the sketch below.)
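To make the Invalid Tests problem concrete, here are two sketches of always-passing tests (reusing the hypothetical OrderApiClient from the earlier sketch):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Both tests below stay GREEN even if placeOrder() is completely broken.
class InvalidAcceptanceTests {

    private final OrderApiClient client = new OrderApiClient("http://localhost:8080"); // hypothetical

    @Test
    void placeOrder_executesTheSystemButVerifiesNothing() {
        String productId = client.createProduct("Laptop", 10);
        client.placeOrder(productId, 2); // result ignored: ZERO assertions
    }

    @Test
    void placeOrder_hasAnAssertion_butTheWrongOne() {
        String productId = client.createProduct("Laptop", 10);
        client.placeOrder(productId, 2);
        // Verifies the test's own setup data, not the behavior under test,
        // so it passes whether or not the order was actually placed:
        assertEquals("Laptop", client.getProductName(productId));
    }
}
```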

Approach 3. Writing Acceptance Tests First (ATDD)

How can we address the issue of requirement misunderstandings? How do we bring everyone (PO, Developers, QA Engineers) on the same page?

Requirement misunderstandings occur because requirements are often generalized, leaving room for misinterpretation and gaps in understanding.

What if we could write requirements in a very specific way, such that there is no room for misinterpretation? What if we could write requirements clearly, specifying behavioral expectations through specific inputs and expected outputs, i.e., example-based requirements?

If we write example-based requirements before coding, it puts everyone on the same page. Acceptance Tests themselves are example-based requirements. Writing them before coding is known as ATDD.
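For instance, a generalized requirement like “orders of 100 or more get a 10% discount” becomes much harder to misinterpret once it’s pinned down with concrete inputs and outputs. Here’s a sketch of such an example-based requirement as a parameterized JUnit 5 test (the DiscountCalculator and the discount rule itself are hypothetical):

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Example-based requirement, agreed by PO, Developers & QA Engineers
// BEFORE coding starts: each row is one concrete, unambiguous example.
class OrderDiscountAcceptanceTest {

    @ParameterizedTest(name = "order total {0} => discount {1}")
    @CsvSource({
        " 99.99,  0.00",   // just below the threshold: no discount
        "100.00, 10.00",   // exactly at the threshold: 10% discount
        "250.00, 25.00"    // above the threshold: 10% discount
    })
    void orderTotalDeterminesDiscount(double total, double expectedDiscount) {
        // In a real System-level Acceptance Test this would go through the
        // System's public API; DiscountCalculator keeps the sketch short.
        assertEquals(expectedDiscount, new DiscountCalculator().discountFor(total), 0.001);
    }
}
```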

Problems solved

Writing example-based requirements (Acceptance Tests) before coding helps align the whole team regarding requirements:

  • Misunderstood Requirements - Writing requirements as examples clarifies them for everyone and eliminates gaps in understanding.

  • Missing Scenarios - QA Engineers can bring their creative thinking (“what if” scenarios) to a brainstorm at the start of the Sprint; these scenarios are converted into requirements as examples, so the whole team has a more comprehensive understanding of the scenarios that need to be covered.

  • Invalid Tests - By writing the Acceptance Test first, we can verify that it fails and that the failure occurs for the expected reason. This assures us that the test itself is valid, that it’s testing what it should be testing.
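As a sketch of that RED check, reusing the hypothetical DiscountCalculator from above (the error message shown is illustrative of JUnit-style output, not exact):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountRedCheckTest {

    @Test
    void orderTotalOf100GetsA10Discount() {
        // Run this BEFORE implementing DiscountCalculator (or while it
        // returns a stubbed 0.0). A VALID first failure happens here, at
        // the assertion, along the lines of:
        //     expected: <10.0> but was: <0.0>
        // If instead the test fails before reaching the assertion (compile
        // error, NullPointerException in setup, connection refused), the
        // RED proves nothing about the requirement - fix the test first.
        assertEquals(10.0, new DiscountCalculator().discountFor(100.0), 0.001);
    }
}
```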
