Optivem Journal
Modern Test Pyramid - Component Level
Automated Testing

Many software departments face the challenge of inter-team communication & inter-team bugs. When a System Test fails, all the teams have to get together to resolve the bug. Is there a better way?

Valentina Jemuović
Jun 06, 2025
∙ Paid

Welcome to the premium edition of Optivem Journal. I write about Modern Test Pyramid & Modern TDD practice to help you escape the nightmare of Legacy Code. I know you're stressed & don’t have time, and you don’t want to wait for several years. If you want support on your TDD journey, join our paid community of 170+ senior engineers & engineering leaders. Get instant access to group chat & live Q&As:


📢 On Wed 18th Jun 17:00, I’m hosting TDD & Unit Tests (Live Q&A):

Register now

P.S. If you’re a Paid Member, you’ll receive a 100% discount (see the Event Description for instructions on how to redeem it).


System Testing provides effective User-Facing Feedback.

Automated System Testing is beneficial because it provides us with assurance that:

  • The System behaves as expected from the End User perspective (i.e., it satisfies the Acceptance Criteria)

  • We did not introduce any Regression Bugs affecting the End User (i.e., existing System behavior was not accidentally changed)

Furthermore, Automated System Testing provides us with a fast feedback loop (<= 1hr) compared to the long feedback loop of Manual System Testing (days/weeks/months). It acts as living, up-to-date documentation of System Specifications/Requirements, i.e., System Tests are Executable Specifications.
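The idea of System Tests as Executable Specifications can be sketched as follows. This is a minimal, hypothetical example: the "deployed system" is stood in for by an in-process HTTP server, and the `/orders/{id}` endpoint and its fields are illustrative assumptions, not taken from the post. In a real setup, the test would hit the fully deployed system's public URL.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class SystemStandIn(BaseHTTPRequestHandler):
    # Stand-in for the deployed system's public API (hypothetical endpoint).
    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "status": "SHIPPED"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep test output quiet.
        pass

def get_order(base_url, order_id):
    # The test acts purely through the system's public interface,
    # exactly as an End User's client would.
    with urlopen(f"{base_url}/orders/{order_id}") as response:
        assert response.status == 200
        return json.loads(response.read())

# Start the stand-in system on a free port.
server = HTTPServer(("127.0.0.1", 0), SystemStandIn)
threading.Thread(target=server.serve_forever, daemon=True).start()

# System Test = Executable Specification of an Acceptance Criterion:
# "An end user can look up the status of their order."
order = get_order(f"http://127.0.0.1:{server.server_port}", 42)
assert order["status"] == "SHIPPED"
server.shutdown()
```

Because the assertion encodes the Acceptance Criterion itself, the test doubles as living documentation: if the specified behavior changes, the test fails.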


System Testing is NOT enough for Developer Team Feedback.

Whilst System Testing gives Developers feedback on whether they satisfied the System Requirements (and whether they introduced System Regression Bugs), the feedback loop is problematic:

  • Slow Developer Feedback: System Tests have a long execution time (up to 1hr), which is too slow to tell developers whether the code in their commit behaves as expected and whether their commit introduced a Regression Bug. During the hour they wait for the System Test suite to complete, developers have potentially made several more commits; so when a test then fails, they waste time finding out which commit(s) are to blame, attempting fixes, and then waiting yet another hour to see whether the System Tests pass.

  • Coarse Developer Feedback: When a System Test fails, we don’t know which team is responsible. Is the Frontend at fault, is the Backend at fault, or is there a miscommunication between Frontend & Backend? Similarly, within a Microservice Backend, is the problem in Microservice #1, in Microservice #2, etc., or in the communication between them? We don’t know; the team has to waste time manually debugging.

To summarize, due to the slow feedback loop & coarse feedback, developers have to waste time debugging.


Solution: Component Level Testing

Let’s see how we can solve the problem above:

  • How to get faster Developer Feedback?

  • How to get more granular Developer Feedback?

The solution is Component Level Testing:

  • Component Tests

  • Contract Tests
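To make the first of these concrete, here is a minimal sketch of a Component Test: one team's component (a hypothetical `OrderService`) is exercised in isolation, with its external dependency (a payment gateway) replaced by a stub, so a failure can only be blamed on this component. All names are illustrative assumptions.

```python
class StubPaymentGateway:
    # Test double standing in for the real payment provider, so the
    # component under test runs with no other team's code involved.
    def __init__(self, succeeds=True):
        self.succeeds = succeeds
        self.charged = []

    def charge(self, order_id, amount):
        self.charged.append((order_id, amount))
        return self.succeeds

class OrderService:
    # The component under test (hypothetical; owned by one team).
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, order_id, amount):
        if self.payment_gateway.charge(order_id, amount):
            return {"id": order_id, "status": "CONFIRMED"}
        return {"id": order_id, "status": "REJECTED"}

# Component Test: exercises OrderService through its own boundary.
gateway = StubPaymentGateway(succeeds=True)
service = OrderService(gateway)
order = service.place_order(42, 19.99)
assert order["status"] == "CONFIRMED"
assert gateway.charged == [(42, 19.99)]
```

Because the test involves only this one component, it runs in milliseconds and a failure points directly at the owning team, addressing both the slow and the coarse feedback problems.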

For a system with Frontend & Monolithic Backend: (diagram)

For a system with Frontend & Microservice Backend: (diagram)
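The second ingredient, Contract Tests, can be sketched in miniature as a consumer-driven contract check: the consumer team (e.g., the Frontend) records the fields it relies on, and the provider team (e.g., the Backend) verifies its actual response shape against that record. The contract format and field names here are illustrative assumptions; real projects typically use a tool such as Pact for this.

```python
# Consumer side: the fields and types the Frontend relies on
# in the Backend's order response (hypothetical contract).
ORDER_CONTRACT = {"id": int, "status": str}

def verify_contract(response_body, contract):
    # Provider side: every field the consumer expects must be present
    # with the agreed type; extra fields are allowed.
    for field, expected_type in contract.items():
        assert field in response_body, f"missing field: {field}"
        assert isinstance(response_body[field], expected_type), \
            f"field {field} has wrong type"

# The Backend's actual response (here built directly; in practice it
# would come from the Backend component running in isolation).
backend_response = {"id": 42, "status": "SHIPPED", "tracking": "XYZ"}
verify_contract(backend_response, ORDER_CONTRACT)
```

If either side drifts from the contract, this test fails on that side's build, pinpointing the miscommunication between teams without running a single System Test.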

This post is for paid subscribers

© 2025 Valentina Jemuović, Optivem