Modern TDD - Unit Level
When a Component has high Business Logic complexity, there will be many Component Tests, so test execution gets slower and slower as the I/O calls multiply. What should we do?
📅 Join me for the live workshop ATDD in Legacy Code Roadmap on Wed 6th Aug (17:00 - 19:00 CEST) (100% discount for Optivem Journal members)
🔒 Welcome to the premium Optivem Journal. I’ll help you apply TDD to escape the nightmare of Legacy Code. Join our paid community of 160+ senior engineers & leaders for support on your TDD journey, plus instant access to group chat and live Q&As:
Previously, we covered the Modern Test Pyramid - Unit Level, where we showed why we need Unit Level Tests (Unit Tests & Narrow Integration Tests):
Unit Tests - Provide extremely fast feedback on the correctness of business logic by testing it in isolation from I/O concerns (by cutting out I/O, feedback arrives an order of magnitude faster). As business logic complexity increases, the number of scenarios that need to be covered also increases, which is where the benefits of Unit Tests become even more visible (see the sketch after this list).
Narrow Integration Tests - Provide faster feedback than Component Tests because they eliminate the combinatorial explosion in testing I/O logic.
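To make “testing business logic in isolation from I/O” concrete, here is a minimal sketch in Java (the OrderDiscountPolicy class and the numbers are invented for illustration): a pure pricing rule plus a Unit Test for it. Nothing touches a database, the network, or the file system, so the test runs entirely in memory.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical business rule: 10% discount on bulk orders of 10+ items.
// Pure logic - no database, no HTTP, no file system.
class OrderDiscountPolicy {
    long discountFor(int quantity, long unitPriceCents) {
        long total = (long) quantity * unitPriceCents;
        return quantity >= 10 ? total / 10 : 0;
    }
}

class OrderDiscountPolicyTest {
    @Test
    void bulkOrdersGetTenPercentDiscount() {
        var policy = new OrderDiscountPolicy();
        // 12 items at 10.00 each -> total 120.00 -> discount 12.00 (amounts in cents).
        // Runs in milliseconds because no I/O is involved.
        assertEquals(1200, policy.discountFor(12, 1000));
    }
}
```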
The question is, does it matter whether we write Unit Tests last (i.e. after code)… or write Unit Tests first (i.e. before code)?
Approach 0. Not writing Unit Level Tests
A team might be practicing Component Level TDD, without any Unit Tests.
We might choose not to write Unit Tests, or we might not even be able to write them - we could be writing Component Level Tests only.
Case 1: Cannot write Unit Tests: Suppose that our Component architecture is not unit-testable, because there’s no separation between business logic and I/O logic. Examples are:
We have fat REST controllers that contain everything - executing business logic, communicating with databases and external systems, and constructing the response
We have fat “service” classes that both execute business logic and communicate with databases and external systems
Since the business logic is not isolated (it's mixed with I/O concerns), it's not possible to test it independently of I/O - Unit Tests are not feasible.
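For illustration, here is a sketch of what such a fat “service” class might look like (the OrderService class, the table and the URL are invented): the pricing rule sits in the middle of JDBC and HTTP calls, so there is no way to exercise it without standing up a database and an external endpoint - only (slow) Component Tests are possible.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical fat "service": business rules interleaved with database and HTTP I/O.
public class OrderService {

    public long placeOrder(long customerId, int quantity, long unitPriceCents) throws Exception {
        // I/O: read customer data straight from the database
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/shop");
             PreparedStatement stmt = conn.prepareStatement("SELECT vip FROM customers WHERE id = ?")) {
            stmt.setLong(1, customerId);
            ResultSet rs = stmt.executeQuery();
            boolean vip = rs.next() && rs.getBoolean("vip");

            // Business logic: the pricing rule, buried between I/O calls
            long total = (long) quantity * unitPriceCents;
            if (vip || quantity >= 10) {
                total = total * 90 / 100; // 10% discount
            }

            // I/O: notify an external fulfilment system over HTTP
            HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("https://fulfilment.example.com/orders"))
                            .POST(HttpRequest.BodyPublishers.ofString("{\"customerId\":" + customerId + "}"))
                            .build(),
                    HttpResponse.BodyHandlers.discarding());

            return total;
        }
    }
}
```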
Case 2: Choose not to write Unit Tests: The Component architecture might be unit-testable (i.e. business concerns are separate from I/O concerns), but we consciously choose not to write Unit Tests. For example, the Component might be very simple, or the amount of business logic complexity (roughly approximated by LOC) might not be significantly higher than the amount of I/O logic complexity - or I/O complexity might even dominate. In those cases, we might not see the ROI of Unit Testing.
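As a (hypothetical) example of the I/O-dominated case: a thin pass-through Component where almost all of the code is I/O wiring and there is essentially no business rule to isolate - a Unit Test here would add little beyond what a Component Test already covers.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical thin component: almost entirely I/O wiring, no real business logic.
public class CustomerArchiver {

    public void archive(long customerId) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/shop");
             PreparedStatement stmt = conn.prepareStatement(
                     "UPDATE customers SET archived = TRUE WHERE id = ?")) {
            stmt.setLong(1, customerId); // the only "logic" is parameter mapping
            stmt.executeUpdate();
        }
    }
}
```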
Problems solved with Component Level Tests
Component Level Tests help teams work effectively, independently & in parallel, because each team gets feedback on their Component in isolation
Problems not solved with Component Level Tests
Developers have to wait several minutes (due to I/O slowness) for feedback on whether they introduced a regression bug - this is especially a problem for Components with a high degree of business logic complexity
The fundamental unsolved problem is that, because every test includes I/O, the feedback loop is slow. A wait of several minutes is too long for feedback on business logic changes; it causes developers to context-switch, or prevents them from working incrementally (they may decide to batch up several changes before running the slow Component Tests, but that increases the cost of debugging when some Component Test fails).
Approach 1. Writing Unit Level Tests last
At some point, in Components with high business logic complexity, the team recognizes that the wait for Component Tests has become too long because the Component Test suite has grown so much. Why? High business logic complexity → many scenarios → many (slow) Component Tests → an ever-slower feedback loop.
So, the team realizes they want to adopt Unit Tests. Why? Because every Unit Test is an order of magnitude faster than a Component Test (a Unit Test runs purely in-memory, a Component Test includes I/O, and in-memory execution is an order of magnitude faster than I/O). We can then have hundreds or thousands of Unit Tests and still get feedback within seconds.
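To put rough numbers on it (illustrative assumptions, not a benchmark): 300 Component Tests at roughly 2 seconds of I/O each take around 10 minutes, while 300 Unit Tests at a few milliseconds each finish in a second or two.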
The team naturally starts by writing Unit Tests last. Even though they’re practicing TDD at the System Level and at the Component Level, nothing forces them to practice TDD at the Unit Level. Their process might be:
Write a failing Acceptance Test
Each team, working in parallel:
Writes a failing Component Test
Writes some code
Writes Unit Tests & Narrow Integration Tests (they pass)
Verify that all the Component Tests & Contract Tests pass
Verify that all the Acceptance Tests pass
Problems solved with Unit Level Tests last
The benefit is that each team gets fast feedback on whether their Business Logic works correctly, and relatively fast feedback on whether their Presentation/Infrastructure Logic works correctly
Problems not solved with Unit Level Tests last
The Unit Tests (and Narrow Integration Tests) might not be valid - they might not be testing what we expect them to test, because we’ve never seen them fail. A passing Unit Test (or Narrow Integration Test) doesn’t prove anything on its own - it might be an always-green test, which is useless. Such tests won’t be good at detecting bugs, so we’ll end up with a higher percentage of failing Component Level Tests and waste more time debugging.
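Here is a sketch of what an always-green test can look like (reusing the hypothetical OrderDiscountPolicy from the earlier sketch): because the expected value is derived from the code under test, the assertion can never fail, no matter what bug we introduce. Writing the test after the code, we never saw it fail, so nothing forced us to notice that it cannot fail.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AlwaysGreenDiscountTest {
    @Test
    void bulkOrdersGetTenPercentDiscount() {
        var policy = new OrderDiscountPolicy(); // hypothetical class from the earlier sketch
        long discount = policy.discountFor(12, 1000);
        // The "expected" value comes from the code under test itself, so this
        // assertion is trivially true - the test stays green even if the
        // discount rule is completely broken.
        assertEquals(policy.discountFor(12, 1000), discount);
    }
}
```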
We could partially address this problem with Code Coverage, which helps us discover lines of code that are never executed by Unit Tests & Narrow Integration Tests. This is useful for adding some of the missing tests.
However, Code Coverage can’t tell us whether we’re verifying behavior - lines of code could be executed while their output is never asserted (or only partially asserted). This is where we’d need Mutation Coverage. The problem is that Mutation Testing is too slow to run on every commit; it might run as a nightly job (or at some other interval), but that means slow feedback. We would then have to spend additional time updating our Unit Tests & Narrow Integration Tests.
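For example (again using the hypothetical OrderDiscountPolicy), the following test executes the discount branch, so line coverage reports it as covered, yet it asserts nothing about the output; a mutation testing tool such as PIT would report the surviving mutants that plain coverage misses.

```java
import org.junit.jupiter.api.Test;

class CoverageWithoutAssertionTest {
    @Test
    void executesTheDiscountBranch() {
        // Line coverage reports the discount calculation as covered...
        new OrderDiscountPolicy().discountFor(12, 1000);
        // ...but nothing asserts the result. A mutant that changes ">= 10" to "> 10",
        // or drops the discount entirely, would survive - mutation testing exposes
        // this gap; line coverage cannot.
    }
}
```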
We end up with time wasted on late debugging & late rework (because these Unit Tests & Narrow Integration Tests are not good at detecting regression bugs, so failures are detected mostly at the Component Test Level). This makes development more expensive.