15 Comments
Jelena Cupac:

This is a very tough situation - how can you find a company to work for - one that genuinely upholds best practices?

Valentina Jemuović:

There are very few companies that uphold best practices, so it is a challenge.

But here's what you can do:

1. Network. Try to find Senior Engineers, Tech Leads, Team Leads, and Engineering Managers who support best practices and who work at a company where they get to practice them. If you find those people and get that job, it'll be easier for you, with the right environment. Try to attend meetups and find other ways to expand your reach.

2. Initiate change. If you're not able to find the right environment, then you can be the catalyst for change. Generally, as you progress up to Senior Engineer level and especially Tech Lead level, you'll have much greater authority to enact practices. This means that even if the rest of the company is working in a bad way, you can make changes locally - in your project, your module, your microservice - where you have ownership, and where you have the authority to set practices for your team.

MURALI MOHAN NARAYANABHATLA:

Here are a few questions from some of our managers:

1) What data do you have to prove that the team needs TDD?

2) How do we know if the team members are following test first or test after?

3) What are the baseline measures now, and what measures prove that they are reaching the target state?

Any advice on this would be appreciated! Thanks in advance!

Murali

Valentina Jemuović:

"1) What data do you have to prove that the team needs TDD?"

Most of the "acceptable" data that exists concerns the time developers spend debugging. For example, this article states that developers spend 75% of their time debugging https://coralogix.com/blog/this-is-what-your-developers-are-doing-75-of-the-time-and-this-is-the-cost-you-pay/ while other articles say they spend 25-50% of their time debugging https://undo.io/solutions/developer-productivity/reduce-time-spent-debugging/

Whilst there isn't consensus on the exact number, it is clear that it is a substantial percentage, and that we would like to reduce that debugging time.

That can be used as a strong argument for developers doing automated testing (and not just relying on QA E2E Tests), because only developer automated testing (e.g. unit testing) is able to reduce the time spent debugging.

So that's an argument for automated testing.

But what about data for TDD? Those studies are, to be honest, very rare, and where they do exist, I find the data questionable.

HOWEVER, we then ask ourselves: how do we get a reliable automated test suite, to reduce the debugging time? Uncle Bob wrote that there are two ways to get to a reliable test suite https://blog.cleancoder.com/uncle-bob/2016/06/10/MutationTesting.html

Option 1. Test Last AND Mutation Testing

Option 2. TDD

From this, I conclude that the strong argument is for automated testing (using the debugging time data), but there isn't a data argument for TDD necessarily (I find mixed studies regarding TDD & data).

Hence, when I convince people, I now stick with automated testing as the main argument, because that's more easily understood. Even in coaching, I spend months with the team working test-last, making sure they can write tests properly. Then I let them face the pain of low mutation test scores and of having to kill mutants (a time-consuming and boring activity, but needed to ensure they have a reliable test suite). Only after all that do I introduce TDD, and I get them to compare what they prefer: Test Last plus Mutation Testing, or TDD. After that practice, they generally choose TDD. Thus, I do not use data in this argument.

Valentina Jemuović:

Finally, my summary:

FOR MANAGERS:

- I don't see the necessity of mentioning TDD; many might not even know what it is. I just talk about Automated Testing, which is generally accepted by managers.

FOR TEAMS:

- I stopped trying to convince anyone of TDD

- Instead I convince them of Automated Testing (which is something that they're already convinced about)

- Then I help the team to do Automated Testing well (I keep Test Last, no need to talk about TDD)

- Then I introduce TDD as the alternative, and get them to compare the two ways of working, so that they convince themselves of TDD...

- Summary: I don't tell them why TDD is better, and I don't attempt to prove anything. I let them discover it themselves, using a guided approach in coaching (I haven't written about it yet, but I might in the future)

You can even see the above reflected in my TDD in Legacy Code series https://journal.optivem.com/p/tdd-in-legacy-code-outline --> notice in each phase how 90% of the articles are about Automated Testing, and TDD only comes at the end. That's because it mirrors the approach I use in coaching: I leave TDD for the end, and I don't try to convince people anymore.

Valentina Jemuović:

"3) What are the baseline measures now and what measures prove that they reaching the target state?"

At the unit level, that's where measures are the easiest. I'd track Code Coverage AND Mutation Score... the Mutation Score measures whether the unit test suite is strong in protecting against regression bugs.

At higher levels, e.g. system level, I'd measure bug reports from QA Engineers. Assuming the team is properly writing Acceptance Tests (that means QA Engineer is involved at the start too), then there should be almost no bug reports from QA Engineers for the acceptance criteria (though QA Engineer is free to add additional scenarios, which are translated to Acceptance Tests).

Also, at higher level, I'd measure that a bug doesn't re-occur.

And at the highest level of all, I'd track DORA metrics. With a reliable test suite, the impact should be:

- Frequent deployments

- Short time between acceptance and deployment

- Low frequency of deployment failures

- Short time to restore from failure

Notice I said a reliable test suite, but I didn't say TDD. Even as part of Continuous Integration, it is Automated Testing that's necessary, not TDD, though we can see Dave Farley, Martin Fowler, etc. recommend TDD as the way to achieve it. A reliable test suite is a natural outcome of TDD, but hard to achieve with Test Last.
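
As a rough illustration of how those four DORA metrics could be computed from a deployment log (the log structure, field names, and values below are all invented for this sketch):

```python
from datetime import datetime, timedelta

# Hypothetical deployment log; each record notes when the change was
# accepted, when it was deployed, whether the deployment failed, and
# (if it failed) when service was restored.
deployments = [
    {"accepted": datetime(2024, 1, 1, 9),  "deployed": datetime(2024, 1, 1, 15),
     "failed": False, "restored": None},
    {"accepted": datetime(2024, 1, 2, 10), "deployed": datetime(2024, 1, 2, 12),
     "failed": True,  "restored": datetime(2024, 1, 2, 13)},
    {"accepted": datetime(2024, 1, 3, 9),  "deployed": datetime(2024, 1, 3, 11),
     "failed": False, "restored": None},
]
period_days = 3  # length of the observation window

# 1. Deployment frequency: deployments per day over the window.
frequency = len(deployments) / period_days

# 2. Lead time: average time from acceptance to deployment.
lead_time = sum((d["deployed"] - d["accepted"] for d in deployments),
                timedelta()) / len(deployments)

# 3. Change failure rate: share of deployments that caused a failure.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# 4. Mean time to restore: average downtime of failed deployments.
failed = [d for d in deployments if d["failed"]]
mttr = sum((d["restored"] - d["deployed"] for d in failed),
           timedelta()) / len(failed)

print(frequency, lead_time, failure_rate, mttr)
```

For this invented log, that works out to 1 deployment per day, an average lead time of 3h20m, a 1/3 change failure rate, and a 1-hour mean time to restore.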

Valentina Jemuović:

"2) How do we know if the team members are following test first or test after?"

Teams following TDD tend to have high Code Coverage AND high Mutation Test score.

Teams following Test Last may have high Code Coverage, BUT in general they have a LOW Mutation Test score. That means their tests aren't protecting them against regression bugs. Those tests are a waste.
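
To make that concrete, here's a toy sketch in Python of why high Code Coverage can coexist with a low Mutation Score (hand-rolled purely for illustration; real mutation testing tools such as PIT or mutmut generate the mutants for you, and all names here are invented):

```python
def is_adult(age):
    """Original production code."""
    return age >= 18

def is_adult_mutant(age):
    """Mutant: '>=' mutated to '>' (a typical boundary mutation)."""
    return age > 18

def weak_test(fn):
    """Executes every line (100% coverage) but pins down nothing."""
    fn(30)       # the code runs, so coverage is counted...
    return True  # ...but nothing is asserted, so the test always passes

def strong_test(fn):
    """Pins down the boundary, so the mutant cannot survive."""
    return fn(18) is True and fn(17) is False

# The weak test "passes" for the original AND the mutant:
# the mutant SURVIVES, so the mutation score stays low.
assert weak_test(is_adult) and weak_test(is_adult_mutant)

# The strong test passes for the original but fails for the mutant:
# the mutant is KILLED, which is what raises the mutation score.
assert strong_test(is_adult) is True
assert strong_test(is_adult_mutant) is False
```

Both tests give 100% line coverage of `is_adult`, but only the strong one contributes to the mutation score.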

You can read some of my earlier articles:

- Code Coverage vs Mutation Testing https://journal.optivem.com/p/code-coverage-vs-mutation-testing

- Don't chase Code Coverage goals https://journal.optivem.com/p/dont-chase-code-coverage-goals

- 100% Code Coverage but 0% Regression Bug Protection?! https://journal.optivem.com/p/100-code-coverage-but-0-bug-protection

- Code Coverage Targets - Recipe for Disaster https://journal.optivem.com/p/code-coverage-targets-recipe-for-disaster

Fabien Ninoles:

Oh dear, I've seen this both ways ("don't want any tests" and "tests are more important than working code") on both sides of the aisle (as a manager and as an IC). People, especially senior people, can be quite stubborn in their stance against certain practices. Most of the time it is rooted in some trauma from their past, but it can also be that they are just trying to get their way, looking to use a methodology they heard about but have no actual experience with in practice.

Strangely, it's usually far easier to change this as an IC than as a manager, but it can also sabotage your career if you don't have the support of your manager, or at least of someone with some additional authority over the team. If you can get one of those, then truly, the advice you gave is the easiest if not the best. At the same time, let's be very clear: tests, or any kind of best practice, aren't what keep the business afloat; only their results do. So make sure that, whatever you do, you increase that value and don't slow things down, even if only temporarily. That's not easy, but it will very much increase your chances of seeing those practices accepted.

Valentina Jemuović:

"At the same time, let be very clear: tests, or any kind of best practices, aren't what keep the business afloat, only their results do it. So make sure that, whatever do, you increase such value and not slowing things down, even if it's just temporary. That's not easy, but it will very much increase your chance to see those practices being accepted."

Well-said! Exactly, in the end, the business cares about the results.

What is your perspective on how to increase value and to not slow things down? Perhaps you could provide a story/example...

Fabien Ninoles:

I think my preferred way to approach introducing a testing practice in an org is through regression testing. The ability to reproduce a problem consistently on a branch, and then the ability to prove that your change adequately fixes the issue, is an elegant proof that you are adding value, if done in adequate time.
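
A minimal sketch of that regression-first workflow with pytest (the function, the bug, and all names here are hypothetical):

```python
# Hypothetical regression-first workflow: first write a test on a branch
# that reproduces the reported bug, then fix the code so the test passes.

def parse_price(text):
    """Fixed version. The invented "buggy" version was just float(text),
    which raised ValueError on inputs like "1,299.99"."""
    return float(text.replace(",", ""))  # fix: strip thousands separators

def test_regression_thousands_separator():
    # Step 1: this test FAILED against the buggy version, reproducing
    # the report consistently. Step 2: with the fix applied, it passes,
    # which proves the change actually addresses the issue.
    assert parse_price("1,299.99") == 1299.99

def test_plain_number_still_works():
    # Guard: the fix must not break the previously working path.
    assert parse_price("42.50") == 42.5
```

Run with `pytest` on the branch: before the fix the regression test fails, after the fix the whole suite passes, making the added value visible without touching anything else.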

After that, small and very focused refactorings, made as part of feature development (and not on their own), are also a great way to introduce meaningful change to an architecture and bring in new concepts. The refactoring must be small, have a clear value-add, and, importantly, not change the current style of the code. Inconsiderate inconsistencies have done more damage than any pattern, and most teams are used to their patterns and have learned how to avoid their pitfalls, which is not the case with your patterns if you are new to the team. So slow integration of new patterns is essential if you don't want your dream patterns to become someone else's nightmare.

Valentina Jemuović:

1. Regression testing is indeed essential. Without regression testing, it's unsafe to make behavioral changes, unsafe to make structural changes, unsafe to do optimization... we're then locked-in.

2. Exactly, we should integrate new practices slowly to avoid causing chaos within the team.

Jose Garrera:

In my previous profession as an industrial engineer, I was responsible for improving productivity. Such improvements generally involve modifying people's ways of working. There wasn’t a single case where resistance didn’t arise. This resistance to change is inherent in human nature, and the strategies to achieve change vary with context.

However, one strategy that proved very effective for me was leading by example—without pressure, without imposition. Often, when people see that the change is beneficial for everyone, they become curious and the change begins naturally from that point.

Valentina Jemuović:

That's the best way to enact change, to be a true leader rather than just impose rules.

Could you share one story from your work as an industrial engineer, what was one productivity problem, what was your proposed solution, how did you lead by example in that situation...

Jose Garrera:

An example case occurred at a company that manufactures filters for automotive vehicles. We chose a workstation where improvements would yield significant jumps in productivity. This choice was about the impact of the changes: we wanted to emphasize the results to be achieved.

We reconfigured the workstation with the person who originally held the position, asking them somewhat guided questions (trying to have the ideas come directly from the operator). We assisted by calculating and designing the new workflow and then implementing it.

As a final step, we demonstrated the old way of working versus the new way, using various indicators to make the improvement tangible. From there (being able to see the results), the company’s management didn’t hesitate and led the change by supporting our work.

As an additional point, I’d add that we often use the phrase: "It's not that you're doing it wrong. It's about doing a little better today what you did well yesterday."

Valentina Jemuović:

1. Guided questions are indeed very powerful. When we ask a person questions, they give the answers themselves, they feel ownership (it's their idea), AND the idea actually makes sense to them (rather than us convincing them), so they understand how to implement it AND are more motivated to do so.

2. Comparing the old way vs the new way, and letting them see which is better, is also powerful. I used that for TDD, whereby I'd show people Test-Last Development plus Mutation Testing versus TDD, and then they get to choose which they prefer (that tends to lead people to TDD), instead of me saying why TDD is better.

3. That's an excellent phrase: "It's not that you're doing it wrong. It's about doing a little better today what you did well yesterday." Exactly. No one wants to be made to feel bad, worthless, or incompetent; that's why it's better to focus on the word "improvement".
