“We spent so much time on maintenance when using Selenium, and we spend nearly zero time with maintenance using testRigor.” - Keith Powe, VP of Engineering, IDT
How can manual QA Engineers start with testRigor?
If the Manual QA Engineers already have documented procedures, they can import their existing manual test procedures into testRigor, e.g. by connecting to JIRA, TestRail, etc. It's recommended to import in batches: import a batch, clean up the steps, get it running... then go on to the next batch.
If the Manual QA Engineers do NOT have documented procedures, they can use AI functionality in testRigor to generate the test cases.
Thanks for publishing this, Valentina. I always learn a lot from your writing on ATDD and TDD, and it's valuable to see your perspective on these concepts.
I'm very much in favour of the Four Layer Model (which I originally learned from Dave Farley, with practical real-world examples from your talks and writings), and I have seen its benefits in terms of decoupling and reduced maintenance.
Reading this piece about ATDD with testRigor, I am trying to understand the context:
1. From the outside, it seems that a lot of the plumbing lives inside testRigor's platform rather than in a Four Layer architecture that the team owns. Is that a fair way to look at it?
2. You mention that many executives want results 'right now' and that converting a whole manual test suite can't be done overnight with the Four Layer Model. In that context, do you see tools like testRigor mainly as a short-term, pragmatic option under delivery pressure, or as a long-term solution for teams that could or should invest in a Four Layer Model? To put it simply: for which kinds of teams/products would you lean towards a tool like testRigor?
I am asking because you know better than anyone how much energy, effort, and investment it takes to scale these practices inside a team. Today it's getting harder to explain the value of these investments when AI can generate code and tests with relatively little effort.
Do you think it would help readers if this context (when you would reach for a tool like testRigor versus the Four Layer Model) were made more explicit?
I would love to hear how you frame these trade-offs when you work with teams.
[Part 3/3] "Do you think it would help readers if this context (when you would reach for a tool like testRigor versus the Four Layer Model) were made more explicit?"
If a company is in this situation:
- Stuck in Manual QA for regression testing, AND
- Has attempted E2E Test Automation the bad way (tightly coupled to Selenium/Playwright, etc.), wasting significant time on test maintenance whilst not getting any benefits, AND
- Is working in silos, where testing is seen as QA's job, AND
- Management isn't willing to invest in building skillset capability for the Four Layer Model, AND
- Management wants results from Day 1
Then, in those cases, I see the Four Layer Model as impossible for such companies. The only viable way is a tool like testRigor, which enables Manual QA Engineers to import their manual test cases and get started with automation from Day 1, in a way that's more maintainable than their previous failed attempts with Selenium/Playwright.
Thanks so much for taking the time to write such a detailed 3-part reply. This really helps me understand how you are thinking about testRigor vs the Four Layer Model.
Your framing of "90% of teams with low engineering maturity" vs "<10% who can invest in the Four Layer Model" makes a lot of sense, and I agree that tools like testRigor can be a huge step up from brittle Selenium/Playwright scripts and pure manual regression.
The bit I am still wrestling with is how this message lands for those low-maturity teams.
When an article talks about "90% test automation with AI" and plain-English tests that QA can write from day one, a lot of teams may hear:
"We don't really need to invest in test architecture or developer involvement, the tool will handle it."
We are already seeing a similar pattern with agentic tools like Claude, Cursor, etc., where the majority of people conclude that "AI agents can write everything for us," so we no longer need to build testing, design, and architecture capabilities in the team.
In our experience, that's where new kinds of test-debt appear:
- as the application grows, the plain-English tests start to drift, duplicate intent and become hard to read,
- suites end up encoding a lot of "how" instead of "what", only now the plumbing is hidden inside a vendor platform,
- and a few years later, the team is again afraid to touch tests because they don't really understand what's covered.
I believe that the teams who succeed with tools like testRigor in the long term are the ones who still apply the spirit of the Four Layer Model: they establish a clear domain language, create reusable higher-level actions, and are intentional about what belongs in ATs vs. lower-level tests.
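As a rough sketch of what those reusable higher-level actions can look like, here in TypeScript/Playwright rather than any vendor's syntax (all names and selectors are illustrative):

```typescript
import { Page } from '@playwright/test';

// The low-level "how" (selectors, clicks) lives in one place...
async function addItemToBasket(page: Page, sku: string): Promise<void> {
  await page.fill('#search', sku);
  await page.click('text=Add to basket');
}

// ...so tests can reuse a named action that speaks the domain language.
export async function buyItem(page: Page, sku: string): Promise<void> {
  await addItemToBasket(page, sku);
  await page.click('text=Checkout');
  await page.click('text=Confirm order');
}
```

The same discipline applies inside a plain-English tool: the reusable action carries the domain vocabulary, and the widget-level steps hide behind it.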
I would be really interested to hear:
- When you work with teams adopting testRigor in those 90% situations, do you still guide them to introduce some lightweight test architecture/domain vocabulary inside the tool, so that tests remain readable as the product grows? (Though I am not sure how far you can really take that inside tools like testRigor.)
- And how do we communicate to leadership that even with AI-based tools, there is still ongoing investment needed in test design and ownership?
My concern is not with testRigor itself, but that some readers might walk away thinking "we can skip the hard work of building testing capability, we will just buy it."
Thanks again for all the work you have done for the TDD/ATDD community. I have learned a lot from your writings, and I am still learning.
[Part 6] Regarding communication with leaders: even before AI, Record and Playback automation tools were highly popular. Executives preferred those tools over building up the test architecture themselves. A similar thing happens with AI tools.
It depends on the company's engineering culture, and heavily on the CTO / Engineering Manager. If they understand the fundamental reasons why we need to separate the DSL from the plumbing, then we'll be in a position to make the argument for investing in it.
[Part 4] The teams that engage me are those who are willing to implement the architecture themselves, so as a Technical Coach, I help them adopt ATDD using the Four Layer Model. Those are the kinds of teams that would be interested in working with me.
As for teams that would choose a tool (it might be a Selenium record-and-playback tool, or it might be some AI tool): those kinds of companies have not contacted me up to now, since they generally just give the tool to their QA, and there isn't much of a point in a Technical Coach. Instead, those teams may get support from the tool vendor's customer service.
However, I do think everyone should familiarize themselves with the Four Layer Model, irrespective of whether or not they are using a tool. In practice, though, I haven't seen that happen.
[Part 5] How do I define readability? I define it in terms of speaking the domain language, and that's a must-have in the Four Layer Model, whereby tests are written in terms of the DSL.
However, when it comes to tools, that cannot be enforced. For example, when a team migrates from their manual procedures to testRigor, they'll import tests that are written as UI commands.
They would need basic training in writing a good DSL and in the difference between domain language and UI language. If they don't get that, we end up with UI coupling.
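A minimal sketch of that difference, expressed here in TypeScript/Playwright terms rather than any particular tool's syntax (the selectors and the loginAs helper are hypothetical):

```typescript
import { test, Page } from '@playwright/test';

// UI language: the test is a list of widget commands, coupled to the screen.
test('login (UI language)', async ({ page }) => {
  await page.fill('#input-field-23', 'alice@example.com');
  await page.fill('#input-field-24', 's3cret');
  await page.click('#btn-submit');
});

// Domain language: the test states intent; the widget details live elsewhere.
test('login (domain language)', async ({ page }) => {
  await loginAs(page, 'alice@example.com', 's3cret');
});

// The one place that knows about the widgets.
async function loginAs(page: Page, email: string, password: string): Promise<void> {
  await page.fill('#email', email);
  await page.fill('#password', password);
  await page.click('#btn-submit');
}
```

When the login screen changes, only loginAs needs to change; the domain-language test keeps reading the same.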
[Part 2/3] "2. You mention that many executives want results 'right now' and that converting a whole manual test suite can't be done overnight with the Four Layer Model. In that context, do you see tools like testRigor mainly as a short-term, pragmatic option under delivery pressure, or as a long-term solution for teams that could or should invest in a Four Layer Model? To put it simply: for which kinds of teams/products would you lean towards a tool like testRigor?"
Many teams (perhaps 90% worldwide) have low engineering maturity: they are stuck with Manual QA or bad UI tests coupled to Selenium/Playwright. No one has the adequate architecture skillset for the Four Layer Model, and there is no organizational willingness to invest in building it. Testing is seen as QA's job, and there is no willingness to involve developers in the effort. In those situations, a tool like testRigor can be a practical wedge to move the needle quickly, especially to remove reliance on manual testing bottlenecks: it enables Manual QA Engineers to start migrating their manual QA scripts into testRigor from Day 1.
Fewer teams (perhaps under 10% worldwide) work in companies willing to invest in skillset building for the Four Layer Model. This requires a solid engineering/architecture skillset, willingness for developers to own the tests (rather than QA), and the ability to invest. For those companies, I recommend the Four Layer Model: you invest in building up the plumbing, but you get complete control and maintainability.
[Part 1/3] "1. From the outside, it seems that a lot of the plumbing lives inside testRigor's platform rather than in a Four Layer architecture that the team owns. Is that a fair way to look at it?"
Yes, your interpretation is fair. With the Four Layer Model, the team owns the abstraction and the plumbing. With testRigor, that plumbing lives inside the platform. That’s the trade-off: you get speed and convenience up front, but you give up some architectural control and long-term flexibility.
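As a rough illustration of what "owning the abstraction and the plumbing" can look like, here is a minimal TypeScript/Playwright sketch of the Four Layer Model, assuming a hypothetical web-shop domain (all names are illustrative, not a real API):

```typescript
import { test, Page } from '@playwright/test';

// Layer 3: protocol driver - the only code that talks to the UI (the "plumbing").
class ShopUiDriver {
  constructor(private page: Page) {}
  async submitOrderForm(sku: string): Promise<void> {
    await this.page.fill('#sku', sku);
    await this.page.click('#place-order');
  }
  async confirmationText(): Promise<string> {
    return this.page.innerText('#confirmation');
  }
}

// Layer 2: DSL - domain vocabulary only, no UI details.
class ShopDsl {
  constructor(private driver: ShopUiDriver) {}
  async placeOrder(sku: string): Promise<void> {
    await this.driver.submitOrderForm(sku);
  }
  async assertOrderConfirmed(sku: string): Promise<void> {
    const text = await this.driver.confirmationText();
    if (!text.includes(sku)) throw new Error(`order ${sku} not confirmed`);
  }
}

// Layer 1: the test case - pure "what", readable by the whole team.
// (Layer 4 is the system under test itself.)
test('a customer can place an order', async ({ page }) => {
  const shop = new ShopDsl(new ShopUiDriver(page));
  await shop.placeOrder('SKU-42');
  await shop.assertOrderConfirmed('SKU-42');
});
```

Because the team owns the driver layer, swapping Playwright for something else, or the UI for an API, only touches Layer 3.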