Marco Emrich & I worked out this prompt for following TDD, plus some rules for refactoring. It works surprisingly well (tested with Cursor & Claude Code). Give it a try if you like: https://github.com/ferdi145/mad-tdd-mob-ai-driven/blob/main/CLAUDE.md
What also works well is to do a test list or an Example Mapping, e.g. on Miro or a whiteboard, and then just paste a picture to translate it into a Markdown file. This can then be used as input for the TDD loop.
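For illustration, such a Markdown file might look like this (a made-up feature; the cases are my own example, not from the repo above):

```markdown
# Test list: cart total

- [ ] empty cart returns a total of 0
- [ ] single item returns the item price
- [ ] multiple items return the sum of the prices
- [ ] orders over 100 get a 10% discount
```

Each unchecked item then becomes the next failing test in the TDD loop.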
Thanks for sharing! Do you have any open source project where you used that prompt to showcase what was generated (e.g. some demo project)...
Also, what are your thoughts on Cursor/Claude versus Copilot? Which do you tend to use, and why?
Nothing I can share, sorry. But I suggest trying it out - just for the next feature.
I use Claude Code personally; it feels really powerful in the terminal and it feels mature. I haven't tried other options in the last few months (which is an eternity in "AI years"), but from talking to colleagues etc., Claude Code regularly gets mentioned as being one of the top tools right now, if not the top one - YMMV.
Based on your experience, where do you see Claude Code performing really well, versus what do you see as its limitations...
Do you have any tips for prompting AI to write good tests & code?
Regarding writing good tests: to be honest, I haven't found a solution there yet. In my experience, AI generates meaningless tests. The only direction I see as helpful is writing better requirements, but then that's the same effort as writing the test itself.
However, for writing code that makes the test pass - that's where AI shines. Some prompts: "Can you make the test pass?" or "Can you update file XYZ to make the test pass?" I might also add: "Please implement only minimal code, not more than is needed to make the test pass."
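To make that concrete, here is a minimal sketch continuing the hypothetical cart example (the file names, `cart_total`, and pytest are my own assumptions, not from the prompt above). You write the failing test first:

```python
# test_cart.py -- the failing test you write first (hypothetical example)
from cart import cart_total

def test_single_item_returns_item_price():
    assert cart_total([50]) == 50
```

Asking for "only minimal code" should then produce something like:

```python
# cart.py -- deliberately minimal: just enough to make the test above pass.
# The next item on the test list ("multiple items") would force the real sum().
def cart_total(prices):
    return prices[0]
```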
Similarly, for refactoring I might ask "Can you refactor this code?" or "How would you improve this code?"
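And to make the refactoring prompt concrete, a before/after sketch on the same toy example (again my own assumption, not output from the linked prompt):

```python
# Before: imperative accumulation plus inline magic numbers.
def cart_total(prices):
    total = 0
    for p in prices:
        total = total + p
    if total > 100:
        total = total - total * 0.10
    return total

# After "Can you refactor this code?": same behavior, named constants,
# and the loop replaced by the built-in sum().
DISCOUNT_THRESHOLD = 100
DISCOUNT_RATE = 0.10

def cart_total(prices):
    total = sum(prices)
    if total > DISCOUNT_THRESHOLD:
        total *= 1 - DISCOUNT_RATE
    return total
```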