8-Step Human–AI Code Review
AI-Powered Code Review
📅 Join me for Acceptance Testing (Live Training) on Wed 25th Feb (17:00 - 19:00 CET) (100% discount for Optivem Journal members)
🔒 Hello, this is Valentina with a premium issue of the Optivem Journal. I help Engineering Leaders & Senior Software Developers apply TDD in Legacy Code.
I used to love code reviews. Going through a merge request line by line, questioning naming, debating implementation choices, weighing which design patterns to use, and so on.
Fast forward to today. AI is everywhere. Developers use AI to write code, refactor code, and write tests. What used to take hours or days now takes minutes.
The problem? Just because AI is fast doesn’t mean the code is good. Or even maintainable. How much should humans actually review?
Code Review in the Age of AI
A developer recently told me he’s stopped reviewing code the way he used to.
He doesn’t read every line anymore. Instead, he checks:
Is the code readable?
Can a human actually understand what’s going on?
If the answer is no, the author should explain the code or improve it. That’s it.
Human Review Was Already Shrinking Before AI
Even before AI, human review was already shrinking.
Linters in the pipeline would run formatting checks. SonarQube would run clean-code checks: cyclomatic complexity, duplication, code smells. A lot of the issues that reviewers used to flag were already handled automatically.
So the shift didn’t start with AI.
AI just took it one level further.
Before AI, SonarLint would flag code with:
Code smells
Complexity
Duplication
Developers fixed those issues locally.
Once the code reached the pipeline, linters and SonarQube ran the checks again, and developers fixed any remaining problems before merging.
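
To make that concrete, here’s a minimal, hypothetical Java sketch (the class, regions, and prices are invented for illustration) of the kind of issue that gets caught locally: duplicated branches that inflate cyclomatic complexity, followed by the refactored version.

```java
// Hypothetical illustration: the kind of duplication and branching
// that static analysis tools like SonarLint flag in the IDE.
public class ShippingCost {

    // Smell: three near-identical branches. Duplication plus nested
    // conditionals drive up cyclomatic complexity, so the tooling
    // flags this before a human reviewer ever sees it.
    static double costSmelly(String region, double weightKg) {
        if (region.equals("EU")) {
            if (weightKg <= 1.0) return 4.99;
            return 4.99 + (weightKg - 1.0) * 2.0;
        } else if (region.equals("US")) {
            if (weightKg <= 1.0) return 6.99;
            return 6.99 + (weightKg - 1.0) * 2.0;
        } else {
            if (weightKg <= 1.0) return 9.99;
            return 9.99 + (weightKg - 1.0) * 2.0;
        }
    }

    // Fixed locally: the duplication collapses into one expression,
    // and the reviewer never needs to comment on it.
    static double cost(String region, double weightKg) {
        double base = switch (region) {
            case "EU" -> 4.99;
            case "US" -> 6.99;
            default -> 9.99;
        };
        return base + Math.max(0.0, weightKg - 1.0) * 2.0;
    }

    public static void main(String[] args) {
        // Both versions compute the same cost; only the shape changed.
        System.out.println(costSmelly("EU", 2.5));
        System.out.println(cost("EU", 2.5));
    }
}
```

The specific refactoring isn’t the point. The point is that this whole class of feedback moved out of the merge request and into the tooling.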