Anthropic launches Claude Code Review to check your AI-generated code for $15–25 a scan
Anthropic has introduced a new tool called Claude Code Review in research preview. It lets you run automated code checks directly within Claude, no third-party plugins needed – just enable the option in admin settings.
As shown on the Claude YouTube channel, once Code Review is active, "a team of agents" crawls through your codebase in parallel, flagging potential bugs. The system filters out false positives on its own and ranks findings by severity.
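The fan-out pattern described above – many reviewers in parallel, a confidence filter, then a severity sort – can be sketched in a few lines of Python. Everything here is illustrative: the `Finding` fields, the severity scale, and the stub `review_file` agent are assumptions, not Anthropic's actual implementation (a real agent would call a model per file).

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Hypothetical severity scale: higher means more urgent.
SEVERITY = {"critical": 3, "major": 2, "minor": 1}

@dataclass
class Finding:
    file: str
    message: str
    severity: str
    confidence: float  # agent's self-reported confidence, 0..1

def review_file(path: str) -> list[Finding]:
    """Stand-in for one reviewer agent; returns toy findings so the
    pipeline below is runnable."""
    return [Finding(path, "possible off-by-one in loop bound", "major", 0.9),
            Finding(path, "style nit", "minor", 0.3)]

def run_review(paths: list[str], min_confidence: float = 0.5) -> list[Finding]:
    # Fan out one agent per file in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(review_file, paths)
    findings = [f for batch in results for f in batch]
    # Drop likely false positives, then rank what's left by severity.
    kept = [f for f in findings if f.confidence >= min_confidence]
    return sorted(kept, key=lambda f: SEVERITY[f.severity], reverse=True)

report = run_review(["src/app.py", "src/db.py"])
for f in report:
    print(f"[{f.severity}] {f.file}: {f.message}")
```

The interesting design decision is the middle step: filtering on the agent's own confidence before ranking is one plausible way to suppress false positives without a second review pass.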
Anthropic says the tool is modeled on the internal review process the company uses itself. According to the developers, code review has long been a bottleneck for engineers. Code Review won't approve pull requests automatically, but "it closes the gap so reviewers can actually cover what's shipping."
Not everyone is impressed, though. The head of AI at Hedgineer, a company that builds asset-management software for hedge funds, shared their take on X:

"Other than not needing a trigger like a GitHub Action and being configurable directly inside Claude Desktop, I see absolutely NO additional functionality or improvement over just setting up claude.yml with the /install-github-app command."
Their team already had an AI code-review workflow in place, so for them the main gain is the convenience of a tighter integration.
On the other side of the debate, Thariq, a developer on Claude Code, argues the new tool "uses a lot more compute and tends to catch more difficult bugs."
That extra compute comes at a cost – reviews are billed by token usage and typically run $15–25 per check, with larger or more complex pull requests pushing the price higher. Anthropic acknowledges this directly, stating that "Code Review optimises for depth and may be more expensive than other solutions, like our open source GitHub Action."
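Token-based billing makes the $15–25 band easy to reason about with back-of-the-envelope arithmetic. The sketch below is illustrative only: the per-million-token rates are assumptions chosen to land in that range, not Anthropic's actual Code Review pricing, and the token counts stand in for a large pull request read by several agents.

```python
# Rough per-review cost estimator. The USD-per-million-token rates are
# illustrative assumptions, not Anthropic's billing for Code Review.
PRICE_PER_MTOK = {"input": 15.0, "output": 75.0}

def review_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one review given tokens read (input) and written (output)."""
    return (input_tokens / 1e6 * PRICE_PER_MTOK["input"]
            + output_tokens / 1e6 * PRICE_PER_MTOK["output"])

# A big PR scanned by multiple agents, plus their written findings:
print(round(review_cost(900_000, 80_000), 2))  # prints 19.5
```

Under these assumed rates, reading ~900k tokens and writing ~80k lands at about $19.50 – and since a multi-agent review re-reads much of the diff per agent, input volume (and cost) grows quickly with PR size.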
There's an argument that a pricier tool like this actually makes more sense for hand-written code. Human-authored codebases tend to be smaller in scope, which means less to comb through and more focused results.
A paper published in January found that vibe coding has a measurably negative effect on open source software quality. AI-assisted coding raises productivity, but "it also weakens the user engagement through which many maintainers earn returns."
AI coding tools generate massive volumes of code – some of it good – but reviewing it at scale takes correspondingly more time. That's setting aside the more dramatic cases where AI coding tools have wiped entire databases, something even Amazon hasn't been immune to.
Anthropic frames Code Review as a tool built to dedicate more compute to the checking process. Whether it holds up against truly large codebases, and whether the price tag is justified, remains to be seen in practice.