React code reviews get oddly miserable right when the PR is “basically fine.” You know the type. The UI works. Tests are green. Everyone’s ready to hit merge.
And still… your gut’s doing that little warning siren thing.
Because future-you is absolutely capable of getting paged at 2 a.m. over something small and stupid: an impure render, an effect that never cleans up, a tiny accessibility regression nobody noticed. That’s the exact moment my AI React code review workflow pays rent. It catches the boring-but-expensive landmines fast, and it keeps me consistent when my brain is already toast.
Key Takeaways
- I use AI like a structured reviewer, not an “approve button.” It summarizes what changed, proposes a review plan, and points at the sketchy zones.
- For SEO + accessibility, I run an AI audit that zeroes in on semantic markup, heading hierarchy, and SEO-impacting HTML mistakes. This is inspired by Basit’s prompt.
- Guardrails come first. Always. ESLint hooks rules, jsx-a11y, and React StrictMode surface real issues before AI even opens its mouth.
- I double-check with real tools, not vibes: Lighthouse for SEO + accessibility audits, and sometimes DOM-level a11y checks like axe.
- Efficiency matters. ReAct-style review plus skip analysis can cut wasted effort. One team reported 40% lower token use and 30% faster PR processing.
What “AI React code review” means
When I say AI React code review, I mean using an LLM to do a few very specific jobs:
- Understand the diff and the surrounding context.
- Generate a checklist tailored to this PR, not a generic “React best practices” sermon.
- Call out likely bugs, design smells, missing tests.
- Offer concrete improvements with tradeoffs, not floaty advice like “consider performance.”
What it does not mean is replacing basic engineering hygiene.
Because if your repo has no linting, no CI, and no conventions… the AI will “review” pure chaos and sound confident while doing it. That’s the opposite of what you want. Tight constraints first. Then AI on top.
My AI React code review workflow
I landed on this after plenty of “ask ChatGPT to review this PR” experiments where the output looked fancy and said basically nothing.
1) Start with a ReAct-style “plan the review” prompt
I like the ReAct idea, Reasoning + Acting, because it forces an actual process instead of a stream of random opinions.
Jet Xu described using ReAct patterns for AI code review, plus “skip analysis” for trivial PRs, and reported wins like 40% reduction in token consumption and 30% faster PR processing. They even saw a 25% increase in user satisfaction by cutting noise and focusing on relevant changes. Honestly, that lines up with what I see too. Half the value is simply not spending time “reviewing” junk.
Here’s the prompt I use. I paste the diff and add a short repo note.
You are my senior React reviewer.
Goal: perform an AI React code review of this PR.
Step 1: Summarize what changed, list touched modules, infer intent.
Step 2: Create a review plan with priorities: correctness, hooks, performance, a11y, SEO, tests.
Step 3: Give findings as:
- Severity
- Evidence
- Fix
Step 4: list missing tests and edge cases.
Constraints: follow React rules of hooks, avoid speculative claims.

If the PR is tiny (docs or formatting), I tell it to skip deep review and only check for risky patterns. That’s my version of skip analysis. No heroics for a comma change.
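The skip-analysis idea is simple enough to sketch in code. Here’s a minimal JavaScript version; the file-classification patterns are my own illustrative assumptions, not Jet Xu’s actual heuristics:

```javascript
// Hypothetical sketch of "skip analysis": classify a PR's changed files
// and decide how much review effort it deserves.
const TRIVIAL_PATTERNS = [/\.md$/, /\.txt$/, /^docs\//, /\.(png|svg|jpg)$/];
const RISKY_PATTERNS = [/use[A-Z]\w*\.(js|ts)x?$/, /\.env/, /package\.json$/];

function planReview(changedFiles) {
  if (changedFiles.some((f) => RISKY_PATTERNS.some((p) => p.test(f)))) {
    return 'deep'; // custom hooks, env, or dependency changes get a full pass
  }
  if (changedFiles.every((f) => TRIVIAL_PATTERNS.some((p) => p.test(f)))) {
    return 'skip'; // docs/assets only: no deep AI review, just a quick risky-pattern scan
  }
  return 'standard';
}

console.log(planReview(['docs/intro.md', 'README.md'])); // 'skip'
console.log(planReview(['src/hooks/useCart.tsx'])); // 'deep'
```

The point isn’t the exact rules; it’s that a cheap pre-filter keeps the expensive review pass for diffs that can actually bite you.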
2) Run lint + hooks checks before asking AI
I don’t want the AI wasting time on stuff ESLint can prove in seconds.
eslint-plugin-react-hooks enforces the Rules of React and catches hook issues at build time, including rules-of-hooks and exhaustive-deps. React’s docs are pretty blunt about why this matters: hooks depend on consistent call order and correct dependencies, and messing up can quietly break correctness and performance.
External ref: https://react.dev/reference/eslint-plugin-react-hooks
eslint-plugin-jsx-a11y does static checks on JSX for accessibility issues. Their README calls out the big limitation: it’s static-only, so you’ll want runtime or DOM testing too, like @axe-core/react.
External ref: https://github.com/jsx-eslint/eslint-plugin-jsx-a11y
Example, flat config style:
// eslint.config.js
import reactHooks from 'eslint-plugin-react-hooks';
import jsxA11y from 'eslint-plugin-jsx-a11y';

export default [
  {
    files: ['**/*.{js,jsx,ts,tsx}'],
    plugins: {
      'react-hooks': reactHooks,
      'jsx-a11y': jsxA11y,
    },
    rules: {
      ...reactHooks.configs.recommended.rules,
      ...jsxA11y.flatConfigs.recommended.rules,
    },
  },
];

Once those guardrails are in place, AI React code review becomes way more valuable. You’re asking it to think about higher-level stuff: component boundaries, state flow, render behavior, and the big question, “does this still scale when the app gets messy?”
3) Use React StrictMode as a “cheap bug detector”
If a PR touches effects or state logic, I sanity-check under React StrictMode. Not because it’s fancy. Because it’s annoying in a productive way.
React explicitly says StrictMode will, in dev only:
- re-render components an extra time to find impure rendering bugs
- re-run Effects an extra time to find missing cleanup
External ref: https://react.dev/reference/react/StrictMode
This matters because AI suggestions can look perfectly reasonable and still sneak in mutation during render, or an effect that forgets its cleanup. StrictMode tends to make those mistakes loud, fast.
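To see why the double-invoke catches missing cleanup, here’s a tiny plain-JavaScript simulation of what StrictMode does to effects in dev. This is a sketch of the behavior, not React itself:

```javascript
// Simulate StrictMode's dev-only behavior: run the effect, run its
// cleanup (if any), then run the effect again.
function runEffectTwice(effect) {
  const cleanup = effect();
  if (typeof cleanup === 'function') cleanup();
  return effect();
}

// Pretend `listeners` counts subscriptions to some external store.
let listeners = 0;
const leakyEffect = () => { listeners += 1; /* forgot to return a cleanup */ };
const tidyEffect = () => { listeners += 1; return () => { listeners -= 1; }; };

listeners = 0; runEffectTwice(leakyEffect);
console.log(listeners); // 2 — the first subscription was never removed

listeners = 0; runEffectTwice(tidyEffect);
console.log(listeners); // 1 — cleanup ran between the two invocations
```

In a real app the leak shows up as duplicate listeners, double fetches, or zombie timers. StrictMode just makes it happen on the first render instead of in production.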
4) Do an AI audit for SEO + accessibility (React/Next.js)
This part is heavily inspired by Abdul Basit’s prompt-based audit approach.
React makes UI easy. It also makes “bad HTML easy.” And over time, those little mistakes stack up, especially in React and Next.js. Semantics get weird, heading hierarchy drifts, SEO quietly takes a hit.
Basit’s audit prompt aims at:
- semantic markup issues
- SEO-impacting mistakes
- poor heading hierarchy
Source: Basit’s post on auditing React code for SEO and accessibility.
So I ask the AI something like this:
Audit this React component for:
1) Semantic HTML (landmarks, button vs div, form controls)
2) Heading hierarchy (h1..h6, skipping levels)
3) Accessibility (labels, aria, focus, keyboard)
4) SEO-impacting HTML issues
Return a checklist + specific code fixes.

And yeah, I verify against real standards when the team starts debating feelings. WCAG 2.2 is what I point to when “I think it’s fine” turns into a 20-minute discussion.
External ref: https://www.w3.org/TR/WCAG22/
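One item from that audit prompt is mechanical enough to show in code. Here’s a toy heading-hierarchy checker, assuming you’ve already extracted the heading tags in document order (how you extract them is up to you):

```javascript
// Flag skipped heading levels (e.g. h1 → h3 with no h2), one of the
// hierarchy drifts the audit prompt above looks for.
function findHeadingSkips(headings) {
  const levels = headings.map((h) => Number(h.slice(1)));
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] - levels[i - 1] > 1) {
      skips.push(`${headings[i - 1]} → ${headings[i]}`);
    }
  }
  return skips;
}

console.log(findHeadingSkips(['h1', 'h2', 'h3', 'h2'])); // []
console.log(findHeadingSkips(['h1', 'h3'])); // ['h1 → h3']
```

Going deeper (back to h2 after an h3) is fine; only forward jumps are flagged.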
5) Verify with Lighthouse (AI React code review + proof)
AI is useful. Lighthouse is receipts.
Google’s Lighthouse is an open-source automated tool with audits for performance, accessibility, SEO, and more. You can run it in DevTools, in the CLI, or wired into CI. A run usually takes 30–60 seconds and gives you a report with failed audits plus fix guidance.
External ref: https://developer.chrome.com/docs/lighthouse/overview
CLI example:
npm i -g lighthouse
lighthouse https://localhost:3000 --view --only-categories=accessibility,seo

If you want a visual for the post, a good one is:
- Image idea: “AI React code review workflow diagram: Lint → AI review plan → A11y/SEO audit → Lighthouse verification → PR comments”
- Alt text: “AI React code review workflow showing ESLint hooks checks, JSX accessibility linting, AI audit prompts, and Lighthouse SEO/accessibility report”
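If you wire Lighthouse into CI with --output=json, you can gate the build on the report instead of eyeballing it. A sketch, assuming the standard LHR shape (categories.<id>.score on a 0–1 scale); the threshold and file path are my own choices:

```javascript
// Gate CI on a saved Lighthouse report (LHR JSON).
// e.g. lighthouse http://localhost:3000 --output=json --output-path=report.json
// then: const lhr = JSON.parse(fs.readFileSync('report.json', 'utf8'));
function checkScores(lhr, min = 0.9) {
  const failing = [];
  for (const id of ['accessibility', 'seo']) {
    const score = lhr.categories[id].score; // 0–1 in the LHR format
    if (score < min) failing.push(`${id}: ${score}`);
  }
  return failing; // empty array means the gate passes
}

const lhr = { categories: { accessibility: { score: 0.95 }, seo: { score: 0.82 } } };
console.log(checkScores(lhr)); // ['seo: 0.82']
```

Fail the pipeline when the array is non-empty and the “we’ll fix it later” regressions stop shipping.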
A real example: when AI helps (and when it lies)
I’ve had the same experience Mate Marschalko described when asking ChatGPT to generate React components. For a simple modal + toggle, it can be “executed flawlessly.” It even adds sensible placeholder content and notes about missing wiring. That’s real value. Fast scaffolding, fast review feedback, fewer staring-at-a-blank-file moments.
But the failure mode is almost boringly predictable. Vague requirements invite confident nonsense.
So I steal a trick from The React Show episode on using AI with React. I iterate on prompts and tighten constraints until the output is testable and specific. The moment you demand evidence and file-level references, review quality jumps.
Best practices I stick to for AI React code review
- Don’t paste only a component. I include the diff, the props contract, and where it’s used.
- I force structure. Severity, evidence, fix, tests.
- I ask for edge cases. Loading states, empty lists, rapid re-renders, stale closures.
- AI is a reviewer, not an author. If it suggests a refactor, I ask for a minimal patch first.
- I close the loop with tools. ESLint, tests, StrictMode, Lighthouse.
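If you want to enforce that severity/evidence/fix structure mechanically instead of re-prompting, a tiny validator helps. The field names and severity levels here are my own convention, not a standard:

```javascript
// Hypothetical validator for the structured findings format I ask the AI for.
const REQUIRED = ['severity', 'evidence', 'fix'];
const SEVERITIES = ['blocker', 'major', 'minor', 'nit'];

function validateFinding(finding) {
  const missing = REQUIRED.filter((k) => !(k in finding));
  if (missing.length) return `missing: ${missing.join(', ')}`;
  if (!SEVERITIES.includes(finding.severity)) {
    return `bad severity: ${finding.severity}`;
  }
  return 'ok';
}

console.log(validateFinding({
  severity: 'major',
  evidence: 'useEffect in Cart.jsx subscribes but never unsubscribes',
  fix: 'return the unsubscribe function from the effect',
})); // 'ok'
console.log(validateFinding({ severity: 'urgent', evidence: '…', fix: '…' })); // 'bad severity: urgent'
```

Reject anything malformed and ask the model to resubmit; it’s faster than manually triaging free-form prose.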
Conclusion
My AI React code review setup isn’t magic. It’s just a workflow I can repeat even when I’m tired: guardrails first with lint + StrictMode, AI for context-aware feedback, then Lighthouse to prove we didn’t ship an SEO or accessibility regression.
If you try one thing, try the ReAct-style “review plan” prompt. It’s the difference between random comments and an actual review.
If you want to go further, check out my internal post on AI-assisted testing workflows.
Internal link: https://www.basantasapkota026.com.np/2026/02/testing-with-ai-just-got-easy-practical.html
And if you’ve got a favorite AI React code review prompt, or a spectacular failure story, drop it in the comments. I collect those like souvenirs.
Sources
- Jet Xu, “How We Made AI Code Review 40% More Efficient Using ReAct Patterns” (DEV Community) — https://dev.to/jet_xu/how-we-made-ai-code-review-40-more-efficient-using-react-patterns-1cd
- Abdul Basit, “How I Use AI to Audit React Code for SEO and Accessibility” (Medium) — https://medium.com/@basit.miyanjee/how-i-use-ai-to-audit-react-code-for-seo-and-accessibility-570e829d247e
- Abdul Basit, “How I Use AI to Review React Code Like a Pro” (JavaScript in Plain English) — https://javascript.plainenglish.io/how-i-use-ai-to-review-react-code-like-a-pro-38bd730aeb3e
- Google Chrome for Developers, “Introduction to Lighthouse” — https://developer.chrome.com/docs/lighthouse/overview
- React Docs, “<StrictMode>” — https://react.dev/reference/react/StrictMode
- React Docs, “eslint-plugin-react-hooks” — https://react.dev/reference/eslint-plugin-react-hooks
- jsx-eslint, “eslint-plugin-jsx-a11y” — https://github.com/jsx-eslint/eslint-plugin-jsx-a11y
- W3C, “Web Content Accessibility Guidelines (WCAG) 2.2” — https://www.w3.org/TR/WCAG22/
- The React Show, “[90] How To Use AI To Write React Programs” — https://www.thereactshow.com/podcast/how-to-use-ai-to-write-react-programs
- Mate Marschalko, “I asked ChatGPT AI to write React and JavaScript code — I was shocked!” (Bits and Pieces / Bit) — https://blog.bitsrc.io/i-asked-chatgpt-ai-to-write-react-and-javascript-for-me-and-i-was-shocked-detailed-analysis-d68d55be7746