How to Avoid AI Slop in Your Pull Requests

Coding with AI is the new normal. Reviewing AI-written code is the new bottleneck.

The problem isn’t necessarily that AI writes bad code. It’s that it often writes blurry code: code that technically works but is harder to review, harder to trust, and harder to maintain.

That’s what people mean when they talk about AI slop.

And as more teams lean on AI, the ability to produce clean, reviewable pull requests becomes a real competitive advantage.

What “AI slop” actually means

AI slop isn’t one thing. It shows up in a few predictable ways:

  • Unnecessary comments: Comments that restate what the code already says or explain things no human would bother explaining.
  • Over-defensive code: Guards, fallbacks, and try/catch blocks for situations the code can never reach.
  • Type escapes: Casts and suppressions used to silence the type checker instead of fixing the types.
  • Inconsistent style: Formatting, naming, or patterns that don’t match the rest of the file or codebase.
  • Overthinking: Hooks, abstractions, or configuration added “just in case” instead of because they’re needed.

Every model has its own personality here. Some love comments. Some love guards. Some love abstractions. None of them know your codebase unless you teach them.
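
To make the first few patterns concrete, here is a small TypeScript sketch. The helper and types are hypothetical, invented purely for illustration rather than taken from any real codebase:

```ts
// Hypothetical helper, written the way "slop" tends to come out:
type User = { id: string; name: string };

function findUserSloppy(users: User[], id: string): User | undefined {
  // Over-defensive: the signature already guarantees an array.
  if (!users || !Array.isArray(users)) {
    return undefined;
  }
  // Find the user whose id matches the given id. (A comment that restates the code.)
  return (users as any).find((u: any) => u.id === id); // needless type escape
}

// What someone who knows the codebase would actually write:
function findUser(users: User[], id: string): User | undefined {
  return users.find((u) => u.id === id);
}
```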

Formatting is mostly solved, the rest isn’t

Weird formatting changes are largely a solved problem on teams that use:

  • Prettier
  • Biome
  • ESLint
  • Project-wide conventions

If you’re working solo, you should still lock these in early. But for teams, formatting is rarely the real issue anymore.
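
If you want these locked in, a linter config is the usual place. The sketch below uses ESLint’s flat config format; the two rules are placeholders, so substitute whatever conventions your team already agrees on:

```js
// eslint.config.mjs (placeholder rules; keep whatever your team already enforces)
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      "no-unused-vars": "error",
      "prefer-const": "error",
    },
  },
];
```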

The real problem is semantic consistency. Does this code feel like it belongs here?

That’s what makes AI-generated PRs hard to review.

Why the “remove AI slop” slash command went viral

A slash command made the rounds recently that effectively said:

“Check this diff against main and remove all AI-generated slop.”

It resonated because it reframed review as a subtractive process:

  • Remove comments a human wouldn’t write
  • Remove defensive checks that don’t belong
  • Remove type escapes
  • Remove patterns inconsistent with the file

Not “does this work?”, but “what shouldn’t be here?”
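
The exact wording varied, but the shape is easy to reproduce. As a sketch, assuming a tool like Claude Code that loads custom slash commands from markdown files under .claude/commands/ (the filename and prompt below are illustrative, not the original viral command):

```md
<!-- .claude/commands/remove-slop.md (hypothetical) -->
Compare this branch against main. In every changed file, remove:

- comments that restate what the code already does
- defensive checks for states that can't occur
- type escapes added without justification
- patterns that don't match the surrounding file

Do not change behavior. If you're unsure whether something is slop, list it instead of deleting it.
```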

That’s a powerful lens for review, but it’s still reactive. The real win is preventing slop before the PR exists.

The root causes of AI slop

Most AI slop comes from one of these:

  1. Poor planning
  2. Shallow understanding of the codebase
  3. Vibe coding (letting AI do whatever it wants)
  4. No rules or constraints

This isn’t a tooling problem. It’s a process problem.

A 5-step process to avoid AI slop

I recommend a simple but disciplined loop:

Research → Plan → Execute → Review → Revise

This works for tasks of any size.

1. Research

Before writing code, narrow the problem space.

Ask:

  • Which files are relevant?
  • What patterns already exist?
  • Where are the trust boundaries?
  • What assumptions does this part of the code make?

The goal isn’t deep investigation — it’s eliminating ambiguity.

At the end of this step, output a short summary:

  • Relevant files
  • Key constraints
  • Open questions (and answers)

This can live in a markdown file or become prompt context.
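
Here is a hedged sketch of what that summary might look like. Every file name and constraint below is hypothetical:

```md
<!-- research.md (hypothetical) -->
## Relevant files
- src/billing/invoice.ts (where the new calculation goes)
- src/billing/invoice.test.ts (existing test patterns to follow)

## Key constraints
- Amounts are integers in cents; no floats in billing code
- Errors bubble up to the route handler; no local try/catch

## Open questions
- Do refunds go through this path? Answer: no, they use a separate flow
```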

2. Plan

Turn research into intent.

The plan answers:

  • What needs to change?
  • What won’t change?
  • What constraints must be respected?

This is where you:

  • Prevent unnecessary abstractions
  • Lock in error-handling expectations
  • Set boundaries for comments, hooks, and patterns

Review the plan like architecture, not like code. If something feels off here, it’s cheap to fix.
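
Continuing the hypothetical example from the research step, the plan can be just as short:

```md
<!-- plan.md (hypothetical) -->
## What changes
- Add an applyDiscount helper to src/billing/invoice.ts
- Add two test cases mirroring the existing ones in invoice.test.ts

## What does not change
- No new abstractions, hooks, or config options
- No changes to the route handler or error handling

## Constraints
- Integer math only (cents); match the existing naming and comment style
```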

3. Execute

Now let the AI work, but narrowly.

  • Prefer smaller, scoped executions
  • Follow the plan exactly
  • Avoid “helpful extras”

Good execution feels boring. That’s a feature.

4. Review

This is where AI slop is caught.

Review with a different question than usual:

“Would someone who’s been in this codebase for 6 months write this?”

Check:

  • Tests passing
  • Linting clean
  • Code matches the plan
  • Style matches the file
  • No unnecessary comments, guards, or escapes
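
The first two checks are mechanical, so they are worth scripting and running before a human looks at anything. A minimal sketch, assuming a Node project whose package.json defines the usual test and lint scripts (adjust the commands to whatever your project actually runs):

```ts
// scripts/pre-review.ts (hypothetical): run the mechanical checks before asking for review.
import { execSync } from "node:child_process";

for (const cmd of ["npm test", "npm run lint"]) {
  try {
    execSync(cmd, { stdio: "inherit" });
  } catch {
    console.error(`Failed: ${cmd}. Fix this before requesting review.`);
    process.exit(1);
  }
}

console.log("Mechanical checks passed. Now review for style and intent.");
```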

5. Revise (the step most people skip)

This is the hardest and most valuable step.

When output doesn’t match expectations, ask:

  • What context was missing?
  • What rule wasn’t explicit?
  • What assumption did the model make?

Then feed that learning back:

  • Update your rules
  • Update your agent instructions
  • Update your Cursor / Claude / Copilot context
  • Improve your slash commands

This is how you stop fixing the same problems repeatedly. You’re not just coding; you’re building the factory.
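
For example, if the model keeps adding restating comments or casting its way around the type checker, turn that observation into a written rule. Where the rule lives depends on the tool (Claude Code reads CLAUDE.md, Copilot reads .github/copilot-instructions.md, Cursor has its own rules files); the wording below is only a sketch:

```md
<!-- CLAUDE.md excerpt (hypothetical) -->
## Code style rules
- Do not add comments that restate what the code does; comment only the non-obvious "why".
- Never silence the type checker with casts or suppressions; if types don't line up, stop and ask.
- Do not add defensive checks for states the types already rule out.
- Match the naming, error handling, and patterns of the file you are editing.
```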

Why this step matters more than people think

Most people never revisit mistakes.

It’s the same reason most students never rework missed test problems: it’s uncomfortable and feels like extra work.

But this step compounds:

  • Better prompts
  • Cleaner diffs
  • Faster reviews
  • Higher trust

Over time, your PRs stop looking “AI-written”, not because AI got better, but because your system did.

The payoff

When this process clicks:

  • PRs are easier to review
  • Code reads cleanly
  • Future AI agents have better context
  • New team members ramp faster
  • Velocity increases without chaos

And if you combine this with:

  • Strong linting
  • Good tests
  • Clear rules
  • Slash commands
  • MCP integrations (GitHub, Linear, etc.)

You don’t just write code faster; you become someone who sets the standard for how well the team works.

Published on Dec 19, 2025.