How to Avoid AI Slop in Your Pull Requests

  • Jake Ruesink
  • AI
  • 19 Dec, 2025

Coding with AI is the new normal. Reviewing AI-written code is the new bottleneck.

The problem isn’t necessarily that AI writes bad code. It’s that it often writes blurry code: code that technically works but is harder to review, harder to trust, and harder to maintain.

That’s what people mean when they talk about AI slop.

And as more teams lean on AI, the ability to produce clean, reviewable pull requests becomes a real competitive advantage.

What “AI slop” actually means

AI slop isn’t one thing. It shows up in a few predictable ways:

  • Unnecessary comments: Comments that restate what the code already says or explain things no human would bother explaining.
  • Over-defensive code: Null checks, try/catch blocks, and type escapes that guard against situations the types or callers already rule out.
  • Inconsistent style: Formatting, naming, or patterns that don’t match the rest of the file or codebase.
  • Overthinking: Hooks, abstractions, or configuration added “just in case” instead of because they’re needed.

Every model has its own personality here. Some love comments. Some love guards. Some love abstractions. None of them know your codebase unless you teach them.
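To make this concrete, here is a hypothetical TypeScript illustration (the function names and shapes are invented): the same helper written with typical slop, then cleaned up.

```typescript
// Hypothetical example: the same helper with typical AI slop, then cleaned.

// Sloppy: a comment that restates the code, a guard the type signature
// already rules out, and an `any` escape that hides the real type.
function getUserNameSloppy(user: { name: string }): string {
  // Check if the user exists
  if (!user) {
    return "";
  }
  // Return the user's name
  return (user as any).name ?? "";
}

// Clean: the type system already guarantees `user` and `name` exist.
function getUserName(user: { name: string }): string {
  return user.name;
}
```

Both versions compile and both "work". Only one reads like it belongs in a maintained codebase.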

Formatting is mostly solved, the rest isn’t

Purely mechanical issues like weird formatting changes are largely a solved problem on teams, thanks to:

  • Prettier
  • Biome
  • ESLint
  • Project-wide conventions

If you’re working solo, you should still lock these in early. But for teams, formatting is rarely the real issue anymore.
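Locking formatting in can be as small as a single config file. For example, a minimal Prettier config (the options are real; the specific choices here are just an example, not a recommendation):

```json
{
  "semi": true,
  "singleQuote": true,
  "printWidth": 100
}
```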

The real problem is semantic consistency: does this code feel like it belongs here?

That’s what makes AI-generated PRs hard to review.

Why the “remove AI slop” slash command went viral

A slash command made the rounds recently that effectively said:

“Check this diff against main and remove all AI-generated slop.”

It resonated because it reframed review as a subtractive process:

  • Remove comments a human wouldn’t write
  • Remove defensive checks that don’t belong
  • Remove type escapes
  • Remove patterns inconsistent with the file

Not “does this work?”, but “what shouldn’t be here?”

That’s a powerful lens for review, but it’s still reactive. The real win is preventing slop before the PR exists.
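A minimal sketch of what such a command might look like, written as a Cursor / Claude Code style markdown command file (the path and exact wording are assumptions, not the original command):

```markdown
<!-- .claude/commands/remove-slop.md (hypothetical path) -->
Compare the current branch against main. For every changed file, remove:

1. Comments that restate what the code already says
2. Defensive checks for conditions the types or callers already rule out
3. Type escapes (`as any`, `@ts-ignore`) added to silence errors
4. Patterns inconsistent with the surrounding file

Do not change behavior. Output only the cleaned diff.
```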

The root causes of AI slop

Most AI slop comes from one of these:

  • Poor planning
  • Shallow understanding of the codebase
  • Vibe coding (letting AI do whatever it wants)
  • No rules or constraints

This isn’t a tooling problem. It’s a process problem.

A 5-step process to avoid AI slop

I recommend a simple but disciplined loop:

Research → Plan → Execute → Review → Revise

This works for tasks of any size.

1. Research

Before writing code, narrow the problem space.

Ask:

  • Which files are relevant?
  • What patterns already exist?
  • Where are the trust boundaries?
  • What assumptions does this part of the code make?

The goal isn’t deep investigation — it’s eliminating ambiguity.

At the end of this step, output a short summary:

  • Relevant files
  • Key constraints
  • Open questions (and answers)

This can live in a markdown file or become prompt context.
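As a sketch, the summary might look like this (the task and file names are invented for illustration):

```markdown
## Research: add rate limiting to the API

**Relevant files**
- `src/middleware/auth.ts` (request pipeline entry point)
- `src/config/index.ts` (where limits would be configured)

**Key constraints**
- Middleware must stay stateless; counters live in Redis
- Existing error responses use the `ApiError` class

**Open questions**
- Per-user or per-IP limits? → Per-user (confirmed with team)
```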

2. Plan

Turn research into intent.

The plan answers:

  • What needs to change?
  • What won’t change?
  • What constraints must be respected?

This is where you:

  • Prevent unnecessary abstractions
  • Lock in error-handling expectations
  • Set boundaries for comments, hooks, and patterns

Review the plan like architecture, not like code. If something feels off here, it’s cheap to fix.
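Continuing the same invented example, a plan might be as short as this (details are illustrative, not a prescribed template):

```markdown
## Plan: add rate limiting to the API

**Will change**
- New `rateLimit` middleware in `src/middleware/`
- Config entry for request limits

**Won’t change**
- Auth flow, error response shapes, existing middleware order

**Constraints**
- Reuse `ApiError` for 429 responses; no new abstractions
- No comments beyond what the surrounding files already use
```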

3. Execute

Now let the AI work, but narrowly.

  • Prefer smaller, scoped executions
  • Follow the plan exactly
  • Avoid “helpful extras”

Good execution feels boring. That’s a feature.

4. Review

This is where AI slop is caught.

Review with a different question than usual:

“Would someone who’s been in this codebase for 6 months write this?”

Check:

  • Tests passing
  • Linting clean
  • Code matches the plan
  • Style matches the file
  • No unnecessary comments, guards, or escapes

5. Revise (the step most people skip)

This is the hardest and most valuable step.

When output doesn’t match expectations, ask:

  • What context was missing?
  • What rule wasn’t explicit?
  • What assumption did the model make?

Then feed that learning back:

  • Update your rules
  • Update your agent instructions
  • Update your Cursor / Claude / Copilot context
  • Improve your slash commands

This is how you stop fixing the same problems repeatedly. You’re not just coding, you’re building the factory.
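For example, a lesson learned in review can become one line in a rules file (the path and wording are illustrative; adapt them to your tool):

```markdown
<!-- .cursor/rules/style.mdc or CLAUDE.md (hypothetical) -->
- Never use `as any` or `@ts-ignore`; fix the underlying type instead
- Do not add comments that restate the code
- Do not add null checks for values the type system guarantees
- Match the error-handling pattern of the file you are editing
```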

Why this step matters more than people think

Most people never revisit mistakes.

Same reason most students never rework missed test problems: it’s uncomfortable and feels like extra work.

But this step compounds:

  • Better prompts
  • Cleaner diffs
  • Faster reviews
  • Higher trust

Over time, your PRs stop looking “AI-written”, not because AI got better, but because your system did.

The payoff

When this process clicks:

  • PRs are easier to review
  • Code reads cleanly
  • Future AI agents have better context
  • New team members ramp faster
  • Velocity increases without chaos

And if you combine this with:

  • Strong linting
  • Good tests
  • Clear rules
  • Slash commands
  • MCP integrations (GitHub, Linear, etc.)

You don’t just write code faster; you become someone who sets the standard for how well the team works.

