Modern AI tools are transforming how we write, review, and ship code — but the real magic happens when you connect them into a structured, repeatable workflow. In this post, I’ll walk through the process I use to go from an idea in Slack to a working feature in production, using Codegen, Linear, and GitHub as the backbone.
🎥 Video Walkthrough Below
Here’s the full step-by-step video version of this process — complete with real-world hiccups, test failures, and AI-powered fixes.
Why This Workflow Matters
Most AI coding demos show a single big prompt that spits out code.
That’s fine for prototypes — but when you’re building and maintaining real software, you need structure, traceability, and collaboration.
This workflow uses four stages that keep things clean and predictable:
- Research – Understand the problem and context
- Plan – Define architecture, scope, and acceptance criteria
- Execute – Write the code and open a PR
- Review – Test, lint, and turn feedback into actionable work
And here’s the kicker: AI agents can drive each of these stages without breaking your existing team workflows.
Step 1 – Slack Idea → Linear Issue
It starts with a casual message in Slack.
Instead of just chatting about the idea, I tag Codegen with a natural language request — for example: “Let’s explore adding local storage as a way to save our tasks so they persist between refreshes. No DB, just small utility functions.”
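To make that request concrete, here's a minimal sketch of the kind of utilities the resulting issue describes. The `Task` shape and the storage key are my assumptions, and the functions accept a small `Storage`-like object so they can be exercised outside the browser — in the app itself you'd pass `window.localStorage`:

```typescript
// Assumed task shape and key name -- the real Linear issue defines these.
interface Task {
  id: string;
  title: string;
  done: boolean;
}

const STORAGE_KEY = "tasks";

// Minimal Storage-like interface so the utilities are testable outside
// the browser (pass window.localStorage in the app).
interface TaskStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Serialize the full task list under a single key.
function saveTasks(store: TaskStore, tasks: Task[]): void {
  store.setItem(STORAGE_KEY, JSON.stringify(tasks));
}

// Read tasks back; missing or corrupted data falls back to an empty list.
function loadTasks(store: TaskStore): Task[] {
  const raw = store.getItem(STORAGE_KEY);
  if (raw === null) return [];
  try {
    return JSON.parse(raw) as Task[];
  } catch {
    return [];
  }
}
```

No database, no schema — exactly the "small utility functions" scope the Slack message asks for.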
Codegen then:
- Analyzes the existing codebase
- Creates a detailed Linear issue
- Includes architecture plans, implementation steps, files to modify, and acceptance criteria
This alone is a huge upgrade from scribbling notes in Slack — we now have a fully documented ticket ready for action.
Step 2 – Linear Issue → GitHub PR
Once the Linear issue looks solid, I assign it to the Codegen agent. That’s the trigger.
From there, Codegen:
- Creates a new branch
- Implements the code
- Opens a pull request with a summary of the changes
I don’t need to babysit it — the PR is just waiting for review.
And because all context came from the Linear issue, the implementation is aligned with the original request.
Step 3 – PR Feedback → Linear Sub-Tasks
After a PR is opened, it needs review. That’s where AI can step in again:
- I review it myself or bring in AI reviewers like Copilot or CodeRabbit.
- I paste the PR URL into Slack and ask Codegen to analyze the review comments.
- Codegen creates Linear sub-issues for each piece of feedback.
This is game-changing because review comments stop being “floating text” in GitHub — they become trackable, actionable work items in the same command center.
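For context on what "analyze the review comments" involves: GitHub's REST API splits PR feedback across two endpoints — inline review comments and the general conversation. This sketch is my approximation of that retrieval step, not Codegen's actual implementation; owner, repo, and token handling are placeholders:

```typescript
// GitHub's REST API splits PR feedback across two endpoints:
//   inline review comments: /repos/{owner}/{repo}/pulls/{n}/comments
//   conversation comments:  /repos/{owner}/{repo}/issues/{n}/comments
function prFeedbackUrls(owner: string, repo: string, pullNumber: number): string[] {
  const base = `https://api.github.com/repos/${owner}/${repo}`;
  return [
    `${base}/pulls/${pullNumber}/comments`,
    `${base}/issues/${pullNumber}/comments`,
  ];
}

// Fetch both comment streams and merge them into one list for triage.
async function fetchAllFeedback(
  owner: string,
  repo: string,
  pullNumber: number,
  token: string
): Promise<unknown[]> {
  const headers = {
    Authorization: `Bearer ${token}`,
    Accept: "application/vnd.github+json",
  };
  const pages = await Promise.all(
    prFeedbackUrls(owner, repo, pullNumber).map(async (url) => {
      const res = await fetch(url, { headers });
      if (!res.ok) throw new Error(`GitHub API ${res.status} for ${url}`);
      return (await res.json()) as unknown[];
    })
  );
  return pages.flat();
}
```

Once the comments are in one list, each item can be classified and turned into a Linear sub-issue.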
Step 4 – Sub-task Completion → Shipped Feature
From here, sub-issues can be:
- Assigned to Codegen again for fixes
- Picked up by human devs
- Merged into the target feature branch when ready
A quick best practice: always specify the target repo and base branch when triggering Codegen work. This ensures PRs land in the right place and changes don't accidentally end up on main too soon.
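For example, a trigger comment that pins both down might look like this (the repo and branch names here are made up):

```
@codegen Work on this issue in acme/tasks-app. Branch off feature/local-storage
and open the PR against feature/local-storage, not main.
```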
Why This Works So Well
By making Linear the command center and letting Codegen handle execution, you get:
- Full traceability
- Structured AI usage
- A workflow that scales with multiple contributors — human or AI
- Less context-switching between tools
Tools Featured in This Workflow
- Codegen
- Linear
- GitHub
- Bun
- React Router
- ShadCN
- Lambda Curry Form Library
- Copilot / CodeRabbit
- Cursor & Warp
Final Thoughts
This wasn’t the video I thought I’d make — it turned into a longer journey, with some detours into test failures and AI debugging. But that’s real development, and it’s exactly why I wanted to show the entire process, not just the happy path.
If you take away one thing, let it be this:
Research → Plan → Execute → Review
It’s not just a project management mantra — it’s the foundation for making AI a real, reliable part of your dev workflow.
Prompts for Reference
Used when extracting comments from a PR for review (I typically do this in Slack):
[include link to PR]
Primary Objective
Review all comments on a GitHub Pull Request and generate specific, actionable prompts that an AI agent can use to implement each suggested change.
Step-by-Step Process
1. Initial PR Analysis
• Extract PR number, repository details, and all review comments
• Identify comment authors (human reviewers vs automated tools)
• Classify comment types (refactor, nitpicks, security, etc.)
2. Comprehensive Comment Discovery
• Look for inline review comments on specific lines
• Check general PR-level comments
• Find nested comments within review bodies (like CodeRabbit’s nitpick sections)
• Include automated tool comments with different formatting
3. Comment Classification & Prioritization
• Major Refactoring: Architectural changes, code duplication removal
• Nitpicks: Style improvements, optimization suggestions
• Security/Performance: Critical issues requiring immediate attention
• Documentation/Testing: README updates, test improvements
4. AI Agent Prompt Generation
For each comment, create a prompt using this template:
[Priority] [Brief Title]
In [file_path] [line_range], [current_issue]. [What_needs_changing] to [desired_outcome]. [Implementation_details].
5. Output Format
🤖 AI Agent Implementation Prompts
Major Refactoring Tasks (X)
[List of major refactoring prompts]
Nitpick Improvements (X)
[List of minor improvement prompts]
Summary
• X Major Tasks: [brief description]
• X Nitpicks: [brief description]
Key Success Criteria
✅ Completeness: Every actionable comment becomes a prompt
✅ Specificity: Include exact file paths and line numbers
✅ Clarity: Clear enough for AI implementation without ambiguity
✅ Context: Explain both problem and solution
This ensures comprehensive coverage of all PR feedback and generates actionable tasks for AI agents! 🎯
Used when kicking off a codegen issue as a comment on the issue:
@codegen Create a pull request into the base branch that works on the current issue.
**Requirements:**
1. Implement only the changes necessary to address the issue.
- Avoid unrelated refactoring or feature work.
2. Keep the scope minimal and directly tied to the fix.
3. Before committing and pushing:
- Run the project's lint and typecheck scripts found in the root `package.json` file.
- Ensure there are no errors or warnings.
- Confirm that all relevant tests pass.
4. Format code according to the project’s existing style and conventions.
**Deliverable:**
- A PR containing only the changes relevant to the issue, passing lint, typecheck, and tests, and ready for review.
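That prompt assumes the root package.json actually exposes lint and typecheck scripts for the agent to discover and run. The script names and tools below are illustrative, not a requirement of the workflow:

```json
{
  "scripts": {
    "lint": "eslint .",
    "typecheck": "tsc --noEmit",
    "test": "bun test"
  }
}
```

With scripts like these in place, the pre-push gate in the prompt reduces to `bun run lint && bun run typecheck && bun test`.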