Cultivating Intentional Agent Networks

  • Jake Ruesink
  • AI
  • 06 Mar, 2026

This project started with a missing tool.

Our team used to rely heavily on Codegen, a platform that connected our workflows across GitHub, Slack, Linear, and our codebase. It wasn’t just an AI coding tool—it acted more like a coordination layer for the team.

When Codegen shut down, we didn’t just lose an AI assistant. We lost the connective tissue between our systems.

At first my goal was simple: replace that workflow.

But once I started experimenting with agent frameworks and modern LLM capabilities, it became clear that recreating Codegen wasn’t the interesting problem.

The more interesting question was this:

What would it look like to build an intentional network of agents that work together like a team?

That question is what led to the system I’ve been building over the past few weeks.

Watch the Walkthrough

I recently gave a walkthrough of the system to a small group of engineers exploring agent frameworks. In it, I demo the architecture, explain how the agents interact, and show several projects that have been built by the system so far.

The rest of this post dives deeper into the ideas behind the system and how I’ve been approaching these problems.

The Moment This Became Possible

A month ago I wouldn’t have attempted this.

Vector databases, RAG pipelines, agent orchestration—those all felt like infrastructure problems that required a full platform team to build.

But the pace of change right now is incredible. With modern LLMs and frameworks like OpenClaw, you can suddenly do things that would have felt impossible not long ago.

You can:

  • run agent systems locally
  • create shared knowledge layers
  • orchestrate sub-agents
  • integrate with your existing tools
  • rapidly iterate on experimental architectures

Right now, the entire system I’m experimenting with runs on a Mac Mini sitting in my house.

That alone still feels slightly ridiculous.

From Agents to Systems

Most conversations about AI agents today revolve around a familiar pattern.

You spawn a group of agents. You give each one a task. You collect the outputs.

This works really well for research workflows or brainstorming problems.

But that’s not how teams work.

Real teams operate through:

  • roles
  • communication
  • shared knowledge
  • governance
  • iteration
  • long-term improvement

Once I started thinking about the problem this way, the design changed.

Instead of trying to build one powerful agent, I started thinking about a network of specialized agents that collaborate.

Each agent has a role. Each agent has memory. Each agent participates in a system.

The Agents in My System

The system I’ve been experimenting with currently has a handful of core agents that play different roles.

Clawdy — My Personal Assistant

Clawdy is the agent I interact with directly.

It acts as my interface to the broader system and has access to tools like:

  • email
  • calendar
  • notes
  • system utilities
  • the rest of the agent network

If I want to know what’s happening inside the system, ask a question, or delegate work, I talk to Clawdy.

Currybot — The Team Agent

Currybot is designed to operate where my team already works.

It integrates with tools like:

  • Slack
  • GitHub
  • PR workflows
  • issue tracking

The goal is simple: the team shouldn’t need to leave their normal environment to collaborate with AI.

Instead of introducing a new interface, the AI becomes part of the workflow itself.
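As a minimal sketch of that idea, a team agent can route incoming chat mentions to the right workflow with simple keyword matching, so people never leave the channel they already work in. The function name, event shape, and keywords here are all illustrative assumptions, not Codegen or Slack API specifics; a real integration would sit behind the Slack Events API and use an LLM for intent classification rather than substring checks.

```python
# Hypothetical router for a team agent like Currybot: map a chat mention
# to an internal workflow. Keyword matching is a naive stand-in for real
# intent classification.

def handle_mention(event: dict) -> str:
    """Route a chat mention to a workflow based on simple keywords."""
    text = event.get("text", "").lower()
    if "review" in text and "pr" in text:
        return "pr_review"      # hand off to the PR-review workflow
    if "issue" in text or "ticket" in text:
        return "issue_triage"   # hand off to issue tracking
    return "general_chat"       # default: just answer in-channel

# A mention asking for a PR review is routed to the PR workflow.
route = handle_mention({"text": "@currybot can you review PR #42?"})
print(route)  # pr_review
```

The point of the sketch is the shape, not the matching logic: the agent lives inside the team's existing communication surface and dispatches into the rest of the network.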

Scout — The Builder

Scout is the most experimental agent in the system.

Scout runs on a loop—currently every few hours—and its job is straightforward:

Explore. Build. Improve.

Scout works on multiple projects simultaneously, exploring ideas and iterating on systems.

Several of the projects Scout has created so far were built without me writing a single line of code directly.

That’s equal parts exciting and slightly unnerving.

Atticus — The Advisor

As the system grows, it needs some form of guidance.

Atticus acts as an advisory agent that other agents consult before making major decisions.

When Scout proposes changes or plans new work, it often checks with Atticus first.

Atticus also helps maintain documentation and shared knowledge about the system itself.

Meg — The Designer

One of the biggest gaps in agent-driven development right now is design.

So I created Meg as a design-focused agent responsible for helping shape things like:

  • project design language
  • UI systems
  • component structure
  • visual consistency

Meg is still early in development, but the idea is that design shouldn’t be an afterthought in agent systems—it should be a dedicated role.

Identity as Architecture

One pattern that has worked surprisingly well is giving agents clear identities.

Not just job descriptions, but narrative identities grounded in literature and archetypes.

Large language models have been trained on massive amounts of narrative data. Anchoring an agent’s identity to a recognizable character archetype provides structure for how it should reason and behave.

For example:

  • Atticus (advisor and moral compass)
  • Scout (explorer and builder)
  • Meg (creative problem solver)

These identities don’t restrict the agents. They help focus them.

Identity becomes a lightweight architectural constraint that helps keep the system coherent.
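In practice, this constraint can be as lightweight as a reusable system-prompt prefix per agent. The sketch below is an assumption about how one might wire it up; the archetype wording is illustrative, and the real system may structure personas differently.

```python
# A minimal sketch of "identity as architecture": each agent's narrative
# identity is a system-prompt prefix that anchors how it reasons.

AGENT_IDENTITIES = {
    "atticus": "You are Atticus, a careful advisor and moral compass. "
               "Weigh trade-offs before endorsing any major change.",
    "scout":   "You are Scout, a curious explorer and builder. "
               "Prefer small experiments over large rewrites.",
    "meg":     "You are Meg, a creative problem solver focused on design "
               "language, UI systems, and visual consistency.",
}

def build_system_prompt(agent: str, task_context: str) -> str:
    """Prepend the agent's identity to the task so the model stays in role."""
    return f"{AGENT_IDENTITIES[agent]}\n\nContext:\n{task_context}"

prompt = build_system_prompt("scout", "Review recent system activity.")
print(prompt.splitlines()[0])
```

Because the identity lives in one place, every call the agent makes inherits the same archetype, which is what keeps its behavior coherent across tasks.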

Shared Memory

A major limitation of most agent workflows is that nothing persists.

Agents complete a task and disappear. Their context disappears with them.

To make a system of agents work, they need shared memory.

The system maintains a RAG-based knowledge layer that includes things like:

  • documentation
  • project context
  • tickets
  • meeting notes
  • system decisions

Agents can query this memory to understand the system before making decisions.

This starts to resemble something closer to organizational knowledge, rather than isolated executions.
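The query pattern can be sketched in a few lines. The real system uses a RAG pipeline with vector embeddings; here, word-overlap scoring stands in for vector similarity so the example stays self-contained, and the memory entries are invented for illustration.

```python
import re

# Toy shared-memory store; in the real system this is an embedding index
# over docs, tickets, notes, and decisions.
MEMORY = [
    {"kind": "decision", "text": "Scout builds run on a recurring loop every few hours."},
    {"kind": "doc",      "text": "Currybot handles Slack, GitHub, and PR workflows."},
    {"kind": "note",     "text": "Atticus must review major architectural changes."},
]

def _tokens(s: str) -> set:
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def query_memory(question: str, top_k: int = 1) -> list:
    """Return the top_k entries most relevant to the question.
    Word overlap is a stand-in for cosine similarity over embeddings."""
    q = _tokens(question)
    scored = sorted(MEMORY, key=lambda m: len(q & _tokens(m["text"])),
                    reverse=True)
    return scored[:top_k]

hits = query_memory("who handles PR workflows?")
print(hits[0]["kind"])  # doc
```

Any agent can call the same `query_memory` before acting, which is what turns isolated executions into something closer to organizational knowledge.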

Looped Agents

One of the ideas I’m most interested in exploring is the concept of looped agents.

Instead of waiting for prompts, agents run on recurring cycles.

They might:

  • review recent system activity
  • propose improvements
  • build experimental features
  • update documentation
  • refine workflows
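The cycle above can be sketched as a simple scheduled loop. The step names and the `propose_improvement` callable are stand-ins for the real LLM calls; in production the schedule would likely be a cron job or a long-running process rather than a bare `sleep`.

```python
import time

# One pass of a looped agent: review, propose, document, refine.
CYCLE_STEPS = ["review_activity", "propose_improvements",
               "update_docs", "refine_workflows"]

def run_cycle(propose_improvement) -> list:
    """Run one cycle; returns a log line per step for later review."""
    log = []
    for step in CYCLE_STEPS:
        log.append(f"{step}: {propose_improvement(step)}")
    return log

def loop_forever(interval_hours: float, propose_improvement):
    """Keep cycling on a schedule instead of waiting for a prompt."""
    while True:
        run_cycle(propose_improvement)
        time.sleep(interval_hours * 3600)

# Dry run with a stub in place of the real agent call.
log = run_cycle(lambda step: "ok")
print(len(log))  # 4
```

The output log matters as much as the work itself: it is what the agent later surfaces to a human for review.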

Over time, this creates something powerful.

The system starts to improve itself continuously.

Eventually I suspect we’ll see agents prompting humans instead of the other way around.

Instead of saying:

“AI, build this.”

We might see systems say:

“I explored several possible improvements. Do you want to review them?”

Observability

As this system grows, it becomes complex quickly.

That means visibility becomes important.

I’ve been experimenting with ways to visualize:

  • agent activity
  • system architecture
  • communication pathways
  • workflow relationships

Not because dashboards are exciting, but because humans need ways to understand increasingly autonomous systems.
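One possible starting point is a shared event log that every agent writes to, from which activity and communication views can be derived. The field names and helper functions below are assumptions for illustration, not a specific framework's schema.

```python
from collections import Counter
from datetime import datetime, timezone

# Append-only event log shared by all agents.
EVENTS: list = []

def record(agent: str, action: str, target: str = "") -> None:
    """Record one agent action; `target` may be another agent, a repo, a doc."""
    EVENTS.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "target": target,
    })

def activity_summary() -> Counter:
    """Events per agent: the raw material for an activity dashboard."""
    return Counter(e["agent"] for e in EVENTS)

record("scout", "opened_pr", "repo/feature-x")
record("scout", "consulted", "atticus")
record("atticus", "approved", "scout")
print(activity_summary())  # Counter({'scout': 2, 'atticus': 1})
```

Because agent-to-agent interactions are recorded as `target` edges, the same log also yields the communication graph between agents, not just per-agent counts.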

The Role of the Human

The more I work on this, the more it feels like the human role is shifting.

Instead of directly executing tasks, you start to become something closer to a system steward.

You design:

  • identities
  • communication pathways
  • governance structures
  • memory systems
  • feedback loops

You’re not doing every piece of work yourself.

You’re shaping the environment where the work happens.

Why Cultivation Matters

That’s why I’ve started thinking about this less as engineering and more as cultivation.

You don’t build a garden once and walk away.

  • You plant.
  • You guide.
  • You prune.
  • You observe.
  • You adapt.

Intentional agent networks feel similar.

They’re systems that grow over time, shaped by the structures you design and the feedback loops you create.

We’re Still Early

None of this is solved.

The system I’m building is still fragile in places and experimental in many others.

But even in its early form, it already hints at something that feels fundamentally different from the current generation of AI tools.

Not just faster assistants.

Living systems of collaboration.

And I suspect we’re only beginning to explore what those systems might become.
