Cultivating Intentional Agent Networks
- Jake Ruesink
- AI
- 06 Mar, 2026
This project started with a missing tool.
Our team used to rely heavily on Codegen, a platform that connected our workflows across GitHub, Slack, Linear, and our codebase. It wasn’t just an AI coding tool—it acted more like a coordination layer for the team.
When Codegen shut down, we didn’t just lose an AI assistant. We lost the connective tissue between our systems.
At first my goal was simple: replace that workflow.
But once I started experimenting with agent frameworks and modern LLM capabilities, it became clear that recreating Codegen wasn’t the interesting problem.
The more interesting question was this:
What would it look like to build an intentional network of agents that work together like a team?
That question is what led to the system I’ve been building over the past few weeks.
Watch the Walkthrough
I recently gave a walkthrough of the system to a small group of engineers exploring agent frameworks. In it, I demo the architecture, explain how the agents interact, and show several projects that have been built by the system so far.
The rest of this post dives deeper into the ideas behind the system and how I’ve been approaching these problems.
The Moment This Became Possible
A month ago I wouldn’t have attempted this.
Vector databases, RAG pipelines, agent orchestration—those all felt like infrastructure problems that required a full platform team to build.
But the pace of change right now is incredible. With modern LLMs and frameworks like OpenClaw, you can suddenly do things that would have felt impossible not long ago.
You can:
- run agent systems locally
- create shared knowledge layers
- orchestrate sub-agents
- integrate with your existing tools
- rapidly iterate on experimental architectures
Right now, the entire system I’m experimenting with runs on a Mac Mini sitting in my house.
That alone still feels slightly ridiculous.
From Agents to Systems
Most conversations about AI agents today revolve around a familiar pattern.
You spawn a group of agents. You give each one a task. You collect the outputs.
This works really well for research workflows or brainstorming problems.
But that’s not how teams work.
Real teams operate through:
- roles
- communication
- shared knowledge
- governance
- iteration
- long-term improvement
Once I started thinking about the problem this way, the design changed.
Instead of trying to build one powerful agent, I started thinking about a network of specialized agents that collaborate.
Each agent has a role. Each agent has memory. Each agent participates in a system.
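That framing can be sketched in a few lines of code. This is a minimal illustration of the idea, not an excerpt from the actual system; the `Agent` and `AgentNetwork` names are hypothetical.

```python
# Sketch: agents as a network with roles and persistent memory,
# rather than disposable one-task workers.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    role: str                                        # the agent's job description
    memory: list[str] = field(default_factory=list)  # context that persists across tasks

    def remember(self, note: str) -> None:
        self.memory.append(note)


class AgentNetwork:
    def __init__(self) -> None:
        self.agents: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def send(self, sender: str, recipient: str, message: str) -> None:
        # Messages land in the recipient's memory, so the exchange
        # outlives the individual task that produced it.
        self.agents[recipient].remember(f"from {sender}: {message}")


network = AgentNetwork()
network.register(Agent("Scout", role="explorer and builder"))
network.register(Agent("Atticus", role="advisor"))
network.send("Scout", "Atticus", "Proposing a refactor of the docs pipeline.")
```

The point of the sketch is the shape, not the implementation: role and memory are properties of the agent, and communication is a first-class operation of the network.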
The Agents in My System
The system I’ve been experimenting with currently has a handful of core agents that play different roles.
Clawdy — My Personal Assistant
Clawdy is the agent I interact with directly.
It acts as my interface to the broader system and has access to tools like:
- calendar
- notes
- system utilities
- the rest of the agent network
If I want to know what’s happening inside the system, ask a question, or delegate work, I talk to Clawdy.
Currybot — The Team Agent
Currybot is designed to operate where my team already works.
It integrates with tools like:
- Slack
- GitHub
- PR workflows
- issue tracking
The goal is simple: the team shouldn’t need to leave their normal environment to collaborate with AI.
Instead of introducing a new interface, the AI becomes part of the workflow itself.
Scout — The Builder
Scout is the most experimental agent in the system.
Scout runs on a loop—currently every few hours—and its job is straightforward:
Explore. Build. Improve.
Scout works on multiple projects simultaneously, exploring ideas and iterating on systems.
Several of the projects Scout has created so far were built without me writing a single line of code directly.
That’s equal parts exciting and slightly unnerving.
Atticus — The Advisor
As the system grows, it needs some form of guidance.
Atticus acts as an advisory agent that other agents consult before making major decisions.
When Scout proposes changes or plans new work, it often checks with Atticus first.
Atticus also helps maintain documentation and shared knowledge about the system itself.
Meg — The Designer
One of the biggest gaps in agent-driven development right now is design.
So I created Meg as a design-focused agent responsible for helping shape things like:
- project design language
- UI systems
- component structure
- visual consistency
Meg is still early in development, but the idea is that design shouldn’t be an afterthought in agent systems—it should be a dedicated role.
Identity as Architecture
One pattern that has worked surprisingly well is giving agents clear identities.
Not just job descriptions, but narrative identities grounded in literature and archetypes.
Large language models have been trained on massive amounts of narrative data. Anchoring an agent’s identity to a recognizable character archetype provides structure for how it should reason and behave.
For example:
- Atticus (advisor and moral compass)
- Scout (explorer and builder)
- Meg (creative problem solver)
These identities don’t restrict the agents. They help focus them.
Identity becomes a lightweight architectural constraint that helps keep the system coherent.
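Mechanically, this kind of identity constraint usually amounts to prepending an archetype description to everything the agent sends to the model. Here is a minimal sketch of that idea; the identity strings and the `build_system_prompt` helper are illustrative, not the system's actual prompts.

```python
# Sketch: narrative identity as a prompt-level architectural constraint.
# Each agent's archetype is prepended to every task it works on.
IDENTITIES = {
    "Atticus": (
        "You are a careful advisor and moral compass. "
        "Favor caution, precedent, and clear reasoning."
    ),
    "Scout": (
        "You are a curious explorer and builder. "
        "Favor experimentation and rapid iteration."
    ),
}


def build_system_prompt(agent_name: str, task: str) -> str:
    """Anchor the agent's reasoning to its archetype before the task."""
    return f"{IDENTITIES[agent_name]}\n\nCurrent task: {task}"


prompt = build_system_prompt("Scout", "Prototype a docs search tool.")
```

Because the identity lives in one place, changing how an agent reasons is a one-line edit rather than a rewrite of every workflow that uses it.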
Shared Memory
A major limitation of most agent workflows is that nothing persists.
Agents complete a task and disappear. Their context disappears with them.
To make a system of agents work, they need shared memory.
The system maintains a RAG-based knowledge layer that includes things like:
- documentation
- project context
- tickets
- meeting notes
- system decisions
Agents can query this memory to understand the system before making decisions.
This starts to resemble organizational knowledge rather than isolated executions.
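To make the retrieval step concrete, here is a toy stand-in for that query path. A real RAG layer would score documents by vector-embedding similarity; this sketch substitutes simple keyword overlap, and the sample knowledge entries are invented for illustration.

```python
# Toy sketch of querying a shared knowledge layer before acting.
# Real systems score by embedding similarity; keyword overlap stands in here.
KNOWLEDGE = [
    "DECISION: Scout runs its build loop every few hours.",
    "DOC: Currybot posts PR summaries to the team Slack channel.",
    "NOTE: Meg owns the shared component library.",
]


def query_memory(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents sharing the most words with the question."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


results = query_memory("how often does scout run its loop")
```

The interface is the important part: an agent asks a question in plain language and gets back the system's own decisions and documentation as context.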
Looped Agents
One of the ideas I’m most interested in exploring is the concept of looped agents.
Instead of waiting for prompts, agents run on recurring cycles.
They might:
- review recent system activity
- propose improvements
- build experimental features
- update documentation
- refine workflows
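A looped agent is structurally simple: a fixed cycle of work wrapped in a timer. The sketch below shows that shape; the cycle steps mirror the list above, but the function names and the three-hour interval are illustrative stand-ins, not the real scheduler.

```python
# Sketch of a looped agent: it wakes on a schedule and works through
# a review cycle instead of waiting for a human prompt.
import time

CYCLE_HOURS = 3  # "every few hours", per the description above


def run_cycle(log: list[str]) -> None:
    for step in (
        "review recent activity",
        "propose improvements",
        "update documentation",
    ):
        log.append(step)  # a real agent would invoke the model for each step


def loop_forever() -> None:
    log: list[str] = []
    while True:
        run_cycle(log)
        time.sleep(CYCLE_HOURS * 3600)


# For demonstration, run a single cycle rather than the infinite loop:
log: list[str] = []
run_cycle(log)
```

Everything interesting happens inside the cycle; the loop itself is just the mechanism that removes the human from the trigger.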
Over time, this creates something powerful.
The system starts to improve itself continuously.
Eventually I suspect we’ll see agents prompting humans instead of the other way around.
Instead of saying:
“AI, build this.”
We might see systems say:
“I explored several possible improvements. Do you want to review them?”
Observability
As this system grows, it becomes complex quickly.
That means visibility becomes important.
I’ve been experimenting with ways to visualize:
- agent activity
- system architecture
- communication pathways
- workflow relationships
Not because dashboards are exciting, but because humans need ways to understand increasingly autonomous systems.
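One lightweight way to get that visibility is to emit the communication pathways as Graphviz DOT text, which any DOT renderer can turn into a diagram. This is a sketch of the approach, with a hypothetical edge list, not the system's actual tooling.

```python
# Sketch: dump the agent network's communication pathways as Graphviz DOT,
# so the increasingly autonomous system can be inspected visually.
EDGES = [
    ("Clawdy", "Scout"),     # delegation
    ("Scout", "Atticus"),    # advisory check before major changes
    ("Currybot", "Clawdy"),  # team requests routed inward
]


def to_dot(edges: list[tuple[str, str]]) -> str:
    """Render directed edges as a DOT digraph."""
    lines = ["digraph agents {"]
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)


dot = to_dot(EDGES)
```

Piping that string through `dot -Tpng` yields a picture of who talks to whom, which is often all the observability an early system needs.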
The Role of the Human
The more I work on this, the more it feels like the human role is shifting.
Instead of directly executing tasks, you start to become something closer to a system steward.
You design:
- identities
- communication pathways
- governance structures
- memory systems
- feedback loops
You’re not doing every piece of work yourself.
You’re shaping the environment where the work happens.
Why Cultivation Matters
That’s why I’ve started thinking about this less as engineering and more as cultivation.
You don’t build a garden once and walk away.
- You plant.
- You guide.
- You prune.
- You observe.
- You adapt.
Intentional agent networks feel similar.
They’re systems that grow over time, shaped by the structures you design and the feedback loops you create.
We’re Still Early
None of this is solved.
The system I’m building is still fragile in places and experimental in many others.
But even in its early form, it already hints at something that feels fundamentally different from the current generation of AI tools.
Not just faster assistants.
Living systems of collaboration.
And I suspect we’re only beginning to explore what those systems might become.