Plumbing, Basketball, and Slop Cannons: Team Lessons From a Multi-Brand E-Commerce Migration
- Jake Ruesink
- Team Leadership
- 10 Apr, 2026
Intro: The Project
The hard parts of our biggest client migration were mostly not technical.
We are moving a multi-brand e-commerce ecosystem to headless commerce. In practice, this is 12+ storefronts across multiple business lines, each with their own checkout flows, user authentication paths, and legacy integrations. Some flows came from Drupal. Some integrations were XML APIs built decades ago. Some auth paths ran through enterprise SSO. Nothing about it was a greenfield app with one team and one source of truth.
Our team is distributed across time zones around the world. On top of that, we coordinate with other teams that own different parts of the stack and different systems of record.
The dates were real. Hard launch targets. Calendar facts, not flexible aspirations.
When people hear that kind of context, they usually expect the postmortem to be technical. Which framework decision mattered? Which API strategy worked? Which caching pattern saved us?
Those things matter. But they were not the bottleneck.
The bottleneck was how work moved through the team.
Could we keep flow through the system, or did we keep starting more than we finished? Did the right people own the right domains, or did bugs bounce around while everyone stayed busy? Did we invest in clarity before coding, or did we fire a slop cannon and then spend a week cleaning up?
This post is about those team lessons.
Three metaphors from our standups ended up becoming the operating model:
- Plumbing: keep the pipes clear.
- Basketball: put people in the right positions.
- Slop cannon: plan before you fire.
If you are leading a distributed team through a migration with tight dates, these patterns matter more than one more productivity trick. They changed how we ran sprints, how we made decisions, and what we measured as real progress.
1) Keep the Pipes Clear: Flow Over Throughput
The lesson is simple: if work cannot get through review and into main, you are not moving fast.
I used the plumbing metaphor in standups because everyone immediately got it. Work is water. Tickets open the faucet. Code review is the drain. If the drain is clogged, adding more water does not help. It floods the sink.
At one point we had 23 PRs sitting in code review. On paper, the team looked very productive. Lots of activity, lots of commits, lots of people “working.” In reality, we had a blocked drain. The system was accumulating partially completed work faster than we could close it.
That is when we made flow the top metric.
Finish before you start more
PRs that are 80% done are expensive inventory. They cost context, they age quickly, and they slow everyone else down.
We started assigning explicit daily ownership for shepherding PRs to merge. Not “everyone should review” in theory. One person owns unblocking merges today.
That changed behavior fast. Instead of ending standup with ten people taking ten new tickets, we ended standup by asking:
- Which PRs merge today?
- What exact feedback is blocking each one?
- Who is responsible for clearing that blocker before end of day?
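Those questions are easier to answer when the age of the review queue is visible. A minimal sketch of how we might surface stale PRs (the input shape matches what `gh pr list --json number,title,createdAt` emits; the two-day threshold is our illustration, not a rule):

```python
# Hedged sketch: flag PRs that have sat in review too long.
# Input records mirror `gh pr list --json number,title,createdAt` output;
# STALE_AFTER is an illustrative threshold, not a team-mandated number.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=2)

def stale_prs(prs, now=None):
    """Return PRs open longer than STALE_AFTER, oldest first."""
    now = now or datetime.now(timezone.utc)
    stale = [
        pr for pr in prs
        if now - datetime.fromisoformat(pr["createdAt"].replace("Z", "+00:00")) > STALE_AFTER
    ]
    return sorted(stale, key=lambda pr: pr["createdAt"])
```

Printing that list at the top of standup turns "which PRs merge today?" from a memory exercise into a checklist.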
When you do that consistently, throughput follows. Not because people code faster, but because work actually exits the system.
Use a PR size guardrail
Large PRs were another pipe-clogger. We implemented a 5-8 file rule for normal work. Not because 9 files is morally wrong, but because review quality drops sharply as PRs sprawl.
When someone needs a bigger PR, they now get peer approval before writing it. That quick pre-check forces a better question up front: “Should this be split?”
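A guardrail like this can also be enforced mechanically in CI. A minimal sketch, assuming git is available and `origin/main` is the base branch (the function names and limit constant are our illustration, not a standard tool):

```python
# Hedged sketch of a PR size guardrail as a CI check.
# Assumes origin/main is the base branch; adjust for your repo's default.
import subprocess
import sys

MAX_FILES = 8  # the 5-8 file rule: anything above 8 needs peer approval first

def over_limit(changed_files, limit=MAX_FILES):
    """True when the diff exceeds the file-count guardrail."""
    return len(changed_files) > limit

def changed_in_branch(base="origin/main"):
    """List files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

if __name__ == "__main__":
    files = changed_in_branch()
    if over_limit(files):
        print(f"PR touches {len(files)} files (limit {MAX_FILES}). "
              "Split it, or get peer approval before proceeding.")
        sys.exit(1)
```

The point of automating it is not enforcement for its own sake; it moves the "should this be split?" conversation to before the review, where it is cheap.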
We had a 46-file guest account PR that became a case study. It bundled multiple concerns, mixed behavior changes with refactors, and was painful to review. The code was not bad. The shape was bad. Even strong engineers struggle to review a kitchen-sink diff quickly and safely.
After that, the team had a shared memory: if the PR is giant, the system pays for it later.
Stop pretending code review means “waiting”
Another adjustment was workflow truthfulness.
When a reviewer requested changes, the ticket moved back to In Progress immediately. We stopped letting tickets sit in "Code Review" for three days as if review was still happening.
That sounds small, but it fixed visibility. If a ticket is actually being revised, the board should show that. Fake states create fake confidence.
Prioritize the real bottleneck
Code review is usually the bottleneck in a healthy team. Treat it that way.
That means there are days when your highest-leverage contribution is not writing new code. It is reducing review queue depth, clarifying comments, and helping teammates close open loops.
A lot of teams say this and then reward the opposite behavior. We changed the expectation in standups and sprint retros so review work was visible and valued as delivery work.
Beware the illusion of progress
This one deserves to be blunt: bug fixes can create an illusion of progress.
I said this directly in standup because we were slipping into whack-a-mole mode. Fixing a stream of small bugs feels satisfying. You can point to closed tickets. You can feel momentum.
But if those bugs are symptoms of weak planning, unclear boundaries, or missing architecture decisions, then velocity is synthetic. You are spinning effort, not compounding it.
The growth moments in this migration were not when we closed ten random bugs in a day. They were when we paused to shape the work better, clarified ownership, and designed a flow that reduced recurring defects.
Fast coding feels good. Clear pipes win projects.
2) Put People Where Their Strengths Compound
The lesson is simple: team design matters as much as individual talent.
The basketball metaphor came from frustration with role mismatch. You do not put your center at point guard and then wonder why the offense looks awkward. Same in engineering.
When people are mispositioned, work bounces. When people are aligned to strengths, decisions get faster and quality gets steadier.
Create domain owners and route directly
We assigned clear ownership by domain: checkout, admin, auth, vouchers/passports, and other critical surfaces.
That changed bug routing immediately. Instead of a bug ping-ponging across three developers, it went to the domain owner first. Even when someone else helped implement, the owner stayed accountable for correctness and continuity.
Ownership is not gatekeeping. It is reducing ambiguity.
Every domain needs a secondary owner
Primary-only ownership breaks in a global team. People sleep.
We added secondary owners for every domain so handoffs could happen cleanly across time zones. One concrete example: a senior developer owned a complex feature primarily, with a secondary owner in another time zone so end-of-day handoff did not freeze progress.
That gave us effective continuity across time zones without heroics. Work could move across continents with less context loss.
Stop wasting senior leverage on pure cleanup
We had a period where one of our strongest engineers was mostly being used as a post-project fixer. They could do that work, but that is not where their impact compounds.
So we pulled them into architecture conversations and pair planning sessions earlier in the cycle.
The result was predictable: fewer downstream surprises, better ticket shaping, and stronger implementation decisions before code was written.
Senior engineers should not be hidden in the back office cleaning up after avoidable planning misses. Put them where they can prevent misses.
Ownership also applies to the business side
Engineering ownership alone is not enough if requirements ownership is fuzzy.
We were tracking around 45 requirements across brands and workflows. Each one needed a clearly named business owner. Otherwise requests bounced between teams, and engineering burned time resolving questions that should have had a direct stakeholder answer.
When each requirement had a real owner, conversation paths shortened. Fewer meetings, fewer loops, faster decisions.
Pair planning transfers capability
Our team is strong but uneven by domain, which is normal. The fastest way to raise the floor is deliberate pair planning.
We pushed for at least one paired planning session per week per developer. Not pair programming all day. Pair planning.
That cadence did two things:
- It transferred domain context before tickets got complex.
- It normalized design discussion as a team sport instead of a rescue pattern.
Over time, that reduced single points of failure and made handoffs less fragile. People could step into adjacent domains with confidence because they had seen the reasoning, not just the final code.
Talent wins games. Positioning wins seasons.
3) Invest Upfront or Pay Forever
The lesson is simple: if you skip context and planning, execution gets noisy and expensive.
We started calling this the slop cannon.
A slop cannon is what happens when you jump straight to execution with weak context. You fire effort quickly, and then you spend days cleaning up the mess.
The term stuck because everyone recognized the pattern.
Respect the pipeline: research, clarify, plan, execute
Execution should be the fourth step, not the first.
The pipeline we reinforced was:
- Research what exists, including legacy behavior.
- Clarify requirements, edge cases, and ownership.
- Plan testable slices and implementation path.
- Execute.
Skipping the first three steps and jumping to code produces garbage, even with AI assistance. AI can speed execution, but it cannot replace missing context.
This is a newer engineering skill than many teams admit. Writing code is table stakes. Building and curating context is the differentiator.
Include QA in planning, not just validation
One of our biggest process corrections was bringing QA into epic planning earlier.
QA sees gaps that PMs and developers miss, especially around scenario coverage and weird real-world behavior. When they are only included late, those misses show up as churn.
Sprint 6 made this obvious. We completed 85 items and delivered 133 story points, our biggest sprint yet. That did not happen because we worked longer hours. It happened because QA and engineering collaborated in breakout rooms after standups and closed ambiguity before implementation drifted.
Fifteen minutes of focused QA-dev discussion prevented days of async confusion.
Engineers must define test needs proactively
Business stakeholders rarely ask for specific E2E coverage. That is not a criticism; it is just not their job.
It is our job.
So we made it explicit that engineers propose test boundaries as part of planning. If a change introduces risk, the ticket should include how we will verify it.
Quality improves fastest when tests are designed with the change, not bolted on after regressions.
Break work into testable slices
Large “do everything” tickets are slop cannon fuel.
We shifted ticket shaping toward testable slices. The ticket defines a verifiable boundary: what done means from a behavior and validation perspective. Engineers still own implementation details, but the slice itself is concrete and checkable.
That made status reporting more honest and made QA handoff cleaner. “Almost done” became less common because slices were scoped to be truly finishable.
Walk the legacy site first
For migration work, this is non-negotiable.
Before touching a ticket, developers should walk the legacy behavior in the existing product. You cannot replicate 99% of behavior if you have not experienced the baseline yourself.
We caught multiple hidden rules this way, including subtle UX behaviors and data quirks that were never documented cleanly. The walkthrough is faster than guessing and far cheaper than rework.
Track improvements as backlog items, not side comments
When someone spots a post-parity improvement during implementation, capture it as a labeled backlog item.
Do not let it live in Slack memory.
That practice gave us two benefits:
- We preserved good ideas without derailing parity work.
- We could prioritize improvements intentionally with stakeholders later.
Upfront investment can feel slower in the moment. In migrations, it is the only way to avoid permanent rework tax.
4) Process Is What You Actually Do, Not What You Write Down
The lesson is simple: process only counts when it changes daily behavior.
A lot of teams can write a good process doc. Fewer teams consistently run it under deadline pressure.
What improved our outcomes were a handful of concrete operating changes that stuck.
Run pre-sprint stakeholder planning
Before engineering sprint kickoff, we added a separate stakeholder planning session to set business priorities for the next two weeks.
Before this change, engineers often entered sprint planning with unclear directional goals. We could still fill a board, but the sprint lacked shared priority clarity.
Pre-sprint alignment fixed that. Engineering planning became execution planning, not first-pass discovery.
Standardize kickoff with a template
We created a sprint kickoff template so each sprint started with the same structure across product owners, project managers, and the broader team.
Templates sound boring, but they reduce cognitive load and variation. Everyone knows where priorities live, where risks are discussed, and where handoffs are defined.
Consistency is a force multiplier in distributed teams.
Start standup with blockers and review queue
We changed standup order.
Not "what did I do yesterday" first. The first question is: what is blocked, and what is stuck in code review?
That single change kept attention on flow instead of activity theater. If something was blocked, it got immediate visibility. If reviews were backed up, we addressed the queue before opening new work.
Use breakout rooms for fast convergence
After standup, we leaned into targeted breakout sessions between QA and developers.
This replaced long async loops with short live alignment. Issues that used to drag for three days were often resolved in fifteen minutes.
The value was not just speed. It was shared understanding. People left the breakout with the same mental model of behavior, edge cases, and expected test outcomes.
Measure process impact with sprint outcomes
You can usually tell if process is real by comparing adjacent sprints.
Sprint 5 felt busy and reactive. Lots of bug-fix churn, lots of context switching, lots of perceived progress.
Sprint 6, after we tightened planning, domain ownership, and testable slices, delivered 133 story points and 85 completed items, our strongest sprint in this cycle.
Same broad team, same business complexity, same external constraints. Different operating model.
That is the point. Process is not a side artifact. It is the production system for team outcomes.
Closing: The Pattern
The recurring pattern in this migration was flow, ownership, and upfront investment.
If I reduce everything to three standup metaphors, it is this:
- Plumbing — Keep the pipes clear. Finish before you start.
- Basketball — Put people in positions where their strengths compound.
- Slop cannon — Plan before you fire.
None of these are glamorous ideas. That is why they work.
Distributed migration teams do not usually fail because no one can write code. They fail because work clogs, ownership blurs, and execution outruns understanding.
The team that finishes fastest is usually not the team that types fastest. It is the team that keeps work moving through review, routes decisions to clear owners, and invests early in context so execution is clean.
That is what changed for us between reactive and reliable delivery.
And if you are in the middle of a complex migration right now, with hard dates and too many moving parts, that is where I would start tomorrow morning.
Not with a new framework.
With your pipes, your positions, and your slop cannon.