There’s a well-known story in the Zhuangzi about a butcher named Cook Ding. His knife never dulled — not because it was sharper than anyone else’s, but because he never cut through bone. He found the natural joints in every piece of meat and guided the blade through without resistance.
We think this is the right metaphor for AI-assisted software development — and it’s the principle behind the agentic development workflow we’ve been building and deploying at PAR Technology.
The Problem With How Most Teams Use AI
Most engineering teams are using AI assistants in one of two ways: as an autocomplete engine or as a conversation partner. You either get inline code suggestions one line at a time, or you open a chat window and try to describe an entire feature in a single conversation.
Both approaches have the same fundamental problem. They treat software development as a single, continuous activity — as if understanding what to build, deciding how to build it, and actually building it are the same kind of thinking.
They’re not.
Understanding a problem is divergent, exploratory, and full of dead ends. Choosing an architectural approach is comparative, analytical, and requires holding multiple options in tension. Writing code is sequential, convergent, and demands precision. Asking an AI to do all three at once is asking it to cut through bone.
Three Spaces, Three Kinds of Thinking
The insight behind our approach is that every development task moves through three distinct cognitive modes. We call them Problem Space, Solution Space, and Implementation Space.
Problem Space is about discovering what you’re actually dealing with. What’s failing? Why? What constraints exist that aren’t obvious? What competing needs create tension? The work here is Socratic — asking better questions until the problem becomes bounded enough to solve.
Solution Space is about exploring how to address a well-defined problem. Not jumping to the first answer, but generating genuinely different approaches, evaluating each against the real constraints, and choosing one with eyes open about what you’re trading away.
Implementation Space is about making the chosen approach real — decomposing an architecture into executable steps, building them in the right order, and verifying the result against the original intent.
These aren’t phases in a project plan. They’re fundamentally different kinds of cognitive work. Mixing them is where AI-assisted development breaks down: solving problems you haven’t fully understood, coding before you’ve committed to an approach, or re-debating architectural decisions mid-implementation.
The separation is the system.
What We Built
We developed a structured agentic workflow that gives AI agents specialized roles matched to each cognitive mode. Rather than a single AI conversation that tries to handle everything, our pipeline moves work through distinct phases — each with its own agent behaviors, its own inputs and outputs, and clear rules about what that phase does and doesn’t do.
In Problem Space, the AI acts as an exploration partner: asking clarifying questions, surfacing tensions between competing needs, and tracking when the problem has been explored sufficiently to move forward.
In Solution Space, the AI shifts to architectural exploration: generating multiple approaches, evaluating tradeoffs against the problem’s specific constraints, and helping the engineer arrive at a design decision — without making the choice for them.
In Implementation Space, the AI becomes an executor: following a structured plan, checking its work against the spec, and escalating when something doesn’t fit rather than improvising a workaround.
Each phase produces an artifact — a problem brief, an architectural spec, an implementation plan — that becomes the fixed input for the next phase. These artifacts are the contracts between phases. They prevent drift, ensure context transfers cleanly, and give every phase a clear reference point for what was decided upstream.
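The artifact chain can be sketched as typed, immutable records, where each phase accepts only the artifact produced upstream. This is an illustrative sketch, not the actual schema our workflow uses — the type names, fields, and `plan_from` helper are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical artifact types. Frozen dataclasses model the idea that
# an upstream artifact is a fixed input, not something a later phase mutates.

@dataclass(frozen=True)
class ProblemBrief:
    summary: str
    constraints: list          # hard constraints surfaced during exploration

@dataclass(frozen=True)
class ArchSpec:
    brief: ProblemBrief        # the fixed input from Problem Space
    chosen_approach: str
    tradeoffs: list            # documented costs of the decision

@dataclass(frozen=True)
class ImplementationPlan:
    spec: ArchSpec
    steps: list                # ordered, executable steps

def plan_from(spec: ArchSpec) -> ImplementationPlan:
    # Implementation Space reads only the spec; it never reaches back
    # into the raw problem discussion.
    steps = [f"implement: {spec.chosen_approach}"]
    return ImplementationPlan(spec=spec, steps=steps)

brief = ProblemBrief(summary="checkout latency spikes",
                     constraints=["no schema changes"])
spec = ArchSpec(brief=brief, chosen_approach="read-through cache",
                tradeoffs=["stale reads up to 5s"])
plan = plan_from(spec)
```

The immutability is the point: once a phase closes, its artifact is a contract, and downstream work cites it rather than renegotiating it.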
Why the Boundaries Matter More Than the Agents
The most valuable part of this system isn’t the AI behavior in any single phase. It’s the boundaries between phases.
When the Implementation phase discovers that the architecture doesn’t decompose cleanly, it doesn’t improvise — it sends work back to Solution Space with a specific finding. When Solution Space exploration reveals that the original problem was missing a critical constraint, it pauses and routes back to Problem Space rather than designing around the gap.
This backward flow is what makes the system self-correcting. Each downstream phase acts as a validation layer for the one before it. Mistakes get caught when they’re cheap to fix — before they’re buried under layers of implementation.
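One way to picture the backward flow is as an escalation signal carrying a specific finding and a target phase. This is a minimal sketch of the control-flow idea, not our production code — the phase names, the `decomposes_cleanly` flag, and the handler are invented for illustration:

```python
from enum import Enum, auto

class Phase(Enum):
    PROBLEM = auto()
    SOLUTION = auto()
    IMPLEMENTATION = auto()

class RouteBack(Exception):
    """Raised when a phase finds a defect in an upstream artifact."""
    def __init__(self, target: Phase, finding: str):
        super().__init__(finding)
        self.target = target
        self.finding = finding

def run_implementation(spec: dict) -> str:
    # If the architecture doesn't decompose cleanly, escalate with a
    # specific finding instead of improvising a workaround.
    if not spec.get("decomposes_cleanly", True):
        raise RouteBack(Phase.SOLUTION,
                        "architecture does not decompose into executable steps")
    return "built per spec"

def pipeline(spec: dict) -> str:
    try:
        return run_implementation(spec)
    except RouteBack as rb:
        # The finding becomes the input for re-entering the earlier phase,
        # so the upstream revision starts from a concrete defect report.
        return f"re-entering {rb.target.name}: {rb.finding}"
```

The key property is that the downstream phase never patches around the defect; it names it and hands it back to the phase that owns the decision.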
It also prevents the most common failure mode we see in AI-assisted development: architectural second-guessing during implementation. Once a decision has been made and documented in the artifact chain, the build phase trusts it. If an implementation step is harder than expected, the system’s default response is to push through — because the difficulty was likely an anticipated cost of a tradeoff that was already evaluated. The spec already answered “wouldn’t it be better to…” during the phase when that question belonged.
Where It’s Delivering
The workflow has been through two major iterations based on real usage from our engineering team. It’s driving AI-backed development projects at a velocity that’s changed how we think about what’s achievable in a sprint cycle.
More significantly, it works on the hard stuff. Not just new, clean-slate projects — but large, established codebases with years of multi-contributor history. The kind of code that most AI tools struggle with because there’s no single author’s style, no clean architecture to follow, and a thousand implicit conventions that aren’t documented anywhere.
The structured problem-bounding phase is what makes this possible. Before the AI writes a single line of code, it has a bounded problem, a chosen approach with documented tradeoffs, and a plan that accounts for the specific codebase’s patterns and conventions. The upstream thinking is complete before execution begins.
What’s Next
We’re currently evolving this workflow from its origins as a CLI-based development tool into a deployed multi-agent system built on LangGraph. The goal is to move beyond individual developer productivity and into team-level orchestration — where the structured pipeline becomes infrastructure that any engineer can use, not just those who’ve internalized the workflow.
We’re also exploring how the Problem Space phase can serve roles beyond engineering. The same structured exploration that helps an engineer bound a technical problem can help a product manager bound a feature request or a business stakeholder articulate a need. The brief that comes out of Problem Space doesn’t require technical expertise to produce — it requires disciplined thinking, which is exactly what the agent behavior enforces.
AI-assisted development is still in its early stages. Most teams are discovering what works through trial and error. Our bet is that the answer isn’t better models or bigger context windows — it’s better structure around how we use the models we already have.
Find the natural joints. Guide the blade through. Let the AI do what it’s good at, in the right order, with the right constraints.
Cook Ding’s knife never dulled. We think there’s a lesson in that.
Harrison Wright is an AI Platform Strategist & Architect at PAR Technology, where he leads enterprise AI strategy and platform development across three business units. He has 8+ years of experience shipping production software, including founding and exiting a technology company.