A Bad Line of Research Is a Hundred Bad Lines of Code
The Problem with Vibes
When developers first start using AI coding agents, the natural approach is to describe what they want and hope for the best. Vibes-based development. Sometimes it works beautifully. Other times you get code that looks right but misses the architectural patterns, naming conventions, or business logic that your codebase depends on.
The issue isn’t the model’s capability. It’s the context window. An agent that doesn’t understand your codebase will write generic solutions. An agent loaded with the right context will write code that fits.
Research-Plan-Implement
Dex’s Research-Plan-Implement (RPI) workflow keeps agents in the smart zone. The discipline is straightforward:
Research the codebase first. Read the code, trace the dependencies, understand the patterns already in use. Don’t assume — verify. Look at how similar features were built before. Check the test patterns. Read the configuration.
Plan the changes next. Write down the approach, the files involved, the expected outcome. This isn’t bureaucracy — it’s context loading. The plan becomes part of the agent’s working memory, keeping it aligned as implementation gets complex.
Implement only after research and planning are loaded into the context window. By this point, the agent has everything it needs to write code that actually fits.
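The ordering discipline above can be sketched as a small gate that refuses to let a session advance to implementation before research and planning have produced output. This is an illustrative sketch only; the class and method names are invented for this example, not any particular tool's API:

```python
from enum import Enum


class Phase(Enum):
    RESEARCH = 1
    PLAN = 2
    IMPLEMENT = 3


class RPISession:
    """Illustrative phase gate: refuses to skip ahead until the
    current phase has recorded at least one finding."""

    def __init__(self):
        self.phase = Phase.RESEARCH
        self.context = []  # accumulated findings and plan notes

    def record(self, note: str):
        # Everything learned stays in the working context.
        self.context.append((self.phase.name, note))

    def advance(self):
        # No output from the current phase means it was skipped.
        if not any(p == self.phase.name for p, _ in self.context):
            raise RuntimeError(
                f"cannot leave {self.phase.name}: nothing recorded"
            )
        self.phase = Phase(self.phase.value + 1)
```

A session would record findings during research ("similar features use the repository pattern"), advance to planning, write down the files and approach, and only then reach the implement phase with all of that context already loaded.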
Why This Matters
A bad line of research cascades. If the agent misunderstands the architecture, every file it touches will reflect that misunderstanding. If it misses a naming convention, every function it writes will be inconsistent. If it doesn’t know about an existing utility, it’ll reinvent it poorly.
But a good line of research compounds. Understanding the existing patterns means the new code slots in naturally. Knowing the test conventions means tests get written correctly the first time. Reading the configuration means no surprises at deploy time.
Building the Discipline
This is what inspired me to build rpikit, a tool that enforces this discipline at every phase. Verify before claiming done. Evaluate feedback before accepting it. Don’t outsource the thinking — outsource the typing.
The engineers getting the best results from AI aren’t the ones who prompt the hardest. They’re the ones who research the deepest before they start.
This article was originally posted on LinkedIn.