If You Can’t One-Shot Your Feature, It’s a SKILL.md Issue
We’ve Always Been Toolsmiths
Software engineers have always customized their environments. We tweak our terminals, configure our editors, remap our keyboards. We write shell aliases, create snippets, and build custom linters. The goal has always been the same: reduce friction between intent and execution.
AI-assisted development is just the next layer of that same tinkering. Instead of writing a vim macro, you’re writing a SKILL.md file that teaches your AI agent how to approach a specific type of task. Instead of configuring a linter rule, you’re describing the patterns and conventions your codebase follows so the agent can match them.
What a SKILL.md Actually Does
A SKILL.md file is context that persists across sessions. It captures the patterns, conventions, and domain knowledge that make the difference between an AI agent that writes generic code and one that writes code that fits your project.
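The exact format varies by tool, but as a rough sketch, a skill file for a hypothetical web backend might look like this (every path, helper name, and convention below is illustrative, not from any real project):

```markdown
# Skill: Adding an API endpoint

## When to use
Any task that adds or modifies a REST endpoint.

## Conventions
- Route handlers live in `src/routes/`, one file per resource.
- Validate request bodies with the shared `validate()` helper, never inline.
- Return errors as `{ "error": { "code": ..., "message": ... } }`, not bare strings.

## Gotchas
- Every new route needs a matching test file in `tests/routes/`, or CI fails.
```

The point isn’t the specific rules; it’s that each line answers a question the agent would otherwise have to guess at.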
When you can’t one-shot a feature — when the agent keeps getting it wrong, or you have to correct the same mistakes repeatedly — that’s a signal. The agent isn’t dumb; it’s under-informed. The fix isn’t to type harder. The fix is to improve the context you’re feeding it.
The Feedback Loop
Here’s the cycle I follow:
- Attempt the feature. Let the agent take its best shot.
- Identify the gaps. Where did it go wrong? Missing conventions? Wrong patterns? Incomplete understanding of the domain?
- Capture the knowledge. Write it down in a SKILL.md or CLAUDE.md file so the agent has it next time.
- Try again. The next attempt should be closer. If not, repeat.
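In practice, the “capture the knowledge” step is often just appending a line or two to the relevant skill file. As an illustrative (hypothetical) example, after an agent repeatedly reaches for the wrong libraries, the captured correction might be nothing more than:

```markdown
## Lessons learned
- Use `date-fns`, not `moment` — moment is deprecated in this codebase.
- Database migrations go through `npm run migrate:create`; never hand-write SQL files.
```

Each entry is small, but the next attempt starts from it instead of rediscovering it.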
Over time, your SKILL.md files accumulate the tribal knowledge of your project. They become a living document of how your codebase works, written in a format that both humans and AI agents can use.
The Better Your Skills, the Further You Can Push
This is the real leverage. Every hour spent improving your SKILL.md files pays compound interest. A well-documented set of skills means you can tackle increasingly complex features with higher first-attempt success rates. Your agents get smarter not because the models improve, but because the context you provide improves.
The engineers who will get the most out of AI tooling aren’t the ones who prompt the hardest. They’re the ones who invest in teaching their tools well.
This article was originally posted on LinkedIn.