I Built a Tool to Stop Myself from Overengineering
After building a full ReAct loop when a simple skill would work, I made a complexity-mapping tool that forces me to decompose problems before jumping to solutions.
I kept making the same mistake. I'd get an idea, jump to the most sophisticated solution, build it, then realize something simpler would have worked. The Security Agent project was the wake-up call. I built a full ReAct loop with 12 tools and a real-time dashboard. Turns out a Claude Code skill did the job better.
So I built a tool to catch myself earlier.
The Pattern I Kept Repeating
With Security Agent, I wanted to scan my codebase for vulnerabilities. My brain went straight to "agentic security tester" with tool calls, a ReAct loop, CVE lookups, and a live dashboard showing the agent's reasoning.
I built all of it. The agent ran, called tools, found real CVEs. It also hallucinated 100+ findings about files that didn't exist. After fighting context limits and duplicate findings, I asked myself: could a deterministic script do this? Could Claude Code do this as a skill?
Yes to both. The "agent" added complexity without adding value.
This wasn't a one-time thing. I noticed myself reaching for Level 4 solutions (multi-agent systems) when Level -1 solutions (straightforward code) would work fine. I needed something to interrupt that pattern.
What the Tool Does
Solution Architect is a 6-step workflow that forces me to think before building:
1. Problem - Describe what I'm trying to build
2. Constraints - What tools am I required to use? Will this integrate with something larger?
3. Decomposition - Break the problem into sub-problems (Claude helps here)
4. Scoring - Assign a complexity level (-2 to 4) to each sub-problem
5. Dependencies - Set the execution order between sub-problems
6. Results - View the plan, export markdown with a data flow diagram
The key insight: when I looked at "security scanner" as one thing, I thought "complex AI agent." When I broke it into sub-problems (find dependencies, check CVE database, scan code patterns, format output), each one was obviously simple.
Why Claude Code Instead of an API
I originally planned to integrate Opus 4.5 via API for LLM-assisted analysis. Then I realized I'm already paying for an Anthropic Max subscription. Why add API costs when Claude Code is right there?
The tool now has "Copy for Claude" buttons at each step. I paste the prompt into Claude Code, get a response, paste it back. The app parses numbered lists and applies the suggestions.
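Parsing a pasted response doesn't need anything fancy; matching numbered lines covers it. A minimal sketch of that idea (the tool's actual parser isn't shown here, so this is illustrative):

```python
import re

def parse_numbered_list(text: str) -> list[str]:
    """Extract the items from a numbered list in a pasted Claude response."""
    items = []
    for line in text.splitlines():
        # Match "1. item" or "1) item", ignoring leading whitespace.
        m = re.match(r"\s*\d+[.)]\s+(.*)", line)
        if m:
            items.append(m.group(1).strip())
    return items

response = """Here are the sub-problems:
1. Find project dependencies
2. Check each against the CVE database
3. Scan code for insecure patterns
4. Format the findings as markdown"""

items = parse_numbered_list(response)  # 4 items, preamble line dropped
```

Non-numbered lines (like the preamble) fall through, so Claude's conversational framing doesn't break the import.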
This turned out better than API integration anyway. It shows two tools working together rather than hiding an API call behind a button.
What I Actually Get From It
The tool forces me to:
Decompose before deciding. I can't assign a complexity level to "the whole thing." I have to break it into pieces first. That alone catches most overengineering.
Consider constraints upfront. The constraints step asks about required tools and integration horizon. "Will this become part of a larger system?" changes the recommendations. An agent tool needs clean APIs. A standalone script doesn't.
See the distribution. After scoring, a mini complexity map shows where the sub-problems land. If I have six sub-problems and five are Level -1, I'm probably not building a Level 4 system.
Think about dependencies. The dependencies step makes me define execution order. Sometimes that reveals that sub-problems can run in parallel. Sometimes it shows a cleaner sequence than I'd assumed.
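Finding which sub-problems can run in parallel is a standard topological-sort variant: repeatedly peel off everything whose dependencies are already done. A sketch, with my own hypothetical task names from the security-scanner example:

```python
def execution_waves(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group tasks into waves; everything within a wave can run in parallel.

    deps maps each task to the tasks it depends on.
    """
    remaining = dict(deps)
    waves = []
    while remaining:
        # A task is ready once none of its dependencies are still pending.
        ready = sorted(t for t, d in remaining.items()
                       if all(dep not in remaining for dep in d))
        if not ready:
            raise ValueError("dependency cycle")
        waves.append(ready)
        for t in ready:
            del remaining[t]
    return waves

waves = execution_waves({
    "find dependencies": [],
    "scan code patterns": [],
    "check CVE database": ["find dependencies"],
    "format output": ["check CVE database", "scan code patterns"],
})
# Wave 1 holds both independent tasks, so they can run side by side.
```

This is the "sometimes that reveals parallelism" moment: two of the four tasks land in the first wave.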
The Complexity Levels
The tool uses 7 levels from -2 to 4:
| Level | Name | What It Means |
|---|---|---|
| -2 | Use Existing | A tool already does this, just use it |
| -1 | Direct Code | I know exactly how to solve this with code |
| 0 | Basic Automation | Scripts and orchestration, no AI needed |
| 1 | Single LLM Call | One prompt, one response, done |
| 2 | Tool-Using Agent | LLM needs to call tools to complete the task |
| 3 | Multi-Step Agent | Multiple reasoning steps, state management |
| 4 | Multi-Agent System | Multiple agents coordinating on a task |
Most of my "AI project" ideas decompose into mostly Level -1 and 0 sub-problems with maybe one or two Level 1 or 2 pieces. The tool makes that visible before I start building.
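The mini complexity map from the scoring step can be approximated in a few lines. A text-mode sketch (the tool's real rendering is graphical; the level names come from the table above):

```python
from collections import Counter

LEVEL_NAMES = {
    -2: "Use Existing", -1: "Direct Code", 0: "Basic Automation",
    1: "Single LLM Call", 2: "Tool-Using Agent",
    3: "Multi-Step Agent", 4: "Multi-Agent System",
}

def complexity_map(levels: list[int]) -> str:
    """Render a text histogram: one '#' per sub-problem at each level."""
    counts = Counter(levels)
    rows = []
    for level in range(-2, 5):
        if counts[level]:
            rows.append(f"{level:>2} {LEVEL_NAMES[level]:<18} {'#' * counts[level]}")
    return "\n".join(rows)

# Six sub-problems, most at Level -1 and 0: probably not a Level 4 system.
print(complexity_map([-1, -1, -1, 0, 0, 1]))
```

When the bars pile up at the bottom of the scale, the distribution itself is the argument against building an agent.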
Try It
The tool is available as a demo. You can work through all 6 steps, and the "Copy for Claude" prompts work with Claude.ai, Claude Code, or any Claude interface. Paste the responses back and the tool parses them.