This content originally appeared on DEV Community and was authored by ithiria894
You ask Claude about a function. It gives you a confident, detailed explanation. You build on it for an hour. Then you find out it was wrong.
Or: you change a function, tests pass, you ship. Three days later — four other places called that function, all broken. Claude never mentioned them.
Same root cause: Claude doesn't have a way to navigate your codebase.
## The core idea
Turn your entire repo into a graph. Use BFS + LSP to search and traverse it.
```
/generate-index     → build the graph (deterministic script + AI refine)
        ↓
AI_INDEX.md         → the graph itself (adjacency list — nodes are domains, edges are connections)
        ↓
/investigate-module → read a specific node (grounded, with sources)
/trace-impact       → BFS along the edges (find everything a change affects)
```
Drop a bug or a feature request anywhere on this graph, and the system traces every connected path to find what's affected — before you write a single line of code.
## What makes this AI_INDEX different
There are dozens of AI_INDEX templates. Most are flat file lists:
```
auth → src/auth/
api  → src/api/
db   → src/models/
```
Claude knows where to find things, but has no idea that changing auth breaks api. No structure connects them. It's a phonebook, not a map.
Our AI_INDEX is a graph data structure — an adjacency list:
```
### Auth
- Entry: src/auth/middleware.py
- Search: verifyToken, AuthError
- Tests: tests/test_auth.py
- Connects to:
  - API layer — via requireAuth() in src/api/routes.py
  - DB layer — via UserModel.findById() in src/models/user.py

### API layer
- Entry: src/api/routes.py
- Search: router, handleRequest
- Connects to:
  - Auth — via requireAuth middleware
  - Rule evaluation — via POST /api/evaluate
```
Every domain is a node. Every `Connects to` entry is an edge. That's what makes /trace-impact possible: BFS traversal on this graph. Without edges, you have a directory listing. With them, you have a network an algorithm can walk.
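The adjacency list above maps directly onto a plain Python dict, and the traversal is ordinary BFS. A minimal sketch (the domain names mirror the example and are illustrative, not the tool's actual data model):

```python
from collections import deque

# Adjacency list mirroring the AI_INDEX example (illustrative names).
graph = {
    "auth": ["api", "db"],
    "api": ["auth", "rule-evaluation"],
    "db": [],
    "rule-evaluation": [],
}

def reachable(start):
    """Visit every node reachable from `start`, breadth-first."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

print(reachable("auth"))  # ['auth', 'api', 'db', 'rule-evaluation']
```

A flat file list has no `graph` dict to walk; the edges are what turn "where is auth?" into "what does changing auth touch?".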
Edges come from real imports, not guessing. The generator scans actual import statements.
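For a Python repo, edge extraction can be as simple as walking each file's AST for import statements. A hedged sketch of the idea (the actual generator script may differ; `build_edges` and its file-per-module assumption are mine, for illustration):

```python
import ast
from pathlib import Path

def imports_of(path):
    """Return the set of top-level modules a Python file imports."""
    tree = ast.parse(Path(path).read_text())
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

def build_edges(src_root):
    """Map each file to the in-repo modules it imports — the graph's edges."""
    local = {p.stem for p in Path(src_root).rglob("*.py")}
    edges = {}
    for p in Path(src_root).rglob("*.py"):
        deps = imports_of(p) & local   # keep only edges inside the repo
        deps.discard(p.stem)           # ignore self-references
        if deps:
            edges[str(p)] = sorted(deps)
    return edges
```

Because this reads the parse tree rather than grepping for the word "import", comments and strings can't produce phantom edges.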
## LSP — the search engine for the graph
BFS needs precise lookups at each node. grep can't do this — string matching gives 40 results, 15 noise, half your token budget gone.
LSP asks the language's type checker directly. Semantic, not string. Same query, 6 exact results.
| | grep | LSP `findReferences` |
|---|---|---|
| Speed | baseline | 900x faster |
| Token cost | high | 20x lower |
| Accuracy | string match, false positives | semantic, zero false positives |
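Under the hood, finding references is a single JSON-RPC call defined by the Language Server Protocol: `textDocument/references`. A request for every reference to the symbol under the cursor looks roughly like this (the file URI and position are hypothetical; line and character are zero-based per the spec):

```python
import json

# LSP request per the spec: textDocument/references.
# URI and cursor position are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/references",
    "params": {
        "textDocument": {"uri": "file:///repo/src/auth/middleware.py"},
        "position": {"line": 41, "character": 8},  # cursor on the symbol
        "context": {"includeDeclaration": False},
    },
}
print(json.dumps(request, indent=2))
```

The server answers with exact `Location` objects (URI plus range) resolved by the type checker, which is why the result set contains call sites and nothing else.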
## /generate-index — build the graph automatically
Scans imports, directory structure, and exported symbols, then writes AI_INDEX.md with every `Connects to` edge derived from actual import statements. Roughly 80% of the work is a deterministic script that costs zero tokens; Claude refines the remaining 20%.
Run once on a new repo. Re-run when the structure changes.
## /investigate-module — verification-first prompting
The key mechanism: forces Claude to name the exact file and function it read before making any claim. Eliminates the middle ground of confident fabrication — Claude either reads the source (accurate) or says "uncertain" (you dig deeper).
## /trace-impact — BFS traversal on the graph
This is where the graph pays off:
- Level 0: the node you're changing
- Level 1: direct callers (LSP findReferences — semantic, not grep)
- Level 2: callers of those callers
- Cross-domain: follows `Connects to` edges across module boundaries
- Tests: every test covering the affected set
Breadth-first so you see all direct impact before going deeper. Stops at API boundaries. Nothing slips through.
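The level-by-level expansion with boundary stops can be sketched as a BFS that records each node's distance from the change and refuses to traverse past boundary nodes (the graph and the `boundaries` set below are illustrative, not the tool's internals):

```python
from collections import deque

def blast_radius(graph, start, boundaries=frozenset()):
    """BFS from `start`, tagging each affected node with its level.
    Nodes in `boundaries` are reported but not expanded further."""
    levels = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in boundaries:
            continue  # surface the boundary node, stop traversing past it
        for neighbor in graph.get(node, []):
            if neighbor not in levels:
                levels[neighbor] = levels[node] + 1
                queue.append(neighbor)
    return levels

# Hypothetical caller graph: who calls whom.
graph = {
    "verifyToken": ["requireAuth"],          # level 1: direct caller
    "requireAuth": ["routes", "admin_api"],  # level 2: callers of callers
    "routes": ["public_api"],
}
print(blast_radius(graph, "verifyToken", boundaries={"admin_api"}))
```

Because the queue is FIFO, every level-1 caller is surfaced before any level-2 one, which is exactly the "all direct impact first" ordering described above.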
## The workflow
New repo:
/generate-index → builds the graph with all nodes and edges
Fix a bug:
1. /trace-impact → BFS from the bug, map the full blast radius
2. /investigate-module → read the parts you need to understand
3. Fix it → you already know what else needs updating
Add a feature:
1. /trace-impact on each touch point
2. /investigate-module for domains you don't understand
3. Implement
4. /generate-index if you added new nodes or edges
## Get it
Everything is in one repo — the three skills, the generator script, templates:
github.com/ithiria894/claude-code-best-practices
Built from research, source code analysis, and way too many hours of watching Claude confidently explain code it hadn't read.
ithiria894 | Sciencx (2026-03-31T23:46:39+00:00) The bottleneck for AI coding assistants isn’t intelligence — it’s navigation. Retrieved from https://www.scien.cx/2026/03/31/the-bottleneck-for-ai-coding-assistants-isnt-intelligence-its-navigation/