When Code Lies: A Recovery Guide for Technical Debt

Stop trusting comments that say 'this should work.' A methodical framework for uncovering why your code fails, tracing broken assumptions, and fixing technical debt without breaking everything else. From real sessions with Claude Code

Steven J Jobson

7/31/2025 · 4 min read

We've all been there. You inherit a project (or worse, return to your own code from six months ago), and nothing works the way the documentation says it should. The search function returns empty arrays. The data pipeline mysteriously produces nulls. And somewhere, buried in the code, is a comment that says `// This should work` followed by 50 lines of increasingly desperate workarounds.

Welcome to technical debt recovery—the art of figuring out what your code actually does versus what everyone thinks it does.

The Great Disconnect

Here's the thing about working with AI assistants on coding projects: they're brilliant at following patterns and implementing what you tell them the system does. But when those assumptions are wrong? Well, you end up with beautifully written code that solves the wrong problem.

I recently discovered this the hard way. My AI assistant and I built an entire search system based on the "fact" that session files were stored as simple markdown files. Turns out? They were actually stored in folders with two files each. Our perfectly crafted search queries were looking in all the wrong places.

That experience led to this framework—a systematic approach to recovering from technical debt that works whether you're debugging a web scraper, fixing a data pipeline, or untangling a legacy API.

Phase 1: Play Detective (Reconnaissance)

First rule of technical debt recovery: Trust nothing, verify everything.

Start by reading the symptoms like a detective reads a crime scene. When the symptom is "returns empty," ask yourself: empty because there's no data, or empty because you're looking in the wrong place?

Look for the telltale signs of assumption-driven development:

- Comments like "this should work" (narrator: it didn't)

- Mock data that's "temporarily" hardcoded (since 2019)

- Try-catch blocks that catch and silently ignore errors

Here's a real example I encountered:

```javascript
// This should return user sessions
function getSessions() {
  try {
    return database.query('SELECT * FROM sessions');
  } catch (e) {
    // Temporary fix until DB is set up
    return MOCK_SESSIONS;
  }
}
```

Spoiler alert: The database was set up. It just used a completely different schema than expected.
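
Before taking a comment like that at face value, it's worth a thirty-second look at what the database actually contains. Here's a minimal sketch, assuming a SQLite-style database and the same generic `database.query` helper as above; swap in the introspection queries for whatever engine you're actually running:

```javascript
// List the tables that actually exist, then inspect the one the code assumes.
// SQLite syntax shown for illustration; Postgres/MySQL have their own equivalents.
const tables = database.query("SELECT name FROM sqlite_master WHERE type = 'table'");
console.log(tables); // Is there even a table called `sessions`?

const columns = database.query('PRAGMA table_info(sessions)');
console.log(columns); // Do the column names match what the query expects?
```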

Phase 2: Follow the Breadcrumbs (Investigation)

Once you've identified the symptoms, it's time to trace the actual data flow. This is where AI assistants can be incredibly helpful—if you give them the right context.

Instead of asking "Why doesn't the search work?", try:

- "Show me the actual structure of a session file"

- "What format does this API actually return?"

- "List all the transformations this data goes through"

The key is to map the real architecture, not the assumed one. I like to draw two diagrams:

1. What we think happens: `File → Parse → Display`

2. What actually happens: `Folder → Find correct file → Parse → Transform → Cache → Display`

The gaps between these diagrams? That's your technical debt.
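
A quick way to find those gaps is to print the real structure instead of guessing at it. Here's a minimal Node.js sketch, assuming sessions live under a `sessions/` directory; the path is illustrative, not the project's actual layout:

```javascript
const fs = require('fs');
const path = require('path');

// Print what's actually on disk, one level deep, so the real layout can be
// compared against the assumed "File → Parse → Display" picture.
const root = 'sessions'; // illustrative path; point this at the real data
for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
  if (entry.isDirectory()) {
    const children = fs.readdirSync(path.join(root, entry.name));
    console.log(`${entry.name}/ -> ${children.join(', ')}`);
  } else {
    console.log(entry.name);
  }
}
```

Output like `abc123/ -> content.md, metadata.json` is the ground truth your second diagram should be drawn from.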

Phase 3: The Surgical Approach (Strategic Planning)

Here's where many developers go wrong: they try to fix everything at once. Resist this urge!

Ask yourself: What's the smallest change that unblocks progress? Can you add an adapter layer instead of rewriting the entire system? Can you fix the query instead of restructuring the database?

For our session search problem, instead of restructuring how sessions were stored (which would have broken other features), we simply updated our search to look for the right file within each folder:

```javascript
// Before: Looking for .md files
const results = search('*.md', searchTerm);

// After: Looking for content.md within session folders
const results = search('*/content.md', searchTerm);
```

One line changed. Hours of rewriting avoided.
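
When a one-line fix isn't enough, the adapter-layer idea mentioned above is the next-smallest step: hide the real storage format behind the interface the rest of the code already expects. A hedged sketch; the function and folder names are hypothetical, not from the actual project:

```javascript
const fs = require('fs');
const path = require('path');

// Hypothetical adapter: callers keep asking for "a session" as if it were one
// file, while the adapter knows each session is really a folder containing
// content.md and metadata.json.
function getSession(id) {
  const folder = path.join('sessions', id); // illustrative layout
  return {
    content: fs.readFileSync(path.join(folder, 'content.md'), 'utf8'),
    meta: JSON.parse(fs.readFileSync(path.join(folder, 'metadata.json'), 'utf8')),
  };
}
```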

Phase 4: Precision Strikes (Implementation)

When fixing technical debt, remove the bandaids before applying proper fixes. Those mock data fallbacks? Delete them. They're hiding real problems.
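
Applied to the earlier `getSessions` example, that might look like the sketch below; letting the error surface is one reasonable choice here, not the project's actual code:

```javascript
// Bandaid removed: no silent fallback to MOCK_SESSIONS.
// If the query fails, we want the real error, not plausible-looking fake data.
function getSessions() {
  return database.query('SELECT * FROM sessions');
}
```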

Fix one thing at a time and test each fix in isolation. Your commits should tell a story:

- "Remove mock data fallback from session search"

- "Update search pattern to match actual file structure"

- "Add validation for search results"

And please, for the love of future-you, comment the why, not the what:

// Bad comment:
// Search for files

// Good comment:
// Sessions are stored as folders with content.md and metadata.json
// We only search within content.md to avoid duplicate results

Phase 5: Leave a Map (Documentation)

This is crucial: document your discoveries. Not just what you fixed, but what you learned.

Create a simple "Myths vs Reality" section in your README:

Common Misconceptions:

- Myth: Sessions are stored as single .md files
  Reality: Sessions are folders containing content.md and metadata.json

- Myth: The API returns user objects
  Reality: The API returns user IDs that need a second call for details

The Honest Status Update

One of the best practices I've adopted is using clear status indicators:

- ✅ Verified: Tested with production data

- 📝 Implemented: Code written, needs testing

- ⚠️ Assumed: Should work in theory

- ❌ Known Broken: Here be dragons

This honesty helps both your future self and your AI assistant understand what can be trusted.

Red Flags to Watch For

As you work through technical debt, watch for these warning signs:

```javascript
// 🚩 The "temporary" permanent fix
data = data || FAKE_DATA; // TODO: Remove when API is ready (dated 2020)

// 🚩 The mysterious timeout
setTimeout(() => doSomething(), 3000); // No idea why but it works

// 🚩 The contradictory comment
// This can never be null
if (value === null) { handleNull(); }
```

Your Recovery Toolkit

Whether you're fixing a broken search function or debugging why your AI chatbot keeps hallucinating, the process is the same:

1. Observe the actual behavior

2. Question every assumption

3. Verify with real data

4. Fix the smallest possible thing

5. Document what you discovered

6. Test with production-like scenarios

Remember: technical debt isn't just bad code—it's the gap between what we think our system does and what it actually does. The faster you can close that gap, the faster you can build on solid ground.