I asked AI to test itself (it found a bug I missed)
I've been building software with AI coding assistants for a few months now.
Last night I added a Monte Carlo simulation tool to a project. The AI wrote the function, the schema, the docs. Everything looked right.
Then I tried something different.
Instead of eyeballing the diff, I asked the AI: "Import all the modules. Call the new functions. Prove this works."
Thirty seconds later, the AI reported 7 tools registered instead of 8.
The Monte Carlo handler was fully coded but never wired up. Dead code. It would have failed silently every time someone used it.
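For context, the bug looked something like this. This is a reconstructed sketch with invented names, not my actual code, but it captures the failure mode: a complete, correct-looking handler that nothing ever registers.

```python
# Hypothetical reconstruction of the bug (all names invented for illustration).
import random

def run_monte_carlo(trials: int = 10_000) -> float:
    """Estimate pi by sampling random points in the unit square."""
    hits = sum(
        1 for _ in range(trials)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * hits / trials

TOOL_REGISTRY = {
    "mean": lambda xs: sum(xs) / len(xs),
    # ...six more tools registered here...
    # MISSING: "monte_carlo": run_monte_carlo
    # The handler above compiles, reads well, and is never reachable.
}
```

A code review reads the handler and nods. Only importing the module and counting the registry shows 7 instead of 8.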
Here's what's interesting.
When I asked the AI to "review the code," it said "looks good." Pattern matching. No actual verification.
When I asked it to run the code, import errors and missing connections surfaced immediately.
But then I realized... this is the whole pattern.
AI writes confident-looking code. It compiles. It reads well. But the wiring between components? That's where it drops the ball.
So now I use this prompt before every commit:
"Before we commit, verify these changes work:
1. Import the new modules. Do they load without errors?
2. Call the main functions with test data. Do they return what you expect?
3. Check integration. Is the new code actually reachable from where it needs to be called?
Run these as scripts, not as a code review."
Copy that. Use it with Claude Code, Cursor, Copilot, whatever.
Three checks. 30 seconds. Catches real bugs.
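If you'd rather hand the AI a concrete script than a prose prompt, here's a minimal sketch of those three checks in Python. Every module, function, and registry name below is a placeholder; substitute your own.

```python
"""Pre-commit smoke test: execution, not opinion. All names are placeholders."""
import importlib

# 1. Import the new modules. Do they load without errors?
server = importlib.import_module("my_project.server")  # hypothetical module path

# 2. Call the main functions with test data. Do they return what you expect?
estimate = server.run_monte_carlo(trials=1_000)  # hypothetical new function
assert 0.0 < estimate < 4.0, f"unexpected estimate: {estimate}"  # sanity range for a pi estimate

# 3. Check integration. Is the new code actually reachable from where it needs to be called?
tools = set(server.TOOL_REGISTRY)  # hypothetical registry of wired-up handlers
assert "monte_carlo" in tools, f"monte_carlo never registered; found: {sorted(tools)}"

print(f"OK: {len(tools)} tools registered, monte_carlo is reachable")
```

The asserts are the point: printed output can still be eyeballed wrong, but a script that raises on a missing registration can't be ignored.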
I also learned something about how AI follows instructions. I had a rule in my system prompt: "Silently verify rules before responding."
The AI ignored it. Every time.
I changed it to: "STOP and show the specific rule violation."
Night and day.
Soft instructions get skipped. Concrete, explicit instructions get followed. Same principle for code verification. Same principle for prompts.
If you're using AI to write code, don't just review what it writes.
Ask it to prove it works.
P.S. This works because it forces execution, not opinion. I teach this exact approach to AI prompting inside Prompt Writing Studio — concrete instructions that get followed, not soft suggestions that get ignored.