I asked my AI to grade itself (the performance review I wasn't ready for)


Claude Code has a command most people don't know about.

Type /insights. That's it.

It reviews all your coding sessions from the last 7 days. Then it generates an HTML report. What you worked on. What mistakes you made. Where you wasted time.

I ran it last week. Here's what it told me:

"A single 6-hour session is far less effective than starting a new chat for each job."

I'd been running marathon coding sessions. Open Claude Code in the morning. Work until lunch. Keep going after lunch. Same window. Same context. Same conversation growing to thousands of tokens.

Turns out the model degrades as its context window fills up over those hours. It starts hallucinating solutions. Referencing functions that don't exist. Suggesting fixes for problems you solved three hours ago. The longer the session, the worse the output.

The fix was embarrassingly simple.

One window per job. Not one window for everything.

Now I run 4-5 Claude Code instances in Warp (my terminal app). Each window gets a name. Each window gets one task.

Window 1: Frontend bug

Window 2: API integration

Window 3: Writing newsletter email

Window 4: Debugging tests

I tab between them to check progress. Each agent stays focused because its context window is clean. No cross-contamination from earlier tasks.

But /insights didn't stop there.

It also reviewed my CLAUDE.md file (the project instructions Claude reads at the start of every session) and suggested three updates I'd missed. Rules that were outdated. Patterns I'd stopped following. A timezone convention I'd documented but wasn't enforcing.
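For anyone who hasn't set one up: CLAUDE.md is just a markdown file at the root of your project. A minimal, hypothetical example of the kind of rule /insights flagged — the conventions below are illustrative, not copied from my actual project:

```markdown
# CLAUDE.md

## Project conventions
- Timestamps: store in UTC; convert to local time only at the display layer.
- Tests: run the test suite before marking any task complete.
- Style: prefer small, single-purpose functions over long ones.
```

The point isn't the specific rules. It's that the file drifts out of date as your habits change, and /insights compares what it says against what you actually do.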

Then it spotted two automation opportunities. Repetitive workflows I was doing manually that could be saved as skills (reusable commands).
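Skills are also just markdown files (in Claude Code they live under `.claude/skills/`). Here's a sketch of what a repetitive workflow might look like saved as one — the name and steps are hypothetical, not one of the two it actually suggested:

```markdown
---
name: release-notes
description: Draft release notes from commits since the last git tag
---

1. List the commits since the most recent tag.
2. Group them into Features, Fixes, and Chores.
3. Write the notes in the same tone as previous CHANGELOG.md entries.
```

Once saved, the workflow becomes a reusable command instead of a prompt you retype every week.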

One command gave me:

- A workflow audit

- A context management strategy

- CLAUDE.md improvements

- Automation suggestions

All from data I'd already generated by using the tool normally.

The pattern here applies beyond Claude Code. Every AI tool you use is generating usage data. The question is whether you're reviewing it.

Most people optimise their prompts. Almost nobody optimises their sessions. The gap between "using AI" and "using AI well" isn't the model. It's the workflow around it.

Run /insights this week. You might not like what it says. But you'll work better after.

Want to learn the exact prompts and workflow frameworks I use to get more from AI tools? Check out my Prompt Writing Studio course.

Subscribe to Creator Leverage: Master AI. Build Systems. Grow Your Business