The 2 AM Problem
Part of the challenge of being a product manager who codes is that the best coding hours happen when everyone else is asleep. The day is a whirl of meetings, customer support, discovery calls, and operational work around our InControl Charger Management System. By the time I can focus on my personal projects, it's 2 AM on a Tuesday, and I desperately need someone to review my code.
Traditional solutions don't work at 2 AM:
- Senior developers are asleep
- Even if they're awake, I don't want to breach a work boundary by asking colleagues to review my personal portfolio
- Code review platforms require async waits
- Online communities are quiet
- My impostor syndrome is at peak volume
I needed a code mentor available 24/7 who could provide brutally honest feedback without judgment. Enter: AI agents.
The Experiment: Full-Stack Code Review
I've been using ChatGPT and Claude for coding help for years, but mostly for spot fixes—debug this error, explain this pattern, write this function. I'd never asked an AI to do what I really needed: a comprehensive code review with the kind of constructive criticism that helps you actually improve.
So I tried something different. I asked Claude Code's full-stack-mentor agent to review my portfolio site and not spare my feelings.
Here's the exact prompt I used:
Let's have the full-stack-mentor do a pass over this project rusl26. This is my personal portfolio and blog. It has a Swiss, minimalist design ethos. The purpose of the site and the design is to present my writings and thoughts, and help me establish thought leadership in the fields I work in. This task should be a general review of my code and a set of helpful lessons and recommendations. Do not spare my feelings in providing a design or software critique; I want most to learn how to be a world class software developer and train my "design eye".
What I Got Back: Two Documents and a Wake-Up Call
The agent delivered two comprehensive documents:
- COMPREHENSIVE_REVIEW.md - An 11-section review with an overall grade: B+ (83/100)
- lessons/building-world-class-portfolios.md - Educational deep-dives explaining the "why" behind every recommendation
The Brutal Honesty I Asked For
Critical Issues Identified:
- Homemade markdown parser (security vulnerability + missing features)
- Using <img> instead of Next.js <Image> (costing 70% extra bandwidth)
- XSS vulnerability in the contact form (no input sanitization)
- Accessibility failures (color contrast, focus indicators, heading hierarchy)
- Production console.logs leaking system info
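That last item has a standard fix in Next.js: the compiler can strip console calls from production builds. A minimal sketch (the exact option shape may vary by Next.js version; check your docs):

```javascript
// next.config.js (sketch): strip console.* from production output,
// keeping console.error so real failures still surface in logs.
/** @type {import('next').NextConfig} */
const nextConfig = {
  compiler: {
    // Removes console.log/info/warn calls at build time.
    removeConsole: { exclude: ['error'] },
  },
};

module.exports = nextConfig;
```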
What I Was Doing Right:
- Content quality (EVgo case study called "outstanding")
- Cohesive Swiss minimalist aesthetic
- Modern tech stack (Next.js 15, React 19, TypeScript)
- Clean component architecture
- Fast build times and good bundle sizes
The grade stung a bit—B+ when you think you're writing A code is humbling. But the agent was right. I had shipped with security holes and performance issues because I hadn't taken time for proper review.
The Fix-It Session
Here's where it got interesting. I asked the agent to implement all the fixes. What followed was a masterclass in systematic software improvement.
The Todo List Approach
The agent immediately created a detailed todo list with 30 specific items:
- Install dependencies
- Replace markdown parser
- Add category system to all posts
- Convert images to Next.js Image components
- Add input sanitization
- Fix accessibility issues
- Optimize font loading
- Add RSS feed
- Run builds and fix errors
Then it worked through them one at a time, marking each as complete before moving to the next. This alone was a lesson in professional development practice.
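To give a sense of how small each todo item was, here's a sketch of the RSS feed piece as a pure generator function. The `Post` shape, the `/writing/` route, and the channel title are my assumptions, not the agent's actual output:

```typescript
// Minimal RSS 2.0 builder (sketch; field names are assumptions).
type Post = { title: string; slug: string; date: string; summary: string };

// Escape the five XML special characters in text content.
const escapeXml = (s: string): string =>
  s.replace(/[<>&"']/g, (c) => {
    const map: Record<string, string> = {
      "<": "&lt;", ">": "&gt;", "&": "&amp;", '"': "&quot;", "'": "&apos;",
    };
    return map[c];
  });

function buildRss(siteUrl: string, posts: Post[]): string {
  const items = posts
    .map(
      (p) => `  <item>
    <title>${escapeXml(p.title)}</title>
    <link>${siteUrl}/writing/${p.slug}</link>
    <pubDate>${new Date(p.date).toUTCString()}</pubDate>
    <description>${escapeXml(p.summary)}</description>
  </item>`
    )
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>Portfolio &amp; Blog</title>
  <link>${siteUrl}</link>
${items}
</channel>
</rss>`;
}
```

In the App Router, a function like this would be called from a `route.ts` handler that returns the XML with a `Content-Type: application/rss+xml` header.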
Real-Time Problem Solving
When the build failed (multiple times), the agent:
- Read the error messages carefully
- Identified the root cause
- Fixed the issue
- Verified with another build
- Moved on only after success
This is exactly how senior developers work—systematic, patient, verification-focused.
The Learning Moments
Security: The Markdown Parser
My homemade markdown parser couldn't handle edge cases I hadn't even considered:
Before:
// Naive regex-based parser
content = content.replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>');
After:
import { marked } from 'marked';
import DOMPurify from 'isomorphic-dompurify';
const rawHtml = marked.parse(md, { async: false });
return DOMPurify.sanitize(rawHtml, { /* proper config */ });
Lesson: Don't reinvent security-critical wheels. The marked library handles tables, code blocks, nested lists, and edge cases I hadn't even thought of.
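To make the risk concrete, here's a hypothetical input that a regex-only parser passes straight through to the page (the payload below is invented for illustration):

```typescript
// A regex-only "parser" handles **bold** but leaves raw HTML untouched.
const naiveParse = (md: string): string =>
  md.replace(/\*\*(.*?)\*\*/g, "<strong>$1</strong>");

const untrusted = 'Hi **there** <img src=x onerror="alert(1)">';
const html = naiveParse(untrusted);

// The bold text is converted, but the onerror payload survives verbatim,
// so rendering this string as HTML executes attacker-controlled script.
console.log(html.includes("onerror")); // true
```

DOMPurify exists precisely to strip attributes like `onerror` before the string ever reaches the DOM.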
Performance: Image Optimization
I was serving raw S3 images:
Before:
<img src={project.cover} alt={project.title} />
After:
import Image from 'next/image';

<Image
  src={project.cover}
  alt={`${project.title} project showcase`}
  fill
  sizes="(max-width: 640px) 100vw, (max-width: 1024px) 50vw, 33vw"
  quality={85}
  priority={false}
/>
Impact: 70% bandwidth reduction through automatic WebP/AVIF conversion and responsive sizing.
Lesson: Framework features exist for a reason. Next.js Image is battle-tested by thousands of high-traffic sites.
Accessibility: The Details Matter
The agent caught issues I'd missed:
/* Before: No focus indicators */
a:focus { outline: none; }
/* After: Clear focus indicators for keyboard navigation */
a:focus-visible,
button:focus-visible {
outline: 2px solid var(--color-accent-orange);
outline-offset: 2px;
border-radius: 2px;
}
Lesson: Accessibility isn't just alt text. It's focus management, color contrast, heading hierarchy, and dozens of small details that compound into usability.
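The contrast point is more mechanical than it sounds: WCAG defines contrast as a ratio of relative luminances, which you can compute for any color pair. A sketch (helper names are mine; the formula follows WCAG 2.x):

```typescript
// WCAG 2.x relative luminance for an sRGB color with 0-255 channels.
function luminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    // Linearize the gamma-encoded channel per the WCAG definition.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors; AA body text requires >= 4.5:1.
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [luminance(...a), luminance(...b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21 (the maximum)
```

Running your palette's text/background pairs through a check like this is how tools flag the contrast failures the agent found.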
The Results: B+ to A- in One Session
After implementing the fixes:
- ✅ Security vulnerabilities closed
- ✅ Performance dramatically improved (~70% less image bandwidth)
- ✅ Accessibility compliance (WCAG AA)
- ✅ Build successful with zero errors
- ✅ 36 posts properly categorized
- ✅ RSS feed for subscribers
- ✅ Professional error handling
The agent estimated I went from 83/100 to 92/100 by implementing the top 12 fixes.
What Makes This Different from Stack Overflow
Traditional help sources give you:
- Stack Overflow: Answer to your specific question
- Documentation: How to use a feature
- Tutorials: Step-by-step for one task
- Code review services: Async feedback, often days later
The AI code mentor gave me:
- Comprehensive analysis across architecture, performance, security, design, and accessibility
- Educational context - the "why" behind each recommendation
- Prioritized action items ranked by impact vs effort
- Immediate implementation - it could make the changes and verify they worked
- Learning resources curated for my specific gaps
It's like pair programming with a senior developer who has infinite patience and never gets tired of explaining things.
The Honest Limitations
This isn't magic, and AI code mentors have clear limits:
What worked brilliantly:
- Identifying common security issues (XSS, injection)
- Finding performance anti-patterns
- Catching accessibility violations
- Suggesting better libraries/patterns
- Systematic implementation of fixes
What still needed human judgment:
- Design aesthetic decisions (I kept my vibrant orange!)
- Product priorities (which fixes to implement first)
- Business context (understanding my audience)
- Creative problem-solving for novel challenges
Where it struggled:
- Very new libraries/frameworks (training data lag)
- Highly domain-specific logic
- Subjective design choices
How to Get Similar Results
If you want to use AI as a code mentor, here's what worked for me:
1. Be Specific About Context
❌ "Review my code"
✅ "This is a portfolio site with a minimalist design.
The goal is thought leadership. Review for code quality,
performance, security, and design."
2. Ask for Brutal Honesty
"Do not spare my feelings. I want to learn how to be
a world-class developer."
This permission seems to unlock better feedback. The agent knows you want real improvement, not just validation.
3. Request Educational Content
"Explain WHY each recommendation matters, not just WHAT to change."
The educational document the agent created was as valuable as the code fixes.
4. Be Willing to Implement
Don't just collect feedback—act on it. I asked the agent to implement fixes, which meant:
- Learning by watching the process
- Seeing best practices in action
- Understanding why changes were made
5. Iterate on Failures
When builds failed, I didn't give up. The agent debugged, fixed, and tried again. This taught me the professional mindset of persistence.
The Unexpected Benefit: Design Eye Training
The review didn't just improve my code—it trained my "design eye." I now look at my own code differently:
Before: "It works, ship it."
After:
- Does this have proper error handling?
- Are there accessibility issues?
- Could this be a security risk?
- Is there a better-tested library for this?
- Am I creating technical debt?
The agent's systematic approach to code review has become my internal checklist.
The ROI Breakdown
- Time invested: ~3 hours
- Issues identified: 29 specific improvements
- Critical fixes implemented: 12
- Code quality improvement: B+ → A- (83/100 → 92/100)
- Learning value: immeasurable
For a solo developer working at 2 AM, this is extraordinary ROI.
Practical Applications Beyond Personal Projects
This approach works for:
- Startups: Get senior-level review without a senior-level salary
- Side projects: Validate architecture before investing weeks
- Learning: Understand WHY your code has issues
- Audits: Systematic review of legacy code
- Onboarding: Learn project standards and patterns
The Future of Coding with AI Mentors
This experience convinced me we're at an inflection point. AI agents aren't replacing developers—they're democratizing access to expertise.
What this means:
- Junior developers can get senior-level guidance 24/7
- Solo founders can build production-quality products
- Code review becomes continuous, not periodic
- Learning accelerates through immediate, contextual feedback
The bottleneck isn't access to knowledge anymore—it's willingness to ask for honest feedback and act on it.
Try It Yourself
If you want a similar code review:
- Choose your AI agent (Claude Code, GitHub Copilot, ChatGPT, etc.)
- Define your project context clearly
- Ask for brutally honest feedback
- Request both review and educational content
- Implement the top recommendations
- Learn from the process
The worst that happens? You get a list of improvements. The best? You level up your entire development practice.
Final Thoughts
On a Saturday, ahead of Game 7 of the World Series (Go Dodgers!), I asked an AI agent to review my portfolio. I expected a few suggestions. I got a comprehensive education in production-ready development practices.
The agent found security holes I'd missed, performance wins I'd overlooked, and accessibility gaps I didn't know existed. More importantly, it explained why each issue mattered and how to fix it properly.
This isn't about AI replacing developers. It's about making expertise accessible when you need it—whether that's 2 AM or 2 PM, whether you're a solo founder or part of a team, whether you're learning or leading.
Your code mentor is available right now. It won't judge you for asking basic questions. It won't get tired of explaining things. It won't be unavailable because it's sleeping.
Ask for honest feedback. Implement the recommendations. Learn from the process.
That B+ code you shipped? It can be A+ by tomorrow morning.