The AI Vampire Is Real, and I Have the Bite Marks
How a Chief AI Officer went from a 78% sleep score to 70% in eight weeks — and what my Whoop data says about a problem nobody wants to name.
TL;DR: Steve Yegge wrote about the AI Vampire in February. I read it, nodded, and went back to building. Two months later, my sleep data, my project structure, and my own CLAUDE.md file are the receipts. The AI Vampire isn’t a metaphor. It’s a measurable cognitive drain that scales with capability, and the people most at risk are the ones who are best at using AI.
The Whoop Doesn’t Lie
Here’s a chart I didn’t want to share.
From October through January, my sleep performance was trending upward: 74%, 75%, 77%, 78%. Four months of steady gains. I was kettlebell training, playing piano, reading physical books, and sleeping like a person who had his life in reasonable order.
Then February hit. Sleep performance dropped to 70%. March so far: 71%, and likely lower before month's end.
What changed in February? I got good at Cowork.
Not “started using” — got good at it. The kind of good where you have eight scheduled automations firing before breakfast. Where your WhatsApp groups get triaged at 7:55 AM, your Slack at 8:06, your work inbox at 8:09, your Google Chat at 8:15, and a consolidated briefing lands in your iMessage at 8:33. Where a meta-summary of all those summaries arrives at 8:37.
By the time I get to my computer, I’m already behind on what my own systems produced.
As my high school band teacher used to say, “practice doesn’t just make perfect, it makes permanent.” And what I’ve been practicing is a kind of cognitive overcommitment that compounds daily.
The CLAUDE.md Is the Crime Scene
Steve Yegge described the AI Vampire as the phenomenon where AI capability expansion doesn’t free you — it drains you. I read his essay and thought: yeah, but I’m managing it.
I wasn’t.
The evidence is in my own configuration file. My CLAUDE.md — the instruction set that governs every AI session I run — is 500+ lines long. It contains a plugin workflow with four directories and a rebuild pipeline. A scheduled task table with ten entries and their cron schedules. A cross-machine sync protocol between two computers. A section on “Autonomous Send Permissions” that pre-authorizes my AI to send emails, iMessages, and Slack messages while I sleep.
I built the factory that builds the factory. And the factory never stops.
There’s a section in CLAUDE.md titled “Session Continuity” that specifies six mandatory sections for every project’s SESSION.md file. Another requiring Mermaid architecture diagrams for any non-trivial project. Another on thinking effort configuration — when to use high, when to use medium, when to type “ultrathink” for a single complex turn.
This is not a configuration file. This is the floor plan of an obsession.
Jevons Was Right
In the WhatsApp groups I monitor every morning — yes, I have an AI that reads my WhatsApp groups every morning — someone named the phenomenon perfectly this week: Jevons Paradox applied to cognition.
William Jevons observed in 1865 that making coal more efficient didn’t reduce coal consumption. It increased it. The efficiency gains made coal useful for more things, and demand exploded.
The 1M context window didn't reduce my workload. It expanded my ambition. I now attempt things I would never have tried three months ago. The family assistant that fills out medical forms. The email triage system that classifies my inbox before I see it. The WhatsApp intelligence briefing whose output you're reading bits of right now. The scheduled task that checks my portfolio on the first of every month.
Each one is individually reasonable. Collectively, they’re a second job I invented for myself, a job whose output is more work to review, more decisions to make, more orchestration to maintain.
One member of my WhatsApp group described the 1M context window experience this way: “Dopamine and adrenaline running high... already found myself crashing during the week after long sessions.” He described needing to rebuild structural blockers — forced breaks — because the capability expansion is addictive. He called it being a moth flying toward a bigger flame.
Another member called it “the AI Vampire.” He’d read Yegge’s piece. The term is spreading.
And the irony is not lost on me that prior to my AI role at Logitech, I ran an innovation software group centered around the idea of building healthy work habits. Le sigh.
From Builder to Orchestrator in 90 Days
Here’s the part that’s harder to quantify than sleep scores.
Three months ago, I was a builder. I wrote code. I edited files. I debugged things. I made architecture decisions by reading source code and holding the system in my head.
Now I’m an orchestrator. I describe intent. I review output. I set constraints and definitions of done. I manage a fleet of AI sessions the way a senior executive manages a team — except the team never pushes back, never gets tired, and never tells me I’m asking for too much.
That last part is the problem.
A good junior employee will eventually say: “I can’t take on another project.” A good manager will notice the signs of overload. But Claude doesn’t get overloaded. It spins up another VM. It runs another session. It processes another briefing. And because it’s always ready, I’m always working.
The shift from builder to orchestrator happened in about 90 days. I went from writing Python to writing SKILL.md files. From debugging code to debugging prompts. From managing git branches to managing a plugin marketplace that distributes my skills across sessions.
Yegge used the metaphor of $/hr: you can’t control the numerator, but you control the denominator. My version: I control the denominator, but I’ve been voluntarily setting it to infinity.
The Comprehension Debt Spiral
Addy Osmani published a piece on “comprehension debt” this week that I shared in two of my WhatsApp groups. The core idea: you can build faster than you can understand what you’ve built.
I recognized myself immediately.
My Claude workspace has projects in five categories: work, personal projects, shared operational infrastructure, public repos, and miscellaneous one-offs. The shared directory alone contains skills, plugin configs, built plugin files, and a marketplace repository, all connected by a rebuild script that requires Bash 4+ because it uses associative arrays.
I built all of this in two months. I do not fully understand all of it. The SESSION.md files are my attempt to fight comprehension debt — a structured way to make sure at least some context survives between sessions. The Mermaid diagrams are my attempt to make architecture visible without reading every source file.
But these are rearguard actions. I’m fighting the debt, not preventing it. Every new skill I write, every new scheduled task I create, every new plugin I package — each one adds to the surface area I need to maintain. And maintenance isn’t the fun part. The fun part is building the next thing.
The slot machine analogy is real. You describe a system in a paragraph. The AI builds it. It works. The dopamine hits. You describe the next one. And the next one. And by the time you stop, it’s 3 AM and your Whoop is about to give you a 70%.
What I’m Actually Changing
I’m not going to tell you to “touch grass” or “set boundaries.” You’re an adult. You know that. Here’s what I’m actually doing, concretely, starting this week:
Capping scheduled tasks. I have ten. That’s too many. The morning cascade — WhatsApp at 7:55, Slack at 8:06, email at 8:09, Chat at 8:15, consolidated brief at 8:33, meta-summary at 8:37 — produces more information than I can act on before my first meeting. I’m consolidating to three: one WhatsApp briefing, one Logi brief, and one AI news digest. Everything else is on-demand.
Building a “not-now” list. Every new skill idea goes on a list instead of into a build session. If I still want it in a week, I build it. If I don’t, it wasn’t important, it was just dopamine-adjacent.
Measuring orchestration hours separately. I’m tracking time spent reviewing AI output, maintaining skills, and debugging automations as a separate category from “productive work.” My hypothesis: orchestration overhead is eating 30-40% of the time I think I’m saving.
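A minimal sketch of how that tally could work (the log entries and category names are illustrative, not pulled from any real time-tracking tool):

```python
from collections import defaultdict

# Hypothetical time-log entries: (category, hours).
# "orchestration" covers reviewing AI output, maintaining skills,
# and debugging automations; "productive" is work I would have
# counted as real output anyway.
log = [
    ("orchestration", 1.5),   # reviewing morning briefings
    ("orchestration", 0.75),  # fixing a broken scheduled task
    ("productive", 3.0),      # actual decisions and writing
    ("productive", 2.0),      # meetings that needed me
]

totals = defaultdict(float)
for category, hours in log:
    totals[category] += hours

# Fraction of tracked time spent on orchestration overhead.
overhead = totals["orchestration"] / sum(totals.values())
print(f"Orchestration overhead: {overhead:.0%}")
```

Even this toy log lands at roughly 31% overhead, which is why the 30-40% hypothesis doesn't feel far-fetched.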
Treating my own Whoop data as a circuit breaker. If sleep performance drops below 72% for two consecutive weeks, I shut down evening sessions. No exceptions. The AI will still be there tomorrow.
Reintroducing “dumb” hours. Piano practice. Ninjutsu. A run without AirPods. Time where the input is physical and the feedback loop doesn’t involve a token count.
My Uncomfortable Truth
The AI Vampire is most dangerous to the people it shouldn’t be — the power users, the early adopters, the ones building the future of work while quietly destroying their own capacity to do it.
I’ve led teams through IPOs. I’ve completed triathlons and other ultra events. I have four kids! I know what sustainable effort looks like. And I still got bitten.
The irony is that I have a voice guide — literally a SKILL.md file — that tells AI systems how to write like me, which is especially useful for drafting inconsequential emails. One of my signature phrases is: “Decide, delegate, disappear.” The “disappear” part is supposed to mean getting out of the way. Instead, I disappeared into the work.
Yegge is right that the new workday should be three to four hours. I’d refine it: three to four hours of orchestration. The kind of deep cognitive work where you’re setting intent, reviewing output, making judgment calls, and holding the whole system in your head. That’s the expensive stuff. That’s what drains you.
The rest of the day should be for the things AI can’t compound: relationships, physical training, creative work that has no prompt, and the kind of thinking that only happens when the screen is off.
The 1M context window is extraordinary. The 1M context window is also a trap, because it makes you feel like you can hold everything too.
You can’t. The Whoop doesn’t lie.
Eric Porres is Chief AI Officer at Logitech and writes Beyond Reason, a newsletter about AI for practitioners. He has eight scheduled automations, four kids, a 71% sleep score, and a dog named Coco who doesn’t care about any of this.