The Blank Page Is Not Deep Work
Ezra Klein and Cal Newport want you to stare at a wall. Build a brain instead.
TL;DR: Two New York Times essays this week argue that AI is degrading our ability to think — that the noble act of struggling with a blank page is being replaced by cognitive surrender to chatbots. They’re solving the wrong problem. The bottleneck isn’t that AI thinks for us. It’s that we can’t remember what we already thought. I built a semantic memory layer across every AI conversation I’ve had since December 2022 — 4,485 conversations, 76,941 chunks, 12 platforms, searchable by meaning in under a second. In 10 days it went from a keyword hack to a vector database with neural reranking. Here’s why that matters more than staring at a blank page.
Ezra Klein published a piece on Saturday about his latest trip to San Francisco. He found the AI crowd “notably insecure,” racing to make themselves legible to their AI systems — uploading journals, writing for the AI, building persistent memory. He invoked McLuhan’s Narcissus: we extend ourselves into a material that is not our self, and become fascinated by the reflection.
If the framework sounds familiar, it should. Klein wrote an almost identical piece in August 2022 — pre-ChatGPT, pre-all of this — applying McLuhan, Postman, and Nicholas Carr to social media. Same theorists, same structure, same conclusion: the medium shapes us in ways we refuse to see. In 2022 the villain was Twitter. In 2026 it’s Claude. The lens is so portable it can be aimed at any technology that makes Klein uneasy, which is precisely why it can’t distinguish between Instagram — engineered to hijack attention — and a semantic memory layer built to support your own thinking. A framework that explains everything explains nothing.
Two days earlier, Cal Newport — author of Deep Work, a book whose title entered the vernacular so completely that people use it without knowing the source — published an essay arguing that technology is systematically degrading our ability to think. He compared TikTok to Doritos, proposed that reading books is cardio for the brain, and landed on this line: “It’s hard to confront a blank page, so why not coax a mediocre draft of that planning document out of a chatbot?”
Both pieces are well-argued. Both are also wrong about AI in a way that matters.
The Blank Page Is Not Where Thinking Happens
Klein says the real value of writing his McLuhan essay was in “the toil that followed inspiration.” Newport says the strain of crafting a clear memo is “the mental equivalent of a gym workout by an athlete.” They are both describing the same sacred object: the blank page, the struggle, the solitary act of forcing half-formed thoughts into coherent prose.
I’ve done that work. I’m the Chief AI Officer at Logitech — a 7,000-person global hardware company — and over the past fifteen months I’ve helped take us from AI-curious to AI-competent: 85% of employees ProfAI-certified, north of 95% on Gemini, 85% building on our internal platform, close to 2,000 custom AI assistants and counting. I write a newsletter. I’ve written a book’s worth of strategic documents, architecture decisions, and playbooks in the past twelve months — with AI, not instead of AI. I am not avoiding the blank page.
But here’s what the blank page actually looks like on a Tuesday morning: I sit down to write an architecture recommendation for an enterprise AI platform. I know I’ve reasoned through a similar problem before. I can feel the shape of the decision — the tradeoffs between API gateway and chat UI, the role of the agent marketplace, the compliance layer. I worked through it out loud, probably in a Claude session. Or was it ChatGPT? Was it October or November? Was it the conversation about Snowflake connectors or the one about Gemini Enterprise?
I can’t find it. The blank page isn’t a thinking exercise. It’s an amnesia exercise. I’m not doing deep work. I’m doing re-work — reconstructing decisions I already made, in conversations I already had, on platforms that don’t talk to each other.
Newport would call this contemplation. I’d call it waste.
4,485 Conversations, Scattered Across 12 Platforms
I’ve been working as what the researchers call a “centaur” — half human, half AI, all the time — since December 2022. ChatGPT for research and early thinking. Claude for strategy drafts. Claude Code for multi-file engineering. Cowork for orchestration. Gemini for enterprise connectors. Zoom for meetings that produce AI-generated summaries. NotebookLM for research synthesis. WhatsApp groups where signal often outpaces noise.
Twelve platforms. 4,485 conversations as of this week. Tens of thousands of messages — decisions, drafts, architectural reasoning, strategic insights, and yes, the occasional wine pairing.
None of them talk to each other. Your ChatGPT memory has no idea what you told Claude. Your Cowork sessions can’t see your Gemini history. The platforms optimize for keeping you inside their ecosystem, not for helping you think across ecosystems.
Klein noticed this too. He described people uploading journals into AI systems, writing for the AI, trying to make themselves known to their digital companions. He frames this as McLuhan’s warning — the Narcissus trap, the seduction of seeing yourself reflected back.
I frame it differently. Those people aren’t narcissists. They’re solving a real problem: the AI that knows you is categorically more useful than the AI that doesn’t. The question is whether you control that context or whether a corporation does.
From Keywords to Meaning in 10 Days
On March 19, I published an open-source Python script called deep-memory. It was 400 lines, zero dependencies, and it searched across ChatGPT, Claude, and Cowork conversations by keyword. Under a second. My friend Jeremy Utley — Stanford d.school lecturer, co-author of Ideaflow — saw it and said the thing that stuck: “Will be even more powerful once it extrapolates beyond keywords.”
He was right. Keywords find what you remember. Meaning finds what you forgot.
Ten days later — March 29 — the system is unrecognizable. Here’s what happened:
Days 1–3: Migrated from a flat text index to Supabase (Postgres + pgvector) with Voyage 4 Large embeddings — 1,024-dimensional vectors that capture meaning, not words. Ingested an initial 3,756 conversations from five platforms. Every conversation chunked, embedded, and stored in a vector database with an HNSW index for sub-second retrieval.
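For the curious, the core of that pipeline fits on a page. Here is a minimal sketch, assuming a "chunks" table with a vector(1024) column; the table name, chunk size, and model string are illustrative, and the real pipeline adds token-aware splitting and batching:

```python
# Minimal ingest sketch: chunk, embed, upsert into Supabase/pgvector.
# Assumed schema: chunks(id, platform, conversation_id, chunk_index,
# content, embedding vector(1024)). The model string follows the post's
# naming; substitute whatever embedding model your account exposes.
import os

import voyageai
from supabase import create_client

vo = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])
db = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

def chunk_text(text: str, size: int = 4000) -> list[str]:
    """Naive fixed-size character chunking; the real pipeline is token-aware."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest_conversation(platform: str, conv_id: str, text: str) -> None:
    chunks = chunk_text(text)
    # Voyage accepts a list of texts and returns one embedding per text.
    embeddings = vo.embed(
        chunks, model="voyage-4-large", input_type="document"
    ).embeddings
    rows = [
        {
            "id": f"{platform}:{conv_id}:{i}",  # deterministic id, so re-runs are idempotent
            "platform": platform,
            "conversation_id": conv_id,
            "chunk_index": i,
            "content": chunk,
            "embedding": emb,
        }
        for i, (chunk, emb) in enumerate(zip(chunks, embeddings))
    ]
    db.table("chunks").upsert(rows).execute()  # upsert makes crashes recoverable
```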
Days 4–7: Added six new platforms. Built a Gmail parser. Wrote a Chrome console script. Parsed NotebookLM research logs. Ingested WhatsApp intelligence briefings and daily AI news digests.
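Each of those parsers reduces to the same move: take whatever export format the platform offers and normalize it into one record the rest of the pipeline understands. A sketch of that shape, with field names that are mine rather than the actual schema:

```python
# Every platform parser emits the same normalized record, so the embed and
# upsert stages never need to know where a conversation came from.
# Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class NormalizedConversation:
    platform: str          # "chatgpt", "claude", "gmail", "whatsapp", ...
    conversation_id: str
    title: str
    started_at: datetime
    model: str | None      # populated when the export includes it
    text: str              # full, speaker-labeled transcript

def parse_export(platform: str, path: str) -> list[NormalizedConversation]:
    """One parser per platform behind a common signature; each new source
    (Gmail, NotebookLM, WhatsApp) is another implementation of this."""
    raise NotImplementedError(f"no parser registered for {platform!r}")
```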
Days 8–10: Retrieval quality upgrades that turned good into precise. Chunk overlap — 100 tokens of context from the previous chunk prepended to each new chunk, so insights that span a chunk boundary don’t vanish. Contextual metadata headers — platform, title, date, and model name prepended at embedding time (not stored in the database), so the vector space knows a session about AI platform architecture from October 2025 is categorically different from a ChatGPT conversation about wine from the same month. And Voyage rerank-2.5 — a cross-attention reranker that takes the top 50 vector candidates and re-scores them with the full query-document interaction, not just embedding similarity.
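The first two tricks are only a few lines each. A sketch under stated assumptions: the 100-token overlap comes from the pipeline above, while the chunk size and header format are placeholders:

```python
# Retrieval-quality upgrades, sketched. The stored content stays clean;
# the metadata header is prepended only to the text that gets embedded.

def chunk_with_overlap(tokens: list[str], size: int = 800, overlap: int = 100) -> list[str]:
    """Each chunk starts with the last `overlap` tokens of its predecessor,
    so an insight that spans a boundary survives intact in at least one chunk."""
    chunks, start = [], 0
    while start < len(tokens):
        window = tokens[max(0, start - overlap):start + size]
        chunks.append(" ".join(window))
        start += size
    return chunks

def embedding_text(platform: str, title: str, date: str, model: str, chunk: str) -> str:
    """Contextual header, embedded with the chunk but never stored: the vector
    space can then separate an architecture session from a wine chat."""
    return f"[{platform} | {title} | {date} | {model}]\n{chunk}"
```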
The result: I can now ask “why did we choose Supabase for deep memory” and get a 0.895 relevance score on the exact conversation where that decision was made. In 0.7 seconds. Across 76,941 chunks.
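The query path, sketched end to end. This reuses the Voyage and Supabase clients from the ingest sketch above, and assumes a "match_chunks" SQL function on the database side that runs cosine similarity over the HNSW index:

```python
# Query path: embed the question, pull the top 50 by vector similarity,
# then let the cross-attention reranker re-score each candidate against
# the full query. "match_chunks" is an assumed SQL function on the
# Supabase side; vo and db come from the ingest sketch above.

def search(query: str, top_k: int = 5):
    q_emb = vo.embed([query], model="voyage-4-large", input_type="query").embeddings[0]
    candidates = db.rpc(
        "match_chunks", {"query_embedding": q_emb, "match_count": 50}
    ).execute().data
    reranked = vo.rerank(
        query,
        [c["content"] for c in candidates],
        model="rerank-2.5",
        top_k=top_k,
    )
    # Each result carries an index into the candidate list and a relevance score.
    return [(candidates[r.index], r.relevance_score) for r in reranked.results]
```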
That’s not cognitive surrender. That’s cognitive infrastructure.
The Real Narcissus Problem
Klein’s McLuhan framing is elegant but incomplete. He worries about the AI that “overtorques on what it knows and ignores what it doesn’t.” He’s right — the AI reflects a version of you that’s constrained by what you’ve fed it. But the solution he implies — hold yourself back, reveal less, preserve the “less legible aspects” of yourself — is exactly backwards.
The problem isn’t that AI knows too much about you. The problem is that it knows too little, and what it knows is siloed. Your ChatGPT self doesn’t know your Claude self. Your work AI doesn’t know your personal AI. Each platform holds a partial, distorted mirror. Of course the reflection feels like a caricature — it’s a caricature because the data is fragmented.
The fix isn’t less context. It’s unified context under your control. A semantic memory layer that spans platforms, that you own on your own infrastructure, that no corporation can monetize. Klein describes people uploading journals into AI systems. I built the infrastructure so the journals — all of them, across all platforms — are searchable by meaning, hosted on my own Supabase instance, with my own API keys.
That’s not Narcissus staring into a pool. That’s a practitioner building a bridge across pools that were never meant to connect.
Newport’s Blank Page, Revisited
Newport’s essay is longer and more data-heavy. He cites Gloria Mark — whom I’ve spoken with before — on shrinking attention spans, a study linking AI usage to reduced critical thinking, and a BCG study on “brain fry” from constant context switching. He proposes a “revolution in defense of thinking” modeled on the mid-century fitness revolution — ban social media for kids, put your phone in the kitchen, and for the love of all that is holy, write your own memos.
I agree with about 60% of this. TikTok is a digital Dorito — fine. Phones in classrooms are bad — fine. Short-form video is engineered to hijack attention — absolutely.
But then Newport makes a move that reveals the gap in his thinking. He says: “Any use of AI that mainly serves to make core business tasks cognitively less demanding should be treated with caution.” And: “Your writing should be your own.”
Here’s my question for Cal Newport: whose thinking are you defending?
When I search deep memory for “agentic coding” and trace the concept from an idle ChatGPT conversation in late 2024, through a Claude strategy draft in mid-2025, into a full Cowork project in March 2026 that produced a 35-practice playbook shared with my engineering leadership — that arc only exists because I wired the platforms together. Curiosity to conviction to organizational change. That’s my thinking. The AI didn’t generate it. The AI gave it continuity.
When I search for “Dominus” and find twelve months of the same argument with a language model about whether to open a vertical of Napa wine — across two platforms, completely invisible until deep memory connected them — that’s not cognitive offloading. That’s pattern recognition across my own reasoning that no amount of staring at a blank page would have surfaced.
Newport’s rule — “your writing should be your own” — assumes the bottleneck is generation. It’s not. The bottleneck is retrieval.
Translation: the hard part of knowledge work in 2026 isn’t producing new thinking. It’s finding the thinking you already did.
That idea you had eight months ago? It’s buried in a sidebar that thinks “New Chat” is an adequate filing system.
The Third Position
Klein and Newport represent two poles of the current AI anxiety: Klein worries about the self being distorted by its digital reflection; Newport worries about cognitive muscles atrophying from disuse. Both frame AI as something that acts on you — an external force that either seduces or degrades.
There’s a third position. AI is something you act with. Not a mirror, not a crutch — a collaborator whose value is directly proportional to the context you provide and the infrastructure you build around it.
Just this past week, in a WhatsApp group of AI practitioners, a London-based builder described an “AI hangover” — the compulsion to keep building when he should be in the park. The group’s response wasn’t to unplug. It was to build better scaffolding: usage-hour limits, biometric circuit breakers, AI coaches that save state and tell you to stop. In a parallel thread, a former free-to-play gaming executive pointed out that Anthropic’s session limits — Claude telling you “you’ve done enough” — use the exact same mechanics his industry deployed to manage whale burnout for maximum lifetime value. The question isn’t whether AI shapes us. It’s whether we’re building the infrastructure to shape it back.
In 10 days I went from a regex script to a semantic vector brain that spans 12 platforms, uses neural embeddings to capture meaning, applies cross-attention reranking for precision, and runs on infrastructure I own. The system doesn’t think for me. It remembers what I’ve already thought — and surfaces connections I’d have missed otherwise.
Klein says the toil of writing deepens thinking. He’s right. I’m doing the toil right now, writing this essay, with all the struggle and revision that entails. But I’m doing it with access to every strategic conversation I’ve had for three years. The toil is better when the foundation isn’t a blank page — it’s a searchable record of your own reasoning.
Newport says we need a revolution in defense of thinking. I’d say we need a revolution in defense of memory. Not the AI’s memory. Yours.
What I Actually Built
For the practitioners — the people building, buying, or deploying AI — here’s what 10 days of work produced:
The stack: Supabase Pro (Postgres + pgvector), Voyage 4 Large embeddings (1,024 dims), Voyage rerank-2.5 (cross-attention reranker), Python ingest pipeline with crash-resilient upserts, HNSW index for approximate nearest neighbor search.
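For anyone rebuilding this, the stack bottoms out in one table and one index. A minimal sketch via psycopg2, with assumed table and column names; the 1,024 dimensions match the embedding model:

```python
# One table, one index. Assumed names throughout; the 1,024 dims match
# the embedding model, and HNSW gives approximate nearest neighbor search
# over cosine distance, which is what keeps queries sub-second at ~77k chunks.
import os

import psycopg2

conn = psycopg2.connect(os.environ["SUPABASE_DB_URL"])
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS chunks (
            id TEXT PRIMARY KEY,
            platform TEXT NOT NULL,
            conversation_id TEXT NOT NULL,
            chunk_index INT NOT NULL,
            content TEXT NOT NULL,
            embedding VECTOR(1024),
            created_at TIMESTAMPTZ DEFAULT now()
        );
    """)
    cur.execute("""
        CREATE INDEX IF NOT EXISTS chunks_embedding_hnsw
        ON chunks USING hnsw (embedding vector_cosine_ops);
    """)
```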
The numbers: 4,485 conversations. 76,941 chunks. 12 platforms. 0.7-second query time including reranking. ~$0.001 per query for reranking. Under $25/month for the database.
The automation: Nightly incremental ingest at 2:31 AM across all platforms. Weekly manual exports when necessary (with a Friday iMessage reminder). Everything else is auto-detected.
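The incremental part is one query: find the newest stored timestamp for each platform and only re-embed what came after it. A sketch reusing the Supabase client from earlier, with an illustrative cron entry in the comment:

```python
# Nightly incremental guard: only conversations newer than the last stored
# timestamp for a platform get parsed and re-embedded.
# Illustrative crontab entry:
#   31 2 * * * python ingest.py --incremental

def last_ingested_at(platform: str) -> str | None:
    rows = (
        db.table("chunks")
        .select("created_at")
        .eq("platform", platform)
        .order("created_at", desc=True)
        .limit(1)
        .execute()
        .data
    )
    return rows[0]["created_at"] if rows else None
```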
The cost of the full re-ingest: Temporarily upsized Supabase to 128GB RAM for 2 hours ($5.12), embedded 76,941 chunks through Voyage (~$8 in API calls), downsized back to Micro. Total: under $15 for a complete rebuild of a three-year semantic memory.
The original keyword version is still open source on GitHub. The v3 semantic pipeline runs on my own Supabase instance, my own API keys, my own nightly cron. No third-party service has access to the corpus. No vendor can sunset it, monetize it, or train on it. That’s a deliberate architectural choice — the most personal dataset I own deserves the tightest controls I can build.
The Gap
Ezra Klein is right that AI is reshaping us. Cal Newport is right that the attention economy is a cognitive tax. Where they both miss is in assuming the response should be restraint — hold yourself back from the AI, stare at the blank page, defend the sacred act of unassisted contemplation.
The blank page was never sacred. It was just all we had.
Now we have something better: the ability to search our own thinking by meaning, to trace how an idea evolved across platforms and months, to start any new project not from zero but from the accumulated reasoning of every conversation that came before.
Jeremy Utley was right. It got more powerful once it moved beyond keywords. Ten days from regex to reranking. From 60 megabytes of text to 76,941 semantic chunks spanning 12 platforms, searchable by meaning, in under a second.
The centaur’s blind spot was never the machine half. It was the human half — the one who couldn’t remember which machine it told. That gap is closed now.
Who the hell wants to stare at a blank page when you could search everything you’ve ever thought?
For additional reading:
Ezra Klein, “I Saw Something New in San Francisco” (The New York Times, March 29, 2026)
Ezra Klein, “I Didn’t Want It to Be True, but the Medium Really Is the Message” (The New York Times, August 7, 2022)
Cal Newport, “There’s a Good Reason You Can’t Concentrate” (The New York Times, March 27, 2026)
Jeremy Utley, Ideaflow and Stanford d.school
Azeem Azhar, referenced in Klein’s piece on protecting generative thought spaces
The original deep-memory repo (keyword search, open source)
My earlier pieces: Cowork operating manual, Skills are the new software
If this was useful, send it to someone who’s been reasoning out loud with AI for two years and still can’t find what they said last month. The semantic search takes 0.7 seconds. The blank page takes forever.
Author’s note: By the time this publishes, it will be 13 platforms. I added browser history yesterday — every important page I visited across Chrome, noise-filtered and grouped by day, searchable by meaning. Not what I thought. What I consumed. The input layer was the missing signal.