DL 426
The Tell-Tale Signs
Published: March 22, 2026 • 📧 Newsletter
A useful question to carry into this week: Is this doing what I think it's doing?
Not paranoia. Just curiosity. Because several things that looked like one thing this week turned out to be something else entirely.
Subscribe here • Support on Ko-fi
📚 Recent Work
Doug Belshaw's newsletter this week mentioned he'd built a DOUG.md. A portable file that tells AI tools who he is so he doesn't have to re-explain himself every session. I went down a rabbit hole.
I built my own. IAN.md — what it is, what's in it, and a template you can steal — is live on the blog. The short version: every AI conversation starts with a "context tax," the work of re-introducing yourself from scratch. A simple text file fixes that. Writing it turned out to matter more than I expected.
Thanks to Doug for showing us how.
The digital garden is where the deeper threads from each issue live. Follow any concept as far as you want.
🔖 This Week
- An AI-generated persona fooled millions and funneled them to adult content
- Google quietly killed a medical advice feature it said had "nothing to do with safety"
- Tech workers are competing on leaderboards for who uses the most AI
- Intelligence and consciousness aren't the same thing, and that distinction matters
🪖 The Soldier Who Wasn't There
"Jessica Foster" had a million followers. She posted selfies on warships, showed up at Trump events, appeared with world leaders. Thousands of commenters thanked her for her service.
She was entirely AI-generated. The Washington Post reported the account was a persona built to gain trust and funnel followers to a paid adult content platform. All before anyone noticed she wasn't real.
This isn't about politics. It's about the fact that AI image generation is now good enough to deceive at scale, for months, across multiple platforms at once.
What to do with this: Reverse image search profile photos that feel off — TinEye or Google Images' built-in search. Check hands, ears, backgrounds. Look for any footprint outside the account's own feed. The account is gone. The playbook isn't.
More in the garden: Media Literacy, Disinformation
💊 Google's Quiet Retreat
Google launched a feature called "What People Suggest." AI-organized summaries of crowdsourced health advice from strangers worldwide. They called it "the potential of AI to transform health outcomes."
They've since quietly killed it. The official reason is "broader simplification." Nothing to do with safety, they said.
This comes after a Guardian investigation in January found Google's AI health summaries, seen by 2 billion people a month, were already spreading false and misleading medical information.
We've seen this playbook before. Deploy. Discover it's a problem. Exit quietly. No announcement, no explanation. If you're using Google for health questions, that's the context.
NIH and Mayo Clinic are still more reliable than any AI summary at the top of search results.
🏆 The Token Leaderboard
At OpenAI, one engineer processed 210 billion tokens in a single week — roughly the text of English Wikipedia 33 times over. At Anthropic, a user racked up a $150,000 bill in one month on AI coding tools.
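For the curious, the "33 Wikipedias" comparison checks out as a back-of-the-envelope estimate. The word count and tokens-per-word ratio below are my assumptions, not figures from the reporting:

```python
# Rough sanity check of "210 billion tokens ≈ 33 English Wikipedias."
# Assumptions (mine): English Wikipedia's article text is about
# 4.5 billion words, and LLM tokenizers average ~1.4 tokens per
# English word.
WIKIPEDIA_WORDS = 4.5e9
TOKENS_PER_WORD = 1.4
wikipedia_tokens = WIKIPEDIA_WORDS * TOKENS_PER_WORD  # ~6.3e9 tokens

weekly_tokens = 210e9
wikipedias = weekly_tokens / wikipedia_tokens
print(f"~{wikipedias:.0f} Wikipedias in one week")
```

Under those assumptions, the ratio lands right around 33 — the comparison is fair, not hype.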
This is tokenmaxxing. Using as much AI as possible as a status signal. Managers at Meta and Shopify are factoring AI usage into performance reviews. Dinner conversations have shifted from "What are you building?" to "How many agents do you have running?"
One engineer put it plainly: "It's becoming a career risk to not use AI at an accelerated pace, regardless of output quality."
Regardless of output quality. The leaderboards don't measure that. Which is the whole problem, and something to watch for wherever AI adoption is being tracked in your own workplace or school.
🧠 Intelligence ≠ Consciousness
In his piece for Noema, neuroscientist Anil Seth makes a distinction worth holding onto: intelligence is about doing; consciousness is about being.
AI systems can do remarkable things. Whether there is anything it is like to be one is a genuinely open question. We tend to conflate the two, and that conflation leads us to trust AI in ways we probably shouldn't.
His line: "If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves."
The right posture isn't distrust. It's healthy, curious skepticism. Use it, push back, verify. Not because it's malicious. Because it isn't what it looks like.
🤔 Consider
Believe nothing you hear, and only one half that you see.
— Edgar Allan Poe
Four stories. Four things that weren't what they looked like. Worth asking that question a little earlier.
⚡ This Week
- Test your eye. Which Face Is Real? is a research tool for spotting AI-generated faces. Harder than you'd think.
- Scroll past the AI health summary. Next time you Google a symptom, click through to an actual source. Notice the difference.
- If your workplace is measuring AI usage, ask what they're measuring. Token count and output quality are not the same metric.
- Before sharing something compelling: 30 seconds. Reverse image search. Quick Google. Most misinformation spreads because sharing is frictionless and checking isn't.
🔗 Navigation
Previous: DL 425 • Next: DL 427 • Archive: 📧 Newsletter
🌱 Connected Concepts
- Media Literacy — Verifying what you see online is now a baseline skill, not an advanced one.
- AI Literacy — The gap between what AI appears to do and what it actually does is where most problems live.
- Disinformation — AI-generated personas are the newest vector. The technology is cheap; the playbook is proven.
- Surveillance Capitalism — Google's medical feature fits the pattern: extract value, optimize engagement, exit quietly.
- Critical Thinking — The Poe principle runs through all four stories.
- Stochastic Parrots — The argument that LLMs are sophisticated pattern-matchers without understanding.