DL 426

The Tell-Tale Signs

Published: March 22, 2026 • 📧 Newsletter

A useful question to carry into this week: Is this doing what I think it's doing?

Not paranoia. Just curiosity. Because several things that looked like one thing this week turned out to be something else entirely.



📚 Recent Work

Doug Belshaw's newsletter this week mentioned he'd built a DOUG.md. A portable file that tells AI tools who he is so he doesn't have to re-explain himself every session. I went down a rabbit hole.

I built my own. The write-up, IAN.md (what it is, what's in it, and a template you can steal), is live on the blog. The short version: every AI conversation starts with a "context tax," the work of re-introducing yourself from scratch. A simple text file fixes that. Writing it turned out to matter more than I expected.
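To make the idea concrete, here is an illustrative skeleton of what such a file might contain. This is my sketch of the general pattern, not the actual contents of IAN.md or DOUG.md; the real template is in the linked blog post.

```markdown
<!-- IAN.md-style portable context file (illustrative sketch only) -->

## Who I am
One or two sentences: role, background, what I care about.

## How I work
Tools, formats, and conventions I expect (plain English, markdown, no jargon).

## How to respond
Preferred tone and length; what to skip (filler, pep talks, restating my question).

## Current projects
Short bullets the tool can reference so I never re-explain them from scratch.
```

Paste it at the start of a session, or attach it where the tool supports uploaded context, and the "context tax" drops to near zero.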

Thanks to Doug for showing us how.

The digital garden is where the deeper threads from each issue live. Follow any concept as far as you want.


🔖 This Week


🪖 The Soldier Who Wasn't There

"Jessica Foster" had a million followers. She posted selfies on warships, showed up at Trump events, appeared with world leaders. Thousands of commenters thanked her for her service.

She was entirely AI-generated. The Washington Post reported the account was a persona built to gain trust and funnel followers to a paid adult content platform. All before anyone noticed she wasn't real.

This isn't about politics. It's about the fact that AI image generation is now good enough to deceive at scale, for months, across multiple platforms at once.

What to do with this: Reverse image search profile photos that feel off — TinEye or Google Images' built-in search. Check hands, ears, backgrounds. Look for any footprint outside the account's own feed. The account is gone. The playbook isn't.

More in the garden: Media Literacy, Disinformation


💊 Google's Quiet Retreat

Google launched a feature called "What People Suggest." AI-organized summaries of crowdsourced health advice from strangers worldwide. They called it "the potential of AI to transform health outcomes."

They've since quietly killed it. The official reason is "broader simplification." Nothing to do with safety, they said.

This comes after a Guardian investigation in January found Google's AI health summaries, seen by 2 billion people a month, were already spreading false and misleading medical information.

We've seen this playbook before. Deploy. Discover it's a problem. Exit quietly. No announcement, no explanation. If you're using Google for health questions, that's the context.

NIH and Mayo Clinic are still more reliable than any AI summary at the top of search results.


🏆 The Token Leaderboard

At OpenAI, one engineer processed 210 billion tokens in a single week. This is the text equivalent of Wikipedia 33 times over. At Anthropic, a user racked up a $150,000 bill in one month on AI coding tools.
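As a rough sanity check on the Wikipedia comparison, the arithmetic works out. The word count and tokens-per-word ratio below are my assumptions, not figures from the reporting:

```python
# Back-of-envelope check on "210 billion tokens ≈ Wikipedia 33 times over".
# Assumptions (mine): English Wikipedia holds roughly 4.5 billion words,
# and one token covers about 0.75 English words (a common rule of thumb).
tokens_processed = 210e9        # one engineer, one week
words_per_token = 0.75          # rough tokenizer ratio for English text
wikipedia_words = 4.5e9         # approximate English Wikipedia word count

equivalent_words = tokens_processed * words_per_token
wikipedias = equivalent_words / wikipedia_words
print(f"~{wikipedias:.0f} Wikipedias' worth of text")  # → ~35 Wikipedias' worth of text
```

The exact multiple shifts with the assumed corpus size and tokenizer ratio, but the claim is clearly the right order of magnitude.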

This is tokenmaxxing. Using as much AI as possible as a status signal. Managers at Meta and Shopify are factoring AI usage into performance reviews. Dinner conversations have shifted from "What are you building?" to "How many agents do you have running?"

One engineer put it plainly: "It's becoming a career risk to not use AI at an accelerated pace, regardless of output quality."

Regardless of output quality. The leaderboards don't measure that. Which is the whole problem, and something to watch for wherever AI adoption is being tracked in your own workplace or school.


🧠 Intelligence ≠ Consciousness

Neuroscientist Anil Seth makes a distinction worth holding onto in his piece for Noema: intelligence is about doing; consciousness is about being.

AI systems can do remarkable things. Whether there is anything it is like to be one is a genuinely open question. We tend to conflate the two, and that conflation leads us to trust AI in ways we probably shouldn't.

His line: "If we confuse ourselves too readily with our machine creations, we not only overestimate them, we also underestimate ourselves."

The right posture isn't distrust. It's curious, healthy skepticism. Use it, push back, verify. Not because the system is malicious, but because it isn't what it looks like.


🤔 Consider

Believe nothing you hear, and only one half that you see.
— Edgar Allan Poe

Four stories. Four things that weren't what they first appeared to be. Worth asking that question a little earlier.



