The Moment Digital Identity Stopped Being Evidence
Hey Everyone - Hope your weekend was great. I spent most of mine building things (more on that below), but this week's Signal has been stuck in my head for days. It is one of those stories that feels like science fiction until you realize it already happened.
This week:
The Signal - The moment digital identity stopped being evidence
What I'm building - My new article-to-infographic pipeline (and an AI assistant for my actual life)
Resources - Typewriters, the loneliness economy, and a rare optimist on AI
Skills to Develop - Callback verification
Let's dive in.
This week’s Signal
🌎 The $25.6 Million Video Call

In early 2024, a finance employee at the global engineering firm Arup joined what looked like a routine video call. The CFO was on the line. So were several senior colleagues he recognized. The CFO explained that a sensitive transaction needed to be processed urgently, and asked him to move money across a series of international accounts.
Over the next few minutes, he initiated fifteen wire transfers totaling $25.6 million.
Every person on that call was a deepfake. The CFO, the colleagues, the faces he knew, the voices he had heard in meetings. All generated. He had done exactly what he was trained to do. He had seen, heard, and recognized the people asking. The problem was that seeing, hearing, and recognizing no longer mean what they used to.
This is not an isolated story. Experian's 2026 Fraud Forecast names agentic AI and deepfake job candidates as the top operational threats facing enterprises this year. Consumers lost $12.5 billion to fraud in 2025. Detected deepfake incidents grew from 500,000 in 2023 to 8 million in 2025, a sixteenfold jump. A Nature Human Behaviour study found humans identify deepfake videos correctly about 54 percent of the time. Another benchmark puts it closer to 24.5 percent. Either way, we are at best barely better than a coin flip, and often worse.
For most of human history, if you saw someone's face and heard their voice, you could be reasonably confident you were talking to them. That assumption is the foundation of almost every trust decision we make, from how banks verify identity to how you decide whether to pick up the phone.
That assumption is now wrong. And I do not think most people have fully absorbed what that means.
The instinctive response is to build better detectors. That is the wrong move. The best public detectors scored 89 percent against 2023 fakes. Against the latest synthetic video they are down to 71 percent. Humans are far worse. The detection arms race is being lost, and the gap is widening every quarter.
The better question is what happens when you stop trying to detect the fake and start trying to certify the real.
This is already happening in places that cannot afford to be wrong. Schools are bringing back typewriters and blue book exams. Journalists insist on on-the-record calls to numbers they already have. Camera makers are embedding cryptographic chips that sign an image at the moment of capture. The whole field is shifting from detection to provenance. Do not ask whether this is fake. Ask whether you can prove it is real.
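To make the provenance idea concrete, here is a toy sketch of signing content at the moment of capture and verifying it later. This is a simplification, not how camera chips actually do it: real systems like C2PA use asymmetric signatures and hardware-protected keys, and the key and function names below are mine, purely for illustration.

```python
import hmac
import hashlib

# Toy provenance sketch: sign content at "capture" time, verify it later.
# Real systems use asymmetric signatures and hardware-protected keys;
# a shared secret stands in for both here.
CAPTURE_KEY = b"secret-held-by-the-camera"

def sign_at_capture(image_bytes: bytes) -> str:
    """Return a signature minted the moment the content is created."""
    return hmac.new(CAPTURE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_provenance(image_bytes: bytes, signature: str) -> bool:
    """True only if the bytes are exactly what was signed at capture."""
    expected = hmac.new(CAPTURE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"raw sensor data"
sig = sign_at_capture(photo)
print(verify_provenance(photo, sig))            # True: provably untouched
print(verify_provenance(photo + b"edit", sig))  # False: provenance broken
```

The point of the sketch is the inversion: the verifier never asks "does this look fake?" It asks "does this carry a valid signature from the moment of capture?" Any edit, however convincing, breaks the chain.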
The personal version of this shift matters for most of us. When perception cannot be trusted, the channels that survive are the ones where identity is already established. The friend whose voice you have heard in your kitchen. The colleague you sat across from at a conference. These relationships are not valuable only because they are meaningful. They are valuable because they are a provenance system. You have a verified baseline, and a deepfake has to fool something much deeper than your eyes.
This is where the story connects to everything I have been writing for a year. Local community, in-person meetings, the small dinner party, the Dunbar-sized network of people you actually know. I used to call it a cultural preference. I no longer see it that way. It is infrastructure. It is the only layer of trust that cannot be forged at scale.
The practical takeaway splits two ways. If you operate a business, assume any face or voice on a screen could be synthetic, especially in a moment of urgency or money. The next sophisticated fraud will not look like fraud. It will look like your boss, on video, asking for something slightly unusual but not impossible. If you have a protocol before that call comes, you win. If not, you lose at the speed of a wire transfer.
If you are living your life, invest in the handful of relationships where identity is already established and where the cost of faking you is high. Those are the people who will notice when something is off. They are also the people you can verify anything through when in doubt.
The $25.6 million video call was not a glitch. It was a preview. The next one will not look like fraud either.
What I’m Building
An Infographic Pipeline for Austin Founders Feed, and an Assistant for My Life

Two automations shipped this week.
The first is an article-to-infographic pipeline for the Austin Founders Feed. I drop in an article, and the pipeline produces a finished infographic with credit to the author at the bottom. The thing I learned building it was counterintuitive. When I wrote the image prompt myself, the results were mediocre. When I let the AI read the article and write its own visual prompt, the quality jumped dramatically. The AI knows the article better than I do in that moment, so it knows what to show. My job was to build the pipeline, not to handwrite every brief.
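The two-stage shape of that pipeline can be sketched in a few lines. To be clear, this is my hedged reconstruction of the idea, not the actual build: `generate` is a stand-in for whatever model API you use, and the prompts and function names are hypothetical.

```python
def generate(prompt: str) -> str:
    # Stand-in for a real LLM call (hosted API, local model, etc.).
    # It just echoes here so the pipeline shape is runnable end to end.
    return f"[model output for: {prompt[:40]}...]"

def article_to_infographic_prompt(article: str, author: str) -> str:
    # Stage 1: the model reads the article and writes its own visual brief,
    # instead of a human hand-writing the image prompt.
    brief = generate(
        "Read this article and describe, as an art director would, the "
        f"single best infographic to summarize it:\n\n{article}"
    )
    # Stage 2: turn that brief into the final image prompt,
    # with the author credit baked in.
    return generate(
        f"Create an infographic from this brief: {brief}\n"
        f"Include the credit line 'Source: {author}' at the bottom."
    )
```

The design choice is the interesting part: the human writes the pipeline once, and the model writes every per-article brief, because at that moment the model has read the article more closely than you have.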
The second is my personal AI assistant. It handles calendar and scheduling, email triage and drafting, errands and household purchases. It monitors my Discord servers and inboxes and flags anything I missed. It keeps my Notion todo list honest. I get a morning briefing to start the day and an evening summary to close it.
Both builds follow the same rule. Let the machine do the repetitive work. Protect the thinking for myself.
What I’m Learning
Lots of stuff
Things I Learned:
Experian's 2026 Fraud Forecast - The primary source for this week's Signal. Experian lays out agentic AI and deepfake job candidates as the top 2026 operational threats. Gartner predicts 30% of enterprises will find standalone identity verification unreliable by the end of the year. Worth reading the whole thing.
Typewriter Takeover in Schools - The Columbian documents a national trend of high schools and universities bringing back typewriters and blue books to certify student work. The photo alone tells the story. This is the provenance pivot showing up in a classroom.
An AI-worried Economist Finds a Rare Reason for Hope - University of Chicago behavioral economist Alex Imas, previously a well-known AI pessimist, argues that historical automation patterns hold. Routine tasks collapse in value while contextual judgment, emotional intelligence, and adaptive skills command steeper premiums than in any previous wave. The counterpoint to the doom reading.
When Everyone's Brilliant, Human-to-Human Connection Becomes Your Superpower - A tight essay on why, in a world where everyone has access to the same model, the scarcest resource is genuine connection. Pairs cleanly with this week's Signal.
Survival Skill
Callback Verification
The idea is simple. For any request involving money, credentials, access, or urgency, you never act on the channel the request came in on. You verify through a second, pre-agreed channel first. A phone number you already have saved. A text to a known number. A safe word you agreed on in advance.
This matters because the signature of a modern AI-powered scam is not a shady stranger asking for your password. It is someone you trust, on a channel you use every day, asking for something urgent and slightly unusual. The voice can be cloned. The face can be rendered. The email can be spoofed. The callback is what breaks the loop.
Here is how to build it this week. Pick three or four people where a fake version would actually cost you something. Your spouse. Your mom. Your business partner. Your CFO. Save a phone number for each that you trust, and agree on one word either of you can use to prove you are real in an urgent moment. That is it. You have a protocol.
Then commit to one rule. If anyone in that group asks for money, credentials, a wire, a gift card, a code, or anything urgent, you do not act until you confirm on a different channel. A two-minute delay costs nothing if the request is real, and everything if it is fake.
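Since I spend my weeks turning rules like this into automations, here is the whole protocol as a tiny decision function. A sketch only: the contact list, topic labels, and function names are illustrative, not a real product.

```python
# Callback verification as code. The one rule: for sensitive requests,
# never act on the inbound channel; act only after a second-channel check.
TRUSTED_CALLBACKS = {
    "spouse": "saved phone number",
    "business partner": "saved phone number",
    "cfo": "saved phone number",
}

SENSITIVE = {"money", "credentials", "wire", "gift card", "code"}

def should_act(requester: str, topic: str, verified_on_second_channel: bool) -> bool:
    """Return True only when it is safe to act on the request."""
    if topic not in SENSITIVE:
        return True   # routine request: normal judgment applies
    if requester not in TRUSTED_CALLBACKS:
        return False  # no pre-agreed callback channel: do not act
    return verified_on_second_channel  # act only after the callback confirms

print(should_act("cfo", "wire", verified_on_second_channel=False))  # False
print(should_act("cfo", "wire", verified_on_second_channel=True))   # True
```

Notice what the function never checks: how convincing the face, voice, or email looked. That signal is deliberately ignored, because it is the one an attacker controls.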
This skill is durable regardless of what happens with AI because the logic under it is older than AI. Defense in depth. A second witness. Systems that matter do not trust a single signal. Neither should you.
Pick your people. Save the numbers. Agree on the word. It takes an afternoon. You will almost certainly never use it. The one day you do, it will be worth everything it cost you to build.
Closing Thoughts
If a familiar face asked you for money on a video call tomorrow, what would actually make you pause?
What part of your own life would you not want to automate, even if you could, because it is the part that makes the automation worth doing?
Who are the people in your life you could most easily verify in person, and how often do you actually see them?
Weekly AI Prompt: "I want you to help me audit my personal trust surface for AI-powered fraud. Here is a list of the people, accounts, and institutions I would act urgently for if they contacted me: [list them].
For each one, tell me:
Through which channels could a sophisticated attacker most plausibly impersonate them (email, SMS, video call, phone, in-app DM)?
What is the specific action the attacker would most likely try to get me to take urgently?
What is a realistic verification step I could pre-agree with this person that would make an impersonation attempt fail?
What is one habit I should build this month to reduce my overall exposure across this whole list?
Then tell me which relationship on my list is the highest-value target for an attacker and what I should do about it this week."
Until next week,
Ken
