Adults Lose Skills to AI. Children Never Build Them.

Hey Everyone - Hope you had a great weekend. I've been deep in the weeds building out new systems for this newsletter, and I've been thinking a lot about what I should and should not be handing off to AI. More on that below. This one hit me hard this week.

This week:

  • The Signal - The two types of cognitive damage AI is doing

  • What I'm building - My new AI-assisted workflow (and what I refuse to automate)

  • Resources - Mythos, AI agents, and the analog revival

  • Skills to Develop - Analog thinking time

Let's dive in.

This week’s Signal
🌎 Adults Lose Skills to AI. Children Never Build Them.

A recent study gave software developers a new coding library to learn. Half used AI to help. Half did not. Both groups produced working code. But when they were tested on whether they actually understood what they had built, the AI-assisted group performed 17 percent worse on conceptual quizzes. They could not debug what the AI had written for them. They had the output without the understanding.

These were experienced developers with years of programming knowledge to fall back on. They still lost ground.

Now consider what happens when the person using AI has no existing knowledge at all.

Timothy Cook wrote a piece in Psychology Today that reframed this entire conversation. His argument is simple but unsettling. What AI does to a 45-year-old is categorically different from what it does to a 14-year-old.

When an adult uses AI to summarize a research paper, that is delegation. The adult has read hundreds of papers. They know what a good argument looks like. If AI disappeared tomorrow, they could still do the work. The skill has atrophied, but it still exists. It can be rebuilt.

When a child uses AI to write an essay, that is substitution. The child has never learned how to structure an argument independently. The neural pathways for source evaluation, for constructing original thought, were never formed. You cannot atrophy a muscle that was never built.

Cook calls this cognitive foreclosure. And foreclosure, unlike atrophy, may not be reversible.

A separate study by Michael Gerlich supports this. Participants over 46 showed higher critical thinking scores alongside lower AI reliance. Participants between 17 and 25 showed the inverse. The older group offloaded tasks they already knew how to perform. The younger group offloaded tasks they never learned to perform.

This changes the conversation from "are we getting lazier" to something more structural. Adults who lean on AI too heavily get less sharp. That is a productivity problem. Children who grow up delegating their thinking to AI may never develop the cognitive foundations for independent reasoning. That is a civilizational problem.

I want to be careful here. I am not anti-AI. I use it every single day. But the question is not whether AI is good or bad. The question is whether you are delegating or substituting. And that answer depends entirely on what you have already built inside your own head.

If you already have deep knowledge in a domain, AI amplifies you. If you do not have that foundation, AI replaces the process that would have built it. The speed is real. The shortcut is also real.

If you are a parent, pay attention to what your kids are using AI for. There is a difference between a child using AI to check their work and a child using AI to do their work. The line between them is thinner than it looks.

If you are early in your career, be honest about where you are substituting instead of delegating. If you cannot do the task without AI, you do not understand it yet.

If you are experienced, your deep knowledge is what makes AI useful to you. Protect it. Keep doing things the hard way sometimes, not because it is efficient, but because it is how you maintain the judgment that makes your shortcuts meaningful.

The downside of adult offloading is people get less sharp. The downside of childhood offloading is a generation that was never sharp to begin with. The difference between those two outcomes is not a matter of degree. It is a matter of kind.

What I’m Building
My AI-Powered Newsletter System (And What I Refuse to Automate)

This week I went deep on something I've been thinking about for a while. I rebuilt my entire newsletter workflow using Claude's Cowork system.

Here is what it does for me now. Every morning at 8am, it searches for relevant AI news, research papers, and articles. It summarizes them and saves a daily research brief. It keeps a running list of future post ideas with notes on which themes they connect to. It has a full style guide and template built from analyzing all 18 of my past posts. It even has persistent memory of my writing patterns, my projects, and the themes I come back to.
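To make the shape of that morning pipeline concrete, here is a minimal Python sketch of the research-brief step. This is an illustration, not my actual setup: the helpers `search_ai_news` and `summarize` are hypothetical placeholders for what Cowork does internally, and only the assembly logic is shown.

```python
from datetime import date

def search_ai_news(query: str) -> list[str]:
    # Hypothetical placeholder: a real version would call a news or search API.
    return [f"Result for: {query}"]

def summarize(items: list[str]) -> str:
    # Hypothetical placeholder: a real version would call an LLM to condense each item.
    return "\n".join(f"- {item}" for item in items)

def build_daily_brief(queries: list[str]) -> str:
    """Assemble a dated markdown brief with one section per query."""
    sections = [f"## {q}\n{summarize(search_ai_news(q))}" for q in queries]
    header = f"# Research Brief: {date.today().isoformat()}"
    return header + "\n\n" + "\n\n".join(sections)

brief = build_daily_brief(["AI research papers", "AI news"])
print(brief)
```

Swap the two placeholder functions for real search and summarization calls and schedule the script to run each morning, and you have the skeleton of the same delegation boundary: the machine gathers and condenses, the human decides what matters.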

I'm not going to lie, it is pretty wild. It feels like having a research assistant who actually knows my newsletter.

But here is the part that connects to this week's Signal. I was very deliberate about what I gave it and what I kept for myself.

The AI handles research aggregation, scheduling, file organization, and pattern tracking. I handle the thinking. I decide what the Signal is. I write the arguments. I choose which survival skill matters this week. I make the judgment calls about what connects to what.

That boundary is not accidental. If I let AI write the Signal section, I would lose the thing that makes this newsletter mine. Not the words, but the thinking behind them. The slow process of reading something, sitting with it, connecting it to something I saw last week or experienced last month. That is where the value lives. And it is exactly the kind of process that cognitive offloading erodes.

So my system is designed to be a delegation tool, not a substitution tool. It does the things I already know how to do but do not want to spend time on. It does not do the things I need to keep doing myself in order to stay sharp.

If you are building with AI right now, I think this distinction is worth thinking about. What are you delegating? What are you substituting? And do you know the difference?

What I’m Learning
Lots of Stuff

Big week for interesting content. Here is what caught my attention:

Things I Learned

  • Claude Mythos is too dangerous for public consumption - Fireship covers Anthropic's decision to lock down their new Mythos model over security concerns. The model can apparently discover long-standing software vulnerabilities. When AI gets good enough to find exploits faster than humans can patch them, things get interesting fast.

  • No way this actually works - ThePrimeagen reacts to a recent AI development that seems too good to be true. Always appreciate the skepticism from this channel.

  • Learn 80% of Claude Cowork in Under 20 Minutes - Jeff Su walks through the 7 core capabilities of Claude's Cowork system. This is basically the tool I used to build my new workflow. Good breakdown of local file access, persistent memory, connectors, and scheduled tasks.

  • Companies Are Laying Off Workers Because of AI's Potential, Not Its Performance - HBR argues the current wave of AI layoffs (nearly 80,000 tech workers in Q1) is driven by speculative expectations rather than proven results. Even Sam Altman has acknowledged "AI washing." Companies are blaming AI for cuts they would have made anyway.

  • 2026 Analog Revival: Gen Z and Millennials Rejecting AI - Younger generations are gravitating toward disposable cameras, vinyl, in-person meetups, and analog experiences. Not as nostalgia, but as a deliberate search for tangibility. Exactly the counter-trend I've been writing about.

Survival Skill
Analog Thinking Time

This week's survival skill is one that sounds almost absurdly simple but is becoming increasingly rare and increasingly valuable. Learning how to think without a screen.

I do not mean meditation, although that is fine too. I mean deliberately setting aside time to process ideas, solve problems, and make connections using only your own mind. No chat window. No search bar. No AI assistant suggesting the next thought.

This skill matters now because of exactly what we discussed in this week's Signal. Every time you reach for AI to help you think through a problem, you are making a choice about which cognitive muscles get exercised and which ones do not. For any individual instance, the cost is tiny. But the costs compound. And they compound invisibly, which is the dangerous part.

I have started building this into my week deliberately. When I am working on the Signal section of this newsletter, I do not open Claude until I have a clear thesis written by hand. I take walks without headphones and let my mind wander through whatever I have been reading that week. I carry a small notebook and write down connections when they hit.

What surprised me is how uncomfortable it felt at first. I would catch myself reaching for my phone to look something up or opening a chat window to help me articulate a half-formed idea. That impulse is telling. It means the offloading habit is already forming, even in someone who thinks about this constantly.

The practice itself is straightforward. Pick one thinking task per day that you normally do with AI and do it without. Write a first draft by hand. Work through a problem on a whiteboard. Sit with a decision for 30 minutes before asking for input. Journal about something you are stuck on without searching for answers.

The goal is not to abandon AI. The goal is to maintain the cognitive infrastructure that makes AI useful. If you can think clearly without it, you can think even more clearly with it. If you cannot think without it, you are not using a tool. You are dependent on one.

This is also one of those skills that compounds in a direction most people do not expect. The more you practice analog thinking, the better your AI interactions get. Your prompts improve because your thinking is clearer. Your ability to evaluate AI output improves because your judgment is sharper. You catch mistakes faster. You ask better questions.

I think of it like a professional athlete who still does bodyweight exercises. They have access to every machine and piece of technology in the gym. But the basics keep the foundation strong. The basics are what everything else is built on.

In a world where everyone has access to the same AI tools, the person who can still think without them has a genuine edge. Not because AI is bad. But because the ability to think independently is what makes AI worth using.

Protect your ability to think. It is the one thing AI cannot give back to you once it is gone.

Closing Thoughts

  • Are you delegating to AI or substituting with it? Do you know the difference?

  • What are you building right now that keeps your thinking sharp, not just your output fast?

  • When was the last time you solved a hard problem without any AI assistance?

Weekly AI Prompt: "I want you to audit my AI usage honestly. Here is how I used AI this week: [describe your AI interactions]

For each one, tell me:

  • Was this delegation (I already know how to do this) or substitution (I cannot do this without AI)?

  • What cognitive skill am I not exercising because of this?

  • If AI disappeared tomorrow, could I still do this task?

  • What is one way I could keep using AI for this task while still building the underlying skill?"

Until next week,

Ken
