14% of Workers Now Experience AI Brain Fry at Work

Hey Everyone - Jensen Huang said last week that he would be "deeply alarmed" if a $500,000 engineer were not spending at least $250,000 a year on AI tokens. That quote has been living in my head rent-free. At the same time, Harvard Business Review published a study showing that intensive AI use is literally frying people's brains. Those two things cannot both be right, and I think the tension between them is one of the most important conversations happening in AI right now.

Meanwhile, I finally got my hands dirty with Open Claw. More on that below.

This week:

  • The Signal - Measuring the Wrong Thing

  • What I'm Building - A Conservative Open Claw Setup

  • Survival Skill - Protecting Your Attention

Let's dive in.

This week’s Signal
🌎 14% of Workers Now Experience AI Brain Fry at Work

Jensen Huang went on the All-In Podcast last week and said something that has been stuck in my head ever since.

He laid out a thought experiment. If you have a software engineer making $500,000 a year, and at the end of the year they have only spent $5,000 in AI tokens, he would "go ape." If they had not spent at least $250,000 in tokens, he would be "deeply alarmed." When asked if NVIDIA is spending $2 billion a year on tokens for its engineering team, he said "we're trying to."

He compared an engineer not using AI to a chip designer saying they are just going to use paper and pencil.

The quote went everywhere. And I understand why he said it. AI tools are genuinely powerful. Engineers who refuse to use them at all are probably leaving value on the table. But the framing of his argument reveals something I think is deeply wrong with how companies are starting to think about AI productivity.

Jensen is measuring consumption. How many tokens did you burn through? How much compute did you use? Meta is doing something similar, counting lines of AI-generated code as a performance metric for engineers. The implicit message is clear: more AI usage equals more productivity. If you are not consuming, you are not performing.

The problem is that the data says the opposite.

Harvard Business Review published a study this month from BCG researchers who surveyed nearly 1,500 full-time workers about their AI usage patterns and cognitive outcomes. What they found should give every executive pushing token consumption metrics serious pause.

The most mentally taxing form of AI engagement was oversight. Workers who reported high degrees of AI oversight expended 14% more mental effort and experienced 12% more mental fatigue. They also reported 19% greater information overload. The researchers found that productivity actually peaked when workers used three AI tools simultaneously. After three, productivity dropped. More tools did not mean more output. It meant more cognitive load with diminishing returns.

The researchers coined a term for what they observed: "AI brain fry." Workers described a buzzing feeling, mental fog, difficulty focusing, slower decision making, and headaches. Fourteen percent of AI users in the study reported experiencing it. And the business costs were significant. Workers experiencing brain fry reported 33% more decision fatigue. They scored 39% higher on measures of major error frequency. And their intent to quit was 39% higher than workers not experiencing it.

One senior engineering manager in the study put it perfectly. He said he had one tool helping weigh technical decisions, another generating drafts and summaries, and he kept bouncing between them and double-checking everything. Instead of moving faster, his brain felt cluttered. He realized he was working harder to manage the tools than to actually solve the problem.

That is the trap. And Jensen's framing pushes people directly into it.

Here is what bothers me most about the $250,000 token budget argument. Jensen Huang is the CEO of the company that sells the GPUs that process those tokens. Every token consumed runs on NVIDIA hardware. He has an enormous financial incentive to convince every company on earth that their engineers should be burning through as many tokens as possible. I am not saying he is being dishonest. I think he genuinely believes AI tools make engineers more productive. But when the person telling you to consume more is the same person who profits from your consumption, you should at least notice the conflict of interest.

This is like a gas station owner telling you that a good driver should spend at least half their salary on fuel. There might be some truth in the idea that you need to drive to be productive. But the specific metric of "how much fuel did you burn" tells you almost nothing about whether you got where you needed to go.

What gets measured gets gamed. When you tell engineers that their performance will be evaluated partly on how many tokens they consumed, they will consume tokens. Not necessarily because it produces better work. But because the incentive tells them to. This is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. And we are watching it happen in real time across the industry.

The HBR study actually found something encouraging buried in the data. When AI was used to replace routine and repetitive tasks, burnout scores dropped 15%. Workers who used AI this way reported higher engagement, more motivation, and even stronger social connection with their peers. The difference was not how much AI they used. It was how they used it.

That distinction matters enormously. The path to productive AI use is not "consume more tokens." It is "use the right tokens on the right problems and stop when you have what you need." It is narrow, intentional, and well-targeted rather than broad, intensive, and measured by volume.

I have been experiencing this firsthand. I just set up Open Claw on a spare computer this week. I gave it one specific use case and read-only access to my Discord server. That is it. Not six agents running simultaneously across every tool I own. One agent, one job, conservative permissions. And even with that narrow scope, it took a significant investment of time to get it working the way I wanted. The tool is not magic. It requires setup, testing, iteration, and honest assessment of whether it is actually helping or just creating more work to manage.

I think the people who will get the most out of AI in the long run are not the ones who consume the most. They are the ones who are most deliberate about what they consume and why. That is a harder thing to measure than token spend. But it is the thing that actually matters.

What I’m Building
A Conservative Open Claw Setup

I finally did the thing. I set up Open Claw on a computer I was not using and started experimenting with it.

If you have been reading this newsletter for a while, you know I have had mixed feelings about Open Claw. A few weeks ago I wrote about someone deploying it in a dental office and the HIPAA nightmare that could create. I still have those concerns. But I also believe that sitting on the sidelines entirely is its own kind of risk. So I decided to get my hands dirty, on my own terms.

Here is what I have learned so far.

First, it is like any other tool. The marketing makes it look like you flip a switch and suddenly an AI is running your life. The reality is that you have to invest a significant amount of time setting it up, configuring it, testing it, and getting it to behave the way you actually want. There is no shortcut here. The people who are getting real value from Open Claw are the ones who put in the hours to make it work for their specific situation. Everyone else is going to try it for an afternoon, get frustrated, and move on.

Second, it is not magic. The biggest source of problems so far has been me. User error. Giving unclear instructions. Not thinking through what I actually wanted the agent to do before telling it to do something. The tool does what you tell it to do, which means you need to be very precise about what you tell it. And there is a deeper issue here that I keep bumping into: we do not know what we do not know. I am sure there are things I am setting up incorrectly or risks I am not seeing because I simply do not have the expertise to recognize them yet. That is humbling and a little unsettling.

Third, and this is the part I want to emphasize, I am being extremely conservative. I gave it one use case: my Discord server for Austin Founders Feed. Read-only access. It can observe and help me manage community activity, but it cannot post, delete, or modify anything on its own. Eventually I will expand to my Notion pages and possibly the email account tied to that project. But I am doing this one layer at a time, testing each expansion before adding the next.
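
To make "read-only" concrete, here is roughly what that boundary looks like. This is a minimal illustrative sketch using the discord.py library, not Open Claw's actual configuration; the token, intent choices, and log file name are placeholders I picked for the example. The point is the shape of it: the bot can listen and record, and there is simply no code path that posts, deletes, or modifies anything.

# Illustrative sketch only: a read-only Discord observer built with discord.py,
# standing in for the kind of access I gave Open Claw. Token and file name are placeholders.
import discord

intents = discord.Intents.none()   # start from zero permissions
intents.guilds = True              # needed to see the server at all
intents.guild_messages = True      # receive message events
intents.message_content = True     # read message text (must also be enabled in the developer portal)

client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    # Observe and record only. There are no send, delete, or edit calls anywhere in this script.
    if message.author.bot:
        return
    with open("community_activity.log", "a", encoding="utf-8") as f:
        f.write(f"{message.created_at.isoformat()} #{message.channel} {message.author}: {message.content}\n")

client.run("YOUR_BOT_TOKEN")  # token for a bot invited with read-only server permissions

The same least-privilege idea applies whatever tool you are using: grant only the access the one job needs, and add more only after you have watched it behave.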

I think there is something to be said for being a skeptical first mover. You do not have to be reckless to be early. You do not have to give an AI agent the keys to everything just because the technology allows it. You can move first and still move carefully. Set boundaries. Limit permissions. Pick a use case where the downside of failure is low and the upside of learning is high.

That is the approach I would recommend to anyone thinking about experimenting with these tools. Do not wait until everyone else has figured it out. But do not rush in without a plan either. Start with one thing. Give it the minimum access it needs. Learn from what goes wrong. Then expand from there.

I will keep sharing what I learn as I go. So far the biggest takeaway is that the gap between "this tool exists" and "this tool is useful for me" is much larger than most people think. Closing that gap takes real work. But I think it is work worth doing.

Survival Skill
🛠️ Protecting Your Attention

Everything I wrote about this week comes back to one thing. Your attention is a finite resource, and almost everything in your environment is trying to spend it for you.

The HBR study I referenced in the Signal found that the workers getting the most out of AI were not the ones using the most tools. They were the ones using AI to eliminate repetitive tasks so they could redirect their attention to work that actually mattered. The workers burning out were the ones whose attention was being pulled in every direction by agents, dashboards, and oversight responsibilities they never asked for.

This is not just an AI problem. But AI is making it worse faster than anything that came before it.

Every tool you add is a withdrawal from your attention budget. Every agent you spin up needs monitoring. Every new platform comes with a feed, a notification system, and an implicit demand that you check in regularly. Most of us never sit down and consciously decide where our attention goes. We just react to whatever is loudest.

The skill is treating your attention the way you would treat money. You would not hand your credit card to every person who asked for it. But that is exactly what most people do with their focus. They adopt every tool, leave every notification on, check every feed, and then wonder why they feel scattered and exhausted by 2pm.

Start by doing an honest audit. Where is your attention actually going during a workday? Track it for a day if you have to. Most people are shocked by how much of their focus is consumed by things that produce almost no value. Then start making deliberate cuts. Turn off notifications that do not require immediate action. Limit the number of tools you have open at any given time. The HBR study found that productivity peaks at three simultaneous AI tools and drops after that. Three is probably a good ceiling for most things in your work life, not just AI.
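
If "track it for a day" sounds abstract, something as small as the sketch below is enough: a script you leave running in a terminal and poke every time your focus moves. The file name and prompts are my own choices for the example, not anything from the study. What matters is the count at the end of the day, which for most people is uncomfortably large.

# A bare-bones attention audit: log a line every time your focus switches,
# then count the switches at the end of the day. File name is a placeholder.
import csv
from datetime import datetime

LOG_FILE = "attention_log.csv"

def main() -> None:
    print("Attention audit running. Type what you are switching to, or 'quit' to stop.")
    with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        while True:
            activity = input("> ").strip()
            if activity.lower() == "quit":
                break
            writer.writerow([datetime.now().isoformat(timespec="seconds"), activity])
            f.flush()  # keep the log safe even if the terminal closes early

    # Quick summary: how many context switches did the day involve?
    with open(LOG_FILE, newline="", encoding="utf-8") as f:
        switches = list(csv.reader(f))
    print(f"{len(switches)} recorded context switches. Scan the log and ask which ones produced anything.")

if __name__ == "__main__":
    main()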

The hardest part is that doing less feels like falling behind. When everyone around you is spinning up every new agent and measuring their productivity by how much they consume, choosing to be deliberate feels risky. But there is a difference between refusing to use tools and refusing to let tools use you.

The people I know who are genuinely productive all share one thing in common. They are ruthless about what gets their attention. They say no to most things so they can say yes fully to a few things. In a world where AI can do more and more of the work, the most valuable thing you bring is your focused judgment. Protect it accordingly.

Closing Thoughts

Three questions I am sitting with this week:

  1. Is your company measuring AI usage by volume or by impact, and do you know the difference?

  2. Where is your attention actually going during a workday, and would you be comfortable with that answer?

  3. What is one tool or notification you could cut today that would give you back meaningful focus?

🤖 Weekly AI Prompt

"Audit my current workflow. I am going to describe every tool, platform, and notification source I interact with during a typical workday. For each one, tell me honestly whether it is adding value or just adding noise. Then help me design a simplified setup that protects my attention and keeps only what actually moves my work forward."

Until next week,

Ken
