The Warning Sign That We Are Cooked
Hey Everyone - I’m writing this before the weekend. I’m planning a “dopamine detox”: basically sitting in a room without any stimulation or food for 2 days straight. Not sure what this has to do with AI… but wanted to share. Will update with how it goes (if anyone cares).
This week:
The warning sign that we are all cooked - Theory of mind
What I’m building - I actually (vibe) coded this week…
Resources - Binging YouTube AI
Skills to Develop - Sell. Sell. Sell.
Let’s dive in.
This week’s Signal
🌎 The Warning Sign That We Are Cooked

We have spent a lot of time arguing about how fast AI is improving. How cheap it is getting. Whether it is in a bubble. Whether the hype is overblown.
But there is a much clearer signal to watch for.
The moment we should really pay attention is when AI can do humor well.
I first heard this idea articulated clearly by Philip Su on the “A Life Engineered” podcast, and it reframed how I think about progress in AI. Not speed. Not benchmarks. But theory of mind.
(This is a great interview BTW, I highly recommend watching the whole thing)
Theory of mind is the ability to understand that other people have beliefs, intentions, emotions, and expectations that are different from your own. It is what allows humans to empathize, persuade, teach, and joke. A joke only works if you understand what the other person expects and then intentionally subvert it in a way that feels safe and surprising.
Humor is not just language. It is context, timing, shared experience, and emotional calibration. The same joke can be hilarious to one person and fall completely flat with another. To be funny consistently, you have to model the mind of the person you are talking to.
AI today is very good at pattern matching. It can explain jokes. It can generate things that resemble jokes. But it does not truly know who it is talking to. It does not understand your internal state, your boundaries, or what would feel appropriate in a specific moment. It simulates understanding without actually having it.
That is why humor is such a powerful signal.
If AI can make us laugh, it can also manipulate us.
Humor is a shortcut to trust. We laugh with people we feel safe around. We lower our guard when something consistently understands us. If a system can model your perspective well enough to land a joke, it can also nudge your beliefs, soften your resistance, and influence your decisions.
This already matters in social media. Recommendation algorithms do not need to convince you of anything directly. They just need to keep you engaged, entertained, and emotionally invested. Add humor that feels personal, and that influence becomes much stronger.
It matters even more in advertising. Ads that feel generic are easy to ignore. Ads that feel like they understand you are harder to resist. A system with theory of mind does not just optimize for clicks. It optimizes for rapport.
And in politics, the implications are obvious. Persuasion that adapts in real time to individual psychology is far more powerful than mass messaging. This is not about ideology. It is about mechanics. A system that understands mental states can shape narratives at a level of precision we have never seen before.
This is why debates about whether AI is in a bubble miss the point. Bubbles affect markets. They do not define capability. The real question is whether these systems are moving closer to understanding humans as humans rather than as statistical patterns.
But here is the more hopeful part.
Recognizing this signal early gives us leverage. It reminds us that human judgment, boundaries, and community matter more, not less. It pushes us to invest in media literacy, real relationships, local trust, and skills that keep us grounded in the physical world.
AI getting better at understanding us does not mean we are powerless. It means we need to be more intentional about where we place our attention and who we allow to earn our trust.
So if you want a simple signal to watch for, ignore the hype cycles and product launches.
Watch for the moment when AI gets your jokes.
And then remember that understanding how influence works is the first step to staying human in a world full of very persuasive machines.
Please take 3 seconds to fill this out. If you don’t I’ll send my AI agents after you!
Which feels like the biggest real risk from AI to you personally?
Last week’s poll results still at the end!
What I’m Building
Vibe Coding & Automations

One of my goals for the new year is to get better at systems. If I have good systems I can do more with less time. It also allows me to scale up anything that I do.
I view automating repetitive tasks as one of the most effective ways to systemize. This usually backfires: I spend 3 days building something only to never use it and still do the manual process. I swear 2026 is going to be different though.
These are the things I’m automating right now:
1) Event collection —> For my local Austin newsletter I’m vibe coding a scraper that aggregates business events in my area. This should save my partner and me about 3 hours a week searching these websites.
2) Cost monitoring —> Every podcast I listen to tells me how important monitoring costs is. Unfortunately, I hate doing it. Anything that has to do with accounting gives me the ick. Thus I have built a little platform that tracks all my costs and revenues associated with our local newsletter business. My accountant would be so proud.
3) Email outreach? —> This is a stretch for me. I hate sending emails, but I hate sending AI emails even more. I’m going to try to experiment with outbound email automations to find sponsors for my other newsletter. 50/50 if this happens. If someone is buying something from me, I feel like I should at least have the decency to do my research on them and send them a real email. Maybe I’m old fashioned.
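For the curious, the core of an event scraper like #1 can be surprisingly small. Here is a minimal sketch using only Python’s standard library. The HTML structure and class names (`event-title`, `event-date`) are hypothetical; a real site would need its own selectors, and a real version would fetch pages over HTTP instead of parsing a hardcoded snippet.

```python
from html.parser import HTMLParser

class EventParser(HTMLParser):
    """Collect (title, date) pairs from a listing page.

    Assumes events are marked up with class="event-title" and
    class="event-date" spans -- purely illustrative selectors.
    """
    def __init__(self):
        super().__init__()
        self.events = []
        self._field = None     # which field the next text chunk belongs to
        self._current = {}     # fields gathered for the event in progress

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "event-title":
            self._field = "title"
        elif cls == "event-date":
            self._field = "date"

    def handle_data(self, data):
        if self._field:
            self._current[self._field] = data.strip()
            self._field = None
            # Once both fields are present, record the event and reset.
            if "title" in self._current and "date" in self._current:
                self.events.append(self._current)
                self._current = {}

# Stand-in for a fetched listing page.
sample = """
<div class="event"><span class="event-title">Small Biz Mixer</span>
  <span class="event-date">2026-01-15</span></div>
<div class="event"><span class="event-title">Founders Coffee</span>
  <span class="event-date">2026-01-18</span></div>
"""

parser = EventParser()
parser.feed(sample)
for e in parser.events:
    print(f"{e['date']}: {e['title']}")
```

In practice you would run something like this on a schedule (cron, or an n8n/Zapier workflow) and dump the results into a spreadsheet or the newsletter draft.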
If you want to get your feet wet with automation, I recommend experimenting with tools like Zapier, n8n, or make.com.
The Future of Tech. One Daily News Briefing.
AI is moving faster than any other technology cycle in history. New models. New tools. New claims. New noise.
Most people feel like they’re behind. But the people who don’t aren’t smarter. They’re just better informed.
Forward Future is a daily news briefing for people who want clarity, not hype. In one concise newsletter each day, you’ll get the most important AI and tech developments, why they matter, and what they signal about what’s coming next.
We cover real product launches, model updates, policy shifts, and industry moves shaping how AI actually gets built, adopted, and regulated. Written for operators, builders, leaders, and anyone who wants to sound sharp when AI comes up in the meeting.
It takes about five minutes to read, but the edge lasts all day.
What I’m Learning
AI Rabbit hole

Things I Learned
Bill Gurley’s take on an AI Bubble - The rise of charlatans with every new technology.
Content I Made
I guess this is really the only content I made this week :(. I have some videos in the pipeline though!
Survival Skill
Learning to sell yourself

This week’s survival skill is one that makes a lot of people uncomfortable, but it might be the most important one to build: learning how to sell yourself.
As AI gets better at producing work, the bottleneck shifts away from execution and toward opportunity. Getting work. Getting invited in. Getting chosen. None of that is automatic, and none of it is solved by being quietly competent.
Selling is not about manipulation. It is about translation. It is the skill of helping other people understand the value you provide and why it matters to them.
AI can generate resumes, portfolios, and pitch decks. It cannot build conviction for you. It cannot advocate for you in a room you are not in. It cannot carry your reputation forward when decisions are being made by humans under uncertainty.
Selling yourself shows up everywhere. Explaining what you do in a way that makes sense. Following up after a conversation. Asking for opportunities instead of waiting for them. Sharing your work publicly. Making it easy for someone to say yes to you.
This skill only becomes more important as AI spreads. When output becomes abundant, people choose based on trust, clarity, and confidence. They work with people they understand. They hire people who can articulate impact. They invest in people who can make their value legible.
Selling yourself does not mean exaggerating. It means owning your strengths. It means being clear about what problems you solve. It means being willing to say, “I can help with this,” and then standing behind it.
If you can sell yourself, you will always find work. You will always create opportunities. You will not be dependent on a single employer, a single role, or a single system.
AI changes how work is done. It does not change how opportunities are created.
Learn to sell yourself. It is a skill that compounds in every future.
Closing Thoughts
Don’t laugh at any AI jokes anytime soon.
What could you automate? What systems are you creating this year?
Are you selling yourself?
Weekly AI Prompt (for chatgpt): “Based on what you know about me, write a short explanation of what I do and why it is valuable, as if you were introducing me to someone who could help my career or business. Then explain what assumptions you had to make and where you might be wrong.”
Last week’s Poll Results:
If AI automates most digital work, where would you want to invest more of your time?

Interesting split here. Pretty small sample though. Any tips on how to get a better ratio of poll responses than 5/2999?
Until next week,
Ken