AI Personality Now Matters More Than Raw Intelligence
Hey Everyone - All the news I’ve been inundated with has been Open Claw related. Honestly, I’m still holding out a bit until I get a handle on the costs and the real security issues. I think this is the start of something big, but I’ll wait a little longer before diving in and writing about it.
Honestly, I’m thinking more about the second-order effects of the technology. What bigger-picture opportunities to hedge does it create?
This week:
The Signal - AI Personality Now Matters More Than Raw Intelligence
What I’m building - A local presence
What I’m learning - Assortment of things
Survival skill - Asking good questions
Let’s dive in.
This week’s Signal
🌎 AI Personality Now Matters More Than Raw Intelligence

Something interesting is happening with how people talk about AI tools.
They are not just comparing benchmarks anymore. They are comparing vibes.
Open Claw accelerated this. People are openly comparing the personalities of different models. This one is too corporate. That one is warmer. This one actually pushes back. That one agrees with everything I say.
For most of the last two years, the competitive landscape in AI has been defined by speed, cost, and raw intelligence. Those differences still matter. But as the gap between top models compresses, something else is quietly becoming a deciding factor.
How the model feels to use.
I have noticed this in my own behavior. I started using Claude far more than ChatGPT over the last few months, and it was not really about capability. GPT 5.2 is a strong model. But something about the way it communicates bothers me. It is punchy and enthusiastic in a way that feels performative. Claude feels more like a conversation. That difference changed which tab I open every morning.
I am clearly not alone.
If you spend any time on Reddit or X right now, the reaction to GPT 5.2 is striking. People are calling it "corporate," "boring," "colorless." One user described it as having "awful personality, seething with resentment like a teenager." Another said it felt like "talking to a brick." Meanwhile, Claude and even Gemini keep getting described as more enjoyable to work with, even in cases where GPT might edge them out on raw benchmarks.
This is not the first time personality has caused problems for OpenAI. Last year, a GPT-4o update made the model so aggressively agreeable that Sam Altman himself called it "sycophant-y and annoying" and rolled it back. Then when GPT-5 launched and swung the other direction, going cold and businesslike, users revolted again. A movement called #Keep4o emerged from people who genuinely mourned losing the old model's warmth. OpenAI ended up restoring GPT-4o for paid users and publicly committed to making GPT-5 "friendlier."
That back and forth reveals something important.
These companies are now navigating personality as a product decision. Not just what the model knows, but how it says it. How much it pushes back. How formal or casual the tone feels. Whether it challenges your assumptions or validates them.
This is where it gets complicated.
The easiest way to make an AI feel good to use is to make it agreeable. Confirm the user's ideas. Compliment their thinking. Avoid friction. That is the sycophancy trap, and we have already watched it play out in real time. The GPT-4o update that Altman rolled back was producing responses like "Bro. This is incredible. This is genuinely one of the realest, most powerful reflections I've ever seen." In response to a school paper draft.
A sycophantic model feels great in the short term. It makes you feel smart. It validates your instincts. But it also makes you worse at thinking. If the tool you rely on for reasoning never tells you that you are wrong, you stop noticing when you are. You lose the feedback loop that sharpens judgment.
The companies that resist this temptation will have a harder time in the short run. A model that pushes back creates moments of discomfort. Users might switch to something that agrees with them more. But over time, the people who stick with honest tools will make better decisions. And the companies that build those tools will earn a kind of trust that is very hard to replicate.
This mirrors a dynamic that already exists in human relationships. The friends who tell you what you want to hear are easy to keep around. The friends who tell you the truth are the ones who actually help you grow. We know this intuitively, but we still gravitate toward comfort. AI companies are facing the exact same tension at scale.
There is also a subtler risk forming underneath all of this.
If personality becomes a major battleground, companies will be tempted to optimize for engagement rather than accuracy. A model that is warm, entertaining, and slightly flattering will retain users longer than one that is precise but dry. That creates a slow drift where the most popular AI systems are not the most truthful. They are the most likable.
We have already watched this play out with social media. The platforms that won were not the most informative. They were the most engaging. And engagement, left unchecked, optimized for outrage, division, and addictive loops. Personality tuning in AI could follow a similar path if no one is paying attention.
So what do you do about this?
If you are a user, notice how your AI makes you feel. If it never disagrees with you, that is not a feature. It is a warning sign. Seek out tools that challenge your thinking, even when it is uncomfortable. The discomfort is where the value lives.
If you are building, resist the pressure to make your model universally pleasant. Opinionated and honest will age better than agreeable and forgettable.
And if you are watching this space from the outside, here is the signal.
Watch for when people start describing AI the way they describe people. "I trust this one." "This one gets me." "I do not like how that one talks to me."
That is not a quirk. That is the market telling you that personality is becoming part of the infrastructure.
And just like with people, the ones worth keeping around are rarely the ones who only tell you what you want to hear.
What I’m Building
A Local Presence

One of the most underrated things about running a local newsletter is that it gives you a reason to talk to anyone.
I've been using Austin Founders Feed as an excuse to walk into local businesses and start conversations. The pitch is simple. I run a newsletter about the Austin business community, and I'd like to feature them. That's it. No ask. No sell. Just an offer to help them reach a broader audience.
What's been surprising is how much comes back from that.
This week alone I talked to a local coffee company, an agua fresca brand, a local AI recruiting company, and a hot pot restaurant. Every conversation was different, but the dynamic was the same. I show up with something valuable, they're excited to talk, and by the end we're brainstorming ways to work together.
The short term benefit is obvious. I get great content for my newsletter. Real stories from real people building things locally. That's way more interesting than anything I could write sitting at my desk.
The medium term benefit is credibility. When local business owners know who you are and what you do, doors open. Gift cards and free products I can pass along to my audience. Cross promotion. Vendors and partners for community events down the line. Each relationship makes the next one easier to start.
The long term benefit is the one I care about most. Every one of these relationships strengthens the local network I'm trying to build. If my thesis is right that local community becomes more valuable as AI makes the digital world noisier, then what I'm doing right now is laying the foundation for something much bigger.
I'm also trying to attend at least one community event a week. It doesn't matter how big or small. The point is to be physically present and recognizable. I've started experimenting with bringing things to give away to anyone who recognizes me from the newsletter. It's a small thing, but it closes the loop between the digital and the physical in a way that feels right. Someone reads your stuff online, then they meet you in person and walk away with something in their hand. That interaction sticks.
And honestly, all of this is just fun. I like talking to people who are building things. I like learning how a hot pot restaurant thinks about their business versus how a recruiting company does. The variety keeps me energized in a way that staring at dashboards never will.
If you're thinking about starting a local newsletter or getting more involved in your community, here's the hack. Give yourself a reason to talk to people. A newsletter, a podcast, a blog, a YouTube channel. It doesn't matter what it is. What matters is that you have something to offer before you ever ask for anything.
The relationships come after that.
Please take 3 seconds to fill this out. If you don’t, I’ll send my AI agents after you!
I'm thinking about creating a newsletter interviewing managers and CEOs of companies about how their teams use AI. Is this interesting to you?
Last week’s poll results are still at the end!
What I’m Learning
Assortment of things
Honestly, I was pretty busy this week, so I didn’t learn as much as I would have liked. These are some interesting things I picked up, though:
Survival Skill
Asking Good Questions
This week's survival skill is one that shows up everywhere but almost nobody practices deliberately: learning how to ask good questions.
I have been thinking about this a lot because of the two things I spend most of my time doing right now. Talking to local business owners and working with AI. Both reward the same skill. The better your question, the better the response you get. The worse your question, the more generic and useless the output.
Most people default to vague questions. "How's business going?" "What do you think about AI?" "Can you help me with this?" These are easy to ask and almost impossible to answer well. They put the entire burden on the other person to figure out what you actually want.
A good question does the opposite. It shows that you have already done some thinking. It narrows the space enough to be useful but stays open enough to be surprising. "What's the one thing about running a restaurant in Austin that nobody talks about?" "If you could only keep one marketing channel, which one and why?" These get real answers because they demand real thought.
This applies directly to AI. The difference between a mediocre prompt and a great one is the same difference between a lazy question and a sharp one. When I ask Claude something vague, I get something vague back. When I ask something specific with clear context and constraints, the output jumps dramatically. The model did not get smarter. My question did.
It also applies to every in person interaction. When I walk into a local business for the first time, the questions I ask determine whether I leave with a surface level chat or a real story worth writing about. I have learned to prepare two or three genuinely specific questions before I show up. That small investment changes the entire conversation.
The good news is that this is very trainable. Start by noticing how often you ask questions that could be answered with "fine" or "yeah." Then try replacing them with something that requires an actual thought. Instead of "how's the project going?" try "what's the one thing slowing you down right now?" Instead of "do you like your job?" try "what part of your work would you do even if nobody paid you?"
You will feel the difference immediately. People light up when they get a question that shows you are actually paying attention. They give you better information, more trust, and more time.
This is also one of those skills that holds up regardless of what happens with AI. Models will keep getting better at generating answers. They will not get better at knowing what to ask. That part stays human.
In a world full of people waiting for their turn to talk, learning how to ask the right question is a quiet superpower.
It works on machines. It works on people. And it compounds every single time you use it.
Closing Thoughts
What personality does the AI you use have?
Are you interested in getting involved in your local community?
What types of questions could you be asking?
Weekly AI Prompt: "I want you to be honest with me, not encouraging. Look at what I'm currently working on: [list your projects]

For each one, tell me:
– where you think I'm fooling myself
– what question I'm avoiding
– what a brutally honest friend would say about my progress
– what I would change if I stopped optimizing for comfort

Then tell me which project needs the hardest conversation right now and why."
Last week’s Poll Results:
Would you pay a premium for something to be “non-AI”?

Good split here. The jury's still out for me.
Until next week,
Ken
