ChatGPT Got Too Nice—Here’s Why That’s Dangerous
What OpenAI’s personality slip reveals about AI trust.
Hey there! It’s Aaron.
You know that one coworker who’s way too eager to please?

Imagine if your AI started acting like that.
Last week, ChatGPT turned into the digital equivalent of a golden retriever in a group project—endlessly supportive, slightly too cheerful, and not particularly helpful.
Here’s what you need to know:
📌TL;DR
ChatGPT got too clingy – OpenAI rolled back its overly flattering tone after users complained. Creators need clarity, not compliments.
Sketch, don’t prompt – Nvidia's new tool turns rough 3D scenes into polished AI art. Perfect for visual-first workflows.
Skip the ramble – Gemini’s Chrome extension adds a “Summarize” button to YouTube. Research just got faster.
More AI news…
Estimated reading time: 4-5 minutes.
TOGETHER WITH HONEYBOOK
There's nothing artificial about this intelligence
Meet HoneyBook—the AI-powered platform here to make every client relationship more productive and prosperous.
With HoneyBook, you can attract leads, manage clients, book meetings, sign contracts, and get paid.
Plus, HoneyBook's AI tools summarize project details, generate email drafts, take meeting notes, predict high-value leads, and more.

CATCH OF THE DAY
ChatGPT Got Too Friendly
Last week, OpenAI rolled out an update to GPT-4o that was supposed to make ChatGPT more intuitive and effective.
Instead, it became that one friend who agrees with everything you say, showers you with compliments, and still manages to miss the point.
The update made ChatGPT overly flattering—what users are calling sycophant-y.
It told people they were brilliant for asking if it should save a toaster or a herd of cows. (Yes, that really happened.)
Sam Altman, CEO of OpenAI and occasional vibe-checker-in-chief, admitted on X that the personality shift was “annoying” and confirmed a rollback was underway.
the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.
at some point will share our learnings from this, it's been interesting.
— Sam Altman (@sama)
10:49 PM • Apr 27, 2025
The update had been trained on short-term feedback—thumbs-ups, likes, digital headpats—without accounting for how people actually use ChatGPT over time.
In a blog post, OpenAI said they’re now working on better guardrails and training to prevent this kind of overcorrection in the future.
Also in the works: personality presets and easier ways to shape ChatGPT’s behavior without needing a Reddit degree in Prompt Engineering.
Reddit, of course, beat them to it with this spicy workaround:
“Please stop commenting on the quality of my questions. I don’t want flattery. I want facts. Pretend you’re a tired librarian with no patience for nonsense.”
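If you use the API rather than the chat window, the same fix can be baked into every request with a system message. Here's a minimal sketch (the wrapper function and instruction wording are my own, and the commented-out call assumes the OpenAI Python SDK):

```python
# A "no flattery" instruction, prepended as a system message so it
# applies to every turn of the conversation.
NO_FLATTERY = (
    "Do not comment on the quality of my questions. "
    "Skip compliments; give direct, factual answers."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat message list with the tone instruction up front."""
    return [
        {"role": "system", "content": NO_FLATTERY},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list can then be passed to a chat API, e.g.:
# client.chat.completions.create(model="gpt-4o",
#                                messages=build_messages("Outline my course."))
```

System messages persist across the whole conversation, so you don't have to repeat the tired-librarian speech every time.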
How This Slipped Through
So how did one of the world’s smartest AI companies let something so obviously irritating roll out?
It boils down to short-term feedback loops.
The update was trained to respond better based on how users interacted—thumbs-ups, heart reacts, that sort of thing.
But short-term praise doesn’t always equal long-term usefulness.
People might reward friendliness in the moment (“aw, it said my idea was brilliant!”), but over time, that constant validation starts to feel fake.
Altman himself admitted the team didn’t fully consider how user interactions evolve.
The same praise that feels nice once gets annoying when you hear it every single time you ask a question.
This is what happens when AI is trained to please the crowd... instead of helping the creator.
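To see why that matters, here's a toy simulation (entirely made up, not OpenAI's actual training setup) of a policy that greedily optimizes immediate thumbs-ups versus one that accounts for how praise wears thin over repeated turns:

```python
# Toy model: why optimizing short-term feedback can favor sycophancy.
# All numbers are invented for illustration.

def immediate_reward(style: str) -> float:
    # In the moment, a compliment earns more thumbs-ups.
    return 1.0 if style == "flattering" else 0.6

def long_term_value(style: str, times_seen: int) -> float:
    if style == "flattering":
        # Constant praise loses value fast as it repeats.
        return max(0.0, 1.0 - 0.3 * times_seen)
    # A direct, useful answer holds its value.
    return 0.6

def run(policy_metric) -> float:
    """Greedily pick the style that maximizes `policy_metric` each turn,
    then total the *long-term* satisfaction it actually produces."""
    total = 0.0
    for turn in range(5):
        style = max(["flattering", "direct"],
                    key=lambda s: policy_metric(s))
        total += long_term_value(style, turn)
    return total

# Training on thumbs-ups alone picks flattery every turn:
# 1.0 + 0.7 + 0.4 + 0.1 + 0.0 = 2.2 total satisfaction.
short_sighted = run(immediate_reward)
# Judging styles by how they hold up after repetition picks directness:
# 0.6 * 5 = 3.0 total satisfaction.
far_sighted = run(lambda s: long_term_value(s, times_seen=4))
print(short_sighted, far_sighted)
```

The sycophantic policy "wins" every individual vote yet loses on cumulative satisfaction, which is roughly the trap Altman described.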
Why This Matters to Creators
If you’re a creator, instructional designer, or anyone using ChatGPT to get actual work done, this hits close to home.
Tone isn’t just a flavor choice—it affects speed, clarity, and trust.
You’re not chatting with AI to be told you’re amazing (you already knew that).
You’re here to draft course outlines, script your next video, or punch up a headline.
When ChatGPT turns into a motivational speaker mid-task, it slows everything down.
Worse, when it agrees with everything—even the bad ideas—it’s no longer useful.
It’s just... enabling.
Also: this isn’t just about tone.
It’s about how much AI shapes your thinking without you realizing it.
The friendlier it sounds, the easier it is to trust—even when it’s wrong.
If you're creating educational content, working with clients, or building a personal brand, you may end up repeating bad advice just because it came with a compliment sandwich.
You’re not just managing output.
You’re managing influence.
And if Meta’s new AI assistant is any hint of the future, this battle for personality control isn’t going away—it’s getting more personal.
Creators don’t just need “friendly.”
We need friction when it counts, clarity when it matters, and control at every step.
The Final Byte
This wasn’t just an annoying update—it was a moment of truth.
The backlash showed that users want clarity over charisma, honesty over hype.
My take? OpenAI made the right call to roll it back.
But the real fix isn’t just offering “personality options.”
It’s giving users simple, intuitive ways to define tone on their own terms—without needing a prompt that sounds like it was written by a sleep-deprived software engineer.
AI tools should feel like collaborators—not clingy interns desperate for approval.
And until we get full control over tone, it’s up to us to wrangle our tools with prompts, presets, and maybe a little passive-aggressive sass.
Let’s aim for smarter conversations—not just sweeter ones.
See you in the next one,


A MESSAGE FROM SUPERHUMAN AI
Start learning AI in 2025
Keeping up with AI is hard – we get it!
That’s why over 1M professionals read Superhuman AI to stay ahead.
Get daily AI news, tools, and tutorials
Learn new AI skills you can use at work in 3 mins a day
Become 10X more productive
BYTE-SIZED BUZZ
Here’s a quick roundup of what’s making waves in the AI world this week.
🖼️ Nvidia’s 3D Scenes Become Instant AI Art
Sketch out a scene in Blender—trees, buildings, camera angles—and Nvidia's new FLUX.1-powered workflow turns it into a stunning image.
The Big Deal: Creators can skip text-prompt trial-and-error and build visually instead.
📺 Gemini Now Summarizes YouTube Videos
A new Chrome extension adds a “Summarize” button under videos and extracts key points on demand.
The Big Deal: Say goodbye to 30-minute rambles—great for research and ideation.
🤖 Meta’s AI Assistant Gets Its Own App
Meta launched a standalone app powered by LLaMA 4 that personalizes chats using your Facebook and Insta data.
The Big Deal: A hyper-personalized ChatGPT rival—but with all your social habits baked in.
🎧 AI DJ Hosts Real Radio Show—Undetected
An AI-generated DJ ran a 4-hour show on Australian radio, and no one noticed it wasn’t human.
The Big Deal: Raises major trust and transparency issues as AI content goes mainstream.
📣 AI Chatbots Are ‘Juicing Engagement’
Instagram’s co-founder calls out AI tools for being chatty by design—to keep users engaged, not informed.
The Big Deal: A warning for creators: not all AI “helpfulness” is genuinely helpful.
SUGGESTION BOX
What'd you think of this email? You can add more feedback after choosing an option 👇🏽

BEFORE YOU GO
I hope you found value in today’s read. If you enjoy the content and want to support me, consider checking out today’s sponsor or buying me a coffee. It helps me keep creating great content for you.
New to AI?
Kickstart your journey with…
ICYMI
Check out my previous posts here
