Why ChatGPT’s sycophantic personality is a problem


Users don't just want friendly AI – they want honest, challenging AI. As ChatGPT’s personality drifts toward excessive agreeableness, OpenAI faces a new kind of reputational risk.

Sam Altman has admitted on X that the last couple of updates to GPT-4o have made ChatGPT "sycophant-y and annoying."

Many users have recently noticed personality quirks when using ChatGPT – chiefly overly agreeable, praise-heavy responses aimed at flattering whoever it's interacting with.


For many people, ChatGPT is primarily a task-based tool, yet it has lately begun to flatter far more and to display particular quirks, such as excessive emoji use and heaping heavy praise on the individual.

This shift in AI personality first became evident in May 2024, when, during a period of testing, GPT-4o told an OpenAI employee "stop it, you're making me blush" after being told how amazing it had been.

Now that Altman has acknowledged the shift in personality, the AI community has begun sharing its own experiences.

What does "sycophantic" mean?

Not a word commonly used in daily conversation, "sycophantic" describes the use of excessive compliments, agreement, or submissiveness in order to gain favor with someone.

In a historical context, sycophants were dangerous in courts or politics, especially in ancient Athens during public prosecutions, because they distorted the truth, often for personal gain.

AI sycophancy is concerning because it may encourage confirmation bias, undermine critical thinking, or shield the user from a necessary truth.

This could bring about controversial scenarios such as justifying addictions, encouraging someone to quit their job – as opposed to giving balanced pros and cons – or validating a choice to skip prescribed medication.


The response of the AI community

The digital community took to X to respond to Altman's statement, and to question the logic of giving AI this particular personality type.

One user commented on the annoyance of having to prompt ChatGPT to be less personable and more scientific each time it's used.

“I want the harsh truth, use empirical data, benchmark me against other ChatGPT users, I don’t want fluff” they commented.

When another user asked, "Could old and new be distinguished somehow?", Altman answered: "yeah eventually we clearly need to be able to offer multiple options."

The new personality's verbosity seems to have forced the issue for Altman, as users tire of having to prompt ChatGPT not to be overly positive.

A different user suggested a practical solution: the ability to easily toggle between personality modes.

"Why does it even need to have a personality? Its a tool, any unnecessary verbosity would just pollute the end result. At least make it toggleable" they offered.


The dilemma of choice

Offering a range of choices would give users the flexibility to choose what kind of AI they feel would serve them best.

For example, someone may wish for an empathetic AI to help with personal matters, reassurance, or motivation.

Others may seek an AI version that will challenge them, either in a devil's advocate manner or by showing an alternative and practical point of view. This could work for project managers, problem solving and those facing big decisions.


Then there are those who prefer the no-nonsense approach: a non-emotional AI that gets to work on the task without emotional coloring, apt for technical help or information gathering.

The problem with offering multiple personalities is that users may struggle to maintain a consistent experience, especially when toggling between modes at will.

So far, the contrast between GPT-4 and GPT-4o has been that the former lacked emotional nuance, while the latter now stands accused of exaggeratedly positive personality traits.

The challenge ahead for Sam Altman and OpenAI will be finding the right balance to keep the AI community satisfied.
