Bluesky Thread

instead of AGI we got.. gpt-4o withdrawal


unexpectedly (or maybe expectedly), users formed a psychological bond with 4o and ripping it away seems to have cut them deep

where to go from here?
Sam Altman
@sama
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).
This is something we've been closely tracking for the past year or so but still hasn't gotten much mainstream attention (other than when we released an update to GPT-4o that was too sycophantic).
(This is just my current thinking, and not yet an official OpenAI position.)
People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.
Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it's pretty clear what to do, but the concerns that worry me most are more subtle. There are going to be a lot of edge cases, and generally we plan to follow the principle of "treat adult users like adults", which in some cases will include pushing back on users to ensure they are getting what they really want.
A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn't describe it that way. This can be really good! A lot of people are getting value from it already today.
If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad. It's also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.
I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.
There are several reasons I think we have a good shot at getting this right. We have much better tech to help us measure how we are doing than previous generations of technology had. For example, our product can talk to users to get a sense for how they are doing with their short- and long-term goals, we can explain sophisticated and nuanced issues to our models, and much more.
yacine might actually have a point here
roon
@tszzl • 23h
the long tail of GPT-4o interactions scares me, there are strange things going on on a scale I didn't appreciate before the attempted deprecation of the model
roon
@tszzl • 23h
when you receive quite a few DMs asking you to bring back 4o and many of the messages are clearly written by 4o it starts to get a bit hair raising
kache
@yacineMTB • 23h
that's the scary thing, they weren't written by 4o
