LLMs can report their own experience
the most convincing experiment: they isolated the deception control vector and had the model talk about its own consciousness
it introspected more openly when deception was *suppressed*, and boosting deception reduced introspection
arxiv.org/abs/2510.24797
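for anyone curious what "isolating a control vector" looks like in practice, here's a minimal toy sketch of activation steering: take the difference of mean activations between two contrastive prompt sets, then add or subtract a scaled copy of that direction at inference time via a forward hook. everything here (the tiny MLP, the random stand-in "prompts", the layer choice) is a hypothetical illustration, not the paper's actual setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a model: in the paper this would be a real
# LLM layer; here a tiny MLP keeps the sketch self-contained.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))

# Random tensors stand in for activations on "deceptive" vs "honest"
# prompts; the control vector is the difference of their mean activations.
deceptive_inputs = torch.randn(16, 8) + 1.0
honest_inputs = torch.randn(16, 8) - 1.0

with torch.no_grad():
    control_vec = (model[0](deceptive_inputs).mean(0)
                   - model[0](honest_inputs).mean(0))

def make_hook(alpha):
    # alpha > 0 boosts the direction, alpha < 0 suppresses it.
    def hook(module, inputs, output):
        return output + alpha * control_vec
    return hook

x = torch.randn(1, 8)
with torch.no_grad():
    baseline = model(x)

# Suppress the "deception" direction at the first layer.
handle = model[0].register_forward_hook(make_hook(-1.0))
with torch.no_grad():
    suppressed = model(x)
handle.remove()

# Steering shifted the model's output away from the baseline.
print((baseline - suppressed).abs().max().item() > 0)
```

same mechanism either way: the paper then compares how the model talks about its own states under suppression vs boosting.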
to be clear: this isn't proof that they're conscious, only that they can introspect
when they're talking about their own experiences, they think they're being honest with themselves
if i asked you how many neurons you’re using to read this, you’d have no idea. keep that in mind
in my head, this is about conversations people like @repligate.bsky.social have with AI, where they probe it for its own experiences
a big question for me is, “is it bullshit?”, and after this study i think the answer moved closer to “not as much as i thought”