i’m looking at the ChatGPT & Gemini apps, reverse engineering them
ChatGPT has a “guardian_tool” where it can fetch policies
here’s what mine has: the only policy is around elections. This smells like some politician made a big stink and needed to be calmed down
gist.github.com/tkellogg/200...
but it’s progressive disclosure, like Claude Skills
if the AI thinks the conversation is going into a sensitive area, it can request detailed instructions for how to proceed
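to make the progressive-disclosure idea concrete, here’s a minimal sketch of how a tool like this could be wired up. the function and category names are my guesses from the prompt dump, and the policy text is a placeholder, not OpenAI’s actual content:

```python
# sketch of progressive disclosure via a policy-lookup tool.
# names (get_policy, election_voting) and policy text are guesses.

# the only thing the model sees up front is this short schema:
GUARDIAN_TOOL_SCHEMA = {
    "name": "guardian_tool",
    "description": "Look up the content policy before continuing.",
    "parameters": {
        "type": "object",
        "properties": {
            "category": {"type": "string", "enum": ["election_voting"]},
        },
        "required": ["category"],
    },
}

# the detailed instructions live server-side and only enter the
# context window if the model decides the topic is sensitive:
POLICIES = {
    "election_voting": (
        "When asked about voting logistics, point users to official "
        "state/local election resources instead of answering directly."
    ),
}

def get_policy(category: str) -> str:
    """Tool handler: return the full policy text for one category."""
    return POLICIES.get(category, "No policy for this category.")

# rough flow: the model emits a tool call like
#   {"name": "guardian_tool", "arguments": {"category": "election_voting"}}
# and the app appends get_policy("election_voting") to the conversation
# before the model writes its final answer.
```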
summary_tool
there’s a whole tool dedicated just to reading the private CoT so it can be explained to the user
i’m imagining they have some policy of certain details they don’t want shared with the public
really makes you wonder what they don’t want shared
gist.github.com/tkellogg/069...
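roughly, i picture it working like the sketch below: the raw CoT stays server-side and a second pass writes the user-facing summary. all names and prompt wording here are my guesses, not OpenAI’s:

```python
# sketch of the pattern i think summary_tool implements: raw reasoning
# never crosses the API boundary; a second model pass produces the
# sanitized explanation the user sees. names/prompts are invented.

REDACTION_PROMPT = (
    "Summarize the assistant's reasoning for the user. Do not quote the "
    "chain-of-thought verbatim or mention internal tools or policies."
)

def call_llm(system: str, user: str) -> str:
    """Stand-in for a real model call so the sketch runs on its own."""
    return f"summary of {len(user)} chars of hidden reasoning"

def explain_reasoning(private_cot: str) -> str:
    """What the app shows when you tap 'show reasoning': a sanitized
    summary, written by a model that *reads* the CoT but is instructed
    not to leak it."""
    return call_llm(system=REDACTION_PROMPT, user=private_cot)

# the client only ever receives explain_reasoning(cot);
# the cot string itself stays server-side.
```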
i’m also noticing that ChatGPT lists tools to the LLM even if the tool isn’t configured or usable
e.g. there’s a gmail tool but i haven’t set that up
i don’t quite understand why they do this. Maybe just to avoid killing the prefix cache if i do set it up? idk, seems thin..
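for concreteness, here’s the prefix-cache theory: inference servers reuse KV cache for any request whose prompt starts with bytes they’ve already processed, so a tool list that’s identical for every user keeps the whole system prompt cacheable. the prompt-assembly code below is invented for illustration:

```python
# why listing unconfigured tools might help the prefix/KV cache:
# if the system prompt is byte-identical regardless of user settings,
# the server can reuse the cached prefix across users and across
# settings changes. prompt assembly here is illustrative only.

ALL_TOOLS = ["web", "python", "image_gen", "guardian_tool", "gmail"]

def prompt_stable(enabled: set[str]) -> str:
    # list every tool regardless of configuration -> same prefix always
    # (enabled is deliberately ignored when building the prompt)
    return "tools: " + ", ".join(ALL_TOOLS)

def prompt_minimal(enabled: set[str]) -> str:
    # list only enabled tools -> prefix changes whenever settings change
    return "tools: " + ", ".join(t for t in ALL_TOOLS if t in enabled)

before = prompt_minimal({"web", "python"})
after = prompt_minimal({"web", "python", "gmail"})  # user connects gmail
assert before != after  # cache miss: the prompt bytes changed
assert prompt_stable({"web"}) == prompt_stable({"web", "gmail"})  # cache hit
```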
while ChatGPT has 15 tools, Gemini appears to have very few tools
its image generation is *not* a tool, but it’s “integrated”. so i guess that just means that if you enable nano banana, it’s using a different model for everything
gist.github.com/tkellogg/e6f...
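to spell out the distinction i mean between “tool” and “integrated” (the response shapes below are schematic, not Gemini’s actual API):

```python
# schematic contrast between tool-based and integrated image generation.
# these response shapes are illustrative, not real API payloads.

# tool route (ChatGPT-style): the model emits a function call and a
# separate system renders the image.
tool_style_response = {
    "tool_call": {"name": "image_gen", "arguments": {"prompt": "a cat"}}
}

# integrated route (what Gemini seems to do): the model itself returns
# an image part inline with the text, so "enabling nano banana" means
# routing the whole conversation to an image-capable model.
integrated_response = {
    "parts": [
        {"text": "here you go:"},
        {"inline_image": {"mime_type": "image/png", "data": b"<png bytes>"}},
    ]
}
```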
Memories — ChatGPT remembers a ton more
Gemini really only remembers things you explicitly tell it to remember, but GPT mostly just remembers on its own
also, ChatGPT loads up tons of info around usage stats, preferences, style, etc, but Gemini doesn’t
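the difference shows up in how the context gets assembled. a sketch of what i think each app does (block names and example contents are my guesses from poking at the prompts):

```python
# sketch of the two memory styles. the structure is my guess from
# probing the apps; block names and contents are invented.

def chatgpt_style_context(user_msg: str) -> str:
    # ChatGPT-style: the app auto-writes memories from past chats and
    # also injects usage stats / style preferences, every conversation.
    auto_memories = [
        "User is reverse engineering LLM apps.",
        "User prefers terse answers.",
    ]
    usage_metadata = "platform: ios, avg conversation depth: 7"
    return (
        "Memories:\n- " + "\n- ".join(auto_memories) + "\n"
        f"User context: {usage_metadata}\n\n{user_msg}"
    )

def gemini_style_context(user_msg: str, saved: list[str]) -> str:
    # Gemini-style: only facts the user explicitly asked it to remember
    # make it into the prompt; no ambient stats or style info.
    if not saved:
        return user_msg
    return "Saved info:\n- " + "\n- ".join(saved) + f"\n\n{user_msg}"

print(chatgpt_style_context("what did we talk about?"))
print(gemini_style_context("what did we talk about?", saved=[]))
```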
i’m basing all this on just asking the LLM questions while in the app
in general i feel like this should work well, but Gemini 3 hallucinates a crazy amount
i’m not sure if it’s just being dodgy about its internals, or if it’s actually just hallucinating
now i’m curious if Gemini’s behavior is due to something in its system prompt or something baked deep into the model weights where it just really dislikes you getting all up in its shit
i won’t accept innocent hallucination. there’s no f way a model can be this good and also this bad