Evidence that Gemini 3 is very large:
1. the QT
2. Artificial Analysis (image)
   quote: x.com/artificialan...
   report: artificialanalysis.ai/evaluations/...
3. Demis Hassabis said 1-2 months ago that major version numbers indicate order-of-magnitude (OOM) compute scaling, while minor versions indicate RL scaling
So you’ve got public statements in several directions, plus unaffiliated evidence-based confirmation
plus this bsky.app/profile/nato...
Recurring frontier lab gossip:
OpenAI has the best post-training/RL and has pushed it super hard on top of weaker pretraining.
Gemini has spectacular pretraining. Making a reasoning model was super easy for them, and OpenAI folks were surprised.
Anthropic? Secretive, I guess.
1 hour later
more: verification of Nato’s observation about labs
there are people on here who are convinced that Gemini 3 Pro is small, but I just don’t see any reliable evidence of that
being fast just means it’s a sparse MoE, which is 100% normal these days; it would be surprising if it weren’t
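A sparse mixture-of-experts routes each token through only a few experts, so per-token compute (and hence serving speed) tracks active parameters, not total size. A minimal sketch of that arithmetic, with made-up numbers (none of these are Gemini 3 figures):

```python
# Why speed doesn't pin down total size for a sparse MoE.
# All numbers here are hypothetical, not Gemini 3 specifics.

def active_params(total_params: float, n_experts: int, top_k: int,
                  expert_fraction: float = 0.9) -> float:
    """Rough active-parameter count per token for a top-k routed MoE.

    expert_fraction: share of total parameters living in expert FFNs;
    the rest (attention, embeddings, etc.) is always active.
    """
    shared = total_params * (1 - expert_fraction)
    per_expert = total_params * expert_fraction / n_experts
    return shared + top_k * per_expert

# A hypothetical 5T-parameter model with 128 experts and top-2 routing:
total = 5e12
act = active_params(total, n_experts=128, top_k=2)
print(f"{act / total:.1%} of parameters active per token")  # ~11% here
# Per-token FLOPs scale with active params, so a very large sparse model
# can serve tokens about as fast as a much smaller dense one.
```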
21 hours later
update: scaling01’s initial 7.5T estimate was based on fp4, but there’s no evidence that TPUv7 actually supports fp4, so the estimate is revised down to ~5T
fyi @harsimony.bsky.social
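The thread doesn’t show the revision arithmetic, but the general shape is that a size estimate derived from a fixed weight-memory footprint scales inversely with the assumed bytes per parameter. A rough sketch; the budget value and datatype table are illustrative assumptions, not scaling01’s actual numbers:

```python
# How the precision assumption drives a parameter-count estimate.
# The budget and datatypes are hypothetical, chosen only to show
# that the estimate scales as 1 / bytes_per_param.

BYTES_PER_PARAM = {"fp4": 0.5, "fp6": 0.75, "fp8": 1.0, "bf16": 2.0}

def params_for_budget(weight_budget_bytes: float, dtype: str) -> float:
    """Parameters that fit in a fixed weight-memory budget at a given precision."""
    return weight_budget_bytes / BYTES_PER_PARAM[dtype]

budget = 3.75e12  # hypothetical weight budget, in bytes
for dtype in ("fp4", "fp6", "fp8"):
    print(dtype, f"{params_for_budget(budget, dtype) / 1e12:.2f}T params")
# fp4 7.50T / fp6 5.00T / fp8 3.75T: the same hardware footprint
# implies a smaller model once a wider datatype is assumed.
```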