LRMs Are Interpretable

A year ago I wrote a post called LLMs Are Interpretable. The gist was that LLMs are the closest thing to “interpretable machine learning” we’ve seen from ML so far. Today, I think it’s fair to say that LRMs (Large Reasoning Models) are even more interpretable.

Yesterday DeepSeek released their reasoning model, R1. For kicks, I threw it a riddle that my 8-year-old loves:

If you’re flying over a desert in a canoe and your wheels fall off, how many pancakes does it take to cover a dog house?

Most people will (should) do a double take and then give up. It’s a nonsense question. Even if you try to estimate the sizes of doghouses and pancakes, there’s so much disagreement about both that the estimates are meaningless anyway. It’s a test of a highly ambiguous situation: how does the model handle...
