AI generated code is slop, and that's a good thing

In his recent Dwarkesh podcast interview, Andrej Karpathy now-notoriously said:

Overall, the models are not there. I feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it’s not. It’s slop.

AI code is slop.

I argue that code should be slop. Not just AI code, but even human-written code. Slop is the ideal form of code, the pinnacle we have always striven for. That won’t sit well with you, dear reader. So let’s take it slow.

what is slop?

In an epic blog post on defining the term, John David Pressman (@jdp) says this:

Slop is written to pad the word count.
Slop is when you procrastinate on your college essay and crap something out the night it’s due.
Slop is the logical conclusion of chasing the algorithm.
Slop is the distilled extruded essence of the Id.
Slop is when you have a formula and stick to it.
Slop is when you can guess the exact minute in a police procedural where they find the killer because it’s the same in every episode.
Slop is when the k-complexity of the generator is low enough that you can infer its pattern.
Slop is eating lunchables every day at school until you puke.
Slop is when a measure ceases to be a good target.
Slop is the 12th sequel to a superhero movie.
Slop is generated from the author’s prior without new thinking or evidence.
Slop is Gell-Mann amnesia.
Slop is in distribution.
Slop is when the author’s purpose for writing is money.
Slop is a failure to say anything interesting.
Slop is what you find at the bottom of the incentive gradient.
Slop is a deeper simulacra level than it purports to be.
Slop is vibes.

Slop is boring, unsurprising, predictable, uninspiring. Yawn…

code should be slop

Go look back into ancient history, say 2-3 years ago, and you’ll find software engineers saying things like:

Interesting. Good code is boring, unsurprising, predictable, uninspiring.

Slop. Good code should be slop.
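
What does boring look like in practice? Here’s a hypothetical sketch (my example, not Karpathy’s): both functions compute the same total, but the first is “interesting” and the second is slop.

```python
from functools import reduce

# "Interesting" code: compressed, clever, mildly surprising.
def total_paid_clever(orders):
    return reduce(
        lambda acc, o: acc + (o["qty"] * o["price"] if o["status"] == "paid" else 0),
        orders,
        0,
    )

# Slop: boring, predictable, exactly what you'd guess before reading it.
def total_paid_boring(orders):
    total = 0
    for order in orders:
        if order["status"] == "paid":
            total += order["qty"] * order["price"]
    return total
```

Nobody has to puzzle over the second version. Any reviewer, and any model, can predict every line of it.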

Karpathy didn’t say that!!

Yes he did.

Throughout that section of the interview, Karpathy asserted that AI coding agents weren’t much help for him because his code was “out-of-distribution”. In other words, Karpathy did it to himself:

I would say nanochat is not an example of those because it’s a fairly unique repository. There’s not that much code in the way that I’ve structured it. It’s not boilerplate code. It’s intellectually intense code almost, and everything has to be very precisely arranged. The models have so many cognitive deficits. One example, they kept misunderstanding the code because they have too much memory from all the typical ways of doing things on the Internet that I just wasn’t adopting. The models, for example—I don’t know if I want to get into the full details—but they kept thinking I’m writing normal code, and I’m not.

Karpathy didn’t find AI tools helpful on nanochat because he deliberately chose patterns that were not normal. He even acknowledged that he’s found them helpful on other projects.

This isn’t a knock on Karpathy; he had a goal for his code. It was going to be an educational repository. He didn’t want “normal” code; he wanted code that maximized his educational goals for it.

pristine code is not the goal

Most of the time, your employer’s goal is to create value as quickly as possible. High-quality, maintainable code is simply a proxy, a strategy for rapid value delivery over an extended period of time.

If code becomes a rat’s nest, too much time gets sucked into making even trivial changes, and value delivery becomes slow and burdensome. Even boring code is merely a strategy for avoiding unmaintainable code.

The end goal is still the same. Rapid value delivery. Karpathy had an exceptional case with extraordinarily strange goals. You are not Karpathy.

ai delivers value quickly

Recently I outlined how I approach AI coding:

  1. Have a sense of ownership
  2. Exploit opportunities

While explaining organizational dynamics to someone recently, I used the phrase “forces of nature”. If an organization prefers a top-down style of communication, then a grassroots effort will probably take a ton of energy and likely fail, because it goes against the nature of the organization.

In 2014, Tim Ewald gave a talk titled “Programming with Hand Tools” where he drew a very similar parallel between programming and woodworking. You need to observe the grain of the wood and only make cuts that acknowledge this fundamental nature of the material.

AI coding agents deliver value very quickly, but they obviously fail in several scenarios. So don’t do that. Don’t do things that don’t work. This isn’t rocket science. Be an engineer: exploit opportunities and avoid pitfalls.

Karpathy:

So the agents are pretty good, for example, if you’re doing boilerplate stuff. Boilerplate code that’s just copy-paste stuff, they’re very good at that.

A real engineer would see that as an opportunity. “If I structure our code to maximize boilerplate, I can get even more leverage out of AI.” Like, maybe it’s not a great idea to add a free monad, idk.
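
To make that concrete, here’s a hedged sketch of what “structuring to maximize boilerplate” might look like (the handlers and the db object are hypothetical, mine and not Karpathy’s):

```python
# Deliberately boilerplate-heavy: every handler has the same visible shape.
# A clever refactor would collapse these into one generic factory; leaving
# them expanded gives an AI agent an obvious, in-distribution pattern to copy.

def get_users(db):
    rows = db.query("SELECT id, name FROM users")
    return [{"id": r[0], "name": r[1]} for r in rows]

def get_products(db):
    rows = db.query("SELECT id, name FROM products")
    return [{"id": r[0], "name": r[1]} for r in rows]

def get_orders(db):
    rows = db.query("SELECT id, name FROM orders")
    return [{"id": r[0], "name": r[1]} for r in rows]
```

Now “add a get_invoices handler” is a task a model can finish almost by pattern-completion, which is exactly the copy-paste territory where Karpathy says agents shine.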

This stuff isn’t new. It’s what software engineers do. When something’s not working, you refactor the code base or shuffle teams into smaller, more focused groups. It’s why design patterns exist. Trade-offs like microservices are a way to make your code worse along one dimension in order to make it better along another dimension that matters more to your team.
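
Here’s a toy version of that kind of trade-off (a hypothetical example of mine, not anything from the interview): duplicating a tiny validator across two services instead of sharing a library.

```python
# billing/validate.py (hypothetical) -- billing keeps its own copy.
def billing_valid_email(addr: str) -> bool:
    return "@" in addr and "." in addr.split("@")[-1]

# signup/validate.py (hypothetical) -- the same logic, duplicated on purpose.
# Worse by the DRY yardstick, better by coupling: either service can change
# its rules and deploy alone, with no shared library to version and migrate.
def signup_valid_email(addr: str) -> bool:
    return "@" in addr and "." in addr.split("@")[-1]
```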

yes, but i’m an exception

Maybe you’re like Karpathy and you’ve found yourself in the exceedingly rare situation where your goal is something other than quickly delivering value. Do this: annual review season is coming soon. Tell your boss that you’re not going to use AI tools because you believe your objective does not include quickly delivering value.

Just try it. I’m sure it’ll go well.

conclusion

I’ve wanted to write a “how to AI program” piece, but that feels like it’s been done far too much. Karpathy’s “slop” comment seemed like the perfect segue into what really matters: exploiting opportunities. I’ve turned around teams by iteratively asking, “what can we do better?” Why wouldn’t it work for AI tools also?

Our job as software engineers (or any kind of engineer for that matter) isn’t to write code. Many professions write code. Software engineers do something bigger. The amount of time consumed by writing code seems to have distracted us from our core job, and I think AI offers the opportunity to get our priorities straight again.

discussion