LLMs are great code reviewers. They can even spot security mistakes that open us up to vulnerabilities. But no, they're not an adequate mitigation, and you can't rely on them to ensure security.

To be clear, I’m referring to this:

  1. User sends a request to app
  2. App generates SQL code
  3. App asks LLM to do a security review, and loops back to step 2 if the review fails
  4. App executes generated code
  5. App uses results to prompt LLM
  6. App returns LLM response to user
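
In code, that loop looks roughly like the sketch below. Everything in it is a hypothetical stand-in (llm_generate_sql, llm_review_sql, llm_answer, run_sql, and the Review type); it's just the six steps written out, not a real API.

    # A rough sketch of the flow above. All of the helpers here
    # (llm_generate_sql, llm_review_sql, llm_answer, run_sql) are
    # hypothetical placeholders, not a real library API.
    from dataclasses import dataclass

    @dataclass
    class Review:
        passed: bool
        notes: str

    def llm_generate_sql(request: str, feedback: str = "") -> str:
        ...  # prompt the LLM to write SQL for the user's request

    def llm_review_sql(sql: str) -> Review:
        ...  # ask the LLM whether the generated SQL is safe

    def llm_answer(request: str, rows: list) -> str:
        ...  # prompt the LLM with the query results

    def run_sql(sql: str) -> list:
        ...  # execute the SQL against the app's database

    def handle_request(request: str, max_attempts: int = 3) -> str:
        sql = llm_generate_sql(request)                    # step 2: generate SQL
        for _ in range(max_attempts):
            review = llm_review_sql(sql)                   # step 3: LLM security review
            if review.passed:
                break
            sql = llm_generate_sql(request, review.notes)  # failed review: back to step 2
        rows = run_sql(sql)                                # step 4: execute the generated SQL
        return llm_answer(request, rows)                   # steps 5-6: prompt the LLM, return its answer

Note that step 4 executes whatever survives the review loop, so the LLM review is the only thing standing between the generated SQL and your database.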

This might be confusing at first. If LLMs are good at identifying security issues, why can't they be used in this context?

Bad Security

The naive way to do security is to know everything about all exploits and simply not do bad things. Your naive self quickly gets tired and realizes you'll never know about all exploits, so anything that you can do that might...