LLMs are the new “worse is better”?

Years ago I spent quite a bit of time learning about code fuzzing, especially at scale. It seemed like a highly inefficient way to detect bugs. You know what we actually need? Formal methods. Just use formal methods and mathematically prove that the code is correct. In reality, that is not easy, to say the least, so I settled for fuzzing and the power of prayer for bug detection. But now I’m looking at one of the new LLM-based bug-detection tools, and I’m unfortunately impressed. I say unfortunately because I have (had?) a fundamental objection to how they work: I was barely content with fuzzing, and now I get a randomized parrot that no one understands. But it is doing a decent job and improving with time, so maybe I should get over myself.

It got me thinking about the old debate, from well before my time, between the Unix way and the Multics way, briefly laid out in “The Rise of Worse is Better” <https://dreamsongs.com/RiseOfWorseIsBetter.html>; I recommend reading it if you haven’t. I’ve always felt that if it had been me, I would have been firmly on team Multics and would have despised Unix (at least initially). I would have been one of those angry guys saying “it takes a tough man to make a tender chicken”. I would not have given speed and accessibility nearly enough weight, and that essay made me rethink that stance (both in software and in life more generally). I still feel it’s a shame to completely abandon elegant solutions built on a strong mathematical basis, and I hope we keep striving for them. I worry that tools like these become an excuse to settle for less than excellence. But for now, I will make do with the parrot.
