I appreciate Simon’s balanced take on how LLMs can enhance a project when used responsibly.
I’m curious, though—what are this community’s opinions on the use of LLMs in programming?
The problem with this article is that he stresses you need to check the code and step in when needed, yet relying heavily on LLMs will eventually make it impossible to tell what's wrong, and eventually even to read the code, since it will use libraries you've never experimented with (the LLM can just write the code for you, so you never do).
Also “vibe-coding” is stupid af. You take out the human element altogether because you just accept all changes without reading them and then copy/paste errors back in without any context.
I’ve almost completely stopped using them, unless I’m stuck at a dead end. In the end all they have done is slow me down and make me unable to think properly anymore. They usually write way too much code, especially with tab completion, so I end up deleting code right after hitting tab (what’s the point; IntelliSense has always been really good and now it’s somehow worse). They’re usually wrong unless prompted multiple times. People say you can use them to generate boilerplate, but you could just use a language with little or no boilerplate, like Kotlin. And there are usually very subtle bugs they introduce, or they’re solving a problem that’s already documented on Stack Overflow, while I wouldn’t be reaching for an LLM if I could just kagi it, so they end up solving the wrong problem.
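To make the boilerplate point concrete, here’s a rough Kotlin sketch (my own example, not from the article): a single data class declaration gives you the constructor, equals()/hashCode(), toString(), and copy() that people often ask an LLM to churn out by hand in more verbose languages.

```kotlin
// Hypothetical example: a Kotlin data class generates the constructor,
// equals()/hashCode(), toString(), and copy() that would otherwise take
// dozens of lines of getters, setters, and overrides to write out.
data class User(val id: Long, val name: String, val email: String)

fun main() {
    val original = User(1, "Ada", "ada@example.com")
    // copy() comes for free; no builder or setter boilerplate needed.
    val renamed = original.copy(name = "Ada Lovelace")
    println(renamed)              // User(id=1, name=Ada Lovelace, email=ada@example.com)
    println(original == renamed)  // structural equality via generated equals(): false
}
```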
One thing they’re decent for, if you don’t care about code quality, is converting code into a language you don’t know. You’re not going to end up with good idiomatic code, but it will probably function.
None of this is to say that LLMs aren’t amazing, but if you start to depend on them you very quickly find that your ability to solve more complex problems atrophies. Then, when you hit a genuinely difficult problem, you waste far more time on something that past you would have found simpler.
Ignore the “AGI” hype—LLMs are still fancy autocomplete. All they do is predict a sequence of tokens—but it turns out writing code is mostly about stringing tokens together in the right order, so they can be extremely useful for this provided you point them in the right direction.
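To spell out what “fancy autocomplete” means, here’s a deliberately toy sketch (entirely made-up names and scores, not any real model or API): generation is just a loop that asks “given everything so far, which token comes next?” and appends the answer.

```kotlin
// Toy illustration of "fancy autocomplete": generation is a loop that scores
// possible next tokens given the context and appends the best one.
// scoreNextTokens is a hard-coded stand-in for a real model, nothing more.
fun scoreNextTokens(context: List<String>): Map<String, Double> =
    when (context.lastOrNull()) {
        "fun"  -> mapOf("main" to 0.9, "helper" to 0.1)
        "main" -> mapOf("()" to 0.95, "{" to 0.05)
        "()"   -> mapOf("{" to 0.9, "=" to 0.1)
        "{"    -> mapOf("println(\"hi\")" to 0.8, "}" to 0.2)
        else   -> mapOf("}" to 1.0)
    }

fun generate(prompt: List<String>, maxTokens: Int): List<String> {
    val tokens = prompt.toMutableList()
    repeat(maxTokens) {
        // Greedy decoding: always take the highest-scoring next token.
        val next = scoreNextTokens(tokens).entries.maxByOrNull { it.value }?.key
            ?: return tokens
        tokens += next
        if (next == "}") return tokens  // stop once the toy "program" closes
    }
    return tokens
}

fun main() {
    // Prints: fun main () { println("hi") }
    println(generate(listOf("fun"), maxTokens = 8).joinToString(" "))
}
```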
I’m just super happy to see someone talking about LLMs realistically without the “AI” bullshit.
If LLMs help people code, then that’s great. But stop with the hype, grifting, etc. Kudos to this author for a reasonable take. Extremely rare.
The issue isn’t whether you can get good results or not. The issue is the skills you are outsourcing to a proprietary tool, skills you will either never learn or will gradually forget: getting information out of documentation, designing an architecture, understanding and replicating an algorithm, and so on.
You will eventually start struggling with critical thinking; there are already studies about that.
Of course, if you use them in moderation and don’t rely on LLMs too much, you should be OK.
But how did that work out for everyone with short-form content and social networks over the last ten years? How is your attention span doing? Surely we have all managed to consume short-form content in moderation, since we knew the risks to our attention spans, right?
If I’m doing something in a language I only halfway know and rarely use in depth, I’ll use them more. For bash scripting, for example, I use them all the time. For Java I basically never touch them because I don’t need them.
I was going to say “Who?” until I looked at his bio: he helped start Django, which I use. I need to go lay down.