I don’t see how that follows. You appear to be conflating the skill of learning the specific syntax of a language with the skill of designing algorithms and writing contracts. These are two separate things. A developer will simply do work that’s more akin to what a mathematician or a physicist does.
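To be concrete about what I mean by contracts: the skill is in stating what the code must guarantee, not in typing out the body. A rough sketch, with purely illustrative names, where the implementation could just as well be LLM-generated:

```python
def sort_scores(scores: list[float]) -> list[float]:
    """Return the scores in non-decreasing order."""
    # precondition: every element is a number
    assert all(isinstance(s, (int, float)) for s in scores)

    result = sorted(scores)  # the body could be hand-written or machine-generated

    # postconditions: same elements, now ordered
    assert sorted(scores) == result
    assert all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    return result
```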
LLMs don’t allow the programmer to bypass every stage of the learning process in both an academic and professional capacity. That’s simply a false statement. You cannot create quality software with the current generation of LLMs without understanding how to structure code, how algorithms work, and so on. These are absolutely necessary skills to use these tools effectively, and they will continue to be needed.
Its proliferation is already having significant and serious effects, so, no, I don’t think it’s a moral panic.
Again, this has been said many times before, whenever development became easier. What’s really happening is that the barrier is being lowered, and a lot more people are now able to produce code, many of whom simply would’ve been shut out of this field before.
I think LLM technology itself is fundamentally flawed, and believing that formal methods and genetic algorithms will prevent its issues in real-world applications is a recipe for disaster, or at least for even crappier software than most of what we get today.
I see no basis for this assertion myself, but I guess we’ll just wait and see.
Maybe history will prove me wrong, but I see zero reason to trust LLMs in basically any capacity.
Nobody is suggesting trusting LLMs here. In fact, I’ve repeatedly pointed out that trust shouldn’t be part of the equation with any kind of code, whether it’s written by a human or a machine. We have proven techniques for verifying that the code does what was intended, and that’s how we write professional software.
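To make that concrete: the kind of verification I’m talking about doesn’t care who produced the implementation. Here’s a minimal sketch using property-based testing; I’m assuming the hypothesis library and a made-up dedupe function here:

```python
from hypothesis import given, strategies as st

def dedupe(items: list[int]) -> list[int]:
    """Remove duplicates while preserving first-seen order."""
    # This body could come from a person or an LLM; the test below doesn't care.
    seen: set[int] = set()
    out: list[int] = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

@given(st.lists(st.integers()))
def test_dedupe(items: list[int]) -> None:
    result = dedupe(items)
    assert len(result) == len(set(result))  # no duplicates remain
    assert set(result) == set(items)        # nothing lost, nothing added
    # first occurrences keep their relative order
    assert result == [x for i, x in enumerate(items) if x not in items[:i]]
```

The point is that the properties encode the intent; the implementation is interchangeable.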
There’s going to be another AI winter and revolution before AI programming technology is used for anything but sludgeware and vibe-coded apps that leak all their user data.
Even if this technology stopped improving today, which there is no reason to expect, it is already a huge quality-of-life improvement for software development. There are plenty of legitimate real-world use cases for this tech already, and it’s not going away.