• semioticbreakdown [she/her]@hexbear.net · 2 days ago

    It’s relevant because the key skill is being able to understand the problem and then to represent it formally. That’s the skill you need whether you have agents fill in the blanks or you do it yourself. There’s a reason you do a lot of math and algorithm work on paper in university (or at least I did back in my program): the focus was on understanding how algorithms work conceptually and on writing pseudocode. The specific language used to implement the code was never the focus.

    I think the crux of my argument is that this skill straight up cannot be developed, and will degrade, because it is in a dialectical relationship with practice: it’s tied to the learner actually implementing and applying these concepts. There has never been a programming tool that let the user simply bypass every stage of this learning process in both an academic and a professional capacity. Its proliferation is already having significant and serious effects, so no, I don’t think it’s a moral panic. And beyond the psychosis outside the coding sphere, LLMs make you less effective as a programmer, even if you’re a senior one. I think LLM technology itself is fundamentally flawed, and believing that formal methods and genetic algorithms will prevent its issues in real-world applications is a recipe for disaster, or at least for even crappier software than most of what we get today. Maybe history will prove me wrong, but I see zero reason to trust LLMs in basically any capacity. There’s going to be another AI winter and revolution before AI programming technology is used for anything but sludgeware and vibe-coded apps that leak all their user data.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 2 days ago

      I don’t see how that follows. You appear to be conflating the skill of learning the specific syntax of a language with the skill of designing algorithms and writing contracts. These are two separate things. A developer will simply do work that’s more akin to what a mathematician or a physicist does.
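
      As a concrete illustration of the distinction (a hypothetical sketch, not anyone’s actual code): in plain Python, the docstring and assertions below are the contract, and designing them is the skill that stays with the developer no matter who or what fills in the body.

      ```python
      # Hypothetical sketch: the contract (docstring + assertions) is the
      # developer's design work; the body could be written by a person or
      # an agent and then checked against it.
      def binary_search(xs: list[int], target: int) -> int:
          """Return an index i such that xs[i] == target, or -1 if absent.

          Precondition: xs is sorted in ascending order.
          """
          assert all(a <= b for a, b in zip(xs, xs[1:])), "precondition: xs sorted"
          lo, hi = 0, len(xs) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if xs[mid] == target:
                  return mid  # postcondition: xs[mid] == target
              elif xs[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          return -1  # postcondition: target is not in xs
      ```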

      LLMs don’t allow the programmer to bypass every stage of the learning process in both an academic and professional capacity. That’s simply a false statement. You cannot create quality software with the current generation of LLMs without understanding how to structure code, how algorithms work, and so on. These are absolutely necessary skills to use these tools effectively, and they will continue to be needed.

      > Its proliferation is already having significant and serious effects, so no, I don’t think it’s a moral panic.

      Again, the same thing was said every previous time development became easier. What’s really happening is that the barrier to entry is being lowered, and a lot more people are now able to produce code, many of whom would simply have been shut out of the field before.

      > I think LLM technology itself is fundamentally flawed, and believing that formal methods and genetic algorithms will prevent its issues in real-world applications is a recipe for disaster, or at least for even crappier software than most of what we get today.

      I see no basis for this assertion myself, but I guess we’ll just wait and see.

      > Maybe history will prove me wrong, but I see zero reason to trust LLMs in basically any capacity.

      Nobody is suggesting trusting LLMs here. In fact, I’ve repeatedly pointed out that trust shouldn’t be part of the equation with any kind of code, whether it’s written by a human or a machine. We have proven techniques for verifying that code does what was intended, and that’s how we write professional software.
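
      As one concrete example of what I mean by verification, here’s a minimal sketch using property-based testing with the Hypothesis library (the function under test is hypothetical and stands in for any code, human- or LLM-written):

      ```python
      # Hypothetical sketch: check the code against stated properties
      # instead of trusting whoever (or whatever) wrote it.
      from hypothesis import given, strategies as st

      def dedupe_sorted(xs: list[int]) -> list[int]:
          # Implementation under test; could come from a person or an LLM.
          return sorted(set(xs))

      @given(st.lists(st.integers()))
      def test_dedupe_sorted(xs):
          out = dedupe_sorted(xs)
          # Property 1: strictly increasing, i.e. sorted with no duplicates.
          assert all(a < b for a, b in zip(out, out[1:]))
          # Property 2: exactly the distinct elements of the input.
          assert set(out) == set(xs)
      ```

      Run that under pytest and Hypothesis searches for counterexamples automatically; if a property fails, you get a minimal failing input rather than an argument about trust.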

      > There’s going to be another AI winter and revolution before AI programming technology is used for anything but sludgeware and vibe-coded apps that leak all their user data.

      Even if this technology stopped improving today, which there is no reason to expect, it is already a huge quality-of-life improvement for software development. There are plenty of legitimate real-world use cases for this tech already, and it’s not going away.