• CommunistCuddlefish [she/her]@hexbear.net · 27 points · 1 day ago

    In addition to churning out unreliable, low-quality code, this sounds like it takes all of the fun out of programming.

    And,

    Using LLMs did make me a worse software developer, as I didn’t spend as much time reading docs and thinking as before. There’s a reason why most managers suck at writing code.

    LLM agents often add unnecessary complexity in their implementations of features, they create a lot of code duplication, and make you a worse developer.

    Every time I tried using an LLM for core features of applications I develop at work, the implementations were questionable and I spent at least as much time rewriting the code as I would have spent writing it from scratch.

    Regarding frontend, agents really struggled at making good, maintainable, DRY software. All models used magic numbers everywhere even when asked not to, keyboard interaction was poorly implemented, and some widgets took 5+ prompts to get right.

    It can be useful in limited cases, but it also needs to go.

  • semioticbreakdown [she/her]@hexbear.net · 30 points · 1 day ago

    If you can’t read the code and spot issues, they’re hard to use past the PoC stage

    Very worried about the proliferation of vibe coding, honestly. How are you gonna learn to spot issues if you just read AI code and don’t write any yourself? Computer science degrees are going to be useless for learning because tons of students are just LLM cheating their way through college.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 15 points · 1 day ago

      I expect that we’ll see tooling and languages adapt over time, and we’ll be developing code differently.

      For example, programming languages might start shifting in the direction of contracts. You specify the signature for the function and the agent figures out how to meet the spec. You could also specify parameters like computational complexity and memory usage. It would be akin to a genetic algorithm approach where the agent could converge on a solution over time.
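
      As a rough sketch of the contract idea (the names and checks here are hypothetical, purely for illustration), the spec is a signature plus properties, and any implementation that passes is acceptable, whether a human or an agent wrote it:

      ```python
      from typing import Callable

      # Hypothetical contract: a signature plus properties any implementation must satisfy.
      # A human or an agent supplies `impl`; the checks, not the author, decide acceptance.
      def check_sort_contract(impl: Callable[[list[int]], list[int]]) -> bool:
          cases = [[], [3, 1, 2], [5, 5, 1], list(range(1000, 0, -1))]
          for xs in cases:
              out = impl(list(xs))     # pass a copy so the candidate can't mutate the case
              if out != sorted(xs):    # output must be the sorted permutation of the input
                  return False
          return True

      # Any candidate that meets the contract is acceptable, however it works internally.
      print(check_sort_contract(sorted))           # True
      print(check_sort_contract(lambda xs: xs))    # False
      ```

      You could imagine extending the same idea with budgets for runtime or memory that a candidate has to stay within.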

      If that’s the direction things will be moving in, then current skills could be akin to being able to write assembly by hand: useful in some niche situations, but not necessary the vast majority of the time.

      The way code is structured will likely shift towards small composable components. As long as the code meets the spec, it doesn’t necessarily matter what the quality of the functions is internally. You can treat them as black boxes as long as they’re doing what’s expected. This is how we work with libraries right now: nobody audits all the code in a library they include, you just look at the signatures and call the API-level functions.

      Incidentally, I’m noticing that functional style seems to work really well with LLMs. Having an assembly line of pure functions naturally breaks up a problem into small building blocks that you can reason about in isolation. It’s kind of like putting Lego blocks together. The advantage over approaches like microservices here is that you don’t have to deal with the complexity of orchestration and communication between the services.
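
      For a concrete (made-up) example of that assembly-line style, each step below is a pure function and the whole pipeline is just their composition:

      ```python
      # Each step is a pure function: same input, same output, no hidden state.
      def parse_lines(text: str) -> list[str]:
          return [line.strip() for line in text.splitlines() if line.strip()]

      def to_numbers(lines: list[str]) -> list[float]:
          return [float(x) for x in lines]

      def running_total(nums: list[float]) -> list[float]:
          totals, acc = [], 0.0
          for n in nums:
              acc += n
              totals.append(acc)
          return totals

      # The "assembly line": small blocks that can each be reasoned about,
      # tested, or regenerated in isolation.
      def report(text: str) -> list[float]:
          return running_total(to_numbers(parse_lines(text)))

      print(report("1\n2\n3\n"))   # [1.0, 3.0, 6.0]
      ```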

      • semioticbreakdown [she/her]@hexbear.net · 21 points · 1 day ago

        But in all of these cases we trust that someone wrote the code - that someone actually put in the effort to make sure shit wasn’t fucked. Libraries are black boxes, but someone made that - someone made the compilation tools, someone wrote LLVM, someone wrote these things and understands on a more fundamental level how they work! At every level of abstraction, someone did due diligence to ensure that the processes that they wrote worked. And LLMs are trained on the fruits of our labor as programmers! No line of LLM code could exist if not for someone writing it. If programmers no longer produce anything really new, because they no longer understand how to, it would just be feeding back into itself, creating output based on LLM output. Utter simulacra that will just collapse because of their poisoned dataset. Just a level of abstraction completely divorced from the real labor that the information age was built on. It seems to me like the information layer around which technocratic societies have reconstructed themselves is dissolving on a fundamental level from these tools. So I just fail to see how using the lie machine is a good thing for programming, even with program verification and referential transparency. They can’t even do math right. I’ve seen the future where programmers are reduced to writing training data for LLMs and being program verifiers and it is fucking grim, in my opinion.

        • MizuTama [he/him, any]@hexbear.net · 14 points · 1 day ago

          At every level of abstraction, someone did due diligence to ensure that the processes that they wrote worked

          We know very different programmers lol

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 11 points · 1 day ago

          I mean there’s plenty of shit code written by humans, and many libraries are of very poor quality. This is especially true in languages like JS. The assertion that somebody did due diligence at every level is simply not true. Even when we do code reviews we often miss really basic things. And the way we get around that is by creating test harnesses and using tools like type systems and linters. We don’t just trust that the code works because a human wrote it and looked at it.

          Whether someone made it or not is not really that important if you have a formal contract. You specify what the inputs and outputs are, and as long as the code meets the contract you know what it’s doing. That’s basically been the whole promise with static typing. Whether a human writes this code or a model doesn’t really matter.
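
          For example, a property-based test (sketched here with the hypothesis library; my_sort is a made-up stand-in) exercises the contract without caring whether a person or a model wrote the implementation:

          ```python
          from hypothesis import given, strategies as st

          def my_sort(xs: list[int]) -> list[int]:   # could be human-written or agent-written
              return sorted(xs)

          # The test encodes the contract; the implementation stays a black box behind it.
          @given(st.lists(st.integers()))
          def test_my_sort_meets_contract(xs):
              out = my_sort(xs)
              assert out == sorted(xs)    # correct ordering, same elements
              assert len(out) == len(xs)  # nothing dropped or invented

          test_my_sort_meets_contract()   # hypothesis runs this over many generated inputs
          ```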

          Also worth noting that people made these exact types of arguments when higher-level programming languages started being used. People claimed that you had to write assembly by hand so you know what the code is doing, and that you can’t trust the compiler, and so on. Then the same arguments were made about GC, saying that you have to manage memory by hand and that GC is too unpredictable, and so on. We’ve already been here many times before.

          If programmers no longer produce anything really new, because they no longer understand how to, it would just be feeding back into itself, creating output based on LLM output.

          This isn’t what I suggested at all. What I said was that the programmer would focus on the specification and on understanding what the code is doing semantically. The LLM handles the implementation details, with the human focusing on what the code is doing while the LLM focuses on how it does it.

          • semioticbreakdown [she/her]@hexbear.net · 21 points · 1 day ago
            Neo-Luddite ravings

            Yes, but every one of those libraries involved someone sitting down at their computer and typing that garbage out on some level even if it sucked, and they were built on technologies that were written by actual people. What happens when you deploy software that breaks and hurts people, and no one can figure out why, because no one knew what it actually did in the first place, because no one really knows how to write code anymore?

            I disagree, I think it’s very important. Formal verification methods and type systems are powerful but not perfect by any means, and it is still up to the intuition, domain knowledge, and technical knowledge of programmers to find mistakes and detect issues. We do test-driven development and it helps some things too, but it fails in many other ways, and writing test suites is itself still dependent upon programmers. https://raywang.tech/2017/12/20/Formal-Verification:-The-Gap-between-Perfect-Code-and-Reality/

            And every programming abstraction came with tradeoffs, demonstrably so. Assembly to compiled languages, direct memory management to GC, bare HTML and ajax to transpiled javascript frameworks. Bloated electron apps eating up all your memory because no one cared about optimization when you can just load up a framework. There are always situations where the abstractions break down and issues crop up with their underlying implementations or regarding their performance. We’ve already had issues like this, which is why zero-cost abstractions etc are in vogue, and why formal verification methods even exist in the first place. I do not think the tradeoff of “never having to write function implementations again” and just writing contracts while a machine that can’t learn from its mistakes fills in the blanks is worth it. Not knowing what the code is doing because you never wrote assembly by hand is true on some level, so are the issues with GC, and so on.

            This isn’t what I suggested at all. What I said was that the programmer would focus on the specification and on understanding what the code is doing semantically. The LLM handles the implementation details, with the human focusing on what the code is doing while the LLM focuses on how it does it.

            but the how is relevant to the what, and intimately tied to the learning process of good programming as a praxis. Our domain knowledge has always been reproduced through the practice of programming itself. Not having to write any of that does mean you won’t understand how it works, and when things go wrong (which they will), the people who wrote it won’t be able to do anything about it because they can’t. From personal experience, the short time I spent LLM programming made me worse as a programmer because I wasn’t actively using my skills; I ended up relying on the LLM, and they degraded. Declarative programming, Prolog-style inference, these have always been good tools, but they have also always had flaws that required good programming knowledge to work around, and set up correctly. COBOL forms the backing of our banking system and no one wants to touch it because everyone who actually worked with it retired and/or died. Fortran is still in use everywhere. Assembly and C are relegated to arcane arts instead of the foundations that modern software is built on. Fields change and evolve - we were stuck in the paradigm of writing domain ontologies into class hierarchies for 40 years and now that’s getting baked into these models, too. I look at trends in software and find LLMs as their extension to be nothing but horrifying. An endless reproduction of the now, with all of its worst flaws. I do not like thinking about how much LLM cheating in college is a thing. I do not trust this stuff at all.

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 points · 1 day ago

              Again, you use the exact same tools and processes to evaluate software whether it’s written by a human or a model. The reality is that people make mistakes all the time; people write code that’s as bad as any LLM. We have developed practices to evaluate code and to catch problems. You’re acting as if human-written code doesn’t already have all these same problems that we deal with on a daily basis.

              Yes, every programming abstraction comes with trade-offs. Yet it’s pretty clear that most people prefer the trade-offs that allow them to write code that’s more declarative. That said, using approaches like genetic algorithms coupled with agents could actually allow automating a lot of optimization that we don’t bother doing today because it’s too tedious.
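
              To illustrate the genetic algorithm idea, here is a toy sketch where a number stands in for a candidate implementation and the fitness function is made up; in practice the candidates would be agent-generated code scored against the spec and performance budgets:

              ```python
              import random

              TARGET = 42.0   # stand-in for "meets the spec at the lowest cost"

              def fitness(candidate: float) -> float:
                  return -abs(candidate - TARGET)          # higher is better

              def mutate(candidate: float) -> float:
                  return candidate + random.gauss(0, 1.0)  # small random variation

              # Evolve a population: keep the fittest, mutate them, repeat.
              population = [random.uniform(0, 100) for _ in range(20)]
              for _ in range(200):
                  population.sort(key=fitness, reverse=True)
                  survivors = population[:5]
                  population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

              print(round(max(population, key=fitness), 2))  # converges near 42.0
              ```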

              but the how is relevant to the what, and intimately tied to the learning process of good programming as a praxis.

              It’s relevant because the key skill is being able to understand the problem and then understand how to represent it formally. This is the skill that’s needed whether you have agents fill in the blanks or you do it yourself. There’s a reason why you do a lot of math work and algorithms on paper in university (or at least I did back in my program). The focus was on understanding how algorithms work conceptually and writing pseudo code. The specific language used to implement the code was never the focus.

              What you’re talking about is a specific set of skills degrading because they’re becoming automated. This is no different from people losing skills like writing assembly by hand because the compiler can do it now.

              There’s always a moral panic every time new technology emerges that automates something that displaces a lot of skills people invested a lot of time into. And the sky never comes crashing down. We end up making some trade offs, we settle on practices that work, and the world moves on.

              • semioticbreakdown [she/her]@hexbear.net · 8 points · 1 day ago

                It’s relevant because the key skill is being able to understand the problem and then understand how to represent it formally. This is the skill that’s needed whether you have agents fill in the blanks or you do it yourself. There’s a reason why you do a lot of math work and algorithms on paper in university (or at least I did back in my program). The focus was on understanding how algorithms work conceptually and writing pseudo code. The specific language used to implement the code was never the focus.

                I think the crux of my argument is that this skill straight up cannot be developed and will degrade, because it is in a dialectical relationship with practice. It’s tied to their implementation and application by the learner. There was never a programming tool that allowed the user to simply bypass every stage of this learning process in both an academic and professional capacity. Its proliferation is already having significant and serious effects, so no I don’t think it’s a moral panic. And beyond the psychosis outside of the coding sphere, these tools make you less effective as a programmer, even if you’re a senior programmer. I think LLM technology itself is fundamentally flawed and I think believing formal methods and genetic algorithms will prevent its issues in real-world applications is a recipe for a disaster, or at least, even crappier software than most of what we get today. Maybe history will prove me wrong, but I see zero reason to trust LLMs in basically any capacity. There’s going to be another AI winter and revolution before AI programming technology is used for anything but sludgeware and vibe coded apps that leak all their userdata.

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 2 points · 21 hours ago

                  I don’t see how that follows. You appear to be conflating the skill of learning the specific syntax of a language with the skill of designing algorithms and writing contracts. These are two separate things. A developer will simply do work that’s more akin to what a mathematician or a physicist does.

                  LLMs don’t allow the programmer to bypass every stage of the learning process in both an academic and professional capacity. That’s simply a false statement. You cannot create quality software with the current generation of LLMs without understanding how to structure code, how algorithms work, and so on. These are absolutely necessary skills to use these tools effectively, and they will continue to be needed.

                  Its proliferation is already having significant and serious effects, so no I don’t think it’s a moral panic.

                  Again, this was said many times before when development became easier. What’s really happening is that the barrier is being lowered, and a lot more people are now able to produce code, many of whom simply would’ve been shut out of this field before.

                  I think LLM technology itself is fundamentally flawed and I think believing formal methods and genetic algorithms will prevent its issues in real-world applications is a recipe for a disaster, or at least, even crappier software than most of what we get today.

                  I see no basis for this assertion myself, but I guess we’ll just wait and see.

                  Maybe history will prove me wrong, but I see zero reason to trust LLMs in basically any capacity.

                  Nobody is suggesting trusting LLMs here. In fact, I’ve repeatedly pointed out that trust shouldn’t be part of the equation with any kind of code whether it’s written by a human or a machine. We have proven techniques for verifying that the code does what was intended, and that’s how we write professional software.

                  There’s going to be another AI winter and revolution before AI programming technology is used for anything but sludgeware and vibe coded apps that leak all their userdata.

                  Even if this technology stopped improving today, which there is no reason to expect, it is already a huge quality of life improvement for software development. There are plenty of legitimate real-world use cases for this tech already, and it’s not going away.

  • tricerotops [they/them]@hexbear.net · 6 points · edited · 1 day ago

    I don’t think this is a very good or useful article, because it was clearly written by someone who went into this “experiment” with a negative perspective on the whole thing and didn’t try very hard to make it work. Vibe coding as it stands today is, at best, a coin flip as to whether you can make something coherent, and the chances of success rapidly diminish if the project can’t fit into about 50% of the context window. There are things you can do, and probably these things will be incorporated into tools in the future, that will improve your chances of achieving a good outcome. But I disagree with the author’s first statement that using LLMs in a coding workflow is trivial, because it is not. And the fact that they had a bad time proves that it is not. My perspective as someone who has a couple of decades of coding under their belt is that this technology can actually work, but it’s a lot harder than anybody gives it credit for, and there’s a major risk that LLMs are too unprofitable to continue to exist as tools in a few years.

    I agree though with their last point - “don’t feel pressured to use these” - for sure. I think that is a healthy approach. Nobody knows how to use them properly yet so you won’t lose anything by sitting on the sidelines. And in fact, like I said, it’s completely possible that none of this will even be a thing in 5 years because it’s just too goddamn expensive.

    • Le_Wokisme [they/them, undecided]@hexbear.net · 9 points · 1 day ago

      I agree though with their last point - “don’t feel pressured to use these” - for sure. I think that is a healthy approach. Nobody knows how to use them properly yet so you won’t lose anything by sitting on the sidelines. And in fact, like I said, it’s completely possible that none of this will even be a thing in 5 years because it’s just too goddamn expensive.

      that’s fine for indies, but i have friends at [redacted] and [redacted] and they have management on their asses to be using the shitty “ai” tools even though it literally is worse and takes longer.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 7 points · 1 day ago

      I thought it was overall a fairly sober take. I agree that becoming effective at using these tools actually does take some time. People often try them with an existing bias that the tools won’t work well, and then, when they don’t see them working magic, they claim it as evidence that they don’t work. Yet, like with any tool, you have to take the time to develop intuition for the cases they work well in, when they get into trouble, how to prompt them, etc.