• Parzivus [any]@hexbear.net · 4 points · 4 days ago

    LLMs are good enough at coding now to replace the average fresh comp sci graduate. You still need a couple senior people to check the output, but tech companies don’t need nearly as many grunts as they used to. It’s very possible that AI will plateau where it is now for a long time and the people with experience get to keep their jobs, but it’s gonna be brutal for newcomers from now on.

    • sodium_nitride [she/her, any]@hexbear.net · 20 points · 4 days ago

      LLMs are good enough at coding now to replace the average fresh comp sci graduate.

      This really isn’t true. Modern LLMs are still not much better than an advanced Google search. I’m not even doing industry CS work, but even I can spot misses in LLM output.

      On a basic level, the AI has three major disadvantages. Firstly, it is not fully up to date with the Internet (and training on data past 2022 risks poisoning the dataset), a problem that will get worse over time. Secondly, LLMs are expensive as hell to actually run. Thirdly, LLM context windows and higher-order reasoning are still limited.

      • gay_king_prince_charles [she/her, he/him]@hexbear.net · 3 points · 3 days ago (edited)

        Firstly, it is not fully up to date with the Internet (and training on data past 2022 risks poisoning the dataset).

        Where on earth did you get that from? Sonnet-4.5 has a pre-training cutoff of January 2025 and GPT-5 has a pre-training cutoff of October 2024. Any vaguely modern interface can get data past that into context via RAG and MCP. Those cutoffs aren’t where they are because of model collapse or anything; it’s just that fine-tuning is a hugely labor-intensive process that takes months. Model collapse is greatly mitigated by human feedback and fine-tuning, which makes it safe to train models on LLM-generated data. DeepSeek, for example, is trained directly on GPT’s and Claude’s output.
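
        For anyone unfamiliar: RAG just means retrieving relevant documents at query time and stuffing them into the prompt, so post-cutoff data reaches the model without any retraining. A minimal sketch (the keyword retriever and sample document are made-up stand-ins, not any particular product’s API):

        ```python
        # Minimal RAG sketch: retrieval puts post-cutoff data into the prompt,
        # so the model never needs retraining to "know" it. The retriever and
        # documents are hypothetical stand-ins; the final prompt would go to
        # whatever chat API you actually use.

        def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
            """Naive retriever: rank documents by word overlap with the query."""
            query_words = set(query.lower().split())
            return sorted(
                documents,
                key=lambda doc: len(query_words & set(doc.lower().split())),
                reverse=True,
            )[:k]

        def build_prompt(query: str, documents: list[str]) -> str:
            """Stuff the retrieved documents into context ahead of the question."""
            context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
            return f"Use this context:\n{context}\n\nQuestion: {query}"

        # A document newer than any training cutoff still ends up in context:
        docs = ["2025-06 release notes: the v3 API drops the legacy auth flow."]
        print(build_prompt("What changed in the v3 API?", docs))
        ```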

        • sodium_nitride [she/her, any]@hexbear.net · 3 points · 3 days ago

          I am aware that LLMs do train on datasets past 2022. But there is a risk of poisoning the dataset, and it will grow over time as LLM use becomes more widespread. It is not a risk that can be easily mitigated by human feedback and fine-tuning, since getting rid of workers is exactly why business owners are hyped about LLMs in the first place.

          And yes, I didn’t know about MCP, so I was wrong about that part, but you can still fit far less data into context than into training.
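
          For a sense of scale (both figures below are illustrative assumptions, not any vendor’s specs):

          ```python
          # Back-of-envelope: context window vs. training corpus.
          # Both numbers are illustrative assumptions, not any model's spec.
          context_window_tokens = 200_000               # a large modern context window
          training_corpus_tokens = 15_000_000_000_000   # ~15T tokens, frontier-scale corpus

          ratio = training_corpus_tokens / context_window_tokens
          print(f"Training corpus is ~{ratio:,.0f}x one context window.")
          # -> Training corpus is ~75,000,000x one context window.
          ```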

    • lib1 [comrade/them]@hexbear.net · 15 points · 4 days ago (edited)

      LLMs are good enough at coding now to replace the average fresh comp sci graduate

      This depends heavily on the graduate. I’ve met fresh graduates who don’t know what a for loop is and I’ve met fresh graduates who have a pretty extensive portfolio already. It really depends on how much they engaged with their program, hung out with other people from their major, and did projects outside of schoolwork.

      All that said, junior developers are capable of doing something that LLMs currently can’t: becoming senior developers. If the industry wants to do this short-sighted bullshit of giving juniors the shaft after all the work they put into “learn 2 code”, they’re going to be painfully surprised when their seniors start leaving and there’s no labor pool to replace them.

      • Parzivus [any]@hexbear.net · 8 points · 4 days ago

        If the industry wants to do this short-sighted bullshit of giving juniors the shaft after all the work they put into “learn 2 code”, they’re going to be painfully surprised when their seniors start leaving and there’s no labor pool to replace them.

        That’s exactly what I would expect to happen; they never plan more than a quarter or two in advance.

    • vala@lemmy.dbzer0.com · 12 points · 4 days ago

      This really isn’t true for any non-trivial project. If you want a basic CRUD app, maybe. But for anything more complicated or novel than that, the LLM isn’t doing much for you aside from boilerplate.

      • 30_to_50_Feral_PAWGs [she/her]@hexbear.net · 9 points · 4 days ago

        Even for a CRUD app, it’s questionable. Ironically, we’ve been able to scaffold those pretty easily since the early aughts without burning a small rain forest in the process.