Everyone disliked that.

  • Keld [he/him, any]@hexbear.net · 12 points · 15 days ago

    As pointed out elsewhere, and as I think makes some intuitive sense, they are not just letting ChatGPT write an update to the OS directly; they are setting rules for how contributors who use AI-generated code are to be treated (the same as any other coder, with the same requirements).
    Now, if they were also using AI to vet the code, that would end with computers exploding.

    I think the most likely bad outcome here is a lot of people without the necessary skill tying up other people's time reviewing their vibe-coded nonsense just to shoot it down. But that was going to happen anyway.

    • PorkrollPosadist [he/him, they/them]@hexbear.net · 8 points · 15 days ago

      From my (admittedly limited) experience, sign-offs are often relatively shallow sanity checks. Nothing about this patch looks egregious? It solves a known problem? It makes it through the CI pipeline? Approved. When dealing with languages like C, where very subtle mistakes can introduce defects and vulnerabilities, I would not trust an LLM to do the brunt of the due diligence that would ordinarily come from the contributor (who typically spends a lot more time thinking about the problem than the person signing off on the patch). I'll admit this isn't a novel problem, but the amount of scrutiny applied to submissions will definitely need to increase if this becomes a standard process.
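
      A toy sketch of the kind of subtle C mistake that can sail through that sort of shallow review (hypothetical, not from any real submission): the bounds check below reads fine at a glance, but a negative length slips past it and is then converted to a huge size_t inside memcpy.

      ```c
      #include <string.h>

      /* Hypothetical example: the bounds check looks correct, but a negative
       * `len` passes it and is then implicitly converted to an enormous
       * size_t by memcpy, overflowing the 64-byte stack buffer. */
      void handle_packet(const char *payload, int len)
      {
          char buf[64];

          if (len > (int)sizeof(buf))   /* a negative len slips through here */
              return;

          memcpy(buf, payload, len);    /* int -> size_t conversion: huge copy */

          /* ... parse buf ... */
      }
      ```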