Everyone disliked that.

  • DirtyPair [they/them]@hexbear.net · 43 points · 14 days ago

    very silly to be upset about this policy

    Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.

    Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.

    the code is going to be held to the same standards as always, so it’s not like they’re going to be blindly adding slop i-cant

    you can’t stop people from using LLMs (how would you know the difference?), so formalizing the process allows for better accountability

    • invalidusernamelol [he/him]@hexbear.net · 21 points · 14 days ago

      I think having a policy that forces disclosure of LLM code is important (a rough sketch of what an automated check for such a disclosure could look like follows this comment). It’s also important to solidify that AI code should only ever be allowed to exist in userland/ring 3. If you can’t hold the author accountable, the code should not have any permissions or be packaged with the OS.

      I can maybe see using an LLM for basic triaging of issues, but I also fear that adding that system will lead to people placing more trust in it than they should have.
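
      For illustration, a minimal sketch (in Python) of the kind of objective, automated check the quoted policy still allows: scan a commit message for a disclosure trailer and surface it for the human reviewer, who keeps the substantive judgment. The "Assisted-by:" trailer name is an assumption made for this example, not something quoted from the Fedora policy.

      # disclosure_check.py - hypothetical CI-side helper, not actual Fedora tooling.
      # It performs only an objective check (is a disclosure trailer present?);
      # acceptance of the contribution still rests with a human reviewer.
      import re
      import sys

      # The trailer name "Assisted-by:" is an assumption for illustration.
      TRAILER = re.compile(r"^Assisted-by:\s*(\S.*)$", re.MULTILINE)

      def assisted_by(commit_message: str) -> str | None:
          """Return the disclosed tool name, or None if no trailer is present."""
          match = TRAILER.search(commit_message)
          return match.group(1).strip() if match else None

      if __name__ == "__main__":
          # Read a commit message from stdin and report any disclosure found.
          message = sys.stdin.read()
          tool = assisted_by(message)
          if tool:
              print(f"LLM assistance disclosed: {tool}")
          else:
              print("No LLM-assistance trailer found.")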

        • invalidusernamelol [he/him]@hexbear.net · 3 points · 14 days ago

          I know, that was me just directly voicing that opinion. I do still think that AI code should not be allowed in anything that even remotely needs security.

          Even if they can still be held accountable, I don’t think it’s a good idea to allow something that is known to hallucinate believable code to write important code. Just makes everything a nightmare to debug.

    • kristina [she/her]@hexbear.net · 18 up / 2 down · 14 days ago

      The AI hate crowd is getting increasingly nonsensical. I see it as any other software: if it’s more open, that’s for the best.

    • The thing is, it’ll probably be fine for the end product, beyond the wave of codeslop that will be brought to the project once the shitty vibe coders hear the news. That’s just more work for the volunteers, but you’re right that it isn’t really that different of a policy in practice.