Hi all, I wanted to share the outcome of today’s Council meeting regarding this proposal. After several weeks of discussion and incorporating community feedback into successive revisions of the initial proposal, the Fedora Council has formally approved the latest version of the AI-Assisted Contributions policy. The agreed-upon version can be read in this ticket, and the full meeting transcript is available in the meeting log. So what happens next? Firstly, on behalf of the Fedora Council I...
I think having a policy that forces disclosure of LLM-generated code is important. It’s also important to establish that AI code should only ever be allowed to exist in userland/ring 3. If you can’t hold the author accountable, the code should not have any permissions or be packaged with the OS.
I can maybe see using an LLM for basic triaging of issues, but I also fear that adding such a system will lead to people placing more trust in it than they should.
But you can hold the author accountable; that’s what OP’s quoted text is about.
I know, that was me just directly voicing that opinion. I do still think that AI code should not be allowed in anything that even remotely needs security.
Even if they can still be held accountable, I don’t think it’s a good idea to let something that is known to hallucinate believable code write important code. It just makes everything a nightmare to debug.