Hi all, I wanted to share the outcome of today’s Council meeting regarding this proposal. After several weeks of discussion, and after incorporating community feedback into successive revisions of the initial proposal, the Fedora Council has formally approved the latest version of the AI-Assisted Contributions policy. The agreed-upon version can be read in this ticket, and the full meeting transcript is available in the meeting log. So what happens next? Firstly, on behalf of the Fedora Council I...
Accountability: You MUST take responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of the contribution.
Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.
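To make the “objective validation vs. final arbiter” distinction a bit more concrete, here’s a minimal hypothetical sketch. It is purely illustrative and not real Fedora tooling: the names are made up, and it assumes pytest is the project’s test runner. Automation can block a change on objective pass/fail checks, but the merge only happens once a named human approves it and takes accountability.

```python
# Hypothetical sketch only -- not real Fedora tooling; names are illustrative.
# It mirrors the policy split: automated objective checks may block a change,
# but only an accountable human may make the final accept decision.
import subprocess
from dataclasses import dataclass
from typing import Optional


@dataclass
class HumanDecision:
    approved: bool
    approver: Optional[str]  # the accountable contributor, or None


def objective_checks() -> bool:
    """Objective, automated validation (explicitly allowed by the policy)."""
    # Assumes the project uses pytest; any pass/fail test runner works here.
    return subprocess.run(["pytest", "-q"]).returncode == 0


def may_merge(decision: HumanDecision) -> bool:
    """AI may assist the review, but a named human authorizes the merge."""
    if not objective_checks():
        return False              # objective failure blocks automatically
    if decision.approver is None:
        return False              # no accountable human, no merge
    return decision.approved


if __name__ == "__main__":
    print(may_merge(HumanDecision(approved=True, approver="some_contributor")))
```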
very silly to be upset about this policy
the code is going to be held to the same standards as always, so it’s not like they’re going to be blindly adding slop
You can’t stop people from using LLMs; how would you know the difference? So formalizing the process allows for better accountability.
steamed hams
I think having a policy that forces disclosure of LLM code is important. It’s also important to solidify that AI code should only ever be allowed to exist in userland/ring 3. If you can’t hold the author accountable, the code should not have any permissions or be packaged with the OS.
I can maybe see using an LLM for basic triaging of issues, but I also fear that adding that system will lead to people placing more trust in it than they should have.
But you can hold the author accountable; that’s what OP’s quoted text is about.
I know, that was me just directly voicing that opinion. I do still think that AI code should not be allowed in anything that even remotely needs security.
Even if they can still be held accountable, I don’t think it’s a good idea to allow something that is known to hallucinate believable code to write important code. Just makes everything a nightmare to debug.
The AI hate crowd is getting increasingly nonsensical. I see it as any other software; if it’s more open, that’s for the best.
Sure, this isn’t true, but go off.
🙄 I see y’all’s posts. I’m old, not insensible.
Old and wrong is possible.
Me, getting links to AI-generated wikis where nearly all the information is wrong, but of course I’m overreacting because AI is only used by real professionals who are already experts in their domain. I just need to wait 5 years and it’ll probably be only half wrong.
It goes through the same validation process; criticize it case by case.
I’m still working on a colleague’s AI-generated module that used 2,000 lines of code to do something that could’ve been done in 500. Much productivity is being had by all.
If you wrote 4x the code it must be 4x as good!
Sure, it’s just software. It’s useless software that makes everything it’s involved in worse, and it’s being shoved into everything to prop up the massive bubble that all the tech companies have shoveled their money into, desperate for any actual use case to justify their terrible ‘investments.’
seems sensible.
The thing is, it’ll probably be fine for the end product, beyond the wave of codeslop that will be brought to the project once the shitty vibe coders hear the news. That’s just more work for the volunteers, but you’re right that it isn’t really that different of a policy in practice.