Hi all, I wanted to share the outcome of today’s Council meeting regarding this proposal. After several weeks of discussion and incorporating feedback from our community into successive revisions of the initial proposal, the Fedora Council has formally approved the latest version of the AI-Assisted Contributions policy. The agreed-upon version can be read in this ticket. You can read the full meeting transcript in the meeting log. So what happens next? Firstly, on behalf of the Fedora Council I...
I’d love to see the chaos on the LKML if some idiot tried to submit “AI assisted” code there.
TBH, if any distro was going to do this… well, I’m not surprised it’s Fedora. I don’t want to get into a sectarian fight about distro preferences here (this is a leftist website, way more crap to be sectarian about instead, lol), but… yeah.
I am rather fond of Fedora. I still use it on my laptop. At the end of the day it is a glorified testbed for Red Hat (read: IBM) though, so yea.
A bit surprised Ubuntu didn’t do this first, but then again Ubuntu is actually widely deployed as a server OS for some reason, while Fedora is primarily used by end-users and GTK developers. If Ubuntu made the first move, the Internet might actually stop working and we’d all have to touch grass.
As pointed out elsewhere, and as I think makes intuitive sense, they are not just letting ChatGPT write an update to the OS directly; they are making rules for how contributors who use AI code are to be treated (the same as any other coder, with the same requirements).
Now, if they were also using AI to vet the code, that would end with computers exploding.
I think the most likely bad result from this will be a lot of people without the necessary skill tying up other people’s time reviewing their vibe-coded nonsense just to shoot it down. But that was going to happen anyway.
From my (admittedly limited) experience, sign-offs are often relatively shallow sanity checks. Nothing about this patch looks egregious? It solves a known problem? It makes it through the CI pipeline? Approved. When dealing with languages like C, where very subtle mistakes can introduce defects and vulnerabilities, I would not trust an LLM to do the brunt of the due diligence that would ordinarily come from the contributor (who typically spends a lot more time thinking about the problem than the person signing off on the patch). I’ll admit this isn’t a novel problem, but the amount of scrutiny applied to submissions will definitely need to increase if this becomes a standard process.
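To make the point concrete, here’s a purely hypothetical C sketch (not from any real patch) of the kind of subtle mistake that can sail past a quick sign-off and a clean CI build:

```c
/* Hypothetical illustration only: a helper that "copies a label into a
 * fixed buffer". A skim of the diff and a successful CI compile both
 * look fine, yet it contains a subtle C defect. */
#include <stdio.h>
#include <string.h>

#define LABEL_MAX 16

static void set_label(char dst[LABEL_MAX], const char *src)
{
    /* Looks defensive: length-limited copy, no obvious overflow.
     * Subtle flaw: strncpy() does NOT null-terminate dst when src is
     * LABEL_MAX bytes or longer, so later reads of dst can run past
     * the end of the buffer. */
    strncpy(dst, src, LABEL_MAX);
}

int main(void)
{
    char label[LABEL_MAX];
    set_label(label, "a-name-longer-than-sixteen-bytes");
    /* Undefined behaviour: printf may read past 'label' because it was
     * never terminated. The compiler is happy, basic CI is happy; only
     * careful review, fuzzing, or a sanitizer tends to catch it. */
    printf("%s\n", label);
    return 0;
}
```

That’s exactly the category of bug where a contributor who actually thought the problem through would know the edge case, and a shallow reviewer (or an LLM) probably wouldn’t.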
I use Fedora on a couple of machines - it’s purely pragmatic. It’s a great distro - it’s always been solid for me, worked great on my hardware (some common and some exotic), and I’ve never had an update break my system (cough Arch cough), but it stays pretty cutting edge. I also like GNOME - so disregard my opinion lol
It’s so good.
Clean, elegant, consistent design, and simple enough to get out of the way.
Excellent performance.
I am the one person who likes the workflow.
I also prefer GTK to Qt for development purposes, so it’s nice to have my GTK stuff feel at home.
AI-assisted code has probably already been submitted to the kernel and nothing has happened. As long as you have a proper review process, AI-submitted code is no less dangerous than code written by a human who doesn’t fully know what they’re doing.
Weren’t there some contributions on LKML using GPL version 4 or 6? I can’t remember the exact details, but they were clearly AI commits.