Artificial Intelligence is a transformative technology, and as a leader in open source, the Fedora Project needs a thoughtful position to guide innovation and protect our community’s values.
For the past year, we have been on a journey with the community to define what that position should be. This process began in the summer of 2024, when we asked for the community’s thoughts in an AI survey. The results, which we discussed openly at Flock and in Council meetings, gave us a clear message: we see the potential for AI to help us build a better platform, but we also have valid concerns about privacy, ethics, and quality.
The draft we are proposing below is our best effort to synthesize the Fedora community’s input into a set of clear, actionable guidelines. It is designed to empower our contributors to explore the positive uses of AI we identified, while creating clear guardrails to protect the project and its values from the risks we highlighted.
Next Steps
In accordance with the official Policy Change policy, we are now opening a formal two-week period for community review and feedback. We encourage you to read the full draft and share your thoughts.
The policy proposal is also available to read on [the community blog](https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/).
After the two-week feedback period, the Fedora Council will hold a formal vote on ratifying the policy via ticket voting. Thank you for your thoughtful engagement throughout this process. We look forward to hearing your feedback as we take this important next step together.
Fedora Project Policy on AI-Assisted Contributions
Our Philosophy: AI as a Tool to Advance Free Software
The Fedora Project is a community built on four foundations: Freedom, Friends, Features, and First. We envision a world where everyone benefits from free and open source software built by inclusive, open-minded communities. In the spirit of our “First” and “Features” foundations, we see Artificial Intelligence as a tool to empower our contributors to make a more positive impact on the world.
We recognize the ongoing and important global discussions about how AI models are trained. Our policy focuses on the responsible use of these tools. The responsibility for respecting the work of others and adhering to open source licenses always rests with the contributor. AI assistants, like any other tool, must be used in a way that upholds these principles.
This policy provides a framework to help our contributors innovate confidently while upholding the project’s standards for quality, security, and open collaboration. It is a living document, reflecting our commitment to learning and adapting as this technology evolves.
1. AI-Assisted Project Contributions
We encourage the use of AI assistants as an evolution of the contributor toolkit. However, human oversight remains critical. The contributor is always the author and is fully accountable for their contributions.
- You are responsible for your contributions. AI-generated content must be treated as a suggestion, not as final code or text. It is your responsibility to review, test, and understand everything you submit. Submitting unverified or low-quality machine-generated content (sometimes called “AI slop”) creates an unfair review burden on the community and is not an acceptable contribution.
- Be transparent about your use of AI. When a contribution has been significantly assisted by an AI tool, we encourage you to note this in your pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like `Assisted-by: <name of code assistant>`. This transparency helps the community develop best practices and understand the role of these new tools.
- Fedora values your voice. Clear, concise, and authentic communication is our goal. Using AI tools to translate your thoughts or to overcome language barriers is a welcome and encouraged practice, but keep in mind that we value your unique voice and perspective.
- Limit AI tools for reviewing. As with creating code, documentation, and other contributions, reviewers may use AI tools to assist in providing feedback, but not to wholly automate the review process. In particular, AI should not make the final determination on whether a contribution is accepted.
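As a concrete illustration of the transparency guideline above, the snippet below appends an `Assisted-by` trailer to a draft commit message with `git interpret-trailers`. The assistant name is a placeholder, not a real product:

```shell
# Append an "Assisted-by" trailer to a draft commit message.
# "ExampleAssistant" is a hypothetical placeholder name.
printf 'docs: clarify install steps\n\nReviewed and tested manually.\n' |
  git interpret-trailers --trailer 'Assisted-by: ExampleAssistant'
```

With Git 2.32 or later, the same trailer can be added directly at commit time with `git commit --trailer 'Assisted-by: ExampleAssistant'`.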
2. AI in Fedora Project Management
To avoid introducing uncontrollable bias, AI/ML tools must not be used to score or evaluate submissions for matters such as code of conduct cases, funding requests, conference talks, or leadership positions. This does not prohibit the use of automated tooling for tasks like spam filtering and note-taking.
3. AI Tools for Fedora Users
Our commitment is to our users’ privacy and security. AI-powered features can offer significant benefits, but they must be implemented in a way that respects user consent and control.
- AI features MUST be opt-in. Any user-facing AI assistant, especially one that sends data to a remote service, must not be enabled by default and must require explicit, informed consent from the user.
- We SHOULD explore AI for accessibility. We actively encourage exploring the use of AI/ML tools for accessibility improvements, such as for translation, transcription, and text-to-speech.
4. Fedora as a Platform for AI Development
One of our key goals is to make Fedora the destination for Linux platform innovation, including for AI.
- Package AI tools and frameworks. We encourage the packaging of tools and frameworks needed for AI research and development in Fedora, provided they comply with all existing Fedora Packaging and Licensing guidelines.
5. Use of Fedora Project Data
The data generated by the Fedora Project is a valuable community asset. Its use in training AI models must respect our infrastructure and our open principles.
- Aggressive scraping is prohibited. Scraping data in a way that causes a significant load on Fedora Infrastructure is not allowed. Please contact the Fedora Infrastructure team to arrange for efficient data access.
- Honor our licenses. When using Fedora Project data to train a model, we expect you to honor the principles of attribution and sharing inherent in the licenses under which that data is published.