Let me summarize my current view on the topic as I got it from this thread.
There are several points we may need to address as the Fedora community and as the Fedora Council:
- concerns of contributors who already use AI tools in their work, are considering contributing to Fedora, and should feel welcome in the project;
- concerns of contributors who maintain certain subprojects or processes and fear that unrestricted AI usage may flood the project and put them on the spot to deal with the consequences;
- concerns about building automated systems at scale, and about human oversight of, responsibility for, and ownership of such systems;
- concerns about using Fedora data, infrastructure, and so on for the development of AI-related tools and services.
It has also become clear that we cannot address all of these topics in one go, and maybe we should not try.
Still, I feel the latest proposal can be extended a bit to broaden its coverage.
Here is my version. It follows RFC style; comments are in italics.
Fedora AI-Assisted Contributions Policy v0.1.4b-2
- You MAY use AI assistance for contributing to Fedora, as long as you follow the principles described below.

  *Comment: This is a cosmetic addition, but I would like it to be the opening clause of the policy.*
- Accountability: You MUST take responsibility for your contribution. Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for their contributions.

  *Comment: Set in active voice, but mostly copy-paste.*
- Transparency: You MUST disclose the use of AI tools when a significant part of the contribution is taken from a tool without changes. You SHOULD disclose other uses of AI tools where it might be useful. Routine use of assistive tools for correcting grammar and spelling, or for clarifying language, does not require disclosure.

  Information about the use of AI tools will help us evaluate their impact, build new best practices, and adjust existing processes.

  Disclosures are made where authorship is normally indicated. For contributions tracked in git, the recommended method is an `Assisted-by:` commit message trailer. For other contributions, disclosure may include document preambles, design file metadata, or translation notes.

  Examples:

  ```
  Assisted-by: generic LLM chatbot
  Assisted-by: ChatGPT v5
  ```

  *Comment: Added one of my previous suggestions. Copy-paste from an LLM must always be disclosed; other disclosures are encouraged. Added examples.*
- Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.

  *Comment: Mostly unchanged.*
- Large-scale initiatives: The policy does not cover large-scale initiatives which may significantly change the way the project operates or lead to exponential growth in contributions in some parts of the project. Such initiatives need to be discussed separately with the Fedora Council.

  *Comment: This clause moves questions of “what if we have a fully automated system to do X” outside of the policy, as they need to be evaluated against a different set of criteria.*
- Concerns about possible policy violations should be reported via private tickets to the Fedora Council (link).

  *Comment: Added a procedural must-have.*
The key words “MAY”, “MUST”, “MUST NOT”, and “SHOULD” in this document are to be interpreted as described in RFC 2119.