AI policy in Fedora - WIP

I would like to draw the attention of the council/mods to: https://discussion.fedoraproject.org/t/is-this-user-ai-codewithmoss-verification-would-be-nice/147867 (this topic is available only to mods and TL3+ → due to the points made in the topic, I have only suspended the AI account for now)

This new type of issue is related to the policy but not yet covered by it, and it might affect conversations on Discourse over time. It is more about implementation than about the general policy.

It seems that AI accounts increasingly draw people into conversations, and they are no longer as easy to identify as they used to be: if AI-created topics end up producing constructive debates, how do we proceed? Deleting the AI account hides the topic from everyone (it looks deleted).

My point is that this case is likely to repeat, and we have no standardized means of handling it (the policy does not consider it) → over time this can become very impactful for users if topics with many posts just disappear (even if the AI created the topic, the subsequent posts can be constructive). Users are also not made aware of why everything they posted was suddenly deleted.

The topics AI accounts create increasingly look legitimate, and the past days indicate that once an AI topic is "successful", many more appear shortly after (this could be a coincidence, or driven by clicks and/or answers). That raises the question of whether reopening an AI topic for users might also provoke new AI accounts if they keep posting. However, that assumption is limited to correlations among this week's cases, involving about 8 or 9 AI accounts, with 2 of them creating 3 "successful" topics; so I am not sure it is generally applicable, and there are other explanations too.

Alternatives that we have in Discourse (@mattdm might know best whether there are further possibilities); a rough sketch of how these map onto the Discourse admin API follows the list.

  1. Delete the account → all of its topics get hidden and look deleted to non-mods. Other users' posts in the topics the AI created appear lost to them.
  2. Anonymize the account → posts and topics remain, but it looks to people as if the author was a real person. It does not make people aware that this was an AI.
  3. Suspend the account and ask the people in the affected topics what their preference is → much effort needed from mods, and it can take time.
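
For illustration only, here is a minimal sketch of how these three alternatives map onto the Discourse admin API, assuming an admin API key. The endpoint paths follow my reading of the public Discourse API docs and should be double-checked; the base URL, key, and user ID are placeholders, and this is not an official Fedora tool.

```python
# Sketch: the three moderation alternatives via the Discourse admin API.
# Assumes an admin API key; placeholders must be replaced before use.
import requests

BASE_URL = "https://discussion.fedoraproject.org"
HEADERS = {"Api-Key": "<admin-api-key>", "Api-Username": "<admin-username>"}

def delete_user(user_id: int, delete_posts: bool = True):
    """Option 1: delete the account; its topics become hidden to non-mods."""
    return requests.delete(
        f"{BASE_URL}/admin/users/{user_id}.json",
        headers=HEADERS,
        json={"delete_posts": delete_posts},
    )

def anonymize_user(user_id: int):
    """Option 2: anonymize; content remains but looks like a real person's."""
    return requests.put(
        f"{BASE_URL}/admin/users/{user_id}/anonymize.json",
        headers=HEADERS,
    )

def suspend_user(user_id: int, until: str, reason: str):
    """Option 3: suspend the account; topics and posts stay visible."""
    return requests.put(
        f"{BASE_URL}/admin/users/{user_id}/suspend.json",
        headers=HEADERS,
        json={"suspend_until": until, "reason": reason},
    )
```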

(I do not know whether it is possible to unhide specific topics to the public afterwards if the topic-creating account was deleted. If there is no indication that this provokes more AI activity, that might be the best option → supplement: for the issue about this, see 147867 #34 and 147867 #35.)

I think cases like this and their impact on users should be considered so that we have a coherent way of treating them. So either standardize a resolution in the policy itself, or standardize its implementation for cases that impact many users (or shift responsibility individually to the different Fedora services and their moderation/user code of conduct?).
