Fedora-Council/tickets ticket #530: CSAM on Matrix: Request for Council Legal & Resource Support

@jflory7 filed Fedora-Council/tickets ticket #530. Discuss here and record votes and decisions in the ticket.

Ticket text:

1 Like

To help make these vital action points actionable, here are my thoughts on how each can be approached:

I can take the lead on these first two action points. My role on the Fedora Code of Conduct Committee positions me well to act as a liaison between our community and the necessary Red Hat Legal and Human Resources colleagues. This will be crucial in helping us adopt more resilient defensive practices and support structures.

While I could potentially lead this area, I believe it would be more effective for another community member to do so. As a Matrix community admin, I am already an active participant in the Matrix Working Group and prefer to continue in that capacity. This action point presents a significant opportunity for other experienced Fedora community members to step up and drive this high-impact work forward. We urgently need partners to help share this load.

This point is primarily technical, focusing on infrastructure implementation for enhanced moderation. It will require close liaison with Fedora Infrastructure members (e.g., @kevin, @gwmngilfen, @smilner) regarding moderation tools and collaboration with community members who maintain existing services like the Draupnir moderation bot.

This work also directly connects to the legal and HR support outlined in points 1 and 2. Therefore, whoever takes the lead on this technical action point will need to work closely with me in my liaison role (as established for points 1 & 2) when engaging with internal Red Hat Legal and HR resources.

For this crucial collaborative effort, I believe Greg (@gwmngilfen) is uniquely positioned to lead. His connections to the Matrix Foundation, awareness of how other communities are tackling these issues, and ability to engage effectively with our community stakeholders make him an ideal candidate to foster this cross-community partnership.

Our community has consistently demonstrated creativity and resilience in addressing challenges. It is vital that we collaboratively support and amplify these existing efforts, rather than trying to reinvent solutions independently.

6 Likes

This is awful.

Can we block the posting of images from any non-Fedora account? Can we move to an allow-list model for federated servers?

1 Like

Unfortunately, we’d probably have to block images in user profiles and room descriptions as well for an image block to be effective.

There’s probably room here to discuss what a cross-server CoC agreement looks like with our neighbors in matrix-space who are also dealing with this. A shared allowlist across a segment of the space whose members are willing to affirm agreed-upon neighborly norms might be a reasonable way to address this for the widest ecosystem benefit.

3 Likes

Unfortunately, we’d probably have to block images in user profiles and room descriptions as well for an image block to be effective.

It seems to me that an image block is still worth considering. Images in user profiles and room descriptions are useful, but not mission-critical. The same is true of screenshots posted in channels. Losing all of those still seems better than the status quo.

2 Likes

I would be on board with removing this feature, even just temporarily, until we have something more hardened in place. Is that something we could do more or less immediately?

I am not a Matrix protocol expert, but I don’t think images can be blocked universally. And even if they could, this would likely apply only to users on the fedora.im homeserver; we still have lots of community members who use other homeservers. I would defer to @gwmngilfen, @rorysys, or @kevin if they know more about this specific aspect.

These also sound super viable. However, again, I am not an expert on the Matrix protocol, so I am not sure how this would work in a federated context. Can we actually allow a list of homeservers to participate? This gets complicated because we have two homeservers. Otherwise, we could limit participation only to our fedora.im homeserver (which would still lock a lot of people out).

Another approach

Another approach I have been thinking about mirrors how we used to solve this problem in the IRC days. IRC had the concepts of “operators” and “voiced users”. Users who were “voiced” had permission to chat in the channel; users without voice could observe but not participate. To be voiced, you had to register an account with NickServ. This extra step made it harder to coordinate spam attacks on IRC. (This also emphasizes that this is not really the first time we have faced this problem, although this time it is a lot more graphic.)

So, I know Matrix has basic concepts of permissions. I wonder: if we required power level 1 to talk in a Matrix room (instead of 0, the default), could this help? To get “voiced” on Matrix, we would need to come up with some mechanism to handle that. The most convenient idea I can think of is to use zodbot, which is hooked up to our account system. Perhaps we could extend zodbot to check whether a Matrix ID is registered in accounts.fp.o and, if yes, give that user voice. If no, no voice for you.
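A minimal sketch of what the power-levels change above could look like, assuming a bot (zodbot or otherwise) already has permission to send an updated `m.room.power_levels` state event. The function names and MXIDs are illustrative, and the actual zodbot/accounts.fp.o integration is left out:

```python
import copy


def require_voice(power_levels: dict) -> dict:
    """Return a copy of an m.room.power_levels content dict where
    sending ordinary messages requires power level 1 instead of 0."""
    pl = copy.deepcopy(power_levels)
    pl["events_default"] = 1  # messages now need PL >= 1 ("voice")
    return pl


def grant_voice(power_levels: dict, user_id: str) -> dict:
    """Return a copy of the power levels with the given user raised
    to at least power level 1, leaving existing entries untouched."""
    pl = copy.deepcopy(power_levels)
    users = pl.setdefault("users", {})
    users[user_id] = max(users.get(user_id, 0), 1)
    return pl


# Example: a room with default settings and one moderator.
room_pl = {"events_default": 0, "users": {"@mod:fedora.im": 50}}
locked = require_voice(room_pl)
voiced = grant_voice(locked, "@someone:fedora.im")
```

A bot would then send the resulting dict back as the room’s `m.room.power_levels` state; unvoiced users could still read but not post.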

Does this idea make sense? I’m wondering if I have explained it in a way that will make sense to folks who have not used IRC extensively.

The current problem isn’t interior to a room; the problem is invitations to join rooms hosted on other homeservers. It’s outside the scope of any room moderation. The power-level mechanic by itself isn’t going to help with the homeserver-wide invite problem, which appears to be solvable only with homeserver-wide configuration.

Even if we drop image-viewing support, we will still be left with the problem of abusive text in the room names that show up in invites.

Server-side configuration for the Synapse server codebase appears to allow for a federation ban/allow mechanism.
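For example, Synapse’s homeserver.yaml supports a federation allowlist via the `federation_domain_whitelist` option; the server names below are illustrative only, and the real list would have to be agreed on:

```yaml
# homeserver.yaml (Synapse): only federate with explicitly listed servers.
# Anything not on the list cannot send events or invites to this server.
federation_domain_whitelist:
  - fedora.im
  - fedoraproject.org
  - matrix.org
```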

It also appears that the add-on moderation tool called mjolnir has a concept of cross-org shareable ban lists and has a server-side module for Synapse that can be configured to disable invite spam like we are experiencing. That means that, using mjolnir, we should be able to create our own ban list, with the option of a wider ecosystem ban list in coordination with other homeserver operators.
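Based on mjolnir’s synapse_module.md, the server-side module is configured roughly like this (the ban-list room ID is a placeholder, and the option names should be double-checked against the current docs):

```yaml
# homeserver.yaml (Synapse): load mjolnir's antispam module.
modules:
  - module: mjolnir.Module
    config:
      # Block invites from servers/users on the ban lists.
      block_invites: true
      # Optionally also flag their messages as spam.
      block_messages: false
      # Optionally filter them from the user directory.
      block_usernames: false
      # Room IDs (not aliases) of the ban lists to honour;
      # the server must already be joined to these rooms.
      ban_lists:
        - "!roomid:example.org"
```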

General mjolnir capability ref:
Matrix.org - Community Moderation

mjolnir invite spam configuration ref:
mjolnir/docs/synapse_module.md at main · matrix-org/mjolnir · GitHub

1 Like

So, my understanding/take:

There is no ability to block media/image posting. I wish there were, but I don’t see anything in Synapse.
Even if that existed and we blocked messages of type image, people could post URLs that many clients preview by default. Of course, that’s a higher bar, and it would perhaps provide some information about the attacker (since they would need a web presence to provide that link). Also, as mentioned, invites, avatars/usernames, and any text could be disturbing/bad.

I’m not sure how the allowlist for servers, or trying to vet people, will actually help here. The problem is a determined attacker: they can act nice and ask to have their server added to the allowlist, or ask to be granted the send-messages power level.
I suppose we could require that allowlisting requests come only from an actual server admin, but I am not sure how we would verify that. (They are using servers that have open registrations.) Do we block matrix.org? That would probably block most users on non-Fedora servers who might want to ask for help.

As for mjolnir, we already have a Draupnir instance run by a community member.
It’s subscribed to the bans that other communities put in place (openSUSE, Mozilla, etc.), so it bans/redacts things pretty quickly, but there’s still sometimes federation lag before the redact/ban takes effect, so people still see the images if they happen to be looking at the wrong time. Sometimes we are hit first, so we add the user to the list ourselves.

Upstream folks are working on a policy server idea, but I am not sure how far along it is. This is basically a service the server sends all messages to before posting; if it says ‘no’, no one even sees the message. I am not sure what ability it has to detect ‘bad content’, though. [WIP] MSC4284: Policy Servers by turt2live · Pull Request #4284 · matrix-org/matrix-spec-proposals · GitHub

Draupnir folks are working on: Gnuxie & /Draupnir/: March 2025,
which focuses on an onboarding flow. (Again, I am not sure this solves the determined-bad-actor problem.)

Last I checked (a few weeks ago), Element offered to enable it for particular rooms but it’s not far enough along that it can be enabled for all Fedora rooms due to performance or scalability issues. Basically it’s in prototype stage. So I don’t think it helps us now.

The future benefit is (a) the content is blocked at the server level and so doesn’t get stored in your local cache even after it’s redacted, and (b) that means it also eliminates the “federation lag” problem when removing content. So it’s an improvement, but it is not going to save us on its own.

To mitigate CSAM specifically I’d recommend looking into https://www.microsoft.com/en-us/photodna which is pretty widely used in the industry.

Do we know whether clients and federated servers will expire/delete moderated content immediately? I’m concerned that users might end up accidentally caching this content on their systems without realizing it, which would obviously be problematic from multiple angles.

I don’t know about clients, but we have known since 2014 that federated servers never remove any media uploads automatically. Additionally, in 2022 I pointed out that this can be read as: since malicious homeservers can store CSAM forever, there is no need for benevolent homeservers to ever delete it.

The issue has since been silently migrated alongside other Element and Synapse issues, and the six thumbs-ups at the time of writing probably speak for themselves about how much attention it gets.

Finally, I wish to apologise for my disappointed tone. I have been part of Matrix since 2016 and have documented many of these issues, especially here, and raised concerns with Fedora Diversity (which Pride Month reminds me of), but they eventually remained unacknowledged.

1 Like

No need to apologize. I used to pay for EMS hosting (first for a 5-user tier, then Element One), and after several years I finally gave up. Now, unless I have to use it, I prefer other protocols.

1 Like

If we have the ability to file high-priority feature requests, one thing that would really help is support for limiting who can send invites. E.g., on my fedora.im account, especially if I’m traveling and my devices are subject to inspection, I would rather never see, or store data about, invitations from non-fedora.im accounts.
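As a sketch of what the core logic of such a feature might look like (this is plain Python, not a drop-in Synapse module, and the trusted-server set is an illustrative assumption):

```python
# Illustrative set of homeservers whose invites we would accept.
ALLOWED_INVITE_SERVERS = {"fedora.im", "fedoraproject.org"}


def invite_allowed(inviter: str, invitee: str) -> bool:
    """Accept an invite only when the inviter's homeserver is trusted.

    Matrix user IDs look like @localpart:server.name; everything
    after the first ':' is the homeserver name.
    """
    if ":" not in inviter:
        return False  # malformed MXID, reject
    server = inviter.split(":", 1)[1]
    return server in ALLOWED_INVITE_SERVERS


print(invite_allowed("@friend:fedora.im", "@me:fedora.im"))      # True
print(invite_allowed("@spammer:evil.example", "@me:fedora.im"))  # False
```

In a real deployment this check would run server-side (for example, via Synapse’s spam-checker module callbacks) so the rejected invite never reaches, or is stored by, the client at all.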

We just need to stop showing avatars in the room invites. (And yes, I know this is easier said than done because it requires changes in every client.)

If you block invites from anybody without a fedora.im account, then you won’t be able to talk to a lot of Fedora contributors, e.g. me.

Some of these invites include malicious usernames and room names, so I’m afraid blocking avatars would only help partway.

2 Likes

I’m just noting here for posterity that the CommOps and Matrix WG teams took action yesterday (3rd June) to redact all media in Fedora-moderation-controlled channels on the homeserver, using the moderation bot. Copy of the text sent out to announcements:

:police_car_light: Important Matrix Policy Change on Fedora’s Homeserver :police_car_light:

Effective immediately, sending of all media has been disabled across all rooms protected by the Fedora Moderation bot. Any media (images or videos) sent in these channels will be automatically redacted regardless of their source.

This decision was made in response to the escalating series of targeted spam attacks involving CSAM (Child Sexual Abuse Material) and other illegal/traumatic content shared by malicious actors through federated Matrix servers. Although the content is originating from outside Fedora’s homeservers, it has resulted in harm to community members and placed a significant burden on our volunteer moderators.

You can learn more about the scope of the issue and the Fedora Council’s involvement here:

< snip links already in thread >

We understand this situation is distressing for many. Resources to help community members and moderators cope with the emotional impact are actively being compiled in coordination with the Fedora Council and other experts to ensure the right support structures are in place.

This media restriction is a temporary but necessary step as we work toward a safer, long-term moderation framework. We appreciate your patience, care, and commitment to Fedora.

Questions or feedback? Join us in #wg-matrix:fedoraproject.org

Here’s hoping it’s enough to at least put a dampener on the amount of attacks we are getting.

1 Like

It doesn’t actually work: the moderation bot generally has power level 49 in Fedora rooms, but power level 50 is required to remove images. At least, this is true in the first two rooms I checked.

1 Like

Which rooms are those? As far as I know, the bot is working in all the rooms it’s invited to.