Create cross-platform_pt 2 #538

Merged 1 commit on Apr 17, 2024
52 changes: 52 additions & 0 deletions uli-website/src/pages/blog/cross-platform_pt 2
@@ -0,0 +1,52 @@
---
name: "Another Side of Whack-A-Mole"
excerpt: "We make a case for what federated and centralised platforms must implement in order to tackle cross-platform abuse more effectively"
author: "Kaustubha"
project: " "
date: 17-04-2024
---

import ContentPageShell from "../../components/molecules/ContentPageShell.jsx"

<ContentPageShell>

We continue by taking a look at how cross-platform abuse operates on federated and centralised platforms, and the kinds of solutions they must test and implement to tackle it.
The most popular decentralised/federated social media platform is Mastodon; Threads and Bluesky have also followed suit in adopting federation protocols.
These platforms, built on the ActivityPub protocol and the AT Protocol, do not have a centralised authority that oversees activity on the platform.
Instead, users join different 'instances', and each instance has its own set of rules, block lists and moderators. The instances interact with one another through 'federation'.

## Federated Moderation

On the federated platforms mentioned above, it has been noted that hateful material can rapidly disseminate from one instance to [another](https://arxiv.org/pdf/2302.05915.pdf).
Federation policies help administrators of instances create rules that ban or modify content from other [instances](https://arxiv.org/pdf/2302.05915.pdf).
Administrators determine the rules for their particular instance, as opposed to traditional social media platforms, which are more centralised, hire human moderators and have access to a vast range of data to refine their automated [classifiers](https://arxiv.org/pdf/2204.12709.pdf).
Instance administrators do not have access to as many resources, and oversee only their own instance. They appear to be overwhelmed when it comes to moderation, with a gap observed in the time taken to respond to content that violates the policy of an [instance](https://arxiv.org/pdf/2302.05915.pdf).
On a decentralised platform, there are two levels of interaction, within instances and across instances, and moderation therefore needs to operate on both levels.
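
To make these two levels concrete, here is a minimal sketch of the kind of per-instance configuration an administrator maintains. The shape and names are hypothetical, loosely modelled on the domain-level moderation settings that platforms like Mastodon expose to administrators, and not taken from any one implementation.

```typescript
// Hypothetical sketch of per-instance moderation configuration.
// Every administrator maintains their own copy; there is no central authority.
type RemoteAction = "reject" | "silence" | "reject_media";

interface FederationPolicy {
  remoteDomain: string; // the instance the rule applies to
  action: RemoteAction; // how content federated in from that instance is handled
  reason?: string;      // administrator's note
}

interface InstanceConfig {
  domain: string;
  localRules: string[];                   // moderation within the instance
  federationPolicies: FederationPolicy[]; // moderation across instances
}

const example: InstanceConfig = {
  domain: "journalists.example",
  localRules: ["No targeted harassment", "No doxxing"],
  federationPolicies: [
    { remoteDomain: "abusive.example", action: "reject", reason: "coordinated harassment" },
    { remoteDomain: "spammy.example", action: "silence" },
  ],
};

// Applied when a post federates in: the across-instances level of moderation.
function actionFor(postDomain: string, config: InstanceConfig): RemoteAction | "accept" {
  const policy = config.federationPolicies.find((p) => p.remoteDomain === postDomain);
  return policy ? policy.action : "accept";
}
```

The point of the sketch is simply that both levels, local rules and federation policies, are maintained separately by each administrator, with no shared view across instances.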

From the user's end, if a user wishes to flag content on an instance, they must 'Report' the content and add a note as to why they are doing [so](https://docs.joinmastodon.org/user/moderating/).
Pleroma mentions the option to report a user's post to the administrator if it is 'naughty', but there is no additional information in its documentation that walks one through the process ([Posting, reading, basic functions - Pleroma Documentation](https://docs-develop.pleroma.social/frontend/user_guide/posting_reading_basic_functions/)).
Centralised social media platforms, on the other hand, have more extensive documentation on the process for redressal.
On both federated and centralised platforms, the user goes through different reporting mechanisms for recourse.
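
For a sense of what this reporting step looks like programmatically, the sketch below files a report through Mastodon's REST API and asks for it to be forwarded to the originating instance. The endpoint and parameters reflect Mastodon's public API as we understand it; the instance URL, access token and IDs are placeholders.

```typescript
// Rough sketch of filing a report via Mastodon's REST API.
// INSTANCE, TOKEN and the IDs passed in are placeholders, not real values.
const INSTANCE = "https://mastodon.example";
const TOKEN = "user-access-token";

async function reportPost(accountId: string, statusId: string, note: string) {
  const response = await fetch(`${INSTANCE}/api/v1/reports`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      account_id: accountId,  // the account being reported
      status_ids: [statusId], // the offending post(s)
      comment: note,          // the note explaining why
      forward: true,          // ask for the report to be forwarded to the remote instance
    }),
  });
  return response.json();
}
```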

## Responses

As we discussed earlier, centralised responses to tackling cross-platform abuse focus on prima facie illegal content such as CSAM and terrorism.
Amongst research on the decentralised web, there have been suggestions for tools that could be used to tackle the issues that come with moderation on federated platforms:
(i) WatchGen, a tool which proposes instances that moderators should focus on, thereby reducing the burden of moderation on [administrators](https://arxiv.org/pdf/2302.05915.pdf);
(ii) ModPair, a system which allows for collaborative moderation, where instances can share partially trained, automated toxicity detection models with one another
and thereby set up a decentralised, automated content moderation system on these federated [platforms](https://arxiv.org/pdf/2204.12709.pdf).
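
Purely as an illustration of the idea behind the second approach, instances exchanging partially trained classifiers rather than raw user data, a sketch might look like the following. The types and the naive parameter averaging here are invented for this post and are not taken from the ModPair paper.

```typescript
// Illustrative-only sketch of collaborative moderation: instances exchange
// partially trained classifier weights instead of their users' raw posts.
// None of these names or choices come from the actual ModPair system.
interface ModelUpdate {
  fromInstance: string;  // e.g. "an.example.instance"
  version: number;
  weights: Float32Array; // partially trained toxicity-classifier parameters
}

// Fold a peer instance's parameters into the local model by simple averaging,
// a stand-in for whatever aggregation a real system would use.
function mergeUpdate(local: Float32Array, update: ModelUpdate): Float32Array {
  const merged = new Float32Array(local.length);
  for (let i = 0; i < local.length; i++) {
    merged[i] = (local[i] + update.weights[i]) / 2;
  }
  return merged;
}
```

Each instance keeps moderating with its own copy of the model; only the model update travels between instances.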

On the broader issue of cross-platform abuse, we can see that on federated platforms, tech-focused moderation mechanisms and tools have been suggested to try to address it.
By automating and attempting to improve the detection of toxic content and flagging it for administrators, they veer away from options such as hiring more human moderators, possibly owing to limited resources and the structure of these platforms.
Both categories of platforms must test out tools that engage in collaborative moderation for more effective and thorough action on content. Given the gravity of offences such as online abuse, platforms must extend signal-sharing protocols and similar technical responses to these offences as well, beyond
straightforward offences such as CSAM and terrorism.
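
To give a flavour of what extending signal sharing to online abuse could involve, here is a hypothetical sketch in which a platform shares only a fingerprint of abusive content with partner platforms, in the spirit of the hash-sharing programmes that already exist for CSAM and terrorist content. The endpoint, category label and overall shape are invented for illustration.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of cross-platform signal sharing: hash the abusive
// content locally and share only the fingerprint, never the content itself.
// The partner URLs and the /signals endpoint are made up for this example.
function fingerprint(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

async function shareSignal(content: string, partners: string[]): Promise<void> {
  const hash = fingerprint(content);
  await Promise.all(
    partners.map((partner) =>
      fetch(`${partner}/signals`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ hash, category: "targeted-harassment" }),
      })
    )
  );
}
```

A receiving platform or instance could match incoming content against the shared fingerprints and surface it to its own moderators, without either side having to exchange user data.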

## In sum

Corporate accountability may limit the extent of responsibility a platform has towards a user (i.e. a platform entity is responsible only for what goes on within that platform), and an issue is considered resolved when flagged content is acted on by moderators or administrators, as the case may be.
Within federated platforms, an administrator's responsibility is limited to acting upon content in their instance, and the issue is then considered 'resolved', just as on centralised platforms.
However, when it comes to cross-platform abuse, the issue of, say, a journalist facing harassment on multiple instances or multiple centralised platforms is not resolved until the content is acted upon on each of them: the journalist is still facing a chain of harassment even if one limb (on one instance, or one centralised platform) is cut off.
This, combined with singular, platform-by-platform reporting, adds to the fatigue they experience.
Testing out tools like those suggested in the context of the fediverse may prove useful if also implemented by centralised platforms, enabling better detection of abusive content amongst each other and the fediverse;
further, implementing standardised reporting and signal-sharing methods will reduce the burden on users, who will not need to relive an unpleasant experience each time they must report content on an instance or platform.

</ContentPageShell>