From b74f92da12d58f9de7635a97aa37f06a22f0fb4f Mon Sep 17 00:00:00 2001
From: KVarma <136114974+kaustubhavarma@users.noreply.github.com>
Date: Wed, 17 Apr 2024 14:04:49 +0530
Subject: [PATCH] Update cross-platform_pt_2.mdx

Updating with correct conventions
---
 uli-website/src/pages/blog/cross-platform_pt_2.mdx | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/uli-website/src/pages/blog/cross-platform_pt_2.mdx b/uli-website/src/pages/blog/cross-platform_pt_2.mdx
index d558259a..70c5cd94 100644
--- a/uli-website/src/pages/blog/cross-platform_pt_2.mdx
+++ b/uli-website/src/pages/blog/cross-platform_pt_2.mdx
@@ -11,11 +11,11 @@ import ContentPageShell from "../../components/molecules/ContentPageShell.jsx"

We're continuing by taking a look at how cross-platform abuse operates on federated and centralised platforms, and the kinds of solutions these platforms must test and implement to tackle it.

-The most popular decentralised/federated social media platform is Mastadon; Threads and Bluesky have also followed suit into adopting federation protocols for social media platforms.
+The most popular decentralised/federated social media platform is Mastodon; Threads and Bluesky have also followed suit in adopting federation protocols for social media platforms.
The platforms, built on the ActivityPub Protocol and the AT Protocol, do not have a centralised authority that oversees activity on the platform. Instead, users join different 'instances', and each instance has its own set of rules, block lists and moderators. The instances interact with one another through 'federation'.

-##Federated Moderation
+## Federated Moderation
On the federated platforms mentioned above, it has been noted that hateful material can rapidly disseminate from one instance to [another](https://arxiv.org/pdf/2302.05915.pdf). Federation policies help administrators of instances create rules that ban or modify content from other [instances](https://arxiv.org/pdf/2302.05915.pdf).
@@ -28,7 +28,7 @@ Pleroma mentions the option to report a user's post to the administrator if it i
Centralised social media platforms, on the other hand, have more extensive documentation on the process for [redressal](https://docs-develop.pleroma.social/frontend/user_guide/posting_reading_basic_functions/). On both federated and centralised platforms, the user goes through different reporting mechanisms for recourse.

-##Responses
+## Responses
As we discussed earlier, centralised responses to tackle cross-platform abuse focus on prima facie illegal content such as CSAM and terrorism. Amongst research on the decentralised web, there have been suggestions for tools that could be used to tackle issues that come with moderation on federated platforms: (i) WatchGen, a tool which proposes instances that moderators should focus on, thus reducing the burden of moderation on [administrators](https://arxiv.org/pdf/2302.05915.pdf);
@@ -40,7 +40,7 @@ By automating and attempting to improve the detection of toxic content and flag
Both categories of platforms must test out tools that engage in collaborative moderation for more effective and thorough action on content. Given the gravity of offences such as online abuse, platforms must extend signal-sharing protocols and similar technical responses beyond straightforward offences such as CSAM and terrorism.

-##In sum
+## In sum
Corporate accountability may limit the extent of responsibility a platform has towards a user (i.e.
a platform entity is only responsible for what goes on within the platform), and considers an issue resolved when flagged content is acted on by moderators/administrators, as the case may be. Within federated platforms, an administrator's responsibility is limited to acting upon content in their instance, and the issue is considered 'resolved', just as on centralised platforms.
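
To make the federation-policy idea described in the patched post more concrete, here is a minimal sketch of how an instance administrator's domain-level rules might be applied to incoming federated posts. The names (`FederationPolicy`, `apply_policies`) and the reject/silence actions are illustrative assumptions for this sketch only, not the actual implementation in Mastodon, Pleroma, or any other ActivityPub or AT Protocol server.

```python
# Illustrative sketch only: a toy model of instance-level federation policies.
# All names here are hypothetical and not taken from any real fediverse software.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ACCEPT = "accept"    # federate the post normally
    SILENCE = "silence"  # accept, but hide from public timelines
    REJECT = "reject"    # drop the post entirely


@dataclass
class FederationPolicy:
    domain: str          # remote instance this rule applies to
    action: Action


def apply_policies(post: dict, policies: list[FederationPolicy]) -> Action:
    """Return the moderation action for an incoming federated post."""
    origin = post["origin_domain"]
    for policy in policies:
        # Match the exact domain or any of its subdomains.
        if origin == policy.domain or origin.endswith("." + policy.domain):
            return policy.action
    return Action.ACCEPT  # no rule matched: federate as usual


# Example: one administrator's block list for their own instance.
policies = [
    FederationPolicy("spam.example", Action.REJECT),
    FederationPolicy("borderline.example", Action.SILENCE),
]

incoming = {"origin_domain": "spam.example", "content": "..."}
print(apply_policies(incoming, policies))  # Action.REJECT
```

The point of the sketch is that each instance applies its own list independently, which is why content blocked on one instance can continue to circulate on others, as the post's discussion of cross-platform abuse describes.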