diff --git a/uli-website/src/pages/blog/cross-platform pt1 b/uli-website/src/pages/blog/cross-platform-pt1.mdx similarity index 94% rename from uli-website/src/pages/blog/cross-platform pt1 rename to uli-website/src/pages/blog/cross-platform-pt1.mdx index f3fb016a..35dd21c0 100644 --- a/uli-website/src/pages/blog/cross-platform pt1 +++ b/uli-website/src/pages/blog/cross-platform-pt1.mdx @@ -11,12 +11,12 @@ import ContentPageShell from "../../components/molecules/ContentPageShell.jsx" Online abuse persists either through posts across online platforms, or through circumvented attempts to create newer accounts and reposting previously reported content. Cross-platform harassment, characterised by coordinated and deliberate attempts to harass an individual across multiple platforms[^1], takes advantage of platforms only moderating their own content; while one platform may have taken down content after finding it violates their policies/after it was reported (thereby fulfilling their legal mandates), the issue still persists. -Individuals are left playing what is a game of whack-a-mole across different accounts and platforms- going through different reporting mechanisms, policies, and instructions to take down abusive content about them (PEN America has a page with all of the relevant reporting links of prominent platforms: [Reporting Online Harassment] (https://onlineharassmentfieldmanual.pen.org/reporting-online-harassment-to-platforms/)). +Individuals are left playing what is a game of whack-a-mole across different accounts and platforms- going through different reporting mechanisms, policies, and instructions to take down abusive content about them (PEN America has a page with all of the relevant reporting links of prominent platforms: [Reporting Online Harassment](https://onlineharassmentfieldmanual.pen.org/reporting-online-harassment-to-platforms/)). While accounts may be de-platformed permanently, the contents of the post itself may resurface. 
-Perpetrators may subside for a given period, but content may be picked up in the next viral cycle, or even some months later by another person/account, restarting this process all over again (note: large social media platforms do use signals to detect and prevent recidivism [^1). +Perpetrators may subside for a given period, but content may be picked up in the next viral cycle, or even some months later by another person/account, restarting this process all over again (note: large social media platforms do use signals to detect and prevent recidivism)[^1]. Journalists have pointed to this issue as well. -Companies’ policies do not account for cross-platform online abuse faced by women journalists [^2]. Abuse floods them on multiple platforms for long periods, and it is exhausting for them to monitor all of these spaces where they are the topic of discussion or target of abuse [^3]. +Companies’ policies do not account for cross-platform online abuse faced by women journalists [^2]. Abuse floods them on multiple platforms for long periods, and it is exhausting for them to monitor all of these spaces where they are the topic of discussion or target of abuse[^3]. Focus on predominant social media platforms also overlooks abuse on other online spaces, such as comments on individuals’ newsletters or blogs [^4]. Cross-platform brigading has been highlighted as a key issue that needs to be addressed [^5]. Brigading refers to tactics involving coordinated abusive engagement online [^6], and journalists have urged platforms to be more proactive and exchange data about abuse in the hopes that this would lead to shared best practices to tackle single and cross-platform abuse [^7]. 
### Tech Responses @@ -47,12 +47,19 @@ While it is currently implemented for more widely accepted and extreme instances It would be useful to evaluate similar technical response tools and their effectiveness at various stages of redressal to understand the landscape and develop broader protocols which could handle this issue of persistent abuse. [^1]: https://onlineharassmentfieldmanual.pen.org/reporting-online-harassment-to-platforms/ -[^2]:Julie Posetti, Kalina Bontcheva and Nabeelah Shabbir, ‘The Chilling: Assessing Big Tech’s Response to Online Violence Against Women Journalists’ (May 2022, UNESCO) -[^3]: Ibid + +[^2]: Julie Posetti, Kalina Bontcheva and Nabeelah Shabbir, ‘The Chilling: Assessing Big Tech’s Response to Online Violence Against Women Journalists’ (May 2022, UNESCO) + +[^3]: Ibid + [^4]: Ibid + [^5]: https://rebootingsocialmedia.org/2022/12/01/from-emergency-to-prevention-protecting-journalists-from-online-abuse/ + [^6]: https://www.institute.global/insights/tech-and-digitalisation/social-media-futures-what-brigading + [^7]: https://rebootingsocialmedia.org/2022/12/01/from-emergency-to-prevention-protecting-journalists-from-online-abuse/ + [^8]: https://www.orfonline.org/expert-speak/identifying-and-removing-terrorist-content-online - + \ No newline at end of file diff --git a/uli-website/src/pages/research.mdx b/uli-website/src/pages/research.mdx index 919ce5a4..34584071 100644 --- a/uli-website/src/pages/research.mdx +++ b/uli-website/src/pages/research.mdx @@ -1,41 +1,42 @@ -import ContentPageShell from "../components/molecules/ContentPageShell.jsx" +import ContentPageShell from "../components/molecules/ContentPageShell.jsx"; +import { Box, Text } from "grommet"; -# Research +# Research The Uli team is cross-disciplinary. This is the list of publications based on work done by the authors with Uli. They are listed in rough chronological order and are not categorized by field or discipline. 
At the end of the page is an explicit section on Uli as an exploration of alternative imaginations of AI. -* Vaidya, A., Arora, A., Joshi, A., Prabhakar, T. (2023). A shared task on Gendered Abuse Detection in Indic Languages at ICON. [https://sites.google.com/view/icon2023-tattle-sharedtask/](https://sites.google.com/view/icon2023-tattle-sharedtask/) -* Arora, A., Jinadoss, M., Arora, C., George, D., Khan, H. D., Rawat, K., ... & Prabhakar, T. (2023). The Uli Dataset: An Exercise in Experience Led Annotation of oGBV. arXiv preprint - [https://arxiv.org/abs/2311.09086](https://arxiv.org/abs/2311.09086). -* Mehta, T., & Chakraborty, S. (2023). Visuais Para Tecnologia Emancipatória: Um Estudo de Caso Sobre a Cocriação de uma Linguagem Visual Para Combater a Violência de Género Online no Twitter Indiano. Vista, (11), e023007. [https://doi.org/10.21814/vista.4133](https://doi.org/10.21814/vista.4133) -* Arora, A., Arora, C., & J, M. (2023). Designing for Disagreements: A Machine Learning Tool to Detect Online Gender-based Violence. In Feminist Perspectives on Social Media Governance. IT for Change. [https://itforchange.net/feminist-perspectives-on-social-media-governance-0](https://itforchange.net/feminist-perspectives-on-social-media-governance-0) -* Arora, C., & Prabhakar, T. To think interdisciplinarity as intercurrence: Or, working as an interdisciplinary team to develop a plug-in to tackle the experience of online gender-based violence and hate speech. 2021. [⟨hal-03505844⟩](https://hal.science/hal-03505844) -* Arora, C. (2022). ...The tool is underdevelopment... In R. Singh, R. L. Guzmán, & P. Davison (Eds.), Parables of AI in/from the Majority World. [https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf](https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf) - +- Vaidya, A., Arora, A., Joshi, A., Prabhakar, T. (2023). A shared task on Gendered Abuse Detection in Indic Languages at ICON. 
[https://sites.google.com/view/icon2023-tattle-sharedtask/](https://sites.google.com/view/icon2023-tattle-sharedtask/) +- Arora, A., Jinadoss, M., Arora, C., George, D., Khan, H. D., Rawat, K., ... & Prabhakar, T. (2023). The Uli Dataset: An Exercise in Experience Led Annotation of oGBV. arXiv preprint - [https://arxiv.org/abs/2311.09086](https://arxiv.org/abs/2311.09086). +- Mehta, T., & Chakraborty, S. (2023). Visuais Para Tecnologia Emancipatória: Um Estudo de Caso Sobre a Cocriação de uma Linguagem Visual Para Combater a Violência de Género Online no Twitter Indiano. Vista, (11), e023007. [https://doi.org/10.21814/vista.4133](https://doi.org/10.21814/vista.4133) +- Arora, A., Arora, C., & J, M. (2023). Designing for Disagreements: A Machine Learning Tool to Detect Online Gender-based Violence. In Feminist Perspectives on Social Media Governance. IT for Change. [https://itforchange.net/feminist-perspectives-on-social-media-governance-0](https://itforchange.net/feminist-perspectives-on-social-media-governance-0) +- Arora, C., & Prabhakar, T. To think interdisciplinarity as intercurrence: Or, working as an interdisciplinary team to develop a plug-in to tackle the experience of online gender-based violence and hate speech. 2021. [⟨hal-03505844⟩](https://hal.science/hal-03505844) +- Arora, C. (2022). ...The tool is underdevelopment... In R. Singh, R. L. Guzmán, & P. Davison (Eds.), Parables of AI in/from the Majority World. [https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf](https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf) ## Responsible/ Trustworthy/ Feminist AI The project started as an exploration of building AI aligned with [feminist principles](https://uli.tattle.co.in/blog/approach/). There are many terms, such as responsible, ethical, trustworthy, participatory, and feminist, to describe better or alternative visions of AI. Uli speaks to these themes, and also complicates them. 
A number of the publications listed above reflect on the process of trying to build Uli informed by these visions. More are forthcoming. The following presentation summarizes the process and our reflections on feminist principles specifically, in the context of our work: - - - {" "} - Building Participatory AI - - - - + + + {" "} + Building Participatory AI + + + + + -