fix: html error in research page & chore: mdx formatting in blog #534

Merged · 1 commit · Mar 28, 2024
@@ -11,12 +11,12 @@ import ContentPageShell from "../../components/molecules/ContentPageShell.jsx"

Online abuse persists either through posts that remain up across online platforms, or through attempts to circumvent enforcement by creating new accounts and reposting previously reported content.
Cross-platform harassment, characterised by coordinated and deliberate attempts to harass an individual across multiple platforms[^1], takes advantage of the fact that each platform only moderates its own content; while one platform may have taken down content after finding it violates their policies or after it was reported (thereby fulfilling their legal mandates), the issue still persists.
Individuals are left playing what is a game of whack-a-mole across different accounts and platforms- going through different reporting mechanisms, policies, and instructions to take down abusive content about them (PEN America has a page with all of the relevant reporting links of prominent platforms: [Reporting Online Harassment] (https://onlineharassmentfieldmanual.pen.org/reporting-online-harassment-to-platforms/)).
Individuals are left playing what is a game of whack-a-mole across different accounts and platforms- going through different reporting mechanisms, policies, and instructions to take down abusive content about them (PEN America has a page with all of the relevant reporting links of prominent platforms: [Reporting Online Harassment](https://onlineharassmentfieldmanual.pen.org/reporting-online-harassment-to-platforms/)).
While accounts may be de-platformed permanently, the contents of the post itself may resurface.
Perpetrators may subside for a given period, but content may be picked up in the next viral cycle, or even some months later by another person/account, restarting this process all over again (note: large social media platforms do use signals to detect and prevent recidivism [^1).
Perpetrators may subside for a given period, but content may be picked up in the next viral cycle, or even some months later by another person/account, restarting this process all over again (note: large social media platforms do use signals to detect and prevent recidivism)[^1].

Journalists have pointed to this issue as well.
Companies’ policies do not account for cross-platform online abuse faced by women journalists [^2]. Abuse floods them on multiple platforms for long periods, and it is exhausting for them to monitor all of these spaces where they are the topic of discussion or target of abuse [^3].
Companies’ policies do not account for cross-platform online abuse faced by women journalists [^2]. Abuse floods them on multiple platforms for long periods, and it is exhausting for them to monitor all of these spaces where they are the topic of discussion or target of abuse[^3].
Focus on predominant social media platforms also overlooks abuse on other online spaces, such as comments on individuals’ newsletters or blogs [^4]. Cross-platform brigading has been highlighted as a key issue that needs to be addressed [^5]- brigading refers to online tactics that involve coordinated abusive engagement online [^6], and journalists have urged platforms to be more proactive and exchange data about abuse in the hopes that this would lead to shared best practices to tackle single and cross-platform abuse [^7].

### Tech Responses
@@ -47,12 +47,19 @@ While it is currently implemented for more widely accepted and extreme instances
It would be useful to evaluate similar technical response tools and their effectiveness at various stages of redressal, to understand the landscape and develop broader protocols that could address this issue of persistent abuse.

[^1]: https://onlineharassmentfieldmanual.pen.org/reporting-online-harassment-to-platforms/
[^2]:Julie Posetti, Kalina Bontcheva and Nabeelah Shabbir, ‘The Chilling: Assessing Big Tech’s Response to Online Violence Against Women Journalists’ (May 2022, UNESCO)
[^3]: Ibid

[^2]: Julie Posetti, Kalina Bontcheva and Nabeelah Shabbir, ‘The Chilling: Assessing Big Tech’s Response to Online Violence Against Women Journalists’ (May 2022, UNESCO)

[^3]: Ibid

[^4]: Ibid

[^5]: https://rebootingsocialmedia.org/2022/12/01/from-emergency-to-prevention-protecting-journalists-from-online-abuse/

[^6]: https://www.institute.global/insights/tech-and-digitalisation/social-media-futures-what-brigading

[^7]: https://rebootingsocialmedia.org/2022/12/01/from-emergency-to-prevention-protecting-journalists-from-online-abuse/

[^8]: https://www.orfonline.org/expert-speak/identifying-and-removing-terrorist-content-online

</ContentPageShell>
</ContentPageShell>
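
For reference, these are the Markdown/MDX conventions this commit settles on in the blog post: the URL parenthesis of an inline link directly follows the closing bracket, a footnote reference sits immediately after the text it annotates (outside any closing parenthesis), and footnote definitions get a space after the colon and a blank line between entries. A minimal sketch with placeholder text and URLs (not taken from the post):

```md
Platforms publish reporting guides ([Reporting Online Harassment](https://example.org/reporting)).
Large platforms use signals to detect and prevent recidivism[^1].

[^1]: https://example.org/source-one

[^2]: Author Name, 'Title of Report' (Year, Publisher)
```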
53 changes: 27 additions & 26 deletions uli-website/src/pages/research.mdx
@@ -1,41 +1,42 @@
import ContentPageShell from "../components/molecules/ContentPageShell.jsx"
import ContentPageShell from "../components/molecules/ContentPageShell.jsx";
import { Box, Text } from "grommet";

<ContentPageShell>

# Research
# Research

The Uli team is cross-disciplinary. This is the list of publications based on work done by the authors with Uli. They are listed in rough chronological order and are not categorized by field or discipline. At the end of the page is an explicit section on Uli as an exploration of alternative imaginations of AI.

* Vaidya, A., Arora, A., Joshi, A., Prabhakar, T. (2023). A shared task on Gendered Abuse Detection in Indic Languages at ICON. [https://sites.google.com/view/icon2023-tattle-sharedtask/](https://sites.google.com/view/icon2023-tattle-sharedtask/)
* Arora, A., Jinadoss, M., Arora, C., George, D., Khan, H. D., Rawat, K., ... & Prabhakar, T. (2023). The Uli Dataset: An Exercise in Experience Led Annotation of oGBV. arXiv preprint - [https://arxiv.org/abs/2311.09086](https://arxiv.org/abs/2311.09086).
* Mehta, T., & Chakraborty, S. (2023). Visuais Para Tecnologia Emancipatória: Um Estudo de Caso Sobre a Cocriação de uma Linguagem Visual Para Combater a Violência de Género Online no Twitter Indiano. Vista, (11), e023007. [https://doi.org/10.21814/vista.4133](https://doi.org/10.21814/vista.4133)
* Arora, A., Arora, C., & J, M. (2023). Designing for Disagreements: A Machine Learning Tool to Detect Online Gender-based Violence. In Feminist Perspectives on Social Media Governance. IT for Change. [https://itforchange.net/feminist-perspectives-on-social-media-governance-0](https://itforchange.net/feminist-perspectives-on-social-media-governance-0)
* Arora, C., & Prabhakar, T. To think interdisciplinarity as intercurrence: Or, working as an interdisciplinary team to develop a plug-in to tackle the experience of online gender-based violence and hate speech. 2021. [⟨hal-03505844⟩](https://hal.science/hal-03505844)
* Arora, C. (2022). ...The tool is underdevelopment... In R. Singh, R. L. Guzmán, & P. Davison (Eds.), Parables of AI in/from the Majority World. [https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf](https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf)

- Vaidya, A., Arora, A., Joshi, A., Prabhakar, T. (2023). A shared task on Gendered Abuse Detection in Indic Languages at ICON. [https://sites.google.com/view/icon2023-tattle-sharedtask/](https://sites.google.com/view/icon2023-tattle-sharedtask/)
- Arora, A., Jinadoss, M., Arora, C., George, D., Khan, H. D., Rawat, K., ... & Prabhakar, T. (2023). The Uli Dataset: An Exercise in Experience Led Annotation of oGBV. arXiv preprint - [https://arxiv.org/abs/2311.09086](https://arxiv.org/abs/2311.09086).
- Mehta, T., & Chakraborty, S. (2023). Visuais Para Tecnologia Emancipatória: Um Estudo de Caso Sobre a Cocriação de uma Linguagem Visual Para Combater a Violência de Género Online no Twitter Indiano. Vista, (11), e023007. [https://doi.org/10.21814/vista.4133](https://doi.org/10.21814/vista.4133)
- Arora, A., Arora, C., & J, M. (2023). Designing for Disagreements: A Machine Learning Tool to Detect Online Gender-based Violence. In Feminist Perspectives on Social Media Governance. IT for Change. [https://itforchange.net/feminist-perspectives-on-social-media-governance-0](https://itforchange.net/feminist-perspectives-on-social-media-governance-0)
- Arora, C., & Prabhakar, T. To think interdisciplinarity as intercurrence: Or, working as an interdisciplinary team to develop a plug-in to tackle the experience of online gender-based violence and hate speech. 2021. [⟨hal-03505844⟩](https://hal.science/hal-03505844)
- Arora, C. (2022). ...The tool is underdevelopment... In R. Singh, R. L. Guzmán, & P. Davison (Eds.), Parables of AI in/from the Majority World. [https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf](https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Ch5_Arora.pdf)

## Responsible/ Trustworthy/ Feminist AI

The project started as an exploration of building AI aligned with [feminist principles](https://uli.tattle.co.in/blog/approach/). There are many terms, such as responsible, ethical, trustworthy, participatory, and feminist, to describe better or alternative visions of AI. Uli speaks to these themes, and also complicates them. A number of publications (listed above) reflect on the process of trying to build Uli informed by these visions. More are forthcoming. The following presentation summarizes that process and our reflections on feminist principles specifically, in the context of our work:

<Box>
<Box>
<Text style={{ fontSize: "1.4em" }} margin={{ bottom: "0.8em" }}>
{" "}
Building Participatory AI
</Text>
<Box fill>
<iframe
width="100%"
height="400"
src="https://www.youtube.com/watch?v=Zt088nILmDM"
title="Working with Feminist AI"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
</Box>
<Box>
<Text style={{ fontSize: "1.4em" }} margin={{ bottom: "0.8em" }}>
{" "}
Building Participatory AI
</Text>
<Box fill>
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/Zt088nILmDM"
title="A New Thing Under the Sun? Alternative Visions for Tech in the Age of AI"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
</Box>
</Box>
</Box>


</ContentPageShell>
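
The substantive fix in research.mdx is the iframe `src`: YouTube's `watch?v=` pages are served with headers that block framing, so an embed has to point at the `https://www.youtube.com/embed/<VIDEO_ID>` URL instead. Below is a hypothetical sketch of the same embed factored into a small component. `YouTubeEmbed` and `toEmbedUrl` are illustrative names, not part of this PR, and it uses React's camelCased attribute names (`frameBorder`, `allowFullScreen`, `referrerPolicy`) rather than the lowercase forms in the MDX above:

```jsx
// Hypothetical helper: build the embeddable URL from a video id.
// Only /embed/ URLs can be framed; youtube.com/watch pages block iframes.
const toEmbedUrl = (videoId) => `https://www.youtube.com/embed/${videoId}`;

export const YouTubeEmbed = ({ videoId, title }) => (
  <iframe
    width="100%"
    height="400"
    src={toEmbedUrl(videoId)}
    title={title}
    frameBorder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    referrerPolicy="strict-origin-when-cross-origin"
    allowFullScreen
  />
);

// Usage in an MDX page:
// <YouTubeEmbed videoId="Zt088nILmDM" title="A New Thing Under the Sun? Alternative Visions for Tech in the Age of AI" />
```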