From a5fd4b57361720987a654ef047d6659f3b048880 Mon Sep 17 00:00:00 2001
From: charlottejmc <143802849+charlottejmc@users.noreply.github.com>
Date: Wed, 11 Sep 2024 15:26:30 +0800
Subject: [PATCH 1/3] Update intro-to-twitterbots.md

Reinsert fixed link
---
 en/lessons/intro-to-twitterbots.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/en/lessons/intro-to-twitterbots.md b/en/lessons/intro-to-twitterbots.md
index 81536b048..088234f50 100755
--- a/en/lessons/intro-to-twitterbots.md
+++ b/en/lessons/intro-to-twitterbots.md
@@ -34,7 +34,7 @@ Access to Twitter’s API has recently changed. The Free Tier no longer allows u
 This lesson explains how to create simple twitterbots using the [Tracery generative grammar](http://tracery.io) and the [Cheap Bots Done Quick](http://cheapbotsdonequick.com/) service. Tracery exists in multiple languages and can be integrated into websites, games, bots. You may fork it [on github here](https://github.com/galaxykate/tracery/tree/tracery2).
 
 ## Why bots?
-Strictly speaking, a twitter bot is a piece of software for automated controlling a Twitter account. When thousands of these are created and are tweeting more or less the same message, they have the ability to shape discourse on Twitter which then can influence other media discourses. Bots of this kind [can even be seen as credible sources of information](http://www.sciencedirect.com/science/article/pii/S0747563213003129). Projects such as [Documenting the Now](https://github.com/DocNow) are creating tools to allow researchers to create and query archives of social media around current events - and which will naturally contain many bot-generated posts. In this tutorial, I want to demonstrate how one can build a simple twitterbot so that, knowing how they operate, historians may more easily spot the bots in our archives - and perhaps counter with bots of their own.
+Strictly speaking, a twitter bot is a piece of software for automated controlling a Twitter account. When thousands of these are created and are tweeting more or less the same message, they have the ability to shape discourse on Twitter which then can influence other media discourses. Bots of this kind [can even be seen as credible sources of information](http://www.sciencedirect.com/science/article/pii/S0747563213003129). Projects such as [Documenting the Now](http://www.docnow.io/) are creating tools to allow researchers to create and query archives of social media around current events - and which will naturally contain many bot-generated posts. In this tutorial, I want to demonstrate how one can build a simple twitterbot so that, knowing how they operate, historians may more easily spot the bots in our archives - and perhaps counter with bots of their own.
 
 But I believe also that there is space in digital history and the digital humanities more generally for creative, expressive, artistic work. I belive that there is space for programming historians to use the affordances of digital media to create _things_ that could not otherwise exist to move us, to inspire us, to challenge us. There is room for satire; there is room for comment. With Mark Sample, I believe that there is a need for '[bots of conviction](https://medium.com/@samplereality/a-protest-bot-is-a-bot-so-specific-you-cant-mistake-it-for-bullshit-90fe10b7fbaa)'.
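The lesson touched by this first patch builds bots from Tracery grammars: rule sets that expand symbols such as `#origin#` into generated text, which Cheap Bots Done Quick then posts on a schedule. As a purely illustrative aside (not part of the patch itself), here is a minimal sketch of that expansion, assuming the Python port of Tracery (`pip install tracery`); the rule names and word lists are invented for the example.

```python
import tracery
from tracery.modifiers import base_english

# A toy grammar: each key is a symbol, each value a list of possible expansions.
rules = {
    "origin": ["#greeting.capitalize#, #noun#!"],
    "greeting": ["hello", "greetings", "salutations"],
    "noun": ["world", "historian", "archive"],
}

grammar = tracery.Grammar(rules)
grammar.add_modifiers(base_english)  # enables modifiers such as .capitalize

# Each call picks random expansions, e.g. "Salutations, archive!"
print(grammar.flatten("#origin#"))
```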
From 86abcdc9b1517a12986932499c4593e95d1807b7 Mon Sep 17 00:00:00 2001
From: charlottejmc <143802849+charlottejmc@users.noreply.github.com>
Date: Wed, 11 Sep 2024 15:27:08 +0800
Subject: [PATCH 2/3] Update intro-aux-bots-twitter.md

Reinsert fixed link
---
 fr/lecons/intro-aux-bots-twitter.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fr/lecons/intro-aux-bots-twitter.md b/fr/lecons/intro-aux-bots-twitter.md
index 5f20d3e6a..2b05f3638 100644
--- a/fr/lecons/intro-aux-bots-twitter.md
+++ b/fr/lecons/intro-aux-bots-twitter.md
@@ -42,7 +42,7 @@ L'accès à l'API de Twitter a récemment changé. Le niveau gratuit ne permet p
 Cette leçon explique comment créer des bots basiques sur Twitter à l’aide de la [grammaire générative Tracery](http://tracery.io) et du service [Cheap Bots Done Quick](http://cheapbotsdonequick.com/). Tracery est interopérable avec plusieurs langages de programmation et peut être intégrée dans des sites web, des jeux ou des bots. Vous pouvez en faire une copie (fork) sur github [ici](https://github.com/galaxykate/tracery/tree/tracery2).
 
 ## Pourquoi des bots?
-Pour être exact, un bot Twitter est un logiciel permettant de contrôler automatiquement un compte Twitter. Lorsque des centaines de bots sont créés et tweetent plus ou moins le même message, ils peuvent façonner le discours sur Twitter, ce qui influence ensuite le discours d’autres médias. Des bots de ce type [peuvent même être perçus comme des sources crédibles d’information](http://www.sciencedirect.com/science/article/pii/S0747563213003129). Des projets tels que [Documenting the Now](https://github.com/DocNow) mettent au point des outils qui permettent aux chercheur(e)s de créer et d’interroger des archives de réseaux sociaux en ligne à propos d’événements récents qui comprennent très probablement un bon nombre de messages générés par des bots. Dans ce tutoriel, je veux montrer comment construire un bot Twitter basique afin que des historiens et des historiennes, ayant connaissance de leur fonctionnement, puissent plus facilement les repérer dans des archives et, peut-être, même les neutraliser grâce à leurs propres bots.
+Pour être exact, un bot Twitter est un logiciel permettant de contrôler automatiquement un compte Twitter. Lorsque des centaines de bots sont créés et tweetent plus ou moins le même message, ils peuvent façonner le discours sur Twitter, ce qui influence ensuite le discours d’autres médias. Des bots de ce type [peuvent même être perçus comme des sources crédibles d’information](http://www.sciencedirect.com/science/article/pii/S0747563213003129). Des projets tels que [Documenting the Now](http://www.docnow.io/) mettent au point des outils qui permettent aux chercheur(e)s de créer et d’interroger des archives de réseaux sociaux en ligne à propos d’événements récents qui comprennent très probablement un bon nombre de messages générés par des bots. Dans ce tutoriel, je veux montrer comment construire un bot Twitter basique afin que des historiens et des historiennes, ayant connaissance de leur fonctionnement, puissent plus facilement les repérer dans des archives et, peut-être, même les neutraliser grâce à leurs propres bots.
 
 Mais je crois aussi qu’il y a de la place en histoire et dans les humanités numériques de façon plus large pour un travail créatif, expressif, voire artistique. Les historiens et les historiennes qui connaissent la programmation peuvent profiter des possibilités offertes par les médias numériques pour monter des créations, autrement impossibles à réaliser pour nous émouvoir, nous inspirer, nous interpeller. Il y a de la place pour de la satire, il y a de la place pour commenter. Comme Mark Sample, je crois qu’il y a besoin de « [bots de conviction](https://medium.com/@samplereality/a-protest-bot-is-a-bot-so-specific-you-cant-mistake-it-for-bullshit-90fe10b7fbaa)». Ce sont des bots de contestation, des bots si pointus et pertinents, qu’il devient impossible de les prendre pour autre chose par erreur. Selon Sample, il faudrait que de tels bots soient:

From 14e3552f7b13d726c8bd16971d9b59e1904b9e15 Mon Sep 17 00:00:00 2001
From: charlottejmc <143802849+charlottejmc@users.noreply.github.com>
Date: Wed, 11 Sep 2024 16:05:14 +0800
Subject: [PATCH 3/3] Update beginners-guide-to-twitter-data.md

Update docnow link
---
 en/lessons/beginners-guide-to-twitter-data.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/en/lessons/beginners-guide-to-twitter-data.md b/en/lessons/beginners-guide-to-twitter-data.md
index 603844550..2a9adfa18 100644
--- a/en/lessons/beginners-guide-to-twitter-data.md
+++ b/en/lessons/beginners-guide-to-twitter-data.md
@@ -40,7 +40,7 @@ While this walkthrough proposes a specific workflow that we think is suitable fo
 First, we need to gather some data. George Washington University’s [TweetSets](https://tweetsets.library.gwu.edu/) allows you to create your own data queries from existing Twitter datasets they have compiled. The datasets primarily focus on the biggest (mostly American) geopolitical events of the last few years, but the TweetSets website states they are also open to queries regarding the construction of new datasets. We chose TweetSets because it makes narrowing and cleaning your dataset very easy, creating stable, archivable datasets through a relatively simple graphical interface. Additionally, this has the benefit of allowing you to search and analyze the data with your own local tools, rather than having your results shaped by Twitter search algorithms that may prioritize users you follow, etc.
 
-You could, however, substitute any tool that gives you a set of dehydrated tweets. Because tweets can be correlated to so much data, it’s more efficient to distribute dehydrated data sets consisting of unique tweet IDs, and then allow users to “hydrate” the data, linking retweet counts, geolocation info, etc., to unique IDs. More importantly, [Twitter's terms for providing downloaded content to third parties](https://developer.twitter.com/en/developer-terms/agreement-and-policy), as well as research ethics, are at play. Other common places to acquire dehydrated datasets include Stanford’s [SNAP](https://snap.stanford.edu/data/) collections, the [DocNow Project](https://www.docnow.io/catalog/) and data repositories, or the [Twitter Application Programming Interface (API)](https://developer.twitter.com/), directly. (If you wonder what an API is, please check this [lesson](/en/lessons/introduction-to-populating-a-website-with-api-data#what-is-application-programming-interface-api).) This latter option will require some coding, but Justin Littman, one of the creators of TweetSets, does a good job summarizing some of the higher-level ways of interacting with the API in this [post](https://gwu-libraries.github.io/sfm-ui/posts/2017-09-14-twitter-data).
+You could, however, substitute any tool that gives you a set of dehydrated tweets. Because tweets can be correlated to so much data, it’s more efficient to distribute dehydrated data sets consisting of unique tweet IDs, and then allow users to “hydrate” the data, linking retweet counts, geolocation info, etc., to unique IDs. More importantly, [Twitter's terms for providing downloaded content to third parties](https://developer.twitter.com/en/developer-terms/agreement-and-policy), as well as research ethics, are at play. Other common places to acquire dehydrated datasets include Stanford’s [SNAP](https://snap.stanford.edu/data/) collections, the [DocNow Project](https://www.docnow.io) and data repositories, or the [Twitter Application Programming Interface (API)](https://developer.twitter.com/), directly. (If you wonder what an API is, please check this [lesson](/en/lessons/introduction-to-populating-a-website-with-api-data#what-is-application-programming-interface-api).) This latter option will require some coding, but Justin Littman, one of the creators of TweetSets, does a good job summarizing some of the higher-level ways of interacting with the API in this [post](https://gwu-libraries.github.io/sfm-ui/posts/2017-09-14-twitter-data).
 
 We find that the graphical, web-based nature of TweetSets, however, makes it ideal for learning this process. That said, if you want to obtain a dehydrated dataset by other means, you can just start at the [Hydrating](/en/lessons/beginners-guide-to-twitter-data#hydrating) section.
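The paragraph edited by this last patch turns on the distinction between dehydrated datasets (lists of unique tweet IDs) and hydrated ones (full tweet records fetched back from the API). As an illustrative aside rather than part of the patch, here is a minimal sketch of that hydration step, assuming DocNow's `twarc` client (version 1, `pip install twarc`) and valid developer credentials; the file names are hypothetical, and current API access tiers may restrict this kind of bulk lookup.

```python
import json
from twarc import Twarc  # DocNow's Twitter archiving client

# Hypothetical credentials: substitute your own developer keys.
t = Twarc(consumer_key="...", consumer_secret="...",
          access_token="...", access_token_secret="...")

# ids.txt is a dehydrated dataset: one tweet ID per line.
# hydrate() looks the IDs up in batches and yields full tweet objects as dicts.
with open("ids.txt") as infile, open("hydrated.jsonl", "w") as outfile:
    for tweet in t.hydrate(infile):
        outfile.write(json.dumps(tweet) + "\n")
```

Writing one JSON object per line keeps the hydrated output in the same line-oriented format most downstream tools in the lesson expect.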