Analysis code for the paper Kobak et al. 2024, "Delving into LLM-assisted writing in biomedical publications through excess vocabulary".
How to cite:

```bibtex
@article{kobak2024delving,
  title={Delving into {LLM}-assisted writing in biomedical publications through excess vocabulary},
  author={Kobak, Dmitry and Gonz\'alez-M\'arquez, Rita and Horv\'at, Em\H{o}ke-\'Agnes and Lause, Jan},
  journal={arXiv preprint arXiv:2406.07016},
  year={2024}
}
```
- All 900 excess words that we identified from 2013 to 2024 are listed in `results/excess_words.csv`, together with our annotations.
- The 362,442 × 15 matrix of yearly word occurrences (for each word and year, the number of abstracts in that year containing that word; the additional last row contains the total number of abstracts in that year) is available in `results/yearly-counts.csv.gz`. It allows one to reproduce the main parts of our analysis; see the loading sketch below.
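  A minimal loading sketch (the exact column layout of the CSV is an assumption here; check the file header):

  ```python
  import pandas as pd

  # Rows: words; columns: years; the last row holds the total number of
  # abstracts per year. Index/column naming is assumed, not guaranteed.
  counts = pd.read_csv("results/yearly-counts.csv.gz", index_col=0)

  totals = counts.iloc[-1]            # total abstracts per year (last row)
  freqs = counts.iloc[:-1] / totals   # fraction of abstracts containing each word
  print(freqs.loc["delve"])           # "delve" is just an example word
  ```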
- All figures from the paper are available in the `figures/` folder.
- All excess frequency analyses and all figures shown in the paper (and provided in the `figures/` folder) are produced by the `scripts/03-figures.ipynb` Python notebook (apart from Figure 7, which is produced by `scripts/08-figure-tsne.ipynb`). This notebook takes as input the `results/yearly-counts.csv.gz` file with yearly counts of each word, and several other files with yearly counts of word groups (`yearly-counts*`). The notebook only takes a minute to run; the excess-frequency idea is sketched below.
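  A hedged sketch of the excess-frequency idea: compare each word's observed frequency in 2024 with a counterfactual frequency extrapolated from the last pre-LLM years (the extrapolation formula below is an assumption; the notebook's exact procedure may differ):

  ```python
  import pandas as pd

  # Assumes the layout from the loading sketch above.
  counts = pd.read_csv("results/yearly-counts.csv.gz", index_col=0)
  freqs = counts.iloc[:-1] / counts.iloc[-1]

  p = freqs["2024"]                                         # observed frequency in 2024
  q = freqs["2022"] + 2 * (freqs["2022"] - freqs["2021"])   # linear extrapolation (an assumption)
  q = q.clip(lower=1e-9)                                    # avoid negative/zero counterfactuals
  excess_gap = p - q                                        # excess frequency gap
  excess_ratio = p / q                                      # excess frequency ratio
  print(excess_gap.sort_values(ascending=False).head(20))
  ```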
- These yearly word count files are produced by the `scripts/02-preprocess-and-count.py` script, which takes a few hours to run and needs a lot of memory. This script takes CSV files with abstract texts as input, performs abstract cleaning via regular expressions (~1 hour), then runs

  ```python
  vectorizer = sklearn.feature_extraction.text.CountVectorizer(binary=True, min_df=1e-6)
  vectorizer.fit_transform(df.AbstractText.values)
  ```

  (~0.5 hours), and then does yearly aggregation.
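  A toy illustration of the counting and yearly-aggregation steps (the data below are made-up mini-examples; the real script runs on the full PubMed abstracts):

  ```python
  import pandas as pd
  from sklearn.feature_extraction.text import CountVectorizer

  df = pd.DataFrame({
      "Year": [2023, 2023, 2024],
      "AbstractText": ["we delve into data", "a pivotal study", "we delve deeper"],
  })
  vectorizer = CountVectorizer(binary=True)  # the real script adds min_df=1e-6
  X = vectorizer.fit_transform(df.AbstractText.values)  # abstracts x words, 0/1

  # Yearly counts: number of abstracts per year containing each word.
  words = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names_out())
  yearly = words.groupby(df.Year.values).sum()
  print(yearly)
  ```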
- The input to the `scripts/02-preprocess-and-count.py` script is `pubmed_baseline_2025.parquet.gzip`, containing PubMed data from the end-of-2024 snapshot. It is similar to the files available at the repository associated with our Patterns paper "The landscape of biomedical research", but corresponds to the newer PubMed snapshot. This file is constructed by the `scripts/01-process-baseline.ipynb` notebook, which takes all XML files from https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/ as input. These files have to be downloaded from the link above, unzipped, and stored in a directory, from which the `scripts/01-process-baseline.ipynb` notebook will read them, combine them, and save them as a single dataframe (`pubmed_baseline_2025.parquet.gzip`).
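  A hedged sketch of the read-combine-save step (the minimal parser and the `baseline/` directory name are placeholders; the notebook extracts more metadata per article):

  ```python
  from pathlib import Path
  import xml.etree.ElementTree as ET
  import pandas as pd

  def parse_pubmed_xml(path):
      # Minimal placeholder parser: PMID and abstract text per article.
      rows = []
      for article in ET.parse(path).getroot().iter("PubmedArticle"):
          pmid = article.findtext(".//PMID")
          abstract = " ".join(t.text or "" for t in article.iter("AbstractText"))
          rows.append({"PMID": pmid, "AbstractText": abstract})
      return pd.DataFrame(rows)

  frames = [parse_pubmed_xml(p) for p in sorted(Path("baseline/").glob("*.xml"))]
  df = pd.concat(frames, ignore_index=True)
  df.to_parquet("pubmed_baseline_2025.parquet.gzip", compression="gzip")
  ```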
- The t-SNE figure produced by `scripts/08-figure-tsne.ipynb` takes `df_tsne_22_24.parquet.gzip` as input, which contains the t-SNE coordinates of the 2022-2024 papers as well as some metadata (class labels, country, inferred gender, and whether the paper is retracted or not). The t-SNE embedding is obtained as follows: the raw texts are first processed with a transformer (PubMedBERT) to obtain a numerical high-dimensional representation of each abstract (this is done in `scripts/04-obtain-BERT-embeddings.py`). Then, the high-dimensional vectors are reduced to two dimensions with t-SNE (this is done in `scripts/05-obtain-tsne-embeddings.py`). Afterwards, the metadata is prepared (with the exception of the retractions) and saved together with the 2D coordinates in `df_tsne_22_24.parquet.gzip` (this is done in `scripts/06-generate-tsne-df.ipynb`).
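  A compressed sketch of the two embedding steps (the checkpoint name, mean pooling, and t-SNE settings are assumptions; the actual scripts may differ, e.g. use a different pooling or openTSNE):

  ```python
  import torch
  from transformers import AutoModel, AutoTokenizer
  from sklearn.manifold import TSNE

  name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"  # assumed checkpoint
  tokenizer = AutoTokenizer.from_pretrained(name)
  model = AutoModel.from_pretrained(name).eval()

  abstracts = ["We delve into ...", "A pivotal role of ...", "..."]  # toy input
  with torch.no_grad():
      batch = tokenizer(abstracts, padding=True, truncation=True, return_tensors="pt")
      hidden = model(**batch).last_hidden_state      # (n_abstracts, n_tokens, 768)
      embeddings = hidden.mean(dim=1).numpy()        # mean pooling (an assumption)

  # On the full corpus, use the default perplexity; toy inputs need a small one.
  coords = TSNE(n_components=2, perplexity=min(30, len(abstracts) - 1)).fit_transform(embeddings)
  ```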
- In the notebook `scripts/07-analysis-retracted-papers.ipynb`, PMIDs of retracted papers are scraped from PubMed and combined with the ones available in the Retraction Watch database. Retracted papers are then plotted in the 2022-2024 t-SNE embedding. Additionally, a boolean flag indicating whether a paper is retracted or not is computed and added to the `df_tsne_22_24.parquet.gzip` dataframe.
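  A hedged sketch of fetching retracted PMIDs via NCBI E-utilities and adding the flag (the notebook may query PubMed differently, and the `PMID` column name is an assumption):

  ```python
  import requests
  import pandas as pd

  # Retracted papers carry the "Retracted Publication" publication type.
  url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
  params = {
      "db": "pubmed",
      "term": '"Retracted Publication"[Publication Type]',
      "retmax": 10000,  # larger result sets need the E-utilities history server
      "retmode": "json",
  }
  pmids = set(requests.get(url, params=params).json()["esearchresult"]["idlist"])

  df = pd.read_parquet("df_tsne_22_24.parquet.gzip")
  df["retracted"] = df["PMID"].astype(str).isin(pmids)
  ```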