A collection of awesome papers and resources on large language model (LLM)-related recommender system topics.
😆 Please check out our survey paper for LLM-enhanced RS: How Can Recommender Systems Benefit from Large Language Models: A Survey
This repository, like our released survey paper, will be actively maintained to keep up with the latest research progress. Newly added papers first appear in the 1.6 Paper Pending List: to be Added to Our Survey Paper section.
🚀 2023.06.29 - Paper v4 released: 7 papers have been newly added.
Survey Paper Update Logs
- 2023.06.29 - Paper v4 released: 7 papers have been newly added.
- 2023.06.28 - Paper v3 released: Fixed typos.
- 2023.06.12 - Paper v2 released: Added a summarization table in the appendix.
- 2023.06.09 - Paper v1 released: Initial version.
We classify papers according to where the LLM is adapted in the RS pipeline, as summarized in the figure below.
1.1 LLM for Feature Engineering
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
GReaT | Language Models are Realistic Tabular Data Generators | GPT2-medium (355M) | Full Finetuning | ICLR 2023 | [Link] |
GENRE | A First Look at LLM-Powered Generative News Recommendation | ChatGPT | Frozen | Arxiv 2023 | [Link] |
AnyPredict | AnyPredict: Foundation Model for Tabular Prediction | ChatGPT | Frozen | Arxiv 2023 | [Link] |
LLM4KGC | Knowledge Graph Completion Models are Few-shot Learners: An Empirical Study of Relation Labeling in E-commerce with LLMs | PaLM (540B)/ ChatGPT | Frozen | Arxiv 2023 | [Link] |
TagGPT | TagGPT: Large Language Models are Zero-shot Multimodal Taggers | ChatGPT | Frozen | Arxiv 2023 | [Link] |
ICPC | Large Language Models for User Interest Journeys | LaMDA (137B) | Full Finetuning/ Prompt Tuning | Arxiv 2023 | [Link] |
DPLLM | Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models | T5-XL (3B) | Full Finetuning | Arxiv 2023 | [Link] |
KAR | Towards Open-World Recommendation with Knowledge Augmentation from Large Language Models | ChatGPT | Frozen | Arxiv 2023 | [Link] |
MINT | Large Language Model Augmented Narrative Driven Recommendations | GPT3 (175B) | Frozen | RecSys 2023 | [Link] |
1.2 LLM as Feature Encoder
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
U-BERT | U-BERT: Pre-training User Representations for Improved Recommendation | BERT-base (110M) | Full Finetuning | AAAI 2021 | [Link] |
UNBERT | UNBERT: User-News Matching BERT for News Recommendation | BERT-base (110M) | Full Finetuning | IJCAI 2021 | [Link] |
PLM-NR | Empowering News Recommendation with Pre-trained Language Models | RoBERTa-base (125M) | Full Finetuning | SIGIR 2021 | [Link] |
Pyramid-ERNIE | Pre-trained Language Model based Ranking in Baidu Search | ERNIE (110M) | Full Finetuning | KDD 2021 | [Link] |
ERNIE-RS | Pre-trained Language Model for Web-scale Retrieval in Baidu Search | ERNIE (110M) | Full Finetuning | KDD 2021 | [Link] |
CTR-BERT | CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models | Customized BERT (1.5B) | Full Finetuning | ENLSP 2021 | [Link] |
ZESRec | Zero-Shot Recommender Systems | BERT-base (110M) | Frozen | Arxiv 2021 | [Link] |
UniSRec | Towards Universal Sequence Representation Learning for Recommender Systems | BERT-base (110M) | Frozen | KDD 2022 | [Link] |
PREC | Boosting Deep CTR Prediction with a Plug-and-Play Pre-trainer for News Recommendation | BERT-base (110M) | Full Finetuning | COLING 2022 | [Link] |
MM-Rec | MM-Rec: Visiolinguistic Model Empowered Multimodal News Recommendation | BERT-base (110M) | Full Finetuning | SIGIR 2022 | [Link] |
Tiny-NewsRec | Tiny-NewsRec: Effective and Efficient PLM-based News Recommendation | UniLMv2-base (110M) | Full Finetuning | EMNLP 2022 | [Link] |
PLM4Tag | PTM4Tag: Sharpening Tag Recommendation of Stack Overflow Posts with Pre-trained Models | CodeBERT (125M) | Full Finetuning | ICPC 2022 | [Link] |
TwHIN-BERT | TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations | BERT-base (110M) | Full Finetuning | Arxiv 2022 | [Link] |
TransRec | TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback | BERT-base (110M) | Full Finetuning | Arxiv 2022 | [Link] |
VQ-Rec | Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders | BERT-base (110M) | Frozen | WWW 2023 | [Link] |
IDRec vs MoRec | Where to Go Next for Recommender Systems? ID- vs. Modality-based Recommender Models Revisited | BERT-base (110M) | Full Finetuning | SIGIR 2023 | [Link] |
TransRec | Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights | RoBERTa-base (125M) | Layerwise Adapter Tuning | Arxiv 2023 | [Link] |
LSH | Improving Code Example Recommendations on Informal Documentation Using BERT and Query-Aware LSH: A Comparative Study | BERT-base (110M) | Full Finetuning | Arxiv 2023 | [Link] |
TCF | Exploring the Upper Limits of Text-Based Collaborative Filtering Using Large Language Models: Discoveries and Insights | OPT-175B (175B) | Frozen/ Full Finetuning | Arxiv 2023 | [Link] |
1.3 LLM as Scoring/Ranking Function
1.3.1 Item Scoring Task
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
LMRecSys | Language Models as Recommender Systems: Evaluations and Limitations | GPT2-XL (1.5B) | Full Finetuning | ICBINB 2021 | [Link] |
PTab | PTab: Using the Pre-trained Language Model for Modeling Tabular Data | BERT-base (110M) | Full Finetuning | Arxiv 2022 | [Link] |
UniTRec | UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation | BART (406M) | Full Finetuning | ACL 2023 | [Link] |
Prompt4NR | Prompt Learning for News Recommendation | BERT-base (110M) | Full Finetuning | SIGIR 2023 | [Link] |
RecFormer | Text Is All You Need: Learning Language Representations for Sequential Recommendation | LongFormer (149M) | Full Finetuning | KDD 2023 | [Link] |
TabLLM | TabLLM: Few-shot Classification of Tabular Data with Large Language Models | T0 (11B) | Few-shot Parameter-efficient Finetuning | AISTATS 2023 | [Link] |
Zero-shot GPT | Zero-Shot Recommendation as Language Modeling | GPT2-medium (355M) | Frozen | Arxiv 2023 | [Link] |
FLAN-T5 | Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction | FLAN-T5-XXL (11B) | Full Finetuning | Arxiv 2023 | [Link] |
BookGPT | BookGPT: A General Framework for Book Recommendation Empowered by Large Language Model | ChatGPT | Frozen | Arxiv 2023 | [Link] |
TALLRec | TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation | LLaMA (7B) | LoRA | RecSys 2023 | [Link] |
PBNR | PBNR: Prompt-based News Recommender System | T5-small (60M) | Full Finetuning | Arxiv 2023 | [Link] |
LLMRec | LLMRec: Large Language Models with Graph Augmentation for Recommendation | ChatGPT | Frozen | WSDM 2024 | [Link] |
1.3.2 Item Generation Task
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
GPT4Rec | GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation | GPT2 (110M) | Full Finetuning | Arxiv 2023 | [Link] |
UP5 | UP5: Unbiased Foundation Model for Fairness-aware Recommendation | T5-base (223M) | Full Finetuning | Arxiv 2023 | [Link] |
VIP5 | VIP5: Towards Multimodal Foundation Models for Recommendation | T5-base (223M) | Layerwise Adapter Tuning | Arxiv 2023 | [Link] |
P5-ID | How to Index Item IDs for Recommendation Foundation Models | T5-small (61M) | Full Finetuning | Arxiv 2023 | [Link] |
FaiRLLM | Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation | ChatGPT | Frozen | RecSys 2023 | [Link] |
PALR | PALR: Personalization Aware LLMs for Recommendation | LLaMA (7B) | Full Finetuning | Arxiv 2023 | [Link] |
ChatGPT | Large Language Models are Zero-Shot Rankers for Recommender Systems | ChatGPT | Frozen | Arxiv 2023 | [Link] |
AGR | Sparks of Artificial General Recommender (AGR): Early Experiments with ChatGPT | ChatGPT | Frozen | Arxiv 2023 | [Link] |
NIR | Zero-Shot Next-Item Recommendation using Large Pretrained Language Models | GPT3 (175B) | Frozen | Arxiv 2023 | [Link] |
GPTRec | Generative Sequential Recommendation with GPTRec | GPT2-medium (355M) | Full Finetuning | Gen-IR@SIGIR 2023 | [Link] |
ChatNews | A Preliminary Study of ChatGPT on News Recommendation: Personalization, Provider Fairness, Fake News | ChatGPT | Frozen | Arxiv 2023 | [Link] |
1.3.3 Hybrid Task
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
P5 | Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5) | T5-base (223M) | Full Finetuning | RecSys 2022 | [Link] |
M6-Rec | M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems | M6-base (300M) | Option Tuning | Arxiv 2022 | [Link] |
InstructRec | Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach | FLAN-T5-XL (3B) | Full Finetuning | Arxiv 2023 | [Link] |
ChatGPT | Is ChatGPT a Good Recommender? A Preliminary Study | ChatGPT | Frozen | Arxiv 2023 | [Link] |
ChatGPT | Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent | ChatGPT | Frozen | Arxiv 2023 | [Link] |
ChatGPT | Uncovering ChatGPT's Capabilities in Recommender Systems | ChatGPT | Frozen | RecSys 2023 | [Link] |
1.4 LLM for RS Pipeline Controller
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
Chat-REC | Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender System | ChatGPT | Frozen | Arxiv 2023 | [Link] |
RecLLM | Leveraging Large Language Models in Conversational Recommender Systems | LLaMA (7B) | Full Finetuning | Arxiv 2023 | [Link] |
1.5 Other Related Papers
1.5.1 Related Survey Papers
Paper | Publication | Link |
---|---|---|
Large Language Models for Generative Recommendation: A Survey and Visionary Discussions | Arxiv 2023 | [Link] |
Large Language Models for Information Retrieval: A Survey | Arxiv 2023 | [Link] |
When Large Language Models Meet Personalization: Perspectives of Challenges and Opportunities | Arxiv 2023 | [Link] |
Recommender Systems in the Era of Large Language Models (LLMs) | Arxiv 2023 | [Link] |
A Survey on Large Language Models for Recommendation | Arxiv 2023 | [Link] |
Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems | Arxiv 2023 | [Link] |
Self-Supervised Learning for Recommender Systems: A Survey | Arxiv 2022 | [Link] |
1.5.2 Other Papers
Paper | Publication | Link |
---|---|---|
Evaluation of Synthetic Datasets for Conversational Recommender Systems | Arxiv 2023 | [Link] |
Generative Recommendation: Towards Next-generation Recommender Paradigm | Arxiv 2023 | [Link] |
Towards Personalized Prompt-Model Retrieval for Generative Recommendation | Arxiv 2023 | [Link] |
Generative Next-Basket Recommendation | RecSys 2023 | [Link] |
1.6 Paper Pending List: to be Added to Our Survey Paper
Name | Paper | LLM Backbone (Largest) | LLM Tuning Strategy | Publication | Link |
---|---|---|---|---|---|
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences | RecSys 2023 | [Link] | |||
LLM4Rec: Large Language Models for Recommendation via A Lightweight Tuning Framework | RecSys 2023 | [Link] | |||
CR-SoRec: BERT driven Consistency Regularization for Social Recommendation | RecSys 2023 | [Link] | |||
Leveraging Large Language Models for Sequential Recommendation | RecSys 2023 | [Link] | |||
Beyond Labels: Leveraging Deep Learning and LLMs for Content Metadata | RecSys 2023 | [Link] | |||
GenRec | GenRec: Large Language Model for Generative Recommendation | LLaMA (7B) | LoRA | Arxiv 2023 | [Link] |
Towards Personalized Cold-Start Recommendation with Prompts | [Link] | ||||
Prompt Tuning Large Language Models on Personalized Aspect Extraction for Recommendations | [Link] | ||||
Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations | [Link] | ||||
TIGER | Recommender Systems with Generative Retrieval | NeurIPS 2023 | [Link] | ||
Better Generalization with Semantic IDs: A case study in Ranking for Recommendations | Arxiv 2023 | [Link] | |||
Product Information Extraction using ChatGPT | Arxiv 2023 | [Link] | |||
Enhancing Job Recommendation through LLM-based Generative Adversarial Networks | Arxiv 2023 | [Link] | |||
Generative Job Recommendations with Large Language Model | Arxiv 2023 | [Link] | |||
LLM-Rec: Personalized Recommendation via Prompting Large Language Models | Arxiv 2023 | [Link] | |||
Heterogeneous Knowledge Fusion: A Novel Approach for Personalized Recommendation via LLM | RecSys 2023 | [Link] | |||
A Large Language Model Enhanced Conversational Recommender System | Arxiv 2023 | [Link] | |||
LLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following | Arxiv 2023 | [Link] | |||
The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations | EAAMO 2023 | [Link] | |||
BERT4CTR: An Efficient Framework to Combine Pre-trained Language Model with Non-textual Features for CTR Prediction | KDD 2023 | [Link] | |||
A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems | Arxiv 2023 | [Link] | |||
Knowledge Prompt-tuning for Sequential Recommendation | Arxiv 2023 | [Link] | |||
Learning Supplementary NLP Features for CTR Prediction in Sponsored Search | KDD 2022 | [Link] | |||
Leveraging Large Language Models for Pre-trained Recommender Systems | Arxiv 2023 | [Link] | |||
Enhancing Recommender Systems with Large Language Model Reasoning Graphs | Arxiv 2023 | [Link] | |||
Large Language Models as Zero-Shot Conversational Recommenders | CIKM 2023 | [Link] | |||
RAH! RecSys-Assistant-Human: A Human-Central Recommendation Framework with Large Language Models | Arxiv 2023 | [Link] | |||
TBIN: Modeling Long Textual Behavior Data for CTR Prediction | Arxiv 2023 | [Link] | |||
LKPNR: LLM and KG for Personalized News Recommendation Framework | Arxiv 2023 | [Link] | |||
LLMRec: Benchmarking Large Language Models on Recommendation Task | Arxiv 2023 | [Link] | |||
ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation | Arxiv 2023 | [Link] | |||
Prompt Distillation for Efficient LLM-based Recommendation | CIKM 2023 | [Link] | |||
RecMind: Large Language Model Powered Agent For Recommendation | Arxiv 2023 | [Link] | |||
Text Matching Improves Sequential Recommendation by Reducing Popularity Biases | CIKM 2023 | [Link] | |||
Zero-Shot Recommendations with Pre-Trained Large Language Models for Multimodal Nudging | Arxiv 2023 | [Link] | |||
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations | Arxiv 2023 | [Link] | |||
Evaluating ChatGPT as a Recommender System: A Rigorous Approach | Arxiv 2023 | [Link] | |||
Unveiling Challenging Cases in Text-based Recommender Systems | RecSys Workshop 2023 | [Link] | |||
Retrieval-augmented Recommender System: Enhancing Recommender Systems with Large Language Models | RecSys Doctoral Symposium 2023 | [Link] | |||
User-Centric Conversational Recommendation: Adapting the Need of User with Large Language Models | RecSys Doctoral Symposium 2023 | [Link] | |||
An Unified Search and Recommendation Foundation Model for Cold-Start Scenario | CIKM 2023 | [Link] | |||
JobRecoGPT -- Explainable job recommendations using LLMs | Arxiv 2023 | [Link] | |||
Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling | Arxiv 2023 | [Link] | |||
Towards Efficient and Effective Adaptation of Large Language Models for Sequential Recommendation | Arxiv 2023 | [Link] | |||
Lending Interaction Wings to Recommender Systems with Conversational Agents | NeurIPS 2023 | [Link] |
A Multi-facet Paradigm to Bridge Large Language Model and Recommendation | Arxiv 2023 | [Link] | |||
MuseChat: A Conversational Music Recommendation System for Videos | Arxiv 2023 | [Link] | |||
EcomGPT: Instruction-tuning Large Language Models with Chain-of-Task Tasks for E-commerce | Arxiv 2023 | [Link] | |||
ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction | Arxiv 2023 | [Link] | |||
AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems | Arxiv 2023 | [Link] | |||
Factual and Personalized Recommendations using Language Models and Reinforcement Learning | Arxiv 2023 | [Link] | |||
On Generative Agents in Recommendation | Arxiv 2023 | [Link] | |||
Leveraging Large Language Models (LLMs) to Empower Training-Free Dataset Condensation for Content-Based Recommendation | Arxiv 2023 | [Link] | |||
Collaborative Contextualization: Bridging the Gap between Collaborative Filtering and Pre-trained Language Model | Arxiv 2023 | [Link] | |||
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models | Arxiv 2023 | [Link] | |||
Language Models As Semantic Indexers | Arxiv 2023 | [Link] | |||
Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language | Arxiv 2023 | [Link] | |||
MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation | MM 2023 | [Link] | |||
Representation Learning with Large Language Models for Recommendation | Arxiv 2023 | [Link] | |||
One Model for All: Large Language Models are Domain-Agnostic Recommendation Systems | Arxiv 2023 | [Link] | |||
Beyond Yes and No: Improving Zero-Shot LLM Rankers via Scoring Fine-Grained Relevance Labels | Arxiv 2023 | [Link] | |||
Multiple Key-value Strategy in Recommendation Systems Incorporating Large Language Model | CIKM GenRec 2023 | [Link] | |||
LightLM: A Lightweight Deep and Narrow Language Model for Generative Recommendation | Arxiv 2023 | [Link] | |||
Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation | Arxiv 2023 | [Link] | |||
Conversational Recommender System and Large Language Model Are Made for Each Other in E-commerce Pre-sales Dialogue | Arxiv 2023 | [Link] | |||
CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation | Arxiv 2023 | [Link] | |||
ALT: Towards Fine-grained Alignment between Language and CTR Models for Click-Through Rate Prediction | Arxiv 2023 | [Link] | |||
Large Language Model Can Interpret Latent Space of Sequential Recommender | Arxiv 2023 | [Link] | |||
BTRec: BERT-Based Trajectory Recommendation for Personalized Tours | Arxiv 2023 | [Link] | |||
Large Multi-modal Encoders for Recommendation | Arxiv 2023 | [Link] | |||
Collaborative Large Language Model for Recommender Systems | Arxiv 2023 | [Link] | |||
Recommendations by Concise User Profiles from Review Text | Arxiv 2023 | [Link] |
Datasets and benchmarks for LLM-related RS topics should retain the original semantic/textual features rather than anonymized feature IDs.
Dataset | RS Scenario | Link |
---|---|---|
Reddit-Movie | Conversational & Movie | [Link] |
Amazon-M2 | E-commerce | [Link] |
MovieLens | Movie | [Link] |
Amazon | E-commerce | [Link] |
BookCrossing | Book | [Link] |
GoodReads | Book | [Link] |
Anime | Anime | [Link] |
PixelRec | Short Video | [Link] |
Netflix | Movie | [Link] |
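The point above about preserving semantic features can be illustrated with a minimal sketch (hypothetical toy data, not from any dataset listed here): anonymized ID features are opaque to a language model, whereas raw textual features can be serialized directly into a prompt.

```python
# Toy interaction log with only anonymized IDs: an LLM cannot interpret these.
id_only = {"user_id": 102, "item_id": 7, "genre_id": 3}

# The same interaction with the original semantic/textual features preserved.
textual = {"user_age": 25, "item_title": "The Matrix", "genre": "Sci-Fi"}

def to_prompt(features: dict) -> str:
    """Serialize a feature dict into a natural-language prompt fragment."""
    return ", ".join(f"{k.replace('_', ' ')}: {v}" for k, v in features.items())

print(to_prompt(textual))
# -> user age: 25, item title: The Matrix, genre: Sci-Fi
```

Only the textual version carries meaning an LLM can reason over, which is why the datasets below keep their original item titles, descriptions, and reviews.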
Benchmarks | Website Link | Paper |
---|---|---|
Amazon-M2 (KDD Cup 2023) | [Link] | [Paper] |
OpenP5 | [Link] | [Paper] |
TABLET | [Link] | [Paper] |
Repo Name | Maintainer |
---|---|
rs-llm-paper-list | wwliu555 |
awesome-recommend-system-pretraining-papers | archersama |
LLM4Rec | WLiK |
Awesome-LLM4RS-Papers | nancheng58 |
LLM4IR-Survey | RUC-NLPIR |
👍 Contributions to this repository are welcome.
If you come across relevant resources or find errors in this repository, feel free to open an issue or submit a pull request.
Contact: chiangel [DOT] ljh [AT] gmail [DOT] com
If you find this repository or our survey paper helpful, please cite:
@article{lin2023can,
title={How Can Recommender Systems Benefit from Large Language Models: A Survey},
author={Lin, Jianghao and Dai, Xinyi and Xi, Yunjia and Liu, Weiwen and Chen, Bo and Li, Xiangyang and Zhu, Chenxu and Guo, Huifeng and Yu, Yong and Tang, Ruiming and others},
journal={arXiv preprint arXiv:2306.05817},
year={2023}
}