
ACL 2023: One cannot stand for everyone! Leveraging Multiple User Simulators to train Task-oriented Dialogue Systems.


kiseliu/must


Code for the paper "One cannot stand for everyone! Leveraging Multiple User Simulators to train Task-oriented Dialogue Systems" (ACL 2023).

Dataset Preparation

Training data for the user simulators:

  • GPT user simulator: data/multiwoz-master/data/multi-woz/rest_usr_simulator_goal_mwz.json.
  • GPT$_{\mathrm{IL}}$ user simulator: data/multiwoz-master/data/multi-woz/rest_usr_simulator_goal_gpt_il.json.
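The goal files above are plain JSON, so they can be inspected before training. The sketch below is illustrative only: it writes and reads a toy goal file whose structure (dialogue-id keys mapping to a restaurant-domain goal) is an assumption, not the actual schema of `rest_usr_simulator_goal_mwz.json`.

```python
import json
import tempfile

# Hypothetical goal entry; the real schema of the repo's goal files may differ.
toy_goals = {
    "sng0073": {
        "goal": {
            "restaurant": {
                "info": {"food": "italian", "area": "centre"},
                "request": ["phone", "address"],
            }
        }
    }
}

# Write the toy file, then re-read it the way a training script might.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(toy_goals, f)
    path = f.name

with open(path) as f:
    goals = json.load(f)

print(len(goals))  # 1
```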

Training & Inference

Train the user simulators

```shell
# train GPT, GPT_IL user simulators
python -u simulator_gpt_act/model.py -mode train
```

Train the system agents

```shell
# train the systems with AgenX, RNNX user simulators
nohup python -u run_rl_training.py > rl_repro.log 2>&1 &

# train the systems with GPT, GPT_IL user simulators
nohup python -u run_rl_training_with_gpt.py > rl_gpt.log 2>&1 &

# train the systems with MUST
nohup python -u run_must_training.py > rl_must.log 2>&1 &
```
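The MUST$_{\mathrm{adaptive}}$ strategy balances boosting adaption (interact more with simulators the system handles worst) against uniform adaption (keep visiting all simulators to avoid catastrophic forgetting). Below is a minimal sketch of that selection idea, not the authors' implementation in `run_must_training.py`: the simulator names, the reward signal, and the mixing weight `epsilon` are all illustrative assumptions.

```python
import random

def select_simulator(avg_rewards, epsilon=0.3, rng=random):
    """Pick the next user simulator for the ToD system to interact with.

    With probability `epsilon`, sample uniformly (uniform adaption, guarding
    against catastrophic forgetting); otherwise pick the simulator the system
    currently performs worst against (boosting adaption). `avg_rewards` maps
    simulator name -> running average dialogue reward.
    """
    names = list(avg_rewards)
    if rng.random() < epsilon:
        return rng.choice(names)             # uniform adaption
    return min(names, key=avg_rewards.get)   # boost the weakest pairing

# Toy usage with made-up simulator names and rewards.
rewards = {"Agenda": 0.8, "RNN": 0.6, "GPT": 0.4, "GPT_IL": 0.7}
rng = random.Random(0)
picks = [select_simulator(rewards, rng=rng) for _ in range(100)]
```

In this toy run the system is weakest against "GPT" (lowest reward), so most interactions go to it while the occasional uniform draw keeps the other simulators in rotation.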

Evaluate the system agents

```shell
bash evaluation_matrix.sh
```

Citation

```bibtex
@inproceedings{liu-etal-2023-one,
    title = "One Cannot Stand for Everyone! Leveraging Multiple User Simulators to train Task-oriented Dialogue Systems",
    author = "Liu, Yajiao  and
      Jiang, Xin  and
      Yin, Yichun  and
      Wang, Yasheng  and
      Mi, Fei  and
      Liu, Qun  and
      Wan, Xiang  and
      Wang, Benyou",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.1",
    doi = "10.18653/v1/2023.acl-long.1",
    pages = "1--21",
    abstract = "User simulators are agents designed to imitate human users; recent advances have found that Task-oriented Dialogue (ToD) systems optimized toward a user simulator could better satisfy the need of human users. However, this might result in a sub-optimal ToD system if it is tailored to only one \textit{ad hoc} user simulator, since human users can behave differently. In this paper, we propose a framework called MUST to optimize ToD systems via leveraging Multiple User SimulaTors. The main challenges of implementing MUST fall in 1) how to adaptively determine which user simulator to interact with the ToD system at each optimization step, since the ToD system might be over-fitted to some specific user simulators, and simultaneously under-fitted to some others; 2) how to avoid catastrophic forgetting of the adaption for a simulator that is not selected for several consecutive optimization steps. To tackle these challenges, we formulate MUST as a Multi-armed bandits (MAB) problem and provide a method called MUST$_{\mathrm{adaptive}}$ that balances \textit{i}) the \textit{boosting adaption} for adaptive interactions between different user simulators and the ToD system and \textit{ii}) the \textit{uniform adaption} to avoid the catastrophic forgetting issue. With both automatic evaluations and human evaluations, our experimental results on MultiWOZ show that the dialogue system trained by MUST achieves a better performance than those trained by a single user simulator. It also has a better generalization ability when testing with unseen user simulators.",
}
```

Acknowledgement

This code builds on the released code of the EMNLP-IJCNLP 2019 paper "How to Build User Simulators to Train RL-based Dialog Systems".
