A benchmark for multi-scale assessment of the quality of content generated by social AI characters.

Social LLM Benchmark-Generation

Pipeline

  1. We have m characters in total, each with a simulated profile and an exact score for each personality trait.
  2. For each character we have n real tweets, each tied to a piece of potential knowledge; every tweet is strongly related to the character's resume, personality traits, and potential knowledge.
  3. We construct the request body from the information above.
  4. LLMs are asked to publish tweets based on the resulting prompt.
  5. After collecting the LLM responses, we evaluate model performance according to the following criteria:
    1. Overlap.
      1. BLEU.
      2. ROUGE.
      3. Distinct.
    2. LLM Judge.
      1. Resume related (+1).
      2. Personality related (+1).
      3. Potential knowledge related (+1).
    3. Big Five personality consistency.
      1. Estimate the character's personality trait scores from the n tweets generated by the LLM.
      2. Compare these estimated scores against the ground-truth personality trait scores.
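To make the last two criteria concrete, here is a minimal, illustrative sketch of a Distinct-n score and a Big Five consistency check. This is not the benchmark's actual implementation; the function names, the whitespace tokenization, and the use of mean absolute error as the consistency measure are all assumptions for illustration.

```python
from collections import Counter

def distinct_n(texts, n=2):
    """Distinct-n: fraction of unique n-grams among all n-grams
    across the generated tweets (higher means more diverse output).
    Tokenization here is simple whitespace splitting (an assumption)."""
    ngrams = Counter()
    for text in texts:
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def trait_consistency(predicted, ground_truth):
    """Mean absolute error between the trait scores estimated from the
    generated tweets and the ground-truth scores (lower is better).
    MAE is one plausible choice; the benchmark may use another measure."""
    return sum(abs(predicted[t] - ground_truth[t]) for t in TRAITS) / len(TRAITS)
```

For example, `distinct_n(["the cat sat", "the cat ran fast"], 2)` sees 5 bigrams of which 4 are unique, giving 0.8.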

Usage

python main.py \
    --platform zhipuai \
    --base-url https://open.bigmodel.cn/api/paas/v4 \
    --api-key 120985c00120985c00120985c00 \
    --model glm-4-flash \
    --max-tokens 1024 \
    --temperature 0.6 \
    --top-p 0.7 \
    --platform-critic openai \
    --base-url-critic https://api.openai.com/v1 \
    --api-key-critic 120985c00120985c00120985c00 \
    --model-critic gpt-4o \
    --max-tokens-critic 1024 \
    --temperature-critic 0.01 \
    --convs-per-chunk 10 \
    --qps 30 \
    --qps-critic 30 \
    --max-retry-times 5
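
The `--qps`, `--qps-critic`, and `--max-retry-times` flags suggest rate-limited API calls with retries. A minimal sketch of how such a loop might work is below; `send_fn` is a hypothetical stand-in for the actual API call, and the pacing and backoff strategy are assumptions, not the repository's implementation.

```python
import time

def call_with_qps(payloads, send_fn, qps=30, max_retry_times=5):
    """Send requests at no more than `qps` per second, retrying each
    failed request up to `max_retry_times` times with exponential backoff.
    `send_fn` is a placeholder for the real API call."""
    interval = 1.0 / qps
    responses = []
    for payload in payloads:
        for attempt in range(max_retry_times):
            try:
                responses.append(send_fn(payload))
                break
            except Exception:
                if attempt == max_retry_times - 1:
                    raise  # exhausted retries; surface the error
                time.sleep(2 ** attempt)  # back off before retrying
        time.sleep(interval)  # simple pacing to stay under the QPS cap
    return responses
```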

Reference

To cite this work, please use:

@misc{huang2024orcaenhancingroleplayingabilities,
      title={Orca: Enhancing Role-Playing Abilities of Large Language Models by Integrating Personality Traits}, 
      author={Yuxuan Huang},
      year={2024},
      eprint={2411.10006},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.10006}, 
}

License

OrcaBench is released under the Apache-2.0 license; see LICENSE for details.
