
LLM-rationality

This repository contains the code for our paper: Large Language Models Assume People are More Rational than We Really are

Our paper covers a forward modeling experiment (simulations and predictions) and an additional inverse modeling experiment (inferences from observing others). The forward_model, data, analysis, and open_sourced directories correspond to the forward modeling experiment; the inverse_model directory corresponds to the inverse modeling experiment.

💻Running Scripts

To run the closed-source model experiments and collect data, use the following commands:

Data

Download the choices13k dataset used in the forward modeling experiments from https://github.com/jcpeterson/choices13k
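For example, a minimal way to fetch the dataset is to clone it next to this repository (where the scripts expect the data to live is an assumption, so adjust paths as needed):

# Clone the choices13k dataset repository alongside LLM-rationality (assumed location)
git clone https://github.com/jcpeterson/choices13k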

forward modeling:

sh scripts/run_forward.sh

inverse modeling:

./inverse_model/run_inverse.sh
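The closed-source model experiments call external LLM APIs, so the corresponding API keys need to be available before running the scripts. A minimal sketch, assuming the scripts read keys from standard environment variables (the exact variable names and providers are assumptions):

# Export API keys for the closed-source models (variable names are assumptions)
export OPENAI_API_KEY="sk-..."         # GPT-family models
export ANTHROPIC_API_KEY="sk-ant-..."  # Claude-family models

# Then run a data collection script, e.g. forward modeling:
sh scripts/run_forward.sh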

To run analyses, use the following commands:

forward modeling:

sh scripts/run_forward_analysis.sh

inverse modeling:

./inverse_model/run_analysis.sh

🤔Questions?

If you have any questions about the paper or code, feel free to email Ryan (ryanliu@princeton.edu) or Jiayi (jiayig@princeton.edu).

🔗Citation

Please cite our paper if you find the repo helpful in your work:

@article{liu2024large,
  title={Large Language Models Assume People are More Rational than We Really are},
  author={Liu, Ryan and Geng, Jiayi and Peterson, Joshua C and Sucholutsky, Ilia and Griffiths, Thomas L},
  journal={arXiv preprint arXiv:2406.17055},
  year={2024}
}
