
Language Model Evolution: An Iterated Learning Perspective (iICL)

This repository contains all the LLM experiments from our paper Bias Amplification in Language Model Evolution: An Iterated Learning Perspective.

Abstract

With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase. Notably, recent advancements in multi-round self-improving methods allow LLMs to generate new examples for training subsequent models. At the same time, multi-agent LLM systems, involving automated interactions among agents, are also increasing in prominence. Thus, in both the short and long term, LLMs may actively engage in an evolutionary process. We draw parallels between the behavior of LLMs and the evolution of human culture, as the latter has been extensively studied by cognitive scientists for decades. Our approach leverages Iterated Learning (IL), a Bayesian framework that elucidates how subtle biases are magnified during human cultural evolution, to explain some behaviors of LLMs. This paper outlines key characteristics of agents' behavior in the Bayesian-IL framework, including predictions that are supported by experimental verification with various LLMs. This theoretical framework could help to more effectively predict and guide the evolution of LLMs in desired directions.

Interesting findings

  • Letting an LLM learn from data samples generated by another LLM (possibly itself) is attracting more and more attention nowadays.

  • Such a procedure can be described by a Bayesian iterated learning framework with an interaction filter included: biases in the prior are amplified generation by generation, while a well-designed interaction phase can mitigate this. A toy simulation of this effect is sketched after this list.
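
The mechanism can be illustrated with a small self-contained simulation (a toy sketch, not the repo's code): hypotheses are Bernoulli coins, each generation a MAP learner fits the data produced by the previous generation and then generates data for the next one, and a crude stand-in for the interaction phase re-injects a few ground-truth examples. The hypothesis set, prior, and filter rule are illustrative assumptions.

```python
# Toy Bayesian iterated learning: a weak prior bias gets amplified across
# generations of MAP learners, and a simple "interaction" filter mitigates it.
import numpy as np

rng = np.random.default_rng(0)

thetas = np.array([0.3, 0.5, 0.7])                 # candidate hypotheses (coin biases)
log_prior = np.log(np.array([0.36, 0.32, 0.32]))   # mild prior bias toward theta = 0.3
N = 20                                             # examples passed between generations
GENERATIONS = 30
CHAINS = 500

def log_likelihood(data, theta):
    k = data.sum()
    return k * np.log(theta) + (len(data) - k) * np.log(1 - theta)

def run_chain(use_filter=False, true_theta=0.7):
    data = rng.binomial(1, true_theta, size=N)      # generation 0 sees "real" data
    for _ in range(GENERATIONS):
        log_post = log_prior + np.array([log_likelihood(data, t) for t in thetas])
        h = thetas[np.argmax(log_post)]             # MAP learner picks a hypothesis
        data = rng.binomial(1, h, size=N)           # data handed to the next generation
        if use_filter:
            # crude stand-in for the interaction phase: re-inject a few
            # ground-truth examples so the chain stays anchored to the task
            data[: N // 4] = rng.binomial(1, true_theta, size=N // 4)
    return h

for flt in (False, True):
    final = np.array([run_chain(use_filter=flt) for _ in range(CHAINS)])
    print(f"filter={flt}: fraction of chains ending on the prior-favoured "
          f"theta=0.3: {np.mean(final == 0.3):.2f}")
```

With these settings, a much larger fraction of chains ends up on the prior-favoured hypothesis without the filter than with it, even though the initial data came from theta = 0.7.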

About this repo

The experiments we considered here are simple but quite informative: we can directly observe many details during evolution, such as the logits of all hypotheses, statistics of the transferred data, etc. All the experiments in the paper (including different settings, seeds, etc.) cost less than 5 dollars in API calls, but designing them cost me more than 1024 hairs. You can simply open a Jupyter notebook, run the code, and find the chat log and results in the corresponding folder (usually model_name/experiment_name).
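
Below is a rough sketch of what one "generation" of the in-context loop looks like: the model is prompted with examples produced by the previous generation, asked to produce new ones, and the exchange is logged to a folder. The prompts, model name, output format, and folder layout here are illustrative assumptions; the actual experiment logic lives in the notebooks.

```python
# Illustrative sketch of one iterated in-context learning generation
# (not the repo's notebook code). Requires `pip install openai` and an
# OPENAI_API_KEY in the environment.
import json
import os
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def one_generation(examples, model="gpt-4o-mini", out_dir="gpt/toy_experiment"):
    prompt = ("Here are some input-output examples:\n"
              + "\n".join(examples)
              + "\nWrite 5 new examples in exactly the same format, one per line.")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    reply = resp.choices[0].message.content
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "chat_log.jsonl"), "a") as f:
        f.write(json.dumps({"prompt": prompt, "reply": reply}) + "\n")
    return [line for line in reply.splitlines() if line.strip()]

# seed examples for a hypothetical task; each generation learns in context
# from the data produced by the previous one
examples = ["blicket -> red circle", "dax -> blue square"]
for _ in range(3):
    examples = one_generation(examples)
```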

Requirements

  • API keys for GPT (OpenAI), Claude (Anthropic), and Mistral; a quick key check is sketched below.
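
Before running the notebooks, make sure the provider keys are available. The variable names below follow the conventions of the official Python clients and are an assumption here; adjust them to however the notebooks expect keys to be supplied.

```python
# Quick sanity check that API keys are set before running the notebooks.
# Variable names are assumptions based on the official clients' conventions.
import os

for var in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "MISTRAL_API_KEY"):
    print(f"{var}: {'set' if os.environ.get(var) else 'MISSING'}")
```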

Reference

For technical details and full experimental results, please check our paper.

@inproceedings{ren:iicl,
    author = {Yi Ren and Shangmin Guo and Linlu Qiu and Bailin Wang and Danica J. Sutherland},
    title = {Bias Amplification in Language Model Evolution: An Iterated Learning Perspective},
    year = {2024},
    booktitle = {NeurIPS},
}

Contact

Please contact renyi.joshua@gmail.com if you have any questions about the code.
