The rapid evolution of large language models (LLMs) has created a need for comprehensive assessments of their performance across various dimensions. LFED is a dataset designed to evaluate the capabilities of LLMs in understanding and reasoning over long literary fiction. The dataset comprises 95 literary novels, either originally written in Chinese or translated into Chinese, covering a wide range of topics across different centuries. We developed a taxonomy of 8 question categories to guide the creation of 1,304 questions. Our experiments with various state-of-the-art LLMs show that questions over long literary fiction remain a significant challenge, with ChatGPT achieving an accuracy of only 57.08% in the zero-shot setting.
Numerous datasets have been developed for machine reading comprehension tasks. However, these passage-based datasets are inadequate for evaluating the advanced capabilities of LLMs. LFED addresses this gap by providing a challenging dataset with long documents that require complex reasoning, such as character relationship reasoning and counterfactual reasoning.
LFED consists of a diverse collection of 95 literary novels, either originally written in Chinese or translated into Chinese. The dataset includes 1,304 questions across 8 categories:
- Character Relationships
- Characterization
- Literary Style
- Role Behavior
- Event Relations
- Fiction Plot
- Background Topic
- Counterfactual Reasoning
The questions are designed to cover the core aspects of literary fiction, including content, character relationships, storyline, writing techniques, and thematic values. This systematic taxonomy ensures the questions are both comprehensive and challenging.
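For concreteness, the taxonomy maps naturally onto a small enumeration when working with the dataset programmatically. A minimal sketch (the identifier names below are ours for illustration and not part of the official release):

```python
from enum import Enum

class QuestionCategory(Enum):
    """The eight LFED question categories.

    Identifier names are ours for illustration; the official release
    may label the categories differently.
    """
    CHARACTER_RELATIONSHIPS = "Character Relationships"
    CHARACTERIZATION = "Characterization"
    LITERARY_STYLE = "Literary Style"
    ROLE_BEHAVIOR = "Role Behavior"
    EVENT_RELATIONS = "Event Relations"
    FICTION_PLOT = "Fiction Plot"
    BACKGROUND_TOPIC = "Background Topic"
    COUNTERFACTUAL_REASONING = "Counterfactual Reasoning"
```

An explicit enumeration like this makes per-category accuracy breakdowns straightforward to compute during evaluation.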
The dataset was collected through a rigorous crowdsourcing process with multiple quality-control steps. The novels were selected for their complex narratives, character development, profound themes, and rich linguistic expression. We manually checked each novel to ensure it met specific criteria, including literary value, genre diversity, and alignment with contemporary human values.
- Collecting Literary Fictions: Novels were selected based on recommendations from Douban, a Chinese community site for book and movie reviews.
- Creating Questions: Crowd workers read the novels and created questions following the defined taxonomy.
- Annotating Questions: Questions were annotated and reviewed to ensure quality.
- Expert Checking: Domain experts reviewed the questions for accuracy.
- Penultimate Dataset: Agreement checks across annotators were conducted to ensure consistency (a sketch of such a check follows this list).
- Final Dataset: The dataset was filtered and finalized for public release.
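The summary above does not name the agreement metric used in the penultimate step; as one illustration only, a mean pairwise percent-agreement check over the answer options chosen by different annotators might look like this:

```python
from itertools import combinations

def pairwise_agreement(labels_per_annotator):
    """Mean pairwise agreement across annotators.

    `labels_per_annotator` holds one equal-length list of labels per
    annotator (e.g. the answer option each annotator selected for each
    question). Percent agreement is an illustrative assumption here,
    not necessarily the check used for LFED.
    """
    n_items = len(labels_per_annotator[0])
    scores = []
    for a, b in combinations(labels_per_annotator, 2):
        matches = sum(x == y for x, y in zip(a, b))
        scores.append(matches / n_items)
    return sum(scores) / len(scores)

# Three annotators answering five questions:
print(pairwise_agreement([
    ["A", "B", "C", "D", "A"],
    ["A", "B", "C", "A", "A"],
    ["A", "B", "D", "D", "A"],
]))  # -> 0.733...
```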
LFED can serve as a comprehensive and challenging benchmark for assessing the capabilities of LLMs in fact understanding, logical reasoning, contextual comprehension, common-sense reasoning, and value judgment.
We evaluated several state-of-the-art LLMs on LFED in zero- and few-shot settings. The results indicate that comprehension of long literary fiction remains a significant challenge for these models; ChatGPT, for example, achieves an accuracy of only 57.08% in the zero-shot setting.
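For reference, a zero-shot multiple-choice accuracy of this kind can be computed with a loop like the one below. This is a sketch under assumptions: the field names are a hypothetical schema rather than the official LFED format, and `query_model` is a placeholder for whichever model API is under test.

```python
def evaluate_zero_shot(questions, query_model):
    """Zero-shot accuracy on multiple-choice questions.

    `questions`: list of dicts with hypothetical keys "question",
    "options" (a mapping like {"A": ..., "B": ...}), and "answer".
    `query_model`: placeholder callable that sends a prompt to the
    LLM under test and returns its raw text response.
    """
    correct = 0
    for q in questions:
        option_text = "\n".join(f"{k}. {v}" for k, v in q["options"].items())
        prompt = (
            f"{q['question']}\n{option_text}\n"
            "Answer with the letter of the correct option."
        )
        response = query_model(prompt)
        # Treat the first option letter appearing in the response as
        # the model's prediction.
        predicted = next((c for c in response if c in q["options"]), None)
        correct += predicted == q["answer"]
    return correct / len(questions)
```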
The LFED dataset is publicly available at https://github.com/tjunlp-lab/LFED.git.
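Assuming the repository ships the questions as a JSON file (a guess on our part; the repository README documents the actual layout and file names), loading it could be as simple as:

```python
import json
from pathlib import Path

# After cloning: git clone https://github.com/tjunlp-lab/LFED.git
# The path below is hypothetical; check the repository for the real one.
data_path = Path("LFED/data/questions.json")
questions = json.loads(data_path.read_text(encoding="utf-8"))
print(f"Loaded {len(questions)} questions")
```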
For any queries or contributions, please contact the corresponding author at linhaoyu@tju.edu.cn.