HITsz-TMG/awesome-llm-reader
A Repository of Retrieval-augmented LLMs

  • [2023/05] Active Retrieval Augmented Generation. Zhengbao Jiang et al. arXiv. [paper]

  • [2023/05] Augmented Large Language Models with Parametric Knowledge Guiding. Ziyang Luo et al. arXiv. [paper]

  • [2023/05] RET-LLM: Towards a General Read-Write Memory for Large Language Models. Ali Modarressi et al. arXiv. [paper]

  • [2023/05] Query Rewriting for Retrieval-Augmented Large Language Models. Xinbei Ma et al. EMNLP. [paper]

  • [2023/05] Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy. Zhihong Shao et al. EMNLP. [paper]

  • [2023/05] WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences. Xiao Liu et al. KDD. [paper]

  • [2023/07] Chain of Thought Prompting Elicits Knowledge Augmentation. Dingjun Wu et al. arXiv. [paper]

  • [2023/10] Retrieval-Generation Synergy Augmented Large Language Models. Zhangyin Feng et al. arXiv. [paper]

  • [2023/10] FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation. Tu Vu et al. arXiv. [paper]

  • [2023/10] Hexa: Self-Improving for Knowledge-Grounded Dialogue System. Daejin Jo et al. arXiv. [paper]

  • [2023/10] Retrieve Anything To Augment Large Language Models. Peitian Zhang et al. arXiv. [paper]

  • [2023/10] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. Akari Asai et al. arXiv. [paper]

  • [2024/01] DocLLM: A layout-aware generative language model for multimodal document understanding. Dongsheng Wang et al. arXiv. [paper]

  • [2024/03] Uni-SMART: Universal Science Multimodal Analysis and Research Transformer. Hengxing Cai et al. arXiv. [paper]

  • [2024/03] RA-ISF: Learning to Answer and Understand from Retrieval Augmentation via Iterative Self-Feedback. Yanming Liu et al. arXiv. [paper]

📝 Knowledge Preprocessing

  • [2023/09] PDFTriage: Question Answering over Long, Structured Documents. Jon Saad-Falcon et al. arXiv. [paper]

  • [2023/10] Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading. Howard Chen et al. arXiv. [paper]

  • [2023/11] LLatrieval: LLM-Verified Retrieval for Verifiable Generation. Xiaonan Li et al. arXiv. [paper]

  • [2023/11] Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models. Wenhao Yu et al. arXiv. [paper]

  • [2023/11] Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering. Inderjeet Nair et al. arXiv. [paper]

  • [2023/11] Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition. Demiao Lin. arXiv. [paper]

  • [2024/06] LumberChunker: Long-Form Narrative Document Segmentation. André V. Duarte et al. arXiv. [paper]

📈 Evaluation

  • [2023/04] Can ChatGPT-like Generative Models Guarantee Factual Accuracy? On the Mistakes of New Generation Search Engines. Ruochen Zhao et al. arXiv. [paper]

  • [2023/06] ToolQA: A Dataset for LLM Question Answering with External Tools. Yuchen Zhuang et al. arXiv. [paper]

  • [2023/09] Evaluating Large Language Models for Document-grounded Response Generation in Information-Seeking Dialogues. Norbert Braunschweiler et al. arXiv. [paper]

  • [2023/09] Benchmarking Large Language Models in Retrieval-Augmented Generation. Jiawei Chen et al. arXiv. [paper]

  • [2023/10] Understanding Retrieval Augmentation for Long-Form Question Answering. Hung-Ting Chen et al. arXiv. [paper]

  • [2024/01] Corrective Retrieval Augmented Generation. Shi-Qi Yan et al. arXiv. [paper]

  • [2024/01] CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models. Yuanjie Lyu et al. arXiv. [paper]

  • [2024/04] How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs’ internal prior. Kevin Wu et al. arXiv. [paper]

  • [2024/07] Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach. Zhuowan Li et al. arXiv. [paper]

🚀 Efficiency

  • [2022/12] Parallel Context Windows for Large Language Models. Nir Ratner et al. ACL. [paper]

  • [2023/05] Plug-and-Play Knowledge Injection for Pre-trained Language Models. Zhengyan Zhang et al. ACL. [paper]

  • [2023/05] Adapting Language Models to Compress Contexts. Alexis Chevalier et al. arXiv. [paper]

  • [2023/07] Thrust: Adaptively Propels Large Language Models with External Knowledge. Xinran Zhao et al. arXiv. [paper]

  • [2023/10] RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation. Fangyuan Xu et al. arXiv. [paper]

  • [2023/10] Compressing Context to Enhance Inference Efficiency of Large Language Models. Yucheng Li et al. arXiv. [paper]

  • [2023/10] CacheGen: Fast Context Loading for Language Model Applications. Yuhan Liu et al. arXiv. [paper]

  • [2023/10] TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction. Junyi Liu et al. arXiv. [paper]

  • [2023/11] Learning to Filter Context for Retrieval-Augmented Generation. Zhiruo Wang et al. arXiv. [paper]

  • [2024/02] Generative Representational Instruction Tuning. Niklas Muennighoff et al. arXiv. [paper]

  • [2024/02] A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts. Kuang-Huei Lee et al. arXiv. [paper]

  • [2024/02] Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation. Thomas Merth et al. arXiv. [paper]

  • [2024/05] Accelerating Inference of Retrieval-Augmented Generation via Sparse Context Selection. Yun Zhu et al. arXiv. [paper]
