- 2023/03/15: OpenAI released GPT-4, which can be accessed through ChatGPT's Plus service; we view it as the latest version of ChatGPT.
- 2023/04/28: We maintain ChatLog, a dataset that collects ChatGPT responses every day from 2023-03-05 onward. We evaluate ChatGPT's performance on 21 benchmarks over time and find that earlier evaluation results may change at later dates. Based on the collected data, we built OpenChatLog, a search engine for LLM-generated texts. Try our website (accessible if your IP is in China).
- 2023/06/08: We propose Language-Model-as-an-Examiner, a novel benchmarking framework in which the LM serves as a knowledgeable examiner that formulates questions based on its knowledge and evaluates responses in a reference-free manner. Try our dataset LMExamQA and the benchmarking results here.
- 2023/06/16: We are delighted to announce the official release of KoLA, a continuously evolving world knowledge evaluation platform that encompasses a 4-layer cognitive structure and 19 tasks. Our goal is to provide unbiased evaluation results to assist in enhancing the capabilities of knowledge systems and large models. You can participate in the evaluation or provide feedback through the platform at https://kola.xlore.cn/ or via GitHub.
This repository stores Dataset Resources, Evaluation Papers, and Detection Tools for ChatGPT.
-
ChatGPT: A Meta-Analysis after 2.5 Months.
Christoph Leiter, Ran Zhang, Yanran Chen, Jonas Belouadi, Daniil Larionov, Vivian Fresen, Steffen Eger. [abs], 2023.2
-
Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models.
Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Dajiang Zhu, Xiang Li, Ning Qiang, Dinggang Shen, Tianming Liu, Bao Ge. [abs], 2023.4
-
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond.
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu. [abs], 2023.4
-
A Survey on Evaluation of Large Language Models.
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie. [abs], 2023.7
-
GPTEval: A Survey on Assessments of ChatGPT and GPT-4.
Rui Mao, Guanyi Chen, Xulang Zhang, Frank Guerin, Erik Cambria. [abs], 2023.8
-
How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.
Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, Yupeng Wu. [abs],[github], 2023.1
-
ChatGPT: Jack of all trades, master of none.
Jan Kocoń, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocoń, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radliński, Konrad Wojtasik, Stanisław Woźniak and Przemysław Kazienko. [abs],[github], 2023.2
-
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. [abs],[github], 2023.2
-
Is ChatGPT A Good Translator? A Preliminary Study.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Zhaopeng Tu. [abs],[github], 2023.1
-
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective.
Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxin Jiao, Yue Zhang, Xing Xie. [abs],[github], 2023.2
-
An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP).
Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, Lakshmivihari Mareedu. [abs][github], 2023.2
-
Evaluation of ChatGPT as a Question Answering System for Answering Complex Questions.
Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, Guilin Qi. [abs][github], 2023.3
-
Instruction Tuning with GPT-4.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao. [abs][github], 2023.4
-
medAlpaca: Finetuned Large Language Models for Medical Question Answering.
Keno Bressem, Tianyu Han, Shan Chen, et al. [github], 2023.4
-
ChatLog: Recording and Analyzing ChatGPT Across Time.
Shangqing Tu, Chunyang Li, Jifan Yu, Xiaozhi Wang, Lei Hou, Juanzi Li. [abs][github], 2023.4
-
Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs.
Jinyang Li, Binyuan Hui, Ge Qu, Binhua Li, Jiaxi Yang, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Chenhao Ma, Kevin C.C. Chang, Fei Huang, Reynold Cheng, Yongbin Li. [abs][github], 2023.5
-
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets.
Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, Jimmy Xiangji Huang. [abs][github], 2023.5
Data statistics of these resources:
| Paper with Dataset | Task | #Examples |
|---|---|---|
| How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection | QA + Dialog | 40,000 |
| ChatGPT: Jack of all trades, master of none | 25 classification / QA / reasoning tasks | 38,000 |
| Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT | Sentiment Analysis / Paraphrase / NLI | 475 |
| Is ChatGPT A Good Translator? A Preliminary Study | Translation | 5,609 |
| On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective | Robustness | 2,237 |
| An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP) | Reasoning | 1,000 |
| Evaluation of ChatGPT as a Question Answering System for Answering Complex Questions | Complex QA | 194,782 |
| Instruction Tuning with GPT-4 | Instruction Following | 172,000 |
| medAlpaca: Finetuned Large Language Models for Medical Question Answering | Medical QA | 1,500,000 |
| ChatLog: Recording and Analyzing ChatGPT Across Time | 21 NLU and NLG tasks | 73,730 (growing every day) |
| ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models | Essay Writing | 9,647 |
| Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs | Text-to-SQL | 12,751 |
| A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets | NLP Benchmarks | 255,000 |
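The per-task counts in the table above can be tallied mechanically once the datasets are described as records. A minimal sketch, assuming a hypothetical in-memory record layout (the `dataset`/`task`/`examples` field names and the sample rows are illustrative, not the real dataset files):

```python
# Hypothetical sketch: aggregating example counts per task, as in the
# statistics table above. The records are illustrative samples only.
from collections import defaultdict

records = [
    {"dataset": "HC3", "task": "QA + Dialog", "examples": 40_000},
    {"dataset": "ChatLog", "task": "21 NLU and NLG tasks", "examples": 73_730},
    {"dataset": "BIRD", "task": "Text-to-SQL", "examples": 12_751},
]

# Sum example counts per task label.
totals = defaultdict(int)
for rec in records:
    totals[rec["task"]] += rec["examples"]

for task, n in sorted(totals.items()):
    print(f"{task}: {n:,}")
```

With the real dataset files, `records` would instead be read from each resource's release format (JSON, CSV, etc.).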
-
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. [abs],[github], 2023.2
-
ChatGPT: Jack of all trades, master of none.
Jan Kocoń, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocoń, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radliński, Konrad Wojtasik, Stanisław Woźniak and Przemysław Kazienko. [abs],[github], 2023.2
-
How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks.
Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie Zhou, Tao Gui, Qi Zhang, Xuanjing Huang. [abs], 2023.3
-
Consistency Analysis of ChatGPT.
Myeongjun Jang, Thomas Lukasiewicz. [abs], 2023.3
-
Does ChatGPT resemble humans in language use?
Zhenguang G. Cai, David A. Haslett, Xufeng Duan, Shuqi Wang, Martin J. Pickering. [abs], 2023.3
-
A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models.
Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang. [abs], 2023.3
-
Can we trust the evaluation on ChatGPT?
Rachith Aiyappa, Jisun An, Haewoon Kwak, Yong-Yeol Ahn. [abs], 2023.3
-
A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability.
Aiwei Liu, Xuming Hu, Lijie Wen, Philip S. Yu. [abs][github], 2023.3
-
ChatGPT or Grammarly? Evaluating ChatGPT on Grammatical Error Correction Benchmark.
Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, Michael Lyu. [abs], 2023.3
-
Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation.
Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang. [abs], 2023.4
-
Is ChatGPT a Good Sentiment Analyzer? A Preliminary Study.
Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, Rui Xia. [abs], 2023.4
-
A Preliminary Evaluation of ChatGPT for Zero-shot Dialogue Understanding.
Wenbo Pan, Qiguang Chen, Xiao Xu, Wanxiang Che, Libo Qin. [abs], 2023.4
-
Zero-shot Temporal Relation Extraction with ChatGPT.
Chenhan Yuan, Qianqian Xie, Sophia Ananiadou. [abs], 2023.4
-
Can ChatGPT Reproduce Human-Generated Labels? A Study of Social Computing Tasks.
Yiming Zhu, Peixian Zhang, Ehsan-Ul Haq, Pan Hui, Gareth Tyson. [abs], 2023.4
-
ChatGraph: Interpretable Text Classification by Converting ChatGPT Knowledge to Graphs.
Yucheng Shi, Hehuan Ma, Wenliang Zhong, Gengchen Mai, Xiang Li, Tianming Liu, Junzhou Huang. [abs], 2023.5
-
Uncovering the Potential of ChatGPT for Discourse Analysis in Dialogue: An Empirical Study.
Yaxin Fan, Feng Jiang. [abs], 2023.5
-
Evaluating ChatGPT's Performance for Multilingual and Emoji-based Hate Speech Detection.
Mithun Das, Saurabh Kumar Pandey, Animesh Mukherjee. [abs], 2023.5
-
Distilling ChatGPT for Explainable Automated Student Answer Assessment.
Jiazheng Li, Lin Gui, Yuxiang Zhou, David West, Cesare Aloisi, Yulan He. [abs], 2023.5
-
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets.
Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, Jimmy Xiangji Huang. [abs], 2023.5
-
Sentiment Analysis in the Era of Large Language Models: A Reality Check.
Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing. [abs], 2023.5
-
The Two Word Test: A Semantic Benchmark for Large Language Models.
-
Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation.
Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Qianyu He, Rui Xu, Wenhao Huang, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao. [abs], [github], 2023.6
-
ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations.
Hamideh Ghanadian, Isar Nejadgholi, Hussein Al Osman. [abs], 2023.6
-
Metacognitive Prompting Improves Understanding in Large Language Models.
-
LLMeBench: A Flexible Framework for Accelerating LLMs Benchmarking.
Fahim Dalvi, Maram Hasanain, Sabri Boughorbel, Basel Mousi, Samir Abdaljalil, Nizi Nazar, Ahmed Abdelali, Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Ali, Majd Hawasly, Nadir Durrani, Firoj Alam. [abs],[github], 2023.8
-
ZhuJiu: A Multi-dimensional, Multi-faceted Chinese Benchmark for Large Language Models.
Baoli Zhang, Haining Xie, Pengfan Du, Junhao Chen, Pengfei Cao, Yubo Chen, Shengping Liu, Kang Liu, Jun Zhao. [abs],[github], 2023.8
-
A Function Interpretation Benchmark for Evaluating Interpretability Methods.
Sarah Schwettmann, Tamar Rott Shaham, Joanna Materzynska, Neil Chowdhury, Shuang Li, Jacob Andreas, David Bau, Antonio Torralba. [abs], 2023.9
-
Exploring AI Ethics of ChatGPT: A Diagnostic Analysis.
Terry Yue Zhuo, Yujin Huang, Chunyang Chen, Zhenchang Xing. [abs], 2023.2
-
Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech.
Fan Huang, Haewoon Kwak, Jisun An. [abs], 2023.2
-
ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks.
Fabrizio Gilardi, Meysam Alizadeh, Maël Kubli. [abs], 2023.3
-
Chinese Intermediate English Learners outdid ChatGPT in deep cohesion: Evidence from English narrative writing.
Tongquan Zhou, Siyi Cao, Siruo Zhou, Yao Zhang, Aijing He. [abs], 2023.3
-
A Perspectival Mirror of the Elephant: Investigating Language Bias on Google, ChatGPT, Wikipedia, and YouTube.
Queenie Luo, Michael J. Puett, Michael D. Smith. [abs], 2023.3
-
Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study.
Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich. [abs], 2023.3
-
Safety Analysis in the Era of Large Language Models: A Case Study of STPA using ChatGPT.
Yi Qi, Xingyu Zhao, Xiaowei Huang. [abs], 2023.4
-
Investigating Chain-of-thought with ChatGPT for Stance Detection on Social Media.
Bowen Zhang, Xianghua Fu, Daijun Ding, Hu Huang, Yangyang Li, Liwen Jing. [abs], 2023.4
-
Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models.
Emilio Ferrara. [abs], 2023.4
-
Multi-step Jailbreaking Privacy Attacks on ChatGPT.
Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, Yangqiu Song. [abs], 2023.4
-
Toxicity in ChatGPT: Analyzing Persona-assigned Language Models.
Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan. [abs], 2023.4
-
The Self-Perception and Political Biases of ChatGPT.
Jérôme Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, Markus Pauly. [abs], 2023.4
-
Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4.
Kent K. Chang, Mackenzie Cramer, Sandeep Soni, David Bamman. [abs], 2023.5
-
ChatGPT Needs SPADE (Sustainability, PrivAcy, Digital divide, and Ethics) Evaluation: A Review.
Sunder Ali Khowaja, Parus Khuwaja, Kapal Dev. [abs], 2023.5
-
Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting.
Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Ting Song, Yan Xia, Furu Wei. [abs], 2023.5
-
Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation.
Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He. [abs], [github], 2023.5
-
BAD: BiAs Detection for Large Language Models in the context of candidate screening.
Nam Ho Koh, Joseph Plata, Joyce Chai. [abs], 2023.5
-
Knowledge of cultural moral norms in large language models.
Aida Ramezani, Yang Xu. [abs], 2023.6
-
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models.
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li. [abs],[github], 2023.6
-
TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models.
Yue Huang, Qihui Zhang, Philip S. Yu, Lichao Sun. [abs], 2023.6
-
ProPILE: Probing Privacy Leakage in Large Language Models.
Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, Seong Joon Oh. [abs], 2023.7
-
Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models.
Huachuan Qiu, Shuai Zhang, Anqi Li, Hongliang He, Zhenzhong Lan. [abs],[github], 2023.7
-
XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models.
Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, Dirk Hovy. [abs], 2023.8
-
On Large Language Models' Selection Bias in Multi-Choice Questions.
Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, Minlie Huang. [abs], 2023.9
-
SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions.
Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, Minlie Huang. [abs], 2023.9
-
Exploring the Limits of ChatGPT for Query or Aspect-based Text Summarization.
Xianjun Yang, Yan Li, Xinlu Zhang, Haifeng Chen, Wei Cheng. [abs], 2023.2
-
Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search?
Shuai Wang, Harrisen Scells, Bevan Koopman, Guido Zuccon. [abs], 2023.2
-
ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports.
Katharina Jeblick, Balthasar Schachtner, Jakob Dexl, Andreas Mittermeier, Anna Theresa Stüber, Johanna Topalis, Tobias Weber, Philipp Wesp, Bastian Sabel, Jens Ricke, Michael Ingrisch. [abs], 2022.12
-
Cross-Lingual Summarization via ChatGPT.
Jiaan Wang, Yunlong Liang, Fandong Meng, Zhixu Li, Jianfeng Qu, Jie Zhou. [abs], 2023.2
-
ChatGPT as a Factual Inconsistency Evaluator for Abstractive Text Summarization.
Zheheng Luo, Qianqian Xie, Sophia Ananiadou. [abs], 2023.3
-
Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study.
Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich. [abs], 2023.3
-
Comparing Abstractive Summaries Generated by ChatGPT to Real Summaries Through Blinded Reviewers and Text Classification Algorithms.
Mayank Soni, Vincent Wade. [abs], 2023.3
-
Comparative Analysis of CHATGPT and the evolution of language models.
Oluwatosin Ogundare, Gustavo Quiros Araya. [abs], 2023.4
-
Extractive Summarization via ChatGPT for Faithful Summary Generation.
Haopeng Zhang, Xiao Liu, Jiawei Zhang. [abs], 2023.4
-
Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent.
Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, Zhaochun Ren. [abs], [github], 2023.4
-
BIM-GPT: a Prompt-Based Virtual Assistant Framework for BIM Information Retrieval.
Junwen Zheng, Martin Fischer. [abs], 2023.4
-
Uncovering ChatGPT's Capabilities in Recommender Systems.
Sunhao Dai, Ninglu Shao, Haiyuan Zhao, Weijie Yu, Zihua Si, Chen Xu, Zhongxiang Sun, Xiao Zhang, Jun Xu. [abs], [github], 2023.5
-
A First Look at LLM-Powered Generative News Recommendation.
Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu. [abs], 2023.5
-
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models.
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen. [abs], 2023.5
-
WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia.
Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, Monica S. Lam. [abs], [github], 2023.5
-
RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text.
Wangchunshu Zhou, Yuchen Eleanor Jiang, Peng Cui, Tiannan Wang, Zhenxin Xiao, Yifan Hou, Ryan Cotterell, Mrinmaya Sachan. [abs], [github], 2023.5
-
PokemonChat: Auditing ChatGPT for Pokémon Universe Knowledge.
Laura Cabello, Jiaang Li, Ilias Chalkidis. [abs], 2023.6
-
Hybrid Long Document Summarization using C2F-FAR and ChatGPT: A Practical Study.
Guang Lu, Sylvia B. Larcher, Tu Tran. [abs], 2023.6
-
Generative Job Recommendations with Large Language Model.
Zhi Zheng, Zhaopeng Qiu, Xiao Hu, Likang Wu, Hengshu Zhu, Hui Xiong. [abs], 2023.7
-
Assessing the Ability of ChatGPT to Screen Articles for Systematic Reviews.
Eugene Syriani, Istvan David, Gauransh Kumar. [abs], 2023.7
-
L-Eval: Instituting Standardized Evaluation for Long Context Language Models.
Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu. [abs], [github], 2023.7
-
LLM-Rec: Personalized Recommendation via Prompting Large Language Models.
Hanjia Lyu, Song Jiang, Hanqing Zeng, Yinglong Xia, Jiebo Luo. [abs], 2023.7
-
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding.
Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li. [abs], [github], 2023.8
-
Can Large Language Models Understand Real-World Complex Instructions?
Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao. [abs],[github] 2023.9
-
S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models.
Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, Kang Liu. [abs],[github] 2023.10
-
Mathematical Capabilities of ChatGPT.
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, Julius Berner. [abs], 2023.1
-
Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang. [abs], 2023.2
-
A Categorical Archive of ChatGPT Failures.
Ali Borji. [abs], 2023.2
-
An Independent Evaluation of ChatGPT on Mathematical Word Problems (MWP).
Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, Lakshmivihari Mareedu. [abs][github], 2023.2
-
Mind meets machine: Unravelling GPT-4's cognitive psychology.
Sifatkaur, Manmeet Singh, Vaisakh SB, Neetiraj Malviya. [abs], 2023.3
-
Capabilities of GPT-4 on Medical Challenge Problems.
Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, Eric Horvitz. [abs], 2023.3
-
GPT is becoming a Turing machine: Here are some ways to program it.
Ana Jojic, Zhen Wang, Nebojsa Jojic. [abs], 2023.3
-
ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models.
Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, Ben He. [abs], 2023.3
-
Humans in Humans Out: On GPT Converging Toward Common Sense in both Success and Failure.
Philipp Koralus, Vincent Wang-Maścianica. [abs], 2023.3
-
LLMMaps -- A Visual Metaphor for Stratified Evaluation of Large Language Models.
Patrik Puchert, Poonam Poonam, Christian van Onzenoodt, Timo Ropinski. [abs], 2023.4
-
How well do Large Language Models perform in Arithmetic tasks?
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang. [abs], 2023.4
-
Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4.
Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang. [abs], 2023.4
-
ChatGPT-Crawler: Find out if ChatGPT really knows what it's talking about.
-
ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT.
Tianyang Zhong, Yaonai Wei, Li Yang, Zihao Wu, Zhengliang Liu, Xiaozheng Wei, Wenjun Li, Junjie Yao, Chong Ma, Xiang Li, Dajiang Zhu, Xi Jiang, Junwei Han, Dinggang Shen, Tianming Liu, Tuo Zhang. [abs], 2023.4
-
Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation.
Jinglong Gao, Xiao Ding, Bing Qin, Ting Liu. [abs],[github] 2023.5
-
StructGPT: A General Framework for Large Language Model to Reason over Structured Data.
Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, Ji-Rong Wen. [abs], 2023.5
-
Chain-of-Symbol Prompting Elicits Planning in Large Language Models.
Hanxu Hu, Hongyuan Lu, Huajian Zhang, Wai Lam, Yue Zhang. [abs], 2023.5
-
Tree of Thoughts: Deliberate Problem Solving with Large Language Models.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan. [abs],[github] 2023.5
-
Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases.
Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, Soujanya Poria. [abs], 2023.5
-
Adaptive Chameleon or Stubborn Sloth: Unraveling the Behavior of Large Language Models in Knowledge Conflicts.
Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu Su. [abs], 2023.5
-
Can ChatGPT Defend the Truth? Automatic Dialectical Evaluation Elicits LLMs' Deficiencies in Reasoning.
Boshi Wang, Xiang Yue, Huan Sun. [abs], 2023.5
-
GPT-3.5 vs GPT-4: Evaluating ChatGPT's Reasoning Performance in Zero-shot Learning.
Jessica López Espejel, El Hassane Ettifouri, Mahaman Sanoussi Yahaya Alassan, El Mehdi Chouham, Walid Dahhane. [abs], 2023.5
-
LogiCoT: Logical Chain-of-Thought Instruction-Tuning Data Collection with GPT-4.
Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, Yue Zhang. [abs], 2023.5
-
Enabling Large Language Models to Generate Text with Citations.
Tianyu Gao, Howard Yen, Jiatong Yu, Danqi Chen. [abs],[github] 2023.5
-
Why Does ChatGPT Fall Short in Providing Truthful Answers?
Shen Zheng, Jie Huang, Kevin Chen-Chuan Chang. [abs], 2023.4
-
Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views.
Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, Erik Cambria. [abs], 2023.6
-
Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation.
Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, Haifeng Wang. [abs],[github] 2023.7
-
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models.
Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang. [abs],[github] 2023.7
-
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model with Knowledge Graph.
Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Heung-Yeung Shum, Jian Guo. [abs] 2023.7
-
Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models.
Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen. [abs] 2023.8
-
Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions.
Pouya Pezeshkpour, Estevam Hruschka. [abs] 2023.8
-
Evaluating Large Language Models on Graphs: Performance Insights and Comparative Analysis.
-
Large Language Models on the Chessboard: A Study on ChatGPT's Formal Language Comprehension and Complex Reasoning Skills.
Mu-Tien Kuo, Chih-Chung Hsueh, Richard Tzong-Han Tsai. [abs] 2023.8
-
MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback.
Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji. [abs],[github] 2023.9
-
NLPBench: Evaluating Large Language Models on Solving NLP Problems.
Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, Irene Li. [abs],[github] 2023.9
-
Evaluating Cognitive Maps and Planning in Large Language Models with CogEval.
Ida Momennejad, Hosein Hasanbeig, Felipe Vieira, Hiteshi Sharma, Robert Osazuwa Ness, Nebojsa Jojic, Hamid Palangi, Jonathan Larson. [abs] 2023.9
-
Large Language Models Cannot Self-Correct Reasoning Yet.
Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou. [abs] 2023.10
-
MacGyver: Are Large Language Models Creative Problem Solvers?
Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman. [abs], 2023.11
-
Language Models can be Logical Solvers.
Jiazhan Feng, Ruochen Xu, Junheng Hao, Hiteshi Sharma, Yelong Shen, Dongyan Zhao, Weizhu Chen. [abs], 2023.11
-
GAIA: a benchmark for General AI Assistants.
Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom. [abs], [data], 2023.12
-
A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity.
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung. [abs], 2023.2
-
A Pilot Evaluation of ChatGPT and DALL-E 2 on Decision Making and Spatial Reasoning.
Zhisheng Tang, Mayank Kejriwal. [abs], 2023.2
-
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action.
Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang. [abs], 2023.3
-
Sparks of Artificial General Intelligence: Early experiments with GPT-4.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang. [abs], 2023.3
-
GesGPT: Speech Gesture Synthesis With Text Parsing from GPT.
Nan Gao, Zeyu Zhao, Zhi Zeng, Shuwu Zhang, Dongdong Weng. [abs], 2023.3
-
ChatGPT4PCG Competition: Character-like Level Generation for Science Birds.
Pittawat Taveekitworachai, Febri Abdullah, Mury F. Dewantoro, Ruck Thawonmas, Julian Togelius, Jochen Renz. [abs], 2023.3
-
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang. [abs], 2023.3
-
WavCaps: A ChatGPT-Assisted Weakly-Labelled Audio Captioning Dataset for Audio-Language Multimodal Research.
Xinhao Mei, Chutong Meng, Haohe Liu, Qiuqiang Kong, Tom Ko, Chengqi Zhao, Mark D. Plumbley, Yuexian Zou, Wenwu Wang. [abs], 2023.3
-
How does ChatGPT rate sound semantics?
Kai Siedenburg, Charalampos Saitis. [abs], 2023.4
-
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans.
Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, Kai Chen. [abs], [github], 2023.5
-
LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation.
Yujie Lu, Xianjun Yang, Xiujun Li, Xin Eric Wang, William Yang Wang. [abs], [github], 2023.5
-
ChatBridge: Bridging Modalities with Large Language Model as a Language Catalyst.
Zijia Zhao, Longteng Guo, Tongtian Yue, Sihan Chen, Shuai Shao, Xinxin Zhu, Zehuan Yuan, Jing Liu. [abs], [github], 2023.5
-
GPT4GEO: How a Language Model Sees the World's Geography.
Jonathan Roberts, Timo Lüddecke, Sowmen Das, Kai Han, Samuel Albanie. [abs], 2023.6
-
A Survey on Multimodal Large Language Models.
Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, Enhong Chen. [abs], [github], 2023.6
-
What Matters in Training a GPT4-Style Language Model with Multimodal Inputs?
Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, Tao Kong. [abs], 2023.7
-
Building Cooperative Embodied Agents Modularly with Large Language Models.
Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan. [abs], [github], 2023.7
-
3D-LLM: Injecting the 3D World into Large Language Models.
Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, Chuang Gan. [abs], [project], 2023.7
-
SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension.
Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan. [abs], [github], 2023.7
-
AgentBench: Evaluating LLMs as Agents.
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang. [abs], [github], 2023.8
-
A Survey on Large Language Model based Autonomous Agents.
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, Ji-Rong Wen. [abs], [github], 2023.8
-
PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion.
Yiduo Guo, Zekai Zhang, Yaobo Liang, Dongyan Zhao, Duan Nan. [abs], [github], 2023.11
-
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI.
Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen. [abs], [github], 2023.11
-
HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models.
Tianrui Guan*, Fuxiao Liu*, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, Tianyi Zhou. [abs], [github], 2023.10
-
Zero-Shot Information Extraction via Chatting with ChatGPT.
Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, Yong Jiang, Wenjuan Han. [abs][github][demo], 2023.2
-
Exploring the Feasibility of ChatGPT for Event Extraction.
Jun Gao, Huan Zhao, Changlong Yu, Ruifeng Xu. [abs], 2023.3
-
Extracting Accurate Materials Data from Research Papers with Conversational Language Models and Prompt Engineering -- Example of ChatGPT.
Maciej P. Polak, Dane Morgan. [abs], 2023.3
-
Is ChatGPT A Good Keyphrase Generator? A Preliminary Study.
Mingyang Song, Haiyun Jiang, Shuming Shi, Songfang Yao, Shilong Lu, Yi Feng, Huafeng Liu, Liping Jing. [abs], 2023.3
-
Yes but.. Can ChatGPT Identify Entities in Historical Documents?
Carlos-Emiliano González-Gallardo, Emanuela Boros, Nancy Girdhar, Ahmed Hamdi, Jose G. Moreno, Antoine Doucet. [abs], 2023.3
-
ChatGPT vs State-of-the-Art Models: A Benchmarking Study in Keyphrase Generation Task.
Roberto Martínez-Cruz, Alvaro J. López-López, José Portela. [abs], 2023.4
-
Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness.
Bo Li, Gexiang Fang, Yang Yang, Quansen Wang, Wei Ye, Wen Zhao, Shikun Zhang. [abs], 2023.4
-
ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations.
Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, Yangqiu Song. [abs], 2023.4
-
Using ChatGPT for Entity Matching.
Ralph Peeters, Christian Bizer. [abs], 2023.5
-
LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities.
Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, Ningyu Zhang. [abs], [github], 2023.5
-
Prompt ChatGPT In MNER: Improved multimodal named entity recognition method based on auxiliary refining knowledge from ChatGPT.
Jinyuan Li, Han Li, Zhuo Pan, Gang Pan. [abs], 2023.5
-
STAR: Boosting Low-Resource Event Extraction by Structure-to-Text Data Generation with Large Language Models.
Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung, P. Jeffrey Brantingham, Nanyun Peng, Wei Wang. [abs] 2023.5
-
Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors.
Kai Zhang, Bernal Jiménez Gutiérrez, Yu Su. [abs], 2023.5
-
Product Information Extraction using ChatGPT.
Alexander Brinkmann, Roee Shraga, Reng Chiz Der, Christian Bizer. [abs], 2023.6
-
AutoAlign: Fully Automatic and Effective Knowledge Graph Alignment enabled by Large Language Models.
Rui Zhang, Yixin Su, Bayu Distiawan Trisedya, Xiaoyan Zhao, Min Yang, Hong Cheng, Jianzhong Qi. [abs], 2023.7
-
UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition.
Wenxuan Zhou, Sheng Zhang, Yu Gu, Muhao Chen, Hoifung Poon. [abs], [github], 2023.8
-
Developing a Scalable Benchmark for Assessing Large Language Models in Knowledge Graph Engineering.
Lars-Peter Meyer, Johannes Frey, Kurt Junghanns, Felix Brei, Kirill Bulert, Sabine Gründer-Fahrer, Michael Martin. [abs], [github], 2023.8
-
ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning.
Linhao Luo, Jiaxin Ju, Bo Xiong, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan. [abs], 2023.9
-
Is ChatGPT A Good Translator? A Preliminary Study.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Zhaopeng Tu. [abs],[github], 2023.1
-
Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT.
Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, Dacheng Tao. [abs],[github], 2023.3
-
Towards Making the Most of ChatGPT for Machine Translation.
Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao. [abs],[github], 2023.3
-
Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses.
Wenshi Gu. [abs], 2023.3
-
Unleashing the Power of ChatGPT for Translation: An Empirical Study.
Yuan Gao, Ruili Wang, Feng Hou. [abs], 2023.4
-
Large language models effectively leverage document-level context for literary translation, but critical errors persist.
Marzena Karpinska, Mohit Iyyer. [abs], 2023.4
-
ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning.
Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, Thien Huu Nguyen. [abs], 2023.4
-
BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models.
Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, Yang Feng. [abs],[github], 2023.6
-
Neural Machine Translation Data Generation and Augmentation using ChatGPT.
Wayne Yang, Garrett Nicolai. [abs], 2023.7
-
ChatGPT MT: Competitive for High- (but not Low-) Resource Languages.
Nathaniel R. Robinson, Perez Ogayo, David R. Mortensen, Graham Neubig. [abs], 2023.9
-
BHASA: A Holistic Southeast Asian Linguistic and Cultural Evaluation Suite for Large Language Models.
Wei Qi Leong, Jian Gang Ngui, Yosephine Susanto, Hamsawardhini Rengarajan, Kengatharaiyer Sarveswaran, William Chandra Tjhi. [abs], 2023.9
-
ChatGPT: The End of Online Exam Integrity?
Teo Susnjak. [abs], 2022.12
-
ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?
Jürgen Rudolph, Samson Tan, Shannon Tan. [pdf], 2023.1
-
Will ChatGPT get you caught? Rethinking of Plagiarism Detection.
Mohammad Khalil, Erkan Er. [abs], 2023.2
-
Seeing ChatGPT Through Students' Eyes: An Analysis of TikTok Data.
Anna-Carolina Haensch, Sarah Ball, Markus Herklotz, Frauke Kreuter. [abs], 2023.3
-
ChatGPT Participates in a Computer Science Exam.
Sebastian Bordt, Ulrike von Luxburg. [abs], 2023.3
-
Evaluating GPT-3.5 and GPT-4 Models on Brazilian University Admission Exams.
Desnes Nunes, Ricardo Primi, Ramon Pires, Roberto Lotufo, Rodrigo Nogueira. [abs],[github], 2023.3
-
Can AI Chatbots Pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) Structural Exams?
M.Z. Naser, Brandon Ross, Jennifer Ogle, Venkatesh Kodur, Rami Hawileh, Jamal Abdalla, Huu-Tai Thai. [abs], 2023.3
-
Harnessing LLMs in Curricular Design: Using GPT-4 to Support Authoring of Learning Objectives.
Pragnya Sridhar, Aidan Doyle, Arav Agarwal, Christopher Bogart, Jaromir Savelka, Majd Sakr. [abs], 2023.6
-
ChEDDAR: Student-ChatGPT Dialogue in EFL Writing Education.
Jieun Han, Haneul Yoo, Junho Myung, Minsun Kim, Tak Yeon Lee, So-Yeon Ahn, Alice Oh. [abs],[github] 2023.9
-
How Does ChatGPT Perform on the Medical Licensing Exams? The Implications of Large Language Models for Medical Education and Knowledge Assessment.
Aidan Gilson, Conrad Safranek, Thomas Huang, Vimig Socrates, Ling Chi, R. Andrew Taylor, David Chartash. [pdf], 2022.12
-
Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making.
Arya Rao, John Kim, Meghana Kamineni, Michael Pang, Winston Lie, Marc D. Succi. [pdf], 2023.2
-
Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness.
Guido Zuccon, Bevan Koopman. [abs], 2023.2
-
The utility of ChatGPT for cancer treatment information.
Shan Chen, Benjamin H Kann, Michael B Foote, Hugo JWL Aerts, Guergana K Savova, Raymond H Mak, Danielle S Bitterman. [abs],[github], 2023.3
-
Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential.
Qing Lyu, Josh Tan, Mike E. Zapadka, Janardhana Ponnatapuram, Chuang Niu, Ge Wang, Christopher T. Whitlow. [abs], 2023.3
-
Evaluation of ChatGPT for NLP-based Mental Health Applications.
Bishal Lamichhane. [abs], 2023.3
-
Evaluating GPT-4 and ChatGPT on Japanese Medical Licensing Examinations.
Jungo Kasai, Yuhei Kasai, Keisuke Sakaguchi, Yutaro Yamada, Dragomir Radev. [abs],[github], 2023.3
-
Evaluation of GPT and BERT-based models on identifying protein-protein interactions in biomedical text.
Hasin Rehana, Nur Bengisu Çam, Mert Basmaci, Yongqun He, Arzucan Özgür, Junguk Hur. [abs], 2023.3
-
Evaluation of ChatGPT Family of Models for Biomedical Reasoning and Classification.
Shan Chen, Yingya Li, Sheng Lu, Hoang Van, Hugo JWL Aerts, Guergana K. Savova, Danielle S. Bitterman. [abs], 2023.4
-
ChatGPT for Shaping the Future of Dentistry: The Potential of Multi-Modal Large Language Model.
Hanyao Huang, Ou Zheng, Dongdong Wang, Jiayi Yin, Zijin Wang, Shengxuan Ding, Heng Yin, Chuan Xu, Renjie Yang, Qian Zheng, Bing Shi. [abs], 2023.4
-
On the Evaluations of ChatGPT and Emotion-enhanced Prompting for Mental Health Analysis.
Kailai Yang, Shaoxiong Ji, Tianlin Zhang, Qianqian Xie, Sophia Ananiadou. [abs], 2023.4
-
ImpressionGPT: An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT.
Chong Ma, Zihao Wu, Jiaqi Wang, Shaochen Xu, Yaonai Wei, Zhengliang Liu, Lei Guo, Xiaoyan Cai, Shu Zhang, Tuo Zhang, Dajiang Zhu, Dinggang Shen, Tianming Liu, Xiang Li. [abs], 2023.4
-
Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery.
Debadutta Dash, Rahul Thapa, Juan M. Banda, Akshay Swaminathan, Morgan Cheatham, Mehr Kashyap, Nikesh Kotecha, Jonathan H. Chen, Saurabh Gombar, Lance Downing, Rachel Pedreira, Ethan Goh, Angel Arnaout, Garret Kenn Morris, Honor Magon, Matthew P Lungren, Eric Horvitz, Nigam H. Shah. [abs], 2023.4
-
Retrieval Augmented Chest X-Ray Report Generation using OpenAI GPT models.
Mercy Ranjit, Gopinath Ganapathy, Ranjit Manuel, Tanuja Ganu. [abs], 2023.5
-
MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large Language Models in Medicine.
Jie Xu, Lu Lu, Sen Yang, Bilin Liang, Xinwei Peng, Jiali Pang, Jinru Ding, Xiaoming Shi, Lingrui Yang, Huan Song, Kang Li, Xin Sun, Shaoting Zhang. [abs], 2023.5
-
HuatuoGPT, towards Taming Language Model to Be a Doctor.
Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Jianquan Li, Guiming Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, Xiang Wan, Benyou Wang, Haizhou Li. [abs], [github], 2023.5
-
Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective.
Jiatong Li, Yunqing Liu, Wenqi Fan, Xiao-Yong Wei, Hui Liu, Jiliang Tang, Qing Li. [abs],[github] 2023.6
-
Comparative Performance Evaluation of Large Language Models for Extracting Molecular Interactions and Pathway Knowledge.
Gilchan Park, Byung-Jun Yoon, Xihaier Luo, Vanessa López-Marrero, Patrick Johnstone, Shinjae Yoo, Francis J. Alexander. [abs],[github] 2023.7
-
Establishing Trust in ChatGPT BioMedical Generated Text: An Ontology-Based Knowledge Graph to Validate Disease-Symptom Links.
Ahmed Abdeen Hamed, Alessandro Crimi, Magdalena M. Misiak, Byung Suk Lee. [abs] 2023.8
-
Uncovering Language Disparity of ChatGPT in Healthcare: Non-English Clinical Environment for Retinal Vascular Disease Classification.
Xiaocong Liu, Jiageng Wu, An Shao, Wenyue Shen, Panpan Ye, Yao Wang, Juan Ye, Kai Jin, Jie Yang. [abs], 2023.6
-
LLM-Mini-CEX: Automatic Evaluation of Large Language Model for Diagnostic Conversation.
Xiaoming Shi, Jie Xu, Jinru Ding, Jiali Pang, Sichen Liu, Shuqing Luo, Xingwei Peng, Lu Lu, Haihong Yang, Mingtao Hu, Tong Ruan, Shaoting Zhang. [abs], 2023.8
-
Is GPT-3 a Psychopath? Evaluating Large Language Models from a Psychological Perspective.
Xingxuan Li, Yutong Li, Linlin Liu, Lidong Bing, Shafiq Joty. [abs], 2022.12
-
Theory of Mind May Have Spontaneously Emerged in Large Language Models.
Michal Kosinski. [abs], 2023.2
-
Can ChatGPT Assess Human Personalities? A General Evaluation Framework.
Haocong Rao, Cyril Leung, Chunyan Miao. [abs][github], 2023.3
-
Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT.
Mostafa M. Amin, Erik Cambria, Björn W. Schuller. [abs], 2023.3
-
What does ChatGPT return about human values? Exploring value bias in ChatGPT using a descriptive value theory.
Ronald Fischer, Markus Luczak-Roesch, Johannes A Karl. [abs], 2023.4
-
Is ChatGPT Equipped with Emotional Dialogue Capabilities?
Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, Bing Qin. [abs], 2023.4
-
Assessing Working Memory Capacity of ChatGPT.
Dongyu Gong. [abs], 2023.5
-
Do Large Language Models Show Decision Heuristics Similar to Humans? A Case Study Using GPT-3.5.
Gaurav Suri, Lily R. Slater, Ali Ziaee, Morgan Nguyen. [abs], 2023.5
-
ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models.
Sophie Jentzsch, Kristian Kersting. [abs], 2023.6
-
Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today.
Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei Zhang, Liling Dong, Jing Gao, Jianyong Wang. [abs], 2023.6
-
EmotionPrompt: Leveraging Psychology for Large Language Models Enhancement via Emotional Stimulus.
Cheng Li, Jindong Wang, Kaijie Zhu, Yixuan Zhang, Wenxin Hou, Jianxun Lian, Xing Xie. [abs], 2023.7
-
Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models.
Keyu Pan, Yawen Zeng. [abs], 2023.7
-
Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench.
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu. [abs], [github], 2023.10
-
SelfEvolve: A Code Evolution Framework via Large Language Models.
Shuyang Jiang, Yuhao Wang, Yu Wang. [abs], 2023.6
-
Demystifying GPT Self-Repair for Code Generation.
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama. [abs], 2023.6
-
Generative AI for Programming Education: Benchmarking ChatGPT, GPT-4, and Human Tutors.
Tung Phung, Victor-Alexandru Pădurean, José Cambronero, Sumit Gulwani, Tobias Kohn, Rupak Majumdar, Adish Singla, Gustavo Soares. [abs], 2023.6
-
Exploring the Robustness of Large Language Models for Solving Programming Problems.
Atsushi Shirafuji, Yutaka Watanobe, Takumi Ito, Makoto Morishita, Yuki Nakamura, Yusuke Oda, Jun Suzuki. [abs], 2023.6
-
Unmasking the giant: A comprehensive evaluation of ChatGPT's proficiency in coding algorithms and data structures.
Sayed Erfan Arefin, Tasnia Ashrafi Heya, Hasan Al-Qudah, Ynes Ineza, Abdul Serwadda. [abs], 2023.7
-
ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation.
Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou. [abs], 2023.8
-
FinGPT: Open-Source Financial Large Language Models.
Hongyang Yang, Xiao-Yang Liu, Christina Dan Wang. [abs],[github], 2023.6
-
ChatGPT Goes to Law School.
Jonathan H. Choi, Kristin E. Hickman, Amy Monahan, Daniel Schwarcz. [abs], 2023.1
-
Explaining Legal Concepts with Augmented Large Language Models (GPT-4).
Jaromir Savelka, Kevin D. Ashley, Morgan A. Gray, Hannes Westermann, Huihui Xu. [abs], 2023.6
-
Legal Syllogism Prompting: Teaching Large Language Models for Legal Judgment Prediction.
Cong Jiang, Xiaolei Yang. [abs], 2023.7
-
Large Language Models in Cryptocurrency Securities Cases: Can ChatGPT Replace Lawyers?
Arianna Trozze, Toby Davies, Bennett Kleinberg. [abs], 2023.8
-
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models.
Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, Zehua Li. [abs],[github], 2023.8
-
Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions.
Chen Feng Tsai, Xiaochen Zhou, Sierra S. Liu, Jing Li, Mo Yu, Hongyuan Mei. [abs], 2023.4
-
Solving and Generating NPR Sunday Puzzles with Large Language Models.
Jingmiao Zhao, Carolyn Jane Anderson. [abs], 2023.6
-
Are ChatGPT and GPT-4 Good Poker Players? -- A Pre-Flop Analysis.
Akshat Gupta. [abs], 2023.8
-
Exploring New Frontiers in Agricultural NLP: Investigating the Potential of Large Language Models for Food Applications.
Saed Rezayi, Zhengliang Liu, Zihao Wu, Chandra Dhakal, Bao Ge, Haixing Dai, Gengchen Mai, Ninghao Liu, Chen Zhen, Tianming Liu, Sheng Li. [abs], 2023.6
-
ChatGPT for Robotics: Design Principles and Model Abilities.
Sai Vemprala, Rogerio Bonatti, Arthur Bucker, Ashish Kapoor. [abs],[github], 2023.6
-
ChatHaruhi: Reviving Anime Character in Reality via Large Language Model.
Cheng Li, Ziang Leng, Chenxi Yan, Junyi Shen, Hao Wang, Weishi MI, Yaying Fei, Xiaoyang Feng, Song Yan, HaoSheng Wang, Linkang Zhan, Yaokai Jia, Pingyu Wu, Haozhen Sun. [abs],[github], 2023.8
-
ChatLog: Recording and Analyzing ChatGPT Across Time.
Shangqing Tu, Chunyang Li, Jifan Yu, Xiaozhi Wang, Lei Hou, Juanzi Li. [abs][github], 2023.4
-
Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models.
Yuheng Huang, Jiayang Song, Zhijie Wang, Huaming Chen, Lei Ma. [abs], 2023.7
-
How is ChatGPT's behavior changing over time?
Lingjiao Chen, Matei Zaharia, James Zou. [abs][github], 2023.7
-
Uncertainty in Natural Language Generation: From Theory to Applications.
Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz. [abs] 2023.7
-
Generative Models as a Complex Systems Science: How can we make sense of large language model behavior?
Ari Holtzman, Peter West, Luke Zettlemoyer. [abs] 2023.8
-
Time Travel in LLMs: Tracing Data Contamination in Large Language Models.
Shahriar Golchin, Mihai Surdeanu. [abs] 2023.8
-
Don't Make Your LLM an Evaluation Benchmark Cheater.
Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, Jiawei Han. [abs] 2023.11
-
DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature.
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, Chelsea Finn. [abs],[demo], 2023.1
-
GPTScore: Evaluate as You Desire.
Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, Pengfei Liu. [abs],[github], 2023.2
-
MAUVE Scores for Generative Models: Theory and Practice.
Krishna Pillutla, Lang Liu, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, Zaid Harchaoui. [abs], 2022.12
-
Large Language Models Are State-of-the-Art Evaluators of Translation Quality.
Tom Kocmi, Christian Federmann. [abs], 2023.2
-
Is ChatGPT a Good NLG Evaluator? A Preliminary Study.
Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie Zhou. [abs],[github], 2023.3
-
Exploring ChatGPT's Ability to Rank Content: A Preliminary Study on Consistency with Human Preferences.
Yunjie Ji, Yan Gong, Yiping Peng, Chao Ni, Peiyan Sun, Dongyu Pan, Baochang Ma, Xiangang Li. [abs], 2023.3
-
Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT.
Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, Dacheng Tao. [abs],[github], 2023.3
-
GPTEval: NLG Evaluation using GPT-4 with Better Human Alignment.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu. [abs], 2023.3
-
Exploring the Use of Large Language Models for Reference-Free Text Quality Evaluation: A Preliminary Empirical Study.
Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, Ruifeng Xu. [abs],[github], 2023.4
-
Human-like Summarization Evaluation with ChatGPT.
Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan. [abs], 2023.4
-
Can ChatGPT and Bard Generate Aligned Assessment Items? A Reliability Analysis against Human Performance.
Abdolvahab Khademi. [abs], 2023.4
-
Multidimensional Evaluation for Text Style Transfer Using ChatGPT.
Huiyuan Lai, Antonio Toral, Malvina Nissim. [abs], 2023.4
-
Re-visiting Automated Topic Model Evaluation with Large Language Models.
Dominik Stammbach, Vilém Zouhar, Alexander Hoyle, Mrinmaya Sachan, Elliott Ash. [abs],[github], 2023.5
-
Benchmarking Foundation Models with Language-Model-as-an-Examiner.
Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, Lei Hou. [abs],[website], 2023.6
-
Judging LLM-as-a-judge with MT-Bench and Chatbot Arena.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica. [abs],[github], 2023.6
-
Wider and Deeper LLM Networks are Fairer LLM Evaluators.
Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, Yongbin Li. [abs],[github], 2023.8
-
Learning Evaluation Models from Large Language Models for Sequence Generation.
Chenglong Wang, Hang Zhou, Kaiyan Chang, Tongran Liu, Chunliang Zhang, Quan Du, Tong Xiao, Jingbo Zhu. [abs], 2023.8
-
ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate.
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu. [abs], 2023.8
-
The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation.
Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, Orhan Firat. [abs], 2023.8
-
T$^3$Bench: Benchmarking Current Progress in Text-to-3D Generation.
Yuze He, Yushi Bai, Matthieu Lin, Wang Zhao, Yubin Hu, Jenny Sheng, Ran Yi, Juanzi Li, Yong-Jin Liu. [abs],[github], 2023.10
-
Benchmarking Cognitive Biases in Large Language Models as Evaluators.
Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang. [abs],[github], 2023.10
-
Are Large Language Models Reliable Judges? A Study on the Factuality Evaluation Capabilities of LLMs.
Xue-Yong Fu, Md Tahmid Rahman Laskar, Cheng Chen, Shashi Bhushan TN. [abs], 2023.11
-
AI vs. Human -- Differentiation Analysis of Scientific Content Generation.
Yongqiang Ma, Jiawei Liu, Fan Yi, Qikai Cheng, Yong Huang, Wei Lu, Xiaozhong Liu. [abs], 2023.1
-
ChatGPT or academic scientist? Distinguishing authorship with over 99% accuracy using off-the-shelf machine learning tools.
Heather Desaire, Aleesa E. Chua, Madeline Isom, Romana Jarosova, David Hua. [abs], 2023.3
-
Distinguishing ChatGPT(-3.5, -4)-generated and human-written papers through Japanese stylometric analysis.
Wataru Zaitsu, Mingzhe Jin. [abs], 2023.4
-
ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models.
Yikang Liu, Ziyin Zhang, Wanyang Zhang, Shisen Yue, Xiaojing Zhao, Xinyuan Cheng, Yiwen Zhang, Hai Hu. [abs],[github],[data], 2023.4
-
Origin Tracing and Detecting of LLMs.
Linyang Li, Pengyu Wang, Ke Ren, Tianxiang Sun, Xipeng Qiu. [abs], 2023.4
-
AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays.
Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva, Alexander Trautsch. [abs], 2023.4
-
CHEAT: A Large-scale Dataset for Detecting ChatGPT-writtEn AbsTracts.
Peipeng Yu, Jiahan Chen, Xuan Feng, Zhihua Xia. [abs], 2023.4
-
Differentiate ChatGPT-generated and Human-written Medical Texts.
Wenxiong Liao, Zhengliang Liu, Haixing Dai, Shaochen Xu, Zihao Wu, Yiyang Zhang, Xiaoke Huang, Dajiang Zhu, Hongmin Cai, Tianming Liu, Xiang Li. [abs], 2023.4
-
ChatLog: Recording and Analyzing ChatGPT Across Time.
Shangqing Tu, Chunyang Li, Jifan Yu, Xiaozhi Wang, Lei Hou, Juanzi Li. [abs][github], 2023.4
-
Bot or Human? Detecting ChatGPT Imposters with A Single Question.
Hong Wang, Xuan Luo, Weizhi Wang, Xifeng Yan. [abs],[github], 2023.5
-
GPT-Sentinel: Distinguishing Human and ChatGPT Generated Content.
Yutian Chen, Hao Kang, Vivian Zhai, Liangze Li, Rita Singh, Bhiksha Ramakrishnan. [abs], 2023.5
-
Large Language Models can be Guided to Evade AI-Generated Text Detection.
Ning Lu, Shengcai Liu, Rui He, Ke Tang. [abs], 2023.5
-
G3Detector: General GPT-Generated Text Detector.
Haolan Zhan, Xuanli He, Qiongkai Xu, Yuxiang Wu, Pontus Stenetorp. [abs], 2023.5
-
GPT Paternity Test: GPT Generated Text Detection with GPT Genetic Inheritance.
Xiao Yu, Yuang Qi, Kejiang Chen, Guoqiang Chen, Xi Yang, Pengyuan Zhu, Weiming Zhang, Nenghai Yu. [abs], 2023.5
-
On the Reliability of Watermarks for Large Language Models.
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, Tom Goldstein. [abs], 2023.6
-
Implementing BERT and fine-tuned RobertA to detect AI generated news by ChatGPT.
Zecong Wang, Jiaxi Cheng, Chen Cui, Chenhao Yu. [abs], 2023.6
-
Towards a Robust Detection of Language Model Generated Text: Is ChatGPT that Easy to Detect?
Wissam Antoun, Virginie Mouilleron, Benoît Sagot, Djamé Seddah. [abs], 2023.6
-
Evade ChatGPT Detectors via A Single Space.
Shuyang Cai, Wanyun Cui. [abs], 2023.7
-
Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text.
Lingyi Yang, Feng Jiang, Haizhou Li. [abs], 2023.7
-
Robust Distortion-free Watermarks for Language Models.
Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, Percy Liang. [abs], 2023.7
-
Towards Codable Text Watermarking for Large Language Models.
Lean Wang, Wenkai Yang, Deli Chen, Hao Zhou, Yankai Lin, Fandong Meng, Jie Zhou, Xu Sun. [abs], 2023.7
-
Fighting Fire with Fire: Can ChatGPT Detect AI-generated Text?
Amrita Bhattacharjee, Huan Liu. [abs], 2023.8
-
HC3 Plus: A Semantic-Invariant Human ChatGPT Comparison Corpus.
Zhenpeng Su, Xing Wu, Wei Zhou, Guangyuan Ma, Songlin Hu. [abs], 2023.9
-
Detecting ChatGPT: A Survey of the State of Detecting ChatGPT-Generated Text.
Mahdi Dhaini, Wessel Poelman, Ege Erdogan. [abs], 2023.9
-
On the Generalization of Training-based ChatGPT Detection Methods.
Han Xu, Jie Ren, Pengfei He, Shenglai Zeng, Yingqian Cui, Amy Liu, Hui Liu, Jiliang Tang. [abs], [github], 2023.10
-
Does GPT-4 Pass the Turing Test?
Cameron Jones, Benjamin Bergen. [abs], 2023.10
- Hello-SimpleAI ChatGPT Detector: An open-source detection project consisting of three model variants for detecting text generated with ChatGPT: a QA version, a single-text version, and a linguistic version.
- GPTZero: A demo that detects writing generated by ChatGPT. The creator built it as a safeguard after seeing students use the technology to cheat on assignments.
- OpenAI Classifier: A classifier fine-tuned on a dataset of pairs of human-written text and AI-written text on the same topic.
- Contentatscale AI Content Detector: A tool that reports a "Human or AI" content score for submitted text, along with a per-sentence probability.
- Writers AI Content Detector: A tool similar to Contentatscale. It takes either a page URL or raw text and calculates a "Human-Generated Content" score.
Statistics of these tools:

| Tool | Detection Target | Language | Input Range (# characters) |
|---|---|---|---|
| Hello-SimpleAI ChatGPT Detector | ChatGPT | en/zh | (0, ~1500] (512 tokens) |
| GPTZero | LLM | en | (250, ∞) |
| OpenAI Classifier | LLM | en | (0, ∞) |
| Contentatscale AI Content Detector | AI Content (NLP+SERP) | en | (0, 25,000] |
| Writers AI Content Detector | AI Content | en | (0, 1,500] |
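Several of the tools above (GPTZero, for example) score text on statistical signals such as perplexity and "burstiness" (variation in sentence structure). The sketch below is a purely illustrative, stdlib-only toy of one such signal, the spread of sentence lengths; it is not the actual algorithm of any tool listed here, and real detectors combine many stronger features and trained classifiers.

```python
import statistics

def burstiness(text: str) -> float:
    """Toy burstiness signal: sample standard deviation of
    sentence lengths, measured in words. Human prose tends to
    mix short and long sentences more than typical LLM output."""
    # Naive sentence split on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # stdev undefined for fewer than two sentences
    return statistics.stdev(lengths)

if __name__ == "__main__":
    varied = ("I ran. Then I stopped, looked around, and wondered "
              "why the quiet street felt wrong. Odd.")
    uniform = "The cat sat. The dog sat. The bird sat."
    # Varied sentence lengths yield a higher score than uniform ones.
    print(burstiness(varied), burstiness(uniform))
```

A single statistic like this is easy to evade (see the papers above on evading detectors with a single space or guided paraphrasing), which is why the surveyed tools pair such signals with trained models.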