Most existing fact-checking systems are unable to explain their decisions by providing relevant rationales (justifications) for their predictions. This lack of transparency poses significant risks, such as the prevalence of unexpected biases, which may increase political polarization due to limited impartiality. To address this critical gap, we introduce a new method to improve explainable fact-checking. SEntence-Level FActual Reasoning (SELFAR) relies on fact extraction and verification: it predicts news source reliability and the factuality (veracity) of news articles or claims at the sentence level, and generates post-hoc explanations using LIME/SHAP and zero-shot prompts. Our experiments show that unreliable news stories predominantly consist of subjective statements, in contrast to reliable ones. Consequently, predicting unreliable news articles at the sentence level by analyzing impartiality and subjectivity is a promising approach for fact extraction and for improving explainable fact-checking. Furthermore, LIME outperforms SHAP in explaining reliability predictions. Additionally, while zero-shot prompts provide highly readable explanations and achieve an accuracy of 0.71 in predicting factuality, their tendency to hallucinate remains a challenge. Lastly, we present the first study on explainable fact-checking in the Portuguese language.

To enhance explainable fact-checking, SELFAR encompasses three main tasks: Fact Extraction (FE), Fact Verification (FV), and Explanation Generation (EG), as shown in the overview figure.
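To make the post-hoc explanation step concrete, below is a minimal sketch (not the paper's implementation) of how LIME can attribute a sentence-level reliability prediction to individual tokens. The toy sentences, labels, and the TF-IDF + logistic regression classifier are placeholders for illustration; SELFAR is trained on labeled news corpora.

```python
# Minimal sketch (illustrative only): post-hoc LIME explanation for a
# hypothetical sentence-level reliability classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (placeholder sentences, not from the SELFAR datasets).
sentences = [
    "The report cites official figures released by the ministry.",
    "Sources confirmed the vote count matched the published tally.",
    "Everyone knows the election was obviously rigged by elites.",
    "This outrageous scandal proves the media is lying to you.",
]
labels = [0, 0, 1, 1]  # 0 = reliable, 1 = unreliable

# Sentence-level reliability classifier: TF-IDF features + logistic regression.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(sentences, labels)

# LIME perturbs the input sentence and fits a local surrogate model to
# estimate each token's contribution to the predicted class.
explainer = LimeTextExplainer(class_names=["reliable", "unreliable"])
test_sentence = "This shocking cover-up clearly shows officials are lying."
explanation = explainer.explain_instance(
    test_sentence, clf.predict_proba, num_features=5
)

# Tokens with positive weights push the prediction toward "unreliable",
# surfacing the subjective wording that drives the decision.
for token, weight in explanation.as_list():
    print(f"{token:>12s}  {weight:+.3f}")
```

The same pattern applies to SHAP (e.g., wrapping the classifier in a SHAP explainer instead of LIME); the output in either case is a per-token attribution that serves as the post-hoc rationale for the sentence-level prediction.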
Francielle Vargas, Isadora Salles, Diego Alves, Ameeta Agrawal, Thiago Pardo, Fabrício Benevenuto (2024). Improving Explainable Fact-Checking via Sentence-Level Factual Reasoning. Proceedings of the 7th Fact Extraction and VERification Workshop (FEVER @ EMNLP 2024), Miami, United States.