Using Argumentation for Fact-Checking
Some of the grand challenges facing humanity include climate change and public health. Unfortunately, effective solutions to these challenges are undermined by misinformation, disinformation, and malinformation (MDM), e.g., climate change denial or anti-vaccine propaganda. The team proposes robust and holistic fact verification methods to address this issue. The proposed methods can reduce data bias, aggregate information across multiple statements, and yield global conclusions. While humans can draw on background and domain knowledge to argue about the veracity of a fact, computers do not normally have access to such information. To capture this real-world background knowledge, the research aims to mine arguments from the web and construct domain-agnostic fact graphs that indicate whether facts attack or support each other. The team will then develop argumentation theory-based graph algorithms to aggregate and reason over this knowledge. Based on these steps, the CNRS-UArizona team will arrive at a truthfulness value for a given argument that takes into account all the available background knowledge.
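The aggregation step over a fact graph can be sketched with Dung-style abstract argumentation. The grounded semantics below is one standard way to "argue over" attack edges (the proposal does not specify which semantics the team will use, and the fact graph here is invented for illustration): a fact is accepted only once every fact attacking it has itself been defeated by an accepted fact.

```python
def grounded_extension(facts, attacks):
    """Compute the grounded extension of an attack-only fact graph.

    facts:   list of fact identifiers
    attacks: list of (attacker, target) pairs
    Returns the set of facts accepted under grounded semantics.
    """
    # Index the attackers of each fact for fast lookup.
    attackers = {f: set() for f in facts}
    for src, tgt in attacks:
        attackers[tgt].add(src)

    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        # Accept any undecided fact whose attackers are all defeated
        # (vacuously true for unattacked facts).
        for f in facts:
            if f not in accepted and f not in defeated:
                if all(a in defeated for a in attackers[f]):
                    accepted.add(f)
                    changed = True
        # Defeat any fact attacked by an accepted fact.
        for f in facts:
            if f not in defeated and any(a in accepted for a in attackers[f]):
                defeated.add(f)
                changed = True
    return accepted


# Toy fact graph (hypothetical statements, not from the proposal):
# a denial attacks a claim, and mined evidence attacks the denial.
facts = ["claim", "denial", "evidence"]
attacks = [("denial", "claim"), ("evidence", "denial")]
print(grounded_extension(facts, attacks))  # {'claim', 'evidence'}
```

Because the evidence defeats the denial, the claim's only attacker is removed and the claim is reinstated, which is the kind of global conclusion a per-statement classifier cannot reach.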
The researchers propose a novel graph-based approach to identifying the trustworthiness of online information through deep learning, ranking algorithms, and computational argumentation theory. Around 67% of Americans report that online misinformation, disinformation, and malinformation (MDM) causes confusion about basic information they read online, and 54% say that MDM affects their confidence in the people they interact with, making MDM a critical societal issue. Several studies have aimed to address the MDM problem by proposing novel argument mining techniques and datasets. These methods and datasets suffer from two problems: a high dependency on domain-specific data and an inability to capture the background information required for true fact verification. These issues primarily originate from framing fact verification as a natural language inference (NLI) problem. NLI analyzes a provided premise in isolation to form conclusions about the premise's relation to a given hypothesis. Because NLI forms conclusions about one statement at a time, considering statements in silos, any NLI-based fact verification system will fail to capture the background knowledge required to arrive at the truth.
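The NLI formulation described above reduces to classifying each (premise, hypothesis) pair independently as entailment, contradiction, or neutral. The toy stub below (a lookup table standing in for a trained NLI model; all statements and labels are invented for illustration) makes the limitation concrete: each premise yields its own isolated verdict, and nothing in the interface combines premises or brings in background knowledge.

```python
# Toy pairwise NLI interface. A real system would call a neural
# classifier; this lookup table is purely illustrative.
NLI_LABELS = ("entailment", "contradiction", "neutral")

_TOY_JUDGMENTS = {
    ("Vaccines are rigorously tested.", "Vaccines are unsafe."): "contradiction",
    ("One study reported side effects.", "Vaccines are unsafe."): "neutral",
}

def toy_nli(premise: str, hypothesis: str) -> str:
    """Classify a single premise-hypothesis pair in isolation."""
    return _TOY_JUDGMENTS.get((premise, hypothesis), "neutral")


hypothesis = "Vaccines are unsafe."
for premise in ["Vaccines are rigorously tested.",
                "One study reported side effects."]:
    # Each call sees one premise in a silo; no step aggregates the
    # judgments or consults wider background knowledge.
    print(premise, "->", toy_nli(premise, hypothesis))
```

The proposal's fact-graph approach is positioned as the complement: instead of a stream of independent pairwise labels, the attack/support structure among many statements is aggregated into one global conclusion.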