The Open University (OU) has been awarded £250,000 from the Engineering and Physical Sciences Research Council to track the manipulation of online information, particularly about COVID-19 and climate change.
The project, CIMPLE, which stands for Countering Creative Information Manipulation with Explainable AI, is part of a €1.6M collaborative project funded through CHIST-ERA, a consortium of international research funding organisations.
The project aims to research and develop innovative knowledge-graph-based solutions for detecting and tracking online information manipulation, while taking into account varied AI-explainability needs and requirements. Explainable AI is artificial intelligence (AI) in which the process and results of the solution can be traced and understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a system's designers may struggle to explain why it arrived at a specific decision.
CIMPLE will use computational creativity techniques to generate powerful, engaging, and easily understandable explanations of complex AI decisions and behaviours. These explanations will be tested in the detection and tracking of manipulated information, with COVID-19 and climate change as the two main use cases, both common targets of misinformation.
Professor Harith Alani, the OU’s Principal Investigator on the project, said: “AI is being increasingly used to automatically detect misinformation to cope with the sheer scale of the problem. But research shows that presenting users with true/false decisions on claims circulating online is often inadequate and ineffective, particularly when a black-box algorithm is used.”
Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet the understandability of such explanations, and their suitability to particular users and application domains, have so far received very little attention.
Professor Alani adds: “Acceptance of AI approaches is sometimes dependent on various human factors. In some contexts, such as misinformation detection, we need to find more creative and engaging ways to explain the auto-generated credibility assessments.”
CIMPLE starts in April 2021, and will run for three years, involving five partners from five countries: