This is the implementation of Dynamic Evidence-based FAct-checking with Multimodal Experts (DEFAME), a strong multimodal claim verification system. DEFAME decomposes the fact-checking task into a dynamic 6-stage pipeline, leveraging an MLLM to accomplish sub-tasks like planning, reasoning, and evidence summarization.
DEFAME is the successor to our challenge-winning unimodal fact-checking system, InFact. This repository is under constant development. You can access the original code of InFact here and the code of DEFAME here.
You can install DEFAME either via Docker or manually. In any case, you first need to clone the repository:

```bash
git clone https://github.com/multimodal-ai-lab/DEFAME
cd DEFAME
```
Choose this option if you're interested in executing DEFAME.
If you have Docker installed, from the project root simply run

```bash
docker compose up -d
docker compose exec defame bash
```
This will download and run the latest images we have built for you and open a shell inside the container. You can continue with Usage from here.
Choose this option if you want to modify DEFAME.
Follow these steps:
- Optional: Set up a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```
- Install required packages:

  ```bash
  pip install -r requirements.txt
  python -c "import nltk; nltk.download('wordnet')"
  ```
  If you have a CUDA-enabled GPU and the CUDA toolkit installed, also run:

  ```bash
  pip install -r requirements_gpu.txt
  ```
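To verify that your GPU is actually visible after installing the GPU requirements (assuming PyTorch is among them), you can run a quick check:

```bash
# Should print "True" on a working CUDA setup; "False" indicates a CPU-only install.
python -c "import torch; print(torch.cuda.is_available())"
```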
If you want to evaluate DEFAME on a benchmark, you need to do the following:
- Download the needed benchmarks. We use MOCHEG, VERITE, and AVeriTeC.
- Arrange the benchmarks in the following directory structure:

  ```
  your_dataset_folder/
  ├── MOCHEG/
  │   ├── images/
  │   ├── train/
  │   └── ...
  ├── VERITE/
  │   ├── images/
  │   ├── VERITE.csv
  │   └── ...
  └── AVeriTeC/
      ├── train.json
      ├── dev.json
      └── ...
  ```
- Include the path to `your_dataset_folder` in the `data_base_dir` variable inside `config/globals.py`. DEFAME will automatically locate and process the datasets within `MOCHEG`, `VERITE`, and `AVeriTeC` when needed.
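For orientation, the relevant setting in `config/globals.py` might look like the following minimal sketch (the path is a placeholder and the actual file contains further settings):

```python
# config/globals.py (excerpt, illustrative only)
data_base_dir = "/path/to/your_dataset_folder"  # parent folder containing MOCHEG/, VERITE/, AVeriTeC/
```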
All execution scripts are located in (subfolders of) `scripts/`.
Note: Whenever running a script, ensure that the project root is the working directory. You can accomplish that by using the `-m` parameter as in the commands below (note the dotted script path notation).
Hardware requirements: CPU-only is sufficient if you refrain from using a local LLM.
Output location: All generated reports, statistics, logs, etc. will be saved in `out/` by default. You may change this in the `config/globals.py` file.
With `scripts/run.py`, you can fact-check any image-text claim. The script already contains an example. Execute it with

```bash
python -m scripts.run
```
It will run DEFAME with the default configuration (using GPT-4o). When running this command for the first time, you'll be prompted to enter API keys. Just enter the ones you need. (See the APIs section for which keys DEFAME requires.)
Benchmark evaluations can be run in two different ways. We recommend using YAML configuration files; see `config/verite` for a bunch of examples. Once you have created your own config, just copy the config's file path into `run_config.py` and run

```bash
python -m scripts.run_config
```
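As a rough sketch, pointing the script at your configuration might look like this (the variable name and file path below are placeholders; check `scripts/run_config.py` for the actual variable):

```python
# scripts/run_config.py (excerpt, illustrative only)
config_path = "config/verite/my_experiment.yaml"  # hypothetical path to your YAML config
```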
LLMs and tools may require external APIs. All API keys are saved inside `config/api_keys.yaml`. A few tools need additional setup; see the tool-specific setup guidelines below. Here's an overview of all APIs:
| API | Free | Required for... |
|---|---|---|
| OpenAI | ❌ | DEFAME with GPT (default), otherwise optional |
| 🤗 Hugging Face | ✔️ | DEFAME with Llava, otherwise optional |
| Serper | ❌ | DEFAME with Google Web Search, otherwise optional |
| DuckDuckGo | ✔️ | Nothing. Reaches rate limits quickly. |
| Google Vision | ❌ | DEFAME with Reverse Image Search |
| Firecrawl | ✔️ | DEFAME with Reverse Image Search |
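As noted above, keys live in `config/api_keys.yaml` and are otherwise prompted for on first run. A minimal sketch of such a file, assuming hypothetical field names (the actual field names are defined by the repository):

```yaml
# config/api_keys.yaml (illustrative only; field names are assumptions)
openai_api_key: "sk-..."
huggingface_access_token: "hf_..."
serper_api_key: "..."
```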
You will need the OpenAI API if you want to use any of OpenAI's GPT models.
Required for the usage of open-source LLMs hosted on 🤗 Hugging Face, like Llama and Llava.
The Serper API serves standard Google Search as well as Google Image Search. If you don't want to use DuckDuckGo (which has restrictive rate limits, unfortunately), you will need this API.
The Google Cloud Vision API is required to perform Reverse Image Search. Follow these steps to set it up:
- In the Google Cloud Console, create a new Project.
- Go to the Service Account overview and add a new Service Account.
- Open the new Service Account, go to "Keys" and generate a new JSON key file.
- Save the downloaded key file at `config/google_service_account_key.json`.
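If you prefer the command line, the same setup can be done with the `gcloud` CLI; the service-account name and project ID below are placeholders:

```bash
# Illustrative gcloud equivalent of the console steps above.
gcloud services enable vision.googleapis.com --project YOUR_PROJECT_ID
gcloud iam service-accounts create defame-vision --project YOUR_PROJECT_ID
gcloud iam service-accounts keys create config/google_service_account_key.json \
    --iam-account defame-vision@YOUR_PROJECT_ID.iam.gserviceaccount.com
```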
This project uses Firecrawl as the default web scraping service. It falls back to a simple BeautifulSoup implementation if Firecrawl is not running. Firecrawl runs automatically if you used `docker compose` to install DEFAME.
If you installed DEFAME manually, you also need to run Firecrawl manually by executing

```bash
docker run -d -p 3002:3002 tudamailab/firecrawl
```
If you really want to set up Firecrawl manually and without Docker, follow the Firecrawl documentation. We do not recommend that because, in our experience, the setup procedure is rather involved. Instead, we recommend using the Firecrawl Docker image we provide for this project. (You may modify and re-build the Firecrawl image via the `Dockerfile` stored in `third_party/firecrawl`.)
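Once the container is up, you can quickly check that something is listening on the default port used above (3002):

```bash
# Any HTTP status code (even 404) means the port is answering; "000" means no connection.
curl -s -o /dev/null -w "HTTP %{http_code}\n" http://localhost:3002 || echo "Firecrawl is not reachable"
```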
To extend the fact-checker with an additional tool, follow these steps:
- Implement the Tool: It needs to inherit from the `Tool` class and implement all abstract methods. See there (and other tools) for details.
- Add a Usage Example to the `defame/prompts/plan_exemplars` folder: This exemplar is for the LLM to understand the purpose and format of the tool. You may consider existing exemplars from other tools to understand what's expected here.
- Register Your Tool in `defame/tools/__init__.py`: Incorporate the new tool into the `TOOL_REGISTRY` list and its actions into the `ACTION_REGISTRY` set (a sketch follows below this list).
- Configure the Tool in the execution script: Don't forget to specify the tool in your configuration (either the YAML configuration file or the initialization of the `FactChecker`).
- Optional: Register the Tool in Benchmarks. This step is required only if you want to use your tool for evaluation on one of the benchmarks. To this end, navigate to the respective benchmark file under `defame/eval/<benchmark_name>/benchmark.py` and add your tool to the `available_actions` list.
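As a rough illustration of the registration step, the additions to `defame/tools/__init__.py` might look like the following sketch (`MyTool`, `MyAction`, and the import path are hypothetical placeholders; the registries themselves already exist in that file):

```python
# defame/tools/__init__.py (excerpt, illustrative only)
from defame.tools.my_tool import MyTool, MyAction  # hypothetical new tool module

TOOL_REGISTRY.append(MyTool)    # make the new tool discoverable
ACTION_REGISTRY.add(MyAction)   # register the tool's action(s)
```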
This repository and all its contents (except for the contents inside `third_party/`) are licensed under the Apache 2.0 License.
Please use the following BibTeX entry to cite this work:
```bibtex
@article{braun2024defamedynamicevidencebasedfactchecking,
  title={DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts},
  author={Tobias Braun and Mark Rothermel and Marcus Rohrbach and Anna Rohrbach},
  year={2024},
  eprint={2412.10510},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.10510},
  journal={arXiv preprint arXiv:2412.10510},
}
```