- 2024-11-13 - We released version 1.0.1: our first official release! 🎉
Outputs from large language models (LLMs) may contain semantic, factual, and lexical errors.
With factgenie, you can have the error spans annotated:
- From LLMs through an API.
- From humans through a crowdsourcing service.
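For intuition, a span annotation is essentially a character range in the model output plus an error category. The record below is a minimal hypothetical sketch, not factgenie's actual data schema:

```python
# Hypothetical sketch of a span-based error annotation (illustrative only,
# not factgenie's actual schema): a character range plus an error category.
output = "The Lakers won 102-95, their fifth straight victory."

annotation = {
    "start": 15,            # index of the first annotated character
    "end": 21,              # index one past the last annotated character
    "category": "factual",  # e.g. a semantic, factual, or lexical error
}

# Recover the annotated span from the output text.
span_text = output[annotation["start"]:annotation["end"]]
print(span_text)  # 102-95
```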
Factgenie can provide you with:
- A user-friendly website for collecting annotations from human crowdworkers.
- API calls for collecting equivalent annotations from LLM-based evaluators.
- An interface for visualizing the data and inspecting the annotated outputs.
What factgenie does not help with is collecting the data (we assume you already have it), starting the crowdsourcing campaign (for that, you need a service such as Prolific.com), or running the LLM evaluators (for that, you need a local framework such as Ollama or a proprietary API).
Make sure you have Python >=3.9 installed.
If you want to quickly try out factgenie, you can install the package from PyPI:
```
pip install factgenie
```
However, the recommended approach for using factgenie is using an editable package:
```
git clone https://github.com/ufal/factgenie.git
cd factgenie
pip install -e ".[dev,deploy]"
```
This approach will allow you to manually modify configuration files and write your own data classes.
After installing factgenie, use the following command to run the server on your local computer:
```
factgenie run --host=127.0.0.1 --port 8890
```
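Once the server is running, you can verify that it accepts connections. The helper below is a hypothetical convenience, not part of factgenie; it simply probes the TCP port:

```python
# Check whether a local factgenie server accepts TCP connections.
# This is a generic reachability probe, not a factgenie API call.
import socket

def is_server_up(host: str = "127.0.0.1", port: int = 8890,
                 timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("factgenie reachable:", is_server_up())
```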
More information on how to set up factgenie is on the GitHub wiki.
See the following wiki pages that will guide you through various use cases of factgenie:
| Topic | Description |
| --- | --- |
| 🔧 Setup | How to install factgenie. |
| 🗂️ Data Management | How to manage datasets and model outputs. |
| 🤖 LLM Annotations | How to annotate outputs using LLMs. |
| 👥 Crowdsourcing Annotations | How to annotate outputs using human crowdworkers. |
| ✍️ Generating Outputs | How to generate outputs using LLMs. |
| 📊 Analyzing Annotations | How to obtain statistics on collected annotations. |
| 💻 Command Line Interface | How to use the factgenie command line interface. |
| 🌱 Contributing | How to contribute to factgenie. |
We also provide step-by-step walkthroughs showing how to employ factgenie on the dataset from the Shared Task in Evaluating Semantic Accuracy:
| Tutorial | Description |
| --- | --- |
| 🏀 #1: Importing a custom dataset | Loading the basketball statistics and model-generated basketball reports into the web interface. |
| 💬 #2: Generating outputs | Using Llama 3.1 with Ollama for generating basketball reports. |
| 📊 #3: Customizing data visualization | Manually creating a custom dataset class for better data visualization. |
| 🤖 #4: Annotating outputs with an LLM | Using GPT-4o for annotating errors in the basketball reports. |
| 👨‍💼 #5: Annotating outputs with human annotators | Using human annotators for annotating errors in the basketball reports. |
If you want to give us quick feedback or actively participate in the development of factgenie, join our public Slack workspace:
We used factgenie in our related research project and host the outputs from the project on a public factgenie instance.
> [!IMPORTANT]
> Note that this preview is very limited: it enables only data viewing, not any data collection or management.
👉️ You can access the preview here.
Our paper was published at INLG 2024 System Demonstrations!
You can also find the paper on arXiv.
To cite us, please use the following BibTeX entry:
```bibtex
@inproceedings{kasner2024factgenie,
    title = "factgenie: A Framework for Span-based Evaluation of Generated Texts",
    author = "Kasner, Zden{\v{e}}k and
      Platek, Ondrej and
      Schmidtova, Patricia and
      Balloccu, Simone and
      Dusek, Ondrej",
    editor = "Mahamood, Saad and
      Minh, Nguyen Le and
      Ippolito, Daphne",
    booktitle = "Proceedings of the 17th International Natural Language Generation Conference: System Demonstrations",
    year = "2024",
    address = "Tokyo, Japan",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.inlg-demos.5",
    pages = "13--15",
}
```
This work was co-funded by the European Union (ERC, NG-NLG, 101039303).