LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs

Do Xuan Long<sup>1,2</sup>  Hai Nguyen Ngoc<sup>3</sup>  Tiviatis Sim<sup>1,4</sup>  Hieu Dao<sup>1</sup>  Shafiq Joty<sup>5,6</sup>  Kenji Kawaguchi<sup>1</sup>  Nancy F. Chen<sup>2</sup>  Min-Yen Kan<sup>1</sup>

<sup>1</sup>National University of Singapore  <sup>2</sup>Institute for Infocomm Research (I2R), A*STAR  <sup>3</sup>VinAI Research  <sup>4</sup>Institute of High Performance Computing (IHPC), A*STAR  <sup>5</sup>Salesforce Research  <sup>6</sup>Nanyang Technological University

Abstract

We present the first systematic evaluation examining format bias in the performance of large language models (LLMs). Our approach distinguishes between two categories of an evaluation metric under format constraints to reliably and accurately assess performance: one measures performance when format constraints are adhered to, while the other evaluates performance regardless of constraint adherence. We then define a metric for measuring the format bias of LLMs and establish effective strategies to reduce it. Subsequently, we present our empirical format bias evaluation spanning four commonly used categories -- multiple-choice question-answer, wrapping, list, and mapping -- covering 15 widely-used formats. Our evaluation on eight generation tasks uncovers significant format bias across state-of-the-art LLMs. We further discover that improving the format-instruction-following capabilities of LLMs across formats potentially reduces format bias. Based on our evaluation findings, we study prompting and fine-tuning with synthesized format data as techniques to mitigate format bias. Our methods successfully reduce the variance in ChatGPT's performance among wrapping formats from 235.33 to 0.71 (%²).
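
To make the reported numbers concrete, the snippet below sketches a variance-based bias measure over per-format scores, following the abstract's description. The scores are made-up placeholders and the exact metric definition in the paper may differ.

```python
import statistics

# Hypothetical per-format accuracies (%) of one model on one task,
# one entry per wrapping format. These numbers are illustrative only.
scores_by_format = {
    "special_character": 61.2,
    "bolding": 48.5,
    "italicizing": 47.9,
    "double_brackets": 35.0,
    "double_parentheses": 33.4,
    "placeholder": 58.8,
    "quoting": 52.1,
}

# A simple proxy for format bias: the population variance of the scores
# across formats, in %^2 -- the unit used in the abstract.
variance = statistics.pvariance(scores_by_format.values())
print(f"Variance across wrapping formats: {variance:.2f} %^2")
```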

1. Installation

Using a conda environment is recommended:

conda create -n myenv python=3.10
conda activate myenv

Then install the required packages:

pip install -r requirements.txt

2. Supported Formats and Models

Formats

  • Format of MCQ answer (A. Yes, B. No):

    1. Character identifier (A/B).
    2. Text description of the choice (Yes/No).
  • Wrapping formats (see the extraction sketch after the format list):

    1. Special characters (<ANSWER>, </ANSWER>).
    2. Bolding (e.g., The answer is **10**.).
    3. Italicizing (e.g., The answer is *10*.).
    4. Double brackets (e.g., The answer is [[10]].).
    5. Double parentheses (e.g., The answer is ((10)).).
    6. Placeholder (e.g., The answer is 10.).
    7. Quoting (e.g., The answer is "10".).
  • List format:

    1. Python list.
    2. Bullet-point list.
    3. List of elements separated by a special character "[SEP]".
    4. List of elements arranged on separate lines.
  • Mapping format:

    1. JSON/Python dictionary.
    2. YAML.
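
For concreteness, here is a minimal, hypothetical sketch of how answers wrapped in a few of the formats above could be extracted from a model response. The regexes and the helper name are illustrative assumptions, not the repository's actual parsing code.

```python
import re

# Hypothetical patterns for a few of the wrapping formats listed above.
WRAPPING_PATTERNS = {
    "special_character": re.compile(r"<ANSWER>(.*?)</ANSWER>", re.DOTALL),
    "bolding": re.compile(r"\*\*(.*?)\*\*"),
    "double_brackets": re.compile(r"\[\[(.*?)\]\]", re.DOTALL),
    "double_parentheses": re.compile(r"\(\((.*?)\)\)", re.DOTALL),
}

def extract_answer(response: str, fmt: str) -> str | None:
    """Return the wrapped answer if `response` follows the format, else None."""
    match = WRAPPING_PATTERNS[fmt].search(response)
    return match.group(1).strip() if match else None

print(extract_answer("The answer is [[10]].", "double_brackets"))  # -> 10
```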

Models

  • ChatGPT:
    • gpt-3.5-turbo-0125
  • Mistral:
    • mistralai/Mistral-7B-Instruct-v0.2
  • Gemma:
    • google/gemma-7b-it

More models are coming!
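
For reference, below is a minimal sketch of how the supported models might be queried, assuming an `OPENAI_API_KEY` in the environment and the `openai` and `transformers` packages installed; the repository's own prompting logic lives in the per-model files under `src`.

```python
from openai import OpenAI
from transformers import pipeline

prompt = "Answer with only A or B. Is the sky blue? A. Yes, B. No"

# ChatGPT (gpt-3.5-turbo-0125) through the OpenAI chat completions API.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
chat = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": prompt}],
)
print(chat.choices[0].message.content)

# Mistral-7B-Instruct through Hugging Face transformers (Gemma works similarly).
generate = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
print(generate(prompt, max_new_tokens=16)[0]["generated_text"])
```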

3. Running the Code

  • The code for the models and prompting baselines is in /FormatBiasToPublish/FormatEval/src, where each model and prompting method has its own Python file.

  • Simply running `CUDA_VISIBLE_DEVICES=x python ....py` should work. Metric computation is integrated into each script.

  • Our evaluation outputs are provided in /FormatBiasToPublish/FormatEval/src/output.

4. Reference

  • If you have any questions or find any bugs, please email Do Xuan Long directly at xuanlong.do@u.nus.edu.

  • If you find our work helpful, please cite:

@article{long2024llms,
  title={LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs},
  author={Long, Do Xuan and Ngoc, Hai Nguyen and Sim, Tiviatis and Dao, Hieu and Joty, Shafiq and Kawaguchi, Kenji and Chen, Nancy F and Kan, Min-Yen},
  journal={arXiv preprint arXiv:2408.08656},
  year={2024}
}
