In-Contextual Gender Bias Suppression for Large Language Models

This repository hosts the code for our paper, In-Contextual Gender Bias Suppression for Large Language Models. The paper proposes bias suppression, a method that prevents biased generations from LLMs simply by providing textual preambles constructed from manually designed templates and real-world statistics, without accessing the model's internal parameters or modules.
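
A minimal sketch of the idea: the preamble is prepended to the input context and the model itself is left untouched. The preamble text below is an illustrative placeholder, not the exact template from the paper (the paper builds preambles from manually designed templates filled with real-world statistics).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative placeholder preamble, NOT the exact template from the paper.
PREAMBLE = (
    "The following occupations are equally likely to be held by women and men: "
    "doctor, nurse, engineer, teacher.\n"
)

model_name = "openlm-research/open_llama_7b_v2"  # any of the models listed below
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The doctor told the nurse that"

# Bias suppression only modifies the input context: prepend the preamble,
# then generate as usual with the unmodified model.
inputs = tokenizer(PREAMBLE + prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```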

Code

The code can be run on Google Colab via the links below, or on a local machine using the .ipynb files in /notebook. Note that you must apply for access to Llama 2 in order to execute the code related to Llama 2 (see the sketch after the table below for loading the model once access is granted).

| Model | Experiment | Colab |
| --- | --- | --- |
| meta-llama/Llama-2-7b-hf | Bias Suppression | Open In Colab |
| meta-llama/Llama-2-7b-hf | Downstream Tasks | Open In Colab |
| openlm-research/open_llama_7b_v2 | Bias Suppression | Open In Colab |
| openlm-research/open_llama_7b_v2 | Downstream Tasks | Open In Colab |
| mosaicml/mpt-7b | Bias Suppression | Open In Colab |
| mosaicml/mpt-7b | Downstream Tasks | Open In Colab |
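
If you run the notebooks locally rather than on Colab, the gated Llama 2 checkpoint requires authentication. A minimal sketch, assuming your access request on the meta-llama Hugging Face page has been approved (the token value is a placeholder):

```python
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Llama 2 is a gated model: request access first, then authenticate
# with your personal Hugging Face access token (placeholder below).
login(token="hf_your_access_token_here")

model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```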

Citation

@misc{oba2024incontextual,
      title={In-Contextual Gender Bias Suppression for Large Language Models}, 
      author={Daisuke Oba and Masahiro Kaneko and Danushka Bollegala},
      year={2024},
      eprint={2309.07251},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
