Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods

This repository accompanies the paper "Revealing the Parametric Knowledge of Language Models: A Unified Framework for Attribution Methods", accepted at ACL 2024.

Abstract

Our study introduces a novel evaluation framework to quantify and compare the knowledge revealed by Instance Attribution (IA) and Neuron Attribution (NA). To align the results of the two methods, we introduce NA-Instances, which applies NA to retrieve influential training instances, and IA-Neurons, which discovers important neurons within the influential instances identified by IA. We further propose a comprehensive list of faithfulness tests to evaluate the comprehensiveness and sufficiency of the explanations provided by both methods. Through extensive experiments and analysis, we demonstrate that NA generally reveals more diverse and comprehensive information about the LM's parametric knowledge than IA does. Nevertheless, IA provides unique and valuable insights into the LM's parametric knowledge that are not revealed by NA. Our findings further suggest the potential of a synergistic approach that combines the diverse findings of IA and NA for a more holistic understanding of an LM's parametric knowledge.

Fine-tuning

To be added.
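
Until the fine-tuning scripts are published, the snippet below is a minimal, hypothetical sketch of a standard HuggingFace fine-tuning setup. The backbone (`bert-base-uncased`), the dataset (SST-2 as a stand-in), and all hyperparameters are assumptions, not the paper's configuration.

```python
# Hypothetical sketch only: a standard HuggingFace sequence-classification
# fine-tuning loop. The backbone, dataset (SST-2 as a stand-in) and
# hyperparameters are assumptions, not the paper's configuration.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"                  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("glue", "sst2")            # stand-in for the paper's tasks

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True).rename_column("label", "labels")
dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

args = TrainingArguments(
    output_dir="checkpoints",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```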

Neuron Attribution

To be added.
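
As a placeholder, the following is a minimal sketch of gradient-times-activation neuron attribution over the FFN (intermediate) activations of a single BERT layer. The backbone, the layer choice, and the scoring rule are illustrative assumptions; the paper's NA method may differ (e.g., integrated gradients over neurons).

```python
# Hypothetical sketch only: gradient-times-activation scores for the FFN
# (intermediate) neurons of one BERT layer. The backbone, the layer choice and
# the scoring rule are illustrative; the paper's NA method may differ
# (e.g. integrated gradients over neurons).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"                      # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

layer_idx = 8                                         # layer to inspect (arbitrary)
captured = {}

def hook(module, inputs, output):
    output.retain_grad()                              # keep gradients of the activations
    captured["act"] = output

handle = model.bert.encoder.layer[layer_idx].intermediate.register_forward_hook(hook)

enc = tokenizer("The capital of France is Paris.", return_tensors="pt")
logits = model(**enc).logits
pred = logits.argmax(-1).item()
logits[0, pred].backward()                            # gradient of the predicted-class logit

act = captured["act"]                                 # (1, seq_len, intermediate_size)
scores = (act * act.grad).sum(dim=1).squeeze(0)       # aggregate over tokens
top_neurons = scores.abs().topk(10).indices
print(f"Top neurons in layer {layer_idx}: {top_neurons.tolist()}")
handle.remove()
```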

Instance Attribution

To be added.
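
The sketch below illustrates one common family of instance attribution: scoring training instances by the cosine similarity between their loss gradients and the gradient of a test instance, restricted to the classifier head for tractability. The model, the toy data, and the gradient-similarity scorer are assumptions and may not match the IA methods evaluated in the paper (e.g., influence functions).

```python
# Hypothetical sketch only: a gradient-similarity (TracIn-style) instance
# attribution, restricted to the classifier head for tractability. The model,
# the toy data and the scoring rule are assumptions and may not match the IA
# methods evaluated in the paper (e.g. influence functions).
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"                      # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

params = list(model.classifier.parameters())          # only the classification head

def loss_grad(text, label):
    """Flattened gradient of the loss for a single (text, label) pair."""
    enc = tokenizer(text, return_tensors="pt")
    loss = model(**enc, labels=torch.tensor([label])).loss
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.flatten() for g in grads])

# Toy examples standing in for the training set; in practice this loop runs
# over every training instance.
train_examples = [("Paris is the capital of France.", 1),
                  ("The moon is made of cheese.", 0)]
test_grad = loss_grad("Berlin is the capital of Germany.", 1)

scores = [F.cosine_similarity(loss_grad(t, y), test_grad, dim=0).item()
          for t, y in train_examples]
for score, (text, _) in sorted(zip(scores, train_examples), reverse=True):
    print(f"{score:+.3f}  {text}")
```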

Neuron Attribution Faithfulness Test

To be added.
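
Below is a hypothetical sketch of an ablation-style faithfulness check: comprehensiveness as the drop in the predicted-class probability when the top-attributed FFN neurons are zeroed out, and sufficiency as the drop when only those neurons are kept. The layer index and neuron indices are placeholders assumed to come from a neuron-attribution step such as the sketch above; the paper's faithfulness tests are more elaborate.

```python
# Hypothetical sketch only: ablation-style comprehensiveness / sufficiency for
# neuron attributions. `layer_idx` and `top_neurons` are placeholders assumed to
# come from a neuron-attribution step; the paper's tests are more elaborate.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"                      # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

layer_idx = 8                                         # assumed attributed layer
top_neurons = torch.tensor([12, 87, 301, 955, 1420])  # assumed attributed neurons

def predicted_probs(text, mask_fn=None):
    """Class probabilities, optionally masking FFN neurons in `layer_idx`."""
    handle = None
    if mask_fn is not None:
        layer = model.bert.encoder.layer[layer_idx].intermediate
        handle = layer.register_forward_hook(lambda m, i, o: mask_fn(o))
    with torch.no_grad():
        probs = model(**tokenizer(text, return_tensors="pt")).logits.softmax(-1)
    if handle is not None:
        handle.remove()
    return probs

def zero_top(out):
    """Remove the attributed neurons (comprehensiveness)."""
    out = out.clone()
    out[..., top_neurons] = 0.0
    return out

def keep_top(out):
    """Keep only the attributed neurons (sufficiency)."""
    kept = torch.zeros_like(out)
    kept[..., top_neurons] = out[..., top_neurons]
    return kept

text = "The capital of France is Paris."
full = predicted_probs(text)
pred = full.argmax(-1).item()

comprehensiveness = (full[0, pred] - predicted_probs(text, zero_top)[0, pred]).item()
sufficiency = (full[0, pred] - predicted_probs(text, keep_top)[0, pred]).item()
print(f"comprehensiveness={comprehensiveness:.3f}  sufficiency={sufficiency:.3f}")
```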
