
[Paper][WWW2025] OntoTune: Ontology-Driven Self-training for Aligning Large Language Models


OntoTune

In this work, we propose an ontology-driven self-training framework called OntoTune, which aims to align LLMs with an ontology through in-context learning, enabling the generation of responses guided by the ontology.

🔔 News

  • 2025-01 OntoTune is accepted by WWW 2025!
  • 2025-02 Our paper is released on arXiv!

🚀 How to start

git clone https://github.com/zjukg/OntoTune.git

The fine-tuning code is built on the open-source repo LLaMA-Factory.

Dependencies

cd LLaMA-Factory
pip install -e ".[torch,metrics]"

Data Preparation

  1. The supervised instruction-tuning data, generated by LLaMA3 8B for the LLM itself, is available at the link.
  2. Put the downloaded OntoTune_sft.json file under LLaMA-Factory/data/ directory.
  3. Evaluation datasets for hypernym discovery and medical question answering are in LLaMA-Factory/data/evaluation_HD and LLaMA-Factory/data/evaluation_QA, respectively.

Finetune LLaMA3

You need to add the model_name_or_path parameter to the YAML file.

cd LLaMA-Factory
llamafactory-cli train script/OntoTune_sft.yaml
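As a sketch, the added line in script/OntoTune_sft.yaml could look like the following. The model identifier shown here is an assumption; point it at the Hugging Face ID or local path of the LLaMA3 8B checkpoint you actually use, and leave the rest of the shipped config unchanged.

```yaml
# Hypothetical excerpt of script/OntoTune_sft.yaml:
# only model_name_or_path needs to be filled in.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # or a local checkpoint directory
```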

🤝 Cite:

Please consider citing this paper if you find our work useful.


@article{OntoTune,
  title={OntoTune: Ontology-Driven Self-training for Aligning Large Language Models},
  author={Liu, Zhiqiang and Gan, Chengtao and Wang, Junjie and Zhang, Yichi and Bo, Zhongpu and Sun, Mengshu and Chen, Huajun and Zhang, Wen},
  journal={arXiv preprint arXiv:2502.05478},
  year={2025}
}
