Add support for the Training Method for finetuning, and for Direct-Preference Optimization (DPO) #652

Triggered via pull request on March 5, 2025, 19:25
Status: Success
Total duration: 26s

check_code_quality.yml

on: pull_request
pre-commit (18s)
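
The run corresponds to a workflow named check_code_quality.yml, triggered on pull_request, with a single pre-commit job. The workflow file itself is not shown on this page; the following is only a minimal sketch of what such a workflow could look like, and the action choices and versions (actions/checkout, actions/setup-python, pre-commit/action) are assumptions, not the repository's actual configuration.

    # Hypothetical sketch of check_code_quality.yml; the real file may differ.
    name: check_code_quality

    on: pull_request

    jobs:
      pre-commit:
        runs-on: ubuntu-latest
        steps:
          # Check out the PR branch so pre-commit can inspect the changed files.
          - uses: actions/checkout@v4
          # Provide a Python interpreter for the pre-commit hooks.
          - uses: actions/setup-python@v5
            with:
              python-version: "3.11"
          # Run all configured pre-commit hooks against the repository.
          - uses: pre-commit/action@v3.0.1

A setup along these lines would explain the short run time reported above: pre-commit only runs the linters and formatters defined in the repository's .pre-commit-config.yaml, so the job completes in seconds.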