Add support for the Training Method for finetuning, and for Direct-Preference Optimization (DPO) #650

Triggered via pull request March 5, 2025 18:18
Status Success
Total duration 35s
check_code_quality.yml

on: pull_request
pre-commit
27s