
Add support for the Training Method for finetuning, and for Direct-Preference Optimization (DPO) #648

Triggered via pull request on March 5, 2025, 17:53
Status: Cancelled
Total duration: 14s
Artifacts: none

Workflow: check_code_quality.yml
Trigger: on: pull_request
Job: pre-commit (0s)

Annotations: 2 errors

pre-commit: Canceling since a higher priority waiting request for 'pre-commit-Vprov/dpo_python' exists
pre-commit: The operation was canceled.
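
The first annotation is the standard GitHub Actions message produced when a run is cancelled by a concurrency group with cancel-in-progress enabled: a newer run for the same branch (here 'Vprov/dpo_python') was queued, so this older run was stopped after 14s. Below is a minimal sketch of how check_code_quality.yml might be configured to produce that behavior; the concurrency group expression, action versions, and job steps are assumptions for illustration, not the repository's actual file.

    name: check_code_quality

    on: pull_request

    # Assumed concurrency setup: one run per branch; an in-progress run is
    # cancelled when a newer commit queues a run in the same group, which
    # yields the "Canceling since a higher priority waiting request ... exists"
    # annotation seen above.
    concurrency:
      group: pre-commit-${{ github.head_ref }}
      cancel-in-progress: true

    jobs:
      pre-commit:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.10"
          - uses: pre-commit/action@v3.0.1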