Commit 83bd68d

Fix incorrect logic related to --keep_last_checkpoint_only
We should not assign a value to this parameter, as it takes no value.

Signed-off-by: Courtney Pacheco <6019922+courtneypacheco@users.noreply.github.com>
1 parent 989e4fb commit 83bd68d

File tree

1 file changed: +3 −1 lines changed

src/instructlab/training/main_ds.py

Lines changed: 3 additions & 1 deletion

```diff
@@ -707,9 +707,11 @@ def run_training(torch_args: TorchrunArgs, train_args: TrainingArgs) -> None:
         f"--max_batch_len={train_args.max_batch_len}",
         f"--seed={train_args.random_seed}",
         f"--chat-tmpl-path={train_args.chat_tmpl_path}",
-        f"--keep_last_checkpoint_only={train_args.keep_last_checkpoint_only}",
     ]

+    if train_args.keep_last_checkpoint_only:
+        command.append(f"--keep_last_checkpoint_only")
+
     if train_args.checkpoint_at_epoch:
         command.append("--checkpoint_at_epoch")

```
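The fix matters because `--keep_last_checkpoint_only` is a zero-argument (store-true style) boolean flag. A minimal sketch illustrates why: the parser and `build_command` helper below are hypothetical stand-ins, not the project's actual code, but they show that argparse rejects an explicit `--flag=<value>` for such a flag, while conditionally appending the bare flag (as this commit does) parses cleanly.

```python
import argparse

# Hypothetical parser mirroring a zero-argument boolean flag.
parser = argparse.ArgumentParser()
parser.add_argument("--keep_last_checkpoint_only", action="store_true")

def build_command(keep_last_checkpoint_only: bool) -> list[str]:
    """Append the bare flag only when the option is enabled (the fixed pattern)."""
    command = ["torchrun", "train.py"]  # hypothetical base command
    if keep_last_checkpoint_only:
        command.append("--keep_last_checkpoint_only")
    return command

# The bare flag parses cleanly; omitting it leaves the default False.
assert parser.parse_args(["--keep_last_checkpoint_only"]).keep_last_checkpoint_only
assert not parser.parse_args([]).keep_last_checkpoint_only

# The old pattern, --flag=<value>, makes argparse exit with an
# "ignored explicit argument" error, because the option takes no value.
try:
    parser.parse_args(["--keep_last_checkpoint_only=False"])
except SystemExit:
    print("rejected")  # argparse refuses an explicit value for a store_true flag
```

Note in particular that the removed line would have passed `--keep_last_checkpoint_only=False` when the option was disabled, which fails for a flag that accepts no value; appending the bare flag only when enabled avoids the problem entirely.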
