
Commit 6716ccf

a-r-r-o-w and sayakpaul authored
update (#198)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
1 parent dbffc80 commit 6716ccf

File tree: 2 files changed (+8, -6 lines)

README.md

Lines changed: 5 additions & 3 deletions
@@ -43,6 +43,8 @@ Then launch LoRA fine-tuning. Below we provide an example for LTX-Video. We refe
 <details>
 <summary>Training command</summary>
 
+TODO: LTX does not do too well with the disney dataset. We will update this to use a better example soon.
+
 ```bash
 #!/bin/bash
 export WANDB_MODE="offline"
@@ -75,18 +77,18 @@ dataset_cmd="--data_root $DATA_ROOT \
 dataloader_cmd="--dataloader_num_workers 0"
 
 # Diffusion arguments
-diffusion_cmd="--flow_resolution_shifting"
+diffusion_cmd="--flow_weighting_scheme logit_normal"
 
 # Training arguments
 training_cmd="--training_type lora \
   --seed 42 \
   --mixed_precision bf16 \
   --batch_size 1 \
-  --train_steps 1200 \
+  --train_steps 3000 \
   --rank 128 \
   --lora_alpha 128 \
   --target_modules to_q to_k to_v to_out.0 \
-  --gradient_accumulation_steps 1 \
+  --gradient_accumulation_steps 4 \
   --gradient_checkpointing \
   --checkpointing_steps 500 \
   --checkpointing_limit 2 \

docs/training/ltx_video.md

Lines changed: 3 additions & 3 deletions
@@ -36,18 +36,18 @@ dataset_cmd="--data_root $DATA_ROOT \
 dataloader_cmd="--dataloader_num_workers 0"
 
 # Diffusion arguments
-diffusion_cmd="--flow_resolution_shifting"
+diffusion_cmd="--flow_weighting_scheme logit_normal"
 
 # Training arguments
 training_cmd="--training_type lora \
   --seed 42 \
   --mixed_precision bf16 \
   --batch_size 1 \
-  --train_steps 1200 \
+  --train_steps 3000 \
   --rank 128 \
   --lora_alpha 128 \
   --target_modules to_q to_k to_v to_out.0 \
-  --gradient_accumulation_steps 1 \
+  --gradient_accumulation_steps 4 \
   --gradient_checkpointing \
   --checkpointing_steps 500 \
   --checkpointing_limit 2 \
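One way to read the `--gradient_accumulation_steps 1 -> 4` change alongside `--batch_size 1`: with gradient accumulation, the effective batch size per optimizer step is batch_size multiplied by the accumulation steps (a general property of gradient accumulation, not something this commit states). A quick arithmetic check:

```shell
#!/bin/bash
# Effective batch size under gradient accumulation (illustrative).
batch_size=1
grad_accum=4
effective_batch=$((batch_size * grad_accum))
echo "$effective_batch"
```

So the new settings keep per-step memory at batch size 1 while averaging gradients over 4 micro-batches before each update.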

0 commit comments
