Full Finetuning for LTX, possibly extended to other models #192
@@ -455,7 +455,7 @@ def _add_training_arguments(parser: argparse.ArgumentParser) -> None:
         "--training_type",
         type=str,
         default=None,
-        help="Type of training to perform. Choose between ['lora']",
+        help="Type of training to perform. Choose between ['lora','finetune']",
     )
     parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
     parser.add_argument(
Review comment: Can happen in another PR, but we could also provide some info so that users know about this. WDYT?
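Following up on the comment above about surfacing this to users: a minimal standalone sketch (not part of this PR) of how argparse's `choices` could both validate the value and list the options in `--help`. The parser setup mirrors the diff; the explicit `parse_args` list is only for demonstration.

```python
import argparse

# Illustrative sketch only: `choices` makes argparse reject unknown values and
# advertise the valid ones in --help, which covers the "let users know" point.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--training_type",
    type=str,
    default=None,
    choices=["lora", "finetune"],
    help="Type of training to perform. Choose between ['lora','finetune']",
)
parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")

args = parser.parse_args(["--training_type", "finetune", "--seed", "42"])
print(args.training_type, args.seed)  # -> finetune 42
```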
@@ -321,3 +321,18 @@ def _pack_latents(latents: torch.Tensor, patch_size: int = 1, patch_size_t: int
     "forward_pass": forward_pass,
     "validation": validation,
 }
Review comment: I didn't know if you two wanted to enable this by swapping out the prepare parameters. I thought that might have been over-engineering, so I just made a copy of this config.

Reply: This is perfect and the intended usage. This will eventually be refactored out into model specs to add some syntactic sugar and make the code easier to follow.
+LTX_VIDEO_T2V_FT_CONFIG = {
+    "pipeline_cls": LTXPipeline,
+    "load_condition_models": load_condition_models,
+    "load_latent_models": load_latent_models,
+    "load_diffusion_models": load_diffusion_models,
+    "initialize_pipeline": initialize_pipeline,
+    "prepare_conditions": prepare_conditions,
+    "prepare_latents": prepare_latents,
+    "post_latent_preparation": post_latent_preparation,
+    "collate_fn": collate_fn_t2v,
+    "forward_pass": forward_pass,
+    "validation": validation,
+}
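On the copy-vs-swap question above: one lightweight alternative (a sketch only, not what this PR does) is to derive the full-finetune config from the existing one with dict unpacking, so the two cannot drift apart. The name `LTX_VIDEO_T2V_LORA_CONFIG` for the existing config is an assumption; it is not visible in this diff.

```python
# Sketch only. Assumes the existing T2V config is named LTX_VIDEO_T2V_LORA_CONFIG;
# that name is not shown in the diff and may differ in the actual file.
LTX_VIDEO_T2V_FT_CONFIG = {
    **LTX_VIDEO_T2V_LORA_CONFIG,  # start from the existing config
    # override individual hooks here only if full finetuning ever needs to
    # diverge, e.g. a full-finetune-specific "forward_pass" (hypothetical)
}
```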
Review comment: Added this here. Should it be called full_finetune?
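For readers new to these config dicts, here is a rough, hypothetical sketch of how a trainer might select between the existing config and the new full-finetune one based on `--training_type`. The registry, the `get_ltx_config` helper, and the name `LTX_VIDEO_T2V_LORA_CONFIG` are illustrative assumptions, not code from this repository.

```python
# Hypothetical dispatch sketch (not part of this PR): pick a model config based
# on the --training_type value added above. LTX_VIDEO_T2V_LORA_CONFIG is an
# assumed name for the existing config; LTX_VIDEO_T2V_FT_CONFIG is the dict
# added in this diff.
_LTX_CONFIGS = {
    "lora": LTX_VIDEO_T2V_LORA_CONFIG,
    "finetune": LTX_VIDEO_T2V_FT_CONFIG,
}


def get_ltx_config(training_type: str) -> dict:
    """Return the config dict for the requested training type."""
    try:
        return _LTX_CONFIGS[training_type]
    except KeyError as err:
        raise ValueError(
            f"Unsupported --training_type {training_type!r}; "
            f"choose from {sorted(_LTX_CONFIGS)}."
        ) from err


# A trainer would then pull the hooks it needs from the selected config, e.g.:
# config = get_ltx_config(args.training_type)
# transformer = config["load_diffusion_models"]()  # call signature assumed
```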