add more pretrained models and low-resource training scripts
YuanGongND committed Jan 7, 2024
1 parent 82d3ce5 commit d120d46
Showing 1 changed file with 1 addition and 1 deletion.
README.md: 1 addition & 1 deletion
@@ -430,7 +430,7 @@ If you have a question about the code, please create an issue.

## Required Computational Resources

- For LTU/LTU-AS training, we use 4 X A6000 GPUs (4 X 48GB = 192GB VRAM). The code can also be run on 1 X A6000 (or similar GPUs). To run on smaller GPUs, turn on model parallelism; we were able to run it on 4 X A5000 (4 X 24GB = 96GB) (see [LTU script]() and [LTU-AS script]()).
+ For LTU/LTU-AS training, we use 4 X A6000 GPUs (4 X 48GB = 192GB VRAM). The code can also be run on 1 X A6000 (or similar GPUs). To run on smaller GPUs, turn on model parallelism; we were able to run it on 4 X A5000 (4 X 24GB = 96GB) (see [LTU script](https://github.com/YuanGongND/ltu/blob/main/src/ltu/train_script/finetune_toy_low_resource.sh) and [LTU-AS script](https://github.com/YuanGongND/ltu/blob/main/src/ltu_as/train_script/finetune_toy_low_resource.sh)).

For inference, the minimum is 2 X TitanX (2 X 12GB = 24GB) for LTU and 4 X TitanX (4 X 12GB = 48GB) for LTU-AS (as Whisper takes additional memory). However, you can also run inference on CPUs.

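The added line says to turn on model parallelism when training on smaller GPUs, with the actual configuration living in the linked low-resource scripts. As a rough, hypothetical sketch of what model parallelism typically means for a LLaMA-style checkpoint (not LTU's actual training code), Hugging Face Transformers can shard a model's layers across all visible GPUs with `device_map="auto"`; the checkpoint name and per-GPU memory cap below are placeholders, not LTU's settings.

```python
# Hypothetical sketch: shard a LLaMA-style checkpoint across several small GPUs
# with Hugging Face Transformers/accelerate. This is NOT the LTU training script;
# see the low-resource scripts linked in the diff for the actual configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder checkpoint, not the LTU model

tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" splits the layers across all visible GPUs (e.g. 4 x A5000,
# 24GB each), so no single card has to hold the full model. max_memory caps
# usage per GPU to leave headroom for activations and optimizer state.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to reduce VRAM
    device_map="auto",          # naive model (pipeline) parallelism
    max_memory={i: "22GiB" for i in range(torch.cuda.device_count())},
)

# Alpaca-lora-style trainers additionally set model.is_parallelizable = True and
# model.model_parallel = True so the HF Trainer does not stack data parallelism
# on top of the sharded model.
```

This only illustrates the general technique; the linked `finetune_toy_low_resource.sh` scripts remain the authoritative reference for how LTU/LTU-AS enable it.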
