From d120d46b834a26546c582bdbf871deb8edecdbba Mon Sep 17 00:00:00 2001
From: Yuan Gong
Date: Sat, 6 Jan 2024 20:20:51 -0500
Subject: [PATCH] add more pretrained models and low-resource training scripts

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ee6490d..449c682 100644
--- a/README.md
+++ b/README.md
@@ -430,7 +430,7 @@ If you have a question about the code, please create an issue.
 
 ## Required Computational Resources
 
-For LTU/LTU-AS training, we use 4 X A6000 (4 X 48GB=196GB VRAM). The code can be run on 1 X A6000 (or similar GPUs). To run on smaller GPUs, turn on model parallelism, we were able to run it on 4 X A5000 (4 X 24GB = 96GB) (see [LTU script]() and [LTU-AS script]()).
+For LTU/LTU-AS training, we use 4 X A6000 (4 X 48GB = 192GB VRAM). The code can be run on 1 X A6000 (or similar GPUs). To run on smaller GPUs, turn on model parallelism; we were able to run it on 4 X A5000 (4 X 24GB = 96GB) (see [LTU script](https://github.com/YuanGongND/ltu/blob/main/src/ltu/train_script/finetune_toy_low_resource.sh) and [LTU-AS script](https://github.com/YuanGongND/ltu/blob/main/src/ltu_as/train_script/finetune_toy_low_resource.sh)).
 
 For inference, the minimal would be 2 X TitanX (2 X 12GB = 24GB) for LTU and 4 X TitanX (4 X 12GB = 48GB) for LTU-AS (as Whisper takes some memory). However, you can run inference on CPUs.
 