# easy-llm-finetuner

`easy-llm-finetuner` is a project that simplifies the deployment and fine-tuning of state-of-the-art large language models (LLMs). By leveraging Docker, it eliminates the hassle of environment setup, so you can focus on model training rather than configuration.
- Ensure Docker is installed on your machine. If not, download and install it from the official Docker website.
- If you intend to use a GPU for model training, make sure the NVIDIA drivers are properly installed on your machine. Refer to the official NVIDIA website for driver installation. A quick sanity check for both is sketched below.
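Before pulling anything, you can verify that Docker and GPU passthrough both work. This check is not part of the project itself; the CUDA image tag below is just an example, and GPU passthrough additionally requires the NVIDIA Container Toolkit:

```bash
# Confirm the Docker daemon is reachable.
docker info > /dev/null && echo "Docker OK"

# Confirm containers can see the GPU (needs the NVIDIA Container Toolkit).
# The CUDA image tag is only an example; any CUDA base image works.
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```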
Currently, `easy-llm-finetuner` supports the following projects:

- FastChat

Each of these projects can be easily fine-tuned using the `easy-llm-finetuner` environment. Check out each project's page for more specific details on how to use it within this system.
- Run the Docker container

  Use the provided script to start the Docker container. This script mounts your local directories for code, model data, and output to the corresponding directories in the container (a sketch of what such a launch script typically contains follows these steps).

  ```bash
  bash ./docker_run/easy_fastchat_docker.sh
  ```
- Start the model training

  Within the Docker container, execute the one-click training script to start fine-tuning your model.

  ```bash
  docker exec -it fastchat bash

  # After attaching to the container:
  # For default FSDP fine-tuning
  bash /workspace/easy_llm_finetuner/fastchat_finetune_fsdp.sh

  # For DeepSpeed-optimized fine-tuning (more memory efficient)
  bash /workspace/easy_llm_finetuner/fastchat_finetune_deepspeed.sh
  ```
And that's it! You are now fine-tuning your LLM using state-of-the-art methods, all within a neatly encapsulated environment.
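For reference, a launch script like `easy_fastchat_docker.sh` usually boils down to a single `docker run` invocation. The sketch below is an assumption about its contents, not the shipped script: the image name and host paths are placeholders, while the container name `fastchat` and the `/workspace/easy_llm_finetuner` mount point match the commands used above:

```bash
#!/bin/bash
# Hypothetical sketch of a container launch script; the image name and
# host paths are placeholders, not the values shipped with the repo.
docker run -d \
  --gpus all \
  --name fastchat \
  --shm-size=16g \
  -v "$(pwd)":/workspace/easy_llm_finetuner \
  -v /path/to/models:/workspace/input \
  -v /path/to/output:/workspace/output \
  <your-image-name> \
  sleep infinity
```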
If you need the container to read and write specific locations on your machine, use the configurable script instead:

- Modify the mount directories

  Replace `CODE_DIR`, `INPUT_DIR`, and `OUTPUT_DIR` with your actual directories in the provided script.

  ```bash
  ### Easy-LLM-Finetuner/docker_run/fastchat_docker.sh
  CODE_DIR="<Your code directory>"
  INPUT_DIR="<Your input directory>"
  OUTPUT_DIR="<Your output directory>"
  ```
- Run the Docker container

  Use the provided script to start the Docker container. This script mounts your local directories for code, model data, and output to the corresponding directories in the container.

  ```bash
  bash ./docker_run/fastchat_docker.sh
  ```
- Start the model training

  Within the Docker container, execute the one-click training script to start fine-tuning your model (a sketch of the kind of command these scripts wrap follows these steps).

  ```bash
  docker exec -it fastchat bash

  # After attaching to the container:
  # For default FSDP fine-tuning
  bash /workspace/easy_llm_finetuner/fastchat_finetune_fsdp.sh

  # For DeepSpeed-optimized fine-tuning (more memory efficient)
  bash /workspace/easy_llm_finetuner/fastchat_finetune_deepspeed.sh
  ```
And that's it! You are now fine-tuning your LLM using state-of-the-art methods, all within a neatly encapsulated environment.
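For orientation, FastChat's fine-tuning entry point is typically launched with `torchrun`, so the one-click scripts above presumably wrap a command along these lines. Every path, model name, and hyperparameter below is a placeholder, and the exact flags in this repo's scripts may differ:

```bash
# Hypothetical sketch of the kind of command fastchat_finetune_fsdp.sh wraps.
# All paths, the model name, and the hyperparameters are placeholders.
torchrun --nproc_per_node=8 fastchat/train/train_mem.py \
  --model_name_or_path /workspace/input/llama-13b \
  --data_path /workspace/input/train_data.json \
  --output_dir /workspace/output/llama-13b-finetuned \
  --num_train_epochs 3 \
  --per_device_train_batch_size 2 \
  --learning_rate 2e-5 \
  --bf16 True \
  --fsdp "full_shard auto_wrap" \
  --fsdp_transformer_layer_cls_to_wrap LlamaDecoderLayer
```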
FSDP fine-tuning:

| Model Type | Recommended Configuration |
|---|---|
| llama-13b | 8 × RTX 4090 |
| llama-7b | 3 × RTX 4090 |
DeepSpeed fine-tuning:

| Model Type | Recommended Configuration |
|---|---|
| llama-65b | 3 × RTX 4090 |
| llama-33b | 3 × RTX 4090, or 1 × A100 (faster) |
| llama-13b | 2 × RTX 4090 |
| llama-7b | 1 × RTX 4090 |
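Fitting a model as large as llama-65b on a few consumer GPUs implies aggressive memory savings; the DeepSpeed script most likely relies on ZeRO stage 3 with CPU offload, though the source does not show its configuration. A generic config of that kind looks like the following (written out via a heredoc; this is an illustrative example, not the file shipped with the repo):

```bash
# Generic ZeRO-3 + CPU-offload DeepSpeed config; illustrative only,
# not the configuration shipped with this repo.
cat > zero3_offload.json <<'EOF'
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true }
  },
  "bf16": { "enabled": true },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
EOF
```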
If you encounter any issues or have suggestions for improvements, feel free to submit an issue or a pull request.
This project is licensed under the terms of the MIT license. See the LICENSE file for details.
We hope `easy-llm-finetuner` makes your journey in large language model fine-tuning a breeze. If you find this project useful, please consider giving it a star!