> [!TIP]
> You can now use a variety of popular models other than OpenAI's GPT-2✨
> [!IMPORTANT]
> Main Repository: Zeta
Fully Open-source LLM Tool

- Select a Pre-trained Model👐
- Select a Dataset🧠
- Wait🕰️
- You've Successfully Created Your Own LLM✨
- Install Git and Git LFS
- Clone this repository (example: `git clone https://github.com/DiamondGotCat/Zeta-Tool.git`)
- Check for and install Python and pip (recommended: Miniconda)
- Install the requirements with pip (example: `pip install pandas transformers torch rich`)
- Run `training.py` with Python
- Answer the selection prompts
- Wait
- Done
- Run `execute.py` with Python
- Once the model is loaded, enter a prompt
- Get an answer
- Enter `/q` to exit the chat
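Taken together, the steps above boil down to the following commands (a sketch assuming a Unix-like shell with Git, Git LFS, Python, and pip already installed):

```shell
# Clone the repository (Git LFS is needed for the bundled datasets)
git clone https://github.com/DiamondGotCat/Zeta-Tool.git
cd Zeta-Tool

# Install the Python dependencies
pip install pandas transformers torch rich

# Train: answer the interactive selection prompts, then wait
python training.py

# Chat with the trained model (enter /q to exit)
python execute.py
```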
- `training.py`: Train a model on a Zeta-formatted dataset
- `execute.py`: Run the trained model (you need to move the model folder to `./trained-model`)
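For reference, here is a minimal sketch of what running a trained model can look like with the Hugging Face `transformers` API. This is an assumption about `execute.py`'s internals, not its actual code: it presumes the checkpoint under `./trained-model` is a standard Hugging Face model directory, and the `generate_reply` helper name is hypothetical.

```python
from pathlib import Path


def generate_reply(prompt: str, model_dir: str = "./trained-model") -> str:
    """Load the trained checkpoint and generate a completion for `prompt`.

    Assumes `model_dir` is a standard Hugging Face model directory
    (e.g. produced by `save_pretrained` during training); this mirrors
    what execute.py is presumed to do, not its exact implementation.
    """
    # Imported lazily so the helper can be defined without the (large)
    # transformers/torch stack loaded.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=50)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Only try to generate if a trained model folder is actually present.
    if Path("./trained-model").exists():
        print(generate_reply("Hello"))
    else:
        print("No ./trained-model folder found; run training.py first.")
```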
- OpenO1-SFT by the OpenO1 Team, converted to an Azuki-formatted dataset (Git LFS is required to clone this)
- Zeta-Classic 2n by DiamondGotCat
- `gpt2`
  - Overview: Training from Scratch
  - Tokenizer: `openai-community/gpt2`
- `gpt2-small`
  - Tokenizer/Model: `openai-community/gpt2`
- `gpt2-medium`
  - Tokenizer/Model: `openai-community/gpt2-medium`

NOTE: Model Access Permission Required

- `gemma`
  - Size 2b:
    - Tokenizer/Model: `google/gemma-2b`
  - Size 7b:
    - Tokenizer/Model: `google/gemma-7b`
- `codegemma`
  - Size 2b:
    - Tokenizer/Model: `google/codegemma-2b`
  - Size 7b:
    - Tokenizer/Model: `google/codegemma-7b`
- `gemma1.1` (Inst)
  - Size 2b:
    - Tokenizer/Model: `google/gemma-1.1-2b-it`
  - Size 7b:
    - Tokenizer/Model: `google/gemma-1.1-7b-it`
- `gemma2`
  - Size 2b:
    - Tokenizer/Model: `google/gemma-2-2b`
  - Size 9b:
    - Tokenizer/Model: `google/gemma-2-9b`
  - Size 27b:
    - Tokenizer/Model: `google/gemma-2-27b`

NOTE: Model Access Permission Required

- `llama2`
  - Size 7b:
    - Tokenizer/Model: `meta-llama/Llama-2-7b`
  - Size 13b:
    - Tokenizer/Model: `meta-llama/Llama-2-13b`
  - Size 70b:
    - Tokenizer/Model: `meta-llama/Llama-2-70b`
- `codellama`
  - Size 7b:
    - Tokenizer/Model: `meta-llama/CodeLlama-7b-hf`
  - Size 13b:
    - Tokenizer/Model: `meta-llama/CodeLlama-13b-hf`
  - Size 34b:
    - Tokenizer/Model: `meta-llama/CodeLlama-34b-hf`
  - Size 70b:
    - Tokenizer/Model: `meta-llama/CodeLlama-70b-hf`
- `llama3`
  - Size 8b:
    - Tokenizer/Model: `meta-llama/Meta-Llama-3-8B`
  - Size 70b:
    - Tokenizer/Model: `meta-llama/Meta-Llama-3-70B`
- `llama3.1`
  - Size 8b:
    - Tokenizer/Model: `meta-llama/Llama-3.1-8B`
  - Size 70b:
    - Tokenizer/Model: `meta-llama/Llama-3.1-70B`
  - Size 405b:
    - Tokenizer/Model: `meta-llama/Llama-3.1-405B`
- `llama3.2`
  - Size 1b:
    - Tokenizer/Model: `meta-llama/Llama-3.2-1B`
  - Size 3b:
    - Tokenizer/Model: `meta-llama/Llama-3.2-3B`
- `llama3.2-v` (Vision)
  - Size 11b:
    - Tokenizer/Model: `meta-llama/Llama-3.2-11B-Vision`
  - Size 90b:
    - Tokenizer/Model: `meta-llama/Llama-3.2-90B-Vision`
- `llama3.3`
  - Size 70b: (Select `confirm`)
    - Tokenizer/Model: `meta-llama/Llama-3.3-70B-Instruct`
- `qwen`
  - Size 1.8b:
    - Tokenizer/Model: `Qwen/Qwen-1_8B`
  - Size 7b:
    - Tokenizer/Model: `Qwen/Qwen-7B`
  - Size 14b:
    - Tokenizer/Model: `Qwen/Qwen-14B`
  - Size 72b:
    - Tokenizer/Model: `Qwen/Qwen-72B`
- `qwen1.5`
  - Size 0.5b:
    - Tokenizer/Model: `Qwen/Qwen1.5-0.5B`
  - Size 1.8b:
    - Tokenizer/Model: `Qwen/Qwen1.5-1.8B`
  - Size 4b:
    - Tokenizer/Model: `Qwen/Qwen1.5-4B`
  - Size 14b:
    - Tokenizer/Model: `Qwen/Qwen1.5-14B`
  - Size 32b:
    - Tokenizer/Model: `Qwen/Qwen1.5-32B`
  - Size 72b:
    - Tokenizer/Model: `Qwen/Qwen1.5-72B`
- `trained_model`
  - Overview: Your Pre-trained Model
Thank you for reading this.
Actually, Zeta-Tool is a personal project, and some parts are still under development.
If possible, please help in one of the following ways:
- Simple: Give the repository a star.
- For programmers/engineers: Help with code fixes or testing. (See Ideas)
- For those who can support Zeta-Tool's future: Publish your trained models on Hugging Face, and please include information about the Zeta-Tool project. For more details, see Help with Trained Model.