🇮🇩🌐🤖 IndoWebGen

IndoWebGen model pipeline

Data Generation

The data generation process uses the Alpaca Self-Instruct pipeline, but with an OpenAI chat model (e.g., gpt-3.5-turbo) in place of the original completion model. Please note that this pipeline does not employ a batch system, since the chat model does not support prompt batching.

python -m generate_data generate_instruction_following_data \
  --output_dir="./" \
  --num_instructions_to_generate=100 \
  --num_instructions_to_generate_per_request=2 \
  --model_name="gpt-3.5-turbo-16k" \
  --similarity_threshold=0.6
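
For reference, the core of the non-batched generation loop looks roughly like the sketch below. It assumes the pre-1.0 openai Python SDK and the rouge_score package; the prompt wording, function name, and seed-pool handling are illustrative assumptions, not the repository's actual code.

# Minimal sketch of the non-batched Self-Instruct loop (assumed
# implementation; not copied from generate_data.py).
import openai
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)

def generate_instructions(seed_instructions, num_to_generate=100,
                          per_request=2, model_name="gpt-3.5-turbo-16k",
                          similarity_threshold=0.6):
    """Request a few new instructions per chat call and keep only those
    sufficiently dissimilar to the instructions collected so far."""
    pool = list(seed_instructions)
    generated = []
    while len(generated) < num_to_generate:
        # One chat request yields `per_request` candidates. The chat
        # endpoint takes a single message list per call, so requests
        # cannot be batched the way legacy completion prompts could.
        prompt = (
            f"Propose {per_request} new instructions in the style of "
            "these examples, one per line:\n" + "\n".join(pool[:5])
        )
        response = openai.ChatCompletion.create(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
        )
        candidates = response.choices[0].message.content.strip().split("\n")
        for candidate in candidates:
            # Discard near-duplicates, mirroring --similarity_threshold.
            max_sim = max(
                scorer.score(existing, candidate)["rougeL"].fmeasure
                for existing in pool + generated
            )
            if max_sim < similarity_threshold:
                generated.append(candidate)
    return generated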

Fine-tuning

The fine-tuning process trains a LoRA adapter on top of the codellama/CodeLlama-7b-hf base model (see --base_model_id below).

python finetune.py \
  --hf_token hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
  --base_model_id codellama/CodeLlama-7b-hf \
  --dataset_id alimtegar/indowebgen-dataset \
  --output_dir "./indowebgen-7b-4k-lora" \
  --output_model_id alimtegar/indowebgen-7b-4k-lora \
  --commit_message "Finetune for 3 epochs" \
  --vastai_api_key xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
  --vastai_instance_id 1234567 \
  --stop_vastai_instance 0
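
Under the hood, finetune.py presumably follows the standard Hugging Face PEFT LoRA recipe. The sketch below shows that recipe with assumed hyperparameters (LoRA rank, target modules) and an assumed "text" field in the dataset records; it is a rough outline, not a copy of the actual script.

# Minimal sketch of LoRA fine-tuning with Hugging Face PEFT (assumed
# hyperparameters and dataset schema; not the actual finetune.py).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Wrap the base model with LoRA adapters so only a small set of
# low-rank weights is trained.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,       # assumed values
    target_modules=["q_proj", "v_proj"],          # assumed projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

dataset = load_dataset("alimtegar/indowebgen-dataset", split="train")

def tokenize(example):
    # Assumes each record exposes a single "text" field.
    return tokenizer(example["text"], truncation=True, max_length=4096)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./indowebgen-7b-4k-lora",
        num_train_epochs=3,  # matches the "Finetune for 3 epochs" commit
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    # mlm=False copies input_ids to labels for causal-LM training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
# Requires a prior Hugging Face login (e.g., via the hf_token argument).
model.push_to_hub("alimtegar/indowebgen-7b-4k-lora")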
