LogicNet: Miner documentation

Overview

The Miner is responsible for solving the challenges generated by the Validator: it receives each challenge, solves it, and submits the solution back. Miners are rewarded based on the number of challenges solved and the quality of their solutions.

Miners are free to customize how solutions are produced. We provide a simple default implementation to start with, which can use a vLLM server (or an API provider) to generate solutions.

Protocol: LogicSynapse.

  • The Miner is provided:
    • logic_question: the challenge generated by the Validator.
  • The Miner must fill in the following fields of the synapse to submit a solution:
    • logic_reasoning: the step-by-step reasoning used to solve the challenge.
    • logic_answer: the final answer to the challenge, as a short sentence.
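
Below is a minimal sketch of what a custom forward pass might look like. Only the synapse field names (logic_question, logic_reasoning, logic_answer) come from the protocol above; the call_llm helper and the prompts are hypothetical placeholders for whatever solver you use.

```python
# Hypothetical miner forward pass for LogicSynapse. Only the synapse field
# names are from the protocol; call_llm is a placeholder for your own solver.
async def forward(self, synapse):  # synapse: LogicSynapse
    question = synapse.logic_question

    # Produce step-by-step reasoning with your model (placeholder helper).
    reasoning = await call_llm(f"Solve this step by step:\n{question}")

    # Condense the reasoning into a short final answer (placeholder helper).
    answer = await call_llm(
        f"Reasoning:\n{reasoning}\n\nState the final answer in one short sentence."
    )

    synapse.logic_reasoning = reasoning
    synapse.logic_answer = answer
    return synapse
```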

Reward Structure:

  • correctness (bool): the Validator asks an LLM to check whether logic_answer matches the ground truth.
  • similarity (float): the Validator computes the cosine similarity between logic_reasoning and its own reasoning.
  • time_penalty (float): a penalty for slow responses, computed as process_time / timeout * MAX_PENALTY.
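
Illustratively, these terms might combine as in the sketch below. The time_penalty formula is taken from the list above; the MAX_PENALTY value and the aggregation weights are assumptions for illustration, not the subnet's actual constants.

```python
MAX_PENALTY = 0.5  # assumed value, not the subnet's actual constant

def score(correctness: bool, similarity: float,
          process_time: float, timeout: float) -> float:
    # Penalty grows linearly with response time, capped at MAX_PENALTY.
    time_penalty = min(process_time / timeout, 1.0) * MAX_PENALTY
    # Assumed weighting between the correctness and similarity terms.
    base = 0.8 * float(correctness) + 0.2 * similarity
    return max(base - time_penalty, 0.0)
```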

Setup for Miner

  1. Clone the repository
git clone https://github.com/LogicNet-Subnet/LogicNet logicnet
cd logicnet
  2. Install the requirements
python -m venv main
. main/bin/activate

bash install.sh

or install the requirements manually:

pip install -e .
pip uninstall uvloop -y
pip install git+https://github.com/lukew3/mathgenerator.git
  • For ease of use, you can run the scripts with PM2. To install PM2:
sudo apt update && sudo apt install jq && sudo apt install npm && sudo npm install pm2 -g && pm2 update
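
Once PM2 is installed, you can manage the processes started below with commands such as pm2 list, pm2 logs sn35-miner, and pm2 restart sn35-miner.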

There are two ways to run the Miner:

  1. Running Model via Together.AI API
  2. Running Model Locally using vLLM

METHOD 1: Running Model via Together.AI

With this method, you use together.ai's API to access various language models without hosting them locally.

Note: You need to register an account with together.ai, obtain an API key, and set the API key in a .env file.

  1. Register and Obtain API Key

    • Visit together.ai and sign up for an account.
    • Obtain your API key from the together.ai dashboard.
  2. Set Up the .env File

    Create a .env file in your project directory and add your together.ai API key: TOGETHER_API_KEY=your_together_ai_api_key

    You can do this in one command:

     echo "TOGETHER_API_KEY=your_together_ai_api_key" > .env
  3. Select a Model

    Together.ai provides access to various models. Please select a suitable chat/language model from the list below:

    | Model Name | Model ID | Pricing (per 1M tokens) |
    |---|---|---|
    | Qwen 1.5 Chat (72B) | qwen/Qwen-1.5-Chat-72B | $0.90 |
    | Qwen 2 Instruct (72B) | Qwen/Qwen2-Instruct-72B | $0.90 |
    | LLaMA-2 Chat (13B) | meta-llama/Llama-2-13b-chat-hf | $0.22 |
    | LLaMA-2 Chat (7B) | meta-llama/Llama-2-7b-chat-hf | $0.20 |
    | MythoMax-L2 (13B) | Gryphe/MythoMax-L2-13B | $0.30 |
    | Mistral (7B) Instruct v0.3 | mistralai/Mistral-7B-Instruct-v0.3 | $0.20 |
    | Mistral (7B) Instruct v0.2 | mistralai/Mistral-7B-Instruct-v0.2 | $0.20 |
    | Mistral (7B) Instruct | mistralai/Mistral-7B-Instruct | $0.20 |

    More models are available on the together.ai models page.

    Note: Do not choose image models; choose a chat or language model.

  4. Run the Miner with together.ai

    Activate your virtual environment:

    . main/bin/activate

    Source the .env file:

    source .env

    Start the miner using the following command, replacing placeholders with your actual values:

    pm2 start python --name "sn35-miner" -- neurons/miner/miner.py \
      --netuid 35 \
      --wallet.name "your-wallet-name" \
      --wallet.hotkey "your-hotkey-name" \
      --subtensor.network finney \
      --axon.port "your-open-port" \
      --miner.category Logic \
      --miner.epoch_volume 200 \
      --miner.llm_client.base_url https://api.together.xyz/v1 \
      --miner.llm_client.model "model_id_from_list" \
      --miner.llm_client.key $TOGETHER_API_KEY \
      --logging.debug

    Replace "model_id_from_list" with the Model ID you have chosen from the together.ai model list. For example, Qwen/Qwen2-Instruct-72B.

Notes:

  • Ensure your TOGETHER_API_KEY is correctly set in the .env file and sourced before running the command. You can inspect the file with cat .env and confirm the variable was sourced with echo $TOGETHER_API_KEY.
  • The --miner.llm_client.base_url should point to the together.ai API endpoint: https://api.together.xyz/v1
  • Make sure your --miner.llm_client.model matches the Model ID provided by together.ai.
  • For more details on the together.ai API, refer to their documentation.
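
Before starting the miner, you can sanity-check your key and chosen model against the endpoint. A quick sketch, assuming the openai Python package is installed and the .env file has been sourced (together.ai's API is OpenAI-compatible):

```python
import os
from openai import OpenAI

# together.ai exposes an OpenAI-compatible API at this base URL.
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

# Replace with the Model ID you selected from the table above.
resp = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
print(resp.choices[0].message.content)
```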

METHOD 2: Running Model locally using vLLM

Minimum Compute Requirements:

  • 1x GPU 24GB VRAM (RTX 4090, A100, A6000, L4, etc...)
  • Storage: 100GB
  • Python 3.10
  1. Create a virtual environment for vLLM
python -m venv vllm
. vllm/bin/activate
pip install vllm
  2. Set up the LLM configuration
  • Self host a vLLM server
. vllm/bin/activate
pm2 start "vllm serve Qwen/Qwen2-7B-Instruct --port 8000 --host 0.0.0.0" --name "sn35-vllm" # change port and host to your preference
  • If you want to run larger models on GPUs with less VRAM, there are several techniques you can use to optimize GPU memory utilization:
    • Adjust GPU memory utilization with the --gpu-memory-utilization flag, which lets the model use a specified fraction of the available GPU memory.
    pm2 start "vllm serve Qwen/Qwen2-7B-Instruct --gpu-memory-utilization 0.95 --port 8000 --host 0.0.0.0" --name "sn35-vllm"
    # This command lets the model use up to 95% of the available GPU memory.
    • Using half precision (FP16) instead of full precision (FP32) reduces the memory required to store model weights, which can significantly lower VRAM usage. In vLLM this is controlled with the --dtype flag.
    pm2 start "vllm serve Qwen/Qwen2-7B-Instruct --dtype float16 --gpu-memory-utilization 0.95 --port 8000 --host 0.0.0.0" --name "sn35-vllm"
    • If you have multiple GPUs, you can shard the model across them with tensor parallelism to distribute the memory load.
    pm2 start "vllm serve Qwen/Qwen2-7B-Instruct --tensor-parallel-size 2 --port 8000 --host 0.0.0.0" --name "sn35-vllm"
  3. Run the following command to start mining
. main/bin/activate
pm2 start python --name "sn35-miner" -- neurons/miner/miner.py \
--netuid 35 --wallet.name "wallet-name" --wallet.hotkey "wallet-hotkey" \
--subtensor.network finney \
--axon.port "your-open-port" \
--miner.category Logic \
--miner.epoch_volume 50 \
--miner.llm_client.base_url http://localhost:8000/v1 \
--miner.llm_client.model Qwen/Qwen2-7B-Instruct \
--logging.debug

Flag notes:

  • --miner.category: the category to join; currently, only Logic is supported.
  • --miner.epoch_volume: the number of requests you commit to solving per epoch; it affects the reward calculation.
  • --miner.llm_client.base_url: your vLLM server's base URL.
  • --miner.llm_client.model: the model name served by vLLM.
  • --logging.debug: optional; enables debug logging.

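Before pointing the miner at the server, you can verify that vLLM is up and serving the expected model. A quick check, assuming the openai Python package is installed (vLLM exposes an OpenAI-compatible API):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not require a real API key locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Should print the served model id, e.g. Qwen/Qwen2-7B-Instruct.
for model in client.models.list():
    print(model.id)
```
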
If you encounter any issues, check the miner logs or contact the LogicNet support team.

Happy Mining!