Reinforcement learning project for Pokemon Showdown. Currently only supports the Gen-4 random battle format.
This project has three parts:
- PsBot framework for creating a general Pokemon Showdown bot and setting up the battle interface.
- Battle state tracker and PS protocol parser.
- Neural network training script.
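The battle interface consumes Pokemon Showdown's text protocol, in which each message is a pipe-delimited line (documented in the simulator's PROTOCOL.md). As a rough illustration of the kind of input the state tracker's parser handles (a Python sketch, not the project's actual TypeScript implementation):

```python
def parse_ps_line(line: str) -> tuple[str, list[str]]:
    """Split one Showdown protocol line into its message type and arguments,
    e.g. '|move|p1a: Pikachu|Thunderbolt|p2a: Blissey'."""
    if not line.startswith("|"):
        # Lines without a leading pipe are plain chat/log text.
        return "", [line]
    parts = line.split("|")[1:]  # drop the empty string before the first '|'
    return parts[0], parts[1:]

msg_type, args = parse_ps_line("|move|p1a: Pikachu|Thunderbolt|p2a: Blissey")
# msg_type == "move", args == ["p1a: Pikachu", "Thunderbolt", "p2a: Blissey"]
```

The real parser also has to track side conditions, hazards, and revealed movesets across many such messages; this only shows the tokenization step.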
Demo video (`2022-08-14_demo.mp4`): me (left) vs a model (right) that was trained over ~16k games against itself.
Make sure you have at least Node v18 (LTS) and Miniconda 3 installed. The project should work on Linux, and likely also on Windows via WSL2.
```shell
# Download the repository.
git clone https://github.com/taylorhansen/pokemonshowdown-ai
cd pokemonshowdown-ai

# Check out submodules.
git submodule init
git submodule update

# Set up Python/TensorFlow.
conda env create --name psai --file environment.yml
conda activate psai

# Set up TypeScript.
npm install
npm run build
```
```shell
# Run formatters.
npm run format
isort src test
black src test

# Run linters.
npm run lint
pylint src test
mypy src test

# Run tests.
npm test
python -m test.unit
```
```shell
# Edit hyperparameters as needed.
cp config/train_example.yml config/train.yml
python -m src.py.train
```
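For orientation, a training config typically groups hyperparameters into sections like the following. These keys are purely hypothetical illustrations; consult `config/train_example.yml` for the actual schema.

```yaml
# Hypothetical example only -- the real keys live in config/train_example.yml.
rollout:
  num_games: 1000       # self-play games per iteration
learn:
  learning_rate: 1.0e-4
  batch_size: 256
```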
Trains the neural network through self-play. This requires a powerful computer and/or GPU, and may take several hours depending on how it's configured.
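Schematically, self-play training alternates between the current network playing games against itself and updating on the collected experience. The toy Python loop below illustrates the shape of that process on a trivial zero-sum game (rock-paper-scissors standing in for a battle); it is not the project's actual training code, which plays full Gen-4 random battles and runs gradient updates on the buffer:

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def self_play_episode(policy: list[float]) -> list[tuple[int, float]]:
    """Both players sample from the same policy (a stand-in for the network);
    returns (action, reward) pairs, one per player, zero-sum."""
    a, b = random.choices(MOVES, weights=policy, k=2)
    if a == b:
        return [(MOVES.index(a), 0.0), (MOVES.index(b), 0.0)]
    reward_a = 1.0 if BEATS[a] == b else -1.0
    return [(MOVES.index(a), reward_a), (MOVES.index(b), -reward_a)]

def collect(policy: list[float], num_games: int) -> list[tuple[int, float]]:
    """One iteration of self-play data collection; a real trainer would
    follow this with gradient updates on the buffer, then repeat."""
    buffer: list[tuple[int, float]] = []
    for _ in range(num_games):
        buffer.extend(self_play_episode(policy))
    return buffer

buffer = collect([1.0, 1.0, 1.0], num_games=50)
# 100 transitions; because the game is zero-sum, the rewards sum to 0
```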
Training logs are saved to `./experiments/` by default.
Metrics such as loss and evaluation scores can be viewed using TensorBoard.
```shell
pip install tensorboard
tensorboard --logdir experiments
```
```shell
# Edit config as needed.
cp config/psbot_example.yml config/psbot.yml
npm run psbot
```
Connects to the PS server specified in the config and starts accepting battle challenges in the gen4randombattle format, currently the only format this project supports. By default it loads the model from `./experiments/train/model` (assuming a training run was completed) and connects to a locally-hosted PS instance (see guide on how to set one up). This allows the trained model to take on human challengers or any other outside bots.
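In the Showdown protocol, incoming challenges arrive in a `|updatechallenges|` message carrying JSON. A simplified sketch of filtering those challenges down to the supported format (written in Python for illustration rather than the project's TypeScript, and using the message shape from the simulator's protocol docs):

```python
import json

def accept_commands(raw: str, supported: str = "gen4randombattle") -> list[str]:
    """Given a raw '|updatechallenges|{...}' protocol message, return
    '/accept <user>' commands for challenges in the supported format."""
    msg_type, payload = raw.split("|", 2)[1:]
    if msg_type != "updatechallenges":
        return []
    challenges = json.loads(payload).get("challengesFrom") or {}
    return [f"/accept {user}" for user, fmt in challenges.items()
            if fmt == supported]

raw = ('|updatechallenges|{"challengesFrom":'
       '{"alice":"gen4randombattle","bob":"gen8ou"},"challengeTo":null}')
# accept_commands(raw) == ["/accept alice"]
```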
See LICENSE.