Screen.Recording.2023-04-14.at.21.55.15.mov
I decided to use NEAT (NeuroEvolution of Augmenting Topologies) to teach a feed-forward neural network to play a platform game. It combines genetic algorithms with the power of neural networks, which I've found fascinating. The idea behind NEAT is to encode neural networks as chromosomes and improve them through evolution, using a fitness function as the metric. It's a powerful tool for various reinforcement learning tasks.
It was initially introduced in this paper by Kenneth O. Stanley and Risto Miikkulainen from The University of Texas. The basic steps of the algorithm are:
- Start with a population of neural networks with random weights
- Calculate fitness for every network
- Cross over the best networks with each other
- Mutate networks with some probability:
  - Mutate by adding a new node
  - Mutate by deleting a node
  - Mutate by adding a connection
  - Mutate by deleting a connection
- Run this over many generations to get the best result
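The steps above can be sketched in pure Python. This is a minimal illustration with fixed-topology genomes (plain weight vectors); real NEAT also mutates the topology itself by adding and removing nodes and connections, which is omitted here for brevity. All names are hypothetical.

```python
import random

def evolve(pop_size, n_weights, generations, fitness_fn, rng=random.Random(0)):
    """Simplified NEAT-style loop: select, cross over, mutate, repeat."""
    # Start with a population of networks with random weights.
    population = [[rng.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Calculate fitness for every network and keep the best half.
        ranked = sorted(population, key=fitness_fn, reverse=True)
        parents = ranked[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Crossover: take each weight from one parent at random.
            child = [rng.choice(pair) for pair in zip(a, b)]
            # Mutation: perturb each weight with some probability.
            child = [w + rng.gauss(0, 0.1) if rng.random() < 0.2 else w
                     for w in child]
            children.append(child)
        population = children
    return max(population, key=fitness_fn)
```

For example, maximizing `lambda g: -sum(w * w for w in g)` drives the weights toward zero over the generations.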
In NEAT, each chromosome contains two sets of genes:
- Node genes - storing information about the occurrence of neurons
- Connection genes - storing information about connections between neurons and their weights
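A minimal sketch of these two gene types might look like this. The field names are hypothetical; real NEAT additionally assigns each connection a global innovation number so genomes can be aligned during crossover.

```python
from dataclasses import dataclass

@dataclass
class NodeGene:
    node_id: int
    node_type: str        # "input", "hidden", or "output"

@dataclass
class ConnectionGene:
    in_node: int          # id of the source neuron
    out_node: int         # id of the target neuron
    weight: float
    enabled: bool         # disabled genes are kept but not expressed
    innovation: int       # historical marker used to align genomes
```

A chromosome is then just a pair of lists: one of `NodeGene`s and one of `ConnectionGene`s.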
The game rules are simple:
- The player (a cat) has to avoid obstacles (snails or flies)
- Every avoided obstacle is worth 1 point
- Player can only jump
- Obstacles are chosen randomly
- Speed (velocity) of obstacles is also random for every obstacle
My proposed fitness function evaluates a player (neural network/chromosome) by the following criteria:
- For every frame the player is alive +0.1
- For every obstacle the player avoided +5
- Every time the player hits an obstacle -1, and we terminate that solution
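As a sketch, the scoring rules above translate into a one-line formula (the function name is hypothetical):

```python
def evaluate(frames_alive, obstacles_avoided, hit_obstacle):
    """Fitness: +0.1 per frame alive, +5 per avoided obstacle,
    -1 if the player hit an obstacle (after which the run ends)."""
    fitness = 0.1 * frames_alive + 5 * obstacles_avoided
    if hit_obstacle:
        fitness -= 1
    return fitness
```

So a player that survived 100 frames and cleared 3 obstacles without crashing would score 10 + 15 = 25.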
Features I chose:
- Next obstacle Y-axis value
- Distance between the player and the very next obstacle
- Velocity at which the next obstacle is approaching
- Type of the next obstacle (0 - snail, 1 - fly)
I also decided to use a min-max scaler on the numeric features. It noticeably improved the results.
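Min-max scaling maps each numeric feature into [0, 1] using its known range. A sketch of building the four-element input vector; the ranges below are illustrative assumptions, not the project's actual values (in practice they come from the screen size and speed limits):

```python
def min_max_scale(x, lo, hi):
    """Scale a raw value into [0, 1] given its known min/max range."""
    return (x - lo) / (hi - lo)

def build_inputs(obstacle_y, distance, velocity, obstacle_type):
    # Illustrative ranges: y in [0, 400], distance in [0, 800],
    # velocity in [5, 15]; obstacle type is already 0 or 1.
    return [
        min_max_scale(obstacle_y, 0, 400),
        min_max_scale(distance, 0, 800),
        min_max_scale(velocity, 5, 15),
        obstacle_type,
    ]
```

Keeping all inputs on the same scale matters because a tanh-activated network saturates quickly on large raw pixel distances.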
Important initial parameters:
- Input nodes: 4
- Hidden nodes: 0
- Output nodes: 1
- Activation function: tanh
- Initial connection: full
- Max generations: 35
- Max score: 100 (above that the solution is perfect, no need to train further)
- Initial population: 30
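Assuming the project uses the neat-python library (an assumption on my part), these parameters would map onto its config file roughly like this (other required sections omitted):

```ini
[NEAT]
pop_size              = 30
fitness_criterion     = max
fitness_threshold     = 100

[DefaultGenome]
num_inputs            = 4
num_hidden            = 0
num_outputs           = 1
activation_default    = tanh
initial_connection    = full
```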
Here you can see an example video of the training process at 100 fps: training video. It took 6 generations to master the game this time.
Below is the neural network our algorithm found works best for this problem. I saved it as a pickle file in models/best_model.pickle.
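As a sketch of that round trip (the helper names are hypothetical): the winning genome object can be dumped to and restored from models/best_model.pickle with the standard `pickle` module.

```python
import pickle

def save_model(model, path):
    """Serialize the best genome to disk."""
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path):
    """Restore a previously saved genome."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

After loading, the genome still has to be turned back into a runnable network (with neat-python that would be `neat.nn.FeedForwardNetwork.create(genome, config)`).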
Here you can check out our trained network playing the game on its own.
Screen.Recording.2023-04-14.at.21.55.15.mov
```shell
git clone https://github.com/Ilnicki010/neat-ai-python-game.git
cd neat-ai-python-game
pip install -r requirements.txt
python main.py
```
My list of ideas for improving the neuroevolution:
- Make the game more complex (maybe something like Mario or Donkey Kong)
- Try a multiplayer game (many AIs fighting each other)
- Neuroevolution = many neural networks = many simulations = lots of computing power. Read more about running these in parallel, optimizing, running on Azure, etc.
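Since every genome's game simulation is independent of the others, the evaluation step parallelizes naturally. A sketch using the standard library's `multiprocessing.Pool` (the fitness function here is a placeholder standing in for one game run per genome):

```python
from multiprocessing import Pool

def fitness(genome):
    # Placeholder: a real version would run one full game simulation
    # per genome, which is what makes parallelism worthwhile.
    return -sum(w * w for w in genome)

def evaluate_population(population, workers=4):
    """Score all genomes concurrently; since simulations are
    independent, this scales roughly with the number of cores."""
    with Pool(workers) as pool:
        return pool.map(fitness, population)
```

The same idea extends to distributed setups (e.g. on Azure): shard the population across machines, evaluate, and gather the fitness scores before the crossover step.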