Predator-Prey-Grass multi-agent reinforcement learning (MARL)

A Predator-Prey-Grass gridworld deploying multi-agent environments with dynamic spawning and deletion of partially observant agents, built with Farama's PettingZoo.



The environments

predpreygrass_base.py: A single-objective multi-agent reinforcement learning (MARL) environment, trained centrally and evaluated decentrally using Proximal Policy Optimization (PPO). The learning agents, Predators (red) and Prey (blue), both expend energy moving around and replenish it by eating. Prey eat Grass (green), and Predators eat Prey when they end up on the same grid cell. In the base case, the eating agent obtains all the energy of the eaten Prey or Grass. Predators die of starvation when their energy reaches zero; Prey die either of starvation or when eaten by a Predator. Agents reproduce asexually when their energy level, replenished by eating, rises above a certain threshold. Learning agents learn to execute movement actions, based on their partial observations of the environment (the transparent red and blue squares depicted above), to maximize cumulative reward. In the base case, the single-objective rewards (for stepping, eating, dying, and reproducing) are aggregated and can be adjusted in the environment configuration file.
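
The environment follows PettingZoo's AEC (agent-environment cycle) API, so a rollout iterates over agents one at a time. Below is a minimal random-rollout sketch; the module path and env() factory are assumptions for illustration, not verified against the repository layout.

    # Minimal AEC rollout sketch for the Predator-Prey-Grass environment.
    # The import path and env() factory below are hypothetical; adjust them
    # to the actual environment module in this repository.
    from predpreygrass.single_objective.envs import predpreygrass_base

    env = predpreygrass_base.env(render_mode="human")  # hypothetical factory
    env.reset(seed=42)

    for agent in env.agent_iter():
        observation, reward, termination, truncation, info = env.last()
        if termination or truncation:
            action = None  # agents removed from the grid must step with None
        else:
            action = env.action_space(agent).sample()  # random movement action
        env.step(action)
    env.close()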

Emergent Behaviors

Training the single-objective environment predpreygrass_base.py with the PPO algorithm is an example of how elaborate behaviors can emerge from simple rules in agent-based models. In the MARL example displayed above, learning agents are rewarded solely for reproduction; all other reward options are set to zero in the environment configuration (see the configuration sketch after the list below). Despite this relatively sparse reward structure, maximizing reward results in elaborate emergent behaviors such as:

  • Predators hunting Prey
  • Prey finding and eating Grass
  • Predators hovering around Grass to catch Prey
  • Prey trying to escape Predators
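
In such a reproduction-only setup, the configuration would look roughly like the sketch below. The key names are illustrative assumptions; the actual names are defined in predpreygrass/single_objective/config/config_predpreygrass.py.

    # Hypothetical reward configuration for a reproduction-only setup.
    # Key names are assumptions; see config_predpreygrass.py for the real ones.
    reward_config = {
        "step_reward_predator": 0.0,           # no reward for merely moving
        "step_reward_prey": 0.0,
        "catch_reward_predator": 0.0,          # no direct reward for eating
        "eat_reward_prey": 0.0,
        "death_penalty_predator": 0.0,         # no penalty for dying
        "death_penalty_prey": 0.0,
        "reproduction_reward_predator": 10.0,  # the only nonzero signal
        "reproduction_reward_prey": 10.0,
    }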

Moreover, these learned behaviors lead to more complex emergent dynamics at the ecosystem level. Over time, the trained agents display a classic Lotka–Volterra predator-prey population pattern.
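
One way to make this pattern visible is to log population sizes during a rollout and plot them afterwards. The sketch below assumes PettingZoo-style agent IDs containing "predator" or "prey" (e.g. "predator_3") and samples random actions; substitute trained policies to reproduce the Lotka–Volterra dynamics.

    # Sketch: record predator/prey population sizes during an AEC rollout.
    # Assumes agent IDs contain "predator" or "prey"; this is an assumption.
    import matplotlib.pyplot as plt
    from predpreygrass.single_objective.envs import predpreygrass_base  # hypothetical

    env = predpreygrass_base.env()  # hypothetical factory
    env.reset(seed=0)

    predator_counts, prey_counts = [], []
    for agent in env.agent_iter():
        observation, reward, termination, truncation, info = env.last()
        action = None if termination or truncation else env.action_space(agent).sample()
        env.step(action)
        # env.agents lists the agents currently alive on the grid
        predator_counts.append(sum("predator" in a for a in env.agents))
        prey_counts.append(sum("prey" in a for a in env.agents))
    env.close()

    plt.plot(predator_counts, label="predators")
    plt.plot(prey_counts, label="prey")
    plt.xlabel("agent step")
    plt.ylabel("population size")
    plt.legend()
    plt.show()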

More emergent behavior and findings are described on our website.

Installation

Editor used: Visual Studio Code 1.93.1 on Linux Mint 21.3 Cinnamon

  1. Clone the repository:
    git clone https://github.com/doesburg11/PredPreyGrass.git
  2. Open Visual Studio Code and execute:
    • Press Ctrl+Shift+P
    • Type and choose: "Python: Create Environment..."
    • Choose environment: Conda
    • Choose interpreter: Python 3.11.10 or higher
    • Open a new terminal
    • Run: pip install -e .
  3. Install the following requirements:
    • pip install supersuit==3.9.3
    • pip install tensorboard==2.18.0
    • pip install stable-baselines3[extra]==2.4.0
    • conda install -y -c conda-forge gcc=12.1.0
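
To verify the installation, import the pinned dependencies and check their versions:

    # Sanity check: the pinned dependencies should import cleanly.
    import pettingzoo
    import supersuit
    import stable_baselines3

    print("PettingZoo:", pettingzoo.__version__)
    print("SuperSuit:", supersuit.__version__)                  # expected 3.9.3
    print("Stable-Baselines3:", stable_baselines3.__version__)  # expected 2.4.0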

Getting started

Visualize a random policy

In Visual Studio Code run: predpreygrass/single_objective/eval/evaluate_random_policy.py
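
Alternatively, run the script from a terminal at the repository root (assuming the conda environment created above is active):

    python predpreygrass/single_objective/eval/evaluate_random_policy.py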

Train and visualize a model using PPO from Stable-Baselines3

Adjust parameters accordingly in:

predpreygrass/single_objective/config/config_predpreygrass.py

In Visual Studio Code run:

predpreygrass/single_objective/train/train_ppo_parallel_wrapped_aec_env.py
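
The script's name suggests the AEC environment is converted to a parallel environment, vectorized with SuperSuit, and then trained with Stable-Baselines3 PPO. A rough sketch of such a pipeline, assuming the hypothetical env factory from the sketches above, is:

    # Sketch: wrap an AEC env to the parallel API, vectorize, and train PPO.
    # The env import and factory are hypothetical; see the actual training script.
    import supersuit as ss
    from pettingzoo.utils.conversions import aec_to_parallel
    from stable_baselines3 import PPO
    from predpreygrass.single_objective.envs import predpreygrass_base  # hypothetical

    parallel_env = aec_to_parallel(predpreygrass_base.env())  # AEC -> parallel
    parallel_env = ss.black_death_v3(parallel_env)  # tolerate dying/spawning agents
    vec_env = ss.pettingzoo_env_to_vec_env_v1(parallel_env)
    vec_env = ss.concat_vec_envs_v1(vec_env, 8, num_cpus=8,
                                    base_class="stable_baselines3")

    model = PPO("MlpPolicy", vec_env, verbose=1, tensorboard_log="./logs")
    model.learn(total_timesteps=1_000_000)
    model.save("ppo_predpreygrass")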

To evaluate and visualize after training, follow the instructions in:

predpreygrass/single_objective/eval/evaluate_ppo_from_file_aec_env.py
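
Roughly, evaluation loads the saved model and steps the AEC environment with it, as in this sketch (env import again assumed):

    # Sketch: evaluate a saved PPO model in the AEC environment.
    from stable_baselines3 import PPO
    from predpreygrass.single_objective.envs import predpreygrass_base  # hypothetical

    model = PPO.load("ppo_predpreygrass")
    env = predpreygrass_base.env(render_mode="human")  # hypothetical factory
    env.reset(seed=1)

    for agent in env.agent_iter():
        observation, reward, termination, truncation, info = env.last()
        if termination or truncation:
            action = None
        else:
            action, _ = model.predict(observation, deterministic=True)
        env.step(action)
    env.close()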

Batch training and evaluation in one go:

predpreygrass/single_objective/eval/parameter_variation_train_wrapped_to_parallel_and_evaluate_aec.py

References