This repo contains implementations of various algorithms for Multi-Agent Pickup and Delivery (a generalization of Multi-Agent Path Finding) and a simulation environment used to test them.
The image shows an overview of the simulation pipeline.
Green rectangles represent the input (a YAML file containing information about the environment, agents, tasks, and delays) and the output (a JSON file containing the actual paths walked by the agents) of the simulation. Blue rectangles constitute the core components of the simulation. The orange rectangular outline symbolizes the main loop, which advances the simulation by one time step until all tasks are completed. At every time step, the simulation manager calls the high-level algorithm (k-TP or p-TP) to plan paths for the agents to complete the available tasks; after the high-level algorithm returns the paths, each agent either advances one step along its path or remains in its current location, depending on whether it is delayed at that time step. Agents that would cause collisions by moving are also stopped. Information about these delayed agents is passed to the high-level algorithm at the next time step.

Finally, the purple rectangular outline represents the loop performed at each time step over all the agents that can be assigned to a task or that need a replan. When one of these agents is selected, it is assigned to the closest task according to Manhattan distance (in case of a replan, the task assignment has already been done); then the low-level algorithm (the code includes two algorithms, CBS and state-space A*, but in practice only state-space A* is used) is called to compute a path from the agent's start location to the pickup vertex of the task (in case of a replan, if the agent has already reached the pickup vertex this first path is empty), and then a path from the pickup vertex to the goal vertex of the task.
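As a rough illustration of the assignment step described above, closest-task selection by Manhattan distance might look like the following sketch (the function and field names are ours, not the repo's):

```python
# Hypothetical sketch of the task-assignment step; the repo's actual
# class and function names will differ.

def manhattan(a, b):
    """Manhattan distance between two grid cells given as (row, col) pairs."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def assign_closest_task(agent_pos, tasks):
    """Pick the available task whose pickup vertex is closest to the agent."""
    return min(tasks, key=lambda t: manhattan(agent_pos, t["pickup"]))
```

After this assignment, the low-level planner would be asked for a path to the chosen task's pickup vertex and then on to its goal vertex.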
The code has been tested with Python version 3.6.9.
All the packages needed to run the code are listed in the file requirements.txt. To install them, run the following command:
pip install -r requirements.txt
Before running the simulation, an environment can be chosen. The Environments folder contains several predefined environments. There are two main types, distinguished by the presence or absence of the substring _random in the name. The "random" environments only specify the number of tasks and delays per agent, while the others come with fixed tasks and delays (to use these environments, a special simulation parameter must be set). To change the simulation environment, open the file config.json and set the parameter "input_name" to the file name of the desired environment.
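For example, the relevant entry in config.json might look like the following fragment (the keys come from this README; the values shown are purely illustrative):

```json
{
    "input_name": "some_environment_random",
    "experiments_name": "some_experiments_file"
}
```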
Then, to start the simulation, run the script demo.py. The script accepts various command-line arguments:
- -k: an integer (k >= 0) which represents the robustness parameter for k-TP;
- -p: a float (0 <= p <= 1) which represents the robustness parameter (probability threshold) for p-TP;
- -pd: a float (0 <= pd <= 1, default .02) which represents the expected probability of an agent being delayed at any time step (used in p-TP);
- -p_iter: an integer (p_iter >= 1, default 1) which represents the number of times a new path can be recomputed if the previously computed one exceeds the probability threshold (used in p-TP);
- -a_star_max_iter: an integer (a_star_max_iter >= 1, default 5000) which represents the maximum number of states explored by the low-level algorithm state-space A*;
- -slow_factor: an integer (slow_factor >= 1, default 1) which allows slowing down the visualization;
- -not_rand: this parameter needs to be present if the input environment is not randomized.
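A sketch of how these arguments could be parsed with argparse (the defaults follow the list above; the actual demo.py may be organized differently):

```python
import argparse

# Hypothetical argument parser mirroring the command-line flags listed above.
parser = argparse.ArgumentParser(description="Run a MAPD simulation (sketch).")
parser.add_argument("-k", type=int, default=0,
                    help="robustness parameter for k-TP (k >= 0)")
parser.add_argument("-p", type=float, default=None,
                    help="probability threshold for p-TP (0 <= p <= 1)")
parser.add_argument("-pd", type=float, default=0.02,
                    help="expected delay probability per time step (p-TP)")
parser.add_argument("-p_iter", type=int, default=1,
                    help="max recomputations when a path exceeds the threshold")
parser.add_argument("-a_star_max_iter", type=int, default=5000,
                    help="max states explored by state-space A*")
parser.add_argument("-slow_factor", type=int, default=1,
                    help="slow down the visualization")
parser.add_argument("-not_rand", action="store_true",
                    help="set if the input environment is not randomized")

# Example: parse the flags used in one of the runs below.
args = parser.parse_args(["-k", "2", "-slow_factor", "3"])
```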
The visualization requires Tkinter; on Debian/Ubuntu systems it can be installed with:

sudo apt-get install python3-tk
In the following, we present some example runs.
Run TP with recovery routines:
python3 demo.py
Run k-TP with k = 2 and slower visualization:
python3 demo.py -k 2 -slow_factor 3
Run p-TP with p = 0.6, pd = 0.05 in a non-randomized environment:
python3 demo.py -p 0.6 -pd 0.05 -not_rand
To run multiple experiments and collect all the statistics, a specific script, run_all_experiments_new.py, can be used. This script contains a list of experiments (easy to modify and extend) that are run using multi-threading; after all the experiments terminate, a JSON file with the results is saved in the Experiments folder. The script can be run with the following command:
python3 -m Utils.run_all_experiments_new
To see the results plotted as box plots, the script plot_experiments.py can be used. First, modify the file config.json, setting the parameter "experiments_name" to the name of the experiments file that has just been created. Then run the visualization tool with the following command:
python3 -m Utils.plot_experiments
When a plot is closed, the next one appears, until all the experiments have been shown.
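As an illustration of the summary behind each box plot, the five numbers a box draws can be computed from one experiment's measurements like this (the structure of the results file is assumed, not taken from the repo):

```python
import statistics

def box_stats(values):
    """Return (min, Q1, median, Q3, max): the five numbers a box plot draws."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    return (min(values), q1, median, q3, max(values))
```

A plotting script would compute something like this (or let matplotlib's boxplot do it) for each experiment's list of recorded values.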