
duckietown_il

Imitation Learning - Behavioral Cloning

This folder contains all the scripts required for the Imitation Learning approach.

Description

The only sensor available on the Duckiebot is a camera. The goal is to build a model that allows the Duckiebot to drive on the streets of Duckietown using only this camera. To do so, we collected data using the Expert and then trained a NN that predicts the actions the Duckiebot has to take to drive safely. The NN takes an observation captured by the camera as input and outputs the action to take.

Below you can see the trained Duckiebot running in two different maps:

Trained Duckiebot

Getting Started

Prerequisite

Remember to install all the dependencies first.

Collect data

Let's collect data using the collect_data.py script.

What this script does:

  • Run an expert on a variety of maps (see maps).
  • Record the actions it takes and save the <observation, action> pairs in a .log file.

An important aspect is the number and the variety of samples:

  • To increase/decrease the number of samples you can increase/decrease the value of STEPS and/or EPISODES in the collect_data.py file.
  • It is possible to run the expert on a single map or on a variety of gym-duckietown maps. To do so, use the randomize_maps_on_reset parameter of the Simulator class (see env.py).

In general, you can use the parameters of the Simulator class to change the environment settings.
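The collection loop can be sketched as follows. This is a minimal, self-contained illustration of the <observation, action> logging described above: STEPS and EPISODES mirror the knobs in collect_data.py, while DummyEnv and expert_action are stand-ins for the real Simulator and Expert (the actual APIs in env.py and expert.py may differ).

```python
import pickle
import random

STEPS = 8      # samples per episode (increase for more data)
EPISODES = 2   # number of episodes to roll out

class DummyEnv:
    """Stand-in for the gym-duckietown Simulator (illustrative only)."""
    def reset(self):
        return [0.0, 0.0]                  # fake camera observation
    def step(self, action):
        return [random.random(), 0.0]      # fake next observation

def expert_action(obs):
    """Stand-in for the Expert's policy."""
    return (0.5, 0.0)                      # (velocity, steering)

env = DummyEnv()
samples = []
for _ in range(EPISODES):
    obs = env.reset()
    for _ in range(STEPS):
        action = expert_action(obs)
        samples.append((obs, action))      # <observation, action> pair
        obs = env.step(action)

with open("train.log", "wb") as f:         # persist pairs to the .log file
    pickle.dump(samples, f)
```

Doubling STEPS or EPISODES doubles the number of recorded pairs, which is the lever the bullet points above refer to.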

Run the expert and collect data:

python collect_data.py

Train the model

Now that you've collected data using the Expert, you'll train an agent that drives safely on the streets of Duckietown. The model you'll train is a Neural Network (NN) that takes an observation as input and learns to output the right action the Duckiebot has to perform to drive safely.

The architecture of the NN you'll train is taken from the paper

In the model.py file you can find three different models:
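Whichever model you pick, the I/O contract is the same: an image in, an action pair out. The toy stand-in below shows only that contract with a single linear layer; the actual models in model.py are convolutional, and the 60x80 observation shape used here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense layer mapping a flattened 60x80 grayscale observation
# to a 2-vector action (velocity, steering). Illustrative only.
W = rng.normal(scale=0.01, size=(60 * 80, 2))
b = np.zeros(2)

def predict(observation):
    """Flatten the image and apply the linear layer."""
    return observation.reshape(-1) @ W + b   # -> [velocity, steering]

obs = rng.random((60, 80))   # fake camera frame
action = predict(obs)
```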

Train the model:

python train_actions.py

Evaluate the model

Once you've trained the model, you can evaluate its performance using the eval_actions.py script. With this script you will:

  • see the Duckiebot drive using the trained model.
  • check the reward obtained by the agent while driving.
  • plot images to compare Predictions vs. Ground Truth. In this case you need to uncomment these lines:
# # Plot Predictions VS. Ground Truth
# for prediction, gt, img in zip(predictions_25Ep, gts, observations):
#     fig, ax = plt.subplots(1, 1, constrained_layout=True)
#     ax.imshow(img)
#     ax.set_title(f"Pred: [{prediction[0]:.3f}, {prediction[1]:.3f}]\n"
#                  f"GT: [{gt[0]:.3f}, {gt[1]:.3f}]")
#
# plt.show()
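Alongside the visual comparison, a single number can summarize the gap between predictions and ground truth. The sketch below computes the per-action mean absolute error; the variable names mirror the snippet above, but the values are made up for illustration.

```python
import numpy as np

# Hypothetical predictions and ground-truth actions: each row is
# [velocity, steering]. Real values come from eval_actions.py.
predictions_25Ep = np.array([[0.52, 0.01], [0.48, -0.10]])
gts              = np.array([[0.50, 0.00], [0.50, -0.12]])

# Mean absolute error per action component
mae = np.abs(predictions_25Ep - gts).mean(axis=0)
print(f"MAE velocity: {mae[0]:.3f}, steering: {mae[1]:.3f}")
```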

Evaluate the model:

python eval_actions.py

Folder Details

gym_duckietown/
maps/
utils/
env.py
expert.py
collect_data.py

Author