This project uses the Deep Deterministic Policy Gradient (DDPG) algorithm to solve tracking control problems. The primary objective is to minimize the tracking error between the system output and a reference signal as efficiently as possible.
The agent is trained with a noise-free exploration approach, so that once training is complete it can effectively track a wide range of reference values.
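The tracking setup can be made concrete with a minimal sketch. The environment, dynamics, and gains below are illustrative assumptions, not the project's actual code: a first-order plant must follow a constant reference, the reward is the negative squared tracking error, and a simple proportional policy stands in for the trained DDPG actor.

```python
import numpy as np

class TrackingEnv:
    """Hypothetical 1-D tracking environment (illustrative, not the
    project's plant): an integrator must follow a constant reference.
    State = (plant output, reference value)."""

    def __init__(self, reference, dt=0.1):
        self.ref = reference
        self.dt = dt
        self.x = 0.0

    def reset(self):
        self.x = 0.0
        return np.array([self.x, self.ref])

    def step(self, action):
        # Integrator dynamics: the control action drives the output.
        self.x += float(action) * self.dt
        error = self.x - self.ref
        reward = -error ** 2  # penalize squared tracking error
        return np.array([self.x, self.ref]), reward

# Proportional controller as a stand-in for the trained DDPG actor:
# act on the error between reference and current output.
def policy(state, gain=2.0):
    x, ref = state
    return gain * (ref - x)

env = TrackingEnv(reference=1.0)
state = env.reset()
for _ in range(200):
    state, reward = env.step(policy(state))
```

With this gain and time step the closed loop contracts the error by a factor of 0.8 per step, so the output converges to the reference; a DDPG agent would learn a comparable feedback law from the reward signal alone, without access to the plant model.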
The results of this project underscore the effectiveness of DDPG for tracking control and highlight its potential across a variety of applications.