POLICEd-RL: Learning Closed-Loop Robot Control Policies with Provable Satisfaction of Hard Constraints
Repository containing the code for POLICEd RL, presented at RSS 2024. The objective of POLICEd RL is to guarantee the satisfaction of an affine hard constraint when learning a policy in closed loop with a black-box deterministic environment. The algorithm enforces a repulsive buffer in front of the constraint, preventing trajectories from approaching and violating it. To analytically verify constraint satisfaction, the policy is made affine inside that repulsive buffer using the POLICE algorithm, as sketched below.
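The repository's exact network code may differ; as a rough illustration, here is a minimal PyTorch sketch of the POLICE bias-shifting trick on a small ReLU MLP. The class name `POLICEdPolicy`, the layer sizes, and the example buffer vertices are all hypothetical.

```python
import torch
import torch.nn as nn

class POLICEdPolicy(nn.Module):
    """ReLU MLP made exactly affine on a user-specified buffer region
    (the convex hull of `buffer_vertices`), in the spirit of POLICE:
    each layer's biases are shifted so that every buffer vertex shares
    the same ReLU activation pattern. Illustrative sketch only."""

    def __init__(self, state_dim, hidden_dim, action_dim, buffer_vertices):
        super().__init__()
        self.hidden = nn.ModuleList([
            nn.Linear(state_dim, hidden_dim),
            nn.Linear(hidden_dim, hidden_dim),
        ])
        self.head = nn.Linear(hidden_dim, action_dim)
        # Vertices of the repulsive buffer, shape (num_vertices, state_dim).
        self.register_buffer("vertices", buffer_vertices)

    def forward(self, x):
        v = self.vertices
        for layer in self.hidden:
            x, v = layer(x), layer(v)
            # Majority ReLU sign of each unit over the buffer vertices.
            s = 2.0 * ((v > 0).float().mean(dim=0) >= 0.5).float() - 1.0
            # Smallest bias shift making all vertices share that pattern,
            # so ReLU acts linearly (identity or zero) on the whole buffer.
            shift = s * torch.relu((-s * v).max(dim=0).values)
            x, v = torch.relu(x + shift), torch.relu(v + shift)
        return self.head(x)

# Hypothetical buffer below a constraint line y = 1.
vertices = torch.tensor([[0.0, 0.9], [1.0, 0.9], [0.0, 1.0], [1.0, 1.0]])
policy = POLICEdPolicy(state_dim=2, hidden_dim=64, action_dim=2,
                       buffer_vertices=vertices)
action = policy(torch.tensor([[0.5, 0.95]]))  # state inside the buffer
```

Because every buffer vertex ends up with the same activation pattern at every layer, the network computes a single affine map on the buffer's convex hull, which is what makes constraint satisfaction analytically checkable.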
POLICEd RL guarantees that this KUKA robotic arm will never cross the red surface when reaching for the green target, thanks to the cyan repulsive buffer.
We provide the code for our implementation of POLICEd RL on several systems:
- an illustrative 2D system
- the CartPole
- the Gymnasium Inverted Pendulum
- a KUKA robotic arm
We illustrate POLICEd RL on a 2D system tasked with reaching a target location (cyan) without crossing a constraint line (red). In the repulsive buffer (green), the policy is affine and learns to point away from the constraint.
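Since the policy is affine inside the buffer, it suffices to verify the repulsion condition at the buffer's vertices. A rough NumPy sketch of such a vertex check, assuming a hypothetical `policy` and one-step `step` function and an affine constraint `n @ x <= c`:

```python
import numpy as np

# Hypothetical affine constraint n @ x <= c (here the line y = 1).
n = np.array([0.0, 1.0])
c = 1.0
buffer_vertices = np.array([[0.0, 0.9], [1.0, 0.9],
                            [0.0, 1.0], [1.0, 1.0]])

def violating_vertices(policy, step, vertices, n, margin=0.0):
    """Return the buffer vertices at which the closed-loop system fails
    to move away from the constraint. If the policy is affine on the
    buffer and the dynamics are well-approximated as affine there,
    an empty result extends to the entire buffer."""
    bad = []
    for x in vertices:
        x_next = step(x, policy(x))       # one environment step
        if n @ (x_next - x) > -margin:    # must decrease n @ x
            bad.append(x)
    return np.array(bad)
```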
The following repositories have been instrumental, from both an algorithmic and a software architecture perspective, in the development of this project: