# distributional-rl

Here are 27 public repositories matching this topic...

PyTorch implementation of the state-of-the-art distributional reinforcement learning algorithm Fully Parameterized Quantile Function (FQF), with extensions: n-step bootstrapping, PER, noisy layers, dueling networks, and parallelization.

  • Updated Oct 10, 2020
  • Jupyter Notebook

Quantile Regression DQN (QR-DQN) implementation for bridge fleet maintenance optimization, modeled as a Markov decision process. Migrated from C51 distributional RL (v0.8), using 200 quantiles and the quantile Huber loss. Features: dueling architecture, Noisy Networks, PER, and n-step learning. All 6 maintenance actions show positive returns, with a 68-78% VaR improvement.

  • Updated Dec 12, 2025
  • Python
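The quantile Huber loss mentioned in the entry above is the training objective of QR-DQN (Dabney et al., 2018): an asymmetrically weighted Huber loss over pairwise TD errors. A minimal NumPy sketch, with illustrative names and shapes not taken from the listed repository:

```python
import numpy as np

def huber(u, kappa=1.0):
    # Standard Huber loss: quadratic for |u| <= kappa, linear beyond.
    return np.where(np.abs(u) <= kappa,
                    0.5 * u ** 2,
                    kappa * (np.abs(u) - 0.5 * kappa))

def quantile_huber_loss(theta, targets, kappa=1.0):
    """Quantile regression loss for one state-action pair.

    theta:   (N,) predicted quantile values.
    targets: (M,) samples of the target return distribution.
    """
    n = theta.shape[0]
    # Quantile midpoints tau_hat_i = (2i + 1) / (2N).
    tau = (np.arange(n) + 0.5) / n
    # Pairwise TD errors u[i, j] = target_j - theta_i.
    u = targets[None, :] - theta[:, None]
    # Asymmetric weighting |tau - 1{u < 0}| penalizes over- and
    # under-estimation of each quantile differently.
    weight = np.abs(tau[:, None] - (u < 0).astype(float))
    return np.mean(weight * huber(u, kappa) / kappa)
```

With a perfect fit the loss is zero; otherwise it grows with the mismatch between predicted quantiles and target samples, e.g. `quantile_huber_loss(np.zeros(3), np.ones(2))` gives `0.25`.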

C51 Distributional DQN (v0.8) for bridge fleet maintenance optimization. Implements categorical return distributions (Bellemare et al., PMLR 2017) with 300x speedup via vectorized projection. Combines Noisy Networks, Dueling DQN, Double DQN, PER, and n-step learning. Validated on 200-bridge fleet: +3,173 reward in 83 min (25k episodes).

  • Updated Dec 8, 2025
  • Python
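The "vectorized projection" this entry credits for its speedup refers to the categorical (C51) Bellman projection of Bellemare et al. (2017): the target distribution r + γz is redistributed onto a fixed support of atoms. A minimal NumPy sketch under assumed shapes and default support bounds, not code from the listed repository:

```python
import numpy as np

def project_distribution(next_probs, rewards, dones, gamma,
                         v_min=-10.0, v_max=10.0, n_atoms=51):
    """Project r + gamma * z onto the fixed support (C51 target).

    next_probs: (B, n_atoms) next-state return distribution.
    rewards:    (B,) immediate rewards; dones: (B,) terminal flags.
    Returns the (B, n_atoms) projected target distribution.
    """
    delta_z = (v_max - v_min) / (n_atoms - 1)
    z = np.linspace(v_min, v_max, n_atoms)          # support atoms
    # Shifted-and-shrunk support Tz = r + gamma*z, clipped to the support.
    tz = np.clip(rewards[:, None]
                 + gamma * (1.0 - dones[:, None]) * z[None, :],
                 v_min, v_max)
    b = (tz - v_min) / delta_z                      # fractional atom index
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)
    # Split each atom's probability mass between its two neighbouring bins.
    m = np.zeros_like(next_probs)
    batch = np.arange(next_probs.shape[0])[:, None]
    np.add.at(m, (batch, lower), next_probs * (upper - b))
    np.add.at(m, (batch, upper), next_probs * (b - lower))
    # When b lands exactly on an atom, both weights above are zero;
    # deposit the full mass there so probability is conserved.
    np.add.at(m, (batch, lower), next_probs * (lower == upper))
    return m
```

For a terminal transition with reward 0, all mass collapses onto the atom at value 0, and the projected distribution still sums to one.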

Multi-equipment condition-based maintenance (CBM) system using QR-DQN with probability distribution analysis. Coordinated maintenance decision-making for 4 industrial equipment units with realistic anomaly rates (1.9-2.2%), risk analysis (VaR/CVaR), and 51-quantile distribution visualization.

  • Updated Dec 21, 2025
  • Python
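The VaR/CVaR risk analysis mentioned in this entry and the QR-DQN one above falls out naturally from a quantile representation of the return: VaR is an estimated quantile and CVaR is the mean of the tail below it. A hedged NumPy sketch assuming sorted quantile estimates at the standard QR-DQN midpoint fractions (function name and interpolation choice are illustrative):

```python
import numpy as np

def var_cvar_from_quantiles(quantiles, alpha=0.05):
    """Risk metrics from a quantile estimate of the return distribution.

    quantiles: sorted (N,) estimated return quantiles at midpoints
               tau_i = (2i + 1) / (2N), as in QR-DQN.
    alpha:     tail probability; 0.05 means the worst 5% of outcomes.
    Returns (VaR, CVaR): the alpha-quantile of return, and the mean
    return over quantiles at or below that level.
    """
    n = quantiles.shape[0]
    tau = (np.arange(n) + 0.5) / n
    # VaR: linearly interpolate the alpha-quantile between midpoints.
    var = np.interp(alpha, tau, quantiles)
    # CVaR: average the quantile estimates in the alpha-tail.
    tail = quantiles[tau <= alpha]
    cvar = tail.mean() if tail.size else quantiles[0]
    return var, cvar
```

Since returns here are rewards (higher is better), CVaR is never above VaR; a risk-sensitive maintenance policy can rank actions by CVaR instead of the mean return.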
