RL-BATT-DDPG

This repository applies reinforcement learning (RL) to battery fast charging in an advanced battery management system (BMS). Using the deep deterministic policy gradient (DDPG) algorithm, the RL agent learns a fast-charging control strategy that is competitive with existing model-based control strategies. In this repo, the RL results are reproduced using a publicly available lithium cobalt oxide (LiCoO2) battery chemistry. Specifically, we present the RL agent with both a state-feedback and an output-feedback learning policy, which can be found in the sub-folders.
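Two standard ingredients of DDPG referenced throughout this repo are the soft (Polyak) target-network update and Ornstein-Uhlenbeck exploration noise for the deterministic policy. The sketch below illustrates both in isolation; the parameter values and dictionary-based "network" representation are illustrative, not the repo's actual implementation.

```python
import numpy as np

def soft_update(target, source, tau=0.005):
    """Polyak-average the online parameters into the target network:
    theta_target <- tau * theta + (1 - tau) * theta_target."""
    return {k: tau * source[k] + (1.0 - tau) * target[k] for k in target}

class OUNoise:
    """Ornstein-Uhlenbeck process, a common exploration noise for DDPG's
    deterministic actor (theta/sigma values here are illustrative defaults)."""
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1.0, seed=0):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(dim)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # Mean-reverting drift toward 0 plus Gaussian diffusion.
        self.x = (self.x
                  - self.theta * self.x * self.dt
                  + self.sigma * np.sqrt(self.dt)
                  * self.rng.standard_normal(self.x.shape))
        return self.x.copy()
```

In DDPG the soft update is applied to both the actor and critic targets after every gradient step, which stabilizes the bootstrapped critic targets.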

Requirements

How to run

  • In "settings_file.py" in each folder, the training settings and constraints are defined. The RL training hyper-parameters are defined in "model.py".

  • You can run the code in each folder and reproduce the results as follows:

# Training & Testing
python main_training_testing_output_fdbk.py --id 0
python main_training_testing_state_fdbk.py --id 0

  • You can try different parameter initializations with the additional argument --id, e.g., --id 0, --id 1, ...

  • After training the RL agent, you can evaluate the performance of the RL controller by executing the notebook:

# Evaluation & Visualization
jupyter notebook

  • Launch the notebook file (.ipynb) in the sub-folder for deeper analysis.
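The `--id` flag selects the parameter-initialization index for a run. A minimal sketch of how such a flag could be parsed and turned into deterministic seeds (the `init_run` seeding scheme is hypothetical, not the repo's code):

```python
import argparse
import numpy as np

def parse_args(argv=None):
    # "--id" mirrors `python main_training_testing_state_fdbk.py --id 0`.
    parser = argparse.ArgumentParser(description="RL-BATT-DDPG training run")
    parser.add_argument("--id", type=int, default=0,
                        help="parameter-initialization index for this run")
    return parser.parse_args(argv)

def init_run(run_id):
    # Hypothetical scheme: derive a deterministic RNG from the run id so
    # each --id value reproduces the same network initialization.
    return np.random.default_rng(run_id)
```

Passing different ids (`--id 0`, `--id 1`, ...) then yields independent but reproducible initializations.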

Figures and results are saved in the "figures" directory in each subfolder of the repository. The figures in the manuscript differ from the repository results because the manuscript's battery model uses company-owned battery parameters.

Results

Learning Curve

(figures: State_Feedback, Output_Feedback)

Li-plating Constraint Violation

(figure: State_Feedback)

Voltage Constraint Violation

(figure: Output_Feedback_VoltConst)

Temperature Constraint Violation

(figures: State_Feedback_TempConst, Output_Feedback_TempConst)

Charging Time

(figures: State_Feedback_Time, Output_Feedback_Time)

Bibtex

@article{park2022deep,
  title={A Deep Reinforcement Learning Framework for Fast Charging of Li-ion Batteries},
  author={Park, Saehong and Pozzi, Andrea and Whitmeyer, Michael and Perez, Hector and Kandel, Aaron and Kim, Geumbee and Choi, Yohwan and Joe, Won Tae and Raimondo, Davide M and Moura, Scott},
  journal={IEEE Transactions on Transportation Electrification},
  year={2022},
  publisher={IEEE}
}

Contacts / Collaborators

Saehong Park: sspark@berkeley.edu

Andrea Pozzi: andrea.pozzi03@universitadipavia.it

Aaron Kandel: aaronkandel@berkeley.edu
