Latent ODE and Latent ODE-RNN models are difficult to train due to the vanishing and exploding gradients problem. To overcome this, the main contribution of this paper is a new model, the Latent ODE-LSTM, which uses an ODE-LSTM (Long Short-Term Memory) network as the encoder of a Latent ODE. To limit the growth of the gradients, the Norm Gradient Clipping strategy is embedded in the Latent ODE-LSTM model. The performance of the new Latent ODE-LSTM (with and without Norm Gradient Clipping) is then evaluated on continuous time series (CTS) with regular and irregular sampling rates. Numerical experiments show that the new Latent ODE-LSTM outperforms Latent ODE-RNNs and avoids vanishing and exploding gradients during training.
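As a rough illustration of the two ingredients named above, and assuming a PyTorch/torchdiffeq setup, the sketch below (not the authors' implementation) shows an LSTM cell whose hidden state is evolved by an ODE solver between observation times, together with a training step that applies norm gradient clipping via `torch.nn.utils.clip_grad_norm_`. The layer sizes, toy data, placeholder loss, and `max_norm` value are illustrative assumptions.

```python
# Minimal sketch of an ODE-LSTM encoder step with norm gradient clipping.
# Assumes PyTorch and torchdiffeq; sizes and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Parameterises dh/dt, used to evolve the hidden state between observations."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
                                 nn.Linear(hidden_dim, hidden_dim))

    def forward(self, t, h):
        return self.net(h)


class ODELSTMEncoder(nn.Module):
    """LSTM cell whose hidden state is integrated by an ODE solver between
    (possibly irregularly spaced) observation times."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        self.odefunc = ODEFunc(hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, x, t):
        # x: (batch, seq_len, input_dim), t: (seq_len,) observation times
        batch = x.size(0)
        h = torch.zeros(batch, self.hidden_dim)
        c = torch.zeros(batch, self.hidden_dim)
        for i in range(x.size(1)):
            if i > 0:
                # Continuously evolve h from t[i-1] to t[i]
                h = odeint(self.odefunc, h, t[i - 1:i + 1])[-1]
            h, c = self.cell(x[:, i], (h, c))
        return h


# One training step with norm gradient clipping (max_norm chosen for illustration)
model = ODELSTMEncoder(input_dim=2, hidden_dim=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 10, 2)            # toy batch of 10 observations
t = torch.linspace(0.0, 1.0, 10)     # irregular times would also work
loss = model(x, t).pow(2).mean()     # placeholder loss for illustration

optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

Clipping the gradient norm after `backward()` and before `step()` bounds the size of each update, which is what keeps the exploding-gradients problem in check during training.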
Originally available here.
Latent ODE-RNN
To train the Latent ODE-RNN on the spiral dataset, run:

python latentODE_RNN_workingSpiral.py
Latent ODE-LSTM
To train the Latent ODE-LSTM on the spiral dataset, run:

python latentODE_LSTM_workingSpiral.py
Download the dataset here.
If you found this resource useful in your research, please consider citing:
@article{coelho2023enhancing,
  title={Enhancing Continuous Time Series Modelling with a Latent ODE-LSTM Approach},
  author={Coelho, C. and Costa, M. Fernanda P. and Ferr{\'a}s, L. L.},
  journal={arXiv preprint arXiv:2307.05126},
  year={2023}
}