
FilterNet - Learning-Aided Filtering in Python

Welcome to FilterNet


Notice

🎉🎉🎉Excited to share that our paper "LAKALMANTRACKER: ROBUST LEARNING-AIDED KALMAN FILTERING FOR MULTI-OBJECT TRACKING" has been accepted to ICASSP 2026! 🎉🎉🎉

We will upload all the code once the paper submitted to *Information Fusion* is accepted.

You can view papers related to Learning-Aided Filtering through the following links.

🥳 What's New

  • Feb. 2025: 🌟🌟🌟🌟🌟 Add NCLT Fusion task benchmark (with WandB logger), Lorenz Attractor benchmark (with WandB logger).
  • Feb. 2025: 🌟🌟🌟 First commit. Added support for the model-based Kalman filter, Extended Kalman filter, Interacting Multiple Model, and the learning-aided filters KalmanNet, Split-KalmanNet, and DANSE.

Introduction

This library provides learning-aided/data-driven Kalman filtering and related optimal and non-optimal filtering software in Python. It contains Kalman filters, Extended Kalman filters, KalmanNet, Split-KalmanNet, and our Semantic-Independent KalmanNet (SIKNet, submitted to Information Fusion, under review). The library is built on PyTorch-Lightning, MMEngine, and WandB.

Highlights

Learning-Aided Kalman Filtering

  • Unified data structure

    Existing learning-aided Kalman filtering implementations each follow their own data conventions, which makes comparing algorithms difficult. We therefore use a unified data structure across all supported algorithms, so that you only need to swap the dataset to compare algorithms seamlessly.

  • Multiple tasks supported

    Makes it easy to compare the performance of your own algorithms across different tasks, such as the Lorenz Attractor, NCLT Fusion, NCLT Estimation, and Motion Estimation.

  • Easy to develop your own models

    Many basic modules are already implemented, e.g. constant-velocity (CV) and constant-acceleration (CA) motion models, which can easily be extended to build your own models.

  • Support for multiple GPUs and Batches

    The code supports multi-GPU as well as mini-batch training (which the original releases of many papers, e.g. KalmanNet and DANSE, did not support).
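As a concrete illustration of the kind of basic module mentioned above, here is a minimal sketch of a constant-velocity (CV) transition model. The state layout and the helper name are illustrative assumptions for this sketch, not FilterNet's actual API.

```python
import numpy as np

def cv_transition(dt: float, dims: int = 2) -> np.ndarray:
    """Block-diagonal constant-velocity (CV) transition matrix.

    Assumes the state is laid out as one (position, velocity) pair per
    spatial dimension, e.g. [x, vx, y, vy]. This layout and this helper
    are illustrative only, not necessarily FilterNet's convention.
    """
    block = np.array([[1.0, dt],
                      [0.0, 1.0]])
    return np.kron(np.eye(dims), block)

F = cv_transition(dt=0.1, dims=2)
x = np.array([0.0, 1.0, 0.0, 2.0])   # [x, vx, y, vy]
x_next = F @ x                       # positions advance by velocity * dt
```

A CA model extends each block to include an acceleration term in the same block-diagonal fashion.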

Advanced Features

  • Pytorch-Lightning

    We use PyTorch-Lightning to simplify the training process. It provides a rich API that saves a lot of engineering code, e.g. DDP, loggers, and training loops.

  • MMEngine

    We use MMEngine.Config to manage model configs. There are several benefits to using config files to manage model training:

    • Backup & Restore: avoids internal code modifications and improves the reproducibility of experiments.
    • Flexibility: the config file provides a fast and flexible way to modify training hyperparameters.
    • Friendliness: config files are separate from the model/training code; by reading a config file, the user can quickly understand the hyperparameters of different models as well as the training strategies, such as the optimizer, scheduler, and data augmentation.
  • WandB

    We use WandB to visualize training logs. PyTorch-Lightning supports a variety of loggers, such as TensorBoard and WandB, but this project uses WandB as the default because it makes sharing training logs and collaborating with others very easy. In the future, we will share the logs of all models on WandB, so that you can easily view and compare the performance and convergence speed of different models.
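To make the MMEngine config idea above concrete, a config file in this style might look like the following. MMEngine configs are plain Python files whose top-level variables define the experiment; every field name below is hypothetical, shown only to convey the idea, not FilterNet's actual schema.

```python
# Hypothetical config file, e.g. configs/knet_lorenz.py (illustrative only).
# Such files are typically loaded with mmengine.Config.fromfile(path),
# which returns a dict-like Config object.
model = dict(type='KalmanNet', state_dim=3, obs_dim=3)
train_dataloader = dict(batch_size=64, num_workers=4)
optimizer = dict(type='Adam', lr=1e-3)
max_epochs = 100
```

Changing a hyperparameter then means editing (or overriding) one line of the config rather than touching the training code.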

Model Zoo

Learning-Aided Kalman Filtering

Overview

| Supported methods | Supported datasets | Supported Tasks | Others |
| --- | --- | --- | --- |

Model-Based Kalman Filtering

Overview

| Supported methods | Supported datasets | Supported Tasks |
| --- | --- | --- |

Abbrv

| Method Abbrv | Name |
| --- | --- |
| KNet | KalmanNet |
| SKNet | Split-KalmanNet |
| DANSE | DANSE |
| SIKNet | Semantic-Independent KalmanNet |

Supervised Learning or Unsupervised Learning?

| Methods | Supervised Learning | Unsupervised Learning |
| --- | --- | --- |
| KNet | | |
| SKNet | | |
| DANSE | | |
| SIKNet | | |

Benchmark

✨Note

  • The number of parameters of the same model is not fixed. The network's parameter count depends on the dimensions of the system state and the observation, and these dimensions differ between tasks. Therefore, the parameter count of the same model may vary greatly across tasks.
  • 🚩🚩These models are extremely sensitive to numerical values; different machines/hyperparameters may cause drastic changes in performance (the metrics may be slightly lower or higher than in the original paper). We report the best metrics we obtained for each model.

Motion Estimation in MOT Datasets

| Methods | Recall@50 | Recall@75 | Recall@50:95 |
| --- | --- | --- | --- |
| KF | | | |
| KNet | | | |
| SKNet | | | |
| SIKNet | | | |

Lorenz Attractor

  • For convenience, we report RMSE directly.
  • Default parameters: $q^2 = 10^{-4}$ and $r^2 \in \{1, 10, 100, 1000\}$.
  • System state dimension $m = 3$; observation dimension $n = 3$.
  • More details.
  • WandB Logger. The results need to be viewed grouped by run.
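For reference, the RMSE over a length-$T$ trajectory is computed in the usual way (this is the standard definition; the exact normalization used here may differ):

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{T} \sum_{t=1}^{T} \lVert \hat{\mathbf{x}}_t - \mathbf{x}_t \rVert_2^2}
```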

Note: In order to compare with the other models, DANSE is trained with the supervised method from its source code.
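To make the benchmark setup concrete, here is a minimal sketch of how a noisy Lorenz-63 trajectory can be generated. The Lorenz parameters are the standard ones; the Euler step size and the identity observation model are assumptions of this sketch, since the benchmark only specifies $q^2$, $r^2$, $m$, and $n$.

```python
import numpy as np

def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 dynamics (state dimension m = 3)."""
    dx = np.array([
        sigma * (x[1] - x[0]),
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ])
    return x + dt * dx

def simulate(T=200, q2=1e-4, r2=1.0, seed=0):
    """Noisy trajectory with Gaussian process noise q^2 and observation noise r^2.

    The identity observation model (n = 3) and dt = 0.01 are illustrative
    assumptions, not necessarily the benchmark's exact settings.
    """
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 1.0, 1.0])
    states, obs = [], []
    for _ in range(T):
        x = lorenz_step(x) + rng.normal(0.0, np.sqrt(q2), size=3)
        states.append(x)
        obs.append(x + rng.normal(0.0, np.sqrt(r2), size=3))
    return np.array(states), np.array(obs)
```

The "Obs Error" baseline in the table corresponds to taking the raw observation as the state estimate.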

| Methods | Params | RMSE@1 | RMSE@10 | RMSE@100 | RMSE@1000 | Config |
| --- | --- | --- | --- | --- | --- | --- |
| Obs Error | None | 2.31 | 3.78 | 10.26 | 31.56 | None |
| KNetArch1 | 366 K | NaN | | | | config |
| KNetArch2 | 366 K | NaN | | | | config |
| SKNet | 149 K | | | | | config |
| DANSE | 4.3 K | | | | | config |
| SIKNet | 140 K | | | | | config |

NCLT Sensor Fusion

RMSE in Meters for Different Methods on the Test Dataset

| Methods | Params | 2012-11-04 (4834 s) | 2012-11-16 (4917 s) | 2013-04-05 (4182 s) | Avg | All | Config |
| --- | --- | --- | --- | --- | --- | --- | --- |
| G-O | - | 46.141 | 19.500 | 34.538 | 33.393 | 35.084 | - |
| W-O EKF | - | 117.76 | 81.61 | 83.99 | 94.46 | - | - |
| W-G EKF | - | 18.76 | 12.29 | 8.94 | 13.33 | - | - |
| W-G KNetArch1 | 1.3 M | 15.520 | 7.806 | 7.087 | 10.137 | 10.961 | config |
| W-G KNetArch2 | 107 K | 14.899 | 8.916 | 8.490 | 10.768 | 11.256 | config |
| W-G SKNet | 463 K | 16.105 | 10.037 | 6.532 | 10.891 | 11.762 | config |
| W-G SIKNet | 453 K | 14.434 | 7.687 | 5.999 | 9.374 | 10.399 | config |

Getting Started

Installation

Please refer to Installation.

Supported Datasets

Please refer to Datasets.

Training

Please refer to Training.

Citation

If you find this repo useful, please cite our papers.

```bibtex
@misc{song2025motionestimationmultiobjecttracking,
      title={Motion Estimation for Multi-Object Tracking using KalmanNet with Semantic-Independent Encoding},
      author={Jian Song and Wei Mei and Yunfeng Xu and Qiang Fu and Renke Kou and Lina Bu and Yucheng Long},
      year={2025},
      eprint={2509.11323},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.11323},
}
@ARTICLE{10605082,
  author={Song, Jian and Mei, Wei and Xu, Yunfeng and Fu, Qiang and Bu, Lina},
  journal={IEEE Signal Processing Letters},
  title={Practical Implementation of KalmanNet for Accurate Data Fusion in Integrated Navigation},
  year={2024},
  volume={31},
  pages={1890-1894},
  keywords={Training;Sensor fusion;Global Positioning System;Navigation;Vectors;Kalman filters;Wheels;Integrated navigation and localization;Kalman filter;recurrent neural networks;sensor fusion},
  doi={10.1109/LSP.2024.3431443}}
```

Others

Acknowledgement

The structure of this repository and much of its code draw on the following repositories.

  • filterpy: A really great (perhaps the best) Python-based filtering library, accompanied by an excellent filter teaching repository.
  • torchfilter: A library for discrete-time Bayesian filtering in PyTorch that implements filters as standard PyTorch modules.
