JRS is an object detection benchmark based on Jittor, mainly focusing on oriented object detection in aerial images.
JRS environment requirements:

- System: Linux (e.g. Ubuntu/CentOS/Arch), macOS, or Windows Subsystem for Linux (WSL)
- Python version >= 3.7
- CPU compiler (at least one of the following is required)
  - g++ (>=5.4.0)
  - clang (>=8.0)
- GPU compiler (optional)
  - nvcc (>=10.0 for g++ or >=10.2 for clang)
- GPU library: cudnn-dev (tar file installation recommended; see the reference link)
Step 1: Install the requirements

```shell
git clone https://github.com/NK-JittorCV/nk-remote JRS
cd JRS
python -m pip install -r requirements.txt
```
If you run into any problems installing Jittor, please refer to the Jittor documentation.
Step 2: Install JRS

```shell
cd JRS
# recommended
python setup.py develop
# or
python setup.py install
```
If you don't have permission to install, please add the `--user` flag.

Alternatively, use `PYTHONPATH`: add

```shell
export PYTHONPATH=$PYTHONPATH:{you_own_path}/JRS/python
```

to your `.bashrc`, then run `source .bashrc`.
The following datasets are supported in JRS; please check the corresponding document before use.
DOTA1.0/DOTA1.5/DOTA2.0 Dataset: dota.md.
FAIR Dataset: fair.md
SSDD/SSDD+: ssdd.md
You can also build your own dataset by converting your data to DOTA format.
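As a sketch of what a DOTA-format annotation looks like — each object is one line of four corner points followed by a category and a difficulty flag — a minimal conversion from hypothetical axis-aligned boxes might be:

```python
# Minimal sketch: convert axis-aligned boxes (x, y, w, h) to DOTA-format
# annotation lines. The (x, y, w, h) input format here is a hypothetical
# example; DOTA itself stores each object as
# "x1 y1 x2 y2 x3 y3 x4 y4 category difficult".

def box_to_dota_line(x, y, w, h, category, difficult=0):
    """Expand an axis-aligned box into four clockwise corner points."""
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    coords = " ".join(f"{cx:.1f} {cy:.1f}" for cx, cy in corners)
    return f"{coords} {category} {difficult}"

print(box_to_dota_line(10, 20, 100, 50, "plane"))
# -> 10.0 20.0 110.0 20.0 110.0 70.0 10.0 70.0 plane 0
```

Oriented boxes would supply the four rotated corners directly instead of expanding an axis-aligned rectangle.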
JRS defines the model, dataset, and training/testing method via a config file; please check config.md to learn how it works.
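To give a feel for the mechanism, a config file is plain Python whose top-level names describe the experiment. The field names below are illustrative assumptions modeled on typical Python-based config systems, not the exact JRS schema — see config.md and the shipped configs for the real fields:

```python
# Illustrative sketch of a JRS-style config file; the field names
# (model, dataset, optimizer, max_epoch, resume_path) are assumptions,
# not the actual JRS schema.
model = dict(
    type="S2ANet",
    backbone=dict(type="Resnet50", pretrained=True),
)
dataset = dict(
    train=dict(type="DOTADataset", dataset_dir="/path/to/dota/train"),
    test=dict(type="DOTADataset", dataset_dir="/path/to/dota/test"),
)
optimizer = dict(type="SGD", lr=0.01, momentum=0.9, weight_decay=1e-4)
max_epoch = 12  # a "1x" schedule

# To test a downloaded checkpoint, point resume_path at it:
resume_path = "/path/to/checkpoint.pkl"
```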
```shell
python tools/run_net.py --config-file=configs/s2anet_r50_fpn_1x_dota.py --task=train
```
If you want to test a downloaded trained model, please set `resume_path={you_checkpointspath}` in the last line of the config file.
```shell
python tools/run_net.py --config-file=configs/s2anet_r50_fpn_1x_dota.py --task=test
```
You can test and visualize results on your own image sets by:
```shell
python tools/run_net.py --config-file=configs/s2anet_r50_fpn_1x_dota.py --task=vis_test
```
You can choose the visualization style you prefer; for more details about visualization, please refer to visualization.md.
In this section, we will introduce how to build a new project (model) with JRS. Install JRS first, then create a new project by:
```shell
mkdir $PROJECT_PATH$
cd $PROJECT_PATH$
cp $JRS_PATH$/tools/run_net.py ./
mkdir configs
```
Then we can create and edit `configs/base.py`, following the example of `$JRS_PATH$/configs/retinanet.py`.

If we need a new layer, we can define it in `$PROJECT_PATH$/layers.py` and import `layers.py` in `$PROJECT_PATH$/run_net.py`; the layer can then be used in config files.
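The import in `run_net.py` matters because registering a layer is typically a side effect of importing the module that defines it. As an illustration of that pattern — the registry below is a generic sketch, not the actual JRS API:

```python
# Generic sketch of the registration-by-import pattern used by
# config-driven frameworks; JRS's actual registry API may differ.

LAYERS = {}  # hypothetical global registry, keyed by class name

def register_layer(cls):
    """Decorator: importing the module that applies it registers the class."""
    LAYERS[cls.__name__] = cls
    return cls

# layers.py would contain something like:
@register_layer
class MyCustomHead:
    def __init__(self, num_classes=15):
        self.num_classes = num_classes

# Because run_net.py imports layers.py, the registry is populated before
# the config is parsed, so a config can refer to the layer by name:
head = LAYERS["MyCustomHead"](num_classes=15)
print(head.num_classes)  # -> 15
```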
Then we can train/test this model by:
```shell
python run_net.py --config-file=configs/base.py --task=train
python run_net.py --config-file=configs/base.py --task=test
```
Models | Dataset | Sub_Image_Size/Overlap | Train Aug | Test Aug | Optim | Lr schd | mAP | Paper | Config | Download |
---|---|---|---|---|---|---|---|---|---|---|
OrientedRCNN-LSKNet-T-FPN | DOTA1.0 | 1024/200 | flip+ra90+bc+ms | ms | AdamW | 1x | 81.37 | IJCV | config | model |
OrientedRCNN-LSKNet-S-FPN | DOTA1.0 | 1024/200 | flip+ra90+bc+ms | ms | AdamW | 1x | 81.64 | IJCV | config | model |
OrientedRCNN-R50-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 75.62 | ICCV21 | config | model |
S2ANet-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 74.11 | arxiv | config | model |
S2ANet-R50-FPN | DOTA1.0 | 1024/200 | flip+ra90+bc | - | SGD | 1x | 76.40 | arxiv | config | model |
S2ANet-R50-FPN | DOTA1.0 | 1024/200 | flip+ra90+bc+ms | ms | SGD | 1x | 79.72 | arxiv | config | model |
S2ANet-R101-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 74.28 | arxiv | config | model |
Gliding-R50-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 72.93 | arxiv | config | model |
Gliding-R50-FPN | DOTA1.0 | 1024/200 | Flip+ra90+bc | - | SGD | 1x | 74.93 | arxiv | config | model |
H2RBox-R50-FPN | DOTA1.0 | 1024/200 | flip | - | AdamW | 1x | 67.62 | arxiv | config | model |
RetinaNet-hbb-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 68.02 | arxiv | config | model |
RetinaNet-obb-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 68.07 | arxiv | config | model |
GWD-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 68.88 | arxiv | config | model |
KLD-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 69.10 | arxiv | config | model |
KFIoU-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 69.36 | arxiv | config | model |
FasterRCNN-R50-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 69.631 | arxiv | config | model |
RoITransformer-R50-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 73.842 | arxiv | config | model |
FCOS-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 70.40 | ICCV19 | config | model |
ReDet-R50-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 76.23 | arxiv | config | model pretrained |
CSL-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 67.99 | arxiv | config | model |
RSDet-R50-FPN | DOTA1.0 | 1024/200 | Flip | - | SGD | 1x | 68.41 | arxiv | config | model |
ATSS-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 72.44 | arxiv | config | model |
Reppoints-R50-FPN | DOTA1.0 | 1024/200 | flip | - | SGD | 1x | 56.34 | arxiv | config | model |
Notice:
- ms: multiscale
- flip: random flip
- ra: rotation augmentation
- ra90: rotation augmentation with angles 90, 180, 270
- 1x: 12 epochs
- bc: category balancing
- mAP: mean Average Precision on DOTA1.0 test set
✔️ Supported 🕒 Doing ➕ TODO
- ✔️ S2ANet
- ✔️ Gliding
- ✔️ RetinaNet
- ✔️ Rotated RetinaNet
- ✔️ Faster R-CNN
- ✔️ SSD
- ✔️ ROI Transformer
- ✔️ FCOS
- ✔️ Oriented R-CNN
- ✔️ YOLOv5
- ✔️ GWD
- ✔️ KLD
- ✔️ H2RBox
- ✔️ KFIoU
- ✔️ Localization Distillation
- ✔️ ReDet
- ✔️ CSL
- ✔️ Reppoints
- ✔️ RSDet
- ✔️ ATSS
- ✔️ LSKNet
- ✔️ Strip R-CNN
✔️ Supported 🕒 Doing ➕ TODO
- ✔️ DOTA1.0
- ✔️ DOTA1.5
- ✔️ DOTA2.0
- ✔️ SSDD
- ✔️ SSDD+
- ✔️ FAIR
- ✔️ COCO
JRS is currently maintained by the NKU Media Computing Lab. If you are also interested in JRS and want to improve it, please join us!
@article{Li_2024_IJCV,
title={LSKNet: A Foundation Lightweight Backbone for Remote Sensing},
author={Li, Yuxuan and Li, Xiang and Dai, Yimian and Hou, Qibin and Liu, Li and Liu, Yongxiang and Cheng, Ming-Ming and Yang, Jian},
journal={International Journal of Computer Vision},
year={2024},
doi={10.1007/s11263-024-02247-9},
publisher={Springer}
}
@article{yuan2025strip,
title={Strip R-CNN: Large Strip Convolution for Remote Sensing Object Detection},
author={Yuan, Xinbin and Zheng, ZhaoHui and Li, Yuxuan and Liu, Xialei and Liu, Li and Li, Xiang and Hou, Qibin and Cheng, Ming-Ming},
journal={arXiv preprint arXiv:2501.03775},
year={2025}
}
@article{hu2020jittor,
title={Jittor: a novel deep learning framework with meta-operators and unified graph execution},
author={Hu, Shi-Min and Liang, Dun and Yang, Guo-Ye and Yang, Guo-Wei and Zhou, Wen-Yang},
journal={Science China Information Sciences},
volume={63},
number={222103},
pages={1--21},
year={2020}
}