Miniature One-Shot Texture Segmentation (MOSTS)

This repo is the official implementation of "A Self-Supervised Miniature One-Shot Texture Segmentation (MOSTS) Model for Real-Time Robot Navigation and Embedded Applications" [https://arxiv.org/pdf/2306.08814v1.pdf].

@article{chen2023self,
  title={A Self-Supervised Miniature One-Shot Texture Segmentation (MOSTS) Model for Real-Time Robot Navigation and Embedded Applications},
  author={Chen, Yu and Rastogi, Chirag and Zhou, Zheyu and Norris, William R},
  journal={arXiv preprint arXiv:2306.08814},
  year={2023}
}

This repo is under the Creative Commons Attribution-NonCommercial-ShareAlike (CC-BY-NC-SA) license.

Please cite our work if you find this repo helpful! : )

MOSTS system architecture:


Overall system flow chart


Example results


Demo video

mosts_demo_video.mp4
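
For context, MOSTS performs one-shot texture segmentation: given a small reference patch of a target texture plus a query image, it predicts a per-pixel mask of where that texture appears. The toy sketch below only illustrates that input/output shape contract; the model class and similarity rule here are made-up stand-ins, not the actual MOSTS architecture or API (see train_ablation.py for the real interface):

import torch
import torch.nn as nn

class DummyOneShotSegmenter(nn.Module):
    # Toy stand-in: scores each query pixel by closeness to the mean
    # reference color. MOSTS itself compares learned deep features.
    def forward(self, query, reference):
        ref_mean = reference.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
        return -((query - ref_mean) ** 2).sum(dim=1, keepdim=True)

model = DummyOneShotSegmenter().eval()
query = torch.rand(1, 3, 256, 256)       # full scene image (B, C, H, W)
reference = torch.rand(1, 3, 64, 64)     # one-shot texture patch
with torch.no_grad():
    scores = model(query, reference)     # per-pixel similarity map
    mask = scores > scores.mean()        # toy binarization
print(mask.shape)                        # torch.Size([1, 1, 256, 256])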

To run model training on your own PC:

Navigate to the repo directory, open a terminal, and create a virtual environment:

python3 -m venv env               # Create a virtual environment

source env/bin/activate           # Activate virtual environment

Install the non-PyTorch dependencies:

pip install -r requirements.txt   # Install dependencies

Install PyTorch 1.10 (you might have to use a different version depending on your CUDA version):

pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
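
After installation, you can sanity-check that PyTorch sees your GPU (a minimal snippet using the standard torch API, run inside the activated environment):

import torch
print(torch.__version__)           # should report 1.10.0+cu113
print(torch.cuda.is_available())   # True if the CUDA build matches your driver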

To start the training:

python3 train_ablation.py

If you encounter runtime memory issues, you can decrease the batch_size / num_workers according to your GPU spec. Remember to change the dataset file path in "ablation_data_loader" according to your file system, as illustrated in the sketch below.
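
For orientation, these knobs live in a standard PyTorch DataLoader setup; the sketch below is illustrative only (the stand-in dataset and the values are assumptions), and the real values should be edited inside ablation_data_loader:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset for illustration; the real one is built inside
# ablation_data_loader from your dataset file path.
train_dataset = TensorDataset(torch.rand(64, 3, 256, 256))

train_loader = DataLoader(
    train_dataset,
    batch_size=8,       # lower this if you hit CUDA out-of-memory errors
    num_workers=4,      # lower this if host RAM / CPU is the bottleneck
    shuffle=True,
    pin_memory=True,
)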

To exit the virtual environment:

deactivate                       # Exit virtual environment

The encoder backbone (pretrained/mobilenetv3-large-1cd25616.pth) can be downloaded from here:

https://github.com/d-li14/mobilenetv3.pytorch

Please place the downloaded .pth file under /utils/model/ for train_ablation.py to work.
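
As a quick check that the file is in place, the checkpoint can be inspected with plain PyTorch (a minimal sketch, assuming a repo-relative path and that the .pth file is a state dict as distributed in the d-li14 repo):

import torch

# The checkpoint should be a plain state dict of MobileNetV3-Large weights.
state_dict = torch.load("utils/model/mobilenetv3-large-1cd25616.pth",
                        map_location="cpu")
print(len(state_dict), "parameter tensors loaded")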

The DTD dataset can be downloaded from here:

https://www.robots.ox.ac.uk/~vgg/data/dtd/
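
As an aside, recent torchvision releases (0.13+, newer than the version pinned above) also ship DTD as a built-in dataset if you prefer an automatic download; a minimal sketch:

from torchvision import datasets, transforms

# Requires torchvision >= 0.13; downloads DTD on first use.
dtd = datasets.DTD(root="data", split="train", download=True,
                   transform=transforms.ToTensor())
image, label = dtd[0]
print(image.shape, dtd.classes[label])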

The Indoor Small Object Dataset (ISOD) can be downloaded from here:

https://www.kaggle.com/datasets/yuchen66/indoor-small-object-dataset

Please cite our work if you want to use this dataset for research/publication!
