- For semantic segmentation of natural underwater images
- 1525 annotated images for training/validation and 110 samples for testing
- BW: Background/waterbody • HD: Human divers • PF: Aquatic plants and sea-grass • WR: Wrecks/ruins
- RO: Robots/instruments • RI: Reefs/invertebrates • FV: Fish and vertebrates • SR: Sea-floor/rocks
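The eight object categories above are commonly encoded as binary RGB colors in the ground-truth masks (e.g., BW as black, FV as yellow); the exact color scheme below is an assumption and should be checked against the dataset documentation. A minimal sketch for decoding a color-coded mask into per-class binary maps:

```python
import numpy as np

# Assumed 3-bit RGB encoding of the 8 classes; verify against the dataset docs.
CLASS_COLORS = {
    "BW": (0, 0, 0),        # background/waterbody
    "HD": (0, 0, 255),      # human divers
    "PF": (0, 255, 0),      # aquatic plants and sea-grass
    "WR": (0, 255, 255),    # wrecks/ruins
    "RO": (255, 0, 0),      # robots/instruments
    "RI": (255, 0, 255),    # reefs/invertebrates
    "FV": (255, 255, 0),    # fish and vertebrates
    "SR": (255, 255, 255),  # sea-floor/rocks
}

def mask_to_onehot(mask_rgb):
    """Convert an (H, W, 3) color-coded mask to (H, W, 8) one-hot class maps."""
    h, w, _ = mask_rgb.shape
    onehot = np.zeros((h, w, len(CLASS_COLORS)), dtype=np.uint8)
    for idx, color in enumerate(CLASS_COLORS.values()):
        onehot[..., idx] = np.all(mask_rgb == np.array(color), axis=-1)
    return onehot
```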
- A fully-convolutional encoder-decoder network that incorporates residual learning and mirrored skip connections
- Offers competitive semantic segmentation performance at a fast inference rate (28.65 FPS on an NVIDIA GTX 1080 GPU)
- Detailed architecture is in model.py; associated train/test scripts are also provided
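The authoritative architecture lives in model.py; as an illustrative sketch only (not the actual network), an encoder-decoder with residual blocks and mirrored skip connections can be assembled in `tensorflow.keras` roughly as follows. All layer widths and the input shape here are assumptions for demonstration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Two 3x3 convolutions with an identity (or 1x1-projected) shortcut."""
    shortcut = x
    if x.shape[-1] != filters:  # project shortcut when channel counts differ
        shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def build_segmenter(input_shape=(256, 320, 3), n_classes=8):
    """Encoder-decoder with mirrored skip connections (illustrative sketch)."""
    inp = layers.Input(shape=input_shape)
    skips, x = [], inp
    for f in (32, 64, 128):                          # encoder path
        x = residual_block(x, f)
        skips.append(x)                              # saved for the decoder
        x = layers.MaxPooling2D(2)(x)
    x = residual_block(x, 256)                       # bottleneck
    for f, skip in zip((128, 64, 32), reversed(skips)):  # decoder path
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])          # mirrored skip connection
        x = residual_block(x, f)
    out = layers.Conv2D(n_classes, 1, activation="sigmoid")(x)
    return Model(inp, out)
```

The skip connections concatenate each encoder stage's features into the decoder stage at the same spatial resolution, which helps recover fine object boundaries lost during pooling.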
- The get_f1_iou.py script is used for performance evaluation
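The evaluation in get_f1_iou.py boils down to pixel-level F score and IoU; a minimal NumPy sketch of these two metrics for binary masks (illustrative, not the script's exact implementation):

```python
import numpy as np

def f1_and_iou(pred, gt, eps=1e-8):
    """Pixel-level F1 score and IoU for a pair of binary (0/1) masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # true positives
    fp = np.logical_and(pred, ~gt).sum()      # false positives
    fn = np.logical_and(~pred, gt).sum()      # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)           # Jaccard index
    return f1, iou
```

In practice these are computed per class from the one-hot masks and then averaged to report the mean scores.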
- Performance analysis for semantic segmentation and saliency prediction
- SOTA models used for comparison: • FCN • UNet • SegNet • PSPNet • DeepLab-v3
- Metrics: • region similarity (mIOU) and • contour accuracy (F score)
- Further analysis and implementation details are provided in the paper
- https://github.com/qubvel/segmentation_models
- https://github.com/divamgupta/image-segmentation-keras
- https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap
- https://github.com/zhixuhao/unet
- https://github.com/aurora95/Keras-FCN
- https://github.com/MLearing/Keras-Deeplab-v3-plus/
- https://github.com/wenguanwang/ASNet