This repository contains the code and resources for a comparative analysis of two state-of-the-art semantic segmentation models, DenseASPP and GatedSCNN, on the WoodScapes dataset. The dataset features fisheye images commonly used in automotive applications, which present significant radial distortion challenges.
The study employs transfer learning to adapt the models, originally trained on the Cityscapes dataset, to handle distortions in WoodScapes. Results indicate that the GatedSCNN model outperforms DenseASPP in mean Intersection over Union (mIoU) and F1-score, showing better boundary precision and class differentiation.
- Pre-trained Models: Utilizes DenseASPP and GatedSCNN models pre-trained on Cityscapes.
- Transfer Learning: Adapts models to the WoodScapes dataset with radial distortions.
- Performance Metrics: Evaluates models using mIoU and F1-score.
- Python 3.6 or higher
- PyTorch
- CUDA (for GPU support)
The WoodScapes dataset is used for training and evaluation. It includes high-resolution fisheye images with significant radial distortion.
- Download: WoodScapes Dataset
- Preparation: Follow the scripts provided in the `data_preparation` directory to preprocess and annotate the dataset.
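The repository's preparation scripts are not reproduced here, but a typical preprocessing step converts each fisheye frame into a normalized, channel-first array before it is fed to a PyTorch-style model. The sketch below is a hypothetical illustration (the function name and the ImageNet normalization statistics are assumptions, not taken from the repo's scripts):

```python
import numpy as np

# ImageNet mean/std, commonly reused for Cityscapes-pretrained backbones (assumed here)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(image_u8):
    """Convert an HxWx3 uint8 fisheye frame into a normalized CHW float array."""
    x = image_u8.astype(np.float32) / 255.0      # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD       # per-channel normalization
    return x.transpose(2, 0, 1)                  # HWC -> CHW for PyTorch models

# Example: a dummy 4x6 RGB frame
out = preprocess(np.zeros((4, 6, 3), dtype=np.uint8))
```

Label maps would additionally need to be remapped to the WoodScapes class indices, which the repo's annotation scripts handle.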
- Objective: Evaluate pre-trained models on Cityscapes and WoodScapes datasets.
- Results: Both models initially performed poorly on WoodScapes due to the unmodeled radial distortions.
- Objective: Adapt models using transfer learning.
- Results: Significant performance improvement on WoodScapes.
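In outline, the transfer-learning step replaces the Cityscapes classification head with one sized for the target classes and fine-tunes on WoodScapes, optionally freezing the pretrained encoder. The PyTorch sketch below is a minimal stand-in: the tiny backbone, function name, and channel sizes are assumptions for illustration; the real repo would load DenseASPP or GatedSCNN weights instead.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a Cityscapes-pretrained encoder
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())

def adapt_for_woodscapes(backbone, num_classes, freeze_backbone=True):
    """Attach a fresh segmentation head; optionally freeze the pretrained encoder."""
    if freeze_backbone:
        for p in backbone.parameters():
            p.requires_grad = False      # only the new head receives gradients
    head = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel class logits
    return nn.Sequential(backbone, head)

model = adapt_for_woodscapes(backbone, num_classes=10)
out = model(torch.zeros(1, 3, 32, 32))   # logits of shape (1, 10, 32, 32)
```

Freezing the encoder is a common first stage; unfreezing it for a second, lower-learning-rate stage often improves results further.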
- Metrics: mIoU and F1-score
- Comparison:
- DenseASPP: mIoU - 38.54%, F1-score - 0.46
- GatedSCNN: mIoU - 50.28%, F1-score - 0.62
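Both metrics can be computed from a class confusion matrix: per-class IoU is TP / (TP + FP + FN), averaged into mIoU, and macro F1 averages the per-class harmonic mean of precision and recall. A minimal sketch (the function name is an assumption; the repo's own evaluation code may differ):

```python
def miou_and_macro_f1(cm):
    """cm[i][j] = pixels of true class i predicted as class j."""
    n = len(cm)
    ious, f1s = [], []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # predicted c, true other
        fn = sum(cm[c]) - tp                        # true c, predicted other
        union = tp + fp + fn
        ious.append(tp / union if union else 0.0)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(ious) / n, sum(f1s) / n

# Toy 2-class example: mIoU = 0.6, macro F1 = 0.75
miou, f1 = miou_and_macro_f1([[3, 1], [1, 3]])
```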
This project is licensed under the MIT License. See the LICENSE file for details.
- Andrea Marinelli - andrea.marinelli@studenti.unipd.it
- Gianmarco Betti - gianmarco.betti@studenti.unipd.it
For more information, please refer to the project report.