This repository contains the code and resources for the research project on position estimation of CubeSats using monocular vision and YOLOv8. The project explores computer vision techniques for detecting and localizing CubeSats in space using a single camera, leveraging deep learning for object detection and pose estimation.
- Implementation of YOLOv8 for real-time CubeSat detection.
- Localization using a monocular camera without depth sensors.
- Camera calibration for accurate position estimation.
- Training and evaluation scripts for deep learning models.
- Experimental results and performance metrics.
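Monocular localization without a depth sensor rests on the pinhole-camera model: given a known CubeSat dimension and a focal length obtained from calibration, range follows from the size of the detected bounding box. A minimal sketch of that relation (function name and numbers are illustrative, not taken from this repository's code):

```python
def estimate_range(focal_px: float, real_width_m: float, bbox_width_px: float) -> float:
    """Pinhole-camera range estimate: Z = f * W / w.

    focal_px      -- focal length in pixels (from camera calibration)
    real_width_m  -- true width of the CubeSat face in metres (0.1 m for a 1U)
    bbox_width_px -- width of the detected bounding box in pixels
    """
    return focal_px * real_width_m / bbox_width_px

# A 1U CubeSat (0.1 m face) seen as a 40 px wide box by a camera
# with an 800 px focal length:
print(estimate_range(800.0, 0.1, 40.0))  # → 2.0 (metres)
```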
```
cubesatMonoPosEstimation/
├── data/                 # Dataset used for training and evaluation
├── models/               # Trained YOLOv8 models
├── src/                  # Source code
│   ├── detect.py         # CubeSat detection script
│   ├── main.py           # Position estimation script using an MP4 video file
│   ├── pecmcv_SBC.py     # Position estimation script using a USB camera
│   ├── calibraCamera.py  # Camera calibration tool
│   └── trainYolov8n.py   # YOLOv8 training pipeline
├── results/              # Performance results and experiment logs
├── README.md             # Project description and instructions
├── requirements.txt      # Dependencies
└── .gitignore            # Files ignored by Git
```
To run this project, ensure you have the following dependencies installed:

```
python>=3.9
torch
ultralytics
opencv-python
numpy
matplotlib
```

You can install them using:

```bash
pip install -r requirements.txt
```
To test CubeSat detection on an image or video:
- Go to `/scripts` and run the `download_media.ipynb` cells in sequence.
- Run the following script:

```bash
python scripts/medirDistYoloCV2.py
```
If you wish to train YOLOv8 on a custom dataset:
- Go to `/data` and run the `download_dataset.ipynb` cells in sequence.
- Run the following script:

```bash
python scripts/trainYolov8n.py
```
Below are visualizations of the model's performance in terms of detection accuracy and labelling quality:
If you use this project in your research, please cite the authors:
For any questions or collaborations, feel free to reach out via GitHub Issues or email: vdmrvitor@gmail.com



