An autonomous computer-vision system that detects road accidents and reports them in real time, and that allows monitoring of accidents through a client-server architecture and an interactive GUI.
Explore Full Documentation »
IEEE Research Paper
·
IEEE Presentation
·
View Demo
·
System Architecture
·
Request Feature
This paper proposes a framework to detect road accidents in real time using generic CCTV cameras installed on roads. The framework focuses on achieving high performance on congested roads by introducing a new vehicle-tracking technique called track-compensated frame interpolation (TCFI), on achieving higher accuracy by introducing a new approach to crash detection called the crash estimation algorithm, and on handling the massive number of CCTV cameras by dropping footage that is highly unlikely to contain an accident at an early stage and by implementing the modules using a pipelining technique. The framework is made up of four stages: first, vehicles are detected using the YOLO neural network, then tracked for several frames using the MOSSE tracker, followed by a filtration process based on the new crash estimation approach; finally, every vehicle's tracked footage is processed through the ViF descriptor, and its output is used as a feature vector for an SVM model that classifies accidents. The system achieves 93% accuracy with a processing time that beats all previous systems.
K. Sabry and M. Emad, "Road Traffic Accidents Detection Based On Crash Estimation," 2021 17th International Computer Engineering Conference (ICENCO), 2021, pp. 63-68, doi: 10.1109/ICENCO49852.2021.9698968.
The framework consists of four phases. It starts with a vehicle detection phase using the YOLO architecture. The second phase is vehicle tracking using the MOSSE tracker. The third phase is a new approach we introduce to detect crashes based on crash estimation. Finally, we can either treat everything that survives the third phase as a crash, or run the fourth phase: crash detection using the violent flow (ViF) descriptor.
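The sketch below shows how these four phases could fit together. It is a minimal, hypothetical outline rather than the project's actual code: every function here is a stub standing in for the real YOLO, MOSSE/TCFI, crash estimation, and ViF+SVM modules.

```python
# Hypothetical sketch of the four-phase pipeline; the stubs stand in
# for the real YOLO, MOSSE/TCFI, crash estimation, and ViF+SVM modules.

def detect_vehicles(frame):
    """Phase 1 stub: would run YOLO and return vehicle bounding boxes."""
    return []

def track_vehicle(frames, box):
    """Phase 2 stub: would run the MOSSE tracker (optionally with TCFI)."""
    return [box for _ in frames]

def estimate_crash(track):
    """Phase 3 stub: would apply the crash estimation filter."""
    return False

def vif_svm_classify(frames, track):
    """Phase 4 stub: would compute the ViF descriptor and query the SVM."""
    return False

def process_footage(frames, use_vif_stage=True):
    """Return the vehicle tracks classified as crashes in a clip."""
    crashes = []
    for box in detect_vehicles(frames[0]):                # Phase 1
        track = track_vehicle(frames, box)                # Phase 2
        if not estimate_crash(track):                     # Phase 3: drop early
            continue
        if not use_vif_stage or vif_svm_classify(frames, track):  # Phase 4
            crashes.append(track)
    return crashes
```

The `use_vif_stage` flag mirrors the choice described above: either trust the crash estimation filter alone, or confirm each candidate with the ViF+SVM stage.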
- Install the dependencies listed in requirements.txt (example launcher after this list)
- Run the backend services:
  - RunMaster.py
  - RunDetect.py
  - RunTracker.py
  - RunCrash.py
- Run the client services:
  - RunGui.py
  - RunCamera.py
- Select a video from the videos folder
- From RunCamera.py, hit Process
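For convenience, the backend and client services could be started from a single helper script like the hypothetical one below. It is not part of the project; it assumes the Run*.py scripts are in the current directory and should be launched with the same interpreter that runs the helper.

```python
# launch_argus.py -- hypothetical helper, not shipped with the project.
# Starts the backend services, then the client services, as subprocesses.
import subprocess
import sys

BACKEND_SCRIPTS = ["RunMaster.py", "RunDetect.py", "RunTracker.py", "RunCrash.py"]
CLIENT_SCRIPTS = ["RunGui.py", "RunCamera.py"]

def launch(scripts):
    """Start each script with the current interpreter; return the process handles."""
    return [subprocess.Popen([sys.executable, script]) for script in scripts]

if __name__ == "__main__":
    processes = launch(BACKEND_SCRIPTS) + launch(CLIENT_SCRIPTS)
    for process in processes:  # keep the launcher alive until every service exits
        process.wait()
```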
The project has different settings, so there are hyper-parameters you need to configure to select the modules you actually want. They live in the file "Constants.py" at "Argus/System/Data/Constants.py" (an example configuration follows this list).
In the Detection Module
- Work_Detect_Files: True if you want to use the vehicle detections already saved for the videos provided with the project, instead of running the YOLO architecture
In the Tracking Module
- Work_Tracker_Type_Mosse: True if you want to use the MOSSE tracker instead of the dlib tracker
- Work_Tracker_Interpolation: True if you want to use track-compensated frame interpolation (TCFI) instead of the normal tracking algorithm
In the Crash Module
- Work_Crash_Estimation_Only: True if you want to use the crash estimation module only, instead of following the crash estimation module with the ViF descriptor
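For example, a Constants.py set up to use the saved detections, the MOSSE tracker with TCFI, and the full crash pipeline might look like this. The flag names come from the list above; the values are just one plausible configuration, not the shipped defaults.

```python
# Argus/System/Data/Constants.py -- example values, not the shipped defaults.

# Detection: use the pre-saved vehicle detections instead of running YOLO.
Work_Detect_Files = True

# Tracking: MOSSE tracker with track-compensated frame interpolation (TCFI).
Work_Tracker_Type_Mosse = True
Work_Tracker_Interpolation = True

# Crash: run the full pipeline (crash estimation followed by the ViF
# descriptor) rather than crash estimation only.
Work_Crash_Estimation_Only = False
```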
The trailer sheds light on the problem so the audience can start thinking about it. It captures the audience's attention and leaves them asking: What is ARGUS? How will it help save people's lives?
The trailer discusses the problem of road crashes and how Argus will help to solve it.
An explanation, in Arabic, of how to test a video in Argus.
The video shows a compilation of road crashes, which is the output of the system.