Commit
Adding readme, fix test_oop
aagustinconti committed Nov 25, 2022
1 parent a39d12a commit f18d459
Showing 10 changed files with 280 additions and 46 deletions.
162 changes: 162 additions & 0 deletions README.md
@@ -0,0 +1,162 @@
# YoloV7 | DeepSort and Count

**#ComputerVision** **#RealTimeDetection** **#RealTimeCounting** **#DeepLearning** **#DeepSort** **#MachineLearning** **#AI** **#OpenCV** **#Python** **#YOLO** **#MOT** **#Docker** **#DigitalImageProcessing**

![Test Cars](readme-img/messi.gif)

## Principal idea

This project started with the idea of being able **to help farm workers who still count cattle by hand.**

Starting from this idea, for a Digital Image Processing project at the [National University of Río Cuarto](https://www.unrc.edu.ar/), I began researching how to achieve that first objective.

As the project developed, I realized that I could further generalize its scope. It wouldn't just be cattle, but could be any class, as long as we can train the detection model.

I have managed to **make detections in "real time" (almost 30 FPS)** with videos or images from different types of sources.

In addition to detection, **the program tracks these objects, detects when they enter a user-defined region of interest (ROI), and keeps an independent count per detected class**, for example:

- person: 2; car: 5.
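The per-class ROI count can be sketched roughly as follows (the track tuple layout and the function name here are illustrative assumptions, not this repo's API):

```python
from collections import defaultdict

def count_in_roi(tracks, roi):
    """Count tracked objects whose centroid falls inside the ROI, per class.

    tracks: list of (track_id, class_name, cx, cy) tuples (hypothetical layout).
    roi: (x1, y1, x2, y2) rectangle.
    """
    x1, y1, x2, y2 = roi
    seen = defaultdict(set)  # class name -> set of track IDs seen inside the ROI
    for track_id, cls, cx, cy in tracks:
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            seen[cls].add(track_id)   # a track is counted once, by its ID
    return {cls: len(ids) for cls, ids in seen.items()}

print(count_in_roi(
    [(1, "person", 50, 50), (2, "person", 80, 60), (3, "car", 300, 300)],
    (0, 0, 100, 100),
))  # {'person': 2} -- the car's centroid is outside the ROI
```

Counting by track ID (rather than per-frame detections) is what keeps an object from being counted twice while it stays inside the ROI.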

### Information flow

![Information flow](readme-img/Untitled.png)

### Count algorithm

![Count Algorithm](readme-img/Untitled%201.png)

## Results

- One class
- Source: YouTube
- Invert Frame: False
- Classes: car → [0]


![Test Cars](readme-img/test_cars.gif)


### Test environment

I’ve run the tests with:

- **PC:** HP-OMEN ce004-la
- **GPU:** NVIDIA 1060 MAX-Q DESIGN 6GB
- **CPU:** INTEL I7 7700HQ
- **RAM:** 16GB
- **DISK:** SSD M2

## Performance

- I’ve obtained almost **30 FPS (detection + tracking time)** in the majority of the tests.
- Performance suffers when the number of concurrent detections per frame rises above 10–20; the rate (detection + tracking) can drop to about 5 FPS.
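The FPS figures above can be reproduced with a simple timing loop like this sketch (`process_frame` is a stand-in for the real detection + tracking step, not this repo's API):

```python
import time

def measure_fps(process_frame, frames):
    """Return the average frames per second achieved by a processing callable."""
    t0 = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - t0
    return len(frames) / elapsed if elapsed > 0 else float("inf")

# Trivial stand-in for the detection + tracking step:
fps = measure_fps(lambda f: sum(f), [[1, 2, 3]] * 100)
print(fps > 0)
```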


## Problems

- It needs a lot of processing capacity.
- Occlusion is a problem (when objects are too close together).
- When an object leaves the frame, tracking is lost; if the object re-enters the frame, the program considers it a new object.


## Ideas for the future

- You can create an **API.**
- You can **re-train the detection model with your own class/classes** and run the program.
- You can add a **visual interface.**
- You can **collaborate on this repository** by integrating new features or fixing bugs!

---
## Try it yourself. Do a test!

### Warnings

- An **NVIDIA graphics card** is needed to run this project.
- **Docker** must already be installed on your computer.
- The **NVIDIA Docker image** is needed.

### Configuring your HOST PC

Follow these steps *in order.*

**Install Docker**

1. [Install Docker](https://docs.docker.com/engine/install/ubuntu/)

2. [Post-Install](https://docs.docker.com/engine/install/linux-postinstall/)

**Install Nvidia Drivers**

- Installation:

```bash
ubuntu-drivers devices # To know which driver is recommended
sudo ubuntu-drivers autoinstall # To automatically install the recommended driver
reboot # We need to reboot the system after the installation
```


- Checks:

```bash
nvidia-smi # Check that the driver was installed correctly: the output should list your GPUs and the processes running on them
sudo apt install nvtop # Tool to monitor GPU usage
nvtop # Run nvtop
```

![nvidia-smi](/readme-img/Untitled%202.png)

**Pull nvidia-gpu image**

Follow this **[installation guide.](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)**

**Pull this repository**

```bash
git clone https://github.com/aagustinconti/yolov7_counting
```

**Change permissions of ./start.sh**

```bash
sudo chmod +x ./start.sh
```

**Download YOLOv7x pretrained weights**

1. Download the pretrained weights of YOLOv7x [here.](https://github.com/WongKinYiu/yolov7/blob/main/README.md#performance)
2. Save them in **./pretrained_weights** directory.

**RUN ./start.sh**

To build the Docker image (if it is not already available on your PC) and run the container with the proper settings:

```bash
./start.sh
```

### Configuring the test

- In the **test_oop.py** file you can modify some attributes of the *YoloSortCount()* instance before executing the *run()* method.
- By default:
- Source: WebCamera (0)
- Invert frame: True
- Classes: person (ID: 0)
- Save: False
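The defaults listed above map onto the attribute names used in test_oop.py. Since *YoloSortCount* itself needs the repo's weights and a GPU, this sketch uses a plain namespace just to illustrate the mapping:

```python
from types import SimpleNamespace

# Illustrative stand-in for the default configuration (not the real class):
defaults = SimpleNamespace(
    video_path=0,      # source: WebCamera (device index 0)
    inv_h_frame=True,  # invert frame horizontally
    class_ids=[0],     # class: person
    save_vid=False,    # do not save the output video
)
print(defaults.class_ids)  # [0]
```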

**RUN the test**

In the Docker terminal, run:

```bash
python test_oop.py
```


## Thanks to

[YOLOv7](https://github.com/WongKinYiu/yolov7/)

[SORT Algorithm](https://github.com/dongdv95/yolov5/blob/master/Yolov5_DeepSort_Pytorch/track.py)
3 changes: 0 additions & 3 deletions pretrained_weights/README.md

This file was deleted.

Binary file added readme-img/Untitled 1.png
Binary file added readme-img/Untitled 2.png
Binary file added readme-img/Untitled.png
Binary file added readme-img/messi.gif
Binary file added readme-img/test_cars.gif
27 changes: 0 additions & 27 deletions start.sh
@@ -1,32 +1,5 @@
#!/bin/bash

######## Host PC ########

# Pre run
# 1. sudo chmod +x start.sh

# Do the complete installation of Docker.

# 1. Install: https://docs.docker.com/engine/install/ubuntu/
# 2. Post-install: https://docs.docker.com/engine/install/linux-postinstall/


# Install Nvidia Drivers.

# Installation:
# 1. ubuntu-drivers devices (To know which driver is recommended)
# 2. sudo ubuntu-drivers autoinstall (To automatically install the recommended driver)
# 3. reboot # We need to reboot the system after the installation
# Checks:
# 1. nvidia-smi (Check that the driver was installed correctly: the output should list your GPUs and the processes running on them)
# 2. sudo apt install nvtop (Program to check the GPU usage)
# 3. nvtop (To run the nvtop)

# Pull nvidia-gpu image

# 1. Instalation guide: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html


# Create the image of torch

echo "Checking if the Image already exists..."
132 changes: 117 additions & 15 deletions test_oop.py
@@ -1,39 +1,141 @@
from yolov7_sort_count_oop import YoloSortCount

#################### TEST ####################

# INSTANTIATE

# Test
test = YoloSortCount()


"""
###### AVAILABLE SOURCES ######
WebCamera: 0 ---> DEFAULT
Youtube Video or stream: "https://www.youtube.com/watch?v=qP1y7Tdab7Y"
Stream URL: "http://IP/hls/stream_src.m3u8"
RTSP stream:
Local video: "img_bank/cows_for_sale.mp4"
Local image: "img_bank/img.jpg" | "img_bank/img.png"
"""
test.video_path = "https://www.youtube.com/watch?v=emI8r2dfk6g"


"""
###### FRAME PROPERTIES ######
- Set the max size of the frame (width)
- Set the max fps of the video
- Invert the image horizontally (in case your webcam is mirrored, for example)
"""
test.max_width = 720
test.max_fps = 25 #Max 1000
test.inv_h_frame = False


# Show results

"""
###### SHOWING RESULTS ######
- Show the results in your display (Interactive ROI, imshow of the out frame)
- If you are not showing the results, set the timer to stop the execution.
- Hold the frame with the hold_img option if you are using an image as the source.
"""
test.show_img = True
test.ends_in_sec = 10
test.hold_img = False


"""
###### ROI ######
- Load the ROI manually.
- Load the ROI automatically (auto_load_roi).
- Load the ROI color.
"""
#test.roi = [0,0,0,0]
test.auto_load_roi = True
test.roi_color = (255, 255, 255)


"""
###### DETECTION MODEL ######
- Specify the path of the model.
- Select the ID of your Graphic Card (nvidia-smi)
- Select the classes to detect
- Set the image size (check that the YOLO model supports it --> e.g. yolov7.pt: 640; yolov7-w6.pt: 1280 or 640)
- Set the bounding box color.
- Set the minimum confidence to detect.
- Set the minimum IoU (overlap of a predicted versus actual bounding box) for an object.
"""
test.model_path = 'pretrained_weights/yolov7.pt'
test.graphic_card = 0
test.class_ids = [0,2]
test.img_sz = 640
test.color = (0, 255, 0)
test.conf_thres = 0.5
test.iou_thres = 0.65
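As a rough sketch of what the IoU threshold above compares, the standard intersection-over-union of two boxes looks like this (illustrative only, not this repo's own implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```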

"""
###### TRACKING MODEL ######
- Specify the path of the model.
- Set the max distance between two points to consider them the same tracked object.
- Set the max IoU distance between a predicted and an actual bounding box of an object.
- Set max_age to consider a tracked object lost after it leaves the seen area.
- Set the minimum number of frames before starting to track objects.
- Set how many previous frames of feature vectors should be retained per track for distance calculation (nn_budget).
- Set the color of the centroid and label of a tracked object.
"""
test.deep_sort_model = "osnet_x1_0"
test.ds_max_dist = 0.1
test.ds_max_iou_distance = 0.7
test.ds_max_age = 30
test.ds_n_init = 3
test.ds_nn_budget = 100
test.ds_color = (0, 0, 255)
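A simplified sketch of how ds_n_init and ds_max_age typically govern a DeepSORT-style track's lifecycle (illustrative only, not this repository's implementation):

```python
class Track:
    """Toy track lifecycle: tentative until n_init hits, deleted after max_age misses."""

    def __init__(self, n_init=3, max_age=30):
        self.hits, self.misses = 0, 0
        self.n_init, self.max_age = n_init, max_age

    @property
    def state(self):
        if self.misses > self.max_age:
            return "deleted"
        return "confirmed" if self.hits >= self.n_init else "tentative"

    def update(self, matched):
        if matched:
            self.hits += 1
            self.misses = 0   # a match resets the miss counter
        else:
            self.misses += 1
```

With the values above, a new object must be matched in 3 frames before it is counted as a confirmed track, and a confirmed track survives up to 30 unmatched frames before being dropped.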

"""
###### PLOT RESULTS ######
- Specify the min x (left to right) to plot the draws
- Specify the min y (top to bottom) to plot the draws
- Specify padding between rectangles and text
- Specify the text color.
- Specify the rectangles color.
"""
test.plot_xmin = 10
test.plot_ymin = 10
test.plot_padding = 2
test.plot_text_color = (255, 255, 255)
test.plot_bgr_color = (0, 0, 0)



"""
###### DEBUG TEXT ######
- Show the configs
- Show the detection output variables
- Show the tracking output variables
- Show counting output variables
"""
test.show_configs = False
test.show_detection = False
test.show_tracking = False
test.show_count = False

"""
###### SAVING RESULTS ######
- Select if you want to save the results.
- Select a location to save the results.
"""
test.save_vid = True
test.save_loc = "results/messi"

# Run
test.run()
2 changes: 1 addition & 1 deletion yolov7_sort_count_oop.py
@@ -1,4 +1,4 @@
# https://learnopencv.com/yolov7-object-detection-paper-explanation-and-inference/
# https://github.com/WongKinYiu/yolov7/

# Pytorch
import torch
