I have only updated the C++ version this time; the Python version will be updated when I have free time, maybe next week.
ROS version: ROS-Object-Detection-2Dto3D-RealsenseD435
Object detection with SLAM (SC-Lego-LOAM): Perception-of-Autonomous-mobile-robot
Ubuntu 18.04 or 16.04
OpenCV 4.x
C++11 standard at least; I used C++17
Eigen3: at the absolute path /usr/local/eigen3
CMake >= 3.17
PCL >= 1.7.1
Intel RealSense SDK >= 2.0
YOLOv3/v4 by Darknet
Dlib: I have pushed it to this GitHub repository at Object-Detection-and-location-RealsenseD435/C++/dlib.zip; unzip it at that path.
You can download the latest version too.
The provided weights and cfg are for a YOLOv4 model based on EfficientNet-B0; it is a smaller model than yolov4_tiny but gives better results, and it can run at over 10 FPS on a CPU.
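For orientation, here is a minimal sketch (not this repository's code) of loading a Darknet cfg/weights pair with OpenCV DNN and timing one forward pass on the CPU; the file names are placeholders for whatever sits under DNN/engine/.

#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>
#include <vector>

int main() {
    // Placeholder names: use the actual cfg/weights shipped in DNN/engine/.
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("../engine/model.cfg", "../engine/model.weights");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat frame(416, 416, CV_8UC3, cv::Scalar::all(0));   // dummy input, just for timing
    cv::Mat blob = cv::dnn::blobFromImage(frame, 1 / 255.0, cv::Size(416, 416), cv::Scalar(), true, false);

    cv::TickMeter tm;
    tm.start();
    net.setInput(blob);
    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());
    tm.stop();
    std::cout << "One forward pass took " << tm.getTimeMilli() << " ms" << std::endl;
    return 0;
}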
pyrealsense2
opencv-python
numpy
As with the C++ version, the RealSense D435 SDK must be installed.
An NVIDIA GPU is essential. I used YOLOv5 (PyTorch): tensorrtx, Yolov5
git clone https://github.com/Mazhichaoruya/Object-Detection-and-location-RealsenseD435.git
cd Object-Detection-and-location-RealsenseD435/DNN/engine/
wget https://pjreddie.com/media/files/yolov3.weights ;wget https://pjreddie.com/media/files/yolov3-tiny.weights
cd Object-Detection-and-location-RealsenseD435/; unzip dlib.zip
mv dlib DNN/
You can change the engine paths in src/main.cpp on lines 25-27:
String yolo_tiny_model ="../engine/yolov3.weights";
String yolo_tiny_cfg = "../engine/yolov3.cfg";
String classname_path="../engine/coco.names";
You can also use your own Darknet weights, or other model formats supported by OpenCV DNN.
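For reference, a hedged sketch (not this repository's code) of loading a custom model and its class list; cv::dnn::readNet picks the framework from the file extensions, and a .names file is just one class name per line.

#include <opencv2/dnn.hpp>
#include <fstream>
#include <string>
#include <vector>

int main() {
    // Placeholder paths: readNet auto-detects Darknet/ONNX/Caffe/... from the extensions.
    cv::dnn::Net net = cv::dnn::readNet("my_model.weights", "my_model.cfg");
    // Read one class name per line, same format as coco.names.
    std::vector<std::string> class_names;
    std::ifstream names_file("../engine/coco.names");
    for (std::string line; std::getline(names_file, line);)
        class_names.push_back(line);
    return 0;
}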
cd ..
mkdir build; cd build
cmake ..
make
./DNN_Yolo
Attention: the default parameters are on lines 251 and 252 of src/main.cpp:
net.setPreferableBackend(DNN_BACKEND_OPENCV);// DNN_BACKEND_INFERENCE_ENGINE DNN_BACKEND_CUDA
net.setPreferableTarget(DNN_TARGET_CPU);//DNN_TARGET_CUDA
If you have an Intel Core CPU, you can choose DNN_BACKEND_INFERENCE_ENGINE to accelerate your model with OpenVINO, but make sure your CPU is Intel and the OpenCV contrib modules are installed.
If you have an NVIDIA GPU, you can consider CUDA acceleration. Before that, you should rebuild OpenCV (version 4.2 or newer) following this guide: OpenCV_DNN.
Enable the CUDA options when running CMake.
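A hedged sketch of picking the backend/target pair at run time instead of editing the two lines above; the CUDA branch assumes OpenCV was rebuilt with the CUDA DNN module as described, and this is illustrative rather than the repository's code.

#include <opencv2/core/cuda.hpp>
#include <opencv2/dnn.hpp>

void select_backend(cv::dnn::Net& net) {
    if (cv::cuda::getCudaEnabledDeviceCount() > 0) {
        // NVIDIA GPU found and OpenCV built with CUDA: use the CUDA backend.
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    } else {
        // CPU fallback; on an Intel CPU with OpenVINO installed,
        // DNN_BACKEND_INFERENCE_ENGINE could be used here instead.
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
    }
}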
Once the prerequisites are installed, just run main.py and the model will start working.
cd Python
python3 main.py
First, go to tensorrtx to get the TensorRT engine (before this, you must have installed Ubuntu, CUDA, and TensorRT).
Then move the .engine files to Yolov5-TensorRT-AGX_Xavier/engine.
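For reference, a minimal sketch of how a serialized .engine file is typically deserialized with the TensorRT C++ API (TensorRT 8 style); the file name is a placeholder and this is an assumption about the loading step, not this repository's actual loader.

#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the serialized engine produced by tensorrtx into memory (placeholder name).
    std::ifstream file("../engine/yolov5x.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();
    // ... allocate GPU buffers and enqueue inference with 'context' here ...
    return 0;
}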
#define VIDEO_TYPE (0) // 0: laptop camera, 1: images, 2: videos, 3: RealSense D435
#define NET x // s m l x
You can change these #define values as you like in src/main.cpp.
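To illustrate what such a compile-time switch usually amounts to, here is a hypothetical sketch (not the repository's main.cpp) covering only the camera and video-file cases.

#include <opencv2/opencv.hpp>

#define VIDEO_TYPE (0) // 0: laptop camera, 2: video file (1: images and 3: RealSense omitted here)

int main() {
#if VIDEO_TYPE == 0
    cv::VideoCapture cap(0);             // built-in camera 0
#elif VIDEO_TYPE == 2
    cv::VideoCapture cap("input.mp4");   // placeholder video path
#endif
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... run detection on 'frame' here ...
        cv::imshow("frame", frame);
        if (cv::waitKey(1) == 27) break; // ESC quits
    }
    return 0;
}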
cd TensorRT;
mkdir build; cd build
cmake ..
make -j12
./Yolov5_trt
DNN version, 9-21:
RGBD and center position (see the sketch below):
Point cloud of objects:
YOLOv5 by TensorRT:
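The RGBD and center-position demo above maps the center pixel of each 2D detection box to a 3D point using the depth stream; here is a minimal sketch of that idea with librealsense2 (the stream settings and the box center are placeholders, not the repository's actual code).

#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    pipe.start(cfg);

    rs2::align align_to_color(RS2_STREAM_COLOR);               // align depth to the color image
    rs2::frameset frames = align_to_color.process(pipe.wait_for_frames());
    rs2::depth_frame depth = frames.get_depth_frame();

    // Suppose the detector returned a box centered at (cx, cy) in the color image.
    float cx = 320.0f, cy = 240.0f;                             // placeholder detection center
    float dist = depth.get_distance(static_cast<int>(cx), static_cast<int>(cy));

    // Deproject pixel + depth into a 3D point in the camera frame (meters).
    rs2_intrinsics intrin = depth.get_profile().as<rs2::video_stream_profile>().get_intrinsics();
    float pixel[2] = {cx, cy}, point[3];
    rs2_deproject_pixel_to_point(point, &intrin, pixel, dist);
    std::cout << "XYZ: " << point[0] << " " << point[1] << " " << point[2] << std::endl;
    return 0;
}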
The ROS and SLAM version has been uploaded, but it is just a beginning; I will do more work on it for my graduation project. If you are interested in this, welcome to follow me!
Please refer to the English instructions above to build. The CPU model with OpenCV 4.x is used by default; with the YOLOv4 model based on the lightweight EfficientNet-B0 network, it can exceed 10 FPS on a CPU (15 FPS measured on an AMD R5 4600H).
Only NVIDIA discrete GPUs are supported; if you only have a CPU, you can use the DNN version, but it is hard to reach real time.
tensorrtx
Yolov5
Please refer to the instructions of the above projects to set up the CUDA and TensorRT environment.
First, follow tensorrtx to obtain a model that TensorRT can use (this assumes the required libraries such as Ubuntu, CUDA, TensorRT, and OpenCV are already installed).
Move the .engine files into the Yolov5-TensorRT-AGX_Xavier/engine directory.
#define VIDEO_TYPE (0) // 0: laptop camera, 1: images, 2: videos, 3: RealSense D435
#define NET x // s m l x
You can modify the parameters in main.cpp yourself: NET selects the model variant (s, m, l, x), and VIDEO_TYPE selects the input type (0: the laptop's built-in camera 0, 1: images, 2: a video file, 3: the depth camera, RealSense D435).
cd TensorRT;
mkdir build; cd build
cmake ..
make -j12
./Yolov5_trt