Encoding too slow - option for hardware acceleration? #1422
Comments
Same here: after adding YOLOv8 detection, the framerate in my app was only 4-10 FPS...
@ElinLiu0: Are you sure that this low framerate is caused by the streamlit-webrtc part? In my case, I can see that the model is not the problem (the frame callbacks do not lag behind). The buffer builds up after the frame has been returned from the callback. It seems to be caused by the underlying aiortc library. Currently, aiortc appears to support hardware-accelerated encoding only on the Raspberry Pi via the h264_omx encoder (which is legacy by now). I have seen some approaches for NVIDIA GPUs, but nothing that has been published.
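Which encoders are even usable ultimately depends on the FFmpeg build that ships with PyAV (the library aiortc encodes through). A quick standalone check, not an aiortc API, is to probe for the common H.264 encoder names; the candidate list below is just an example and what is actually available depends on your FFmpeg build:

```python
# Probe which H.264 encoders the FFmpeg build used by PyAV (and hence aiortc) exposes.
# Standalone PyAV check; the encoder names are common FFmpeg ones, not aiortc settings.
import av

candidates = ["libx264", "h264_omx", "h264_nvenc", "h264_vaapi", "h264_videotoolbox"]

for name in candidates:
    try:
        codec = av.Codec(name, "w")        # "w" selects the encoder side
        print(f"{name}: available ({codec.long_name})")
    except Exception:                      # raised when FFmpeg was built without it
        print(f"{name}: not available")
```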
Exactly, it is not a model problem: I deployed the model on NVIDIA Triton Inference Server and its throughput is very high with the TensorRT backend. Even without any frame operations, the framerate from my Logic 720i web camera is still only about 10-15 FPS (lower than the 30 FPS the camera is rated for).
I have an application reading from an RTSP source where the displayed result lags further and further behind the source. I can see that the callback is called near real time, so up to that point there is no problem. The problem seems to happen after the frames have been returned from the callback.
So my assumption is that the H.264 encoding for WebRTC happens on the CPU only, without any hardware acceleration.
Is there an option to make use of hardware acceleration in that part? The system is an Ubuntu Linux system with an NVIDIA GPU.
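To check whether the GPU path would even help here, a rough standalone comparison of CPU (libx264) vs. NVENC (h264_nvenc) encoding through PyAV can look like the sketch below. This bypasses aiortc/streamlit-webrtc entirely and only tests whether the local FFmpeg build and GPU can encode faster than the CPU path; h264_nvenc assumes an FFmpeg build with NVENC support and an NVIDIA driver.

```python
# Rough standalone comparison of software vs. NVENC H.264 encoding via PyAV.
# Not an aiortc/streamlit-webrtc option; only measures raw encoder throughput.
import time
from fractions import Fraction

import av
import numpy as np


def encode_fps(codec_name, width=1280, height=720, n_frames=300):
    """Encode n_frames of random video with the given encoder and return frames/second."""
    ctx = av.CodecContext.create(codec_name, "w")
    ctx.width = width
    ctx.height = height
    ctx.pix_fmt = "yuv420p"
    ctx.bit_rate = 2_000_000
    ctx.time_base = Fraction(1, 30)

    # One noise frame, converted to the encoder's pixel format and reused.
    rgb = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
    frame = av.VideoFrame.from_ndarray(rgb, format="rgb24").reformat(format="yuv420p")
    frame.time_base = ctx.time_base

    start = time.perf_counter()
    for i in range(n_frames):
        frame.pts = i
        ctx.encode(frame)
    ctx.encode(None)  # flush the encoder
    return n_frames / (time.perf_counter() - start)


for name in ("libx264", "h264_nvenc"):  # h264_nvenc needs an NVENC-capable GPU and driver
    try:
        print(f"{name}: {encode_fps(name):.1f} fps")
    except Exception as exc:
        print(f"{name}: failed ({exc})")
```

Even if h264_nvenc turns out to be much faster, the remaining problem is that aiortc selects its encoder internally, so using it would presumably mean patching aiortc's H.264 encoder rather than flipping a configuration switch.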