speed #14
Thanks for your work. Could you please tell me briefly what the approximate running speed of your network is, and on what kind of hardware?
Sorry for the late reply. The network part is fast, as quoted from our optical expansion paper. Together with the segmentation head, it took roughly 300 ms per pair at 1K resolution, as I remember. The rigid motion fitting part is quite slow due to RANSAC and some inefficient CPU operations; the total runtime is around 2 s.
Thanks a lot for your reply! I would like to know whether I can use the depth map, optical flow map, and expansion map obtained independently by other methods (e.g., using TensorRT engines to accelerate each computation) to compute the cost map in this network. In other words, can I use just the motion segmentation sub-network and convert it into a TensorRT model? If so, could you please tell me roughly how to strip out this sub-network (I have little programming experience with deep learning)?
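In case it helps, here is a minimal sketch of one possible route, assuming the segmentation head can be wrapped as a standalone `torch.nn.Module` (all module and tensor names below are hypothetical, not rigidmask's actual API): export the wrapped head to ONNX, then build a TensorRT engine offline with `trtexec`.

```python
import torch

class SegHeadOnly(torch.nn.Module):
    """Hypothetical wrapper: runs only the segmentation head, so the cost-map
    inputs (depth, flow, expansion) can be produced externally."""
    def __init__(self, seg_head: torch.nn.Module):
        super().__init__()
        self.seg_head = seg_head

    def forward(self, cost_map: torch.Tensor) -> torch.Tensor:
        return self.seg_head(cost_map)

# seg_head = ...  # the segmentation-head module extracted from the full model
# dummy = torch.randn(1, C, H, W)  # cost-map tensor; C/H/W depend on the model
# torch.onnx.export(SegHeadOnly(seg_head), dummy, "seghead.onnx",
#                   input_names=["cost_map"], output_names=["mask"],
#                   opset_version=13)
# Then build the engine on the command line:
#   trtexec --onnx=seghead.onnx --saveEngine=seghead.engine --fp16
```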
You are right,
Thanks for your reply! I have read the code of rigidmask and of DytanVO. I found that DytanVO does use rigidmask as a submodule, but it does not swap out the flow part (VCN) inside rigidmask. Instead, they additionally use a PWC-based module to estimate a flow map once, mask it with the fg/bg segmentation result from the motion-seg-net, and feed it to the pose net. They added this PWC module because, in the first iteration, the rigidmask submodule cannot work since the camera pose is still unknown. I don't know why they didn't directly use this PWC-based flow to replace the VCN-based flow inside rigidmask (in the paper they say they swap it, but in the code they actually do not).
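For reference, the masking step described above might look roughly like the following sketch (hypothetical tensor names and shapes, not DytanVO's actual code):

```python
import torch

# Hypothetical shapes: flow is (B, 2, H, W); seg_mask is (B, 1, H, W) with
# values near 1 marking dynamic (foreground) pixels from the motion-seg-net.
def mask_dynamic_flow(flow: torch.Tensor, seg_mask: torch.Tensor) -> torch.Tensor:
    """Zero out flow on dynamic regions so the pose net sees only static scene motion."""
    static = (seg_mask < 0.5).float()  # keep background pixels only
    return flow * static
```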
Or can I just directly remove the 3D P+P cost (which requires flow expansion) from the cost map? Since I already have relatively reliable depth for both left images in the stereo-camera setting, maybe I could even keep only the depth-contrast cost in the cost map (since it doesn't cause any motion-segmentation ambiguity) and remove all the other costs? The other costs seem to be specially designed for the monocular case, where depth estimation is not reliable enough.
My general advice is to make minimal modifications to the working pipeline (the stereo example in the codebase) and gradually swap things out. Also, since the segmentation head is trained with all costs, setting a cost to other values might be "out-of-distribution" for the seg head. From this perspective, you may want to keep the expansion module as-is and only swap the flow and depth inputs, in which case expansion can be computed from the new flow. If you do discard the expansion network, setting expansion's log uncertainty to 0 might work, which assumes the expansion errors are unit-variance Gaussians. Still, re-training the segmentation head would be the safest approach.
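To illustrate that last suggestion, the fallback might look like this sketch (hypothetical names; the real cost-map assembly lives in the rigidmask code):

```python
from typing import Optional
import torch

# Sketch: when the expansion network is dropped, substitute zeros for its
# log-uncertainty channel, i.e. log(sigma^2) = 0, so sigma = 1 and expansion
# residuals are treated as unit-variance Gaussians.
def expansion_log_uncertainty(expansion_cost: torch.Tensor,
                              predicted_log_unc: Optional[torch.Tensor] = None
                              ) -> torch.Tensor:
    if predicted_log_unc is not None:
        return predicted_log_unc              # use the expansion net's own estimate
    return torch.zeros_like(expansion_cost)   # unit-variance fallback
```

Note that this only keeps the cost-map tensor shapes consistent; as said above, the seg head has never seen such inputs during training, so re-training remains the safest option.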