
speed #14 (Open)

Cai-RS opened this issue Nov 25, 2023 · 9 comments
Cai-RS commented Nov 25, 2023

Thanks for your work. Could you briefly tell me the approximate running speed of your network, and on what kind of hardware?

gengshan-y (Owner)

Sorry for the late reply. The network part is fast, as quoted from our optical expansion paper:

The expansion and motion-in-depth networks take 15ms for KITTI-sized images on a TITAN Xp GPU, giving a total run time of 200ms together with flow.

Together with the segmentation head, it took roughly 300 ms per pair at 1K resolution, as I remember.

The rigid motion fitting part is quite slow due to RANSAC and some inefficient CPU operations. The total runtime is around 2 s.

Cai-RS commented Dec 7, 2023

Thanks a lot for your reply! I would like to know whether I can use the depth map, optical flow map, and expansion map obtained independently by other methods (for example, each accelerated with a TensorRT engine) to compute the cost map in this network. In other words, can I use just the motion segmentation sub-network and convert it into a TensorRT model? If so, could you please tell me roughly how to strip out this sub-network (I have little programming experience with deep learning)?

gengshan-y (Owner)

It's a bit clunky but this part should be the one to look at.

DytanVO might be relevant. I remember they used rigidmask as a submodule and swapped out the flow with a faster PWC-Net.
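
If it helps, stripping a sub-network for TensorRT usually goes through ONNX. Below is a minimal sketch assuming a hypothetical wrapper; the class and attribute names are illustrative, not the actual rigidmask module names, so check the model definition in the repo:

```python
import torch
import torch.nn as nn

class SegHeadWrapper(nn.Module):
    """Hypothetical wrapper exposing only the segmentation part of the model,
    taking precomputed cost-map inputs instead of running flow/depth/expansion."""
    def __init__(self, full_model):
        super().__init__()
        # Placeholder attribute; locate the real segmentation head in the repo.
        self.seg_head = full_model.seg_head

    def forward(self, cost_inputs):
        return self.seg_head(cost_inputs)

# Export to ONNX, then build a TensorRT engine from the file,
# e.g. `trtexec --onnx=seg_head.onnx --saveEngine=seg_head.trt`.
# wrapper = SegHeadWrapper(model).eval()
# dummy = torch.randn(1, C, H, W)  # match the real cost-map input shape
# torch.onnx.export(wrapper, dummy, "seg_head.onnx", opset_version=16)
```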

Cai-RS commented Dec 7, 2023

Ok, thanks sincerely!

Cai-RS commented Dec 17, 2023

Hi. In the function get_intrinsics() (line 589 of submodule.py), why do dfx and dfy also equal 1.0 at test time? These two values may not always be 1 during testing, right? After all, they also depend on the main function parameter 'testres'.

gengshan-y (Owner)

You are right, dfx=dfy=1 assumes --testres 1, and a more generic version should be

dfx=dfy=1/testres
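
For concreteness, a minimal sketch of that generic version (function and variable names are illustrative, not the actual submodule.py code): resizing the image by a factor testres scales the focal lengths and principal point by the same factor.

```python
def scale_intrinsics(fx, fy, cx, cy, testres):
    """Map original-resolution intrinsics to an image resized by `testres`."""
    # With --testres 1 this reduces to dfx = dfy = 1, matching the current code.
    dfx = dfy = 1.0 / testres
    return fx / dfx, fy / dfy, cx / dfx, cy / dfy
```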

Cai-RS commented Dec 19, 2023

Thanks for your reply! I have read the code of rigidmask and of DytanVO. I found that DytanVO does use rigidmask as a submodule, but it does not actually swap out the flow part (VCN) inside rigidmask. Instead, they add a PWC-based module that estimates a flow map once; this flow (first masked by the fg/bg segmentation result from the motion segmentation network) is used as an input to the pose network. They add this PWC module because the rigidmask submodule cannot work on the first iteration, when the camera pose is still unknown. I don't know why they didn't directly use this PWC-based flow to replace the VCN-based flow in rigidmask (in the paper they say they swap it, but the code actually does not).

I want to use only the cost-map-building module and the segmentation heads of your code. I will use other, faster networks to produce the flow (between the two left images) and the stereo disparity (between the left and right images), and combining these two results gives the flow expansion for the left images. It seems easy to estimate the out-of-range confidence score of the flow by feeding the final flow result directly into the "oor_module" (according to your code; is that right?). The problem is how to estimate the uncertainty of the flow expansion, since I am not using the expansion network. Can you give me some advice?

Sincerely waiting for your reply.

Cai-RS commented Dec 19, 2023

Or can I just remove the 3D P+P cost (which requires the flow expansion) from the cost map? Since I already have relatively reliable depth for both left images in the stereo camera setting, maybe I could even keep only the depth-contrast cost in the cost map (since it doesn't cause any ambiguity in motion segmentation) and remove all of the other costs? The other costs seem to be designed mainly for the monocular setting, where depth estimation is not reliable enough.

gengshan-y (Owner) commented Dec 28, 2023

My general advice is to make minimal modifications to the working pipeline (the stereo example in the codebase) and swap things gradually. Also, since the segmentation head is trained with all costs, setting a cost to other values might be "out-of-distribution" for the seg head.

From this perspective, you may want to keep the expansion module as is and only swap the flow and depth inputs; the expansion can then be computed from the new flow.
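
For illustration, here is one rough way to approximate expansion from a dense flow field with finite differences. This is only a sketch, not the repo's expansion module (which fits local affine transforms over patches and is more robust to noise):

```python
import torch

def expansion_from_flow(flow):
    """flow: (2, H, W) tensor; returns an (H, W) local scale-change estimate."""
    u, v = flow[0], flow[1]
    du_dy, du_dx = torch.gradient(u)  # gradients along H, then W
    dv_dy, dv_dx = torch.gradient(v)
    # The local map is x -> x + flow(x), whose affine part is I + J
    # (J = flow Jacobian); the square root of its determinant is the
    # per-pixel scale change, i.e. the optical expansion.
    det = (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
    return torch.sqrt(det.clamp(min=1e-6))
```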

If you do discard the expansion network, setting the expansion's log uncertainty to 0 might work; this assumes the expansion errors are unit-variance Gaussians.
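
In code, that fallback is just a zero map in place of the predicted log-variance (tensor shapes are illustrative and should match what the pipeline expects):

```python
import torch

def unit_variance_log_uncertainty(expansion_map):
    """Zero log-uncertainty: log(sigma^2) = 0, i.e. sigma = 1 everywhere."""
    return torch.zeros_like(expansion_map)
```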

Still, re-training the segmentation head would be the safest way.
