Thank you for your amazing project and making your code public!
```python
if self.depth2occ_intra:
    if self.depth_gt:
        if not self.training:
            kwargs['gt_depth'] = kwargs['gt_depth'][0]
        gt_depth = self.get_downsampled_gt_depth(kwargs['gt_depth'])
        b, n, d, h, w = depth.shape
        gt_depth = gt_depth.reshape(b, n, h, w, d).permute(0, 1, 4, 2, 3)
        fg_mask = torch.max(gt_depth, dim=2, keepdim=True).values > 0.0
        fg_mask = fg_mask.repeat(1, 1, d, 1, 1)
        depth_ = depth.clone()
        depth_[fg_mask] = gt_depth[fg_mask]
    else:
        depth_ = depth
```
The paper states that ground-truth depth is used only to guide model training in the early stage. However, in the code above, the depth entering the `occ_weight` computation, which one would expect to be the model's own predicted depth, is overwritten with ground-truth depth wherever GT depth is available. Wouldn't this introduce unfairness or lead to an unfair comparison?
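To illustrate the concern, here is a minimal sketch (with small, made-up tensor shapes; `b, n, d, h, w` follow the snippet above) showing what the masking does: at every pixel where the downsampled GT depth has any positive bin, the entire predicted depth column is replaced by the GT one, so `occ_weight` at those pixels is driven by ground truth rather than by the prediction.

```python
import torch

# Hypothetical small shapes: batch=1, camera=1, depth bins=4, 2x2 pixels
b, n, d, h, w = 1, 1, 4, 2, 2
depth = torch.rand(b, n, d, h, w)      # predicted depth distribution
gt_depth = torch.zeros(b, n, d, h, w)  # GT depth, one-hot over depth bins
gt_depth[0, 0, 2, 0, 0] = 1.0          # GT available only at pixel (0, 0)

# Foreground mask: True wherever the GT column has any positive bin
fg_mask = torch.max(gt_depth, dim=2, keepdim=True).values > 0.0
fg_mask = fg_mask.repeat(1, 1, d, 1, 1)

# Overwrite the prediction with GT at masked pixels
depth_ = depth.clone()
depth_[fg_mask] = gt_depth[fg_mask]

# Pixel (0, 0): whole depth column now equals the GT one-hot vector
print(torch.equal(depth_[0, 0, :, 0, 0], gt_depth[0, 0, :, 0, 0]))  # True
# Pixel (1, 1): no GT there, so the prediction is untouched
print(torch.equal(depth_[0, 0, :, 1, 1], depth[0, 0, :, 1, 1]))     # True
```

So at every pixel with valid GT depth, the downstream computation never sees the predicted distribution at all.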