ConvCaps p_in a_in view #3
Sorry about the late reply; I am busy preparing a CVPR submission. As stated in Matrix-Capsules-EM-PyTorch/model/capsules.py, line 192 in 9cb3fc0:
The shape of
@tomahawk810 I get your point now, and I have fixed the problem. Could you please help review PR #4? Thanks for pointing this out!
@tomahawk810 I merged the PR since there were no comments for two weeks.
I think there is an issue in the way the input tensor `x` is reshaped in order to extract `a_in` and `p_in`. It seems to me that the dimensions of `a_in` and `p_in` require a permutation before applying `Tensor.view()`.
Note that I changed the training batch size to 16, and I am using `A, B, C, D = 32, 32, 32, 32`.
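As a side note, here is a minimal, self-contained sketch of why the permutation matters (toy shapes, not the repository's code): `Tensor.view()` only regroups elements in their existing memory order, while `permute()` actually reorders the axes, so viewing without the permutation groups unrelated values together.

```python
import torch

# Toy stand-in for a patch of capsule outputs, laid out channels-first:
# (batch, B*psize, K, K). Shapes are made up for illustration.
B, psize, K = 2, 4, 3
x = torch.arange(B * psize * K * K).float().view(1, B * psize, K, K)

# Goal: (batch, K*K*B, psize), each row holding the psize pose entries
# of one capsule at one kernel position.

# view() alone keeps the channels-first memory order, so each row mixes
# values from different kernel positions.
wrong = x.view(1, K * K * B, psize)

# Moving the kernel dims in front of the channel dim first (and making the
# result contiguous) produces the intended grouping.
right = x.permute(0, 2, 3, 1).contiguous().view(1, K * K * B, psize)

print(torch.equal(wrong, right))  # False: same numbers, different grouping
```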
Transformation before view:
After this line (Matrix-Capsules-EM-PyTorch/model/capsules.py, line 253 in 9cb3fc0), I get this:
View:
The view is done in the following way (Matrix-Capsules-EM-PyTorch/model/capsules.py, line 255 in 9cb3fc0).
To do the view in this way, `p_in.shape` should be `torch.Size([16, 6, 6, 3, 3, 512])`.
Do you agree? I am new to PyTorch, so I might misunderstand the way `Tensor.view()` works.
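For reference, here is a quick sanity check of the shapes mentioned above (a sketch using the numbers from this issue; the layouts are assumed for illustration rather than taken from the repository's code):

```python
import torch

# Numbers from this issue (assumed): batch 16, 6x6 output, K=3, B=32, psize=16.
b, oh, ow, K, B, psize = 16, 6, 6, 3, 32, 16

# Layout suggested above: kernel dims before the B*psize channel dim.
p_in = torch.randn(b, oh, ow, K, K, B * psize)   # [16, 6, 6, 3, 3, 512]

# With this layout, a single view yields the intended grouping.
p_ok = p_in.view(b, oh, ow, K * K * B, psize)    # [16, 6, 6, 288, 16]

# If the channel dim came before the kernel dims instead, the same view
# would still run (element counts match) but would group unrelated values;
# a permute + contiguous is needed first to recover the intended layout.
p_alt = p_in.permute(0, 1, 2, 5, 3, 4).contiguous()   # [16, 6, 6, 512, 3, 3]
p_fixed = p_alt.permute(0, 1, 2, 4, 5, 3).contiguous().view(b, oh, ow, K * K * B, psize)

print(p_ok.shape)                   # torch.Size([16, 6, 6, 288, 16])
print(torch.equal(p_ok, p_fixed))   # True: permuting back restores the grouping
```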