This is related to the test failure of PR #235, which replaces the current sparse matrix casting to torch with the newly merged `from_sparse` method. The tutorial here fails due to attempting to call `indices()` on an uncoalesced sparse matrix in `topomodelx/nn/cell/can_layer.py`'s `MultiHeadLiftLayer`. Adding `coalesce()` in the `LiftLayer` forward method in `can_layer.py` solves that error, but training then fails due to a mismatch in tensor dimensions in the `MultiHeadLiftLayer` forward.
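For context, a minimal sketch of the uncoalesced failure mode (made-up data, nothing from the tutorial): PyTorch refuses to return `indices()` from a COO tensor until duplicate entries are merged with `coalesce()`.

```python
import torch

# COO tensor with a duplicate index (0, 0); construction does not coalesce
i = torch.tensor([[0, 0, 1], [0, 0, 1]])
v = torch.tensor([1.0, 2.0, 3.0])
t = torch.sparse_coo_tensor(i, v, (2, 2))

print(t.is_coalesced())  # False
# t.indices()            # raises a RuntimeError asking for .coalesce() first
t = t.coalesce()         # duplicates merged: (0, 0) now holds 3.0
print(t.indices())       # safe after coalescing
```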
This is suspicious. So after tracing the layers and tensors, I found the source of the issue to be the `adjacency_0` matrix. Switching back to the old sparse -> dense -> sparse casting solves the issue. Investigating further shows that the resulting sparse matrices are not identical:

```python
# After tracing the layers in CAN, the data source with the issue is this part
# from the tutorial (cc_list and from_sparse come from the tutorial setup)
adjacency_0_list = []
for cell_complex in cc_list:
    adjacency_0 = cell_complex.adjacency_matrix(rank=0)
    adjacency_0_new = from_sparse(adjacency_0)
    adjacency_0_old = torch.from_numpy(adjacency_0.todense()).to_sparse()
    # print(adjacency_0)
    adjacency_0_list.append((adjacency_0, adjacency_0_new, adjacency_0_old))

for i, elements in enumerate(adjacency_0_list):
    print(i)
    original = elements[0]
    new = elements[1]
    old = elements[2]
    torch.testing.assert_allclose(new, old)
Comparing with the original numpy sparse matrix, the new `from_sparse` method matches the original's indices, while the current casting does not.
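The comparison code isn't reproduced above, but a hypothetical check along these lines (reusing `original`, `new`, and `old` from the loop) exposes the mismatch:

```python
import numpy as np

# hypothetical comparison for one element of adjacency_0_list
coo = original.tocoo()
print(np.vstack((coo.row, coo.col)))     # scipy's stored indices
print(new.coalesce().indices().numpy())  # same entries as scipy
print(old.indices().numpy())             # fewer entries: stored zeros are gone
```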
The issue is that the original matrix contains explicit zero values, which are preserved by the new `from_sparse` method but are ignored and excluded by the current casting. We can validate that by checking that the dense representations of the matrices match in all 3:

```python
# compare torch sparse using new from_sparse()
np.allclose(original.todense(), new.to_dense().numpy())
# True

# compare torch sparse using old sparse -> dense -> sparse
np.allclose(original.todense(), old.to_dense().numpy())
# True
```
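To make the behavior concrete, here is a minimal, self-contained sketch (made-up data, not the tutorial's) of how sparse -> dense -> sparse drops stored zeros, while building the tensor directly from the COO triplets keeps them, mirroring the behavior reported above for `from_sparse`:

```python
import numpy as np
import torch
from scipy.sparse import coo_matrix

# 2x2 matrix with an explicitly stored zero at (0, 1)
sp = coo_matrix(
    (np.array([1.0, 0.0]), (np.array([0, 0]), np.array([0, 1]))), shape=(2, 2)
)
print(sp.nnz)  # 2: the explicit zero counts as a stored entry

# old casting: densify, then re-sparsify -- the stored zero is dropped
old = torch.from_numpy(np.asarray(sp.todense())).to_sparse()
print(old._nnz())  # 1

# building directly from the COO triplets keeps the stored zero
indices = torch.from_numpy(np.vstack((sp.row, sp.col)).astype(np.int64))
values = torch.from_numpy(sp.data)
new = torch.sparse_coo_tensor(indices, values, sp.shape).coalesce()
print(new._nnz())  # 2
```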
Given that I'm not yet familiar with the CAN architecture: do the explicit zeros represent something, and do they need to be preserved?

1. Assuming this is a plain adjacency matrix, the answer might be no, and the issue is in the data generation of the original matrix, which seems to be produced by `toponetx.classes.cell_complex`. I'm not familiar with the other repo, so I would love someone to share their thoughts (a quick diagnostic is sketched after this list).
2. If the zeros do need to be preserved, then there's a problem with the current implementation that the new casting method exposed.
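If it helps whoever picks this up, a hypothetical diagnostic to quantify the stored zeros in the source matrices (reusing `cc_list` from the tutorial setup):

```python
# hypothetical diagnostic: count explicit zeros stored in each scipy matrix
for cell_complex in cc_list:
    adj = cell_complex.adjacency_matrix(rank=0).tocoo()
    n_zeros = int((adj.data == 0).sum())
    print(f"{n_zeros} of {adj.nnz} stored entries are explicit zeros")
```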
Would love some feedback from those familiar with the CAN architecture / math.
@mhajij @ninamiolane @papamarkou @jarpri
Read more about PyTorch COO `coalesce` here.