Implementation of HOAN Mesh Classification model (model 1). #104
Codecov Report

```diff
@@            Coverage Diff             @@
##             main     #104      +/-   ##
==========================================
+ Coverage   93.00%   94.02%    +1.02%
==========================================
  Files           8       12        +4
  Lines         200      368      +168
==========================================
+ Hits          186      346      +160
- Misses         14       22        +8
==========================================
```
This is an official submission for the ICML 2023 Topological Deep Learning Challenge, hosted by the second annual Topology and Geometry in Machine Learning Workshop at ICML. In this submission, we implement the Higher-Order Attention Network (HOAN) model for mesh classification. See Figure 11 (first diagram from left to right; [HOAN, Hajij22a]) of the article "Architectures of Topological Deep Learning: A Survey on Topological Neural Networks" for a graphical representation in tensor diagram notation. The model was first presented in the initial version of the paper "Topological Deep Learning: Going Beyond Graph Data" [TDLGBGD], at that time under the title "Higher-Order Attention Networks" [HOAN] (Hajij et al.).
We have contributed four novel implementations, which can be found in the following files:
1. `topomodelx/base/hbns.py`
2. `topomodelx/base/hbs.py`
3. `nn/combinatorial/hmc_layer.py`
4. `tutorials/combinatorial/hmc_layer.py`
(1. & 2.) The first two files present our implementation of the higher-order attention message passing steps for single non-squared and squared combinatorial complex neighbourhood matrices, respectively; a rough sketch of such a block is given below.
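For intuition, a single such block can be pictured as GAT-style attention restricted to the sparsity pattern of one neighborhood matrix. The following is a minimal sketch under that reading; the class name, its parameters, and the single-direction message flow are illustrative simplifications, not the actual `hbs.py`/`hbns.py` API (the non-squared block, in particular, may exchange messages in both directions across an incidence).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionBlockSketch(nn.Module):
    """Toy attention-weighted message passing through a neighborhood
    matrix of shape (n_target, n_source). The matrix may be non-square
    (an incidence relation) or square (adjacency / coadjacency)."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Simplifying assumption: source and target cells share in_channels.
        self.w_s = nn.Parameter(torch.randn(in_channels, out_channels) * 0.1)
        self.w_t = nn.Parameter(torch.randn(in_channels, out_channels) * 0.1)
        self.att = nn.Parameter(torch.randn(2 * out_channels) * 0.1)
        self.out_channels = out_channels

    def forward(self, x_source, x_target, neighborhood):
        s = x_source @ self.w_s  # (n_source, out_channels)
        t = x_target @ self.w_t  # (n_target, out_channels)
        # GAT-style raw score for every (target, source) pair.
        scores = F.leaky_relu(
            (t @ self.att[: self.out_channels]).unsqueeze(1)
            + (s @ self.att[self.out_channels :]).unsqueeze(0)
        )
        # Attend only over incident pairs; normalize per target cell.
        scores = scores.masked_fill(neighborhood == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1).nan_to_num()
        return alpha @ s  # (n_target, out_channels)
```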
(3.) In the `nn/combinatorial/hmc_layer.py` file, we implement the basic building block of the model, which consists of two consecutive levels of higher-order attentional message passing steps transforming the signal features over the cells of the zeroth, first, and second skeletons of the combinatorial complex (CC) representing the mesh. The first level comprises two non-squared attention blocks, between vertices and edges and between edges and faces, along with a squared attention block sending information from vertices to vertices. The second level consists of two non-squared attention blocks, propagating the signals from vertices to edges and from edges to vertices, alongside three squared attention blocks transforming the signals of the vertices, edges, and faces. The latter propagate information along the corresponding adjacency relations for vertices and edges, and the coadjacency relation for faces. A sketch of how these two levels might be wired together is given below.
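Continuing the illustration, the two levels described above could be wired as follows. This is a sketch only, reusing the hypothetical `AttentionBlockSketch`; the incidence-matrix shapes and the residual-free combination of blocks are assumptions, not the real layer's behavior.

```python
class HMCLayerSketch(nn.Module):
    """Toy wiring of the two message-passing levels described above."""

    def __init__(self, channels):
        super().__init__()

        def mk():
            return AttentionBlockSketch(channels, channels)

        # Level 1: vertices->edges, edges->faces, plus vertices->vertices.
        self.l1_v2e, self.l1_e2f, self.l1_v2v = mk(), mk(), mk()
        # Level 2: vertices<->edges, plus one squared block per skeleton.
        self.l2_v2e, self.l2_e2v = mk(), mk()
        self.l2_v2v, self.l2_e2e, self.l2_f2f = mk(), mk(), mk()

    def forward(self, x_0, x_1, x_2, adj_0, adj_1, coadj_2, b_1, b_2):
        # Assumed shapes: b_1 is the (n_vertices, n_edges) incidence,
        # b_2 the (n_edges, n_faces) incidence.
        # Level 1.
        y_0 = self.l1_v2v(x_0, x_0, adj_0)
        y_1 = self.l1_v2e(x_0, x_1, b_1.T)
        y_2 = self.l1_e2f(x_1, x_2, b_2.T)
        # Level 2: incidence blocks plus adjacency/coadjacency blocks.
        z_0 = self.l2_e2v(y_1, y_0, b_1) + self.l2_v2v(y_0, y_0, adj_0)
        z_1 = self.l2_v2e(y_0, y_1, b_1.T) + self.l2_e2e(y_1, y_1, adj_1)
        z_2 = self.l2_f2f(y_2, y_2, coadj_2)
        return z_0, z_1, z_2
```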
(4.) Lastly, in the fourth file, we implement and train the HOAN model for mesh classification on the `shrec16` dataset, as specified on the webpage of the challenge. The model consists of a sequence of stacked layers (implemented in file 3) and a final sum pooling layer that maps the node, edge, and face features of the meshes into a shared N-dimensional Euclidean space, where N is the number of classes; this stacked architecture is sketched below. We implement the dataset class and training procedure specifically for this problem.
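As a hedged sketch only, reusing the hypothetical `HMCLayerSketch` from above (the per-skeleton linear readouts are an assumption; the actual model and pooling live in the tutorial notebook):

```python
class MeshClassifierSketch(nn.Module):
    """Toy stack of layers followed by sum pooling into class scores."""

    def __init__(self, channels, n_layers, n_classes):
        super().__init__()
        self.layers = nn.ModuleList(
            HMCLayerSketch(channels) for _ in range(n_layers)
        )
        # One readout per skeleton, all mapping into the same class space.
        self.lin_0 = nn.Linear(channels, n_classes)
        self.lin_1 = nn.Linear(channels, n_classes)
        self.lin_2 = nn.Linear(channels, n_classes)

    def forward(self, x_0, x_1, x_2, adj_0, adj_1, coadj_2, b_1, b_2):
        for layer in self.layers:
            x_0, x_1, x_2 = layer(
                x_0, x_1, x_2, adj_0, adj_1, coadj_2, b_1, b_2
            )
        # Sum pooling over the cells of every skeleton, then merge the
        # three N-dimensional summaries into one vector of class scores.
        return (
            self.lin_0(x_0).sum(dim=0)
            + self.lin_1(x_1).sum(dim=0)
            + self.lin_2(x_2).sum(dim=0)
        )
```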
Furthermore, we have carefully tested each of the implementations of the novel message passing steps and the HOAN basic building block in the following files:
1. `test/base/test_hbs.py`
2. `test/base/test_hbns.py`
3. `test/nn/combinatorial/test_hmc_layer.py`
(1. & 2.) In files 1 and 2, we test the attention mechanism and the forward pass of both higher-order attention blocks, respectively. We have not only checked the correctness of the output dimensions, but also verified the values themselves by comparing them against hand-computed examples.
(3.) In file 3, we test the layer’s forward pass following the same methodology.
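In the same illustrative spirit, a minimal shape test against the hypothetical block sketched earlier might look like the following; this is not the actual test code, which additionally compares values against hand-computed examples.

```python
import torch


def test_attention_block_shapes():
    torch.manual_seed(0)
    # Toy mesh fragment: 4 vertices, 3 edges, hand-written incidence.
    block = AttentionBlockSketch(in_channels=2, out_channels=5)
    x_vertices = torch.randn(4, 2)
    x_edges = torch.randn(3, 2)
    incidence_t = torch.tensor(
        [[1.0, 1.0, 0.0, 0.0],
         [0.0, 1.0, 1.0, 0.0],
         [0.0, 0.0, 1.0, 1.0]]
    )  # (n_edges, n_vertices)
    out = block(x_vertices, x_edges, incidence_t)
    # Output lives on the target cells (edges) with out_channels features.
    assert out.shape == (3, 5)
```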
We have made a conscious effort to comment the code thoroughly, describing all the novel classes, attention mechanisms, forward passes, and auxiliary methods according to the formal definitions and notations introduced in [HOAN] and in the article "Architectures of Topological Deep Learning: A Survey on Topological Neural Networks" (Papillon et al.). We have also updated the `docs` folder to account for our implementations in the `base` and `nn` folders when generating the new documentation.

Additional comments:
- We have not inherited from the `MessagePassing` class for the `HBS` and `HBNS` classes, making them inherit directly from `torch.nn.Module` instead. We believe that the implemented higher-order attention mechanisms for combinatorial complexes should receive messages in their method signatures, as they operate over those transformed signals. To avoid overriding method signatures or making changes to the code that could compromise its understanding and readability, we have avoided `MessagePassing` inheritance for now. This decision is open for further discussion and modification if deemed necessary.
- We have added a utility module (`utils/srn.py`) for computing row normalization of sparse matrices.
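For context, row normalization of a sparse matrix can be sketched roughly as follows. This is an assumed stand-in, not the actual contents of `utils/srn.py`; it normalizes each row of a COO tensor by the sum of the absolute values in that row.

```python
import torch


def sparse_row_normalize_sketch(matrix):
    """Divide every entry of a sparse COO matrix by the absolute sum of
    its row; all-zero rows are left untouched to avoid division by zero."""
    matrix = matrix.coalesce()
    row = matrix.indices()[0]
    values = matrix.values()
    row_sums = torch.zeros(matrix.shape[0]).index_add_(0, row, values.abs())
    denom = torch.where(row_sums == 0, torch.ones_like(row_sums), row_sums)
    return torch.sparse_coo_tensor(
        matrix.indices(), values / denom[row], matrix.shape
    )


# Tiny demo: rows end up summing to one in absolute value.
m = torch.sparse_coo_tensor([[0, 0, 1], [0, 1, 1]], [1.0, 3.0, 2.0], (2, 2))
print(sparse_row_normalize_sketch(m).to_dense())
# tensor([[0.2500, 0.7500],
#         [0.0000, 1.0000]])
```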