Commit d96d634

Implemented AtomAttentionEncoder and AtomAttentionDecoder, as well as token-to-atom mapping functions
ardagoreci committed May 29, 2024
1 parent 2474b04 commit d96d634
Showing 4 changed files with 584 additions and 67 deletions.
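
The commit message references token-to-atom mapping functions, but they are not part of the hunk reproduced below. As a rough illustration only, a token-to-atom broadcast in this style might gather each token's representation onto the atoms that token covers; the function name, argument names, and tensor shapes here are assumptions, not the commit's actual code.

import torch

def broadcast_token_to_atom(token_repr: torch.Tensor,
                            tok_idx: torch.Tensor) -> torch.Tensor:
    """Gather per-token features onto atoms (hypothetical sketch).

    Args:
        token_repr: (batch, n_tokens, c_token) per-token representation.
        tok_idx: (batch, n_atoms) index of the token each atom belongs to.
    Returns:
        (batch, n_atoms, c_token) per-atom copy of its token's features.
    """
    # Expand the per-atom token indices across the channel dimension so
    # torch.gather can select whole feature vectors along the token axis.
    idx = tok_idx.unsqueeze(-1).expand(-1, -1, token_repr.shape[-1])
    return torch.gather(token_repr, dim=1, index=idx)

# Usage: two tokens, the first covering three atoms and the second two.
tokens = torch.randn(1, 2, 4)
tok_idx = torch.tensor([[0, 0, 0, 1, 1]])
atoms = broadcast_token_to_atom(tokens, tok_idx)  # shape (1, 5, 4)

Only one docstring hunk from src/diffusion/attention.py is shown below.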
src/diffusion/attention.py: 4 changes (2 additions, 2 deletions)
@@ -28,8 +28,8 @@ def __init__(
         num_blocks:
             Number of blocks.
         num_heads:
-            Number of parallel attention heads. Note that embed_dim will be split across num_heads
-            (i.e. each head will have dimension embed_dim // num_heads).
+            Number of parallel attention heads. Note that c_atom will be split across num_heads
+            (i.e. each head will have dimension c_atom // num_heads).
         dropout:
             Dropout probability on attn_output_weights. Default: 0.0 (no dropout).
         """
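
The head split described in the updated docstring is the standard multi-head reshape: the c_atom channels are divided into num_heads groups of c_atom // num_heads channels each. A minimal sketch, with c_atom = 128, num_heads = 8, and all tensor shapes chosen here purely for illustration:

import torch

c_atom, num_heads = 128, 8
head_dim = c_atom // num_heads  # per-head channel count, as the docstring states

x = torch.randn(2, 100, c_atom)  # hypothetical (batch, n_atoms, c_atom) activations
# Split the channel dimension into num_heads heads of head_dim channels each,
# then move the head axis ahead of the atom axis for per-head attention.
x_heads = x.view(2, 100, num_heads, head_dim).transpose(1, 2)
assert x_heads.shape == (2, num_heads, 100, head_dim)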