
Commit fa8c87d

Error
Signed-off-by: Dashiell Stander <dstander@protonmail.com>
1 parent af7276f commit fa8c87d

File tree: 1 file changed, +1 −1 lines changed

megatron/model/transformer.py (+1 −1)

@@ -474,7 +474,7 @@ def flash_attention(self, query_layer, key_layer, value_layer):
         # Combined k/v into [b * sk, 2, np, hn].
         kv = torch.concat([key_layer, value_layer], dim=1)

-        output = self.flash_attn_unpadded_kvpacked_func(
+        output = self.flash_attention_function(
             query_layer,
             kv,
             cu_seqlens_q,
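For context, a minimal sketch of how the kv-packed, unpadded FlashAttention entry point named in the old line is typically called (assuming flash-attn v1.x, where flash_attn_unpadded_kvpacked_func lives in flash_attn.flash_attn_interface; the shapes and values below are illustrative and not taken from this commit):

    # Sketch of the kv-packed, varlen ("unpadded") FlashAttention call pattern.
    # Assumes flash-attn v1.x; b, sq, sk, np_, hn and the random inputs are
    # illustrative placeholders, not values from the commit.
    import torch
    from flash_attn.flash_attn_interface import flash_attn_unpadded_kvpacked_func

    b, sq, sk, np_, hn = 2, 16, 16, 4, 64  # batch, q-len, k-len, heads, head dim

    # Queries flattened to [b * sq, np, hn]; key/value packed to [b * sk, 2, np, hn],
    # matching the "Combined k/v" comment in the diff above.
    q = torch.randn(b * sq, np_, hn, device="cuda", dtype=torch.float16)
    kv = torch.randn(b * sk, 2, np_, hn, device="cuda", dtype=torch.float16)

    # Cumulative sequence-length boundaries for the unpadded (varlen) layout.
    cu_seqlens_q = torch.arange(0, (b + 1) * sq, sq, dtype=torch.int32, device="cuda")
    cu_seqlens_k = torch.arange(0, (b + 1) * sk, sk, dtype=torch.int32, device="cuda")

    out = flash_attn_unpadded_kvpacked_func(
        q, kv, cu_seqlens_q, cu_seqlens_k, sq, sk,
        dropout_p=0.0, softmax_scale=None, causal=True,
    )  # -> [b * sq, np, hn]

In flash-attn v2 this entry point was renamed flash_attn_varlen_kvpacked_func, which is one reason a call site might go through an attribute like self.flash_attention_function chosen at init time rather than a hard-coded function name.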
