Delete loop unroll in SDPA op (pytorch#2438)
Summary:
Pull Request resolved: pytorch#2438

Delete #pragma unroll, as it causes compiler errors when building llama/runner.cpp for AOSP targets.

bypass-github-export-checks
bypass-github-pytorch-ci-checks

Reviewed By: kimishpatel

Differential Revision: D54916248

fbshipit-source-id: 40b24e9d5f4af98bb961bacbeca94df8be71f989
Shrey Desai authored and facebook-github-bot committed Mar 15, 2024
1 parent 71ea7cb commit eb976d2
4 changes: 0 additions & 4 deletions examples/models/llama2/custom_ops/op_sdpa.cpp
@@ -177,10 +177,6 @@ inline void fill_stub(scalar_t* data, scalar_t val, int64_t size) {
   for (; d < size - (size % Vec::size()); d += Vec::size()) {
     data_vec.store(data + d);
   }
-#if !defined(_MSC_VER) && !defined(COMPILING_FOR_MIN_SIZE) && \
-    !defined(__ANDROID__)
-#pragma unroll
-#endif
   for (; d < size; d++) {
     data[d] = val;
   }
