Naive register <-> tmem load/store support #3786
Merged
Commits (7)
0b2e784  zasdfgbnm  Naive register <-> tmem load/store support.
0610fc0  zasdfgbnm  tmem
f721504  zasdfgbnm  format
8a7f03f  zasdfgbnm  skip on non-blackwell
225b35b  zasdfgbnm  comment
3b789d6  zasdfgbnm  Merge remote-tracking branch 'origin/main' into tmem-no-alloc
1b1f4cc  zasdfgbnm  fix
@@ -0,0 +1,26 @@
// clang-format off
/*
 * SPDX-FileCopyrightText: Copyright (c) 2023-present NVIDIA CORPORATION & AFFILIATES.
 * All rights reserved.
 * SPDX-License-Identifier: BSD-3-Clause
 */
// clang-format on

#include <device_lower/analysis/tensor_memory.h>
#include <fusion.h>
#include <ir/all_nodes.h>

namespace nvfuser {

TensorMemoryInfo computeTMemInfo(Fusion* fusion) {
  bool found = false;
  for (auto tv : fusion->allTvs()) {
    if (tv->getMemoryType() == MemoryType::Tensor) {
      NVF_ERROR(!found, "Only one tensor on TMem is supported");
      found = true;
    }
  }
  return {};
}

} // namespace nvfuser
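For context, here is a minimal sketch of the kind of fusion this check applies to: a single TensorView routed through TMem. This is not the PR's actual test; makeContigTensor is assumed to be the usual nvFuser C++ test helper, and the includes would follow the other nvFuser C++ tests.

// Hypothetical usage sketch (not from this PR): one TensorView on TMem.
// Includes as in other nvFuser C++ tests, e.g. <fusion.h> and <ops/all_ops.h>.
Fusion fusion;
FusionGuard fg(&fusion);

TensorView* tv0 = makeContigTensor(2); // assumed test helper
fusion.addInput(tv0);

TensorView* tv1 = set(tv0); // register -> TMem store
tv1->setMemoryType(MemoryType::Tensor);

TensorView* tv2 = set(tv1); // TMem -> register load
fusion.addOutput(tv2);

// computeTMemInfo(&fusion) passes here; marking a second TensorView with
// MemoryType::Tensor would trip the NVF_ERROR above.

With the current lowering, tv1 is simply assumed to own the entire TMem at address 0, as explained in the note in the header below.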
@@ -0,0 +1,65 @@
// clang-format off
/*
 * SPDX-FileCopyrightText: Copyright (c) 2023-present NVIDIA CORPORATION & AFFILIATES.
 * All rights reserved.
 * SPDX-License-Identifier: BSD-3-Clause
 */
// clang-format on
#pragma once

namespace nvfuser {

class Fusion;

// Information used to lower tensor memory. So far, no information is actually
// needed; computeTMemInfo only checks that there is at most one tensor on TMem
// in the fusion. This limitation is described in the note below, exists only
// for incremental development, and will be removed in the near future.
struct TensorMemoryInfo;
TensorMemoryInfo computeTMemInfo(Fusion* fusion);

// Note: [Tensor Memory Allocation]
//
// Tensor memory is a very special memory, so its allocation is also very
// different from other memory types.
//
// It is highly recommended to read the PTX documentation for tensor memory
// if you are not already familiar with it:
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#tensor-memory
//
// The first thing to note is that TMem is not virtualized. We cannot simply
// allocate starting from address 0, as we do for shared memory, and rely on a
// page table to map the same virtual address in different CTAs to different
// physical addresses. There are no virtual TMem addresses; all addresses are
// physical addresses.
//
// Because multiple CTAs can execute on the same SM simultaneously, there must
// be a handshaking mechanism for each CTA to learn which region of TMem it can
// use. This is done with the PTX instruction tcgen05.alloc. To ensure safety,
// the hardware provides a mutex guarding the right to allocate TMem. At the
// beginning of each CTA, the CTA automatically tries to acquire this mutex; if
// it fails, the CTA blocks until the mutex is free. This means only one CTA
// can allocate TMem at a time. Once a CTA has finished allocating TMem, it
// should release the mutex to relinquish the right to allocate. After the
// right to allocate is relinquished, the CTA can no longer allocate new TMem,
// but it can still access and free the TMem it has already allocated. Once one
// CTA relinquishes the right to allocate, the next blocked CTA is unblocked
// and can acquire the mutex to allocate TMem.
//
// Currently, TMem allocation is not supported in nvFuser. We only allow one
// TensorView to be on TMem, and because we never relinquish the right to
// allocate TMem, CTAs are serialized on each SM: a new CTA can be scheduled on
// an SM only after the previous CTA on that SM has completely finished
// executing. Thanks to this serialization, we can skip allocation and assume
// that our only TMem TensorView owns the entire TMem, because we are sure that
// no other CTA will be using those addresses. As a result, we can simply pass
// address 0 to the instructions that access TMem. In principle, it is clearly
// wrong to write to an address that has not been allocated, but because we are
// sure it will work in practice for the specific unit test we are targeting,
// we accept this for the sake of incremental development.

struct TensorMemoryInfo {};

} // namespace nvfuser
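To make the handshake described in the note concrete, below is a hedged CUDA sketch of the allocate / relinquish / deallocate sequence a kernel could use once real TMem allocation is supported. The instruction names (tcgen05.alloc, tcgen05.relinquish_alloc_permit, tcgen05.dealloc) come from the PTX documentation linked above, but the exact modifiers, operand forms, and all surrounding code here are assumptions, not code from this PR, which deliberately skips this sequence and hard-codes address 0.

// Hypothetical sketch of the tcgen05 allocation handshake (Blackwell, sm_100a).
// Not part of this PR. Instruction spellings follow the PTX ISA docs; verify
// against your CUDA toolkit before relying on them.
#include <cstdint>

__device__ void tmemAllocSketch() {
  // tcgen05.alloc writes the allocated TMem base address into shared memory.
  __shared__ uint32_t tmem_addr;
  uint32_t num_cols = 32; // number of TMem columns to allocate (assumed minimum)

  // The .sync.aligned tcgen05 instructions are issued by a full warp.
  if (threadIdx.x < 32) {
    uint32_t smem_ptr =
        static_cast<uint32_t>(__cvta_generic_to_shared(&tmem_addr));
    // Acquire TMem columns. The CTA holds the hardware "right to allocate"
    // mutex described in the note; this may block behind other CTAs.
    asm volatile(
        "tcgen05.alloc.cta_group::1.sync.aligned.shared::cta.b32 [%0], %1;" ::
            "r"(smem_ptr), "r"(num_cols));
    // Relinquish the right to allocate so other CTAs on this SM can proceed.
    asm volatile("tcgen05.relinquish_alloc_permit.cta_group::1.sync.aligned;");
  }
  __syncthreads();

  uint32_t addr = tmem_addr; // base address for subsequent TMem accesses
  (void)addr; // ... register <-> TMem traffic (tcgen05.ld / tcgen05.st) here ...
  __syncthreads();

  if (threadIdx.x < 32) {
    // Free the columns before the CTA exits.
    asm volatile("tcgen05.dealloc.cta_group::1.sync.aligned.b32 %0, %1;" ::
                     "r"(tmem_addr), "r"(num_cols));
  }
}

Allocating at kernel start and relinquishing immediately, as sketched above, is the pattern discussed in the review thread below for letting multiple CTAs share an SM; the current PR instead keeps a single TMem TensorView and never allocates at all.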
Due to this handshaking mechanism, is it better to have only a single CTA occupy an SM?
Are you talking about kernel design for better perf? My guess is that if you allocate at the beginning of the kernel and relinquish right after allocating, the latency should be acceptable even if you want multiple CTAs on an SM. But we need to test it before drawing any conclusion.
Yes, for maximum performance.