forked from tile-ai/tilelang
[Feature] Semantics support and remote atomic add #48
Open: tzj-fxz wants to merge 28 commits into main from tzj-dev.
Commits (28), all authored by tzj-fxz:
c859a16  [Fence] Add fence options for barrier_blocks
cd4e509  [Feature] Add remote atomic-add and more scopes/semantics for wait op
e37fea4  [Misc] Remove unused code
12a98d0  [Example] Remove redundant buffer
231dad1  [Example] Remove direction-related buffer
b496a54  [Refactor] Unified scope and semantic representation in tilescale lan…
6f072ab  [Misc] Add fence options
268f54a  [BugFix] Intermediate buffer for each path
e8d036a  [Lint] Block_M for alltoall
fc98be0  [Lint]
8404728  [Lint]
7b00d85  [BugFix] Add fence for inner CTA memory op
067511c  [BugFix] Fence and debug
31a4643  [Lint]
67065ab  [BugFix] Restore the signal to avoid duplicated sum of finish barrier
e2d8ee3  [Lint]
b00bdd8  [BugFix] Warp-level scheduling with active blocks and correct synchro…
2822ced  [Example] Add benchmark options
81af526  [Enhancement] Fully utilize blocks to send/recv data
5f584e5  [Routing] Optimize for balanced routing direction
1d4bbd3  [BugFix] Reinitialize the signal before benchmark
b065199  [Feature] Add return value of wait op
3845d36  [Routing] New version of routing
6f570f4  [BugFix] Transfer source index before put data
5abf3f1  [BugFix] Interface for benchmark
a9dae4c  [Misc] Remove log
0d1dc1c  [BugFix] Warp level communication with robust per-slot signal
baf1fc4  [Routing] AOT routing and signal slot assignment
@@ -0,0 +1,114 @@
import tilelang
import tilelang.language as T
from tilelang.distributed import init_dist
import torch
import torch.distributed as dist
import argparse


def alltoall(PE_num, M, N, block_M, block_N, threads):
    assert block_N == N

    @T.prim_func
    def main(
            src: T.Tensor((PE_num * M, N), "float16"),
            dst: T.Tensor((PE_num * M, N), "float16"),
            barrier: T.Tensor((PE_num), "int32"),
    ):
        # Currently not support tiled copy
        with T.Kernel(
                PE_num, T.ceildiv(M, block_M), T.ceildiv(N, block_N),
                threads=threads) as (bx, by, bz):
            rank = T.alloc_local([1], "int32")
            num_ranks = T.alloc_local([1], "int32")

            dst_rank = bx
            rank[0] = T.get_rank()
            num_ranks[0] = T.get_num_ranks()

            T.put_block(
                src=T.address_of(src[dst_rank * M + by * block_M, 0]),
                dst=T.address_of(dst[rank[0] * M + by * block_M, 0]),
                size=block_M * block_N,
                dst_pe=dst_rank,
            )
            T.fence_sys(sem=T.MemorySemantic.RELEASE)

    return main


def run_alltoall(local_rank, num_ranks, args):
    PE_num = args.PE_num
    M = args.M
    N = args.N
    block_M = 32
    block_N = N
    threads = 256

    local_rank, num_ranks, group_size = init_dist(local_rank, num_ranks)
    allocator = tilelang.get_allocator(
        size=2**34,
        device="cuda",
        is_distributed=True,
        local_rank=local_rank,
        num_local_ranks=num_ranks,
        group=group_size,
    )
    kernel = tilelang.compile(alltoall(PE_num, M, N, block_M, block_N, threads))
    kernel.initialize(allocator=allocator)
    src = tilelang.tensor((PE_num * M, N), torch.float16, allocator=allocator).random_()
    dst = tilelang.tensor((PE_num * M, N), torch.float16, allocator=allocator).zero_()
    barrier = tilelang.tensor((PE_num), torch.int32, allocator=allocator).zero_()

    torch.cuda.synchronize()
    dist.barrier(group_size)

    # Warmup
    for _ in range(args.warmup):
        kernel(src, dst, barrier)
    dst.zero_()
    torch.cuda.synchronize()
    dist.barrier(group_size)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(args.iter):
        kernel(src, dst, barrier)
    torch.cuda.synchronize()
    dist.barrier(group_size)
    end.record()
    torch.cuda.synchronize()
    dist.barrier(group_size)
    elapsed_time = start.elapsed_time(end) / args.iter
    print(
        f"Rank {local_rank} Average Kernel execution time: {elapsed_time:.3f} ms, Bandwidth: {2 * PE_num * M * N / (elapsed_time * 1e6):.3f} GB/s"
    )

    # Torch Reference
    torch.cuda.synchronize()
    dst_ref = torch.zeros((PE_num * M, N), dtype=torch.float16, device="cuda")
    dist.all_to_all_single(dst_ref, src, group=group_size)
    torch.cuda.synchronize()

    if torch.allclose(dst, dst_ref, atol=1e-2, rtol=1e-2):
        print(f"Rank {local_rank} Verification Passed! ✅")
    else:
        max_diff = (dst - dst_ref).abs().max()
        print(f"Rank {local_rank} Verification Failed! ❌ Max diff: {max_diff}")
        print(f"dst: {dst}")
        print(f"dst_ref: {dst_ref}")

    dist.destroy_process_group()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--PE_num", type=int, default=8)
    parser.add_argument("--M", type=int, default=8192)
    parser.add_argument("--N", type=int, default=7168)
    parser.add_argument("--warmup", type=int, default=5, help="Number of warmup iterations")
    parser.add_argument("--iter", type=int, default=10, help="Number of benchmark iterations")

    args = parser.parse_args()
    torch.multiprocessing.spawn(run_alltoall, args=(args.PE_num, args), nprocs=args.PE_num)
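For context on the bandwidth figure printed by the benchmark, one reading (an assumption, not stated in the example) is that the leading 2 in 2 * PE_num * M * N is the size of a float16 element in bytes, so the formula reports payload bytes per iteration divided by the measured time. A rough sketch of that arithmetic with the default sizes:

# Hypothetical back-of-the-envelope check of the printed bandwidth figure.
# Assumes the factor of 2 is bytes per float16 element, not read-plus-write traffic.
PE_num, M, N = 8, 8192, 7168            # defaults from the argument parser above
payload_bytes = 2 * PE_num * M * N      # ~0.94e9 bytes per rank per iteration
elapsed_ms = 1.0                        # example kernel time in milliseconds
print(f"{payload_bytes / (elapsed_ms * 1e6):.3f} GB/s")  # ~939.524 GB/s at 1 ms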
Review comment:

Unused barrier parameter in the kernel signature. The barrier parameter is declared in the kernel signature but never used within the kernel body, which could indicate either a missing synchronization step or a leftover argument that should be removed. Given that this is an all-to-all operation, a barrier or similar synchronization mechanism is typically needed to ensure all ranks have completed their transfers before the kernel returns. Currently only T.fence_sys is called, which provides memory ordering but not inter-rank synchronization.

💡 Suggested fix: either use the barrier or remove it.

Option 1, add barrier synchronization:

            T.put_block(
                src=T.address_of(src[dst_rank * M + by * block_M, 0]),
                dst=T.address_of(dst[rank[0] * M + by * block_M, 0]),
                size=block_M * block_N,
                dst_pe=dst_rank,
            )
            T.fence_sys(sem=T.MemorySemantic.RELEASE)
+           T.barrier_blocks(barrier)

    return main

Option 2, remove the unused parameter:

    @T.prim_func
    def main(
            src: T.Tensor((PE_num * M, N), "float16"),
            dst: T.Tensor((PE_num * M, N), "float16"),
-           barrier: T.Tensor((PE_num), "int32"),
    ):
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 16-16: Unused function argument: barrier (ARG001)
[warning] 21-21: Unpacked variable bz is never used. Prefix it with an underscore or any other dummy variable pattern. (RUF059)
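The RUF059 hint refers to the bz axis unpacked from T.Kernel, which the kernel body never reads. A minimal sketch of the rename Ruff suggests, with the surrounding kernel code abbreviated:

# Prefix the unused third kernel axis with an underscore so it reads as intentionally unused.
with T.Kernel(
        PE_num, T.ceildiv(M, block_M), T.ceildiv(N, block_N),
        threads=threads) as (bx, by, _bz):
    ...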