use transform for execution to get torch_compile executable #1500

Merged: 7 commits into main from tom/torch_compile_transform_for_ex on Dec 3, 2024

Conversation

@t-vi (Collaborator) commented on Dec 2, 2024:

Transform for execution used to produce duplicate names; this PR changes the implementation to be close to interpret_trace (with the plan of merging the two eventually).

Unfortunately, there seems to be some bad interaction: the newly skipped tests have been leading to segfaults, and this needs investigation.

Fixes: #1131
Also:
Fixes: #1501
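
The failure mode is easiest to see in miniature. The sketch below is hypothetical, not the PR's code: it shows the swap-map pattern that interpret_trace-style processing relies on, where every read goes through a swap map so a duplicated name is remapped once and resolves to a single canonical value afterwards. `Renamer`, `emit`, and `read` are illustrative names, not thunder's API.

```python
# Hypothetical sketch of the swap-map pattern (Renamer/emit/read are
# illustrative names, not thunder's API). Reads go through a swap map,
# so once a duplicate name is remapped, later uses resolve consistently.

class Renamer:
    def __init__(self):
        self.swap_map = {}  # original name -> canonical replacement
        self.seen = set()   # names already emitted in the new trace

    def read(self, name):
        # mirrors the `self.swap_map.get(variableify(v), v)` idiom in the diff
        return self.swap_map.get(name, name)

    def emit(self, name):
        # if the name was already produced, mint a fresh one and record
        # the replacement so subsequent reads pick it up
        if name in self.seen:
            i, fresh = 1, f"{name}_1"
            while fresh in self.seen:
                i += 1
                fresh = f"{name}_{i}"
            self.swap_map[name] = fresh
            name = fresh
        self.seen.add(name)
        return name

r = Renamer()
assert r.emit("t0") == "t0"
assert r.emit("t0") == "t0_1"   # duplicate gets a fresh name
assert r.read("t0") == "t0_1"   # later reads follow the swap map
```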

@t-vi force-pushed the tom/torch_compile_transform_for_ex branch from ebaf17f to 24d6a8d on December 3, 2024 09:46
@t-vi marked this pull request as ready for review on December 3, 2024 11:57
@t-vi requested review from @mruberry and @lantiga as code owners on December 3, 2024 11:57
@t-vi enabled auto-merge (squash) on December 3, 2024 13:04
```
@@ -240,6 +240,23 @@ def split_forward_backward(computation_trc: TraceCtx, compile_data, compile_stat
    skip_output=False,
    skip_subsymbols=False,
)

# remove duplicates
# The NVFuser (and possibly others) fusion pass applied on the forward during has a …
```
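
The added comment is cut off above, but its heading, "# remove duplicates", names the idea. A minimal, self-contained sketch of such a dedup pass follows; it is assumed here, not the merged code, and `Bsym`/`outs` are illustrative stand-ins rather than thunder's classes. It drops a bound symbol when all of its outputs were already produced earlier in the trace.

```python
# Illustrative dedup pass: Bsym/outs are stand-ins, not thunder's classes.
from dataclasses import dataclass

@dataclass
class Bsym:
    op: str
    outs: tuple  # names of the outputs this bound symbol produces

def remove_duplicates(bsyms):
    produced, kept = set(), []
    for b in bsyms:
        if b.outs and all(o in produced for o in b.outs):
            continue  # every output already exists: drop the duplicate
        produced.update(b.outs)
        kept.append(b)
    return kept

trace = [Bsym("add", ("t0",)), Bsym("mul", ("t1",)), Bsym("add", ("t0",))]
assert [b.op for b in remove_duplicates(trace)] == ["add", "mul"]
```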
A collaborator commented:
This is super interesting!

fyi @jjsjann123 @kshitij12345 @IvanYashchuk

Another collaborator commented:

Out of curiosity, do duplicates cause any functional issue?

@lantiga (Collaborator) left a review:

Looks great, also great docstring addition! Let's merge as is

```python
        return self.swap_map.get(variableify(v), v)

    def add_unprocessed_bsyms(self, bsyms):
        # prepend, so the given bound symbols are processed next
        self.unprocessed_bsyms[:0] = bsyms
```
A collaborator commented:
we could probably add self.unprocessed_bsyms and self.new_bsyms to __init__ for readability - not mandatory
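
Applied to the snippet above, the suggestion would look roughly like this. It is a sketch under the assumption that the class owns these two worklists; `Processor` is a hypothetical stand-in for the PR's class, and `variableify` is stubbed to keep the example self-contained.

```python
# Sketch of the suggested refactor: declare both worklists in __init__
# so the processing state is visible in one place. `Processor` is a
# hypothetical stand-in for the PR's class; `variableify` is stubbed.

def variableify(v):
    return v  # stub for thunder's variableify(), which yields a hashable key

class Processor:
    def __init__(self, bsyms):
        self.swap_map = {}
        self.unprocessed_bsyms = list(bsyms)  # worklist, consumed front-first
        self.new_bsyms = []                   # bound symbols emitted so far

    def read(self, v):
        return self.swap_map.get(variableify(v), v)

    def add_unprocessed_bsyms(self, bsyms):
        # prepend, so the newly added bound symbols are processed next
        self.unprocessed_bsyms[:0] = bsyms
```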

@t-vi merged commit b82f59c into main on Dec 3, 2024
41 checks passed
@t-vi deleted the tom/torch_compile_transform_for_ex branch on December 3, 2024 20:38
@t-vi mentioned this pull request on Dec 6, 2024