[torchao float8tensor] #1415 (Draft)
crcrpar wants to merge 101 commits into main from crpa/subclass-torchao_float8tensor.
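For context, the workload this PR targets looks roughly like tracing a torchao float8-converted module through `thunder.jit`. The sketch below is illustrative, not code from the PR; it assumes a CUDA device with fp8 support and uses torchao's public `convert_to_float8_training` entry point, which swaps `nn.Linear` weights for Float8 training variants backed by a `Float8Tensor` wrapper subclass.

```python
import torch
import thunder
from torchao.float8 import convert_to_float8_training

# Convert Linear layers to float8 training, then trace with thunder.
# The PR teaches thunder's tracing machinery to understand the
# Float8Tensor wrapper subclass this conversion introduces.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64, bias=False),
    torch.nn.GELU(),
    torch.nn.Linear(64, 64, bias=False),
).to(device="cuda", dtype=torch.bfloat16)
convert_to_float8_training(model)

jitted = thunder.jit(model)
x = torch.randn(16, 64, device="cuda", dtype=torch.bfloat16, requires_grad=True)
out = jitted(x)
out.sum().backward()
```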
Commits (101):
- ae4a28b (crcrpar): workaround for `.__init__` call on the output of `_make_wrapper_subcl…
- b0a4710 (crcrpar): the test case works
- bd01157 (crcrpar): attribute access to subclass proxy seems functioning
- ac0d9fe (crcrpar): simplify if-else in `SubclassTensorProxy.__init__`
- d79afcc (crcrpar): stricter type check of tensors
- 2c06edd (crcrpar): support `MySubclass(...)` called inside of `torch.autograd.Function`
- 8c0f39e (crcrpar): explanation
- 5ae4d2e (crcrpar): failing test case as starter
- 2e6e008 (crcrpar): add path of SubclassTensorProxy in `tensorproxy`
- 2a99349 (crcrpar): add no-op tensor subclass transform
- 8d677ee (crcrpar): transfer #1345
- 22b007b (crcrpar): make the subclass check more meticulous
- 3e1d4b0 (crcrpar): fake_tensor.foo -> foo
- 5f1d65e (crcrpar): simplify `subclass_proxy_to_flatten`
- e2b3b43 (crcrpar): handle `PrimIDs.RETURN` earlier
- aa4351c (crcrpar): give created subclass required attributes
- 1d9e5a3 (crcrpar): remove `subclass_type_to_attr_names`
- 335df5c (crcrpar): remove `requires_desugarring`
- 409798d (crcrpar): import cleanup
- e9027ba (crcrpar): avoid flattening non-tensor args of subclass ctor
- ca67d6f (crcrpar): add path of SubclassTensorProxy in `tensorproxy`
- 6e3c8b2 (crcrpar): phase 1 for backward test
- 4cded75 (crcrpar): check backward is runnable with subclass arguments
- 5daa159 (crcrpar): bwd run with tensor creation inside of trace
- d8ce1b1 (crcrpar): flatten Function.apply of converter
- f7b4976 (crcrpar): torchao small test
- 76177fa (crcrpar): placeholder-ish attributes/methods for `_make_wrapper_subclass`
- 4cb28cd (crcrpar): [autograd.Function lookaside] `dce` to wipe out redundant bsyms
- ca51fdb (crcrpar): give unpack bsyms to traces generated inside `Function` lookaside
- b504417 (crcrpar): some tweaks
- 141cba0 (crcrpar): revert pytree changes
- a1c6471 (crcrpar): imports for tensor subclass ctor
- 16fde2f (crcrpar): define bind-postprocess
- 2ece401 (crcrpar): xfail, for now
- 778d8a0 (crcrpar): fix type set creation and add bsym postprocess for torchex
- 52a9c1b (crcrpar): printer translating thunder dtype/device to torch's
- c474c2d (crcrpar): meticulously set import_ctx of cls
- 3cf828d (crcrpar): dry
- 8082fc3 (crcrpar): test failure info update
- 3eef261 (crcrpar): cosmetic
- 6679d9a (crcrpar): better repr & type string
- c026711 (crcrpar): use `transpose` instead of `permute`
- 75a03ef (crcrpar): better typestring
- b46f24f (crcrpar): num bsyms check
- 9db718e (crcrpar): allow tensor subclasses with non-empty metadata
- 48958ab (crcrpar): bsyms is a list inside `trace_from_bsym_or_bsyms`
- 373b39f (crcrpar): the typestring has a syntactic mistake; remove for now
- 75a0423 (crcrpar): better error message for missing support of ops
- 8122c77 (crcrpar): add `torch._scaled_mm` to auto register
- 38530ff (crcrpar): update error message of missing op
- eaaa012 (crcrpar): tree_flatten tensor subclass metadata values
- 3790e42 (crcrpar): make error msg verbose
- 4e83361 (crcrpar): better error message for failing map from fx node to ltorch op
- 1078712 (crcrpar): better error message
- f817a0b (crcrpar): register scaled_mm
- fe57ddb (crcrpar): note where new bsyms come from, especially torch dispatch
- 730c287 (crcrpar): cast `fx.immutable_{dict, list}` to `dict`/`list`
- e943081 (crcrpar): printer and bind_postprocess for `__tensor_flatten__` & `__tensor_unf…
- 3a62aaf (pre-commit-ci[bot]): [pre-commit.ci] auto fixes from pre-commit.com hooks
- 06c7eeb (crcrpar): xfail reason
- 1de88f3 (crcrpar): cosmetic
- 5c197cf (crcrpar): simplify subclass output handling
- 4b9e67f (crcrpar): unrolling tensor subclasses in fwd/bwd split (#1489)
- 759e01d (crcrpar): reduce return values by one
- c4c89b0 (crcrpar): clarify the error is numeric
- eae9834 (crcrpar): add bfloat16 to test parametrization
- 0f41b5e (crcrpar): torch_compile_ex style transform for execution
- 70b576b (crcrpar): update test
- 66d67f9 (crcrpar): clarify nothing is put into thunder.jit when thunderfx
- bfdbe5a (pre-commit-ci[bot]): [pre-commit.ci] auto fixes from pre-commit.com hooks
- 7f14349 (crcrpar): shorter header for torch dispatch result
- 481b3b2 (crcrpar): try to tell if the trace is backward or not by checking certain bsyms
- 96e5562 (crcrpar): update test
- 595d32c (crcrpar): check bsym.args itself before its first name
- 6b4330d (crcrpar): warn tensor subclass support
- 4d8d375 (crcrpar): test update
- 87e7354 (crcrpar): more meticulous bsym check to tell if the trace is bwd
- 46c485d (pre-commit-ci[bot]): [pre-commit.ci] auto fixes from pre-commit.com hooks
- d8309dc (crcrpar): remove `flat_trace_args_spec`
- c1246e6 (crcrpar): fix wrong rebase output
- e226903 (crcrpar): fix typo
- 0c54374 (crcrpar): add tensor subclass transform output to traces
- d28c6ae (crcrpar): bring back unexpectedly deleted line
- ee91436 (crcrpar): add note about the behavioral difference
- 9fe5deb (crcrpar): DCE for ``tensor.__tensor_flatten__``
- cc86afb (crcrpar): update regex of assert raises
- b793ffb (pre-commit-ci[bot]): [pre-commit.ci] auto fixes from pre-commit.com hooks
- cdefd06 (crcrpar): `torch._scaled_matmul` decomposition
- 47f5cc2 (crcrpar): fix check and add missing cast
- c838ea0 (crcrpar): no consumer map
- 8d15622 (crcrpar): add `flatten_tensor_subclass` to docs
- 54645b3 (crcrpar): fix column major check & add dtype check of data mat
- b7801b0 (crcrpar): use existing ones
- 4e6d9fd (crcrpar): fix nvfuser impl
- 97e7ee2 (crcrpar): add device check
- 8693dc7 (crcrpar): rename to dce from cse
- caa2242 (crcrpar): rename to `enable_scaled_mm`
- 6a6fe59 (crcrpar): update cond
- fce281b (crcrpar): getnv(a,...) -> getnv(b,...)
- e5d26fc (crcrpar): remove nvfuser scaled_mm
- 56c69df (crcrpar): remove flattening and unflattening
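Several commits above (e.g. 76177fa, e943081, 9fe5deb) revolve around PyTorch's traceable wrapper-subclass protocol: `torch.Tensor._make_wrapper_subclass` plus the `__tensor_flatten__`/`__tensor_unflatten__` pair. Below is a minimal toy subclass illustrating that protocol; it is not code from this PR, and torchao's `Float8Tensor` is a more elaborate instance of the same pattern.

```python
import torch
from torch.utils._pytree import tree_map

class ToyWrapper(torch.Tensor):
    """A wrapper subclass holding one inner tensor plus non-tensor metadata."""

    @staticmethod
    def __new__(cls, data: torch.Tensor, scale: float):
        # The wrapper advertises the inner tensor's shape/dtype/device.
        return torch.Tensor._make_wrapper_subclass(
            cls, data.shape, dtype=data.dtype, device=data.device
        )

    def __init__(self, data: torch.Tensor, scale: float):
        self._data = data    # inner tensor
        self._scale = scale  # non-tensor metadata

    def __tensor_flatten__(self):
        # Returns (names of inner-tensor attributes, opaque metadata).
        return ["_data"], {"scale": self._scale}

    @staticmethod
    def __tensor_unflatten__(inner_tensors, metadata, outer_size, outer_stride):
        return ToyWrapper(inner_tensors["_data"], metadata["scale"])

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # Toy behavior: unwrap to plain tensors, run the op, return plain results.
        def unwrap(t):
            return t._data if isinstance(t, ToyWrapper) else t
        return func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs or {}))
```

With this in place, `ToyWrapper(torch.randn(4), 2.0) + 1` dispatches through `__torch_dispatch__`, and compilers can decompose the wrapper into its inner tensors via `__tensor_flatten__` and rebuild it with `__tensor_unflatten__`, which is the decomposition this PR's transform performs on thunder traces.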
Review comment: "double check if this is really necessary"
Review comment: "without dce's, torchao.float8 tests fail:"
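The tests in question exercise torchao's float8 matmul, which lowers to the private `torch._scaled_mm` op that several commits above register and decompose (8122c77, f817a0b, cdefd06). As a rough, illustrative sketch of calling it directly, assuming a CUDA device with fp8 support; the private signature has shifted across PyTorch releases, so treat this as the ~2.4-era form:

```python
import torch

# Illustrative only: torch._scaled_mm is a private op. It expects a
# row-major fp8 first operand and a column-major fp8 second operand,
# with per-tensor scales supplied as 0-dim tensors.
device = "cuda"
a = torch.randn(32, 64, device=device).to(torch.float8_e4m3fn)       # row-major fp8
b = torch.randn(32, 64, device=device).to(torch.float8_e4m3fn).t()   # column-major fp8
scale_a = torch.tensor(1.0, device=device)
scale_b = torch.tensor(1.0, device=device)
out = torch._scaled_mm(a, b, scale_a=scale_a, scale_b=scale_b, out_dtype=torch.bfloat16)
```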