Added two questions, addressed comments.
apaz-cli committed Mar 26, 2024
1 parent 0546f8d commit 93ebc73
Showing 1 changed file with 27 additions and 4 deletions.
31 changes: 27 additions & 4 deletions docs/source/basic/faq.rst
@@ -5,19 +5,21 @@ Thunder FAQ
1. How does Thunder compare to ``torch.compile``?
=================================================

Both `Thunder <https://github.com/Lightning-AI/lightning-thunder>`_ and `torch.compile <https://pytorch.org/docs/stable/torch.compiler.html#torch-compiler-overview>`_ are deep learning compilers that take a PyTorch module or callable and return a callable. It seems reasonable to compare them that way. With that said, the focus of the two projects is completely different.

``torch.compile`` is a framework for generating optimized kernels. Its focus is on making PyTorch code run faster, with minimal code changes. Thunder is a framework for layering optimizations. It generates none of those optimizations itself, instead delegating to other libraries, including ``torch.compile`` and `nvfuser <https://github.com/NVIDIA/Fuser>`_. Our focus is on understandability, extensibility, and usability.

Modern deep learning optimization often involves stitching together kernels from different sources. Thunder is designed to make it easy to use these tools together, and easy to add new tools to the mix.

As such, the two are not directly comparable. Or rather, they are, but you would be comparing against a default configuration, which we expect you to extend and change anyway.
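For a sense of what that default configuration looks like, here is a minimal sketch. It assumes ``thunder.last_traces`` is available for inspecting the final trace; check the documentation for the version you're using.

.. code-block:: python

    import torch
    import thunder

    model = torch.nn.Linear(64, 64)
    jitted = thunder.jit(model)  # Thunder's default stack of executors

    out = jitted(torch.randn(8, 64))

    # The final trace shows which executor each operation was delegated to.
    print(thunder.last_traces(jitted)[-1])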



============================================
2. How can I use torch.compile with Thunder?
============================================

The correct way to use ``torch.compile`` with Thunder is as an executor. This gives you finer-grained control over which parts of the model are handled by which executor.

Calling ``torch.compile()`` and then ``thunder.jit()`` is not what you want. It doesn't give a good error message right now, but it is not expected to work.
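As a concrete illustration, here is a minimal sketch of registering the ``torch.compile`` executor. The executor name and import path (``torch_compile_ex`` under ``thunder.executors.torch_compile``) are assumptions; check the executors documentation for your version.

.. code-block:: python

    import torch
    import thunder
    # Assumed import path; it may differ between versions.
    from thunder.executors.torch_compile import torch_compile_ex

    model = torch.nn.Sequential(
        torch.nn.Linear(64, 64),
        torch.nn.GELU(),
    )

    # Executors earlier in the list get the first chance to claim operations;
    # operations they don't claim fall back to Thunder's default executors.
    jitted = thunder.jit(model, executors=[torch_compile_ex])

    out = jitted(torch.randn(8, 64))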

@@ -56,3 +58,24 @@
Thunder is in alpha. There will be bugs, and many torch operations are not supported.
If you need certain operations supported for your model, please let us know by creating an issue. We plan to get to all of them (with the exception of any :doc:`sharp edges <sharp_edges>`), but your issues help us prioritize which to do first.

Any number of other problems could arise. Some of them are known, some may not be. Check out the :doc:`sharp edges <sharp_edges>` page. If what you're seeing still doesn't make sense, let us know by creating an issue.


=======================================
5. Does Thunder support dynamic shapes?
=======================================

No, not at the moment. Meta functions operate on the exact shapes of the tensor proxies that pass through them. This is a limitation of the current implementation, and we plan to support dynamic shapes in the future. We're looking for input on how best to incorporate them, so if you have any ideas, please let us know by creating an issue or reaching out.
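To illustrate the consequence, here is a small sketch. It assumes the default caching behavior, under which a new input shape triggers a new trace, and it uses ``thunder.last_traces`` to peek at the result; both assumptions should be verified against your version.

.. code-block:: python

    import torch
    import thunder

    def fn(x):
        return x * 2 + 1

    jitted = thunder.jit(fn)

    jitted(torch.randn(4, 4))
    # The trace records the concrete (4, 4) shape on its tensor proxies.
    print(thunder.last_traces(jitted)[-1])

    jitted(torch.randn(8, 8))
    # A different shape is traced again rather than reusing the (4, 4) trace.
    print(thunder.last_traces(jitted)[-1])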


================================================================
6. Does Thunder support (or plan to support) inplace operations?
================================================================

Not at the moment. Implementing inplace operations would require tracking, through our optimization passes, which tensors in a trace have been modified; currently we represent traces as purely functional. All deep learning compiler frameworks have to deal with the problem of tensor aliasing in some way. The way we've chosen, for now, is to pretend that the problem doesn't exist.

The common solution is to represent programs in `SSA form <https://en.wikipedia.org/wiki/Static_single-assignment_form>`_, or to do some form of SSA-inspired variable renaming, but this is a much less understandable representation than a list of symbols in a trace. It would also complicate optimization passes and require rewriting many of them to handle the aliasing rules.

There is also the problem that some backends, like Triton and nvfuser, or a potential JAX backend, are not good at handling inplace operations either. They, like Thunder, expect functional programs, and if we simply forwarded these operations to the executor, not every executor would be able to implement every operator. For a torch operator as simple as ``x.add_(y)``, this is unsatisfactory to us. We would ideally like to be able to convert instances of ``add_()`` to ``add()`` for executors, but every trace needs to remain executable.
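For illustration, this is the kind of rewrite we mean, sketched by hand in plain PyTorch (Thunder does not perform it today):

.. code-block:: python

    import torch

    def step_inplace(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # In-place: mutates x, creating aliasing that a purely functional trace can't express.
        x.add_(y)
        return x

    def step_functional(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Functional: produces a new tensor, which maps directly onto a trace symbol.
        return torch.add(x, y)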

We want to support inplace operations eventually, but we are attached to traces as our program representation of choice for optimization passes. Much like with dynamic shapes, if you have any ideas on how to best incorporate inplace operations, please let us know.
