
Implement Relaxed Operator Fusion #37

Merged (3 commits) on Nov 3, 2023

Commits on Nov 3, 2023

  1. Implement Relaxed Operator Fusion

    This commit implements Relaxed Operator Fusion in InkFuse. We extend all
    existing tests to now also test our ROF backend.
    
    Incremental Fusion conceptually supports relaxed operator fusion. The tuple
    buffers that we can install between arbitrary suboperators are similar
    to ROF staging points.
    
    This leads to a relatively small code change to integrate ROF in InkFuse.
    The main modifications are all found in the `PipelineExecutor`.
    
    At a high level, ROF in InkFuse goes through the following steps:
    1. A `Suboperator` can attach a `ROFStrategy` in its
       `OptimizationProperties`. This indicates whether the suboperator
       prefers compilation or vectorization.
    2. The optimization properties become easy to use through an
       `ROFScopeGuard` in `Pipeline.h`. When we decay an operator into
       suboperators, this `ROFScopeGuard` can simply be set up during
       suboperator creation and indicates that the generated suboperators
       should all be vectorized.
    3. The `PipelineExecutor` now splits the topological order of the
       suboperators into maximal connected components that have
       vectorized or JIT preference. The JIT components are then compiled
       ahead of time.
    4. The ROF backend then iterates through the suboperator topological
       sort. Any interpreted component uses the pre-compiled primitives.
       Any compiled component uses the JIT code.
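    The component splitting in step 3 can be sketched roughly as follows.
    This is an illustrative, stand-alone example rather than the actual
    `PipelineExecutor` code; the `ROFStrategy` name comes from this commit
    message, everything else is hypothetical:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for the suboperator ROF preference.
    enum class ROFStrategy { Vectorized, JIT };

    // Split a topological order of suboperator preferences into maximal
    // consecutive runs that share the same strategy. Each run corresponds
    // to one component; JIT runs would be compiled ahead of time.
    std::vector<std::pair<ROFStrategy, std::vector<size_t>>>
    splitIntoComponents(const std::vector<ROFStrategy>& order) {
       std::vector<std::pair<ROFStrategy, std::vector<size_t>>> components;
       for (size_t idx = 0; idx < order.size(); ++idx) {
          // Open a new component whenever the preference changes.
          if (components.empty() || components.back().first != order[idx]) {
             components.push_back({order[idx], {}});
          }
          components.back().second.push_back(idx);
       }
       return components;
    }

    int main() {
       using S = ROFStrategy;
       auto comps = splitIntoComponents({S::JIT, S::JIT, S::Vectorized, S::JIT});
       // Three components: a JIT run of two suboperators, then one
       // vectorized suboperator, then one more JIT suboperator.
       assert(comps.size() == 3);
       assert(comps[0].second.size() == 2);
       return 0;
    }
    ```

    Because staging points (tuple buffers) sit at the component
    boundaries, each maximal run can then be executed by its preferred
    backend independently.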
    
    As a next step, we will extend our benchmarking binaries with ROF
    support and begin measuring the performance of the ROF backend.
    wagjamin committed Nov 3, 2023 · 33e1b93
  2. Fix Filter Code Generation

    Our ROF implementation surfaced some correctness issues in how we
    generate code for filters.
    
    A filter looks as follows:
    ```
        /- IU Prov 1 ----------------> Filter 1 ---> Sink 1
    Src                            /
        \- IU Prov 2 -> FilterScope ---> Filter 2 -> Sink 2
                   \-----------------/
    ```
    
    The IUs going from FilterScope to the filters are void-typed pseudo IUs.
    The problem was that we could run into cases where FilterScope would
    generate its nested `IF` before both IU providers were opened.
    
    In those cases, we would generate the IU provider iterator only within
    the nested filter scope, causing it to "lag" behind the other iterator,
    producing incorrect results.
    
    The core problem is that we do not model a code generation dependency
    between `IU Prov 1 -> FilterScope`.
    
    This commit ensures that all input IU providers generate their code
    before the `if` is opened. Now, `Filter 1` and `Filter 2` only request
    code generation of `FilterScope`, and `FilterScope` requests generating
    both IU providers.
    
    This is done in a somewhat hacky way. If we rebuilt the system today,
    we should not model code generation dependencies through void-typed
    pseudo IUs; instead, we should probably model IU and codegen
    dependencies separately.
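    The fixed request ordering can be sketched as a small recursive
    codegen model. This is illustrative only; `Node` and `generate` are
    hypothetical names, not InkFuse's real interfaces:

    ```cpp
    #include <cassert>
    #include <string>
    #include <vector>

    // A codegen node requests code generation of all its dependencies
    // before emitting its own code. The fix amounts to FilterScope
    // depending on *both* IU providers, not just the one it reads from.
    struct Node {
       std::string name;
       std::vector<Node*> deps;
       bool generated = false;

       void generate(std::vector<std::string>& out) {
          if (generated) return;
          generated = true;
          // Dependencies first, so both provider iterators are set up
          // before the nested filter scope opens its `if`.
          for (Node* dep : deps) dep->generate(out);
          out.push_back(name);
       }
    };

    int main() {
       Node src{"Src"};
       Node prov1{"IUProv1", {&src}};
       Node prov2{"IUProv2", {&src}};
       Node scope{"FilterScope", {&prov1, &prov2}};

       std::vector<std::string> order;
       scope.generate(order);
       // Both providers are emitted before FilterScope, so neither
       // iterator can "lag" inside the nested scope.
       assert(order.size() == 4);
       assert(order.back() == "FilterScope");
       return 0;
    }
    ```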
    wagjamin committed Nov 3, 2023 · 3264623
  3. 62a747a