Commit 570e6f2

Merge pull request iree-org#6949 from not-jenni:main-to-google

PiperOrigin-RevId: 394551308

iree-copybara-bot committed Sep 2, 2021
2 parents 5425639 + bfd507f

Showing 59 changed files with 1,014 additions and 447 deletions.
62 changes: 62 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,62 @@
# IREE Benchmarks

This directory contains the configuration definitions for IREE's continuous
benchmark suite. Benchmark results are posted to https://perf.iree.dev.

The https://buildkite.com/iree/iree-benchmark Buildkite pipeline runs on each
commit to the `main` branch and posts those results to the dashboard. The
pipeline also runs on pull requests with the `buildkite:benchmark` label,
posting results compared against their base commit as comments.

## Types of benchmarks

```
├── TensorFlow
│ * models authored in TensorFlow and imported with `iree-import-tf`
└── TFLite
* models converted to TensorFlow Lite and imported with `iree-import-tflite`
```

## Adding new benchmarks

### Machine learning model latency

1. Pick the model you want to benchmark and find its source, which could be
a Python script, TensorFlow SavedModel from https://tfhub.dev/, TensorFlow
Lite FlatBuffer, or some other format with a supported path into IREE. The
model can optionally include trained weights if those are important for
benchmarking.

2. Import the model into an MLIR file that IREE can compile using the core
`iree-translate` tool. For TensorFlow models use `iree-import-tf`, for
TensorFlow Lite models use `iree-import-tflite`, etc. Take notes on where
the model came from and how it was imported in case the MLIR file needs to
be regenerated in the future.

We may further automate this over time, such as by importing from Python
sources as part of the benchmarks pipeline directly (see
https://github.com/google/iree/issues/6942). For now, here are some
references:

* https://gist.github.com/antiagainst/35b0989bd0188dd9df4630bb0cf778f2
* https://colab.research.google.com/gist/ScottTodd/10838c0ccc87fa6d1b1c72e0fabea064/iree-keyword_spotting_streaming-benchmarks.ipynb
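
For example, a TensorFlow Lite import might look like the following sketch
(the file names are placeholders, not files from the repository; check
`iree-import-tflite --help` for the exact flags in your build):

``` shell
# Convert a TensorFlow Lite FlatBuffer into an MLIR file that
# iree-translate can compile. Both paths here are placeholders.
iree-import-tflite model.tflite -o model.mlir
```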

3. Package the imported .mlir model file(s) for storage (see
[iree_mlir_benchmark_suite.cmake](../build_tools/cmake/iree_mlir_benchmark_suite.cmake)
and [download_file.py](../scripts/download_file.py)), then upload them to the
`iree-model-artifacts` Google Cloud Storage bucket with the help of a team
member. Files currently hosted in that bucket can be viewed at
https://storage.googleapis.com/iree-model-artifacts/index.html.
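
If you have write access to the bucket, the upload itself is a standard
`gsutil` copy; the destination file name below is illustrative:

``` shell
# Copy the imported MLIR file into the shared benchmark artifacts bucket.
# Requires write access; the destination path is illustrative.
gsutil cp model.mlir gs://iree-model-artifacts/model.mlir
```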

4. Edit the appropriate `CMakeLists.txt` file under this directory to include
your desired benchmark configuration with the `iree_mlir_benchmark_suite`
function. You can test your change by running the
https://buildkite.com/iree/iree-benchmark pipeline on a GitHub pull request
with the `buildkite:benchmark` label.
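
Before sending the pull request, you can also build the benchmark suites
locally as a smoke test. A sketch, assuming an `IREE_BUILD_BENCHMARKS`
CMake option and an `iree-benchmark-suites` target (both are assumptions
here; verify the exact names against the CMake files referenced above):

``` shell
# Configure with benchmarks enabled, then build only the benchmark suites.
# IREE_BUILD_BENCHMARKS and iree-benchmark-suites are assumed names.
cmake -GNinja -B ../iree-build/ -S . -DIREE_BUILD_BENCHMARKS=ON
cmake --build ../iree-build/ --target iree-benchmark-suites
```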

5. Once your changes are merged to the `main` branch, results will start to
appear on the benchmarks dashboard at https://perf.iree.dev.

### Other project metrics

TODO(#6161): Collect metrics for miscellaneous IREE system states
6 changes: 3 additions & 3 deletions docs/website/docs/building-from-source/android.md
@@ -42,7 +42,7 @@ devices from the command line. Install it following the
Build and install on your host machine:

``` shell
cmake -B ../iree-build/ \
cmake -GNinja -B ../iree-build/ \
-DCMAKE_INSTALL_PREFIX=../iree-build/install \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
.
@@ -56,7 +56,7 @@ Build the runtime using the Android NDK toolchain:
=== "Linux and MacOS"

``` shell
cmake -B ../iree-build-android/ \
cmake -GNinja -B ../iree-build-android/ \
-DCMAKE_TOOLCHAIN_FILE="${ANDROID_NDK?}/build/cmake/android.toolchain.cmake" \
-DIREE_HOST_BINARY_ROOT="$PWD/../iree-build/install" \
-DANDROID_ABI="arm64-v8a" \
@@ -69,7 +69,7 @@ Build the runtime using the Android NDK toolchain:
=== "Windows"

``` shell
cmake -B ../iree-build-android/ \
cmake -GNinja -B ../iree-build-android/ \
-DCMAKE_TOOLCHAIN_FILE="%ANDROID_NDK%/build/cmake/android.toolchain.cmake" \
-DIREE_HOST_BINARY_ROOT="%CD%/../iree-build/install" \
-DANDROID_ABI="arm64-v8a" \
66 changes: 50 additions & 16 deletions docs/website/docs/building-from-source/getting-started.md
@@ -9,21 +9,29 @@ compiler:

=== "Linux and MacOS"

<!-- TODO(scotttodd): annotation about gcc vs clang -->
1. Install a compiler/linker (typically the "clang" and "lld" packages).

``` shell
sudo apt install cmake clang
export CC=clang
export CXX=clang++
```
2. Install [CMake](https://cmake.org/download/) (typically the "cmake" package).

3. Install [Ninja](https://ninja-build.org/) (typically the "ninja-build"
package).

On a relatively recent Debian/Ubuntu:

``` shell
sudo apt install cmake ninja-build clang lld
```

=== "Windows"

1. Install CMake from the
1. Install MSVC from Visual Studio or "Tools for Visual Studio" on the
[official downloads page](https://visualstudio.microsoft.com/downloads/)

2. Install CMake from the
[official downloads page](https://cmake.org/download/)

2. Install MSVC from Visual Studio or "Tools for Visual Studio" on the
[official downloads page](https://visualstudio.microsoft.com/downloads/)
3. Install Ninja from the [official site](https://ninja-build.org/).

!!! note
You will need to initialize MSVC by running `vcvarsall.bat` to use it
@@ -44,8 +52,38 @@ git submodule update --init

Configure then build all targets using CMake:

Configure CMake:

=== "Linux and MacOS"

``` shell
# Recommended for simple development using clang and lld:
cmake -GNinja -B ../iree-build/ -S . \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DIREE_ENABLE_ASSERTIONS=ON \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DIREE_ENABLE_LLD=ON

# Alternatively, with the system compiler and your choice of CMake generator:
# cmake -B ../iree-build/ -S .

# Additional quality of life CMake flags:
# Enable ccache:
# -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
```

=== "Windows"

``` shell
cmake -GNinja -B ../iree-build/ -S . \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DIREE_ENABLE_ASSERTIONS=ON
```

Build:

``` shell
cmake -B ../iree-build/ -DCMAKE_BUILD_TYPE=RelWithDebInfo .
cmake --build ../iree-build/
```
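
Once the build completes, a quick sanity check is to run the test suite with
CTest. This is a generic CMake workflow rather than an IREE-specific command;
which tests exist depends on your configure options:

``` shell
# Run tests from the build directory; -j sets parallelism.
cd ../iree-build/
ctest -j 8 --output-on-failure
```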

@@ -59,12 +97,8 @@ cmake --build ../iree-build/
for general details.

???+ Tip
Most IREE Core devs use [Ninja](https://ninja-build.org/) as the CMake
generator. The benefit is that it works the same across all platforms and
automatically takes advantage of parallelism. To use it, add a `-GNinja`
argument to your initial cmake command (and make sure to install
`ninja-build` from either your favorite OS package manager, or generically
via `python -m pip install ninja`).
You are welcome to try different CMake generators, but IREE devs and CIs
exclusively use [Ninja](https://ninja-build.org/).


## What's next?
3 changes: 2 additions & 1 deletion docs/website/docs/building-from-source/optional-features.md
@@ -87,6 +87,7 @@ Also see [instructions for installing pre-built binaries](../bindings/python.md)

``` shell
cmake \
-GNinja \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DIREE_BUILD_PYTHON_BINDINGS=ON \
-DPython3_EXECUTABLE="$(which python)" \
@@ -102,7 +103,7 @@ Also see [instructions for installing pre-built binaries](../bindings/python.md)
=== "Windows"

``` powershell
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DIREE_BUILD_PYTHON_BINDINGS=ON .
cmake -GNinja -DCMAKE_BUILD_TYPE=RelWithDebInfo -DIREE_BUILD_PYTHON_BINDINGS=ON .
cmake --build .

# Add bindings\python to PYTHONPATH and use the API.
4 changes: 2 additions & 2 deletions docs/website/docs/building-from-source/riscv.md
@@ -55,7 +55,7 @@ For RISC-V vector extensions support, see
Build and install on your host machine:

``` shell
cmake -B ../iree-build/ \
cmake -GNinja -B ../iree-build/ \
-DCMAKE_INSTALL_PREFIX=../iree-build/install \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
.
@@ -72,7 +72,7 @@ as a reference of how to set up the cmake configuration.
#### RISC-V 64-bit Linux target

```shell
cmake -B ../iree-build-riscv/ \
cmake -GNinja -B ../iree-build-riscv/ \
-DCMAKE_TOOLCHAIN_FILE="./build_tools/cmake/riscv.toolchain.cmake" \
-DIREE_HOST_BINARY_ROOT=$(realpath ../iree-build-host/install) \
-DRISCV_CPU=rv64 \
1 change: 0 additions & 1 deletion iree/compiler/Codegen/Common/BUILD
@@ -40,7 +40,6 @@ cc_library(
"LinalgBufferizePass.cpp",
"OptimizeVectorTransferPass.cpp",
"SetNumWorkgroupsPass.cpp",
"ShapeToLLVMConversion.cpp",
"VectorizeConv.cpp",
"VectorizeMMT4d.cpp",
],
1 change: 0 additions & 1 deletion iree/compiler/Codegen/Common/CMakeLists.txt
@@ -31,7 +31,6 @@ iree_cc_library(
"LinalgBufferizePass.cpp"
"OptimizeVectorTransferPass.cpp"
"SetNumWorkgroupsPass.cpp"
"ShapeToLLVMConversion.cpp"
"VectorizeConv.cpp"
"VectorizeMMT4d.cpp"
DEPS
20 changes: 15 additions & 5 deletions iree/compiler/Codegen/Common/CleanupBufferAllocViewPass.cpp
@@ -18,6 +18,7 @@
#include "iree/compiler/Dialect/HAL/IR/HALOps.h"
#include "mlir/Dialect/Linalg/IR/LinalgOps.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Matchers.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"
@@ -51,6 +52,13 @@ struct FoldReshapeIntoInterfaceTensorLoad : OpRewritePattern<TensorReshapeOp> {

LogicalResult matchAndRewrite(TensorReshapeOp reshapeOp,
PatternRewriter &rewriter) const override {
// TODO(antiagainst): enable dynamic shape support once it is needed.
auto reshapeSrcType = reshapeOp.src().getType().template cast<ShapedType>();
auto reshapeDstType = reshapeOp.getType().template cast<ShapedType>();
if (!reshapeSrcType.hasStaticShape() || !reshapeDstType.hasStaticShape()) {
return failure();
}

auto loadOp =
reshapeOp.src()
.template getDefiningOp<IREE::Flow::DispatchTensorLoadOp>();
@@ -66,16 +74,18 @@ struct FoldReshapeIntoInterfaceTensorLoad : OpRewritePattern<TensorReshapeOp> {
loadOp.source()
.template getDefiningOp<IREE::HAL::InterfaceBindingSubspanOp>();
if (!subspanOp) return failure();
assert(subspanOp.dynamic_dims().empty());

auto tensorAccess = subspanOp.getType()
.template cast<IREE::Flow::DispatchTensorType>()
.getAccess();
auto newSubspanType = IREE::Flow::DispatchTensorType::get(
subspanOp.getType()
.template cast<IREE::Flow::DispatchTensorType>()
.getAccess(),
reshapeOp.getResultType());
tensorAccess, reshapeOp.getResultType());

Value newSubspanOp = rewriter.create<IREE::HAL::InterfaceBindingSubspanOp>(
subspanOp.getLoc(), newSubspanType, subspanOp.binding(),
subspanOp.byte_offset(), subspanOp.byte_length());
subspanOp.byte_offset(), subspanOp.byte_length(),
subspanOp.dynamic_dims());

rewriter.replaceOpWithNewOp<IREE::Flow::DispatchTensorLoadOp>(
reshapeOp, reshapeOp.getResultType(), newSubspanOp);