```
      __   __   __        __   __   __        __   __        __          ___  __
 /\  |  \ /  \ |__)  /\   /  ` / _` |__)  /\   /  ` /  \ |\/| |__) | |    |__  |__)
/~~\ |__/ \__/ |  \ /~~\  \__, \__> |  \ /~~\  \__, \__/ |  | |    | |___ |___ |  \
```
An MLIR project for the CGRA SoC (FDRA Repository). ADORA includes two compilers (`tensor-opt` and `cgra-opt`) and a mapper (`cgra-mapper`) designed for the FDRA CGRA SoC:
**tensor-opt**
- High-level tensor transformation tool for CGRA-based DNN acceleration.
- Converts commonly used tensor operations into ADORATensor built-in operations (e.g., `linalg.matmul` -> `ADORATensor.Gemm`).
- Focuses on operator-level dataflow optimizations before hardware mapping.
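For illustration, an invocation might look like the sketch below; the pass name is an assumption for this example, not taken from the repository (check `tensor-opt --help` for the actual flags):

```bash
# Hypothetical pass name: rewrite common linalg ops into ADORATensor builtins
# (e.g., linalg.matmul -> ADORATensor.Gemm) ahead of hardware mapping.
tensor-opt kernel.mlir --convert-linalg-to-adoratensor -o kernel_adora.mlir
```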
**cgra-opt**
- Mid-level automated lowering and transformation framework.
- Performs affine optimizations for C kernels and for tensor operations not directly supported in the ADORATensor dialect.
- Generates Data-Flow Graphs (DFGs) for subsequent mapping.
- Can be executed independently of specific hardware information.
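Again a hedged sketch; the flag names below are assumptions for illustration only:

```bash
# Hypothetical flags: run affine optimizations, then emit a DFG for the mapper.
cgra-opt kernel_adora.mlir --affine-loop-opt --gen-dfg -o kernel.dfg
```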
**cgra-mapper**
- Low-level mapper that maps DFGs onto the Architecture Description Graph (ADG).
- Handles hardware-specific scheduling, placement, and resource allocation.
- Generates RISC-V executable files for the Rocket+CGRA SoC, or pytest files for the AXI-CGRA simulation environment.
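A sketch of what an invocation could look like (flag names assumed; the ADG file is the hardware description referenced by `CGRA_ADG_PATH` below):

```bash
# Hypothetical flags: place and route the DFG onto the ADG, then emit
# artifacts for the Rocket+CGRA flow.
cgra-mapper kernel.dfg --adg "$CGRA_ADG_PATH" -o kernel_mapped
```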
Repository layout:
- `tools`: Main entry points for `tensor-opt`, `cgra-opt`, and `cgra-mapper`.
- `include`/`lib`: C++ headers and source files for `cgra-opt` and `tensor-opt`.
- `mapper`: C++ source files for `cgra-mapper`.
- `build_tools`: Bash scripts for building LLVM and ADORA.
- `experiment`: ML benchmarks and C benchmarks (e.g., Polybench); follow the instructions below to run them. More benchmarks will be added soon.
- `env.sh`: Change the environment variables to your own, then source it.
Supported LLVM commits:
- `26eb4285b56edd8c897642078d91f16ff0fd3472` (same as the Polygeist GitHub repository)
- `b270525f730be6e7196667925f5a9bfa153262e9` (same as ONNX-MLIR v5.0.0)

You can download the specified version from the [LLVM GitHub repository](https://github.com/llvm/llvm-project).
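For example, to check out the Polygeist-compatible commit listed above:

```bash
git clone https://github.com/llvm/llvm-project.git
cd llvm-project
git checkout 26eb4285b56edd8c897642078d91f16ff0fd3472   # Polygeist-compatible commit
```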
To build and install LLVM, run the provided script:

```bash
./build_tools/build_llvm.sh
```

Then update the LLVM installation path in the following file to your own path:

- `build_adora.sh`

After making the changes, run `build_adora.sh`.

Tip: It is recommended to execute the script line by line (copying and pasting each command) for better control.
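In practice that edit looks roughly like the sketch below; the variable name is hypothetical, so check `build_adora.sh` for the actual one:

```bash
# Inside build_adora.sh: point the LLVM path variable (name assumed) at your install.
LLVM_INSTALL_DIR=/path/to/your/llvm-install   # hypothetical variable name
# Then, from the repository root:
./build_adora.sh
```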
You can download and install the Rocket+CGRA SoC from the appropriate repository:
FDRA Repository
Alternatively, if you want to run with a lightweight Python API (cocotb), use MatrixMeld.
Download and install torch-mlir.
Our supported version is torch-mlir_20250127.357 (corresponding torchvision version: 0.1.6.dev0).
You can find the official repository here:
[Torch-MLIR GitHub Repository](https://github.com/llvm/torch-mlir)
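One way to fetch a matching source snapshot; the tag naming scheme is an assumption derived from the version string above, so verify it against the repository's tags:

```bash
git clone https://github.com/llvm/torch-mlir.git
cd torch-mlir
git checkout snapshot-20250127.357   # tag name assumed; verify against the repo's tags
```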
Download and install Polygeist to run the C benchmarks.
You can find the Polygeist repository here:
[Polygeist GitHub Repository](https://github.com/llvm/Polygeist)
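Polygeist carries LLVM as a submodule, so a recursive clone is the usual starting point (the build steps are in Polygeist's own README):

```bash
git clone --recursive https://github.com/llvm/Polygeist.git
# Build against the Polygeist-compatible LLVM commit listed above,
# following the instructions in Polygeist's README.
```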
The `experiment` directory contains pre-transformed MLIR files ready to use.
Before running an example, you need to update the environment paths in `env.sh`:
```bash
######################
# Set your own environment paths in env.sh:
# - CGRVOPT_PROJECT_PATH: Path to your `cgra-opt` project (e.g., xxxx/cgra-opt)
# - CGRA_ADG_PATH: Path to the hardware information generated by the Chisel hardware generator
# - CHIPYARD_DIR: Path to Chipyard for final RISC-V compilation and linking
#
# Note:
# - If you only want to generate the DFG and do not need to simulate the benchmark,
#   the CHIPYARD_DIR path is not required.
######################
```
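A minimal `env.sh` sketch following those notes; all paths are placeholders, and the ADG filename is an assumption:

```bash
export CGRVOPT_PROJECT_PATH="$HOME/cgra-opt"            # your cgra-opt project path
export CGRA_ADG_PATH="$HOME/fdra/generated/adg.json"    # ADG from the Chisel generator (filename assumed)
export CHIPYARD_DIR="$HOME/chipyard"                    # optional if you only generate DFGs
```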
Update the paths in `env.sh` as described above, then source it:

```bash
source env.sh
```

To run the 'deriche' example from the Polybench benchmark, you can go with:

```bash
cd experiment/Cbenchmarks/Polybench/medley/deriche/deriche_mini
### From C source file to MLIR. This step can be skipped since the kernel is already in MLIR format.
bash scripts/0_compileCtoMLIR.sh
### For hardware-independent compilation
bash scripts/1_kernel_opt.sh
bash scripts/2_kernel_dfgs.sh
### For mapping and riscv link (Chipyard must be installed)
bash scripts/3_kernel_map.sh
bash scripts/4_get_all_asms.sh
bash scripts/5_compile_and_link.sh
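### Optional convenience loop (a sketch, not from the repo): run steps 1-5
### in order, stopping at the first failure.
# for s in scripts/[1-5]_*.sh; do bash "$s" || break; done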
### The final linked RISC-V executable can now be simulated using tools such as VCS or Verilator.
```

If you use ADORA in your work, please cite:

```bibtex
@inproceedings{adora-dac2025,
  title={Adora Compiler: End-to-End Optimization for High-Efficiency Dataflow Acceleration and Task Pipelining on CGRAs},
  author={Lou, Jiahang and Zhu, Qilong and Dai, Yuan and Zhong, Zewei and Yin, Wenbo and Wang, Lingli},
  booktitle={2025 62nd ACM/IEEE Design Automation Conference (DAC)},
  year={2025},
  organization={IEEE}
}
```