
      __   __   __           __   __   __           __   __         __          ___  __  
 /\  |  \ /  \ |__)  /\     /  ` / _` |__)  /\     /  ` /  \  |\/| |__) | |    |__  |__) 
/~~\ |__/ \__/ |  \ /~~\    \__, \__> |  \ /~~\    \__, \__/  |  | |    | |___ |___ |  \ 
                                                                                           

ADORA: Adaptive Dataflow Optimization for Reconfigurable Architectures.

An MLIR project for a CGRA SoC (FDRA Repository). ADORA includes two compilers and a mapper designed for the FDRA CGRA SoC; a sketch of how they chain together appears after the list:

  • tensor-opt

    · High-level tensor transformation tool for CGRA-based DNN acceleration.

    · Converts commonly used tensor operations to ADORATensor built-in operations (e.g., linalg.matmul -> ADORATensor.Gemm).

    · Focuses on operator-level dataflow optimizations before hardware mapping.

  • cgra-opt

    · Mid-level automated lowering and transformation framework.

    · Performs affine optimizations for C kernels and tensor operations not directly supported in the ADORATensor dialect.

    · Generates Data-Flow Graphs (DFGs) for subsequent mapping.

    · Can be executed independently of specific hardware information.

  • cgra-mapper

    · Low-level mapper that maps DFGs to the Architecture Description Graph (ADG).

    · Handles hardware-specific scheduling, placement, and resource allocation.

    · Generates RISC-V executable files for the Rocket+CGRA SoC.

    · Alternatively, generates pytest files for the AXI-CGRA simulation environment.
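
The three tools form one pipeline from tensor or C kernels down to a mapped executable. The commands below are only a minimal sketch of that flow: the flags, pass names, and file names are assumptions, not the real CLI; the actual invocations are in the scripts under experiment/ (see "Run an Example" below).

# Illustrative pipeline sketch -- flag and file names are assumptions, not the real CLI.
tensor-opt  model.mlir             -o model_adoratensor.mlir   # lift tensor ops to ADORATensor ops
cgra-opt    model_adoratensor.mlir -o kernel_dfg               # affine optimization + DFG generation
cgra-mapper kernel_dfg --adg $CGRA_ADG_PATH -o kernel.riscv    # map the DFG onto the ADG, emit the executable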

If you have any issues related to this repository, please don't hesitate to get in touch!

Directories:

  • tools : Contains the main functions for tensor-opt, cgra-opt and cgra-mapper

  • include / lib : C++ headers and source files for cgra-opt and tensor-opt

  • mapper : C++ source files for cgra-mapper

  • build_tools : Bash scripts for building LLVM and Adora

  • experiment : Includes ML benchmarks and C benchmarks (e.g., Polybench). Follow the instructions below to run them. More benchmarks will be added soon.

  • env.sh : Environment variable settings. Update the paths to your own and source the file before running any scripts.


Build

1 LLVM-18

LLVM Commit: 26eb4285b56edd8c897642078d91f16ff0fd3472 (Same as Polygeist GitHub Repository)

LLVM Commit: b270525f730be6e7196667925f5a9bfa153262e9 (Same as ONNX-MLIR v5.0.0)

You can download the specified version from the following link: LLVM GitHub Repository

To install LLVM, run the following script:

./build_tools/build_llvm.sh
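
If you want to fetch the LLVM sources yourself before running the script, checking out one of the pinned commits looks roughly like this (whether build_llvm.sh expects the source tree at a particular path, or clones it itself, is something to verify in the script):

# Clone LLVM and check out one of the pinned commits above (the Polygeist-compatible one shown here).
git clone https://github.com/llvm/llvm-project.git
cd llvm-project
git checkout 26eb4285b56edd8c897642078d91f16ff0fd3472
cd .. && ./build_tools/build_llvm.sh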

2 Adora (this project)

Update the LLVM installation path in the following file to your own path:

  • build_adora.sh

After making the changes, run build_adora.sh.
Tip: It is recommended to execute the script line by line (copy-pasting each command) for better control.
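
A minimal sketch of this step, assuming build_adora.sh sits in build_tools/ next to build_llvm.sh and takes the LLVM path from a variable near the top of the script (the exact variable name is not fixed here; check the script itself):

# Edit the LLVM installation path inside the script, then build ADORA.
vim  build_tools/build_adora.sh    # set the LLVM install path to your own
bash build_tools/build_adora.sh    # or copy-paste its commands one by one, as recommended above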

3 Rocket+CGRA SoC

You can download and install the Rocket+CGRA SoC from the appropriate repository:
FDRA Repository

Alternatively, if you want to run with a lightweight Python API (cocotb), use MatrixMeld.
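
For the cocotb route, the usual prerequisites are a Python environment with cocotb and pytest, plus an RTL simulator such as Verilator on the PATH; the test entry points come from MatrixMeld and from the pytest files generated by cgra-mapper. A rough sketch (the test directory is a placeholder):

# Install the Python-side simulation prerequisites; an RTL simulator (e.g., Verilator) must also be installed.
pip install cocotb pytest
# Run the generated pytest files from the MatrixMeld environment -- the path below is a placeholder.
pytest path/to/generated_axi_cgra_tests/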

4 Other Dependencies You May Need

For AI Benchmarks: torch-mlir

Download and install torch-mlir.
The supported version is torch-mlir_20250127.357 (corresponding torch-vision version: 0.1.6.dev0).

You can find the official repository here:
Torch MLIR GitHub Repository
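
A hedged install sketch for the pinned nightly; the wheel index URL below follows torch-mlir's usual packaging and may have changed, so fall back to the torch-mlir README if it fails:

# Install the pinned torch-mlir nightly (index URL may change; see the torch-mlir README if this fails).
pip install --pre torch-mlir==20250127.357 -f https://llvm.github.io/torch-mlir/package-index/
pip show torch-mlir    # confirm the installed version is 20250127.357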

For C Benchmarks: Polygeist

Download and install Polygeist to run C benchmarks.
You can find the Polygeist repository here:
Polygeist GitHub Repository
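
Polygeist provides the cgeist frontend, which is what turns the C benchmark sources into MLIR (step 0 of the example below). A rough invocation with placeholder names; scripts/0_compileCtoMLIR.sh contains the flags actually used:

# Translate a C kernel into MLIR with Polygeist's cgeist frontend (function and file names are placeholders).
cgeist deriche.c -function=kernel_deriche -S -o deriche.mlir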


Run an Example

The experiment directory contains pre-transformed MLIR files ready to use.

Step 1: Set Up Environment Variables

Before running an example, you need to update the environment paths in env.sh:

######################
# Set your own environment paths in env.sh:
# - CGRVOPT_PROJECT_PATH: Path to your `cgra-opt` project (e.g., xxxx/cgra-opt)
# - CGRA_ADG_PATH: Path to the hardware information generated by the Chisel hardware generator
# - CHIPYARD_DIR: Path to Chipyard for final RISC-V compilation and linking
#
# Note:
# - If you only want to generate the DFG and do not need to simulate the benchmark,
#   the CHIPYARD_DIR path is not required.
######################
source env.sh  # Update paths in env.sh as described above.
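
For reference, a filled-in env.sh might look like the sketch below. The variable names come from the comment block above; the paths are placeholders for your own installation:

# Example env.sh contents -- all paths are placeholders.
export CGRVOPT_PROJECT_PATH=$HOME/adora-compiler      # path to this cgra-opt project
export CGRA_ADG_PATH=$HOME/FDRA/generated/adg         # ADG emitted by the Chisel hardware generator (layout is an assumption)
export CHIPYARD_DIR=$HOME/chipyard                    # only needed for RISC-V compilation, linking, and simulation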

Step 2: Run the 'deriche' example: C -> MLIR -> DFG -> mapping result -> executable file

To run the 'deriche' example from the Polybench benchmark suite:

cd experiment/Cbenchmarks/Polybench/medley/deriche/deriche_mini

### From C source file to MLIR. This step can be skipped since the kernel is already in MLIR format.
bash scripts/0_compileCtoMLIR.sh

### For hardware-independent compilation
bash scripts/1_kernel_opt.sh
bash scripts/2_kernel_dfgs.sh

### For mapping and riscv link (Chipyard must be installed)
bash scripts/3_kernel_map.sh
bash scripts/4_get_all_asms.sh
bash scripts/5_compile_and_link.sh

### The final linked RISC-V executable file can now be simulated using tools such as VCS or Verilator.
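
Once the linked executable exists, it can be run in a Chipyard RTL simulation. Below is a hedged sketch using Chipyard's standard Verilator flow; the CONFIG name and the binary path are placeholders that depend on your FDRA SoC configuration and on what step 5 produced:

# Simulate the linked RISC-V executable on the Rocket+CGRA SoC (Chipyard Verilator flow).
# CONFIG and BINARY are placeholders -- substitute the FDRA SoC configuration and the file from step 5.
cd $CHIPYARD_DIR/sims/verilator
make CONFIG=YourCGRASoCConfig run-binary BINARY=/path/to/deriche_linked.riscv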

Related Publications

@inproceedings{
  title={Adora Compiler: End-to-End Optimization for High-Efficiency Dataflow Acceleration and Task Pipelining on CGRAs},
  author={Lou, Jiahang and Zhu, Qilong and Dai, Yuan and Zhong, Zewei and Yin, Wenbo and Wang, Lingli},
  booktitle={2025 62nd ACM/IEEE Design Automation Conference (DAC)},
  year={2025},
  organization={IEEE}
}
