TensorGalerkin

Python 3.8+ PyTorch License: MIT

TensorGalerkin is a unified algorithmic framework for the numerical solution, constrained optimization, and physics-informed learning of PDEs with a variational structure. Its efficiency stems from a novel Map-Reduce paradigm for Galerkin assembly that runs entirely on the GPU and integrates seamlessly with PyTorch's automatic differentiation.

TensorGalerkin Overview

Core Architecture

TensorGalerkin reformulates Galerkin assembly as a strictly tensorized Map-Reduce operation:

| Stage | Description | Implementation |
|---|---|---|
| Batch-Map | Evaluates all local stiffness matrices/load vectors in parallel via dense tensor contractions | Single `torch.einsum` kernel |
| Sparse-Reduce | Aggregates local contributions into the global system via precomputed routing matrices | Deterministic SpMM |

This architecture reduces the computational graph to O(1) monolithic nodes, eliminating Python interpreter overhead and maximizing GPU throughput.
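As an illustration only, the two stages can be sketched in NumPy for P1 triangles on a tiny hypothetical mesh; the library itself fuses the Batch-Map into a single `torch.einsum` kernel and performs the Sparse-Reduce as a deterministic SpMM, and all names and data below are assumptions rather than the repository's API:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Tiny illustrative mesh: the unit square split into two P1 triangles
points = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
elems = np.array([[0, 1, 2], [0, 2, 3]])               # (E, 3) connectivity
E, n_loc = elems.shape
N = points.shape[0]

# --- Batch-Map: local stiffness matrices for ALL elements at once ---
grad_ref = np.array([[-1., -1.], [1., 0.], [0., 1.]])  # P1 reference gradients (3, 2)
v = points[elems]                                      # (E, 3, 2) element vertices
J = np.stack([v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]], axis=1)  # (E, 2, 2) Jacobians
detJ = np.linalg.det(J)
grad = np.einsum('id,ekd->eik', grad_ref, np.linalg.inv(J))   # physical gradients
K_loc = np.einsum('e,eik,ejk->eij', 0.5 * np.abs(detJ), grad, grad)  # (E, 3, 3)

# --- Sparse-Reduce: scatter local entries via a precomputed COO routing ---
rows = np.repeat(elems, n_loc, axis=1).ravel()   # global row per local (i, j) entry
cols = np.tile(elems, (1, n_loc)).ravel()        # global col per local (i, j) entry
K = coo_matrix((K_loc.ravel(), (rows, cols)), shape=(N, N)).tocsr()
```

The key point is that no Python loop over elements appears anywhere: one contraction produces every local matrix, and one sparse scatter produces the global operator.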

Scope of This Repository

Note: This repository contains the core Map-Reduce algorithm for TensorGalerkin assembly, along with example applications demonstrating its use in PDE solving and physics-informed learning.

For comprehensive, production-ready implementations, please refer to our dedicated repositories:

  • 🔧 TensorMesh (coming soon): A full-featured, GPU-accelerated differentiable FEM solver supporting user-defined weak forms.
  • 🧠 TensorPils (coming soon): A complete physics-informed operator-learning framework for weak formulations, powered by TensorGalerkin.

Downstream Applications

The TensorGalerkin assembly paradigm enables three downstream applications:

| Framework | Application | Status |
|---|---|---|
| TensorMesh | Numerical PDE solver (FEM) | Examples here; full solver in a separate repo |
| TensorPils | Physics-informed learning system | Examples here; full framework in a separate repo |
| TensorOpt | PDE-constrained optimization | Integrated into TensorMesh |

Key Features

  • High-Performance Assembly: Tensorized element operations fused into a single GPU kernel, avoiding scatter-add loops
  • Analytical Gradients: Spatial derivatives computed via shape function gradients, bypassing expensive AD passes
  • Unstructured Mesh Support: Works on arbitrary triangular/quadrilateral meshes
  • Multiple PDEs: Supports elliptic (Poisson etc.), parabolic (Heat etc.), and hyperbolic (Wave etc.) equations
  • Zero-Compilation Agility: PyTorch eager execution handles dynamic meshes without JIT recompilation overhead
  • Fully Differentiable: End-to-end gradient flow for optimization and learning tasks
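The "Analytical Gradients" point can be made concrete with a minimal NumPy sketch for P1 triangles: once the physical shape-function gradients are precomputed per mesh, the spatial gradient of any nodal field is a single contraction, with no autograd pass. The mesh and all names below are illustrative assumptions:

```python
import numpy as np

# Illustrative two-triangle P1 mesh (hypothetical data)
points = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
elems = np.array([[0, 1, 2], [0, 2, 3]])

# Precompute physical shape-function gradients once per mesh
grad_ref = np.array([[-1., -1.], [1., 0.], [0., 1.]])         # P1 reference gradients
v = points[elems]
J = np.stack([v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]], axis=1)  # element Jacobians
grad = np.einsum('id,ekd->eik', grad_ref, np.linalg.inv(J))   # (E, 3, 2)

# Spatial gradient of a nodal field u on every element in one einsum
u = points[:, 0] * points[:, 1]                  # nodal values of u(x, y) = x * y
grad_u = np.einsum('eik,ei->ek', grad, u[elems])  # (E, 2), constant per element
```

Each row of `grad_u` is the exact gradient of the P1 interpolant on that element, obtained for the cost of one tensor contraction rather than a second-order AD sweep.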

Installation

From Source

git clone https://github.com/camlab-ethz/TensorGalerkin.git
cd TensorGalerkin
pip install -e .

Dependencies Only

pip install -r requirements.txt

Requirements

  • Python >= 3.8
  • PyTorch >= 1.12.0
  • PyTorch Geometric >= 2.0.0
  • NumPy, SciPy, Matplotlib
  • Gmsh (mesh generation)
  • MeshIO (mesh I/O)

Quick Start

Numerical PDE Solver (FEM)

import numpy as np
from src.datasets import Gmsh
from src.datasets.generators import PoissonGen

# Generate unstructured mesh
mesh = Gmsh.gen_rectangle(chara_length=0.02)

# Set boundary conditions (zero Dirichlet)
mesh.point_data['boundary_value'] = np.zeros_like(mesh.point_data['boundary_mask'], dtype=float)

# Generate random source function
f = PoissonGen.Random.source(mesh.points)

# Solve Poisson equation using FEM (TensorGalerkin assembly under the hood)
u = PoissonGen.Random.solution(mesh, f)

print(f"Solved on {mesh.points.shape[0]} nodes")

Optimizing PDE Residual

import numpy as np
import torch
from src.datasets import Gmsh
from src.equations import PoissonEquation

# Generate mesh and set up equation
mesh = Gmsh.gen_rectangle(chara_length=0.02)
mesh.point_data['boundary_value'] = np.zeros_like(mesh.point_data['boundary_mask'], dtype=float)
equation = PoissonEquation(mesh)

# Source term
f = torch.ones(mesh.points.shape[0])

# Optimize: minimize Galerkin residual ||K @ u - F||^2
u = torch.nn.Parameter(torch.zeros(mesh.points.shape[0]))
optimizer = torch.optim.Adam([u], lr=0.01)

for epoch in range(1000):
    optimizer.zero_grad()
    residual = equation.compute_residual_fast(u, f)
    loss = (residual ** 2).mean()
    loss.backward()
    optimizer.step()
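For intuition, the same residual-minimization idea can be reproduced on a self-contained stand-in system; the tridiagonal 1D Poisson matrix below is purely illustrative and is not produced by the repository's assembly:

```python
import numpy as np

# Stand-in for an assembled Galerkin system: 1D Poisson, 4 interior nodes
n = 4
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal stiffness
F = np.ones(n) / (n + 1) ** 2                          # load for f = 1, h = 1/(n+1)

# Minimize L(u) = mean((K u - F)^2) by explicit gradient descent
u = np.zeros(n)
lr = 0.1
for _ in range(5000):
    r = K @ u - F
    u -= lr * (2.0 / n) * (K.T @ r)  # gradient of the mean-squared residual
```

At convergence `u` coincides with the direct solution of `K u = F`, which is why minimizing the Galerkin residual recovers the FEM solution.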

Project Structure

TensorGalerkin/
├── src/                      # Core library
│   ├── discretization/       # Shape functions, Gaussian quadrature, tensor API
│   ├── equations/            # PDE weak forms: Poisson, Heat, Wave, AC, Helmholtz
│   ├── datasets/             # Mesh generation (Gmsh), PDE data generators
│   ├── models/               # GNN architectures: SIGN, GraphSAGE, GAT, etc.
│   ├── training/             # Optimizers, trainer base classes, utilities
│   └── utils/                # Boundary conditions, sparse operations
├── examples/                 # Example scripts
│   ├── poisson_solver.py     # Compare TensorPils vs PINN/VPINN/DeepRitz
│   └── wave_solver.py        # Wave equation neural operator (Galerkin + data loss)
├── notebooks/                # Jupyter tutorials
└── test/                     # Unit tests

Supported Equations

| Equation | Type | Strong Form | Variational Form |
|---|---|---|---|
| Poisson | Elliptic | $-\nabla \cdot (\rho \nabla u) = f$ | $\int_\Omega \rho \nabla u \cdot \nabla v = \int_\Omega f v$ |
| Helmholtz | Elliptic | $-\nabla^2 u - k^2 u = f$ | $\int_\Omega \nabla u \cdot \nabla v - k^2 \int_\Omega u v = \int_\Omega f v$ |
| Heat | Parabolic | $\partial_t u - \alpha \nabla^2 u = f$ | $\langle \partial_t u, v \rangle_M + \alpha\, \mathsf{a}(u, v) = \ell(v)$ |
| Wave | Hyperbolic | $\partial_{tt} u - c^2 \nabla^2 u = 0$ | $\langle \partial_{tt} u, v \rangle_M + c^2\, \mathsf{a}(u, v) = 0$ |
| Allen-Cahn | Parabolic | $\partial_t u - a^2 \nabla^2 u + \varepsilon^2 u(u^2-1) = 0$ | Semi-linear with reaction term |
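For the time-dependent equations, the mass pairing $\langle \cdot, \cdot \rangle_M$ is assembled with the same Batch-Map/Sparse-Reduce pattern as the stiffness form. A minimal NumPy sketch for P1 triangles, with an illustrative mesh and names that are assumptions rather than the repository's API:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Illustrative two-triangle P1 mesh (hypothetical data)
points = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
elems = np.array([[0, 1, 2], [0, 2, 3]])
N = points.shape[0]
E, n_loc = elems.shape

# Reference P1 mass matrix: exact integral of N_i * N_j over the unit triangle
M_ref = np.array([[2., 1., 1.], [1., 2., 1.], [1., 1., 2.]]) / 24.0

# Batch-Map: scale by |det J| for every element in one einsum
v = points[elems]
J = np.stack([v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]], axis=1)
M_loc = np.einsum('e,ij->eij', np.abs(np.linalg.det(J)), M_ref)  # (E, 3, 3)

# Sparse-Reduce: identical COO routing as for the stiffness matrix
rows = np.repeat(elems, n_loc, axis=1).ravel()
cols = np.tile(elems, (1, n_loc)).ravel()
M = coo_matrix((M_loc.ravel(), (rows, cols)), shape=(N, N)).tocsr()

# Sanity check: the entries of M sum to the domain area (here the unit square)
```

Because the routing matrices depend only on connectivity, they can be precomputed once and shared between the mass and stiffness reductions.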

Examples

Neural PDE Solver Comparison (Poisson)

# TensorPils (ours) - uses analytical shape gradients
python examples/poisson_solver.py --method ggl --epochs 10000

# PINN - requires 2nd-order AD
python examples/poisson_solver.py --method pinn --epochs 10000

# Deep Ritz - energy minimization
python examples/poisson_solver.py --method deepritz --epochs 10000

Wave Equation Neural Operator

Train a GNN-based neural operator for the wave equation with Galerkin loss, data loss, or both:

# Galerkin-only (physics-informed, no labeled data needed)
bash examples/run_wave_solver_galerkin.sh

Citation

If you use TensorGalerkin in your research, please cite:

@article{wen2026tensorgalerkin,
  title={Learning, Solving and Optimizing PDEs with TensorGalerkin: 
         an Efficient High-Performance Galerkin Assembly Algorithm},
  author={Wen, Shizheng and Chi, Mingyuan and Yu, Tianwei and Moseley, Ben 
          and Michelis, Mike Yan and Ren, Pu and Sun, Hao and Mishra, Siddhartha},
  journal={arXiv preprint arXiv:2602.05052},
  year={2026}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.


About

Code release of "Learning, Solving and Optimizing PDEs with TensorGalerkin: an Efficient High-Performance Galerkin Assembly Algorithm".
