A comprehensive tutorial series on Physics-Informed Neural Networks (PINNs) — teaching how to embed physical laws directly into neural network training. This course progresses from simple ODEs to complex PDEs, building intuition through hands-on examples.
By completing this course, you will:
- Understand the PINN paradigm: Learn why and when to embed physics into neural networks
- Master automatic differentiation: Use PyTorch autograd to compute derivatives through networks
- Solve forward problems: Predict system behavior given physical parameters and equations
- Solve inverse problems: Estimate unknown physical parameters from noisy observations
- Handle increasing complexity: Progress from 1D ODEs to 2D PDEs with shock formation
- Recognize PINN limitations: Understand when PINNs struggle (e.g., sharp gradients, shocks)
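The autograd objective above reduces to a few `torch.autograd.grad` calls. A minimal sketch (the tiny network and input grid here are illustrative, not the course architecture):

```python
import torch

# Differentiate a network output with respect to its input via autograd.
net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

t = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)
u = net(t)

# grad_outputs=ones gives per-sample derivatives; create_graph=True keeps the
# graph so higher-order derivatives (needed for second-order ODEs) still work.
du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True)[0]
assert du_dt.shape == t.shape  # one derivative value per input point
```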
The course consists of three progressively challenging notebooks:
File: notebooks/01_simple_harmonic_oscillator.ipynb
| Topic | Description |
|---|---|
| Physics | Undamped oscillator: $\ddot{u} + \omega_0^2 u = 0$ |

| Concepts | Loss functions, vanilla NN vs PINN, initial conditions |
| Key Insight | Physics constraints prevent overfitting and enable extrapolation |
What you'll implement:
- Neural network architecture with tanh activation
- Data loss (MSE) for fitting observations
- Physics loss encoding the ODE
- Comparison: vanilla NN fails to extrapolate, PINN succeeds
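The physics loss in that list can be sketched as follows for the undamped oscillator: differentiate the network twice and penalize the ODE residual. Names, sizes, and the $\omega_0$ value are illustrative, not the notebook's code:

```python
import torch

def physics_loss(net, t, omega0=2.0):
    """MSE of the ODE residual u'' + omega0^2 * u = 0 at collocation points t."""
    t = t.requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + omega0**2 * u
    return (residual**2).mean()

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
t_phys = torch.linspace(0.0, 4.0, 100).reshape(-1, 1)
loss = physics_loss(net, t_phys)  # scalar, backpropagates into net's weights
```

Adding this term to the data MSE is what lets the PINN extrapolate where the vanilla network fails.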
File: notebooks/02_damped_harmonic_oscillator.ipynb
| Topic | Description |
|---|---|
| Physics | Damped oscillator: $\ddot{u} + 2\gamma \dot{u} + \omega_0^2 u = 0$ |
| Regimes | Underdamped, critically damped, overdamped |
| Advanced | Inverse problem — learn damping coefficient from data |
What you'll implement:
- Forward problem for all three damping regimes
- Inverse problem: estimate unknown $\gamma$ from noisy observations
- Parameterized PINN: one network for any $(d, \omega_0)$ values
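The inverse problem hinges on one idea: register the unknown coefficient as a trainable parameter next to the network weights. A sketch under illustrative names and values (not the notebook's code):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
gamma = torch.nn.Parameter(torch.tensor(0.5))  # initial guess for the unknown damping
opt = torch.optim.Adam(list(net.parameters()) + [gamma], lr=1e-3)

t = torch.linspace(0.0, 4.0, 100).reshape(-1, 1).requires_grad_(True)
u = net(t)
du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]

# Residual of the damped oscillator u'' + 2*gamma*u' + omega0^2 * u = 0
residual = d2u + 2 * gamma * du + 4.0 * u  # omega0 = 2, illustrative
loss = (residual**2).mean()  # in practice, add a data loss on noisy observations

opt.zero_grad()
loss.backward()
opt.step()  # gamma is updated along with the network weights
```

Because the residual depends on `gamma`, gradient descent drives it toward the value consistent with the observed trajectory.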
Key equations:
| Regime | Condition | Solution Behavior |
|---|---|---|
| Underdamped | $\gamma < \omega_0$ | Oscillates with decay |
| Critically damped | $\gamma = \omega_0$ | Fastest return to equilibrium |
| Overdamped | $\gamma > \omega_0$ | Slow exponential decay |
File: notebooks/03_burgers_equation.ipynb
| Topic | Description |
|---|---|
| Physics | Viscous Burgers: $u_t + u u_x = \nu u_{xx}$ |
| Challenge | Shock formation, nonlinear PDE, 2D input (t, x) |
| Validation | Finite-difference reference solver comparison |
What you'll implement:
- 2D PINN architecture: $(t, x) \rightarrow u$
- PDE residual with mixed partial derivatives
- Shock diagnostics: gradient magnitude heatmaps
- Viscosity sweep: see how PINN accuracy degrades with sharper shocks
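The PDE residual for Burgers follows the same autograd pattern, now with two inputs. A sketch (network shape and the viscosity value are illustrative):

```python
import torch

def burgers_residual(net, t, x, nu=0.01 / torch.pi):
    """Residual of u_t + u*u_x - nu*u_xx for a network mapping (t, x) -> u."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
t = torch.rand(64, 1)               # t in [0, 1]
x = 2.0 * torch.rand(64, 1) - 1.0   # x in [-1, 1]
res = burgers_residual(net, t, x)
```

Note that `t` and `x` must be separate leaf tensors so each partial derivative can be taken independently.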
Key physics:
- Shock formation time: $t^* = 1/\pi \approx 0.318$ for $u(0,x) = -\sin(\pi x)$
- Low viscosity → sharp shocks → higher PINN error
- Collocation density study: more physics points → better accuracy
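For the collocation density study, physics points are just random samples of the space-time domain; the domain bounds below assume $t \in [0, 1]$, $x \in [-1, 1]$, and the function name is illustrative:

```python
import torch

def sample_collocation(n, t_max=1.0, x_min=-1.0, x_max=1.0):
    """Uniformly sample n collocation points (t, x) where the PDE residual
    will be penalized; denser sampling generally improves accuracy."""
    t = t_max * torch.rand(n, 1)
    x = x_min + (x_max - x_min) * torch.rand(n, 1)
    return t, x

t_c, x_c = sample_collocation(2000)
```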
- Python 3.13 or higher
- Basic understanding of:
- Neural networks and PyTorch
- Differential equations (ODEs and PDEs)
- Calculus (derivatives, gradients)
```bash
# Clone the repository
git clone https://github.com/DarshKodwani/DomainDefinedDeepLearning.git
cd DomainDefinedDeepLearning

# Install dependencies with uv
uv sync

# Launch Jupyter
uv run jupyter notebook
```

```bash
# Clone the repository
git clone https://github.com/DarshKodwani/DomainDefinedDeepLearning.git
cd DomainDefinedDeepLearning

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Launch Jupyter
jupyter notebook
```

- Install Docker and the Dev Containers extension
- Open the repository in VS Code
- Click "Reopen in Container" when prompted (or use `Cmd/Ctrl + Shift + P` → "Dev Containers: Open Folder in Container")
| Package | Version | Purpose |
|---|---|---|
| `torch` | ≥2.9.1 | Neural networks, autograd |
| `numpy` | ≥2.4.0 | Numerical operations |
| `matplotlib` | ≥3.10.8 | Visualization |
| `notebook` | ≥7.5.1 | Jupyter notebooks |
| `ipykernel` | ≥7.1.0 | Jupyter kernel |
Each notebook follows a consistent structure:
- Learning Objectives — Clear goals at the start
- Physics Background — Mathematical derivation with LaTeX equations
- Implementation — Code cells with `# TODO` comments for students
- Validation — Assert statements to verify implementations
- Visualization — Rich plots showing physics and PINN behavior
- Summary — Key takeaways and references
Student exercises are marked with:
```python
# TODO: Description of what to implement
# Hint: Helpful guidance
# SOLUTION START
# ... instructor solution ...
# SOLUTION END
```

| Concept | Notebook 1 | Notebook 2 | Notebook 3 |
|---|---|---|---|
| Neural network basics | ✅ | ✅ | ✅ |
| Automatic differentiation | ✅ | ✅ | ✅ |
| Physics loss function | ✅ | ✅ | ✅ |
| Initial conditions | ✅ | ✅ | ✅ |
| Boundary conditions | — | — | ✅ |
| Forward problem | ✅ | ✅ | ✅ |
| Inverse problem | — | ✅ | — |
| Multiple regimes | — | ✅ | ✅ |
| Parameterized PINN | — | ✅ | — |
| Shock/sharp gradients | — | — | ✅ |
| Numerical validation | — | — | ✅ |
Traditional neural networks learn purely from data. PINNs augment this with physical constraints:
$$
\mathcal{L} = \mathcal{L}_{\text{data}} + \lambda \, \mathcal{L}_{\text{physics}}
$$

Where:
- $\mathcal{L}_{\text{data}}$: fit observations (e.g., initial/boundary conditions)
- $\mathcal{L}_{\text{physics}}$: satisfy the governing differential equation
- $\lambda$: weight balancing the two terms
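One training step combining the two loss terms can be sketched as follows (the weight, hyperparameters, and the oscillator example are illustrative, not the course code):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_data = torch.tensor([[0.0]])
u_data = torch.tensor([[1.0]])  # e.g. the initial condition u(0) = 1
t_phys = torch.linspace(0.0, 4.0, 100).reshape(-1, 1).requires_grad_(True)

# Physics term: residual of u'' + omega0^2 * u = 0 with omega0 = 2 (illustrative)
u = net(t_phys)
du = torch.autograd.grad(u, t_phys, torch.ones_like(u), create_graph=True)[0]
d2u = torch.autograd.grad(du, t_phys, torch.ones_like(du), create_graph=True)[0]
loss_phys = ((d2u + 4.0 * u) ** 2).mean()

# Data term: MSE against observations
loss_data = ((net(t_data) - u_data) ** 2).mean()

lambda_phys = 1e-1  # illustrative weighting
loss = loss_data + lambda_phys * loss_phys

opt.zero_grad()
loss.backward()
opt.step()
```

Tuning the weighting between the two terms is one of the practical difficulties the notebooks explore.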
| Traditional ML | Physics-Informed ML |
|---|---|
| Requires lots of data | Works with sparse data |
| May violate physics | Respects physical laws |
| Interpolation only | Can extrapolate |
| Black box | Interpretable constraints |
✅ Good for:
- Sparse or expensive data
- Well-known governing equations
- Smooth solutions
- Inverse problems (parameter estimation)
⚠️ Challenging for:
- Sharp gradients / shocks
- Highly turbulent flows
- Discontinuous solutions
- Very high-dimensional problems
```
DomainDefinedDeepLearning/
├── README.md              # This file
├── LICENSE                # MIT License
├── pyproject.toml         # Project configuration
├── requirements.txt       # Pip dependencies
├── notebooks/
│   ├── 01_simple_harmonic_oscillator.ipynb
│   ├── 02_damped_harmonic_oscillator.ipynb
│   └── 03_burgers_equation.ipynb
├── plots/                 # Generated figures
└── .devcontainer/         # VS Code dev container config
```
- Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707.
- Lagaris, I. E., Likas, A., & Fotiadis, D. I. (1998). Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5), 987-1000.
- Karniadakis, G. E., et al. (2021). Physics-informed machine learning. Nature Reviews Physics, 3(6), 422-440.
- DeepXDE Library — Popular PINN library
- NVIDIA Modulus — Industrial-scale physics-ML framework
- PyTorch Autograd Tutorial
We use Ruff for linting and formatting:
```bash
# Check for issues
uv run ruff check notebooks/

# Auto-fix issues
uv run ruff check --fix notebooks/

# Format code
uv run ruff format notebooks/
```

Configuration in pyproject.toml:

```toml
[tool.ruff]
line-length = 150

[tool.ruff.lint]
extend-select = ["I"]  # Import sorting
```

- Fork the repository
- Create a feature branch
- Make your changes
- Run `ruff check` and `ruff format`
- Submit a pull request
This project is licensed under the MIT License — see the LICENSE file for details.
- Physics-Informed Neural Networks Course Development Team
- Microsoft AI for Good Research Lab
- The PINN community for foundational research
- PyTorch team for automatic differentiation
- Students and instructors who provided feedback
Last updated: December 2025