
MiniOps.jl

A lightweight, matrix-free linear operator framework for inverse problems, signal processing, seismic imaging, and PDE/FWI kernels.

MiniOps allows you to write expressions such as:

y  = A * x
x̂  = A' * y
L  = R * F * A
z  = L * x

where A, F, R, … are linear operators, not matrices.

The goals:

  • Minimal but powerful abstraction
  • Matrix-free forward/adjoint operators
  • Shape-agnostic (1D, 2D, ND arrays)
  • Real/complex type-agnostic
  • Clean algebra: composition, adjoint, scalar multiplication
  • Essential operators:
    • 1D convolution
    • columnwise convolution
    • sampling
    • scaling & diagonal ops
    • FFT / inverse FFT
  • Diagnostics: adjoint test, linearity test, spectral-norm estimate (from the largest eigenvalue of A'A)

MiniOps supports two design families of linear operators:

  • Size-agnostic operators (generic, work on any array)
  • Size-fixed operators (optimized, assume a known grid)

Both approaches have advantages and trade-offs.

Size-Agnostic Operators: A size-agnostic operator does not know the shape of its input a priori. It simply takes whatever array you give it and applies the transform. Example: a unitary FFT defined via closures:

F = fft_op()          # works for any shape
y = F * x
x2 = F' * y

Size-Fixed (Planned) Operators: A size-fixed operator stores shape information at construction time. This enables precomputation and better performance.

F = fft_op(x0)        # remembers size(x0)
y = F * x

When to Use Which Approach

Use size-agnostic operators for:

  • Classroom demonstrations.
  • Quick experiments.
  • Code that must accept arbitrary input.

Use size-fixed operators for:

  • Production solvers.
  • Iterative schemes (ISTA, FISTA, CG, LSQR).
  • Full waveform inversion, RTM, Radon transforms, and any operator applied many times inside an iterative loop.

1. Installation & Usage

Inside the package folder:

using Pkg
Pkg.activate(".")
Pkg.instantiate()

Then:

using MiniOps

For development:

using Pkg
Pkg.develop(path="/path/to/MiniOps")

2. Core Concept: Op

An Op represents a matrix-free linear operator:

struct Op{F,FT}
    f::F      # forward  x -> A*x
    ft::FT    # adjoint  y -> A'*y
    m::Int    # number of elements in codomain (optional)
    n::Int    # number of elements in domain   (optional)
    name::Symbol
end

Key ideas:

  • f/ft are closures (forward/adjoint).
  • m and n default to -1 (shape unknown).
  • Operators compose:
L = R * F * A
  • Adjoint works automatically:
At = A'
  • Operators work on arrays of any shape.

Primary overloads:

Expression    Meaning
A * x         forward
A(x)          same as above
A' * y        adjoint
C = A * B     composition
size(A)       returns (m, n)
α * A         scalar multiplication
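
For example, composition, adjoints, and scalar multiplication combine as expected. A small sketch using the built-in operators documented in Section 4:

A = conv1d_op(randn(7))   # 1D convolution (Section 4.1)
F = fft_op()              # unitary FFT (Section 4.6)
L = 0.5 * (F * A)         # scaled composite operator
x  = randn(256)
y  = L * x                # forward: 0.5 * (F * (A * x))
xb = L' * y               # adjoint: applies the factors in reverse order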

3. Helper: with_shape

Assign meaningful (m,n) based on a sample input:

A_shaped = with_shape(A, x0)

Sets:

  • n = length(vec(x0))
  • m = length(vec(A*x0))

Example:

F = fft_op()
X0 = randn(ComplexF64,128,128)
F = with_shape(F, X0)
@show size(F)

4. Built-in Operators

4.1 conv1d_op(h)

1D convolution operator.

A = conv1d_op(h)
y = A * x

4.2 conv1d_cols_op(h)

Columnwise convolution (applied to each trace).

A = conv1d_cols_op(h)
Y = A * X
Z = A' * Y

4.3 sampling_op(idx, full_size)

Generalized sampling (masking): forward extracts samples, adjoint scatters back.
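
A usage sketch; the exact form of idx and full_size is assumed here (linear indices into an array of the given size):

n   = 100
idx = collect(1:3:n)          # keep every third sample (assumed: linear indices)
S   = sampling_op(idx, (n,))  # full_size assumed to be the size of the full array
x   = randn(n)
d   = S * x                   # forward: extract x[idx]
xb  = S' * d                  # adjoint: scatter d back into a length-n array, zeros elsewhere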


4.4 scaling_op(α)

Scalar multiplication operator.
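
For example:

A  = scaling_op(2.5)
x  = randn(64)
y  = A * x       # y == 2.5 .* x
xb = A' * y      # for real α the adjoint is the same scaling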


4.5 diag_op(w)

Diagonal operator: y = w .* x
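
For example:

w  = rand(128)
D  = diag_op(w)
x  = randn(128)
y  = D * x       # elementwise: w .* x
xb = D' * y      # adjoint applies the (conjugated) weights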


4.6 fft_op()

Matrix-free FFT operator; the adjoint applies the inverse (backward) FFT.

F = fft_op()
Y = F * X
X2 = F' * Y

4.7 pad_op()

P = pad_op((4,4,4), (2,2,2), (2,2,2))
A = P * randn(4,4,4)
# A has been padded with 2 samples at the beginning and end
# of each dimension

5. Diagnostics

5.1 adjoint_test(A, x, y)

Validates inner-product identity:

$$ \langle Ax, y \rangle \approx \langle x, A'y \rangle. $$
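
A usage sketch (the reporting convention of adjoint_test, printed values or a returned flag, is assumed):

A = conv1d_op(randn(9))
x = randn(200)
y = A * x
adjoint_test(A, x, y)   # compares <A*x, y> with <x, A'*y>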


5.2 linearity_test(A, x, z)

Checks that A acts linearly on the test inputs x and z, e.g. A(x + z) ≈ A x + A z.
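
For example (reporting convention assumed):

A = fft_op()
x = randn(ComplexF64, 64)
z = randn(ComplexF64, 64)
linearity_test(A, x, z)   # checks, e.g., A * (x + z) ≈ A * x + A * z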


5.3 opnorm_power(A, x0)

Power-method estimate of the spectral norm sn = ‖A‖₂. The largest eigenvalue of A'A is sn².
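
Sketch:

A  = conv1d_op(randn(9))
x0 = randn(200)             # starting vector for the power iteration
sn = opnorm_power(A, x0)    # estimate of ‖A‖₂; sn^2 estimates the largest eigenvalue of A'A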


5.4 is_selfadjoint(A, x)

Tests whether A ≈ A'.
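
For example, a diagonal operator with real weights is self-adjoint (return convention assumed):

D = diag_op(rand(64))
is_selfadjoint(D, randn(64))   # expected: true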


6. Solvers


6.1 ista(A, y, u0, mu, step_size; niter=100, verbose=false)

Iterative Soft-Thresholding Algorithm (ISTA): minimizes ||A x - y||_2^2 + mu ||x||_1.
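
A sketch of sparse spike deconvolution; the return value of ista is assumed to be the recovered model, and the step-size convention is an assumption:

h = randn(15)                             # wavelet
A = conv1d_op(h)
x_true = zeros(200)
x_true[[20, 80, 150]] .= [1.0, -0.7, 0.5] # sparse reflectivity
y = A * x_true                            # clean data
sn   = opnorm_power(A, randn(200))        # spectral-norm estimate (Section 5.3)
step = 0.9 / sn^2                         # assumed safe step size for ISTA
x_hat = ista(A, y, zeros(200), 0.1, step; niter=200)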


6.2 cgls(A, b, mu, x0; tol=1e-6, max_iter=1000)

Conjugate-gradient least squares: minimizes ||A x - b||_2^2 + mu ||x||_2^2.
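
A sketch of damped least-squares deconvolution; the return value of cgls is assumed to be the solution:

A = conv1d_op(randn(15))
x_true = randn(200)
b = A * x_true
x_ls = cgls(A, b, 0.01, zeros(200); tol=1e-8, max_iter=500)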


7. Extending MiniOps

function my_op()
    f  = x -> forward_code(x)
    ft = y -> adjoint_code(y)
    Op(f, ft; m=-1, n=-1, name=:my_op)
end

Run:

A = my_op()
adjoint_test(A, randn(n), randn(m))   # n = domain length, m = codomain length of my_op
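
As a concrete (hypothetical) illustration, a forward first-difference operator and its adjoint, built with the same pattern and verified with adjoint_test:

function diff1_op()
    f  = x -> x[2:end] .- x[1:end-1]                       # forward difference: length n -> n-1
    ft = y -> vcat(-y[1], y[1:end-1] .- y[2:end], y[end])  # adjoint (transpose of the difference matrix)
    Op(f, ft; m=-1, n=-1, name=:diff1)
end

D = diff1_op()
adjoint_test(D, randn(100), randn(99))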

8. More

See 00_demo.ipynb, 01_demo.ipynb, and 02_demo.ipynb for worked examples.