From 96d7ff3828c0b77d9e8685a886a9ed30acd3ef45 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Sun, 19 Nov 2023 20:52:13 +0000
Subject: [PATCH] build based on 1a4cba9
---
 v2.2.6/LICENSE/index.html       |    2 +-
 v2.2.6/api/index.html           |   36 +-
 v2.2.6/examples/index.html      |   17 +-
 v2.2.6/figures/plot_banana.svg  | 2404 ++++++++++++++
 v2.2.6/figures/plot_cbanana.svg | 5445 +++++++++++++++++++++++++++++++
 v2.2.6/index.html               |    2 +-
 v2.2.6/search/index.html        |    2 +-
 v2.2.6/search_index.js          |    2 +-
 8 files changed, 7881 insertions(+), 29 deletions(-)
 create mode 100644 v2.2.6/figures/plot_banana.svg
 create mode 100644 v2.2.6/figures/plot_cbanana.svg

diff --git a/v2.2.6/LICENSE/index.html b/v2.2.6/LICENSE/index.html
index 789e21f8..0a6c8a43 100644
--- a/v2.2.6/LICENSE/index.html
+++ b/v2.2.6/LICENSE/index.html
@@ -1,2 +1,2 @@

LICENSE · Invertible Networks

MIT License

Copyright (c) 2020 SLIM group @ Georgia Institute of Technology

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

diff --git a/v2.2.6/api/index.html b/v2.2.6/api/index.html
index 273154c5..8b4d4185 100644
--- a/v2.2.6/api/index.html
+++ b/v2.2.6/api/index.html
@@ -1,53 +1,53 @@

API Reference · Invertible Networks

    Invertible Networks API reference

    InvertibleNetworks.get_gradsMethod
    P = get_grads(NL::Invertible)

    Returns a cell array of all parameter gradients in the network or layer. Each cell entry contains a reference to the original parameter's gradient; i.e. modifying the gradients in P modifies the gradients in NL.

    source
    InvertibleNetworks.get_paramsMethod
    P = get_params(NL::Invertible)

    Returns a cell array of all parameters in the network or layer. Each cell entry contains a reference to the original parameter; i.e. modifying the parameters in P modifies the parameters in NL.

    source
    InvertibleNetworks.clear_grad!Method
    clear_grad!(NL::NeuralNetLayer)

    or

    clear_grad!(P::AbstractArray{Parameter, 1})

    Set gradients of each Parameter in the network layer to nothing.

    source
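
    A short usage sketch of these parameter utilities, using the ActNorm layer documented under Layers below; the layer choice, array sizes and batch size are illustrative assumptions:

    using InvertibleNetworks

    AN = ActNorm(4; logdet=true)        # any invertible layer or network works here
    X  = randn(Float32, 16, 16, 4, 2)   # nx x ny x n_channel x batchsize
    Y, lgdet = AN.forward(X)            # first call initializes AN.s and AN.b
    ΔY = randn(Float32, size(Y)...)     # residual coming from the next layer
    ΔX, _ = AN.backward(ΔY, Y)          # populates the parameter gradients

    P = get_params(AN)                  # references to the Parameters of AN
    G = get_grads(AN)                   # references to the corresponding gradients
    clear_grad!(AN)                     # reset all gradients to nothing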

    Activation functions

    InvertibleNetworks.GaLUgradMethod
    Δx = GaLUgrad(Δy, x)

    Backpropagate data residual through GaLU activation.

    Input:

    • Δy: residual

    • x: original input (since not invertible)

    Output:

    • Δx: backpropagated residual

    See also: GaLU

    source
    InvertibleNetworks.ReLUgradMethod
    Δx = ReLUgrad(Δy, x)

    Backpropagate data residual through ReLU function.

    Input:

    • Δy: data residual

    • x: original input (since not invertible)

    Output:

    • Δx: backpropagated residual

    See also: ReLU

    source
    InvertibleNetworks.SigmoidGradMethod
    Δx = SigmoidGrad(Δy, y; x=nothing, low=nothing, high=nothing)

    Backpropagate data residual through Sigmoid function. Can be shifted and scaled such that output is (low,high]

    Input:

    • Δy: residual

    • y: original output

    • x: original input, if y not available (in this case, set y=nothing)

    • low: if provided then scale and shift such that output is (low,high]

    • high: if provided then scale and shift such that output is (low,high]

    Output:

    • Δx: backpropagated residual

    See also: Sigmoid, SigmoidInv

    source
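
    A minimal sketch of how these gradient functions are used, assuming ReLU is the forward activation referenced in "See also" above; the array sizes are illustrative:

    using InvertibleNetworks

    x  = randn(Float32, 16, 16, 4, 2)   # original input to the activation
    y  = ReLU(x)                        # forward pass
    Δy = randn(Float32, size(y)...)     # data residual from the next layer
    Δx = ReLUgrad(Δy, x)                # residual backpropagated through ReLU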

    Dimensions manipulation

    InvertibleNetworks.Haar_squeezeMethod
    Y = Haar_squeeze(X)

    Perform a 1-level channelwise 2D/3D (lifting) Haar transform of X and squeeze the output of each transform, increasing the number of channels by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.

    Input:

    • X: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize

    Output:

    if 4D tensor:

    • Y: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize

    or if 5D tensor:

    • Y: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize

    See also: wavelet_unsqueeze, Haar_unsqueeze, HaarLift, squeeze, unsqueeze

    source
    InvertibleNetworks.invHaar_unsqueezeMethod
    X = invHaar_unsqueeze(Y)

    Perform a 1-level inverse 2D/3D Haar transform of Y and unsqueeze output. This reduces the number of channels by factor of 4 in 4D tensors or by factor of 8 in 5D tensors and increases each spatial dimension by a factor of 2. Inverse operation of Haar_squeeze.

    Input:

    • Y: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize

    Output:

    If 4D tensor:

    • X: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize

    If 5D tensor:

    • X: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize

    See also: wavelet_unsqueeze, Haar_unsqueeze, HaarLift, squeeze, unsqueeze

    source
    InvertibleNetworks.squeezeMethod
    Y = squeeze(X; pattern="column")

    Squeeze operation that is only a reshape.

    Reshape the input image such that each spatial dimension is reduced by a factor of 2, while the number of channels is increased by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.

    Input:

    • X: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize

    • pattern: Squeezing pattern

       1 2 3 4        1 1 3 3        1 3 1 3
        1 2 3 4        1 1 3 3        2 4 2 4
        1 2 3 4        2 2 4 4        1 3 1 3
        1 2 3 4        2 2 4 4        2 4 2 4
       
        column          patch       checkerboard

    Output: if 4D tensor:

    • Y: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize

    or if 5D tensor:

    • Y: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize

    See also: unsqueeze, wavelet_squeeze, wavelet_unsqueeze

    source
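
    A shape round trip as a sketch, using the unsqueeze operation documented below; the tensor sizes and the pattern choice are illustrative assumptions:

    using InvertibleNetworks

    X  = randn(Float32, 16, 16, 4, 2)            # nx x ny x n_channel x batchsize
    Y  = squeeze(X; pattern="checkerboard")      # size(Y) == (8, 8, 16, 2)
    X2 = unsqueeze(Y; pattern="checkerboard")    # back to (16, 16, 4, 2)
    X2 ≈ X                                       # exact, since both operations are only reshapes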
    InvertibleNetworks.tensor_catMethod
    X = tensor_cat(Y, Z)

    Concatenate ND input tensors along the channel dimension. Inverse operation of tensor_split.

    Input:

    • Y, Z: ND input tensors, each of dimensions nx [x ny [x nz]] x n_channel x batchsize

    Output:

    • X: ND output tensor of dimensions nx [x ny [x nz]] x n_channel*2 x batchsize

    See also: tensor_split

    source
    InvertibleNetworks.tensor_splitMethod
    Y, Z = tensor_split(X)

    Split ND input tensor in half along the channel dimension. Inverse operation of tensor_cat.

    Input:

    • X: ND input tensor of dimensions nx [x ny [x nz]] x n_channel x batchsize

    Output:

    • Y, Z: ND output tensors, each of dimensions nx [x ny [x nz]] x n_channel/2 x batchsize

    See also: tensor_cat

    source
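
    A split/concatenate round trip as a sketch; the sizes are illustrative assumptions:

    using InvertibleNetworks

    X = randn(Float32, 16, 16, 8, 2)   # 8 channels
    Y, Z = tensor_split(X)             # two tensors with 4 channels each
    X2 = tensor_cat(Y, Z)              # back to 8 channels
    X2 == X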
    InvertibleNetworks.unsqueezeMethod
    X = unsqueeze(Y; pattern="column")

    Undo the squeezing operation by reshaping the input image such that each spatial dimension is increased by a factor of 2, while the number of channels is decreased by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.

    Input:

    • Y: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize

    • pattern: Squeezing pattern

           1 2 3 4        1 1 3 3        1 3 1 3
            1 2 3 4        1 1 3 3        2 4 2 4
            1 2 3 4        2 2 4 4        1 3 1 3
            1 2 3 4        2 2 4 4        2 4 2 4
       
        column          patch       checkerboard

Output: If 4D tensor:

  • X: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize

If 5D tensor:

  • X: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize

See also: squeeze, wavelet_squeeze, wavelet_unsqueeze

source
InvertibleNetworks.wavelet_squeezeMethod
Y = wavelet_squeeze(X; type=WT.db1)

Perform a 1-level channelwise 2D wavelet transform of X and squeeze output of each transform to increase number of channels by a factor of 4 if input is 4D tensor or by a factor of 8 if a 5D tensor.

Input:

  • X: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize

  • type: Wavelet filter type. Possible values are WT.haar for Haar wavelets, WT.coif2, WT.coif4, etc. for Coiflet wavelets, or WT.db1, WT.db2, etc. for Daubechies wavelets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.

Output: if 4D tensor:

  • Y: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize

or if 5D tensor:

  • Y: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize

See also: wavelet_unsqueeze, squeeze, unsqueeze

source
InvertibleNetworks.wavelet_unsqueezeMethod
X = wavelet_unsqueeze(Y; type=WT.db1)

Perform a 1-level inverse 2D wavelet transform of Y and unsqueeze output. This reduces the number of channels by factor of 4 if 4D tensor or by a factor of 8 if 5D tensor and increases each spatial dimension by a factor of 2. Inverse operation of wavelet_squeeze.

Input:

  • Y: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize

  • type: Wavelet filter type. Possible values are haar for Haar wavelets, coif2, coif4, etc. for Coiflet wavelets, or db1, db2, etc. for Daubechies wavelets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.

Output: If 4D tensor:

  • X: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize

If 5D tensor:

  • X: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize

See also: wavelet_squeeze, squeeze, unsqueeze

source
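
    A wavelet squeeze/unsqueeze round trip as a sketch, assuming the WT wavelet-type constants are available from Wavelets.jl (e.g. via using Wavelets); the sizes are illustrative assumptions:

    using InvertibleNetworks, Wavelets

    X  = randn(Float32, 16, 16, 2, 1)          # nx x ny x n_channel x batchsize
    Y  = wavelet_squeeze(X; type=WT.db1)       # size(Y) == (8, 8, 8, 1)
    X2 = wavelet_unsqueeze(Y; type=WT.db1)     # back to (16, 16, 2, 1); X2 ≈ X up to round-off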

Layers

InvertibleNetworks.ActNormType
AN = ActNorm(k; logdet=false)

Create activation normalization layer. The parameters are initialized during the first use, such that the output has zero mean and unit variance along channels for the current mini-batch size.

Input:

  • k: number of channels

  • logdet: bool to indicate whether to compute the logdet

Output:

  • AN: Network layer for activation normalization.

Usage:

  • Forward mode: Y, logdet = AN.forward(X)

  • Inverse mode: X = AN.inverse(Y)

  • Backward mode: ΔX, X = AN.backward(ΔY, Y)

Trainable parameters:

  • Scaling factor AN.s

  • Bias AN.b

See also: get_params, clear_grad!

source
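
    A forward/inverse sketch; channel count, sizes and batch size are illustrative assumptions:

    using InvertibleNetworks

    AN = ActNorm(8; logdet=true)
    X  = randn(Float32, 32, 32, 8, 4)
    Y, lgdet = AN.forward(X)    # data-dependent initialization on the first call
    X2 = AN.inverse(Y)          # X2 ≈ X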
InvertibleNetworks.AffineLayerType
AL = AffineLayer(nx, ny, nc; logdet=false)

Create a layer for an affine transformation.

Input:

  • nx, ny, nc: input dimensions and number of channels

  • logdet: bool to indicate whether to compute the logdet

Output:

  • AL: Network layer for affine transformation.

Usage:

  • Forward mode: Y, logdet = AL.forward(X)

  • Inverse mode: X = AL.inverse(Y)

  • Backward mode: ΔX, X = AL.backward(ΔY, Y)

Trainable parameters:

  • Scaling factor AL.s

  • Bias AL.b

See also: get_params, clear_grad!

source
InvertibleNetworks.ConditionalLayerGlowType
CL = ConditionalLayerGlow(C::Conv1x1, RB::ResidualBlock; logdet=false)

or

CL = ConditionalLayerGlow(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=2) (2D)
 
 CL = ConditionalLayerGlow(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=3) (3D)
 
CL = ConditionalLayerGlow3D(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false) (3D)

Create a Real NVP-style invertible conditional coupling layer based on 1x1 convolutions and a residual block.

Input:

  • C::Conv1x1: 1x1 convolution layer

  • RB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.

  • logdet: bool to indicate whether to compute the logdet of the layer

or

  • n_in, n_cond, n_hidden: number of channels of the passive input, the condition, and the hidden layer

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : number of dimensions

Output:

  • CL: Invertible Real NVP conditional coupling layer.

Usage:

  • Forward mode: Y, logdet = CL.forward(X, C) (if constructed with logdet=true)

  • Inverse mode: X = CL.inverse(Y, C)

  • Backward mode: ΔX, X = CL.backward(ΔY, Y, C)

Trainable parameters:

  • None in CL itself

  • Trainable parameters in residual block CL.RB and 1x1 convolution layer CL.C

See also: Conv1x1, ResidualBlock, get_params, clear_grad!

source
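
    A forward/inverse sketch following the usage above; the channel counts and sizes are illustrative assumptions:

    using InvertibleNetworks

    CL = ConditionalLayerGlow(2, 3, 16; logdet=true)   # n_in=2, n_cond=3, n_hidden=16
    X  = randn(Float32, 16, 16, 2, 4)                  # passive input
    C  = randn(Float32, 16, 16, 3, 4)                  # condition
    Y, lgdet = CL.forward(X, C)
    X2 = CL.inverse(Y, C)                              # X2 ≈ X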
InvertibleNetworks.ConditionalLayerHINTType
CH = ConditionalLayerHINT(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, permute=true, ndims=2) (2D)
 
CH = ConditionalLayerHINT3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, permute=true) (3D)

Create a conditional HINT layer based on coupling blocks and 1 level recursion.

Input:

  • n_in, n_hidden: number of input and hidden channels of both X and Y

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • permute: bool to indicate whether to permute X and Y. Default is true

  • ndims : number of dimensions

Output:

  • CH: Conditional HINT coupling layer.

Usage:

  • Forward mode: Zx, Zy, logdet = CH.forward_X(X, Y)

  • Inverse mode: X, Y = CH.inverse(Zx, Zy)

  • Backward mode: ΔX, ΔY, X, Y = CH.backward(ΔZx, ΔZy, Zx, Zy)

  • Forward mode Y: Zy = CH.forward_Y(Y)

  • Inverse mode Y: Y = CH.inverse(Zy)

Trainable parameters:

  • None in CH itself

  • Trainable parameters in coupling layers CH.CL_X, CH.CL_Y, CH.CL_YX and in permutation layers CH.C_X and CH.C_Y.

See also: CouplingLayerBasic, ResidualBlock, get_params, clear_grad!

source
InvertibleNetworks.ConditionalResidualBlockType
RB = ConditionalResidualBlock(nx1, nx2, nx_in, ny1, ny2, ny_in, n_hidden, batchsize; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)

Create a (non-invertible) conditional residual block, consisting of one dense and three convolutional layers with ReLU activation functions. The dense operator maps the data to the image space and both tensors are concatenated and fed to the subsequent convolutional layers.

Input:

  • nx1, nx2, nx_in: spatial dimensions and no. of channels of input image

  • ny1, ny2, ny_in: spatial dimensions and no. of channels of input data

  • n_hidden: number of hidden channels

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: strides for the first and third convolution (s1) and the second convolution (s2)

or

Output:

  • RB: conditional residual block layer

Usage:

  • Forward mode: Zx, Zy = RB.forward(X, Y)

  • Backward mode: ΔX, ΔY = RB.backward(ΔZx, ΔZy, X, Y)

Trainable parameters:

  • Convolutional kernel weights RB.W0, RB.W1, RB.W2 and RB.W3

  • Bias terms RB.b0, RB.b1 and RB.b2

See also: get_params, clear_grad!

source
InvertibleNetworks.Conv1x1Type
C = Conv1x1(k; logdet=false)

or

C = Conv1x1(v1, v2, v3; logdet=false)

Create network layer for 1x1 convolutions using Householder reflections.

Input:

  • k: number of channels

  • v1, v2, v3: Vectors from which to construct matrix.

  • logdet: if true, returns logdet in forward pass (which is always zero)

Output:

  • C: Network layer for 1x1 convolutions with Householder reflections.

Usage:

  • Forward mode: Y, logdet = C.forward(X)

  • Backward mode: ΔX, X = C.backward(ΔY, Y)

Trainable parameters:

  • Householder vectors C.v1, C.v2, C.v3

See also: get_params, clear_grad!

source
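
    A forward/backward sketch following the usage above; the sizes are illustrative assumptions:

    using InvertibleNetworks

    C = Conv1x1(8; logdet=true)
    X = randn(Float32, 16, 16, 8, 2)
    Y, lgdet = C.forward(X)            # logdet of the orthogonal 1x1 convolution is zero
    ΔY = randn(Float32, size(Y)...)
    ΔX, X2 = C.backward(ΔY, Y)         # backpropagated residual and recomputed input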
InvertibleNetworks.CouplingLayerBasicType
CL = CouplingLayerBasic(RB::ResidualBlock; logdet=false)

or

CL = CouplingLayerBasic(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false, ndims=2) (2D)
 
 CL = CouplingLayerBasic(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false, ndims=3) (3D)
 
CL = CouplingLayerBasic3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false) (3D)

Create a Real NVP-style invertible coupling layer with a residual block.

Input:

  • RB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.

  • logdet: bool to indicate whether to compute the logdet of the layer

or

  • n_in, n_hidden: number of input and hidden channels

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : Number of dimensions

Output:

  • CL: Invertible Real NVP coupling layer.

Usage:

  • Forward mode: Y1, Y2, logdet = CL.forward(X1, X2) (if constructed with logdet=true)

  • Inverse mode: X1, X2 = CL.inverse(Y1, Y2)

  • Backward mode: ΔX1, ΔX2, X1, X2 = CL.backward(ΔY1, ΔY2, Y1, Y2)

Trainable parameters:

  • None in CL itself

  • Trainable parameters in residual block CL.RB

See also: ResidualBlock, get_params, clear_grad!

source
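
    A sketch of the two-input forward/inverse calls above; channel counts and sizes are illustrative assumptions:

    using InvertibleNetworks

    CL = CouplingLayerBasic(2, 16; logdet=true)   # n_in=2, n_hidden=16
    X1 = randn(Float32, 16, 16, 2, 4)
    X2 = randn(Float32, 16, 16, 2, 4)
    Y1, Y2, lgdet = CL.forward(X1, X2)
    X1_, X2_ = CL.inverse(Y1, Y2)                 # recovers X1 and X2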
InvertibleNetworks.CouplingLayerGlowType
CL = CouplingLayerGlow(C::Conv1x1, RB::ResidualBlock; logdet=false)

or

CL = CouplingLayerGlow(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=2) (2D)
 
 CL = CouplingLayerGlow(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=3) (3D)
 
CL = CouplingLayerGlow3D(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false) (3D)

Create a Real NVP-style invertible coupling layer based on 1x1 convolutions and a residual block.

Input:

  • C::Conv1x1: 1x1 convolution layer

  • RB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.

  • logdet: bool to indicate whether to compute the logdet of the layer

or

  • n_in, n_hidden: number of input and hidden channels

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : number of dimensions

Output:

  • CL: Invertible Real NVP coupling layer.

Usage:

  • Forward mode: Y, logdet = CL.forward(X) (if constructed with logdet=true)

  • Inverse mode: X = CL.inverse(Y)

  • Backward mode: ΔX, X = CL.backward(ΔY, Y)

Trainable parameters:

  • None in CL itself

  • Trainable parameters in residual block CL.RB and 1x1 convolution layer CL.C

See also: Conv1x1, ResidualBlock, get_params, clear_grad!

source
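
    A forward/inverse sketch; the even channel count (so the layer can split channels in half), hidden width and sizes are illustrative assumptions:

    using InvertibleNetworks

    CL = CouplingLayerGlow(4, 16; logdet=true)   # n_in=4, n_hidden=16
    X  = randn(Float32, 16, 16, 4, 2)
    Y, lgdet = CL.forward(X)
    X2 = CL.inverse(Y)                           # X2 ≈ X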
InvertibleNetworks.CouplingLayerHINTType
H = CouplingLayerHINT(n_in, n_hidden; logdet=false, permute="none", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2) (2D)
 
 H = CouplingLayerHINT(n_in, n_hidden; logdet=false, permute="none", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=3) (3D)
 
H = CouplingLayerHINT3D(n_in, n_hidden; logdet=false, permute="none", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1) (3D)

Create a recursive HINT-style invertible layer based on coupling blocks.

Input:

  • n_in, n_hidden: number of input and hidden channels

  • logdet: bool to indicate whether to return the log determinant. Default is false.

  • permute: string to specify permutation. Options are "none", "lower", "both" or "full".

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : number of dimensions

Output:

  • H: Recursive invertible HINT coupling layer.

Usage:

  • Forward mode: Y = H.forward(X)

  • Inverse mode: X = H.inverse(Y)

  • Backward mode: ΔX, X = H.backward(ΔY, Y)

Trainable parameters:

  • None in H itself

  • Trainable parameters in coupling layers H.CL

See also: CouplingLayerBasic, ResidualBlock, get_params, clear_grad!

source
InvertibleNetworks.CouplingLayerIRIMType
IL = CouplingLayerIRIM(C::Conv1x1, RB::ResidualBlock)

or

IL = CouplingLayerIRIM(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false, ndims=2) (2D)
 
 IL = CouplingLayerIRIM(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false, ndims=3) (3D)
 
IL = CouplingLayerIRIM3D(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false) (3D)

Create an i-RIM invertible coupling layer based on 1x1 convolutions and a residual block.

Input:

  • C::Conv1x1: 1x1 convolution layer
  • RB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.

or

  • nx, ny, nz: spatial dimensions of input
  • n_in, n_hidden: number of input and hidden channels

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

Output:

  • IL: Invertible i-RIM coupling layer.

Usage:

  • Forward mode: Y = IL.forward(X)

  • Inverse mode: X = IL.inverse(Y)

  • Backward mode: ΔX, X = IL.backward(ΔY, Y)

Trainable parameters:

  • None in IL itself

  • Trainable parameters in residual block IL.RB and 1x1 convolution layer IL.C

See also: Conv1x1, ResidualBlock!, get_params, clear_grad!

source
InvertibleNetworks.FluxBlockType
FB = FluxBlock(model::Chain)

Create a (non-invertible) neural network block from a Flux network.

Input:

  • model: Flux neural network of type Chain

Output:

  • FB: residual block layer

Usage:

  • Forward mode: Y = FB.forward(X)

  • Backward mode: ΔX = FB.backward(ΔY, X)

Trainable parameters:

  • Network parameters given by Flux.parameters(model)

See also: Chain, get_params, clear_grad!

source
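
    A sketch with an arbitrary shape-preserving Flux Chain; the chosen layers and sizes are illustrative assumptions, not a prescribed architecture:

    using InvertibleNetworks, Flux

    model = Chain(Conv((3, 3), 4 => 8, relu; pad=1), Conv((3, 3), 8 => 4; pad=1))
    FB = FluxBlock(model)
    X  = randn(Float32, 16, 16, 4, 2)
    Y  = FB.forward(X)
    ΔX = FB.backward(randn(Float32, size(Y)...), X)   # backpropagate a residual through the Flux network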
InvertibleNetworks.HyperbolicLayerType
HyperbolicLayer(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1)
 HyperbolicLayer(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1, ndims=2)
 HyperbolicLayer3D(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1)

or

HyperbolicLayer(W, b, stride, pad; action=0, α=1f0)
HyperbolicLayer3D(W, b, stride, pad; action=0, α=1f0)

Create an invertible hyperbolic coupling layer.

Input:

  • kernel, stride, pad: Kernel size, stride and padding of the convolutional operator

  • action: String that defines whether the layer keeps the number of channels fixed (0), increases it by a factor of 4 (or 8 in 3D) (1), or decreases it by a factor of 4 (or 8) (-1).

  • W, b: Convolutional weight and bias. W has dimensions of (kernel, kernel, n_in, n_in). b has dimensions of n_in.

  • α: Step size for second time derivative. Default is 1.

  • n_hidden: Increase the no. of channels by n_hidden in the forward convolution. After applying the transpose convolution, the dimensions are back to the input dimensions.

  • ndims: Number of dimension of the input (2 for 2D, 3 for 3D)

Output:

  • HL: Invertible hyperbolic coupling layer

Usage:

  • Forward mode: X_curr, X_new = HL.forward(X_prev, X_curr)

  • Inverse mode: X_prev, X_curr = HL.inverse(X_curr, X_new)

  • Backward mode: ΔX_prev, ΔX_curr, X_prev, X_curr = HL.backward(ΔX_curr, ΔX_new, X_curr, X_new)

Trainable parameters:

  • HL.W: Convolutional kernel

  • HL.b: Bias

See also: get_params, clear_grad!

source
InvertibleNetworks.ResidualBlockType
RB = ResidualBlock(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, fan=false)
 RB = ResidualBlock3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, fan=false)

or

RB = ResidualBlock(W1, W2, W3, b1, b2; p1=1, p2=1, s1=1, s2=1, fan=false)
RB = ResidualBlock3D(W1, W2, W3, b1, b2; p1=1, p2=1, s1=1, s2=1, fan=false)

Create a (non-invertible) residual block, consisting of three convolutional layers and activation functions. The first convolution is a downsampling operation with a stride equal to the kernel dimension. The last convolution is the corresponding transpose operation and upsamples the data to either its original dimensions or to twice the number of input channels (for fan=true). The first and second layer contain a bias term.

Input:

  • n_in: number of input channels

  • n_hidden: number of hidden channels

  • n_out: number of output channels

  • activation: activation type between conv layers and final output

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • fan: bool to indicate whether the output has twice the number of input channels. For fan=false, the last activation function is a gated linear unit (thereby bringing the output back to the original dimensions). For fan=true, the last activation is a ReLU, in which case the output has twice the number of channels as the input.

or

  • W1, W2, W3: 4D tensors of convolutional weights

  • b1, b2: bias terms

Output:

  • RB: residual block layer

Usage:

  • Forward mode: Y = RB.forward(X)

  • Backward mode: ΔX = RB.backward(ΔY, X)

Trainable parameters:

  • Convolutional kernel weights RB.W1, RB.W2 and RB.W3

  • Bias terms RB.b1 and RB.b2

See also: get_params, clear_grad!

source
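
    A forward/backward sketch with fan=false (output has the input dimensions); channel counts and sizes are illustrative assumptions:

    using InvertibleNetworks

    RB = ResidualBlock(4, 16)          # n_in=4, n_hidden=16
    X  = randn(Float32, 16, 16, 4, 2)
    Y  = RB.forward(X)
    ΔX = RB.backward(randn(Float32, size(Y)...), X)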

Networks

InvertibleNetworks.NetworkConditionalGlowType
G = NetworkGlow(n_in, n_cond, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)
 
G = NetworkGlow3D(n_in, n_cond, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)

Create a conditional invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.

Input:

  • n_in: number of input channels of the variable to sample

  • n_cond: number of input channels of the condition

  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, perform squeeze operation which halves spatial dimensions and duplicates channel dimensions then split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : number of dimensions

  • squeeze_type : squeeze type that happens at each multiscale level

Output:

  • G: invertible Glow network.

Usage:

  • Forward mode: ZX, ZC, logdet = G.forward(X, C)

  • Backward mode: ΔX, X, ΔC = G.backward(ΔZX, ZX, ZC)

Trainable parameters:

  • None in G itself

  • Trainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.

See also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!

source
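
    A sketch following the usage above. The constructor name NetworkConditionalGlow is assumed here from the type name of this entry (the signature line above shows NetworkGlow); channel counts, number of scales/flow steps and sizes are illustrative assumptions:

    using InvertibleNetworks

    G = NetworkConditionalGlow(2, 3, 16, 2, 4)   # n_in=2, n_cond=3, n_hidden=16, L=2, K=4
    X = randn(Float32, 16, 16, 2, 4)
    C = randn(Float32, 16, 16, 3, 4)
    ZX, ZC, lgdet = G.forward(X, C)
    ΔX, X2, ΔC = G.backward(randn(Float32, size(ZX)...), ZX, ZC)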
InvertibleNetworks.NetworkConditionalHINTType
CH = NetworkConditionalHINT(n_in, n_hidden, depth; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)
 
CH = NetworkConditionalHINT3D(n_in, n_hidden, depth; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)

Create a conditional HINT network for data-driven generative modeling based on the change of variables formula.

Input:

  • n_in: number of input channels

  • n_hidden: number of hidden units in residual blocks

  • depth: number of network layers

  • k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)

  • p1, p2: respective padding sizes for residual block layers

  • s1, s2: respective strides for residual block layers

Output:

  • CH: conditional HINT network

Usage:

  • Forward mode: Zx, Zy, logdet = CH.forward(X, Y)

  • Inverse mode: X, Y = CH.inverse(Zx, Zy)

  • Backward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)

Trainable parameters:

  • None in CH itself

  • Trainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i], and in coupling layers CH.CL[i], where i ranges from 1 to depth.

See also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!

source
InvertibleNetworks.NetworkGlowType
G = NetworkGlow(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)
 
-G = NetworkGlow3D(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)

Create an invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.

Input:

  • 'n_in': number of input channels

  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, perform a squeeze operation, which halves the spatial dimensions and increases the number of channels accordingly, then split the output in half along the channel dimension after each scale. One half is fed through the next layers, while the remaining channels are saved for the output.

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel size of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : number of dimensions

  • squeeze_type : type of squeeze operation applied at each multiscale level

  • logdet : boolean to turn on/off logdet term tracking and gradient calculation

Output:

  • G: invertible Glow network.

Usage:

  • Forward mode: Y, logdet = G.forward(X)

  • Backward mode: ΔX, X = G.backward(ΔY, Y)

Trainable parameters:

  • None in G itself

  • Trainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.

See also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!

source
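A minimal sketch of the forward and backward conventions above; the sizes and the standard-normal likelihood gradient ZX/batchsize are illustrative assumptions:

using InvertibleNetworks

n_in = 4; n_hidden = 32; L = 2; K = 2; batchsize = 4
G = NetworkGlow(n_in, n_hidden, L, K)

X = randn(Float32, 16, 16, n_in, batchsize)
ZX, logdet = G.forward(X)

# For a standard-normal latent prior the gradient of the negative log-likelihood
# w.r.t. ZX is ZX itself; backward also recomputes X from ZX instead of storing it.
ΔX, X_ = G.backward(ZX / batchsize, ZX)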
InvertibleNetworks.NetworkHyperbolicType
H = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)
+G = NetworkGlow3D(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)

Create an invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.

Input:

  • 'n_in': number of input channels

  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, perform a squeeze operation, which halves the spatial dimensions and increases the number of channels accordingly, then split the output in half along the channel dimension after each scale. One half is fed through the next layers, while the remaining channels are saved for the output.

  • k1, k2: kernel size of convolutions in residual block. k1 is the kernel size of the first and third operator, k2 is the kernel size of the second operator.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)

  • ndims : number of dimensions

  • squeeze_type : type of squeeze operation applied at each multiscale level

  • logdet : boolean to turn on/off logdet term tracking and gradient calculation

Output:

  • G: invertible Glow network.

Usage:

  • Forward mode: Y, logdet = G.forward(X)

  • Backward mode: ΔX, X = G.backward(ΔY, Y)

Trainable parameters:

  • None in G itself

  • Trainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.

See also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!

source
InvertibleNetworks.NetworkHyperbolicType
H = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)
 H = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0, ndims=2)
-H = NetworkHyperbolic3D(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)

Create an invertible network based on hyperbolic layers. The network architecture is specified by a tuple of the form ((action1, nhidden1), (action2, nhidden2), ... ). Each inner tuple corresponds to an additional layer. The first inner tuple argument specifies whether the respective layer increases the number of channels (set to 1), decreases it (set to -1) or leaves it constant (set to 0). The second argument specifies the number of hidden units for that layer.

Input:

  • n_in: number of channels of input tensor.
  • n_hidden: number of hidden units in residual blocks

  • architecture: Tuple of tuples specifying the network architecture; ((action1, nhidden1), (action2, nhidden2))

  • k, s, p: Kernel size, stride and padding of convolutional kernels

  • logdet: Bool to indicate whether to return the logdet

  • α: Step size in hyperbolic network. Defaults to 1

  • ndims: Number of dimensions

Output:

  • H: invertible hyperbolic network.

Usage:

  • Forward mode: Y_prev, Y_curr, logdet = H.forward(X_prev, X_curr)

  • Inverse mode: X_curr, X_new = H.inverse(Y_curr, Y_new)

  • Backward mode: ΔX_curr, ΔX_new, X_curr, X_new = H.backward(ΔY_curr, ΔY_new, Y_curr, Y_new)

Trainable parameters:

  • None in H itself

  • Trainable parameters in the hyperbolic layers H.HL[j].

See also: CouplingLayer!, get_params, clear_grad!

source
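A small sketch of how the architecture tuple is used. The three constant-channel layers (action 0, 8 hidden units each) and all tensor sizes are illustrative assumptions:

using InvertibleNetworks

n_in = 2; batchsize = 4
architecture = ((0, 8), (0, 8), (0, 8))      # three layers, channel count kept constant
H = NetworkHyperbolic(n_in, architecture)

X_prev = randn(Float32, 16, 16, n_in, batchsize)
X_curr = randn(Float32, 16, 16, n_in, batchsize)
Y_prev, Y_curr, logdet = H.forward(X_prev, X_curr)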
InvertibleNetworks.NetworkLoopType
L = NetworkLoop(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, ndims=2) (2D)
+H = NetworkHyperbolic3D(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)

Create an invertible network based on hyperbolic layers. The network architecture is specified by a tuple of the form ((action1, nhidden1), (action2, nhidden2), ... ). Each inner tuple corresponds to an additional layer. The first inner tuple argument specifies whether the respective layer increases the number of channels (set to 1), decreases it (set to -1) or leaves it constant (set to 0). The second argument specifies the number of hidden units for that layer.

Input:

  • n_in: number of channels of input tensor.
  • n_hidden: number of hidden units in residual blocks

  • architecture: Tuple of tuples specifying the network architecture; ((action1, nhidden1), (action2, nhidden2))

  • k, s, p: Kernel size, stride and padding of convolutional kernels

  • logdet: Bool to indicate whether to return the logdet

  • α: Step size in hyperbolic network. Defaults to 1

  • ndims: Number of dimensions

Output:

  • H: invertible hyperbolic network.

Usage:

  • Forward mode: Y_prev, Y_curr, logdet = H.forward(X_prev, X_curr)

  • Inverse mode: X_curr, X_new = H.inverse(Y_curr, Y_new)

  • Backward mode: ΔX_curr, ΔX_new, X_curr, X_new = H.backward(ΔY_curr, ΔY_new, Y_curr, Y_new)

Trainable parameters:

  • None in H itself

  • Trainable parameters in the hyperbolic layers H.HL[j].

See also: CouplingLayer!, get_params, clear_grad!

source
InvertibleNetworks.NetworkLoopType
L = NetworkLoop(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, ndims=2) (2D)
 
-L = NetworkLoop3D(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1) (3D)

Create an invertible recurrent inference machine (i-RIM) consisting of an unrolled loop for a given number of iterations.

Input:

  • 'n_in': number of input channels

  • n_hidden: number of hidden units in residual blocks

  • maxiter: number of unrolled loop iterations

  • Ψ: link function

  • k1, k2: stencil sizes for convolutions in the residual blocks. The first convolution uses a stencil of size and stride k1, thereby downsampling the input. The second convolution uses a stencil of size k2. The last layer uses a stencil of size and stride k1, but performs the transpose operation of the first convolution, thus upsampling the output to the original input size.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2) in residual block

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2) in residual block

  • ndims : number of dimensions

Output:

  • L: invertible i-RIM network.

Usage:

  • Forward mode: η_out, s_out = L.forward(η_in, s_in, d, A)

  • Inverse mode: η_in, s_in = L.inverse(η_out, s_out, d, A)

  • Backward mode: Δη_in, Δs_in, η_in, s_in = L.backward(Δη_out, Δs_out, η_out, s_out, d, A)

Trainable parameters:

  • None in L itself

  • Trainable parameters in the invertible coupling layers L.L[i], and actnorm layers L.AN[i], where i ranges from 1 to the number of loop iterations.

See also: CouplingLayerIRIM, ResidualBlock, get_params, clear_grad!

source
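A construction-only sketch; the sizes and the identity link function are illustrative assumptions. The forward pass additionally needs observed data d and a linear modeling operator A, whose types and shapes depend on the application, so the call is only indicated in a comment:

using InvertibleNetworks

n_in = 2; n_hidden = 8; maxiter = 3
Ψ = identity                       # trivial link function, purely for illustration
L = NetworkLoop(n_in, n_hidden, maxiter, Ψ)

θ = get_params(L)                  # trainable parameters of all unrolled iterations
clear_grad!(L)                     # reset gradients, e.g. after an optimizer update
# η_out, s_out = L.forward(η_in, s_in, d, A)   # as in the usage list above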
InvertibleNetworks.NetworkMultiScaleConditionalHINTType
CH = NetworkMultiScaleConditionalHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)
+L = NetworkLoop3D(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1) (3D)

Create an invertible recurrent inference machine (i-RIM) consisting of an unrolled loop for a given number of iterations.

Input:

  • 'n_in': number of input channels

  • n_hidden: number of hidden units in residual blocks

  • maxiter: number of unrolled loop iterations

  • Ψ: link function

  • k1, k2: stencil sizes for convolutions in the residual blocks. The first convolution uses a stencil of size and stride k1, thereby downsampling the input. The second convolution uses a stencil of size k2. The last layer uses a stencil of size and stride k1, but performs the transpose operation of the first convolution, thus upsampling the output to the original input size.

  • p1, p2: padding for the first and third convolution (p1) and the second convolution (p2) in residual block

  • s1, s2: stride for the first and third convolution (s1) and the second convolution (s2) in residual block

  • ndims : number of dimensions

Output:

  • L: invertible i-RIM network.

Usage:

  • Forward mode: η_out, s_out = L.forward(η_in, s_in, d, A)

  • Inverse mode: η_in, s_in = L.inverse(η_out, s_out, d, A)

  • Backward mode: Δη_in, Δs_in, η_in, s_in = L.backward(Δη_out, Δs_out, η_out, s_out, d, A)

Trainable parameters:

  • None in L itself

  • Trainable parameters in the invertible coupling layers L.L[i], and actnorm layers L.AN[i], where i ranges from 1 to the number of loop iterations.

See also: CouplingLayerIRIM, ResidualBlock, get_params, clear_grad!

source
InvertibleNetworks.NetworkMultiScaleConditionalHINTType
CH = NetworkMultiScaleConditionalHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)
 
-CH = NetworkMultiScaleConditionalHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)

Create a conditional HINT network for data-driven generative modeling based on the change of variables formula.

Input:

  • 'n_in': number of input channels
  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.

  • k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)

  • p1, p2: respective padding sizes for residual block layers

  • s1, s2: respective strides for residual block layers

  • ndims : number of dimensions

Output:

  • CH: conditional HINT network

Usage:

  • Forward mode: Zx, Zy, logdet = CH.forward(X, Y)

  • Inverse mode: X, Y = CH.inverse(Zx, Zy)

  • Backward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)

Trainable parameters:

  • None in CH itself

  • Trainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i], and in coupling layers CH.CL[i], where i ranges from 1 to depth.

See also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!

source
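A minimal sketch of these calls; the sizes are illustrative assumptions, with the spatial dimensions chosen divisible by 2^L to accommodate the squeeze at each scale:

using InvertibleNetworks

n_in = 2; n_hidden = 16; L = 2; K = 2; batchsize = 4
CH = NetworkMultiScaleConditionalHINT(n_in, n_hidden, L, K)

X = randn(Float32, 16, 16, n_in, batchsize)   # variable of interest
Y = randn(Float32, 16, 16, n_in, batchsize)   # observations (condition)

Zx, Zy, logdet = CH.forward(X, Y)
X_, Y_ = CH.inverse(Zx, Zy)                   # X_ ≈ X and Y_ ≈ Y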
InvertibleNetworks.NetworkMultiScaleHINTType
H = NetworkMultiScaleHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2)
+CH = NetworkMultiScaleConditionalHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)

Create a conditional HINT network for data-driven generative modeling based on the change of variables formula.

Input:

  • 'n_in': number of input channels
  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.

  • k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)

  • p1, p2: respective padding sizes for residual block layers

  • s1, s2: respective strides for residual block layers

  • ndims : number of dimensions

Output:

  • CH: conditional HINT network

Usage:

  • Forward mode: Zx, Zy, logdet = CH.forward(X, Y)

  • Inverse mode: X, Y = CH.inverse(Zx, Zy)

  • Backward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)

Trainable parameters:

  • None in CH itself

  • Trainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i], and in coupling layers CH.CL[i], where i ranges from 1 to depth.

See also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!

source
InvertibleNetworks.NetworkMultiScaleHINTType
H = NetworkMultiScaleHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2)
 
-H = NetworkMultiScaleHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)

Create a multiscale HINT network for data-driven generative modeling based on the change of variables formula.

Input:

  • 'n_in': number of input channels
  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.

  • k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)

  • p1, p2: respective padding sizes for residual block layers

  • s1, s2: respective strides for residual block layers

  • ndims : number of dimensions

Output:

  • H: multiscale HINT network

Usage:

  • Forward mode: Z, logdet = H.forward(X)

  • Inverse mode: X = H.inverse(Z)

  • Backward mode: ΔX, X = H.backward(ΔZ, Z)

Trainable parameters:

  • None in H itself

  • Trainable parameters in activation normalizations H.AN[i], and in coupling layers H.CL[i], where i ranges from 1 to depth.

See also: ActNorm, CouplingLayerHINT!, get_params, clear_grad!

source
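A minimal sketch of the forward and backward calls; sizes and the Gaussian-likelihood gradient are illustrative assumptions, with the spatial dimensions divisible by 2^L:

using InvertibleNetworks

n_in = 4; n_hidden = 16; L = 2; K = 2; batchsize = 4
H = NetworkMultiScaleHINT(n_in, n_hidden, L, K)

X = randn(Float32, 16, 16, n_in, batchsize)
Z, logdet = H.forward(X)
ΔX, X_ = H.backward(Z / batchsize, Z)   # gradient of a standard-normal log-likelihood; X_ recomputed from Z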
InvertibleNetworks.SummarizedNetType
G = SummarizedNet(cond_net, sum_net)

Create a summarized neural conditional approximator from the conditional approximator cond_net and the summary network sum_net.

Input:

  • 'cond_net': invertible conditional distribution approximator

  • 'sum_net': summary network; should be a Flux layer and invariant to the dimension of interest.

Output:

  • G: summarized network.

Usage:

  • Forward mode: ZX, ZY, logdet = G.forward(X, Y)

  • Backward mode: ΔX, X, ΔY = G.backward(ΔZX, ZX, ZY; Y_save=Y)

  • Inverse mode: ZX, ZY, logdet = G.inverse(ZX, ZY)

Trainable parameters:

  • None in G itself

  • Trainable parameters in conditional approximator G.cond_net and summary network G.sum_net.

See also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!

source
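A small sketch of how a summarized network might be assembled. Using NetworkConditionalGlow as the conditional approximator and a single Flux Conv layer as the summary network, as well as all sizes, are illustrative assumptions; the documentation only requires sum_net to be a Flux layer:

using InvertibleNetworks, Flux

n_in = 4; n_cond = 2; n_hidden = 32; L = 2; K = 2; batchsize = 4
cond_net = NetworkConditionalGlow(n_in, n_cond, n_hidden, L, K)
sum_net  = Chain(Conv((3, 3), 1 => n_cond; pad=1))   # summarizes raw data into n_cond channels

G = SummarizedNet(cond_net, sum_net)

X = randn(Float32, 16, 16, n_in, batchsize)
Y = randn(Float32, 16, 16, 1, batchsize)              # raw observations to be summarized

ZX, ZY, logdet = G.forward(X, Y)
ΔX, X_, ΔY = G.backward(ZX / batchsize, ZX, ZY; Y_save=Y)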

AD Integration

+H = NetworkMultiScaleHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)

Create a multiscale HINT network for data-driven generative modeling based on the change of variables formula.

Input:

  • 'n_in': number of input channels
  • n_hidden: number of hidden units in residual blocks

  • L: number of scales (outer loop)

  • K: number of flow steps per scale (inner loop)

  • split_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.

  • k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)

  • p1, p2: respective padding sizes for residual block layers

  • s1, s2: respective strides for residual block layers

  • ndims : number of dimensions

Output:

  • H: multiscale HINT network

Usage:

  • Forward mode: Z, logdet = H.forward(X)

  • Inverse mode: X = H.inverse(Z)

  • Backward mode: ΔX, X = H.backward(ΔZ, Z)

Trainable parameters:

  • None in H itself

  • Trainable parameters in activation normalizations H.AN[i], and in coupling layers H.CL[i], where i ranges from 1 to depth.

See also: ActNorm, CouplingLayerHINT!, get_params, clear_grad!

source
InvertibleNetworks.SummarizedNetType
G = SummarizedNet(cond_net, sum_net)

Create a summarized neural conditional approximator from the conditional approximator cond_net and the summary network sum_net.

Input:

  • 'cond_net': invertible conditional distribution approximator

  • 'sum_net': summary network; should be a Flux layer and invariant to the dimension of interest.

Output:

  • G: summarized network.

Usage:

  • Forward mode: ZX, ZY, logdet = G.forward(X, Y)

  • Backward mode: ΔX, X, ΔY = G.backward(ΔZX, ZX, ZY; Y_save=Y)

  • Inverse mode: ZX, ZY, logdet = G.inverse(ZX, ZY)

Trainable parameters:

  • None in G itself

  • Trainable parameters in conditional approximator G.cond_net and summary network G.sum_net.

See also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!

source

AD Integration

InvertibleNetworks.backward_update!Method

Update state in the backward pass

source
InvertibleNetworks.check_coherenceMethod

Error if mismatch between state and network

source
InvertibleNetworks.currentMethod

Get current state of the tape

source
InvertibleNetworks.forward_update!Method

Update state in the forward pass.

source
InvertibleNetworks.isa_newblockMethod

Determine if the input is related to a new block of invertible operations

source
InvertibleNetworks.reset!Method

Reset the state of the tape

source
diff --git a/v2.2.6/examples/index.html b/v2.2.6/examples/index.html index 72c3c827..bf8da3c3 100644 --- a/v2.2.6/examples/index.html +++ b/v2.2.6/examples/index.html @@ -43,8 +43,6 @@ return ΔX, X end -#################################################################################################### - # Loss function loss(X) Y, logdet = forward(X) @@ -91,9 +89,13 @@ ax3.set_xlim([-3.5,3.5]); ax3.set_ylim([0,50]) ax4 = subplot(2,2,4); plot(Y[1, 1, 1, :], Y[1, 1, 2, :], "."); title(L"Latent space: $z \sim \hat{p}_Z$") ax4.set_xlim([-3.5, 3.5]); ax4.set_ylim([-3.5, 3.5]) -savefig("plot_banana.svg") -nothing

Conditional 2D Rosenbrock/banana distribution sampling w/ cHINT

using InvertibleNetworks
-using Flux, LinearAlgebra, PyPlot
+savefig("../src/figures/plot_banana.svg")
+
+nothing

plot_banana.svg

Conditional 2D Rosenbrock/banana distribution sampling w/ cHINT

using InvertibleNetworks
+using Flux, LinearAlgebra, PyPlot, Random
+
+# Random seed
+Random.seed!(11)
 
 # Define network
 nx = 1; ny = 1; n_in = 2
@@ -187,5 +189,6 @@
 ax10 = subplot(2,5,10); plot(Zx[1, 1, 1, :], Zx[1, 1, 2, :], "."); 
 plot(Zy_fixed[1, 1, 1, :], Zy_fixed[1, 1, 2, :], "r."); title(L"Latent space: $zx \sim \hat{p}_{zx}$")
 ax10.set_xlim([-3.5, 3.5]); ax10.set_ylim([-3.5, 3.5])
-savefig("plot_cbanana.svg")
-nothing

Literature applications

The following examples show the implementation of applications from the linked papers with [InvertibleNetworks.jl]:

+savefig("../src/figures/plot_cbanana.svg") + +nothing

plot_cbanana.svg

Literature applications

The following examples show the implementation of applications from the linked papers with [InvertibleNetworks.jl]:

diff --git a/v2.2.6/figures/plot_banana.svg b/v2.2.6/figures/plot_banana.svg new file mode 100644 index 00000000..870e8382 --- /dev/null +++ b/v2.2.6/figures/plot_banana.svg @@ -0,0 +1,2404 @@
[New SVG figure plot_banana.svg (Matplotlib v3.7.2, https://matplotlib.org/): 2404 added lines of SVG markup for the banana-distribution example plot; markup omitted.]
diff --git a/v2.2.6/figures/plot_cbanana.svg b/v2.2.6/figures/plot_cbanana.svg new file mode 100644 index 00000000..50818521 --- /dev/null +++ b/v2.2.6/figures/plot_cbanana.svg @@ -0,0 +1,5445 @@
[New SVG figure plot_cbanana.svg (Matplotlib v3.7.2, https://matplotlib.org/): 5445 added lines of SVG markup for the conditional banana-distribution example plot; markup omitted.]
diff --git a/v2.2.6/index.html b/v2.2.6/index.html index f13d8bc0..d9ae2384 100644 --- a/v2.2.6/index.html +++ b/v2.2.6/index.html @@ -1,2 +1,2 @@ -Home · Invertible Networks

InvertibleNetworks.jl documentation

About

InvertibleNetworks.jl is a package of invertible layers and networks for machine learning. Invertibility makes it possible to backpropagate through the layers and networks without storing the forward state: the state is instead recomputed on the fly by propagating through the inverse. This package is the first of its kind in Julia with memory-efficient invertible layers, networks and activation functions for machine learning.

Installation

This package is registered in the Julia general registry and can be installed in the REPL package manager (]):

] add InvertibleNetworks

Authors

This package is developed and maintained by Felix J. Herrmann's SlimGroup at Georgia Institute of Technology. The main contributors of this package are:

  • Rafael Orozco, Georgia Institute of Technology (rorozco@gatech.edu)
  • Philipp Witte, Microsoft Corporation (pwitte@microsoft.com)
  • Gabrio Rizzuti, Utrecht University (g.rizzuti@umcutrecht.nl)
  • Mathias Louboutin, Georgia Institute of Technology (mlouboutin3@gatech.edu)
  • Ali Siahkoohi, Georgia Institute of Technology (alisk@gatech.edu)

References

  • Yann Dauphin, Angela Fan, Michael Auli and David Grangier, "Language modeling with gated convolutional networks", Proceedings of the 34th International Conference on Machine Learning, 2017. ArXiv

  • Laurent Dinh, Jascha Sohl-Dickstein and Samy Bengio, "Density estimation using Real NVP", International Conference on Learning Representations, 2017, ArXiv

  • Diederik P. Kingma and Prafulla Dhariwal, "Glow: Generative Flow with Invertible 1x1 Convolutions", Conference on Neural Information Processing Systems, 2018. ArXiv

  • Keegan Lensink, Eldad Haber and Bas Peters, "Fully Hyperbolic Convolutional Neural Networks", arXiv Computer Vision and Pattern Recognition, 2019. ArXiv

  • Patrick Putzky and Max Welling, "Invert to learn to invert", Advances in Neural Information Processing Systems, 2019. ArXiv

  • Jakob Kruse, Gianluca Detommaso, Robert Scheichl and Ullrich Köthe, "HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference", arXiv Statistics and Machine Learning, 2020. ArXiv

The following publications use [InvertibleNetworks.jl]:

Acknowledgments

This package uses functions from NNlib.jl, Flux.jl and Wavelets.jl

+Home · Invertible Networks

InvertibleNetworks.jl documentation

About

InvertibleNetworks.jl is a package of invertible layers and networks for machine learning. Invertibility makes it possible to backpropagate through the layers and networks without storing the forward state: the state is instead recomputed on the fly by propagating through the inverse. This package is the first of its kind in Julia with memory-efficient invertible layers, networks and activation functions for machine learning.
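For example, with the ActNorm layer from the API reference, the backward pass needs only the layer output and its gradient, and the input is reconstructed on the fly (a small illustrative sketch with arbitrary shapes):

using InvertibleNetworks

AN = ActNorm(4; logdet=true)           # activation normalization over 4 channels
X  = randn(Float32, 16, 16, 4, 10)
Y, logdet = AN.forward(X)              # parameters initialize on the first forward pass
ΔX, X_rec = AN.backward(Y ./ 10, Y)    # X_rec ≈ X is recomputed from Y rather than stored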

Installation

This package is registered in the Julia general registry and can be installed in the REPL package manager (]):

] add InvertibleNetworks

Authors

This package is developed and maintained by Felix J. Herrmann's SlimGroup at Georgia Institute of Technology. The main contributors of this package are:

  • Rafael Orozco, Georgia Institute of Technology (rorozco@gatech.edu)
  • Philipp Witte, Microsoft Corporation (pwitte@microsoft.com)
  • Gabrio Rizzuti, Utrecht University (g.rizzuti@umcutrecht.nl)
  • Mathias Louboutin, Georgia Institute of Technology (mlouboutin3@gatech.edu)
  • Ali Siahkoohi, Georgia Institute of Technology (alisk@gatech.edu)

References

  • Yann Dauphin, Angela Fan, Michael Auli and David Grangier, "Language modeling with gated convolutional networks", Proceedings of the 34th International Conference on Machine Learning, 2017. ArXiv

  • Laurent Dinh, Jascha Sohl-Dickstein and Samy Bengio, "Density estimation using Real NVP", International Conference on Learning Representations, 2017, ArXiv

  • Diederik P. Kingma and Prafulla Dhariwal, "Glow: Generative Flow with Invertible 1x1 Convolutions", Conference on Neural Information Processing Systems, 2018. ArXiv

  • Keegan Lensink, Eldad Haber and Bas Peters, "Fully Hyperbolic Convolutional Neural Networks", arXiv Computer Vision and Pattern Recognition, 2019. ArXiv

  • Patrick Putzky and Max Welling, "Invert to learn to invert", Advances in Neural Information Processing Systems, 2019. ArXiv

  • Jakob Kruse, Gianluca Detommaso, Robert Scheichl and Ullrich Köthe, "HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference", arXiv Statistics and Machine Learning, 2020. ArXiv

The following publications use [InvertibleNetworks.jl]:

Acknowledgments

This package uses functions from NNlib.jl, Flux.jl and Wavelets.jl

diff --git a/v2.2.6/search/index.html b/v2.2.6/search/index.html index c02ef926..2fb1e14b 100644 --- a/v2.2.6/search/index.html +++ b/v2.2.6/search/index.html @@ -1,2 +1,2 @@ -Search · Invertible Networks

Loading search...

    +Search · Invertible Networks

    Loading search...

      diff --git a/v2.2.6/search_index.js b/v2.2.6/search_index.js index fbc4fd86..ec31e09c 100644 --- a/v2.2.6/search_index.js +++ b/v2.2.6/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"api/#Invertible-Networks-API-reference","page":"API Reference","title":"Invertible Networks API reference","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"neuralnet.jl\", \"parameter.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.clear_grad!-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.clear_grad!","text":"P = clear_grad!(NL::Invertible)\n\nResets the gradient of all the parameters in NL\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.get_grads-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.get_grads","text":"P = get_grads(NL::Invertible)\n\nReturns a cell array of all parameters gradients in the network or layer. Each cell entry contains a reference to the original parameter's gradient; i.e. modifying the paramters in P, modifies the parameters in NL.\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.get_params-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.get_params","text":"P = get_params(NL::Invertible)\n\nReturns a cell array of all parameters in the network or layer. Each cell entry contains a reference to the original parameter; i.e. modifying the paramters in P, modifies the parameters in NL.\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.reset!-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.reset!","text":"P = reset!(NL::Invertible)\n\nResets the data of all the parameters in NL\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.clear_grad!-Tuple{AbstractVector{Parameter}}","page":"API Reference","title":"InvertibleNetworks.clear_grad!","text":"clear_grad!(NL::NeuralNetLayer)\n\nor\n\nclear_grad!(P::AbstractArray{Parameter, 1})\n\nSet gradients of each Parameter in the network layer to nothing.\n\n\n\n\n\n","category":"method"},{"location":"api/#Activation-functions","page":"API Reference","title":"Activation functions","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"activation_functions.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.ExpClamp-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ExpClamp","text":"y = ExpClamp(x)\n\nSoft-clamped exponential function. See also: ExpClampGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.ExpClampInv-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ExpClampInv","text":"x = ExpClampInv(y)\n\nInverse of ExpClamp function. 
See also: ExpClamp, ExpClampGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.GaLU-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.GaLU","text":"y = GaLU(x)\n\nGated linear activation unit (not invertible).\n\nSee also: GaLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.GaLUgrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.GaLUgrad","text":"Δx = GaLUgrad(Δy, x)\n\nBackpropagate data residual through GaLU activation.\n\nInput:\n\nΔy: residual\nx: original input (since not invertible)\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: GaLU\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.LeakyReLU-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.LeakyReLU","text":"y = LeakyReLU(x; slope=0.01f0)\n\nLeaky rectified linear unit.\n\nSee also: LeakyReLUinv, LeakyReLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.LeakyReLUgrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.LeakyReLUgrad","text":"Δx = ReLUgrad(Δy, y; slope=0.01f0)\n\nBackpropagate data residual through leaky ReLU function.\n\nInput:\n\nΔy: residual\ny: original output\nslope: slope of non-active part of ReLU\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: LeakyReLU, LeakyReLUinv\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.LeakyReLUinv-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.LeakyReLUinv","text":"x = LeakyReLUinv(y; slope=0.01f0)\n\nInverse of leaky ReLU.\n\nSee also: LeakyReLU, LeakyReLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.ReLU-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ReLU","text":"y = ReLU(x)\n\nRectified linear unit (not invertible).\n\nSee also: ReLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.ReLUgrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ReLUgrad","text":"Δx = ReLUgrad(Δy, x)\n\nBackpropagate data residual through ReLU function.\n\nInput:\n\nΔy: data residual\nx: original input (since not invertible)\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: ReLU\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.Sigmoid-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.Sigmoid","text":"y = Sigmoid(x; low=0, high=1)\n\nSigmoid activation function. Shifted and scaled such that output is [low,high].\n\nSee also: SigmoidInv, SigmoidGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.SigmoidGrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.SigmoidGrad","text":"Δx = SigmoidGrad(Δy, y; x=nothing, low=nothing, high=nothing)\n\nBackpropagate data residual through Sigmoid function. 
Can be shifted and scaled such that output is (low,high]\n\nInput:\n\nΔy: residual\ny: original output\nx: original input, if y not available (in this case, set y=nothing)\nlow: if provided then scale and shift such that output is (low,high]\nhigh: if provided then scale and shift such that output is (low,high]\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: Sigmoid, SigmoidInv\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks._sigmoidinv-Union{Tuple{T}, Tuple{T, Any}, Tuple{T, Any, Any}} where T","page":"API Reference","title":"InvertibleNetworks._sigmoidinv","text":"x = SigmoidInv(y; low=0, high=1f0)\n\nInverse of Sigmoid function. Shifted and scaled such that output is [low,high]\n\nSee also: Sigmoid, SigmoidGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#Dimensions-manipulation","page":"API Reference","title":"Dimensions manipulation","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"dimensionality_operations.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.Haar_squeeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.Haar_squeeze","text":"Y = Haar_squeeze(X)\n\nPerform a 1-level channelwise 2D/3D (lifting) Haar transform of X and squeeze output of each transform to increase channels by factor of 4 in 4D tensor or by factor of 8 in 5D channels.\n\nInput:\n\nX: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\n\nOutput:\n\nif 4D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize\n\nor if 5D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize\n\nSee also: wavelet_unsqueeze, Haar_unsqueeze, HaarLift, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.invHaar_unsqueeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.invHaar_unsqueeze","text":"X = invHaar_unsqueeze(Y)\n\nPerform a 1-level inverse 2D/3D Haar transform of Y and unsqueeze output. This reduces the number of channels by factor of 4 in 4D tensors or by factor of 8 in 5D tensors and increases each spatial dimension by a factor of 2. Inverse operation of Haar_squeeze.\n\nInput:\n\nY: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\n\nOutput:\n\nIf 4D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize\n\nIf 5D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize\n\nSee also: wavelet_unsqueeze, Haar_unsqueeze, HaarLift, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.squeeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.squeeze","text":"Y = squeeze(X; pattern=\"column\")\n\nSqueeze operation that is only a reshape. 
\n\nReshape input image such that each spatial dimension is reduced by a factor of 2, while the number of channels is increased by a factor of 4 if 4D tensor and increased by a factor of 8 if 5D tensor.\n\nInput:\n\nX: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\npattern: Squeezing pattern\n 1 2 3 4 1 1 3 3 1 3 1 3\n 1 2 3 4 1 1 3 3 2 4 2 4\n 1 2 3 4 2 2 4 4 1 3 1 3\n 1 2 3 4 2 2 4 4 2 4 2 4\n\n column patch checkerboard\n\nOutput: if 4D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize\n\nor if 5D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize\n\nSee also: unsqueeze, wavelet_squeeze, wavelet_unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.tensor_cat-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.tensor_cat","text":"X = tensor_cat(Y, Z)\n\nConcatenate ND input tensors along the channel dimension. Inverse operation of tensor_split.\n\nInput:\n\nY, Z: ND input tensors, each of dimensions nx [x ny [x nz]] x n_channel x batchsize\n\nOutput:\n\nX: ND output tensor of dimensions nx [x ny [x nz]] x n_channel*2 x batchsize\n\nSee also: tensor_split\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.tensor_split-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.tensor_split","text":"Y, Z = tensor_split(X)\n\nSplit ND input tensor in half along the channel dimension. Inverse operation of tensor_cat.\n\nInput:\n\nX: ND input tensor of dimensions nx [x ny [x nz]] x n_channel x batchsize\n\nOutput:\n\nY, Z: ND output tensors, each of dimensions nx [x ny [x nz]] x n_channel/2 x batchsize\n\nSee also: tensor_cat\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.unsqueeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.unsqueeze","text":"X = unsqueeze(Y; pattern=\"column\")\n\nUndo squeezing operation by reshaping input image such that each spatial dimension is increased by a factor of 2, while the number of channels is decreased by a factor of 4 if 4D tensor of decreased by a factor of 8 if a 5D tensor.\n\nInput:\n\nY: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\npattern: Squeezing pattern\n 1 2 3 4 1 1 3 3 1 3 1 3\n 1 2 3 4 1 1 3 3 2 4 2 4\n 1 2 3 4 2 2 4 4 1 3 1 3\n 1 2 3 4 2 2 4 4 2 4 2 4\n\n column patch checkerboard\n\nOutput: If 4D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize\n\nIf 5D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize\n\nSee also: squeeze, wavelet_squeeze, wavelet_unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.wavelet_squeeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.wavelet_squeeze","text":"Y = wavelet_squeeze(X; type=WT.db1)\n\nPerform a 1-level channelwise 2D wavelet transform of X and squeeze output of each transform to increase number of channels by a factor of 4 if input is 4D tensor or by a factor of 8 if a 5D tensor.\n\nInput:\n\nX: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\ntype: Wavelet filter type. Possible values are WT.haar for Haar wavelets, WT.coif2, WT.coif4, etc. for Coiflet wavelets, or WT.db1, WT.db2, etc. 
for Daubechies wavetlets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.\n\nOutput: if 4D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize\n\nor if 5D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize\n\nSee also: wavelet_unsqueeze, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.wavelet_unsqueeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.wavelet_unsqueeze","text":"X = wavelet_unsqueeze(Y; type=WT.db1)\n\nPerform a 1-level inverse 2D wavelet transform of Y and unsqueeze output. This reduces the number of channels by factor of 4 if 4D tensor or by a factor of 8 if 5D tensor and increases each spatial dimension by a factor of 2. Inverse operation of wavelet_squeeze.\n\nInput:\n\nY: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\ntype: Wavelet filter type. Possible values are haar for Haar wavelets,\n\ncoif2, coif4, etc. for Coiflet wavelets, or db1, db2, etc. for Daubechies wavetlets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.\n\nOutput: If 4D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize\n\nIf 5D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize\n\nSee also: wavelet_squeeze, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#Layers","page":"API Reference","title":"Layers","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:type]\nFilter = t -> t<:NeuralNetLayer","category":"page"},{"location":"api/#InvertibleNetworks.ActNorm","page":"API Reference","title":"InvertibleNetworks.ActNorm","text":"AN = ActNorm(k; logdet=false)\n\nCreate activation normalization layer. 
The parameters are initialized during the first use, such that the output has zero mean and unit variance along channels for the current mini-batch size.\n\nInput:\n\nk: number of channels\nlogdet: bool to indicate whether to compute the logdet\n\nOutput:\n\nAN: Network layer for activation normalization.\n\nUsage:\n\nForward mode: Y, logdet = AN.forward(X)\nInverse mode: X = AN.inverse(Y)\nBackward mode: ΔX, X = AN.backward(ΔY, Y)\n\nTrainable parameters:\n\nScaling factor AN.s\nBias AN.b\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.AffineLayer","page":"API Reference","title":"InvertibleNetworks.AffineLayer","text":"AL = AffineLayer(nx, ny, nc; logdet=false)\n\nCreate a layer for an affine transformation.\n\nInput:\n\nnx, ny,nc`: input dimensions and number of channels\nlogdet: bool to indicate whether to compute the logdet\n\nOutput:\n\nAL: Network layer for affine transformation.\n\nUsage:\n\nForward mode: Y, logdet = AL.forward(X)\nInverse mode: X = AL.inverse(Y)\nBackward mode: ΔX, X = AL.backward(ΔY, Y)\n\nTrainable parameters:\n\nScaling factor AL.s\nBias AL.b\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ConditionalLayerGlow","page":"API Reference","title":"InvertibleNetworks.ConditionalLayerGlow","text":"CL = ConditionalLayerGlow(C::Conv1x1, RB::ResidualBlock; logdet=false)\n\nor\n\nCL = ConditionalLayerGlow(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=2) (2D)\n\nCL = ConditionalLayerGlow(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=3) (3D)\n\nCL = ConditionalLayerGlowGlow3D(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false) (3D)\n\nCreate a Real NVP-style invertible conditional coupling layer based on 1x1 convolutions and a residual block.\n\nInput:\n\nC::Conv1x1: 1x1 convolution layer\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\nlogdet: bool to indicate whether to compte the logdet of the layer\n\nor\n\nn_in,n_out, n_hidden: number of channels for: passive input, conditioned input and hidden layer\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nCL: Invertible Real NVP conditional coupling layer.\n\nUsage:\n\nForward mode: Y, logdet = CL.forward(X, C) (if constructed with logdet=true)\nInverse mode: X = CL.inverse(Y, C)\nBackward mode: ΔX, X = CL.backward(ΔY, Y, C)\n\nTrainable parameters:\n\nNone in CL itself\nTrainable parameters in residual block CL.RB and 1x1 convolution layer CL.C\n\nSee also: Conv1x1, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ConditionalLayerHINT","page":"API Reference","title":"InvertibleNetworks.ConditionalLayerHINT","text":"CH = ConditionalLayerHINT(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, permute=true, ndims=2) (2D)\n\nCH = ConditionalLayerHINT3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, permute=true) (3D)\n\nCreate a conditional HINT layer based on coupling blocks and 1 level recursion.\n\nInput:\n\nn_in, n_hidden: number of input and hidden channels of both X and Y\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\npermute: bool to indicate whether to permute X and Y. Default is true\nndims : number of dimensions\n\nOutput:\n\nCH: Conditional HINT coupling layer.\n\nUsage:\n\nForward mode: Zx, Zy, logdet = CH.forward_X(X, Y)\nInverse mode: X, Y = CH.inverse(Zx, Zy)\nBackward mode: ΔX, ΔY, X, Y = CH.backward(ΔZx, ΔZy, Zx, Zy)\nForward mode Y: Zy = CH.forward_Y(Y)\nInverse mode Y: Y = CH.inverse(Zy)\n\nTrainable parameters:\n\nNone in CH itself\nTrainable parameters in coupling layers CH.CL_X, CH.CL_Y, CH.CL_YX and in permutation layers CH.C_X and CH.C_Y.\n\nSee also: CouplingLayerBasic, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ConditionalResidualBlock","page":"API Reference","title":"InvertibleNetworks.ConditionalResidualBlock","text":"RB = ConditionalResidualBlock(nx1, nx2, nx_in, ny1, ny2, ny_in, n_hidden, batchsize; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a (non-invertible) conditional residual block, consisting of one dense and three convolutional layers with ReLU activation functions. The dense operator maps the data to the image space and both tensors are concatenated and fed to the subsequent convolutional layers.\n\nInput:\n\nnx1, nx2, nx_in: spatial dimensions and no. of channels of input image\nny1, ny2, ny_in: spatial dimensions and no. of channels of input data\nn_hidden: number of hidden channels\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: strides for the first and third convolution (s1) and the second convolution (s2)\n\nor\n\nOutput:\n\nRB: conditional residual block layer\n\nUsage:\n\nForward mode: Zx, Zy = RB.forward(X, Y)\nBackward mode: ΔX, ΔY = RB.backward(ΔZx, ΔZy, X, Y)\n\nTrainable parameters:\n\nConvolutional kernel weights RB.W0, RB.W1, RB.W2 and RB.W3\nBias terms RB.b0, RB.b1 and RB.b2\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.Conv1x1","page":"API Reference","title":"InvertibleNetworks.Conv1x1","text":"C = Conv1x1(k; logdet=false)\n\nor\n\nC = Conv1x1(v1, v2, v3; logdet=false)\n\nCreate network layer for 1x1 convolutions using Householder reflections.\n\nInput:\n\nk: number of channels\nv1, v2, v3: Vectors from which to construct matrix.\nlogdet: if true, returns logdet in forward pass (which is always zero)\n\nOutput:\n\nC: Network layer for 1x1 convolutions with Householder reflections.\n\nUsage:\n\nForward mode: Y, logdet = C.forward(X)\nBackward mode: ΔX, X = C.backward((ΔY, Y))\n\nTrainable parameters:\n\nHouseholder vectors C.v1, C.v2, C.v3\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerBasic","page":"API Reference","title":"InvertibleNetworks.CouplingLayerBasic","text":"CL = CouplingLayerBasic(RB::ResidualBlock; logdet=false)\n\nor\n\nCL = CouplingLayerBasic(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false, ndims=2) (2D)\n\nCL = CouplingLayerBasic(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false, ndims=3) (3D)\n\nCL = CouplingLayerBasic3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false) (3D)\n\nCreate a Real NVP-style invertible coupling layer with a residual block.\n\nInput:\n\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\nlogdet: bool to indicate whether to compte the logdet of the layer\n\nor\n\nn_in, n_hidden: number of input and hidden channels\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nCL: Invertible Real NVP coupling layer.\n\nUsage:\n\nForward mode: Y1, Y2, logdet = CL.forward(X1, X2) (if constructed with logdet=true)\nInverse mode: X1, X2 = CL.inverse(Y1, Y2)\nBackward mode: ΔX1, ΔX2, X1, X2 = CL.backward(ΔY1, ΔY2, Y1, Y2)\n\nTrainable parameters:\n\nNone in CL itself\nTrainable parameters in residual block CL.RB\n\nSee also: ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerGlow","page":"API Reference","title":"InvertibleNetworks.CouplingLayerGlow","text":"CL = CouplingLayerGlow(C::Conv1x1, RB::ResidualBlock; logdet=false)\n\nor\n\nCL = CouplingLayerGlow(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=2) (2D)\n\nCL = CouplingLayerGlow(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=3) (3D)\n\nCL = CouplingLayerGlow3D(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false) (3D)\n\nCreate a Real NVP-style invertible coupling layer based on 1x1 convolutions and a residual block.\n\nInput:\n\nC::Conv1x1: 1x1 convolution layer\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\nlogdet: bool to indicate whether to compute the logdet of the layer\n\nor\n\nn_in, n_hidden: number of input and hidden channels\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nCL: Invertible Real NVP coupling layer.\n\nUsage:\n\nForward mode: Y, logdet = CL.forward(X) (if constructed with logdet=true)\nInverse mode: X = CL.inverse(Y)\nBackward mode: ΔX, X = CL.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in CL itself\nTrainable parameters in residual block CL.RB and 1x1 convolution layer CL.C\n\nSee also: Conv1x1, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerHINT","page":"API Reference","title":"InvertibleNetworks.CouplingLayerHINT","text":"H = CouplingLayerHINT(n_in, n_hidden; logdet=false, permute=\"none\", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2) (2D)\n\nH = CouplingLayerHINT(n_in, n_hidden; logdet=false, permute=\"none\", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=3) (3D)\n\nH = CouplingLayerHINT3D(n_in, n_hidden; logdet=false, permute=\"none\", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1) (3D)\n\nCreate a recursive HINT-style invertible layer based on coupling blocks.\n\nInput:\n\nn_in, n_hidden: number of input and hidden channels\nlogdet: bool to indicate whether to return the log determinant. Default is false.\npermute: string to specify permutation. Options are \"none\", \"lower\", \"both\" or \"full\".\nk1, k2: kernel size of convolutions in residual block. 
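For concreteness, a small sketch of the unconditional Glow coupling layer; it mirrors the 2D banana example later in this document, and the tensor sizes are illustrative assumptions.

```julia
using InvertibleNetworks

CL = CouplingLayerGlow(2, 64; k1=1, k2=1, p1=0, p2=0, logdet=true)
X  = randn(Float32, 1, 1, 2, 20)   # nx x ny x n_channel x batchsize

Y, logdet = CL.forward(X)
X_ = CL.inverse(Y)                 # round trip recovers X up to numerical error
```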
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nH: Recursive invertible HINT coupling layer.\n\nUsage:\n\nForward mode: Y = H.forward(X)\nInverse mode: X = H.inverse(Y)\nBackward mode: ΔX, X = H.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in H itself\nTrainable parameters in coupling layers H.CL\n\nSee also: CouplingLayerBasic, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerIRIM","page":"API Reference","title":"InvertibleNetworks.CouplingLayerIRIM","text":"IL = CouplingLayerIRIM(C::Conv1x1, RB::ResidualBlock)\n\nor\n\nIL = CouplingLayerIRIM(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false, ndims=2) (2D)\n\nIL = CouplingLayerIRIM(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false, ndims=3) (3D)\n\nIL = CouplingLayerIRIM3D(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false) (3D)\n\nCreate an i-RIM invertible coupling layer based on 1x1 convolutions and a residual block. \n\nInput: \n\nC::Conv1x1: 1x1 convolution layer\n\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\n\nor\n\nnx, ny, nz: spatial dimensions of input\n\nn_in, n_hidden: number of input and hidden channels\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\n\nOutput:\n\nIL: Invertible i-RIM coupling layer.\n\nUsage:\n\nForward mode: Y = IL.forward(X)\nInverse mode: X = IL.inverse(Y)\nBackward mode: ΔX, X = IL.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in IL itself\nTrainable parameters in residual block IL.RB and 1x1 convolution layer IL.C\n\nSee also: Conv1x1, ResidualBlock!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.FluxBlock","page":"API Reference","title":"InvertibleNetworks.FluxBlock","text":"FB = FluxBlock(model::Chain)\n\nCreate a (non-invertible) neural network block from a Flux network.\n\nInput: \n\nmodel: Flux neural network of type Chain\n\nOutput:\n\nFB: residual block layer\n\nUsage:\n\nForward mode: Y = FB.forward(X)\nBackward mode: ΔX = FB.backward(ΔY, X)\n\nTrainable parameters:\n\nNetwork parameters given by Flux.parameters(model)\n\nSee also: Chain, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.HyperbolicLayer","page":"API Reference","title":"InvertibleNetworks.HyperbolicLayer","text":"HyperbolicLayer(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1)\nHyperbolicLayer(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1, ndims=2)\nHyperbolicLayer3D(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1)\n\nor\n\nHyperbolicLayer(W, b, stride, pad; action=0, α=1f0)\nHyperbolicLayer3D(W, b, stride, pad; action=0, α=1f0)\n\nCreate an invertible hyperbolic coupling layer.\n\nInput:\n\nkernel, stride, pad: Kernel size, stride and padding of the convolutional operator\naction: String that defines whether layer keeps the number of channels fixed (0), increases it by a factor of 4 (or 8 in 3D) (1) 
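A hedged usage sketch for the recursive HINT coupling layer above; the channel count is chosen so the recursive channel splits work out, and all sizes below are assumptions for illustration.

```julia
using InvertibleNetworks

H = CouplingLayerHINT(4, 32; logdet=false, permute="none")
X = randn(Float32, 16, 16, 4, 8)   # assumed nx x ny x n_channel x batchsize

Y  = H.forward(X)                  # no logdet returned, since logdet=false
X_ = H.inverse(Y)                  # reconstructs X up to floating-point error
```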
or decreased it by a factor of 4 (or 8) (-1).\nW, b: Convolutional weight and bias. W has dimensions of (kernel, kernel, n_in, n_in). b has dimensions of n_in.\nα: Step size for second time derivative. Default is 1.\nn_hidden: Increase the no. of channels by n_hidden in the forward convolution. After applying the transpose convolution, the dimensions are back to the input dimensions.\nndims: Number of dimension of the input (2 for 2D, 3 for 3D)\n\nOutput:\n\nHL: Invertible hyperbolic coupling layer\n\nUsage:\n\nForward mode: X_curr, X_new = HL.forward(X_prev, X_curr)\nInverse mode: X_prev, X_curr = HL.inverse(X_curr, X_new)\nBackward mode: ΔX_prev, ΔX_curr, X_prev, X_curr = HL.backward(ΔX_curr, ΔX_new, X_curr, X_new)\n\nTrainable parameters:\n\nHL.W: Convolutional kernel\nHL.b: Bias\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ResidualBlock","page":"API Reference","title":"InvertibleNetworks.ResidualBlock","text":"RB = ResidualBlock(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, fan=false)\nRB = ResidualBlock3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, fan=false)\n\nor\n\nRB = ResidualBlock(W1, W2, W3, b1, b2; p1=1, p2=1, s1=1, s2=1, fan=false)\nRB = ResidualBlock3D(W1, W2, W3, b1, b2; p1=1, p2=1, s1=1, s2=1, fan=false)\n\nCreate a (non-invertible) residual block, consisting of three convolutional layers and activation functions. The first convolution is a downsampling operation with a stride equal to the kernel dimension. The last convolution is the corresponding transpose operation and upsamples the data to either its original dimensions or to twice the number of input channels (for fan=true). The first and second layer contain a bias term.\n\nInput:\n\nn_in: number of input channels\nn_hidden: number of hidden channels\nn_out: number of ouput channels\nactivation: activation type between conv layers and final output\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nfan: bool to indicate whether the ouput has twice the number of input channels. For fan=false, the last activation function is a gated linear unit (thereby bringing the output back to the original dimensions). 
For fan=true, the last activation is a ReLU, in which case the output has twice the number of channels as the input.\n\nor\n\nW1, W2, W3: 4D tensors of convolutional weights\nb1, b2: bias terms\n\nOutput:\n\nRB: residual block layer\n\nUsage:\n\nForward mode: Y = RB.forward(X)\nBackward mode: ΔX = RB.backward(ΔY, X)\n\nTrainable parameters:\n\nConvolutional kernel weights RB.W1, RB.W2 and RB.W3\nBias terms RB.b1 and RB.b2\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#Networks","page":"API Reference","title":"Networks","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:type]\nFilter = t -> t<:InvertibleNetwork","category":"page"},{"location":"api/#InvertibleNetworks.NetworkConditionalGlow","page":"API Reference","title":"InvertibleNetworks.NetworkConditionalGlow","text":"G = NetworkGlow(n_in, n_cond, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nG = NetworkGlow3D(n_in, n_cond, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nCreate a conditional invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.\n\nInput: \n\n'n_in': number of input channels of variable to sample\n'n_cond': number of input channels of condition\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, perform squeeze operation which halves spatial dimensions and duplicates channel dimensions then split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size of convolutions in residual block. 
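Since the residual block is not invertible, its backward pass needs the original input, as the following sketch shows; the channel counts and array sizes are illustrative assumptions.

```julia
using InvertibleNetworks

RB = ResidualBlock(4, 16)          # n_in=4, n_hidden=16, default kernels and strides
X  = randn(Float32, 16, 16, 4, 8)  # assumed nx x ny x n_channel x batchsize

Y  = RB.forward(X)                 # keep X around: the block is not invertible
ΔY = randn(Float32, size(Y))
ΔX = RB.backward(ΔY, X)            # backpropagate a data residual through the block
```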
k1 is the kernel of the first and third \n\noperator, k2 is the kernel size of the second operator.\n\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\nsqueeze_type : squeeze type that happens at each multiscale level\n\nOutput:\n\nG: invertible Glow network.\n\nUsage:\n\nForward mode: ZX, ZC logdet = G.forward(X, C)\nBackward mode: ΔX, X, ΔC = G.backward(ΔZX, ZX, ZC)\n\nTrainable parameters:\n\nNone in G itself\nTrainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.\n\nSee also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkConditionalHINT","page":"API Reference","title":"InvertibleNetworks.NetworkConditionalHINT","text":"CH = NetworkConditionalHINT(n_in, n_hidden, depth; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCH = NetworkConditionalHINT3D(n_in, n_hidden, depth; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a conditional HINT network for data-driven generative modeling based on the change of variables formula.\n\nInput:\n\n'n_in': number of input channels\nn_hidden: number of hidden units in residual blocks\ndepth: number network layers\nk1, k2: kernel size for first and third residual layer (k1) and second layer (k2)\np1, p2: respective padding sizes for residual block layers\ns1, s2: respective strides for residual block layers\n\nOutput:\n\nCH: conditioinal HINT network\n\nUsage:\n\nForward mode: Zx, Zy, logdet = CH.forward(X, Y)\nInverse mode: X, Y = CH.inverse(Zx, Zy)\nBackward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)\n\nTrainable parameters:\n\nNone in CH itself\nTrainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i],\n\nand in coupling layers CH.CL[i], where i ranges from 1 to depth.\n\nSee also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkGlow","page":"API Reference","title":"InvertibleNetworks.NetworkGlow","text":"G = NetworkGlow(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nG = NetworkGlow3D(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nCreate an invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.\n\nInput: \n\n'n_in': number of input channels\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, perform squeeze operation which halves spatial dimensions and duplicates channel dimensions then split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size of convolutions in residual block. 
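A sketch of how the conditional Glow network above might be driven. The docstring lists the constructor under the NetworkConditionalGlow entry but writes it as NetworkGlow(n_in, n_cond, n_hidden, L, K; ...); the constructor name, sizes and variable names below are therefore assumptions for illustration only.

```julia
using InvertibleNetworks

n_in, n_cond, n_hidden = 2, 1, 32
L, K = 2, 4                                   # scales (outer loop) and flow steps per scale (inner loop)
G = NetworkConditionalGlow(n_in, n_cond, n_hidden, L, K)   # assumed constructor name

X = randn(Float32, 16, 16, n_in, 8)           # variable to sample
C = randn(Float32, 16, 16, n_cond, 8)         # condition

ZX, ZC, logdet = G.forward(X, C)
ΔZX = randn(Float32, size(ZX))
ΔX, X_, ΔC = G.backward(ΔZX, ZX, ZC)          # gradients w.r.t. input and condition
```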
k1 is the kernel of the first and third \n\noperator, k2 is the kernel size of the second operator.\n\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\nsqueeze_type : squeeze type that happens at each multiscale level\nlogdet : boolean to turn on/off logdet term tracking and gradient calculation\n\nOutput:\n\nG: invertible Glow network.\n\nUsage:\n\nForward mode: Y, logdet = G.forward(X)\nBackward mode: ΔX, X = G.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in G itself\nTrainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.\n\nSee also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkHyperbolic","page":"API Reference","title":"InvertibleNetworks.NetworkHyperbolic","text":"H = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)\nH = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0, ndims=2)\nH = NetworkHyperbolic3D(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)\n\nCreate an invertible network based on hyperbolic layers. The network architecture is specified by a tuple of the form ((action1, nhidden1), (action2, nhidden2), ... ). Each inner tuple corresonds to an additional layer. The first inner tuple argument specifies whether the respective layer increases the number of channels (set to 1), decreases it (set to -1) or leaves it constant (set to 0). The second argument specifies the number of hidden units for that layer.\n\nInput: \n\nn_in: number of channels of input tensor.\n\nn_hidden: number of hidden units in residual blocks\narchitecture: Tuple of tuples specifying the network architecture; ((action1, nhidden1), (action2, nhidden2))\nk, s, p: Kernel size, stride and padding of convolutional kernels\n\nlogdet: Bool to indicate whether to return the logdet\nα: Step size in hyperbolic network. Defaults to 1\nndims: Number of dimension\n\nOutput:\n\nH: invertible hyperbolic network.\n\nUsage:\n\nForward mode: Y_prev, Y_curr, logdet = H.forward(X_prev, X_curr)\nInverse mode: X_curr, X_new = H.inverse(Y_curr, Y_new)\nBackward mode: ΔX_curr, ΔX_new, X_curr, X_new = H.backward(ΔY_curr, ΔY_new, Y_curr, Y_new)\n\nTrainable parameters:\n\nNone in H itself\nTrainable parameters in the hyperbolic layers H.HL[j].\n\nSee also: CouplingLayer!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkLoop","page":"API Reference","title":"InvertibleNetworks.NetworkLoop","text":"L = NetworkLoop(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, ndims=2) (2D)\n\nL = NetworkLoop3D(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1) (3D)\n\nCreate an invertibel recurrent inference machine (i-RIM) consisting of an unrooled loop for a given number of iterations.\n\nInput: \n\n'n_in': number of input channels\nn_hidden: number of hidden units in residual blocks\nmaxiter: number unrolled loop iterations\nΨ: link function\nk1, k2: stencil sizes for convolutions in the residual blocks. The first convolution uses a stencil of size and stride k1, thereby downsampling the input. The second convolutions uses a stencil of size k2. 
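As a usage sketch for the unconditional Glow network above (depths and tensor sizes are assumptions; the spatial dimensions are chosen divisible by 2^L so that squeezing works):

```julia
using InvertibleNetworks

G = NetworkGlow(2, 32, 2, 4)         # n_in=2, n_hidden=32, L=2 scales, K=4 flow steps
X = randn(Float32, 16, 16, 2, 8)     # assumed nx x ny x n_channel x batchsize

Y, logdet = G.forward(X)
ΔY = randn(Float32, size(Y))
ΔX, X_ = G.backward(ΔY, Y)           # parameter gradients end up in get_params(G)[i].grad
```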
The last layer uses a stencil of size and stride k1, but performs the transpose operation of the first convolution, thus upsampling the output to the original input size.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2) in residual block\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2) in residual block\nndims : number of dimensions\n\nOutput:\n\nL: invertible i-RIM network.\n\nUsage:\n\nForward mode: η_out, s_out = L.forward(η_in, s_in, d, A)\nInverse mode: η_in, s_in = L.inverse(η_out, s_out, d, A)\nBackward mode: Δη_in, Δs_in, η_in, s_in = L.backward(Δη_out, Δs_out, η_out, s_out, d, A)\n\nTrainable parameters:\n\nNone in L itself\nTrainable parameters in the invertible coupling layers L.L[i], and actnorm layers L.AN[i], where i ranges from 1 to the number of loop iterations.\n\nSee also: CouplingLayerIRIM, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkMultiScaleConditionalHINT","page":"API Reference","title":"InvertibleNetworks.NetworkMultiScaleConditionalHINT","text":"CH = NetworkMultiScaleConditionalHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCH = NetworkMultiScaleConditionalHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a conditional HINT network for data-driven generative modeling based on the change of variables formula.\n\nInput: \n\n'n_in': number of input channels\n\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size for first and third residual layer (k1) and second layer (k2)\np1, p2: respective padding sizes for residual block layers\n\ns1, s2: respective strides for residual block layers\nndims : number of dimensions\n\nOutput:\n\nCH: conditional HINT network\n\nUsage:\n\nForward mode: Zx, Zy, logdet = CH.forward(X, Y)\nInverse mode: X, Y = CH.inverse(Zx, Zy)\nBackward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)\n\nTrainable parameters:\n\nNone in CH itself\nTrainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i], \n\nand in coupling layers CH.CL[i], where i ranges from 1 to depth.\n\nSee also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkMultiScaleHINT","page":"API Reference","title":"InvertibleNetworks.NetworkMultiScaleHINT","text":"H = NetworkMultiScaleHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2)\n\nH = NetworkMultiScaleHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a multiscale HINT network for data-driven generative modeling based on the change of variables formula.\n\nInput: \n\n'n_in': number of input channels\n\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, split output in half along channel dimension after each scale. 
Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size for first and third residual layer (k1) and second layer (k2)\np1, p2: respective padding sizes for residual block layers\n\ns1, s2: respective strides for residual block layers\nndims : number of dimensions\n\nOutput:\n\nH: multiscale HINT network\n\nUsage:\n\nForward mode: Z, logdet = H.forward(X)\nInverse mode: X = H.inverse(Z)\nBackward mode: ΔX, X = H.backward(ΔZ, Z)\n\nTrainable parameters:\n\nNone in H itself\nTrainable parameters in activation normalizations H.AN[i], \n\nand in coupling layers H.CL[i], where i ranges from 1 to depth.\n\nSee also: ActNorm, CouplingLayerHINT!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.SummarizedNet","page":"API Reference","title":"InvertibleNetworks.SummarizedNet","text":"G = SummarizedNet(cond_net, sum_net)\n\nCreate a summarized neural conditional approximator from conditional approximator condnet and summary network sumnet.\n\nInput: \n\n'cond_net': invertible conditional distribution approximator\n'sum_net': Should be flux layer. summary network. Should be invariant to a dimension of interest. \n\nOutput:\n\nG: summarized network.\n\nUsage:\n\nForward mode: ZX, ZY, logdet = G.forward(X, Y)\nBackward mode: ΔX, X, ΔY = G.backward(ΔZX, ZX, ZY; Y_save=Y)\ninverse mode: ZX, ZY logdet = G.inverse(ZX, ZY)\n\nTrainable parameters:\n\nNone in G itself\nTrainable parameters in conditional approximator G.cond_net and smmary network G.sum_net,\n\nSee also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#AD-Integration","page":"API Reference","title":"AD Integration","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"chainrules.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.backward_update!-Union{Tuple{N}, Tuple{T}, Tuple{InvertibleNetworks.InvertibleOperationsTape, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.backward_update!","text":"Update state in the backward pass\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.check_coherence-Tuple{InvertibleNetworks.InvertibleOperationsTape, InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.check_coherence","text":"Error if mismatch between state and network\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.current-Tuple{InvertibleNetworks.InvertibleOperationsTape}","page":"API Reference","title":"InvertibleNetworks.current","text":"Get current state of the tape\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.forward_update!-Union{Tuple{N}, Tuple{T}, Tuple{InvertibleNetworks.InvertibleOperationsTape, AbstractArray{T, N}, AbstractArray{T, N}, Union{Nothing, T}, InvertibleNetworks.Invertible}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.forward_update!","text":"Update state in the forward pass.\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.isa_newblock-Tuple{InvertibleNetworks.InvertibleOperationsTape, Any}","page":"API Reference","title":"InvertibleNetworks.isa_newblock","text":"Determine if the input is related to a new block of invertible 
operations\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.reset!-Tuple{InvertibleNetworks.InvertibleOperationsTape}","page":"API Reference","title":"InvertibleNetworks.reset!","text":"Reset the state of the tape\n\n\n\n\n\n","category":"method"},{"location":"examples/#Further-examples","page":"Examples","title":"Further examples","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"We provide usage examples for all the layers and network in our examples subfolder. Each of the example show how to setup and use the building block for simple random variables.","category":"page"},{"location":"examples/#D-Rosenbrock/banana-distribution-sampling-w/-GLOW","page":"Examples","title":"2D Rosenbrock/banana distribution sampling w/ GLOW","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"using LinearAlgebra, InvertibleNetworks, PyPlot, Flux, Random\n\n# Random seed\nRandom.seed!(11)\n\n# Define network\nnx = 1; ny = 1; n_in = 2\nn_hidden = 64\nbatchsize = 20\ndepth = 4\nAN = Array{ActNorm}(undef, depth)\nL = Array{CouplingLayerGlow}(undef, depth)\nParams = Array{Parameter}(undef, 0)\n\n# Create layers\nfor j=1:depth\n AN[j] = ActNorm(n_in; logdet=true)\n L[j] = CouplingLayerGlow(n_in, n_hidden; k1=1, k2=1, p1=0, p2=0, logdet=true)\n\n # Collect parameters\n global Params = cat(Params, get_params(AN[j]); dims=1)\n global Params = cat(Params, get_params(L[j]); dims=1)\nend\n\n# Forward pass\nfunction forward(X)\n logdet = 0f0\n for j=1:depth\n X_, logdet1 = AN[j].forward(X)\n X, logdet2 = L[j].forward(X_)\n logdet += (logdet1 + logdet2)\n end\n return X, logdet\nend\n\n# Backward pass\nfunction backward(ΔX, X)\n for j=depth:-1:1\n ΔX_, X_ = L[j].backward(ΔX, X)\n ΔX, X = AN[j].backward(ΔX_, X_)\n end\n return ΔX, X\nend\n\n####################################################################################################\n\n# Loss\nfunction loss(X)\n Y, logdet = forward(X)\n f = -log_likelihood(Y) - logdet\n ΔY = -∇log_likelihood(Y)\n ΔX = backward(ΔY, Y)[1]\n return f, ΔX\nend\n\n# Training\nmaxiter = 2000\nopt = Flux.ADAM(1f-3)\nfval = zeros(Float32, maxiter)\n\nfor j=1:maxiter\n\n # Evaluate objective and gradients\n X = sample_banana(batchsize)\n fval[j] = loss(X)[1]\n\n # Update params\n for p in Params\n Flux.update!(opt, p.data, p.grad)\n end\n clear_grad!(Params)\nend\n\n####################################################################################################\n\n# Testing\ntest_size = 250\nX = sample_banana(test_size)\nY_ = forward(X)[1]\nY = randn(Float32, 1, 1, 2, test_size)\nX_ = backward(Y, Y)[2]\n\n# Plot\nfig = figure(figsize=[8,8])\nax1 = subplot(2,2,1); plot(X[1, 1, 1, :], X[1, 1, 2, :], \".\"); title(L\"Data space: $x \\sim \\hat{p}_X$\")\nax1.set_xlim([-3.5,3.5]); ax1.set_ylim([0,50])\nax2 = subplot(2,2,2); plot(Y_[1, 1, 1, :], Y_[1, 1, 2, :], \"g.\"); title(L\"Latent space: $z = f(x)$\")\nax2.set_xlim([-3.5, 3.5]); ax2.set_ylim([-3.5, 3.5])\nax3 = subplot(2,2,3); plot(X_[1, 1, 1, :], X_[1, 1, 2, :], \"g.\"); title(L\"Data space: $x = f^{-1}(z)$\")\nax3.set_xlim([-3.5,3.5]); ax3.set_ylim([0,50])\nax4 = subplot(2,2,4); plot(Y[1, 1, 1, :], Y[1, 1, 2, :], \".\"); title(L\"Latent space: $z \\sim \\hat{p}_Z$\")\nax4.set_xlim([-3.5, 3.5]); ax4.set_ylim([-3.5, 3.5])\nsavefig(\"plot_banana.svg\")\nnothing","category":"page"},{"location":"examples/","page":"Examples","title":"Examples","text":"(Image: 
)","category":"page"},{"location":"examples/#Conditional-2D-Rosenbrock/banana-distribution-sampling-w/-cHINT","page":"Examples","title":"Conditional 2D Rosenbrock/banana distribution sampling w/ cHINT","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"using InvertibleNetworks\nusing Flux, LinearAlgebra, PyPlot\n\n# Define network\nnx = 1; ny = 1; n_in = 2\nn_hidden = 64\nbatchsize = 64\ndepth = 8\n\n# Construct HINT network\nH = NetworkConditionalHINT(n_in, n_hidden, depth; k1=1, k2=1, p1=0, p2=0)\n\n# Linear forward operator\nA = randn(Float32,2,2)\nA = A / (2*opnorm(A))\n\n# Loss\nfunction loss(H, X, Y)\n Zx, Zy, logdet = H.forward(X, Y)\n f = -log_likelihood(tensor_cat(Zx, Zy)) - logdet\n ΔZ = -∇log_likelihood(tensor_cat(Zx, Zy))\n ΔZx, ΔZy = tensor_split(ΔZ)\n ΔX, ΔY = H.backward(ΔZx, ΔZy, Zx, Zy)[1:2]\n return f, ΔX, ΔY\nend\n\n# Training\nmaxiter = 1000\nopt = Flux.ADAM(1f-3)\nfval = zeros(Float32, maxiter)\n\nfor j=1:maxiter\n\n # Evaluate objective and gradients\n X = sample_banana(batchsize)\n Y = reshape(A*reshape(X, :, batchsize), nx, ny, n_in, batchsize)\n Y += .2f0*randn(Float32, nx, ny, n_in, batchsize)\n\n fval[j] = loss(H, X, Y)[1]\n\n # Update params\n for p in get_params(H)\n Flux.update!(opt, p.data, p.grad)\n end\n clear_grad!(H)\nend\n\n# Testing\ntest_size = 250\nX = sample_banana(test_size)\nY = reshape(A*reshape(X, :, test_size), nx, ny, n_in, test_size)\nY += .2f0*randn(Float32, nx, ny, n_in, test_size)\n\nZx_, Zy_ = H.forward(X, Y)[1:2]\n\nZx = randn(Float32, nx, ny, n_in, test_size)\nZy = randn(Float32, nx, ny, n_in, test_size)\nX_, Y_ = H.inverse(Zx, Zy)\n\n# Now select single fixed sample from all Ys\nX_fixed = sample_banana(1)\nY_fixed = reshape(A*vec(X_fixed), nx, ny, n_in, 1)\nY_fixed += .2f0*randn(Float32, size(X_fixed))\n\nZy_fixed = H.forward_Y(Y_fixed)\nZx = randn(Float32, nx, ny, n_in, test_size)\n\nX_post = H.inverse(Zx, Zy_fixed.*ones(Float32, nx, ny, n_in, test_size))[1]\n\n# Model/data spaces\nfig = figure(figsize=[16,6])\nax1 = subplot(2,5,1); plot(X[1, 1, 1, :], X[1, 1, 2, :], \".\"); title(L\"Model space: $x \\sim \\hat{p}_x$\")\nax1.set_xlim([-3.5, 3.5]); ax1.set_ylim([0,50])\nax2 = subplot(2,5,2); plot(Y[1, 1, 1, :], Y[1, 1, 2, :], \".\"); title(L\"Noisy data $y=Ax+n$ \")\n\nax3 = subplot(2,5,3); plot(X_[1, 1, 1, :], X_[1, 1, 2, :], \"g.\"); title(L\"Model space: $x = f(zx|zy)^{-1}$\")\nax3.set_xlim([-3.5, 3.5]); ax3.set_ylim([0,50])\nax4 = subplot(2,5,4); plot(Y_[1, 1, 1, :], Y_[1, 1, 2, :], \"g.\"); title(L\"Data space: $y = f(zx|zy)^{-1}$\")\n\nax5 = subplot(2,5,5); plot(X_post[1, 1, 1, :], X_post[1, 1, 2, :], \"g.\"); \nplot(X_fixed[1, 1, 1, :], X_fixed[1, 1, 2, :], \"r.\"); title(L\"Model space: $x = f(zx|zy_{fix})^{-1}$\")\nax5.set_xlim([-3.5, 3.5]); ax5.set_ylim([0,50])\n\n# Latent spaces\nax6 = subplot(2,5,6); plot(Zx_[1, 1, 1, :], Zx_[1, 1, 2, :], \"g.\"); title(L\"Latent space: $zx = f(x|y)$\")\nax6.set_xlim([-3.5, 3.5]); ax6.set_ylim([-3.5, 3.5])\nax7 = subplot(2,5,7); plot(Zy_[1, 1, 1, :], Zy[1, 1, 2, :], \"g.\"); title(L\"Latent space: $zy \\sim \\hat{p}_{zy}$\")\nax7.set_xlim([-3.5, 3.5]); ax7.set_ylim([-3.5, 3.5])\nax8 = subplot(2,5,9); plot(Zx[1, 1, 1, :], Zx[1, 1, 2, :], \".\"); title(L\"Latent space: $zx \\sim \\hat{p}_{zy}$\")\nax8.set_xlim([-3.5, 3.5]); ax8.set_ylim([-3.5, 3.5])\nax9 = subplot(2,5,8); plot(Zy[1, 1, 1, :], Zy[1, 1, 2, :], \".\"); title(L\"Latent space: $zy \\sim \\hat{p}_{zy}$\")\nax9.set_xlim([-3.5, 3.5]); ax9.set_ylim([-3.5, 3.5])\nax10 = subplot(2,5,10); 
plot(Zx[1, 1, 1, :], Zx[1, 1, 2, :], \".\"); \nplot(Zy_fixed[1, 1, 1, :], Zy_fixed[1, 1, 2, :], \"r.\"); title(L\"Latent space: $zx \\sim \\hat{p}_{zx}$\")\nax10.set_xlim([-3.5, 3.5]); ax10.set_ylim([-3.5, 3.5])\nsavefig(\"plot_cbanana.svg\")\nnothing","category":"page"},{"location":"examples/","page":"Examples","title":"Examples","text":"(Image: )","category":"page"},{"location":"examples/#Literature-applications","page":"Examples","title":"Literature applications","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"The following examples show the implementation of applications from the linked papers with [InvertibleNetworks.jl]:","category":"page"},{"location":"examples/","page":"Examples","title":"Examples","text":"Invertible recurrent inference machines (Putzky and Welling, 2019) (generic example)\nGenerative models with maximum likelihood via the change of variable formula (example)\nGlow: Generative flow with invertible 1x1 convolutions (Kingma and Dhariwal, 2018) (generic example, source)","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"MIT License","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"Copyright (c) 2020 SLIM group @ Georgia Institute of Technology","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.","category":"page"},{"location":"#InvertibleNetworks.jl-documentation","page":"Home","title":"InvertibleNetworks.jl documentation","text":"","category":"section"},{"location":"#About","page":"Home","title":"About","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"InvertibleNetworks.jl is a package of invertible layers and networks for machine learning. The invertibility allows to backpropagate through the layers and networks without the need for storing the forward state that is recomputed on the fly, inverse propagating through it. 
This package is the first of its kind in Julia with memory efficient invertible layers, networks and activation functions for machine learning.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package is registered in the Julia general registry and can be installed in the REPL package manager (]):","category":"page"},{"location":"","page":"Home","title":"Home","text":"] add InvertibleNetworks","category":"page"},{"location":"#Authors","page":"Home","title":"Authors","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package is developed and maintained by Felix J. Herrmann's SlimGroup at Georgia Institute of Technology. The main contributors of this package are:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Rafael Orozco, Georgia Institute of Technology (rorozco@gatech.edu)\nPhilipp Witte, Microsoft Corporation (pwitte@microsoft.com)\nGabrio Rizzuti, Utrecht University (g.rizzuti@umcutrecht.nl)\nMathias Louboutin, Georgia Institute of Technology (mlouboutin3@gatech.edu)\nAli Siahkoohi, Georgia Institute of Technology (alisk@gatech.edu)","category":"page"},{"location":"#References","page":"Home","title":"References","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Yann Dauphin, Angela Fan, Michael Auli and David Grangier, \"Language modeling with gated convolutional networks\", Proceedings of the 34th International Conference on Machine Learning, 2017. ArXiv\nLaurent Dinh, Jascha Sohl-Dickstein and Samy Bengio, \"Density estimation using Real NVP\", International Conference on Learning Representations, 2017, ArXiv\nDiederik P. Kingma and Prafulla Dhariwal, \"Glow: Generative Flow with Invertible 1x1 Convolutions\", Conference on Neural Information Processing Systems, 2018. ArXiv\nKeegan Lensink, Eldad Haber and Bas Peters, \"Fully Hyperbolic Convolutional Neural Networks\", arXiv Computer Vision and Pattern Recognition, 2019. ArXiv\nPatrick Putzky and Max Welling, \"Invert to learn to invert\", Advances in Neural Information Processing Systems, 2019. ArXiv\nJakob Kruse, Gianluca Detommaso, Robert Scheichl and Ullrich Köthe, \"HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference\", arXiv Statistics and Machine Learning, 2020. 
ArXiv","category":"page"},{"location":"#Related-work-and-publications","page":"Home","title":"Related work and publications","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The following publications use [InvertibleNetworks.jl]:","category":"page"},{"location":"","page":"Home","title":"Home","text":"“Preconditioned training of normalizing flows for variational inference in inverse problems”\npaper: https://arxiv.org/abs/2101.03709\npresentation\ncode: FastApproximateInference.jl\n\"Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization\"\npaper: https://arxiv.org/abs/2004.07871\npresentation\ncode: https://github.com/slimgroup/Software.SEG2020\n\"Generalized Minkowski sets for the regularization of inverse problems\"\npaper: http://arxiv.org/abs/1903.03942\ncode: SetIntersectionProjection.jl","category":"page"},{"location":"#Acknowledgments","page":"Home","title":"Acknowledgments","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package uses functions from NNlib.jl, Flux.jl and Wavelets.jl","category":"page"}] +[{"location":"api/#Invertible-Networks-API-reference","page":"API Reference","title":"Invertible Networks API reference","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"neuralnet.jl\", \"parameter.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.clear_grad!-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.clear_grad!","text":"P = clear_grad!(NL::Invertible)\n\nResets the gradient of all the parameters in NL\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.get_grads-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.get_grads","text":"P = get_grads(NL::Invertible)\n\nReturns a cell array of all parameters gradients in the network or layer. Each cell entry contains a reference to the original parameter's gradient; i.e. modifying the paramters in P, modifies the parameters in NL.\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.get_params-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.get_params","text":"P = get_params(NL::Invertible)\n\nReturns a cell array of all parameters in the network or layer. Each cell entry contains a reference to the original parameter; i.e. 
modifying the paramters in P, modifies the parameters in NL.\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.reset!-Tuple{InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.reset!","text":"P = reset!(NL::Invertible)\n\nResets the data of all the parameters in NL\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.clear_grad!-Tuple{AbstractVector{Parameter}}","page":"API Reference","title":"InvertibleNetworks.clear_grad!","text":"clear_grad!(NL::NeuralNetLayer)\n\nor\n\nclear_grad!(P::AbstractArray{Parameter, 1})\n\nSet gradients of each Parameter in the network layer to nothing.\n\n\n\n\n\n","category":"method"},{"location":"api/#Activation-functions","page":"API Reference","title":"Activation functions","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"activation_functions.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.ExpClamp-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ExpClamp","text":"y = ExpClamp(x)\n\nSoft-clamped exponential function. See also: ExpClampGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.ExpClampInv-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ExpClampInv","text":"x = ExpClampInv(y)\n\nInverse of ExpClamp function. See also: ExpClamp, ExpClampGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.GaLU-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.GaLU","text":"y = GaLU(x)\n\nGated linear activation unit (not invertible).\n\nSee also: GaLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.GaLUgrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.GaLUgrad","text":"Δx = GaLUgrad(Δy, x)\n\nBackpropagate data residual through GaLU activation.\n\nInput:\n\nΔy: residual\nx: original input (since not invertible)\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: GaLU\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.LeakyReLU-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.LeakyReLU","text":"y = LeakyReLU(x; slope=0.01f0)\n\nLeaky rectified linear unit.\n\nSee also: LeakyReLUinv, LeakyReLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.LeakyReLUgrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.LeakyReLUgrad","text":"Δx = ReLUgrad(Δy, y; slope=0.01f0)\n\nBackpropagate data residual through leaky ReLU function.\n\nInput:\n\nΔy: residual\ny: original output\nslope: slope of non-active part of ReLU\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: LeakyReLU, LeakyReLUinv\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.LeakyReLUinv-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.LeakyReLUinv","text":"x = LeakyReLUinv(y; slope=0.01f0)\n\nInverse of leaky ReLU.\n\nSee also: LeakyReLU, 
LeakyReLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.ReLU-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ReLU","text":"y = ReLU(x)\n\nRectified linear unit (not invertible).\n\nSee also: ReLUgrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.ReLUgrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.ReLUgrad","text":"Δx = ReLUgrad(Δy, x)\n\nBackpropagate data residual through ReLU function.\n\nInput:\n\nΔy: data residual\nx: original input (since not invertible)\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: ReLU\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.Sigmoid-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.Sigmoid","text":"y = Sigmoid(x; low=0, high=1)\n\nSigmoid activation function. Shifted and scaled such that output is [low,high].\n\nSee also: SigmoidInv, SigmoidGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.SigmoidGrad-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.SigmoidGrad","text":"Δx = SigmoidGrad(Δy, y; x=nothing, low=nothing, high=nothing)\n\nBackpropagate data residual through Sigmoid function. Can be shifted and scaled such that output is (low,high]\n\nInput:\n\nΔy: residual\ny: original output\nx: original input, if y not available (in this case, set y=nothing)\nlow: if provided then scale and shift such that output is (low,high]\nhigh: if provided then scale and shift such that output is (low,high]\n\nOutput:\n\nΔx: backpropagated residual\n\nSee also: Sigmoid, SigmoidInv\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks._sigmoidinv-Union{Tuple{T}, Tuple{T, Any}, Tuple{T, Any, Any}} where T","page":"API Reference","title":"InvertibleNetworks._sigmoidinv","text":"x = SigmoidInv(y; low=0, high=1f0)\n\nInverse of Sigmoid function. 
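To illustrate the difference between the invertible and the non-invertible activations listed above, a short sketch (the array sizes are arbitrary assumptions):

```julia
using InvertibleNetworks

x = randn(Float32, 16, 16, 4, 8)

# Leaky ReLU is invertible: forward and inverse form a round trip
y  = LeakyReLU(x; slope=0.01f0)
x_ = LeakyReLUinv(y; slope=0.01f0)   # x_ ≈ x

# ReLU is not invertible, so its gradient needs the original input x
Δy = randn(Float32, size(x))
Δx = ReLUgrad(Δy, x)
```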
Shifted and scaled such that output is [low,high]\n\nSee also: Sigmoid, SigmoidGrad\n\n\n\n\n\n","category":"method"},{"location":"api/#Dimensions-manipulation","page":"API Reference","title":"Dimensions manipulation","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"dimensionality_operations.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.Haar_squeeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.Haar_squeeze","text":"Y = Haar_squeeze(X)\n\nPerform a 1-level channelwise 2D/3D (lifting) Haar transform of X and squeeze output of each transform to increase channels by factor of 4 in 4D tensor or by factor of 8 in 5D channels.\n\nInput:\n\nX: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\n\nOutput:\n\nif 4D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize\n\nor if 5D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize\n\nSee also: wavelet_unsqueeze, Haar_unsqueeze, HaarLift, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.invHaar_unsqueeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.invHaar_unsqueeze","text":"X = invHaar_unsqueeze(Y)\n\nPerform a 1-level inverse 2D/3D Haar transform of Y and unsqueeze output. This reduces the number of channels by factor of 4 in 4D tensors or by factor of 8 in 5D tensors and increases each spatial dimension by a factor of 2. Inverse operation of Haar_squeeze.\n\nInput:\n\nY: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\n\nOutput:\n\nIf 4D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize\n\nIf 5D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize\n\nSee also: wavelet_unsqueeze, Haar_unsqueeze, HaarLift, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.squeeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.squeeze","text":"Y = squeeze(X; pattern=\"column\")\n\nSqueeze operation that is only a reshape. \n\nReshape input image such that each spatial dimension is reduced by a factor of 2, while the number of channels is increased by a factor of 4 if 4D tensor and increased by a factor of 8 if 5D tensor.\n\nInput:\n\nX: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\npattern: Squeezing pattern\n 1 2 3 4 1 1 3 3 1 3 1 3\n 1 2 3 4 1 1 3 3 2 4 2 4\n 1 2 3 4 2 2 4 4 1 3 1 3\n 1 2 3 4 2 2 4 4 2 4 2 4\n\n column patch checkerboard\n\nOutput: if 4D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize\n\nor if 5D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize\n\nSee also: unsqueeze, wavelet_squeeze, wavelet_unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.tensor_cat-Union{Tuple{N}, Tuple{T}, Tuple{AbstractArray{T, N}, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.tensor_cat","text":"X = tensor_cat(Y, Z)\n\nConcatenate ND input tensors along the channel dimension. 
Inverse operation of tensor_split.\n\nInput:\n\nY, Z: ND input tensors, each of dimensions nx [x ny [x nz]] x n_channel x batchsize\n\nOutput:\n\nX: ND output tensor of dimensions nx [x ny [x nz]] x n_channel*2 x batchsize\n\nSee also: tensor_split\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.tensor_split-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.tensor_split","text":"Y, Z = tensor_split(X)\n\nSplit ND input tensor in half along the channel dimension. Inverse operation of tensor_cat.\n\nInput:\n\nX: ND input tensor of dimensions nx [x ny [x nz]] x n_channel x batchsize\n\nOutput:\n\nY, Z: ND output tensors, each of dimensions nx [x ny [x nz]] x n_channel/2 x batchsize\n\nSee also: tensor_cat\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.unsqueeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.unsqueeze","text":"X = unsqueeze(Y; pattern=\"column\")\n\nUndo squeezing operation by reshaping input image such that each spatial dimension is increased by a factor of 2, while the number of channels is decreased by a factor of 4 if 4D tensor of decreased by a factor of 8 if a 5D tensor.\n\nInput:\n\nY: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\npattern: Squeezing pattern\n 1 2 3 4 1 1 3 3 1 3 1 3\n 1 2 3 4 1 1 3 3 2 4 2 4\n 1 2 3 4 2 2 4 4 1 3 1 3\n 1 2 3 4 2 2 4 4 2 4 2 4\n\n column patch checkerboard\n\nOutput: If 4D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize\n\nIf 5D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize\n\nSee also: squeeze, wavelet_squeeze, wavelet_unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.wavelet_squeeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.wavelet_squeeze","text":"Y = wavelet_squeeze(X; type=WT.db1)\n\nPerform a 1-level channelwise 2D wavelet transform of X and squeeze output of each transform to increase number of channels by a factor of 4 if input is 4D tensor or by a factor of 8 if a 5D tensor.\n\nInput:\n\nX: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\ntype: Wavelet filter type. Possible values are WT.haar for Haar wavelets, WT.coif2, WT.coif4, etc. for Coiflet wavelets, or WT.db1, WT.db2, etc. for Daubechies wavetlets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.\n\nOutput: if 4D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize\n\nor if 5D tensor:\n\nY: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize\n\nSee also: wavelet_unsqueeze, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.wavelet_unsqueeze-Union{Tuple{AbstractArray{T, N}}, Tuple{N}, Tuple{T}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.wavelet_unsqueeze","text":"X = wavelet_unsqueeze(Y; type=WT.db1)\n\nPerform a 1-level inverse 2D wavelet transform of Y and unsqueeze output. This reduces the number of channels by factor of 4 if 4D tensor or by a factor of 8 if 5D tensor and increases each spatial dimension by a factor of 2. Inverse operation of wavelet_squeeze.\n\nInput:\n\nY: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize\ntype: Wavelet filter type. 
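The reshaping utilities documented in this section compose into simple round trips; the following sketch assumes a 4D tensor with an even number of channels and spatial sizes divisible by two.

```julia
using InvertibleNetworks

X = randn(Float32, 16, 16, 2, 8)     # nx x ny x n_channel x batchsize

# Squeeze: halve both spatial dimensions, multiply channels by 4 (pure reshape)
Y  = squeeze(X; pattern="checkerboard")
X1 = unsqueeze(Y; pattern="checkerboard")   # X1 ≈ X

# Split and re-concatenate along the channel dimension
Z1, Z2 = tensor_split(X)
X2 = tensor_cat(Z1, Z2)              # identical to X
```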
Possible values are haar for Haar wavelets,\n\ncoif2, coif4, etc. for Coiflet wavelets, or db1, db2, etc. for Daubechies wavetlets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.\n\nOutput: If 4D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize\n\nIf 5D tensor:\n\nX: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize\n\nSee also: wavelet_squeeze, squeeze, unsqueeze\n\n\n\n\n\n","category":"method"},{"location":"api/#Layers","page":"API Reference","title":"Layers","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:type]\nFilter = t -> t<:NeuralNetLayer","category":"page"},{"location":"api/#InvertibleNetworks.ActNorm","page":"API Reference","title":"InvertibleNetworks.ActNorm","text":"AN = ActNorm(k; logdet=false)\n\nCreate activation normalization layer. The parameters are initialized during the first use, such that the output has zero mean and unit variance along channels for the current mini-batch size.\n\nInput:\n\nk: number of channels\nlogdet: bool to indicate whether to compute the logdet\n\nOutput:\n\nAN: Network layer for activation normalization.\n\nUsage:\n\nForward mode: Y, logdet = AN.forward(X)\nInverse mode: X = AN.inverse(Y)\nBackward mode: ΔX, X = AN.backward(ΔY, Y)\n\nTrainable parameters:\n\nScaling factor AN.s\nBias AN.b\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.AffineLayer","page":"API Reference","title":"InvertibleNetworks.AffineLayer","text":"AL = AffineLayer(nx, ny, nc; logdet=false)\n\nCreate a layer for an affine transformation.\n\nInput:\n\nnx, ny,nc`: input dimensions and number of channels\nlogdet: bool to indicate whether to compute the logdet\n\nOutput:\n\nAL: Network layer for affine transformation.\n\nUsage:\n\nForward mode: Y, logdet = AL.forward(X)\nInverse mode: X = AL.inverse(Y)\nBackward mode: ΔX, X = AL.backward(ΔY, Y)\n\nTrainable parameters:\n\nScaling factor AL.s\nBias AL.b\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ConditionalLayerGlow","page":"API Reference","title":"InvertibleNetworks.ConditionalLayerGlow","text":"CL = ConditionalLayerGlow(C::Conv1x1, RB::ResidualBlock; logdet=false)\n\nor\n\nCL = ConditionalLayerGlow(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=2) (2D)\n\nCL = ConditionalLayerGlow(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=3) (3D)\n\nCL = ConditionalLayerGlowGlow3D(n_in, n_cond, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false) (3D)\n\nCreate a Real NVP-style invertible conditional coupling layer based on 1x1 convolutions and a residual block.\n\nInput:\n\nC::Conv1x1: 1x1 convolution layer\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\nlogdet: bool to indicate whether to compte the logdet of the layer\n\nor\n\nn_in,n_out, n_hidden: number of channels for: passive input, conditioned input and hidden layer\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nCL: Invertible Real NVP conditional coupling layer.\n\nUsage:\n\nForward mode: Y, logdet = CL.forward(X, C) (if constructed with logdet=true)\nInverse mode: X = CL.inverse(Y, C)\nBackward mode: ΔX, X = CL.backward(ΔY, Y, C)\n\nTrainable parameters:\n\nNone in CL itself\nTrainable parameters in residual block CL.RB and 1x1 convolution layer CL.C\n\nSee also: Conv1x1, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ConditionalLayerHINT","page":"API Reference","title":"InvertibleNetworks.ConditionalLayerHINT","text":"CH = ConditionalLayerHINT(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, permute=true, ndims=2) (2D)\n\nCH = ConditionalLayerHINT3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, permute=true) (3D)\n\nCreate a conditional HINT layer based on coupling blocks and 1 level recursion.\n\nInput:\n\nn_in, n_hidden: number of input and hidden channels of both X and Y\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\npermute: bool to indicate whether to permute X and Y. Default is true\nndims : number of dimensions\n\nOutput:\n\nCH: Conditional HINT coupling layer.\n\nUsage:\n\nForward mode: Zx, Zy, logdet = CH.forward_X(X, Y)\nInverse mode: X, Y = CH.inverse(Zx, Zy)\nBackward mode: ΔX, ΔY, X, Y = CH.backward(ΔZx, ΔZy, Zx, Zy)\nForward mode Y: Zy = CH.forward_Y(Y)\nInverse mode Y: Y = CH.inverse(Zy)\n\nTrainable parameters:\n\nNone in CH itself\nTrainable parameters in coupling layers CH.CL_X, CH.CL_Y, CH.CL_YX and in permutation layers CH.C_X and CH.C_Y.\n\nSee also: CouplingLayerBasic, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ConditionalResidualBlock","page":"API Reference","title":"InvertibleNetworks.ConditionalResidualBlock","text":"RB = ConditionalResidualBlock(nx1, nx2, nx_in, ny1, ny2, ny_in, n_hidden, batchsize; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a (non-invertible) conditional residual block, consisting of one dense and three convolutional layers with ReLU activation functions. The dense operator maps the data to the image space and both tensors are concatenated and fed to the subsequent convolutional layers.\n\nInput:\n\nnx1, nx2, nx_in: spatial dimensions and no. of channels of input image\nny1, ny2, ny_in: spatial dimensions and no. of channels of input data\nn_hidden: number of hidden channels\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: strides for the first and third convolution (s1) and the second convolution (s2)\n\nOutput:\n\nRB: conditional residual block layer\n\nUsage:\n\nForward mode: Zx, Zy = RB.forward(X, Y)\nBackward mode: ΔX, ΔY = RB.backward(ΔZx, ΔZy, X, Y)\n\nTrainable parameters:\n\nConvolutional kernel weights RB.W0, RB.W1, RB.W2 and RB.W3\nBias terms RB.b0, RB.b1 and RB.b2\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.Conv1x1","page":"API Reference","title":"InvertibleNetworks.Conv1x1","text":"C = Conv1x1(k; logdet=false)\n\nor\n\nC = Conv1x1(v1, v2, v3; logdet=false)\n\nCreate a network layer for 1x1 convolutions using Householder reflections.\n\nInput:\n\nk: number of channels\nv1, v2, v3: Vectors from which to construct the matrix.\nlogdet: if true, returns logdet in forward pass (which is always zero)\n\nOutput:\n\nC: Network layer for 1x1 convolutions with Householder reflections.\n\nUsage:\n\nForward mode: Y, logdet = C.forward(X)\nBackward mode: ΔX, X = C.backward((ΔY, Y))\n\nTrainable parameters:\n\nHouseholder vectors C.v1, C.v2, C.v3\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerBasic","page":"API Reference","title":"InvertibleNetworks.CouplingLayerBasic","text":"CL = CouplingLayerBasic(RB::ResidualBlock; logdet=false)\n\nor\n\nCL = CouplingLayerBasic(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false, ndims=2) (2D)\n\nCL = CouplingLayerBasic(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false, ndims=3) (3D)\n\nCL = CouplingLayerBasic3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, logdet=false) (3D)\n\nCreate a Real NVP-style invertible coupling layer with a residual block.\n\nInput:\n\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\nlogdet: bool to indicate whether to compute the logdet of the layer\n\nor\n\nn_in, n_hidden: number of input and hidden channels\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : Number of dimensions\n\nOutput:\n\nCL: Invertible Real NVP coupling layer.\n\nUsage:\n\nForward mode: Y1, Y2, logdet = CL.forward(X1, X2) (if constructed with logdet=true)\nInverse mode: X1, X2 = CL.inverse(Y1, Y2)\nBackward mode: ΔX1, ΔX2, X1, X2 = CL.backward(ΔY1, ΔY2, Y1, Y2)\n\nTrainable parameters:\n\nNone in CL itself\nTrainable parameters in residual block CL.RB\n\nSee also: ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerGlow","page":"API Reference","title":"InvertibleNetworks.CouplingLayerGlow","text":"CL = CouplingLayerGlow(C::Conv1x1, RB::ResidualBlock; logdet=false)\n\nor\n\nCL = CouplingLayerGlow(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=2) (2D)\n\nCL = CouplingLayerGlow(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false, ndims=3) (3D)\n\nCL = CouplingLayerGlow3D(n_in, n_hidden; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1, logdet=false) (3D)\n\nCreate a Real NVP-style invertible coupling layer based on 1x1 convolutions and a residual block.\n\nInput:\n\nC::Conv1x1: 1x1 convolution layer\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\nlogdet: bool to indicate whether to compute the logdet of the layer\n\nor\n\nn_in, n_hidden: number of input and hidden channels\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nCL: Invertible Real NVP coupling layer.\n\nUsage:\n\nForward mode: Y, logdet = CL.forward(X) (if constructed with logdet=true)\nInverse mode: X = CL.inverse(Y)\nBackward mode: ΔX, X = CL.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in CL itself\nTrainable parameters in residual block CL.RB and 1x1 convolution layer CL.C\n\nSee also: Conv1x1, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerHINT","page":"API Reference","title":"InvertibleNetworks.CouplingLayerHINT","text":"H = CouplingLayerHINT(n_in, n_hidden; logdet=false, permute=\"none\", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2) (2D)\n\nH = CouplingLayerHINT(n_in, n_hidden; logdet=false, permute=\"none\", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=3) (3D)\n\nH = CouplingLayerHINT3D(n_in, n_hidden; logdet=false, permute=\"none\", k1=3, k2=3, p1=1, p2=1, s1=1, s2=1) (3D)\n\nCreate a recursive HINT-style invertible layer based on coupling blocks.\n\nInput:\n\nn_in, n_hidden: number of input and hidden channels\nlogdet: bool to indicate whether to return the log determinant. Default is false.\npermute: string to specify permutation. Options are \"none\", \"lower\", \"both\" or \"full\".\nk1, k2: kernel size of convolutions in residual block. 
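Returning to the CouplingLayerGlow entry above, a minimal usage sketch that follows the documented keyword defaults (channel counts and sizes are illustrative assumptions; an even n_in is assumed so the layer can split channels):

using InvertibleNetworks

n_in, n_hidden = 4, 16
CL = CouplingLayerGlow(n_in, n_hidden; logdet=true)
X  = randn(Float32, 16, 16, n_in, 2)
Y, logdet = CL.forward(X)
X_ = CL.inverse(Y)                 # inverse needs only Y, no stored activations
ΔY = randn(Float32, size(Y))
ΔX, X_re = CL.backward(ΔY, Y)      # data-space gradient plus the recomputed input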
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\n\nOutput:\n\nH: Recursive invertible HINT coupling layer.\n\nUsage:\n\nForward mode: Y = H.forward(X)\nInverse mode: X = H.inverse(Y)\nBackward mode: ΔX, X = H.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in H itself\nTrainable parameters in coupling layers H.CL\n\nSee also: CouplingLayerBasic, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.CouplingLayerIRIM","page":"API Reference","title":"InvertibleNetworks.CouplingLayerIRIM","text":"IL = CouplingLayerIRIM(C::Conv1x1, RB::ResidualBlock)\n\nor\n\nIL = CouplingLayerIRIM(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false, ndims=2) (2D)\n\nIL = CouplingLayerIRIM(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false, ndims=3) (3D)\n\nIL = CouplingLayerIRIM3D(n_in, n_hidden; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, logdet=false) (3D)\n\nCreate an i-RIM invertible coupling layer based on 1x1 convolutions and a residual block. \n\nInput: \n\nC::Conv1x1: 1x1 convolution layer\n\nRB::ResidualBlock: residual block layer consisting of 3 convolutional layers with ReLU activations.\n\nor\n\nnx, ny, nz: spatial dimensions of input\n\nn_in, n_hidden: number of input and hidden channels\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\n\nOutput:\n\nIL: Invertible i-RIM coupling layer.\n\nUsage:\n\nForward mode: Y = IL.forward(X)\nInverse mode: X = IL.inverse(Y)\nBackward mode: ΔX, X = IL.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in IL itself\nTrainable parameters in residual block IL.RB and 1x1 convolution layer IL.C\n\nSee also: Conv1x1, ResidualBlock!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.FluxBlock","page":"API Reference","title":"InvertibleNetworks.FluxBlock","text":"FB = FluxBlock(model::Chain)\n\nCreate a (non-invertible) neural network block from a Flux network.\n\nInput: \n\nmodel: Flux neural network of type Chain\n\nOutput:\n\nFB: neural network block layer\n\nUsage:\n\nForward mode: Y = FB.forward(X)\nBackward mode: ΔX = FB.backward(ΔY, X)\n\nTrainable parameters:\n\nNetwork parameters given by Flux.params(model)\n\nSee also: Chain, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.HyperbolicLayer","page":"API Reference","title":"InvertibleNetworks.HyperbolicLayer","text":"HyperbolicLayer(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1)\nHyperbolicLayer(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1, ndims=2)\nHyperbolicLayer3D(n_in, kernel, stride, pad; action=0, α=1f0, n_hidden=1)\n\nor\n\nHyperbolicLayer(W, b, stride, pad; action=0, α=1f0)\nHyperbolicLayer3D(W, b, stride, pad; action=0, α=1f0)\n\nCreate an invertible hyperbolic coupling layer.\n\nInput:\n\nkernel, stride, pad: Kernel size, stride and padding of the convolutional operator\naction: Flag that defines whether the layer keeps the number of channels fixed (0), increases it by a factor of 4 (or 8 in 3D) (1) 
or decreases it by a factor of 4 (or 8) (-1).\nW, b: Convolutional weight and bias. W has dimensions of (kernel, kernel, n_in, n_in). b has dimensions of n_in.\nα: Step size for second time derivative. Default is 1.\nn_hidden: Increase the no. of channels by n_hidden in the forward convolution. After applying the transpose convolution, the dimensions are back to the input dimensions.\nndims: Number of dimensions of the input (2 for 2D, 3 for 3D)\n\nOutput:\n\nHL: Invertible hyperbolic coupling layer\n\nUsage:\n\nForward mode: X_curr, X_new = HL.forward(X_prev, X_curr)\nInverse mode: X_prev, X_curr = HL.inverse(X_curr, X_new)\nBackward mode: ΔX_prev, ΔX_curr, X_prev, X_curr = HL.backward(ΔX_curr, ΔX_new, X_curr, X_new)\n\nTrainable parameters:\n\nHL.W: Convolutional kernel\nHL.b: Bias\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.ResidualBlock","page":"API Reference","title":"InvertibleNetworks.ResidualBlock","text":"RB = ResidualBlock(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, fan=false)\nRB = ResidualBlock3D(n_in, n_hidden; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, fan=false)\n\nor\n\nRB = ResidualBlock(W1, W2, W3, b1, b2; p1=1, p2=1, s1=1, s2=1, fan=false)\nRB = ResidualBlock3D(W1, W2, W3, b1, b2; p1=1, p2=1, s1=1, s2=1, fan=false)\n\nCreate a (non-invertible) residual block, consisting of three convolutional layers and activation functions. The first convolution is a downsampling operation with a stride equal to the kernel dimension. The last convolution is the corresponding transpose operation and upsamples the data to either its original dimensions or to twice the number of input channels (for fan=true). The first and second layer contain a bias term.\n\nInput:\n\nn_in: number of input channels\nn_hidden: number of hidden channels\nn_out: number of output channels\nactivation: activation type between conv layers and final output\nk1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nfan: bool to indicate whether the output has twice the number of input channels. For fan=false, the last activation function is a gated linear unit (thereby bringing the output back to the original dimensions). 
For fan=true, the last activation is a ReLU, in which case the output has twice the number of channels as the input.\n\nor\n\nW1, W2, W3: 4D tensors of convolutional weights\nb1, b2: bias terms\n\nOutput:\n\nRB: residual block layer\n\nUsage:\n\nForward mode: Y = RB.forward(X)\nBackward mode: ΔX = RB.backward(ΔY, X)\n\nTrainable parameters:\n\nConvolutional kernel weights RB.W1, RB.W2 and RB.W3\nBias terms RB.b1 and RB.b2\n\nSee also: get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#Networks","page":"API Reference","title":"Networks","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:type]\nFilter = t -> t<:InvertibleNetwork","category":"page"},{"location":"api/#InvertibleNetworks.NetworkConditionalGlow","page":"API Reference","title":"InvertibleNetworks.NetworkConditionalGlow","text":"G = NetworkConditionalGlow(n_in, n_cond, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nG = NetworkConditionalGlow3D(n_in, n_cond, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nCreate a conditional invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.\n\nInput: \n\n'n_in': number of input channels of the variable to sample\n'n_cond': number of input channels of the condition\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, perform a squeeze operation (which halves the spatial dimensions and increases the number of channels), then split the output in half along the channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\n\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\nsqueeze_type : squeeze type that happens at each multiscale level\n\nOutput:\n\nG: invertible Glow network.\n\nUsage:\n\nForward mode: ZX, ZC, logdet = G.forward(X, C)\nBackward mode: ΔX, X, ΔC = G.backward(ΔZX, ZX, ZC)\n\nTrainable parameters:\n\nNone in G itself\nTrainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.\n\nSee also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkConditionalHINT","page":"API Reference","title":"InvertibleNetworks.NetworkConditionalHINT","text":"CH = NetworkConditionalHINT(n_in, n_hidden, depth; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCH = NetworkConditionalHINT3D(n_in, n_hidden, depth; k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a conditional HINT network for data-driven generative modeling based on the change of variables formula.\n\nInput:\n\n'n_in': number of input channels\nn_hidden: number of hidden units in residual blocks\ndepth: number of network layers\nk1, k2: kernel size for first and third residual layer (k1) and second layer (k2)\np1, p2: respective padding sizes for residual block layers\ns1, s2: respective strides for residual block layers\n\nOutput:\n\nCH: conditional HINT network\n\nUsage:\n\nForward mode: Zx, Zy, logdet = CH.forward(X, Y)\nInverse mode: X, Y = CH.inverse(Zx, Zy)\nBackward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)\n\nTrainable parameters:\n\nNone in CH itself\nTrainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i],\n\nand in coupling layers CH.CL[i], where i ranges from 1 to depth.\n\nSee also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkGlow","page":"API Reference","title":"InvertibleNetworks.NetworkGlow","text":"G = NetworkGlow(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nG = NetworkGlow3D(n_in, n_hidden, L, K; k1=3, k2=1, p1=1, p2=0, s1=1, s2=1)\n\nCreate an invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.\n\nInput: \n\n'n_in': number of input channels\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, perform a squeeze operation (which halves the spatial dimensions and increases the number of channels), then split the output in half along the channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size of convolutions in residual block. 
k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.\n\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2)\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2)\nndims : number of dimensions\nsqueeze_type : squeeze type that happens at each multiscale level\nlogdet : boolean to turn on/off logdet term tracking and gradient calculation\n\nOutput:\n\nG: invertible Glow network.\n\nUsage:\n\nForward mode: Y, logdet = G.forward(X)\nBackward mode: ΔX, X = G.backward(ΔY, Y)\n\nTrainable parameters:\n\nNone in G itself\nTrainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.\n\nSee also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkHyperbolic","page":"API Reference","title":"InvertibleNetworks.NetworkHyperbolic","text":"H = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)\nH = NetworkHyperbolic(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0, ndims=2)\nH = NetworkHyperbolic3D(n_in, architecture; k=3, s=1, p=1, logdet=true, α=1f0)\n\nCreate an invertible network based on hyperbolic layers. The network architecture is specified by a tuple of the form ((action1, nhidden1), (action2, nhidden2), ... ). Each inner tuple corresponds to an additional layer. The first inner tuple argument specifies whether the respective layer increases the number of channels (set to 1), decreases it (set to -1) or leaves it constant (set to 0). The second argument specifies the number of hidden units for that layer.\n\nInput: \n\nn_in: number of channels of input tensor.\n\nn_hidden: number of hidden units in residual blocks\narchitecture: Tuple of tuples specifying the network architecture; ((action1, nhidden1), (action2, nhidden2))\nk, s, p: Kernel size, stride and padding of convolutional kernels\n\nlogdet: Bool to indicate whether to return the logdet\nα: Step size in hyperbolic network. Defaults to 1\nndims: Number of dimensions\n\nOutput:\n\nH: invertible hyperbolic network.\n\nUsage:\n\nForward mode: Y_prev, Y_curr, logdet = H.forward(X_prev, X_curr)\nInverse mode: X_curr, X_new = H.inverse(Y_curr, Y_new)\nBackward mode: ΔX_curr, ΔX_new, X_curr, X_new = H.backward(ΔY_curr, ΔY_new, Y_curr, Y_new)\n\nTrainable parameters:\n\nNone in H itself\nTrainable parameters in the hyperbolic layers H.HL[j].\n\nSee also: CouplingLayer!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkLoop","page":"API Reference","title":"InvertibleNetworks.NetworkLoop","text":"L = NetworkLoop(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1, ndims=2) (2D)\n\nL = NetworkLoop3D(n_in, n_hidden, maxiter, Ψ; k1=4, k2=3, p1=0, p2=1, s1=4, s2=1) (3D)\n\nCreate an invertible recurrent inference machine (i-RIM) consisting of an unrolled loop for a given number of iterations.\n\nInput: \n\n'n_in': number of input channels\nn_hidden: number of hidden units in residual blocks\nmaxiter: number of unrolled loop iterations\nΨ: link function\nk1, k2: stencil sizes for convolutions in the residual blocks. The first convolution uses a stencil of size and stride k1, thereby downsampling the input. The second convolution uses a stencil of size k2. 
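Returning to the NetworkGlow entry above, a minimal end-to-end sketch following the documented constructor and forward/backward modes (the sizes, L and K are illustrative assumptions and split_scales is left at its default):

using InvertibleNetworks

n_in, n_hidden, L, K = 4, 32, 2, 2
G = NetworkGlow(n_in, n_hidden, L, K)
X = randn(Float32, 16, 16, n_in, 2)
ZX, logdet = G.forward(X)
ΔZ = randn(Float32, size(ZX))
ΔX, X_ = G.backward(ΔZ, ZX)        # gradients without stored forward activations
# parameters can then be updated via get_params(G) and an optimizer, e.g. Flux.ADAM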
The last layer uses a stencil of size and stride k1, but performs the transpose operation of the first convolution, thus upsampling the output to the original input size.\np1, p2: padding for the first and third convolution (p1) and the second convolution (p2) in residual block\ns1, s2: stride for the first and third convolution (s1) and the second convolution (s2) in residual block\nndims : number of dimensions\n\nOutput:\n\nL: invertible i-RIM network.\n\nUsage:\n\nForward mode: η_out, s_out = L.forward(η_in, s_in, d, A)\nInverse mode: η_in, s_in = L.inverse(η_out, s_out, d, A)\nBackward mode: Δη_in, Δs_in, η_in, s_in = L.backward(Δη_out, Δs_out, η_out, s_out, d, A)\n\nTrainable parameters:\n\nNone in L itself\nTrainable parameters in the invertible coupling layers L.L[i], and actnorm layers L.AN[i], where i ranges from 1 to the number of loop iterations.\n\nSee also: CouplingLayerIRIM, ResidualBlock, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkMultiScaleConditionalHINT","page":"API Reference","title":"InvertibleNetworks.NetworkMultiScaleConditionalHINT","text":"CH = NetworkMultiScaleConditionalHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCH = NetworkMultiScaleConditionalHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a conditional HINT network for data-driven generative modeling based on the change of variables formula.\n\nInput: \n\n'n_in': number of input channels\n\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size for first and third residual layer (k1) and second layer (k2)\np1, p2: respective padding sizes for residual block layers\n\ns1, s2: respective strides for residual block layers\nndims : number of dimensions\n\nOutput:\n\nCH: conditional HINT network\n\nUsage:\n\nForward mode: Zx, Zy, logdet = CH.forward(X, Y)\nInverse mode: X, Y = CH.inverse(Zx, Zy)\nBackward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)\n\nTrainable parameters:\n\nNone in CH itself\nTrainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i], \n\nand in coupling layers CH.CL[i], where i ranges from 1 to depth.\n\nSee also: ActNorm, ConditionalLayerHINT!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.NetworkMultiScaleHINT","page":"API Reference","title":"InvertibleNetworks.NetworkMultiScaleHINT","text":"H = NetworkMultiScaleHINT(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1, ndims=2)\n\nH = NetworkMultiScaleHINT3D(n_in, n_hidden, L, K; split_scales=false, k1=3, k2=3, p1=1, p2=1, s1=1, s2=1)\n\nCreate a multiscale HINT network for data-driven generative modeling based on the change of variables formula.\n\nInput: \n\n'n_in': number of input channels\n\nn_hidden: number of hidden units in residual blocks\nL: number of scales (outer loop)\nK: number of flow steps per scale (inner loop)\nsplit_scales: if true, split output in half along channel dimension after each scale. 
Feed one half through the next layers, while saving the remaining channels for the output.\nk1, k2: kernel size for first and third residual layer (k1) and second layer (k2)\np1, p2: respective padding sizes for residual block layers\n\ns1, s2: respective strides for residual block layers\nndims : number of dimensions\n\nOutput:\n\nH: multiscale HINT network\n\nUsage:\n\nForward mode: Z, logdet = H.forward(X)\nInverse mode: X = H.inverse(Z)\nBackward mode: ΔX, X = H.backward(ΔZ, Z)\n\nTrainable parameters:\n\nNone in H itself\nTrainable parameters in activation normalizations H.AN[i], \n\nand in coupling layers H.CL[i], where i ranges from 1 to depth.\n\nSee also: ActNorm, CouplingLayerHINT!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#InvertibleNetworks.SummarizedNet","page":"API Reference","title":"InvertibleNetworks.SummarizedNet","text":"G = SummarizedNet(cond_net, sum_net)\n\nCreate a summarized neural conditional approximator from a conditional approximator cond_net and a summary network sum_net.\n\nInput: \n\n'cond_net': invertible conditional distribution approximator\n'sum_net': summary network; should be a Flux layer and invariant to the dimension of interest. \n\nOutput:\n\nG: summarized network.\n\nUsage:\n\nForward mode: ZX, ZY, logdet = G.forward(X, Y)\nBackward mode: ΔX, X, ΔY = G.backward(ΔZX, ZX, ZY; Y_save=Y)\nInverse mode: ZX, ZY, logdet = G.inverse(ZX, ZY)\n\nTrainable parameters:\n\nNone in G itself\nTrainable parameters in conditional approximator G.cond_net and summary network G.sum_net.\n\nSee also: ActNorm, CouplingLayerGlow!, get_params, clear_grad!\n\n\n\n\n\n","category":"type"},{"location":"api/#AD-Integration","page":"API Reference","title":"AD Integration","text":"","category":"section"},{"location":"api/","page":"API Reference","title":"API Reference","text":"Modules = [InvertibleNetworks]\nOrder = [:function]\nPages = [\"chainrules.jl\"]","category":"page"},{"location":"api/#InvertibleNetworks.backward_update!-Union{Tuple{N}, Tuple{T}, Tuple{InvertibleNetworks.InvertibleOperationsTape, AbstractArray{T, N}}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.backward_update!","text":"Update state in the backward pass\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.check_coherence-Tuple{InvertibleNetworks.InvertibleOperationsTape, InvertibleNetworks.Invertible}","page":"API Reference","title":"InvertibleNetworks.check_coherence","text":"Error if mismatch between state and network\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.current-Tuple{InvertibleNetworks.InvertibleOperationsTape}","page":"API Reference","title":"InvertibleNetworks.current","text":"Get current state of the tape\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.forward_update!-Union{Tuple{N}, Tuple{T}, Tuple{InvertibleNetworks.InvertibleOperationsTape, AbstractArray{T, N}, AbstractArray{T, N}, Union{Nothing, T}, InvertibleNetworks.Invertible}} where {T, N}","page":"API Reference","title":"InvertibleNetworks.forward_update!","text":"Update state in the forward pass.\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.isa_newblock-Tuple{InvertibleNetworks.InvertibleOperationsTape, Any}","page":"API Reference","title":"InvertibleNetworks.isa_newblock","text":"Determine if the input is related to a new block of invertible 
operations\n\n\n\n\n\n","category":"method"},{"location":"api/#InvertibleNetworks.reset!-Tuple{InvertibleNetworks.InvertibleOperationsTape}","page":"API Reference","title":"InvertibleNetworks.reset!","text":"Reset the state of the tape\n\n\n\n\n\n","category":"method"},{"location":"examples/#Further-examples","page":"Examples","title":"Further examples","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"We provide usage examples for all the layers and networks in our examples subfolder. Each example shows how to set up and use the building blocks for simple random variables.","category":"page"},{"location":"examples/#D-Rosenbrock/banana-distribution-sampling-w/-GLOW","page":"Examples","title":"2D Rosenbrock/banana distribution sampling w/ GLOW","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"using LinearAlgebra, InvertibleNetworks, PyPlot, Flux, Random\n\n# Random seed\nRandom.seed!(11)\n\n# Define network\nnx = 1; ny = 1; n_in = 2\nn_hidden = 64\nbatchsize = 20\ndepth = 4\nAN = Array{ActNorm}(undef, depth)\nL = Array{CouplingLayerGlow}(undef, depth)\nParams = Array{Parameter}(undef, 0)\n\n# Create layers\nfor j=1:depth\n AN[j] = ActNorm(n_in; logdet=true)\n L[j] = CouplingLayerGlow(n_in, n_hidden; k1=1, k2=1, p1=0, p2=0, logdet=true)\n\n # Collect parameters\n global Params = cat(Params, get_params(AN[j]); dims=1)\n global Params = cat(Params, get_params(L[j]); dims=1)\nend\n\n# Forward pass\nfunction forward(X)\n logdet = 0f0\n for j=1:depth\n X_, logdet1 = AN[j].forward(X)\n X, logdet2 = L[j].forward(X_)\n logdet += (logdet1 + logdet2)\n end\n return X, logdet\nend\n\n# Backward pass\nfunction backward(ΔX, X)\n for j=depth:-1:1\n ΔX_, X_ = L[j].backward(ΔX, X)\n ΔX, X = AN[j].backward(ΔX_, X_)\n end\n return ΔX, X\nend\n\n# Loss\nfunction loss(X)\n Y, logdet = forward(X)\n f = -log_likelihood(Y) - logdet\n ΔY = -∇log_likelihood(Y)\n ΔX = backward(ΔY, Y)[1]\n return f, ΔX\nend\n\n# Training\nmaxiter = 2000\nopt = Flux.ADAM(1f-3)\nfval = zeros(Float32, maxiter)\n\nfor j=1:maxiter\n\n # Evaluate objective and gradients\n X = sample_banana(batchsize)\n fval[j] = loss(X)[1]\n\n # Update params\n for p in Params\n Flux.update!(opt, p.data, p.grad)\n end\n clear_grad!(Params)\nend\n\n####################################################################################################\n\n# Testing\ntest_size = 250\nX = sample_banana(test_size)\nY_ = forward(X)[1]\nY = randn(Float32, 1, 1, 2, test_size)\nX_ = backward(Y, Y)[2]\n\n# Plot\nfig = figure(figsize=[8,8])\nax1 = subplot(2,2,1); plot(X[1, 1, 1, :], X[1, 1, 2, :], \".\"); title(L\"Data space: $x \\sim \\hat{p}_X$\")\nax1.set_xlim([-3.5,3.5]); ax1.set_ylim([0,50])\nax2 = subplot(2,2,2); plot(Y_[1, 1, 1, :], Y_[1, 1, 2, :], \"g.\"); title(L\"Latent space: $z = f(x)$\")\nax2.set_xlim([-3.5, 3.5]); ax2.set_ylim([-3.5, 3.5])\nax3 = subplot(2,2,3); plot(X_[1, 1, 1, :], X_[1, 1, 2, :], \"g.\"); title(L\"Data space: $x = f^{-1}(z)$\")\nax3.set_xlim([-3.5,3.5]); ax3.set_ylim([0,50])\nax4 = subplot(2,2,4); plot(Y[1, 1, 1, :], Y[1, 1, 2, :], \".\"); title(L\"Latent space: $z \\sim \\hat{p}_Z$\")\nax4.set_xlim([-3.5, 3.5]); ax4.set_ylim([-3.5, 3.5])\nsavefig(\"../src/figures/plot_banana.svg\")\n\nnothing","category":"page"},{"location":"examples/","page":"Examples","title":"Examples","text":"(Image: 
plot_banana.svg)","category":"page"},{"location":"examples/#Conditional-2D-Rosenbrock/banana-distribution-sampling-w/-cHINT","page":"Examples","title":"Conditional 2D Rosenbrock/banana distribution sampling w/ cHINT","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"using InvertibleNetworks\nusing Flux, LinearAlgebra, PyPlot, Random\n\n# Random seed\nRandom.seed!(11)\n\n# Define network\nnx = 1; ny = 1; n_in = 2\nn_hidden = 64\nbatchsize = 64\ndepth = 8\n\n# Construct HINT network\nH = NetworkConditionalHINT(n_in, n_hidden, depth; k1=1, k2=1, p1=0, p2=0)\n\n# Linear forward operator\nA = randn(Float32,2,2)\nA = A / (2*opnorm(A))\n\n# Loss\nfunction loss(H, X, Y)\n Zx, Zy, logdet = H.forward(X, Y)\n f = -log_likelihood(tensor_cat(Zx, Zy)) - logdet\n ΔZ = -∇log_likelihood(tensor_cat(Zx, Zy))\n ΔZx, ΔZy = tensor_split(ΔZ)\n ΔX, ΔY = H.backward(ΔZx, ΔZy, Zx, Zy)[1:2]\n return f, ΔX, ΔY\nend\n\n# Training\nmaxiter = 1000\nopt = Flux.ADAM(1f-3)\nfval = zeros(Float32, maxiter)\n\nfor j=1:maxiter\n\n # Evaluate objective and gradients\n X = sample_banana(batchsize)\n Y = reshape(A*reshape(X, :, batchsize), nx, ny, n_in, batchsize)\n Y += .2f0*randn(Float32, nx, ny, n_in, batchsize)\n\n fval[j] = loss(H, X, Y)[1]\n\n # Update params\n for p in get_params(H)\n Flux.update!(opt, p.data, p.grad)\n end\n clear_grad!(H)\nend\n\n# Testing\ntest_size = 250\nX = sample_banana(test_size)\nY = reshape(A*reshape(X, :, test_size), nx, ny, n_in, test_size)\nY += .2f0*randn(Float32, nx, ny, n_in, test_size)\n\nZx_, Zy_ = H.forward(X, Y)[1:2]\n\nZx = randn(Float32, nx, ny, n_in, test_size)\nZy = randn(Float32, nx, ny, n_in, test_size)\nX_, Y_ = H.inverse(Zx, Zy)\n\n# Now select single fixed sample from all Ys\nX_fixed = sample_banana(1)\nY_fixed = reshape(A*vec(X_fixed), nx, ny, n_in, 1)\nY_fixed += .2f0*randn(Float32, size(X_fixed))\n\nZy_fixed = H.forward_Y(Y_fixed)\nZx = randn(Float32, nx, ny, n_in, test_size)\n\nX_post = H.inverse(Zx, Zy_fixed.*ones(Float32, nx, ny, n_in, test_size))[1]\n\n# Model/data spaces\nfig = figure(figsize=[16,6])\nax1 = subplot(2,5,1); plot(X[1, 1, 1, :], X[1, 1, 2, :], \".\"); title(L\"Model space: $x \\sim \\hat{p}_x$\")\nax1.set_xlim([-3.5, 3.5]); ax1.set_ylim([0,50])\nax2 = subplot(2,5,2); plot(Y[1, 1, 1, :], Y[1, 1, 2, :], \".\"); title(L\"Noisy data $y=Ax+n$ \")\n\nax3 = subplot(2,5,3); plot(X_[1, 1, 1, :], X_[1, 1, 2, :], \"g.\"); title(L\"Model space: $x = f(zx|zy)^{-1}$\")\nax3.set_xlim([-3.5, 3.5]); ax3.set_ylim([0,50])\nax4 = subplot(2,5,4); plot(Y_[1, 1, 1, :], Y_[1, 1, 2, :], \"g.\"); title(L\"Data space: $y = f(zx|zy)^{-1}$\")\n\nax5 = subplot(2,5,5); plot(X_post[1, 1, 1, :], X_post[1, 1, 2, :], \"g.\"); \nplot(X_fixed[1, 1, 1, :], X_fixed[1, 1, 2, :], \"r.\"); title(L\"Model space: $x = f(zx|zy_{fix})^{-1}$\")\nax5.set_xlim([-3.5, 3.5]); ax5.set_ylim([0,50])\n\n# Latent spaces\nax6 = subplot(2,5,6); plot(Zx_[1, 1, 1, :], Zx_[1, 1, 2, :], \"g.\"); title(L\"Latent space: $zx = f(x|y)$\")\nax6.set_xlim([-3.5, 3.5]); ax6.set_ylim([-3.5, 3.5])\nax7 = subplot(2,5,7); plot(Zy_[1, 1, 1, :], Zy[1, 1, 2, :], \"g.\"); title(L\"Latent space: $zy \\sim \\hat{p}_{zy}$\")\nax7.set_xlim([-3.5, 3.5]); ax7.set_ylim([-3.5, 3.5])\nax8 = subplot(2,5,9); plot(Zx[1, 1, 1, :], Zx[1, 1, 2, :], \".\"); title(L\"Latent space: $zx \\sim \\hat{p}_{zy}$\")\nax8.set_xlim([-3.5, 3.5]); ax8.set_ylim([-3.5, 3.5])\nax9 = subplot(2,5,8); plot(Zy[1, 1, 1, :], Zy[1, 1, 2, :], \".\"); title(L\"Latent space: $zy \\sim \\hat{p}_{zy}$\")\nax9.set_xlim([-3.5, 
3.5]); ax9.set_ylim([-3.5, 3.5])\nax10 = subplot(2,5,10); plot(Zx[1, 1, 1, :], Zx[1, 1, 2, :], \".\"); \nplot(Zy_fixed[1, 1, 1, :], Zy_fixed[1, 1, 2, :], \"r.\"); title(L\"Latent space: $zx \\sim \\hat{p}_{zx}$\")\nax10.set_xlim([-3.5, 3.5]); ax10.set_ylim([-3.5, 3.5])\nsavefig(\"../src/figures/plot_cbanana.svg\")\n\nnothing","category":"page"},{"location":"examples/","page":"Examples","title":"Examples","text":"(Image: plot_cbanana.svg)","category":"page"},{"location":"examples/#Literature-applications","page":"Examples","title":"Literature applications","text":"","category":"section"},{"location":"examples/","page":"Examples","title":"Examples","text":"The following examples show the implementation of applications from the linked papers with [InvertibleNetworks.jl]:","category":"page"},{"location":"examples/","page":"Examples","title":"Examples","text":"Invertible recurrent inference machines (Putzky and Welling, 2019) (generic example)\nGenerative models with maximum likelihood via the change of variable formula (example)\nGlow: Generative flow with invertible 1x1 convolutions (Kingma and Dhariwal, 2018) (generic example, source)","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"MIT License","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"Copyright (c) 2020 SLIM group @ Georgia Institute of Technology","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.","category":"page"},{"location":"LICENSE/","page":"LICENSE","title":"LICENSE","text":"THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.","category":"page"},{"location":"#InvertibleNetworks.jl-documentation","page":"Home","title":"InvertibleNetworks.jl documentation","text":"","category":"section"},{"location":"#About","page":"Home","title":"About","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"InvertibleNetworks.jl is a package of invertible layers and networks for machine learning. The invertibility makes it possible to backpropagate through the layers and networks without storing the forward state; instead, the state is recomputed on the fly by propagating through the inverse. 
This package is the first of its kind in Julia with memory efficient invertible layers, networks and activation functions for machine learning.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package is registered in the Julia general registry and can be installed in the REPL package manager (]):","category":"page"},{"location":"","page":"Home","title":"Home","text":"] add InvertibleNetworks","category":"page"},{"location":"#Authors","page":"Home","title":"Authors","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package is developed and maintained by Felix J. Herrmann's SlimGroup at Georgia Institute of Technology. The main contributors of this package are:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Rafael Orozco, Georgia Institute of Technology (rorozco@gatech.edu)\nPhilipp Witte, Microsoft Corporation (pwitte@microsoft.com)\nGabrio Rizzuti, Utrecht University (g.rizzuti@umcutrecht.nl)\nMathias Louboutin, Georgia Institute of Technology (mlouboutin3@gatech.edu)\nAli Siahkoohi, Georgia Institute of Technology (alisk@gatech.edu)","category":"page"},{"location":"#References","page":"Home","title":"References","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Yann Dauphin, Angela Fan, Michael Auli and David Grangier, \"Language modeling with gated convolutional networks\", Proceedings of the 34th International Conference on Machine Learning, 2017. ArXiv\nLaurent Dinh, Jascha Sohl-Dickstein and Samy Bengio, \"Density estimation using Real NVP\", International Conference on Learning Representations, 2017, ArXiv\nDiederik P. Kingma and Prafulla Dhariwal, \"Glow: Generative Flow with Invertible 1x1 Convolutions\", Conference on Neural Information Processing Systems, 2018. ArXiv\nKeegan Lensink, Eldad Haber and Bas Peters, \"Fully Hyperbolic Convolutional Neural Networks\", arXiv Computer Vision and Pattern Recognition, 2019. ArXiv\nPatrick Putzky and Max Welling, \"Invert to learn to invert\", Advances in Neural Information Processing Systems, 2019. ArXiv\nJakob Kruse, Gianluca Detommaso, Robert Scheichl and Ullrich Köthe, \"HINT: Hierarchical Invertible Neural Transport for Density Estimation and Bayesian Inference\", arXiv Statistics and Machine Learning, 2020. 
ArXiv","category":"page"},{"location":"#Related-work-and-publications","page":"Home","title":"Related work and publications","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The following publications use [InvertibleNetworks.jl]:","category":"page"},{"location":"","page":"Home","title":"Home","text":"“Preconditioned training of normalizing flows for variational inference in inverse problems”\npaper: https://arxiv.org/abs/2101.03709\npresentation\ncode: FastApproximateInference.jl\n\"Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization\"\npaper: https://arxiv.org/abs/2004.07871\npresentation\ncode: https://github.com/slimgroup/Software.SEG2020\n\"Generalized Minkowski sets for the regularization of inverse problems\"\npaper: http://arxiv.org/abs/1903.03942\ncode: SetIntersectionProjection.jl","category":"page"},{"location":"#Acknowledgments","page":"Home","title":"Acknowledgments","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This package uses functions from NNlib.jl, Flux.jl and Wavelets.jl","category":"page"}] }