Copyright (c) 2020 SLIM group @ Georgia Institute of Technology
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This document was generated with Documenter.jl on Thursday 16 November 2023. Using Julia version 1.9.4.
Returns a cell array of all parameter gradients in the network or layer. Each cell entry contains a reference to the original parameter's gradient; i.e., modifying the gradients in P modifies the gradients in NL.
Returns a cell array of all parameters in the network or layer. Each cell entry contains a reference to the original parameter; i.e., modifying the parameters in P modifies the parameters in NL.
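A minimal sketch of how these accessors might be used; the choice of ActNorm as the layer and the in-place update are illustrative assumptions, not the only supported pattern:

```julia
using InvertibleNetworks

AN = ActNorm(4; logdet=true)              # any network or layer works here
AN.forward(randn(Float32, 8, 8, 4, 2))    # first pass initializes the parameters

P = get_params(AN)                        # array of references to AN's parameters
P[1].data .-= 0.1f0 .* P[1].data          # gradient-descent-style update, in place
# Because P holds references, AN itself now carries the updated parameters.
```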
Perform a 1-level channelwise 2D/3D (lifting) Haar transform of X and squeeze the output of each transform, increasing the number of channels by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.
Input:
X: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize
Output:
if 4D tensor:
Y: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize
or if 5D tensor:
Y: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize
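For instance, on a 4D tensor the shapes work out as follows (a short sketch; the call assumes the Haar_squeeze function exported by InvertibleNetworks.jl):

```julia
using InvertibleNetworks

X = randn(Float32, 16, 16, 2, 4)   # nx=16, ny=16, 2 channels, batch of 4
Y = Haar_squeeze(X)                # spatial dims halved, channels multiplied by 4
@assert size(Y) == (8, 8, 8, 4)
```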
Perform a 1-level inverse 2D/3D Haar transform of Y and unsqueeze the output. This reduces the number of channels by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors and increases each spatial dimension by a factor of 2. Inverse operation of Haar_squeeze.
Input:
Y: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize
Output:
If 4D tensor:
X: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize
If 5D tensor:
X: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize
Reshape the input image such that each spatial dimension is reduced by a factor of 2, while the number of channels is increased by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.
Input:
X: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize
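The reshaping itself can be pictured with plain Julia array operations. This stand-alone sketch moves each 2x2 spatial block into the channel dimension; the ordering of the four sub-grids is an assumption and need not match the package's squeeze exactly:

```julia
# Stand-alone sketch of a 2D squeeze: fold 2x2 spatial blocks into channels.
function squeeze_2d(X::AbstractArray{T,4}) where T
    nx, ny, c, b = size(X)
    @assert iseven(nx) && iseven(ny)
    Y = reshape(X, 2, nx ÷ 2, 2, ny ÷ 2, c, b)   # split each spatial dim in two
    Y = permutedims(Y, (2, 4, 1, 3, 5, 6))       # group the 2x2 factors together
    return reshape(Y, nx ÷ 2, ny ÷ 2, 4c, b)     # fold the factors into channels
end

X = randn(Float32, 4, 4, 3, 2)
Y = squeeze_2d(X)
@assert size(Y) == (2, 2, 12, 2)   # spatial halved, channels quadrupled
```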
Undo the squeezing operation by reshaping the input image such that each spatial dimension is increased by a factor of 2, while the number of channels is decreased by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.
Input:
Y: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize
Perform a 1-level channelwise 2D wavelet transform of X and squeeze the output of each transform, increasing the number of channels by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors.
Input:
X: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize
type: Wavelet filter type. Possible values are WT.haar for Haar wavelets, WT.coif2, WT.coif4, etc. for Coiflet wavelets, or WT.db1, WT.db2, etc. for Daubechies wavelets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.
Output: if 4D tensor:
Y: Reshaped tensor of dimensions nx/2 x ny/2 x n_channel*4 x batchsize
or if 5D tensor:
Y: Reshaped tensor of dimensions nx/2 x ny/2 x nz/2 x n_channel*8 x batchsize
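A sketch of the transform pair on a 4D tensor; the keyword form of the type argument is an assumption, so check the method signature in your installed version:

```julia
using InvertibleNetworks, Wavelets

X  = randn(Float32, 16, 16, 2, 4)
Y  = wavelet_squeeze(X; type=WT.db2)     # spatial dims halved, channels x4
@assert size(Y) == (8, 8, 8, 4)
X_ = wavelet_unsqueeze(Y; type=WT.db2)   # inverts the squeeze
@assert size(X_) == size(X)
```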
Perform a 1-level inverse 2D wavelet transform of Y and unsqueeze the output. This reduces the number of channels by a factor of 4 for 4D tensors or by a factor of 8 for 5D tensors and increases each spatial dimension by a factor of 2. Inverse operation of wavelet_squeeze.
Input:
Y: 4D/5D input tensor of dimensions nx x ny (x nz) x n_channel x batchsize
type: Wavelet filter type. Possible values are WT.haar for Haar wavelets, WT.coif2, WT.coif4, etc. for Coiflet wavelets, or WT.db1, WT.db2, etc. for Daubechies wavelets. See https://github.com/JuliaDSP/Wavelets.jl for a full list.
Output: If 4D tensor:
X: Reshaped tensor of dimensions nx*2 x ny*2 x n_channel/4 x batchsize
If 5D tensor:
X: Reshaped tensor of dimensions nx*2 x ny*2 x nz*2 x n_channel/8 x batchsize
Create activation normalization layer. The parameters are initialized during the first use, such that the output has zero mean and unit variance along channels for the current mini-batch size.
Input:
k: number of channels
logdet: bool to indicate whether to compute the logdet
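A minimal usage sketch, following the forward/inverse convention used throughout these docstrings (the data-dependent initialization on the first forward call is as described above):

```julia
using InvertibleNetworks

AN = ActNorm(8; logdet=true)        # 8 channels
X  = randn(Float32, 16, 16, 8, 4)
Y, lgdt = AN.forward(X)             # parameters initialized on this first call
X_ = AN.inverse(Y)                  # X_ ≈ X up to floating-point error
```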
Create a (non-invertible) conditional residual block, consisting of one dense and three convolutional layers with ReLU activation functions. The dense operator maps the data to the image space and both tensors are concatenated and fed to the subsequent convolutional layers.
Input:
nx1, nx2, nx_in: spatial dimensions and no. of channels of input image
ny1, ny2, ny_in: spatial dimensions and no. of channels of input data
n_hidden: number of hidden channels
k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.
p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)
s1, s2: strides for the first and third convolution (s1) and the second convolution (s2)
kernel, stride, pad: Kernel size, stride and padding of the convolutional operator
action: String that defines whether the layer keeps the number of channels fixed (0), increases it by a factor of 4 (or 8 in 3D) (1), or decreases it by a factor of 4 (or 8) (-1).
W, b: Convolutional weight and bias. W has dimensions of (kernel, kernel, n_in, n_in). b has dimensions of n_in.
α: Step size for second time derivative. Default is 1.
n_hidden: Increase the no. of channels by n_hidden in the forward convolution. After applying the transpose convolution, the dimensions are back to the input dimensions.
ndims: Number of dimension of the input (2 for 2D, 3 for 3D)
Create a (non-invertible) residual block, consisting of three convolutional layers and activation functions. The first convolution is a downsampling operation with a stride equal to the kernel dimension. The last convolution is the corresponding transpose operation and upsamples the data to either its original dimensions or to twice the number of input channels (for fan=true). The first and second layer contain a bias term.
Input:
n_in: number of input channels
n_hidden: number of hidden channels
n_out: number of output channels
activation: activation type between conv layers and final output
k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.
p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)
s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)
fan: bool to indicate whether the output has twice the number of input channels. For fan=false, the last activation function is a gated linear unit (thereby bringing the output back to the original dimensions). For fan=true, the last activation is a ReLU, in which case the output has twice the number of channels as the input.
or
W1, W2, W3: 4D tensors of convolutional weights
b1, b2: bias terms
Output:
RB: residual block layer
Usage:
Forward mode: Y = RB.forward(X)
Backward mode: ΔX = RB.backward(ΔY, X)
Trainable parameters:
Convolutional kernel weights RB.W1, RB.W2 and RB.W3
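A usage sketch following the forward/backward modes listed above; the positional constructor form and keyword names are assumptions based on the argument list:

```julia
using InvertibleNetworks

RB = ResidualBlock(2, 16; fan=true)      # n_in=2, n_hidden=16 (form assumed)
X  = randn(Float32, 32, 32, 2, 4)
Y  = RB.forward(X)                       # fan=true: twice the input channels
ΔX = RB.backward(randn(Float32, size(Y)...), X)   # data-space gradient
```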
Create a conditional invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.
Input:
n_in: number of input channels of variable to sample
n_cond: number of input channels of condition
n_hidden: number of hidden units in residual blocks
L: number of scales (outer loop)
K: number of flow steps per scale (inner loop)
split_scales: if true, perform a squeeze operation (which halves the spatial dimensions and increases the number of channels) before each scale, then split the output in half along the channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.
k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.
p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)
s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)
ndims : number of dimensions
squeeze_type : squeeze type that happens at each multiscale level
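A hedged sketch of constructing and evaluating the conditional network; the positional argument order (n_in, n_cond, n_hidden, L, K) and the returned values are assumptions based on the description above:

```julia
using InvertibleNetworks

G = NetworkConditionalGlow(2, 3, 32, 2, 4)   # n_in, n_cond, n_hidden, L, K (assumed)
X = randn(Float32, 32, 32, 2, 10)            # variable to sample
C = randn(Float32, 32, 32, 3, 10)            # condition
ZX, ZC, lgdt = G.forward(X, C)               # latent variable, transformed condition, logdet
```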
Create an invertible network based on the Glow architecture. Each flow step in the inner loop consists of an activation normalization layer, followed by an invertible coupling layer with 1x1 convolutions and a residual block. The outer loop performs a squeezing operation prior to the inner loop, and a splitting operation afterwards.
Input:
n_in: number of input channels
n_hidden: number of hidden units in residual blocks
L: number of scales (outer loop)
K: number of flow steps per scale (inner loop)
split_scales: if true, perform a squeeze operation (which halves the spatial dimensions and increases the number of channels) before each scale, then split the output in half along the channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.
k1, k2: kernel size of convolutions in residual block. k1 is the kernel of the first and third operator, k2 is the kernel size of the second operator.
p1, p2: padding for the first and third convolution (p1) and the second convolution (p2)
s1, s2: stride for the first and third convolution (s1) and the second convolution (s2)
ndims : number of dimensions
squeeze_type : squeeze type that happens at each multiscale level
logdet : boolean to turn on/off logdet term tracking and gradient calculation
Output:
G: invertible Glow network.
Usage:
Forward mode: Y, logdet = G.forward(X)
Backward mode: ΔX, X = G.backward(ΔY, Y)
Trainable parameters:
None in G itself
Trainable parameters in activation normalizations G.AN[i,j] and coupling layers G.C[i,j], where i and j range from 1 to L and K respectively.
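A sketch of the forward and backward modes listed above on a small input; the constructor's positional order (n_in, n_hidden, L, K) is taken from the argument list and the batch size is arbitrary:

```julia
using InvertibleNetworks

G = NetworkGlow(2, 32, 2, 4)          # n_in=2, n_hidden=32, L=2, K=4
X = randn(Float32, 32, 32, 2, 10)
Y, lgdt = G.forward(X)                # latent representation and logdet
ΔX, X_ = G.backward(randn(Float32, size(Y)...), Y)   # gradients and recovered input
```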
Create an invertible network based on hyperbolic layers. The network architecture is specified by a tuple of the form ((action1, nhidden1), (action2, nhidden2), ... ). Each inner tuple corresponds to an additional layer. The first inner tuple argument specifies whether the respective layer increases the number of channels (set to 1), decreases it (set to -1), or leaves it constant (set to 0). The second argument specifies the number of hidden units for that layer.
Input:
n_in: number of channels of input tensor.
n_hidden: number of hidden units in residual blocks
architecture: Tuple of tuples specifying the network architecture; ((action1, nhidden1), (action2, nhidden2))
k, s, p: Kernel size, stride and padding of convolutional kernels
logdet: Bool to indicate whether to return the logdet
Create an invertible recurrent inference machine (i-RIM) consisting of an unrolled loop for a given number of iterations.
Input:
n_in: number of input channels
n_hidden: number of hidden units in residual blocks
maxiter: number of unrolled loop iterations
Ψ: link function
k1, k2: stencil sizes for convolutions in the residual blocks. The first convolution uses a stencil of size and stride k1, thereby downsampling the input. The second convolution uses a stencil of size k2. The last layer uses a stencil of size and stride k1, but performs the transpose operation of the first convolution, thus upsampling the output to the original input size.
p1, p2: padding for the first and third convolution (p1) and the second convolution (p2) in residual block
s1, s2: stride for the first and third convolution (s1) and the second convolution (s2) in residual block
ndims : number of dimensions
Output:
L: invertible i-RIM network.
Usage:
Forward mode: η_out, s_out = L.forward(η_in, s_in, d, A)
Inverse mode: η_in, s_in = L.inverse(η_out, s_out, d, A)
Create a conditional HINT network for data-driven generative modeling based on the change of variables formula.
Input:
n_in: number of input channels
n_hidden: number of hidden units in residual blocks
L: number of scales (outer loop)
K: number of flow steps per scale (inner loop)
split_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.
k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)
p1, p2: respective padding sizes for residual block layers
s1, s2: respective strides for residual block layers
ndims : number of dimensions
Output:
CH: conditional HINT network
Usage:
Forward mode: Zx, Zy, logdet = CH.forward(X, Y)
Inverse mode: X, Y = CH.inverse(Zx, Zy)
Backward mode: ΔX, X = CH.backward(ΔZx, ΔZy, Zx, Zy)
Trainable parameters:
None in CH itself
Trainable parameters in activation normalizations CH.AN_X[i] and CH.AN_Y[i],
and in coupling layers CH.CL[i], where i ranges from 1 to depth.
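A hedged sketch of the forward and inverse modes listed above; the constructor form (n_in, n_hidden, depth) is an assumption, since the argument list mentions L and K while the usage works on a flat depth:

```julia
using InvertibleNetworks

CH = NetworkConditionalHINT(2, 16, 4)    # n_in=2, n_hidden=16, depth=4 (form assumed)
X  = randn(Float32, 16, 16, 2, 8)
Y  = randn(Float32, 16, 16, 2, 8)
Zx, Zy, lgdt = CH.forward(X, Y)          # joint latent variables and logdet
X_, Y_ = CH.inverse(Zx, Zy)              # recovers (X, Y) up to round-off
```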
Create a multiscale HINT network for data-driven generative modeling based on the change of variables formula.
Input:
n_in: number of input channels
n_hidden: number of hidden units in residual blocks
L: number of scales (outer loop)
K: number of flow steps per scale (inner loop)
split_scales: if true, split output in half along channel dimension after each scale. Feed one half through the next layers, while saving the remaining channels for the output.
k1, k2: kernel size for first and third residual layer (k1) and second layer (k2)
p1, p2: respective padding sizes for residual block layers
s1, s2: respective strides for residual block layers
ndims : number of dimensions
Output:
H: multiscale HINT network
Usage:
Forward mode: Z, logdet = H.forward(X)
Inverse mode: X = H.inverse(Z)
Backward mode: ΔX, X = H.backward(ΔZ, Z)
Trainable parameters:
None in H itself
Trainable parameters in activation normalizations H.AN[i],
and in coupling layers H.CL[i], where i ranges from 1 to depth.