Deterministic Autoencoder
A deterministic autoencoder is a neural network that learns to embed high-dimensional data into a lower-dimensional space in a one-to-one fashion. The AEs module provides the necessary tools to train these networks. The main type is the AE struct, a simple feedforward neural network composed of two parts: an Encoder and a Decoder.
Autoencoder struct AE
AutoEncoderToolkit.AEs.AE
— Type

struct AE{E<:AbstractDeterministicEncoder, D<:AbstractDeterministicDecoder}

Autoencoder (AE) model defined for Flux.jl.

Fields
- encoder::E: Neural network that encodes the input into the latent space. E is a subtype of AbstractDeterministicEncoder.
- decoder::D: Neural network that decodes the latent representation back to the original input space. D is a subtype of AbstractDeterministicDecoder.
An AE consists of an encoder and decoder network with a bottleneck latent space in between. The encoder compresses the input into a low-dimensional representation. The decoder tries to reconstruct the original input from the point in the latent space.
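Conceptually, the struct is just an encoder and a decoder composed back to back. A minimal, dependency-free sketch of this composition (ToyAE, W_enc, and W_dec are illustrative names, not part of the package):

```julia
# Illustrative only: the real AE wraps Flux networks behind the
# AbstractDeterministicEncoder/Decoder interfaces.
struct ToyAE{E,D}
    encoder::E
    decoder::D
end

# Forward pass: compress into the latent space, then reconstruct.
(ae::ToyAE)(x) = ae.decoder(ae.encoder(x))

# A linear "encoder" from 4 features to a 2D latent space, and back.
W_enc = [1.0 0.0 0.0 0.0; 0.0 1.0 0.0 0.0]
W_dec = permutedims(W_enc)
ae = ToyAE(x -> W_enc * x, z -> W_dec * z)

x̂ = ae([1.0, 2.0, 3.0, 4.0])  # reconstructs the first two coordinates, zeros elsewhere
```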
Forward pass
AutoEncoderToolkit.AEs.AE
— Method

(ae::AE{Encoder, Decoder})(x::AbstractArray; latent::Bool=false)

Processes the input data x through the autoencoder (AE), which consists of an encoder and a decoder.

Arguments
- x::AbstractVecOrMat{Float32}: The data to be encoded and reconstructed. This can be a vector or a matrix where each column represents a separate sample.

Optional Keyword Arguments
- latent::Bool: If set to true, returns the latent representation alongside the reconstructed data. Defaults to false.

Returns
- If latent=false: A NamedTuple with key :decoder that contains the reconstructed data after processing through the encoder and decoder.
- If latent=true: A NamedTuple with keys :encoder and :decoder, containing the corresponding values.

Description
The function first encodes the input x using the encoder to obtain the encoded representation in the latent space. This latent representation is then decoded using the decoder to produce the reconstructed data. If latent is set to true, the latent representation is also returned.

Note
Ensure the input data x matches the expected input dimensionality for the encoder in the AE.
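The two return shapes can be sketched without the package; encoder, decoder, and forward below are hypothetical stand-ins that only mimic the return convention described above:

```julia
# Toy stand-ins for a trained encoder/decoder pair (illustrative only).
encoder(x) = x[1:2]                 # keep the first two coordinates as the latent code
decoder(z) = vcat(z, 0.0, 0.0)      # pad back to the original dimensionality

# Mimics the AE forward pass's NamedTuple return convention.
function forward(x; latent::Bool=false)
    z = encoder(x)
    x̂ = decoder(z)
    return latent ? (encoder = z, decoder = x̂) : (decoder = x̂,)
end

out = forward([1.0, 2.0, 3.0, 4.0])                 # out.decoder == [1.0, 2.0, 0.0, 0.0]
res = forward([1.0, 2.0, 3.0, 4.0]; latent = true)  # res.encoder == [1.0, 2.0]
```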
Loss function
MSE loss
AutoEncoderToolkit.AEs.mse_loss
— Function

mse_loss(ae::AE,
x::AbstractArray;
regularization::Union{Function, Nothing}=nothing,
         reg_strength::Float32=1.0f0)

mse_loss(ae::AE,
         x_in::AbstractArray,
         x_out::AbstractArray;
         regularization::Union{Function, Nothing}=nothing,
         reg_strength::Float32=1.0f0)
Calculate the mean squared error (MSE) loss for an autoencoder (AE) using separate input and target output vectors.

The AE loss is computed as:

loss = MSE(x_out, x̂) + reg_strength × reg_term

Where:
- x_out is the target output vector.
- x̂ is the reconstructed output from the AE given x_in as input.
- reg_strength × reg_term is an optional regularization term.
Arguments
- ae::AE: An AE model.
- x_in::AbstractArray: Input vector to the AE encoder.
- x_out::AbstractArray: Target output vector to compute the reconstruction error.

Optional Keyword Arguments
- reg_function::Union{Function, Nothing}=nothing: A function that computes the regularization term based on the AE outputs. Should return a Float32. This function must take as input the AE outputs and the keyword arguments provided in reg_kwargs.
- reg_kwargs::NamedTuple=NamedTuple(): Keyword arguments to pass to the regularization function.
- reg_strength::Number=1.0f0: The strength of the regularization term.
Returns
- The computed loss value between the target x_out and its reconstructed counterpart from x_in, including possible regularization terms.

Note
Ensure that the input data x_in matches the expected input dimensionality for the encoder in the AE.
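The loss formula above can be sketched in a few lines of dependency-free Julia; toy_mse_loss is an illustrative stand-in (the real mse_loss additionally produces x̂ by running the AE forward pass):

```julia
# Mean squared error between a target and its reconstruction.
mse(x_out, x̂) = sum(abs2, x_out .- x̂) / length(x_out)

# loss = MSE(x_out, x̂) + reg_strength × reg_term
function toy_mse_loss(x_out, x̂; regularization=nothing, reg_strength=1.0f0)
    reg_term = regularization === nothing ? 0.0f0 : regularization(x̂)
    return mse(x_out, x̂) + reg_strength * reg_term
end

x_out = Float32[1, 2, 3]
x̂ = Float32[1.1, 1.9, 3.0]
toy_mse_loss(x_out, x̂)  # plain reconstruction error
toy_mse_loss(x_out, x̂; regularization = v -> sum(abs2, v), reg_strength = 0.01f0)
```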
Training
AutoEncoderToolkit.AEs.train!
— Function

train!(ae, x, opt; loss_function, loss_kwargs...)

Customized training function to update parameters of an autoencoder given a specified loss function.

Arguments
- ae::AE: A struct containing the elements of an autoencoder.
- x::AbstractArray: Input data on which the autoencoder will be trained.
- opt::NamedTuple: State of the optimizer for updating parameters. Typically initialized using Flux.Train.setup.

Optional Keyword Arguments
- loss_function::Function: The loss function used for training. It should accept the autoencoder model and input data x, and return a loss value.
- loss_kwargs::NamedTuple=NamedTuple(): Additional arguments for the loss function.
- verbose::Bool=false: If true, the loss value will be printed during training.
- loss_return::Bool=false: If true, the loss value will be returned after training.

Description
Trains the autoencoder by:
- Computing the gradient of the loss with respect to the autoencoder parameters.
- Updating the autoencoder parameters using the optimizer.
train!(ae, x_in, x_out, opt; loss_function, loss_kwargs...)

Customized training function to update parameters of an autoencoder given a specified loss function.

Arguments
- ae::AE: A struct containing the elements of an autoencoder.
- x_in::AbstractArray: Input data on which the autoencoder will be trained.
- x_out::AbstractArray: Target output data for the autoencoder.
- opt::NamedTuple: State of the optimizer for updating parameters. Typically initialized using Flux.Train.setup.

Optional Keyword Arguments
- loss_function::Function: The loss function used for training. It should accept the autoencoder model, input data x_in, and target data x_out, and return a loss value.
- loss_kwargs::NamedTuple=NamedTuple(): Additional arguments for the loss function.
- verbose::Bool=false: If true, the loss value will be printed during training.
- loss_return::Bool=false: If true, the loss value will be returned after training.

Description
Trains the autoencoder by:
- Computing the gradient of the loss with respect to the autoencoder parameters.
- Updating the autoencoder parameters using the optimizer.
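The two steps above can be sketched with Flux's explicit-gradient API; toy_train_step! and the optimizer choice are hypothetical, not the package's implementation:

```julia
using Flux  # assumed dependency; the AE model comes from AutoEncoderToolkit.AEs

# Hedged sketch of a single train! step: compute the gradient of the loss
# with respect to the model parameters, then update the model in place.
function toy_train_step!(model, x, opt_state; loss_function)
    loss, grads = Flux.withgradient(m -> loss_function(m, x), model)
    Flux.update!(opt_state, model, grads[1])
    return loss
end

# Hypothetical usage with any Flux-compatible model and loss:
# opt_state = Flux.Train.setup(Flux.Adam(1f-3), ae)
# toy_train_step!(ae, x, opt_state; loss_function = mse_loss)
```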
This document was generated with Documenter.jl version 1.5.0 on Tuesday 23 July 2024. Using Julia version 1.10.4.