
Commit be267d4
build based on 3ff7d1e
Documenter.jl committed Jul 11, 2024
1 parent 8c07ac8 commit be267d4
Showing 14 changed files with 14 additions and 14 deletions.
2 changes: 1 addition & 1 deletion dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-11T19:46:03","documenter_version":"1.5.0"}}
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-11T20:09:06","documenter_version":"1.5.0"}}
2 changes: 1 addition & 1 deletion dev/ae/index.html
@@ -7,4 +7,4 @@
x_in::AbstractArray,
x_out::AbstractArray;
regularization::Union{Function, Nothing}=nothing,
reg_strength::Float32=1.0f0)

Calculate the mean squared error (MSE) loss for an autoencoder (AE) using separate input and target output vectors.

The AE loss is computed as

    loss = MSE(x_out, x̂) + reg_strength × reg_term

where:

- x_out is the target output vector.
- x̂ is the reconstructed output of the AE given x_in as input.
- reg_strength × reg_term is an optional regularization term.

**Arguments**

- `ae::AE`: An AE model.
- `x_in::AbstractArray`: Input vector to the AE encoder.
- `x_out::AbstractArray`: Target output vector used to compute the reconstruction error.

**Optional Keyword Arguments**

- `reg_function::Union{Function, Nothing}=nothing`: A function that computes the regularization term from the AE outputs. It must accept the AE outputs together with the keyword arguments provided in `reg_kwargs`, and it should return a `Float32`.
- `reg_kwargs::Union{NamedTuple,Dict}=Dict()`: Keyword arguments to pass to the regularization function.
- `reg_strength::Number=1.0f0`: The strength of the regularization term.

**Returns**

- The loss value between the target `x_out` and its reconstruction from `x_in`, including any regularization terms.

**Note**

Ensure that the input data `x_in` matches the expected input dimensionality of the AE encoder.
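A minimal usage sketch follows. The loss binding's name sits above this hunk's fold, so `mse_loss` is an assumption; the `Encoder`/`Decoder` wrappers and the `AE(encoder, decoder)` constructor are likewise assumptions about the package API, and `l2_reg` is a hypothetical regularizer. Note that the visible signature spells the keyword `regularization`, while the argument list above describes it as `reg_function`; the sketch follows the visible signature.

```julia
using Flux
using AutoEncoderToolkit
using AutoEncoderToolkit: AEs

# Toy AE: 10-dimensional data, 2-dimensional latent space. The Encoder /
# Decoder wrappers and the AE(encoder, decoder) constructor are assumed
# package API, not confirmed by this hunk.
ae = AEs.AE(
    AutoEncoderToolkit.Encoder(Flux.Chain(Flux.Dense(10 => 2))),
    AutoEncoderToolkit.Decoder(Flux.Chain(Flux.Dense(2 => 10))),
)

x_in  = randn(Float32, 10, 32)  # 10 features × batch of 32
x_out = x_in                    # plain reconstruction target

# Hypothetical regularizer: an L2 penalty on the AE outputs, returning a
# Float32 as the docstring requires.
l2_reg(outputs; λ::Float32=0.01f0) = λ * sum(abs2, outputs)

# `mse_loss` is an assumed binding; the keyword name follows the signature
# shown in the hunk.
loss = AEs.mse_loss(ae, x_in, x_out;
                    regularization=l2_reg, reg_strength=0.5f0)
```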
Training

AutoEncoderToolkit.AEs.train! — Function

```julia
train!(ae, x, opt; loss_function, loss_kwargs...)
```

Customized training function to update the parameters of an autoencoder given a specified loss function.

**Arguments**

- `ae::AE`: A struct containing the elements of an autoencoder.
- `x::AbstractArray`: Input data on which the autoencoder will be trained.
- `opt::NamedTuple`: State of the optimizer for updating parameters. Typically initialized using `Flux.Train.setup`.

**Optional Keyword Arguments**

- `loss_function::Function`: The loss function used for training. It should accept the autoencoder model and the input data `x`, and return a loss value.
- `loss_kwargs::Union{NamedTuple,Dict} = Dict()`: Additional arguments for the loss function.
- `verbose::Bool=false`: If true, the loss value is printed during training.
- `loss_return::Bool=false`: If true, the loss value is returned after training.

**Description**

Trains the autoencoder by:

1. Computing the gradient of the loss with respect to the autoencoder parameters.
2. Updating the autoencoder parameters using the optimizer.

```julia
train!(ae, x_in, x_out, opt; loss_function, loss_kwargs...)
```

Customized training function to update the parameters of an autoencoder given a specified loss function, using separate input and target output data.

**Arguments**

- `ae::AE`: A struct containing the elements of an autoencoder.
- `x_in::AbstractArray`: Input data on which the autoencoder will be trained.
- `x_out::AbstractArray`: Target output data for the autoencoder.
- `opt::NamedTuple`: State of the optimizer for updating parameters. Typically initialized using `Flux.Train.setup`.

**Optional Keyword Arguments**

- `loss_function::Function`: The loss function used for training. It should accept the autoencoder model, the input data `x_in`, and the target data `x_out`, and return a loss value.
- `loss_kwargs::Union{NamedTuple,Dict} = Dict()`: Additional arguments for the loss function.
- `verbose::Bool=false`: If true, the loss value is printed during training.
- `loss_return::Bool=false`: If true, the loss value is returned after training.

**Description**

Trains the autoencoder by:

1. Computing the gradient of the loss with respect to the autoencoder parameters.
2. Updating the autoencoder parameters using the optimizer.
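To tie the two together, here is a minimal training-loop sketch under the same assumptions as the loss example above (it reuses `ae`, `x_in`, `x_out`, `l2_reg`, and the assumed `mse_loss` binding); `Flux.Train.setup` and the `train!` keywords come directly from the docstring.

```julia
using Flux
using AutoEncoderToolkit: AEs

# Optimizer state, initialized as the docstring suggests.
opt_state = Flux.Train.setup(Flux.Adam(1f-3), ae)

# Each call computes the gradient of the loss w.r.t. the AE parameters and
# applies one optimizer update, per the Description above.
for epoch in 1:50
    AEs.train!(ae, x_in, x_out, opt_state;
               loss_function=AEs.mse_loss,
               loss_kwargs=(regularization=l2_reg, reg_strength=0.5f0),
               verbose=(epoch % 10 == 0))
end
```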
The only change in this page is the regenerated colophon:

-This document was generated with Documenter.jl version 1.5.0 on Thursday 11 July 2024 19:46. Using Julia version 1.10.4.
+This document was generated with Documenter.jl version 1.5.0 on Thursday 11 July 2024 20:09. Using Julia version 1.10.4.