
Commit

Deployed 3f3af57 with MkDocs version: 1.3.1
mrava87 committed Mar 27, 2024
1 parent b27c8be commit dc7348e
Showing 3 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions lectures/13_dimred/index.html
@@ -880,7 +880,7 @@ <h2 id="principal-component-analysis-pca">Principal Component Analysis (PCA)</h2
\begin{aligned}
||\mathbf{x}-d(\mathbf{c})||_2^2 &amp;= (\mathbf{x}-d(\mathbf{c}))^T (\mathbf{x}-d(\mathbf{c})) \\
&amp;= \mathbf{x}^T \mathbf{x} - \mathbf{x}^Td(\mathbf{c}) - d(\mathbf{c})^T \mathbf{x} + d(\mathbf{c})^T d(\mathbf{c})\\
&amp;= \mathbf{x}^T \mathbf{x} - 2 \mathbf{x}^Td(\mathbf{c}) + d(\mathbf{c})^T d(\mathbf{c})^T\\
&amp;= \mathbf{x}^T \mathbf{x} - 2 \mathbf{x}^Td(\mathbf{c}) + d(\mathbf{c})^T d(\mathbf{c})\\
\end{aligned}
\]</div>
<p>where we can ignore the first term, since it does not depend on <span class="arithmatex">\(\mathbf{c}\)</span>. At this point, let's consider the special
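The algebra above can be checked numerically. The following is a minimal sketch (the toy data, the choice of `k`, and all variable names are our own assumptions, not part of the lecture) that builds the PCA codes and reconstructions via the SVD and verifies that the reconstruction error equals the energy in the discarded singular values:

```python
import numpy as np

# Toy data: 100 samples with 5 features, centered before PCA
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Xc = X - X.mean(axis=0)

# PCA via SVD: the principal directions are the right singular vectors
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
D = Vt[:k].T          # (5, k) matrix with orthonormal columns
C = Xc @ D            # codes c = D^T x for every sample
Xhat = C @ D.T        # reconstructions d(c) = D c

# The total reconstruction error ||x - d(c)||_2^2, summed over samples,
# equals the energy in the discarded singular values
err = np.linalg.norm(Xc - Xhat) ** 2
```

Because the columns of `D` are orthonormal, the cross term in the expansion above collapses exactly as in the derivation, which is why the residual energy reduces to the sum of the squared discarded singular values.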
@@ -987,15 +987,15 @@ <h3 id="sparse-coding-or-dictionary-learning">Sparse Coding (or Dictionary Learn
<p>which mathematically can be written as:</p>
<p>$$
\begin{aligned}
\hat{\mathbf{W}}, \hat{\mathbf{h}} &amp;= \underset{\mathbf{W}, \mathbf{h}} {\mathrm{argmax}} p(\mathbf{h}|\mathbf{x})
&amp;= \underset{\mathbf{W}, \mathbf{h}} {\mathrm{argmin}} \beta ||\mathbf{x}-\mathbf{W}\mathbf{h}||_2^2 +\lambda ||\mathbf{h}||_1
\hat{\mathbf{W}}, \hat{\mathbf{c}} &amp;= \underset{\mathbf{W}, \mathbf{c}} {\mathrm{argmax}} p(\mathbf{c}|\mathbf{x})
&amp;= \underset{\mathbf{W}, \mathbf{c}} {\mathrm{argmin}} \beta ||\mathbf{x}-\mathbf{W}\mathbf{h}||_2^2 +\lambda ||\mathbf{h}||_1
\end{aligned}
$$
where <span class="arithmatex">\(\beta\)</span>, <span class="arithmatex">\(\lambda\)</span> are directly related to the parameters of the posterior distribution that we wish to maximize. This
functional can be minimized in an alternating fashion, first for <span class="arithmatex">\(\mathbf{W}\)</span>, then for <span class="arithmatex">\(\mathbf{c}\)</span>, and so on and so forth.</p>
<p>Finally, once the training process is over and <span class="arithmatex">\(\hat{\mathbf{W}}\)</span> is available, it is worth noting that sparse coding does require
solving a sparsity-promoting inverse problem for any new sample <span class="arithmatex">\(\mathbf{x}\)</span> in order to find its best
representation <span class="arithmatex">\(\hat{\mathbf{h}}\)</span>. Nevertheless, despite the higher cost compared to for example PCA, sparse coding has shown
representation <span class="arithmatex">\(\hat{\mathbf{c}}\)</span>. Nevertheless, despite the higher cost compared to, for example, PCA, sparse coding has shown
great promise in both data compression and representation learning, the latter when coupled with down-the-line supervised tasks.</p>
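Once <span class="arithmatex">\(\hat{\mathbf{W}}\)</span> is fixed, finding the representation of a new sample is exactly the sparsity-promoting inverse problem above. A minimal sketch of this inference step using ISTA (iterative soft-thresholding; the function names, step-size choice, and iteration count are our own assumptions, not the lecture's):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(x, W, lam, beta=1.0, n_iter=500):
    """Solve argmin_c beta*||x - W c||_2^2 + lam*||c||_1 by ISTA."""
    # Step size 1/L, with L the Lipschitz constant of the data-fidelity gradient
    L = 2.0 * beta * np.linalg.norm(W, 2) ** 2
    c = np.zeros(W.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * beta * W.T @ (W @ c - x)   # gradient of the L2 term
        c = soft_threshold(c - grad / L, lam / L)
    return c
```

Each iteration takes one gradient step on the quadratic data-fidelity term and then applies the proximal operator of the <span class="arithmatex">\(\ell_1\)</span> penalty, which is what drives most entries of the code to exactly zero.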
<h2 id="autoencoders">Autoencoders</h2>
<p>Finally, we turn our attention to nonlinear dimensionality reduction models. We should know by now that nonlinear mappings (like
2 changes: 1 addition & 1 deletion search/search_index.json

Large diffs are not rendered by default.

Binary file modified sitemap.xml.gz
Binary file not shown.
