6 | 6 | <meta http-equiv="X-UA-Compatible" content="IE=edge" />
7 | 7 | <title>10.1 Learned Features | Interpretable Machine Learning</title>
8 | 8 | <meta name="description" content="Machine learning algorithms usually operate as black boxes and it is unclear how they derived a certain decision. This book is a guide for practitioners to make machine learning decisions interpretable." />
9 |   | - <meta name="generator" content="bookdown 0.35 and GitBook 2.6.7" />
  | 9 | + <meta name="generator" content="bookdown 0.39 and GitBook 2.6.7" />
10 | 10 |
11 | 11 | <meta property="og:title" content="10.1 Learned Features | Interpretable Machine Learning" />
12 | 12 | <meta property="og:type" content="book" />
23 | 23 | <meta name="author" content="Christoph Molnar" />
24 | 24 |
25 | 25 |
26 |    | -<meta name="date" content="2023-08-21" />
   | 26 | +<meta name="date" content="2024-05-22" />
27 | 27 |
28 | 28 | <meta name="viewport" content="width=device-width, initial-scale=1" />
29 | 29 | <meta name="apple-mobile-web-app-capable" content="yes" />
@@ -493,7 +493,7 @@ <h2><span class="header-section-number">10.1</span> Learned Features<a href="cnn
493 | 493 | First, the image goes through many convolutional layers.
494 | 494 | In those convolutional layers, the network learns new and increasingly complex features in its layers.
495 | 495 | Then the transformed image information goes through the fully connected layers and turns into a classification or prediction.</p>
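A minimal PyTorch sketch of the pipeline described above (convolutional layers transform the image, then a fully connected layer turns the result into a prediction); the layer widths and the 224x224 input size are arbitrary stand-ins, not Inception V1:

```python
# Toy example only: layer sizes are made up, not the book's or Inception V1's.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Convolutional layers: each block can pick up more complex features than the last.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layer: turns the transformed image information into class scores.
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                  # (1, 32, 56, 56) for a 224x224 input
        return self.classifier(h.flatten(1))  # (1, n_classes)

scores = TinyCNN()(torch.randn(1, 3, 224, 224))  # one fake RGB image
print(scores.shape)                              # torch.Size([1, 10])
```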
496 |     | -<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-51"></span>
    | 496 | +<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-49"></span>
497 | 497 | <img src="images/cnn-features.png" alt="Features learned by a convolutional neural network (Inception V1) trained on the ImageNet data. The features range from simple features in the lower convolutional layers (left) to more abstract features in the higher convolutional layers (right). Figure from Olah, et al. (2017, CC-BY 4.0) https://distill.pub/2017/feature-visualization/appendix/." width="\textwidth" />
498 | 498 | <p class="caption">
499 | 499 | FIGURE 10.1: Features learned by a convolutional neural network (Inception V1) trained on the ImageNet data. The features range from simple features in the lower convolutional layers (left) to more abstract features in the higher convolutional layers (right). Figure from Olah, et al. (2017, CC-BY 4.0) <a href="https://distill.pub/2017/feature-visualization/appendix/" class="uri">https://distill.pub/2017/feature-visualization/appendix/</a>.
@@ -633,7 +633,7 @@ <h4><span class="header-section-number">10.1.2.1</span> Network Dissection Algor
633 | 633 | <li>Quantify the alignment of activations and labeled concepts.</li>
634 | 634 | </ol>
635 | 635 | <p>The following figure visualizes how an image is forwarded to a channel and matched with the labeled concepts.</p>
636 |     | -<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-52"></span>
    | 636 | +<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-50"></span>
637 | 637 | <img src="images/dissection-network.png" alt="For a given input image and a trained network (fixed weights), we propagate the image forward to the target layer, upscale the activations to match the original image size and compare the maximum activations with the ground truth pixel-wise segmentation. Figure originally from http://netdissect.csail.mit.edu/." width="\textwidth" />
638 | 638 | <p class="caption">
639 | 639 | FIGURE 10.5: For a given input image and a trained network (fixed weights), we propagate the image forward to the target layer, upscale the activations to match the original image size and compare the maximum activations with the ground truth pixel-wise segmentation. Figure originally from <a href="http://netdissect.csail.mit.edu/" class="uri">http://netdissect.csail.mit.edu/</a>.
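The forward-and-upscale step in Figure 10.5 can be sketched as follows, assuming a recent PyTorch/torchvision. The network is torchvision's GoogLeNet (Inception V1) with weights left untrained so the snippet runs offline, and the layer and channel (inception4e, 750) are borrowed from Figure 10.7 purely for illustration; in practice you would load ImageNet weights and loop over all channels:

```python
# Sketch of: forward an image to a target layer, grab one channel's activation map,
# and upscale it to the input resolution so it can be compared with a segmentation mask.
import torch
import torch.nn.functional as F
from torchvision.models import googlenet

model = googlenet(weights=None).eval()   # untrained stand-in for a trained Inception V1

acts = {}
handle = model.inception4e.register_forward_hook(
    lambda module, inputs, output: acts.update(target=output.detach())
)

image = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed input image
with torch.no_grad():
    model(image)                         # forward pass fills acts["target"]
handle.remove()

channel = acts["target"][:, 750:751]     # one channel's activation map, shape (1, 1, 14, 14)
upscaled = F.interpolate(channel, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
print(upscaled.shape)                    # torch.Size([1, 1, 224, 224]), ready to compare
                                         # against a pixel-wise concept segmentation
```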
@@ -679,14 +679,14 @@ <h4><span class="header-section-number">10.1.2.1</span> Network Dissection Algor
679 | 679 | We call unit k a detector of concept c when <span class="math inline">\(IoU_{k,c}>0.04\)</span>.
680 | 680 | This threshold was chosen by Bau & Zhou et al (2017).
681 | 681 | <p>The following figure illustrates intersection and union of activation mask and concept mask for a single image:</p>
682 |     | -<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-53"></span>
    | 682 | +<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-51"></span>
683 | 683 | <img src="images/dissection-dog-exemplary.jpg" alt="The Intersection over Union (IoU) is computed by comparing the human ground truth annotation and the top activated pixels." width="\textwidth" />
684 | 684 | <p class="caption">
685 | 685 | FIGURE 10.6: The Intersection over Union (IoU) is computed by comparing the human ground truth annotation and the top activated pixels.
686 | 686 | </p>
687 | 687 | </div>
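A short NumPy sketch of that comparison: binarize the upscaled activation map at a cutoff, compute the IoU against the annotated concept mask, and apply the IoU > 0.04 rule. The per-image quantile cutoff below is an assumption for illustration; the original Network Dissection work derives each unit's activation threshold from activations over the whole dataset:

```python
# Only the IoU definition and the 0.04 detector rule come from the text above;
# the cutoff choice and the stand-in arrays are illustrative assumptions.
import numpy as np

def iou(activation_map: np.ndarray, concept_mask: np.ndarray, cutoff: float) -> float:
    """IoU between the top activated pixels and a binary concept mask."""
    top_pixels = activation_map > cutoff
    intersection = np.logical_and(top_pixels, concept_mask).sum()
    union = np.logical_or(top_pixels, concept_mask).sum()
    return float(intersection / union) if union > 0 else 0.0

rng = np.random.default_rng(0)
act = rng.random((224, 224))                # stand-in for an upscaled activation map
concept = np.zeros((224, 224), dtype=bool)  # stand-in ground-truth segmentation
concept[60:160, 60:160] = True
cutoff = np.quantile(act, 0.995)            # keep roughly the top 0.5% of activations

score = iou(act, concept, cutoff)
print(score, "detector" if score > 0.04 else "no detector")  # the IoU > 0.04 rule
```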
688 | 688 | <p>The following figure shows a unit that detects dogs:</p>
689 |     | -<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-54"></span>
    | 689 | +<div class="figure" style="text-align: center"><span style="display:block;" id="fig:unnamed-chunk-52"></span>
690 | 690 | <img src="images/dissection-dogs.jpeg" alt="Activation mask for inception\_4e channel 750 which detects dogs with $IoU=0.203$. Figure originally from http://netdissect.csail.mit.edu/" width="\textwidth" />
691 | 691 | <p class="caption">
692 | 692 | FIGURE 10.7: Activation mask for inception_4e channel 750 which detects dogs with <span class="math inline">\(IoU=0.203\)</span>. Figure originally from <a href="http://netdissect.csail.mit.edu/" class="uri">http://netdissect.csail.mit.edu/</a>