Commit

deploy: 99e9773
frauzufall committed Sep 25, 2024
1 parent d324435 commit 539e8fa
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion 3d-data/index.html
@@ -22,7 +22,7 @@
look at Python-based volumetric rendering in the tutorial linked below. We will cover Napari, Pygfx, and VTK. It
is worth mentioning that the last two tools are also great resources for rendering mesh-based datasets.</aside><div class=tutorial><a class=tutorial-link href=https://ida-mdc.github.io/workshop-visualization/tutorial-volume-rendering-python/><div><div class=tutorial-header><i>Separate tutorial - click this box!</i></div><div class=title><div id=qr-tutorial-volume-rendering-python class=qr-code></div><script>jQuery("#qr-tutorial-volume-rendering-python").qrcode({text:"https://ida-mdc.github.io/workshop-visualization/tutorial-volume-rendering-python/"})</script><h3>Volumetric Dataset Rendering in Python</h3></div><div class=description>A tutorial on Python-based tools for visualizing volumetric datasets in 3D, including napari, Pygfx, and VTK.</div></div><span class=cover-image style=background-image:url(/workshop-visualization/img/napari.png)></span></a></div></section><section id=section-10 class=repeated-heading><h2 id=visualizing-volumetric-datasets-5>Visualizing volumetric datasets</h2><h3 id=web-based-rendering-with-neuroglancer>Web-based rendering with Neuroglancer</h3><aside class=notes>A web-based 3D viewer allows for interactive visualization directly in the browser without needing specialized software. These viewers can be embedded into web pages or shared with collaborators.
The following tutorial does not provide a full overview of existing web-based viewers, but offers insight into a
project we are working on at MDC where we utilize Neuroglancer to display large-scale mouse brains online.</aside><ul><li><strong>Collaboration-friendly</strong>: Share URLs with collaborators to provide access to the 3D visualization.</li></ul><div class=tutorial><a class=tutorial-link href=https://ida-mdc.github.io/workshop-visualization/tutorial-volume-rendering-neuroglancer/><div><div class=tutorial-header><i>Separate tutorial - click this box!</i></div><div class=title><div id=qr-tutorial-volume-rendering-neuroglancer class=qr-code></div><script>jQuery("#qr-tutorial-volume-rendering-neuroglancer").qrcode({text:"https://ida-mdc.github.io/workshop-visualization/tutorial-volume-rendering-neuroglancer/"})</script><h3>Volumetric data rendering with Neuroglancer</h3></div><div class=description>Use case description of how to render voxel-based volumetric data using Neuroglancer and stream data locally or remotely for visualization.</div></div><span class=cover-image style=background-image:url(/workshop-visualization/img/neuroglancer.png)></span></a></div></section><section id=section-11><h2 id=converting-volumetric-datasets-into-meshes>Converting volumetric datasets into meshes</h2><h3 id=annotations>Annotations</h3><aside class=notes><p>Annotations can be used to add specific information to volumetric datasets, such as marking points of interest (e.g., cell locations, regions of interest) or segmenting areas of the data. Converting these annotated datasets into meshes allows for the visual representation of those specific features.</p><p>When working with <strong>unannotated</strong> volumetric datasets, you can explore the data interactively using <strong>transfer functions</strong>. Transfer functions map intensity values in the dataset to colors and opacities, allowing you to visualize different regions of the volume without defining hard boundaries.
This technique is often used for soft, exploratory visualizations of the internal structures of the data.</p></aside><ul><li><strong>Transfer functions</strong>: Used for visualizing unannotated datasets, adjusting colors and opacities based on intensity values.</li></ul><aside class=notes>When converting volumetric data to <strong>meshes</strong>, it&rsquo;s necessary to draw concrete borders between the <strong>foreground</strong> (the object of interest) and the <strong>background</strong>. This is achieved through:</aside><ul><li><strong>Fixed thresholds</strong>: Used to generate meshes by separating foreground from background using a set intensity threshold.</li><li><strong>Content-based annotations</strong>: Create precise meshes by using annotated regions to define boundaries.</li></ul><div style=flex:1></div><div class=citations><ul><li><a href=https://rupress.org/jcb/article/220/2/e202010039/211599/3D-FIB-SEM-reconstruction-of-microtubule-organelle>© Müller et al. https://doi.org/10.1083/jcb.202010039</a></li></ul></div></section><section id=section-12 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-1>Converting volumetric datasets into meshes</h2><figure><img src=https://ida-mdc.github.io/workshop-visualization/img/annotation-conversion.jpg></figure></section><section id=section-13 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-2>Converting volumetric datasets into meshes</h2><h3 id=marching-cubes>Marching Cubes</h3><aside class=notes>The <strong>Marching Cubes algorithm</strong> is one of the most popular methods for extracting a 3D surface from volumetric data. It identifies the points in a voxel grid where the dataset crosses a specific threshold value (the <strong>isosurface</strong>) and uses those points to generate a mesh.</aside><figure><img src=https://ida-mdc.github.io/workshop-visualization/img/MarchingCubesEdit.svg alt="Marching cubes algorithm. 
Credit: Ryoshoru, Jmtrivial on Wikimedia, CC BY-SA 4.0" height=700px><figcaption><p>Marching cubes algorithm. Credit: <a href=https://commons.wikimedia.org/wiki/File:MarchingCubesEdit.svg>Ryoshoru, Jmtrivial on Wikimedia</a>, CC BY-SA 4.0</p></figcaption></figure></section><section id=section-14 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-3>Converting volumetric datasets into meshes</h2><h3 id=optimization>Optimization</h3><ul><li><strong>Binary masks</strong> vs. <strong>Probability maps</strong></li></ul><aside class=notes><p>When converting volumetric data to meshes, <strong>optimizing</strong> the output is crucial for achieving smooth and accurate
project we are working on at MDC where we utilize Neuroglancer to display large-scale mouse brains online.</aside><ul><li><strong>Collaboration-friendly</strong>: Share URLs with collaborators to provide access to the 3D visualization.</li></ul><div class=tutorial><a class=tutorial-link href=https://ida-mdc.github.io/workshop-visualization/tutorial-volume-rendering-neuroglancer/><div><div class=tutorial-header><i>Separate tutorial - click this box!</i></div><div class=title><div id=qr-tutorial-volume-rendering-neuroglancer class=qr-code></div><script>jQuery("#qr-tutorial-volume-rendering-neuroglancer").qrcode({text:"https://ida-mdc.github.io/workshop-visualization/tutorial-volume-rendering-neuroglancer/"})</script><h3>Volumetric data rendering with Neuroglancer</h3></div><div class=description>Use case description of how to render voxel-based volumetric data using Neuroglancer and stream data locally or remotely for visualization.</div></div><span class=cover-image style=background-image:url(/workshop-visualization/img/neuroglancer.png)></span></a></div></section><section id=section-11><h2 id=converting-volumetric-datasets-into-meshes>Converting volumetric datasets into meshes</h2><h3 id=annotations>Annotations</h3><aside class=notes><p>Annotations can be used to add specific information to volumetric datasets, such as marking points of interest (e.g., cell locations, regions of interest) or segmenting areas of the data. Converting these annotated datasets into meshes allows for the visual representation of those specific features.</p><p>When working with <strong>unannotated</strong> volumetric datasets, you can explore the data interactively using <strong>transfer functions</strong>. Transfer functions map intensity values in the dataset to colors and opacities, allowing you to visualize different regions of the volume without defining hard boundaries.
This technique is often used for soft, exploratory visualizations of the internal structures of the data.</p></aside><ul><li><strong>Transfer functions</strong>: Used for visualizing unannotated datasets, adjusting colors and opacities based on intensity values.</li></ul><aside class=notes>When converting volumetric data to <strong>meshes</strong>, it&rsquo;s necessary to draw concrete borders between the <strong>foreground</strong> (the object of interest) and the <strong>background</strong>. This is achieved through:</aside><ul><li><strong>Fixed thresholds</strong>: Used to generate meshes by separating foreground from background using a set intensity threshold.</li><li><strong>Content-based annotations</strong>: Create precise meshes by using annotated regions to define boundaries.</li></ul></section><section id=section-12 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-1>Converting volumetric datasets into meshes</h2><figure><img src=https://ida-mdc.github.io/workshop-visualization/img/annotation-conversion.jpg></figure><div style=flex:1></div><div class=citations><ul><li><a href=https://rupress.org/jcb/article/220/2/e202010039/211599/3D-FIB-SEM-reconstruction-of-microtubule-organelle>© Müller et al. https://doi.org/10.1083/jcb.202010039</a></li></ul></div></section><section id=section-13 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-2>Converting volumetric datasets into meshes</h2><h3 id=marching-cubes>Marching Cubes</h3><aside class=notes>The <strong>Marching Cubes algorithm</strong> is one of the most popular methods for extracting a 3D surface from volumetric data. It identifies the points in a voxel grid where the dataset crosses a specific threshold value (the <strong>isosurface</strong>) and uses those points to generate a mesh.</aside><figure><img src=https://ida-mdc.github.io/workshop-visualization/img/MarchingCubesEdit.svg alt="Marching cubes algorithm. 
Credit: Ryoshoru, Jmtrivial on Wikimedia, CC BY-SA 4.0" height=700px><figcaption><p>Marching cubes algorithm. Credit: <a href=https://commons.wikimedia.org/wiki/File:MarchingCubesEdit.svg>Ryoshoru, Jmtrivial on Wikimedia</a>, CC BY-SA 4.0</p></figcaption></figure></section><section id=section-14 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-3>Converting volumetric datasets into meshes</h2><h3 id=optimization>Optimization</h3><ul><li><strong>Binary masks</strong> vs. <strong>Probability maps</strong></li></ul><aside class=notes><p>When converting volumetric data to meshes, <strong>optimizing</strong> the output is crucial for achieving smooth and accurate
results. One good approach is using <strong>probability maps</strong> rather than binary masks as input for the <strong>Marching Cubes
algorithm</strong>.</p><ul><li><strong>Binary masks</strong>: Create rough, blocky meshes because the data is thresholded into hard 0/1 values, losing subpixel detail.</li><li><strong>Probability maps</strong>: Offer smoother results, as the algorithm can detect gradients between regions, improving mesh precision at subpixel levels.</li></ul></aside><figure><img src=https://ida-mdc.github.io/workshop-visualization/img/mesh-conversion-optimization.jpg></figure></section><section id=section-15 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-4>Converting volumetric datasets into meshes</h2><h3 id=reducing-mesh-complexity>Reducing mesh complexity</h3><aside class=notes>Large, complex meshes can be computationally intensive to render. Reducing mesh complexity helps with performance, especially for web viewers or real-time visualization. We’ll explore some standard techniques to simplify meshes while maintaining critical details.</aside><div class=flex></div><div class=horizontal><ul><li><strong>Decimation</strong>: A process to reduce the number of polygons in a mesh while maintaining the overall shape and detail.</li><li><strong>Remeshing</strong>: Tools like MeshLab and Blender offer remeshing techniques that can optimize mesh topology for better performance.</li><li><strong>LOD (Level of Detail)</strong>: Use LOD techniques to switch between different levels of mesh complexity based on the viewer’s distance.</li></ul><figure><img src=https://ida-mdc.github.io/workshop-visualization/img/reducing-mesh-complexity.jpg></figure></div><div class=flex></div></section><section id=section-16 class=repeated-heading><h2 id=converting-volumetric-datasets-into-meshes-5>Converting volumetric datasets into meshes</h2><h3 id=conversion-scripts>Conversion scripts</h3><aside class=notes>While several tools support converting volumetric datasets into meshes, VTK has worked particularly well in our
experience. Check out the tutorial below for more details. This includes Python code snippets, but also the
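The Marching Cubes extraction described in the diffed slides can be sketched in a few lines of Python. This is a minimal illustration, not the workshop's actual conversion script: it assumes scikit-image is installed, and the synthetic soft-edged sphere, its radius, and the iso-level are arbitrary choices made up for the example:

```python
import numpy as np
from skimage import measure  # scikit-image

# Synthetic stand-in for a probability map: a sphere with a smooth boundary ramp.
z, y, x = np.mgrid[:48, :48, :48]
r = np.sqrt((z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2)
volume = np.clip(1.0 - (r - 15.0) / 4.0, 0.0, 1.0)  # 1 inside, 0 outside

# Extract the isosurface at level 0.5. Marching Cubes interpolates the exact
# crossing point along each voxel edge, so the mesh is not locked to the grid.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

print(verts.shape, faces.shape)  # (N, 3) vertex coordinates, (M, 3) triangle indices
```

From here the mesh could be written out to a standard format (e.g. STL or OBJ) with any mesh library, or simplified by decimation as the slides suggest.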

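The binary-mask vs probability-map distinction from the Optimization slide can also be shown numerically. A hedged sketch assuming SciPy is available; the sphere mask and the smoothing sigma are invented for illustration, and stand in for a real segmentation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical segmentation: a hard 0/1 sphere mask.
z, y, x = np.mgrid[:64, :64, :64]
mask = (((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2) < 20 ** 2).astype(float)

# Smoothing the binary mask yields a probability-map-like volume: values between
# 0 and 1 appear near the boundary, which lets an isosurface extractor place
# vertices between voxels instead of snapping to voxel faces (blocky meshes).
prob = gaussian_filter(mask, sigma=1.5)

boundary_voxels = np.count_nonzero((prob > 0.1) & (prob < 0.9))
print(np.unique(mask))   # only 0 and 1: no sub-voxel information
print(boundary_voxels)   # many intermediate values along the sphere surface
```

Running Marching Cubes at level 0.5 on `prob` rather than `mask` is what produces the smoother result shown in the slide's comparison image.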