diff --git a/README.md b/README.md
index 2aa1109..bbb3ea1 100644
--- a/README.md
+++ b/README.md
@@ -20,6 +20,33 @@ The __free Zetane Viewer__ is a tool to help understand and accelerate discovery
 # **Zetane Viewer**
+
+
+
+
+
+
 ## Installation
 You can install the __free__ Zetane viewer for Windows, Linux and Mac, and explore ZTN and ONNX files.
@@ -30,6 +57,8 @@ You can install the __free__ Zetane viewer for Windows, Linux and Mac, and explo
 [Download for Mac](https://download.zetane.com/zetane/Zetane-1.7.4.dmg)
+
+
 ## Tutorial
 In this [video](https://www.youtube.com/watch?v=J3Zd5GR_lQs&feature=youtu.be), we will show you how to load a Zetane or ONNX model, navigate the model and view different tensors:
@@ -38,6 +67,9 @@ In this [video](https://www.youtube.com/watch?v=J3Zd5GR_lQs&feature=youtu.be), w
 Below are step-by-step instructions for loading and inspecting a model in the Zetane viewer:
+
+
+
 - ### How to load a model
 The viewer supports both .ONNX and .ZTN files. The ZTN files were generated from the Keras and PyTorch scripts shared in this Git repository. After launching the viewer, to load a Zetane model, simply click “Load Zetane Model” in the DATA I/O menu. To load an ONNX model, click “Import ONNX Model” in the same menu. Below you can access the ZTN files for a few models to load. You can also access ONNX files from the [ONNX Model Zoo](https://github.com/onnx/models).

@@ -49,6 +81,8 @@ At the highest level, we have the model architecture which is composed of interc

 architecture
+
+
 - ### How to navigate
 You may navigate the model viewer window by right-clicking and dragging to explore the space, and by using the scroll wheel to zoom in and out. [Here](https://docs.zetane.com/interactions.html#) is the complete list of navigation instructions. You can change the behavior of the mouse wheel (either to zoom or to navigate) via the Mouse Zoom toggle in the top menu.
@@ -57,6 +91,9 @@ You may navigate the model viewer window by right clicking and dragging to explo
 zoom

+
+
+
 - ### Loading custom model inputs
 After loading a model you may want to send your own inputs to the model for inference. Zetane supports loading .npy, .npz, .png, .jpg, .pb (protobuf), .tiff, and .hdr files that match the input dimensions of the model. The Zetane engine will attempt to intelligently resize the loaded file (if possible) in order to send the data to the model. After loading and running the input, you will be able to explore in detail how your model interpreted the input data.
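One way to prepare such an input outside the viewer: the snippet below writes a .npy array shaped to a hypothetical 1×3×224×224 model input. The shape is an assumption for illustration; match it to your own model's input dimensions.

```python
import numpy as np

# Hypothetical input shape (batch, channels, height, width); adjust to your model.
shape = (1, 3, 224, 224)

# Random image-like data, float32 as most ONNX vision models expect.
x = np.random.rand(*shape).astype(np.float32)

# Save as .npy, one of the input formats the viewer accepts.
np.save("model_input.npy", x)

# Round-trip check: the file reloads with the same shape and dtype.
loaded = np.load("model_input.npy")
print(loaded.shape, loaded.dtype)  # (1, 3, 224, 224) float32
```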

@@ -65,6 +102,9 @@ After loading a model you may want to send your own inputs to the model to infer
 tensors

+
+
+
 - ### How to inspect different layers and feature maps
 For each layer, you have the option to view all the feature maps and filters by clicking “Show Feature Maps” on each node. You may inspect the inputs, outputs, weights and biases using the tensor view bar.
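As background on what a "feature map" is: each filter of a convolutional layer is slid across the input to produce one 2D map of activations. A minimal NumPy sketch (the 5×5 input and two 3×3 edge filters are illustrative values, not taken from any model in this repository):

```python
import numpy as np

# A toy 5x5 single-channel input and two 3x3 filters (assumed values).
image = np.arange(25, dtype=np.float32).reshape(5, 5)
filters = np.stack([
    np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=np.float32),  # vertical edges
    np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=np.float32),  # horizontal edges
])

def conv2d_valid(img, kernel):
    """Naive 'valid' cross-correlation, as in a conv layer without padding."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# One feature map per filter -- this is what a feature-maps view displays per node.
feature_maps = np.stack([conv2d_valid(image, k) for k in filters])
print(feature_maps.shape)  # (2, 3, 3)
```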

@@ -72,6 +112,9 @@ For each layer, you have the option to view all the feature maps and filters by
 featuremap

+
+
+
 - ### Tensor view bar
 By clicking the associated button, you can visualize the inputs, outputs, weights and biases (if applicable) of each individual layer. You can also investigate the shape, type, mean and standard deviation of each tensor.
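The summary statistics reported for each tensor are the standard ones; a minimal NumPy sketch of computing them on a stand-in tensor (the values here are illustrative, not from any real layer):

```python
import numpy as np

# A stand-in tensor; in the viewer this would be a layer's weights or activations.
tensor = np.linspace(-1.0, 1.0, 12, dtype=np.float32).reshape(3, 4)

# The same per-tensor summary a tensor view reports.
print("shape:", tensor.shape)          # (3, 4)
print("dtype:", tensor.dtype)          # float32
print("mean:", float(tensor.mean()))   # ~0.0 for this symmetric range
print("std:", float(tensor.std()))
```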

@@ -85,6 +128,9 @@ Statistics about the tensor value and its distribution is given in the histogram
 tensorpanel

+
+
+
 - ### Styles of tensor visualization
 Tensors can be inspected in different ways, including a 3D view and a 2D view with and without the actual values.
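A 2D color view of a tensor is essentially a per-slice min-max normalization to a displayable range. A hedged NumPy sketch of that idea for a rank-3 tensor, where each channel becomes one 2D image (the 4×8×8 shape is an assumption for illustration):

```python
import numpy as np

# A rank-3 tensor (channels, height, width).
t = np.random.rand(4, 8, 8).astype(np.float32)

def to_grayscale(slice2d):
    """Min-max normalize one 2D slice to 0-255, as a 2D color-mapped view would."""
    lo, hi = slice2d.min(), slice2d.max()
    scaled = (slice2d - lo) / (hi - lo) if hi > lo else np.zeros_like(slice2d)
    return (scaled * 255).astype(np.uint8)

# One 2D image per channel -- a feature-maps style view of the rank-3 tensor.
views = np.stack([to_grayscale(c) for c in t])
print(views.shape, views.dtype)  # (4, 8, 8) uint8
```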

@@ -102,29 +148,59 @@ Tensor values and color representations of each value based on the gradient show
 Tensor values__ | tensor_viz_values
 Feature maps view when the tensor has shape of dimension 3 | tensor_viz_values
+
+
 # **Models**
 We have generated a few ZTN models for inspecting their architecture and internal tensors in the viewer. We have also provided the code used to generate these models.
+
+
+
+
 ## Image Classification
 - [Alexnet](models/README.md#alexnet)
 - [EfficientNet](models/README.md#efficientnet)
 - [Resnet50v2](models/README.md#resnet50v2)
+
+
+
+
 ## Object Detection
 - [YoloV3](models/README.md#yolov3)
 - [SSD](models/README.md#ssd)
+
+
+
+
 ## Image Segmentation
 - [Unet](models/README.md#unet)
+
+
+
+
 ## Body, Face and Gesture Analysis
 - [Emotion_ferplus8](models/README.md#emotion_ferplus8)
 - [RFB_320](models/README.md#rfb_320)
 - [vgg_ilsvrc_16_age](models/README.md#vgg_ilsvrc_16_age)
 - [vgg_ilsvrc_16_gen](models/README.md#vgg_ilsvrc_16_gen)
+
+
+
+
 ## Image Manipulation
 - [Super resolution](models/README.md#super-resolution)
 - [Style transfer](models/README.md#style-transfer)
+
+
+
+
 ## XAI
 - [XAI for VGG16](models/README.md#xai-with-keras)
 - [XAI for Alexnet](models/README.md#xai-with-pytorch)
+
+
+
+
 ## Classic Machine Learning
 - [Sklearn Iris](models/README.md#sklearn-iris)
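The Classic Machine Learning entry above covers non-deep models. As a hedged sketch of the kind of scikit-learn model behind an "Sklearn Iris" example (the repository's actual training script lives under models/ and may differ in every detail):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a simple classifier on the classic Iris dataset; a converter such as
# skl2onnx could then export it to ONNX for inspection in a viewer.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy, typically well above 0.9
```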