fixes #70: Refactor the README Sequence Diagram in Mermaid (#72)
* Update Sequence Diagram README.md

![SD_1 drawio (2)](https://github.com/ai-cfia/nachet-backend/assets/19809069/272f37dc-f4ec-449b-ba82-950c54b9f856)

issue #51: Fixing Markdown lint

issue #51: fixing MD lint
Maxence Guindon committed Apr 8, 2024
1 parent dd6e6c2 commit 8e3f6c5
Showing 3 changed files with 60 additions and 22 deletions.
30 changes: 28 additions & 2 deletions README.md
@@ -1,8 +1,34 @@
# nachet-backend
# :microscope: nachet-backend 🌱

## High level sequence diagram

![SD_1 drawio (2)](https://github.com/ai-cfia/nachet-backend/assets/19809069/272f37dc-f4ec-449b-ba82-950c54b9f856)
```mermaid
sequenceDiagram
title: High Level Sequence Diagram 1.0.0
actor Client
participant frontend
participant backend
participant EndpointAPI
participant AzureStorageAPI
Client->>+frontend: getDirectoriesList()
frontend->>+backend: HTTP POST req.
backend->>+AzureStorageAPI: get_blobs()
AzureStorageAPI-->>-backend: blobListObject
backend-->>frontend: directories list res.
frontend-->>Client: display directories
Client->>frontend: handleInference()
frontend->>backend: HTTP POST req.
backend->>+AzureStorageAPI: upload_image(image)
AzureStorageAPI-->>-backend: imageBlobObject
backend->>+EndpointAPI: get_inference_result(image)
EndpointAPI-->>-backend: inference res.
backend->>backend: process inf. result
backend-->>frontend: inference res.
frontend-->>-Client: display inference res.
backend->>+AzureStorageAPI: (async) upload_inference_result(json)
```
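
To make the diagram concrete, the following is a minimal Python sketch of the
two flows it shows, assuming hypothetical async storage and endpoint clients;
the helper names (`get_blobs`, `upload_image`, `get_inference_result`,
`upload_inference_result`) simply mirror the diagram labels and are not the
repository's actual signatures.

```python
import asyncio


async def get_directories_list(azure_storage) -> list[str]:
    # Client -> frontend -> backend: list the blob "directories".
    blob_list = await azure_storage.get_blobs()
    return [blob.name for blob in blob_list]


async def handle_inference(azure_storage, endpoint_api, image: bytes) -> dict:
    # Store the image first so the inference can be traced back to a blob.
    await azure_storage.upload_image(image)
    # Ask the model endpoint for a prediction, then post-process it.
    raw_result = await endpoint_api.get_inference_result(image)
    result = process_inference_result(raw_result)
    # Fire-and-forget upload of the processed result (last step in the diagram).
    asyncio.create_task(azure_storage.upload_inference_result(result))
    return result


def process_inference_result(raw_result: dict) -> dict:
    # Placeholder for the backend's "process inf. result" step.
    return raw_result
```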

### Details

30 changes: 15 additions & 15 deletions docs/nachet-inference-documentation.md
@@ -20,9 +20,9 @@ selected by a parameter.

### Pipelines

Pipelines are defined as a set of models that follow each other, where the output of
one model is used as input for the next models, and so on. A pipeline contains from 1 to n
models.
Pipelines are defined as a set of models that follow each other, where the
output of one model is used as input for the next model, and so on. A pipeline
contains from 1 to n models.
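
As a rough illustration of that chaining (not the project's actual
implementation), a pipeline can be thought of as an ordered list of callables:

```python
def run_pipeline(models, data):
    """Feed `data` through 1 to n models, each consuming the previous output."""
    result = data
    for model in models:
        result = model(result)
    return result


# e.g. run_pipeline([detector, classifier], image) for a two-model pipeline
```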

#### Pipelines flowchart 1.0.0

@@ -141,18 +141,18 @@ sequenceDiagram

### Inference Request function

The inference request function plays a crucial role in Nachet Interactive's backend.
It requests actions from selected models or pipelines based on certain checks.
These checks include verifying that all arguments required to find or initialize
the blob container and process the image have been transmitted to the function.
It also checks if the selected pipeline is recognized by the system and if the image sent for analysis
has a valid header.

If all the above checks pass, the function initializes or finds the user blob container
and uploads the image. Next, it requests an inference from every model in the pipeline.
Each model specifies their `entry_function` (how to call and retrieve data) and whether
they have a `process_inference` function. Based on these indications, the results are returned
and stored in the cache.
The inference request function plays a crucial role in Nachet Interactive's
backend. It requests actions from selected models or pipelines based on certain
checks. These checks include verifying that all arguments required to find or
initialize the blob container and process the image have been transmitted to the
function. It also checks if the selected pipeline is recognized by the system
and if the image sent for analysis has a valid header.

If all the above checks pass, the function initializes or finds the user blob
container and uploads the image. Next, it requests an inference from every model
in the pipeline. Each model specifies its `entry_function` (how to call and
retrieve data) and whether it has a `process_inference` function. Based on
these indications, the results are returned and stored in the cache.

If no other model is called, the last result is then processed and sent to the frontend.
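
Putting the description above together, a highly simplified sketch of such a
request function could look like the following; `entry_function` and
`process_inference` come from the text, while every other name (arguments,
cache shape, validation helper) is hypothetical.

```python
async def inference_request(container_client, pipeline, image: bytes, cache: dict):
    # Check that everything needed to store and analyse the image is present.
    if container_client is None or not pipeline or image is None:
        raise ValueError("missing argument for the inference request")
    if not has_valid_header(image):
        raise ValueError("the image sent for analysis has an invalid header")

    # The user's blob container is assumed to be found or initialized already;
    # here the image is simply uploaded to it.
    await container_client.upload_image(image)

    # Request an inference from every model in the pipeline, caching results.
    result = image
    for model in pipeline:
        result = await model.entry_function(result)
        if model.process_inference is not None:
            result = model.process_inference(result)
        cache[model.name] = result

    # When no other model is left, the last result goes back to the frontend.
    return result


def has_valid_header(image: bytes) -> bool:
    # Placeholder header check; the real validation is more thorough.
    return len(image) > 0
```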

22 changes: 17 additions & 5 deletions docs/nachet-model-documentation.md
@@ -31,7 +31,12 @@ Nachet Interactive models perform the following tasks:

### Request Inference Function

The request inference functions request a prediction from the specified model (such as Swin, Nachet-6seeds, etc.). If needed, the function will process the data to be readable by the next model in the pipeline. For instance, the Seed-detector only returns "seed" as a label, and its inference needs to be processed and passed to the next model which assigns the correct label to the seeds.
The request inference functions request a prediction from the specified model
(such as Swin, Nachet-6seeds, etc.). If needed, the function will process the
data to be readable by the next model in the pipeline. For instance, the
Seed-detector only returns "seed" as a label, and its inference needs to be
processed and passed to the next model, which assigns the correct label to the
seeds.
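
For instance, a hand-off between a detector and a classifier could look
roughly like the sketch below; the result keys (`boxes`, `label`, `box`) and
helpers are illustrative guesses, not the actual schema.

```python
def relabel_detections(detector_result: dict, classifier, image) -> dict:
    # Replace the detector's generic "seed" labels with the classifier's output.
    for detection in detector_result.get("boxes", []):
        if detection.get("label") == "seed":
            crop = crop_to_box(image, detection["box"])
            detection["label"] = classifier(crop)
    return detector_result


def crop_to_box(image, box):
    # Placeholder: a real implementation would crop the image to the box.
    return image
```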

## Return value of models

@@ -82,16 +87,21 @@ The request inference functions request a prediction from the specified model (s

### Why topN

We decided to name the top results property top N because this value can return n predictions. Usually in AI, the top 5 results are used to measure the accuracy of a model. If the correct result is in the top 5, the prediction is considered true.
We decided to name the top results property top N because this value can
return n predictions. Usually in AI, the top 5 results are used to measure the
accuracy of a model. If the correct result is in the top 5, the prediction is
considered true.

This is useful in cases where the user's attention is on more than one result.

> "Top N accuracy — Top N accuracy is when you measure how often your predicted class falls in the top N values of your softmax distribution."
> "Top N accuracy — Top N accuracy is when you measure how often your predicted
> class falls in the top N values of your softmax distribution."
[Nagda, R. (2019-11-08) *Evaluating models using the Top N accuracy metrics*. Medium](https://medium.com/nanonets/evaluating-models-using-the-top-n-accuracy-metrics-c0355b36f91b)
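
As a worked example of that definition (illustrative code, not part of
Nachet), top-2 accuracy over three samples can be computed like this:

```python
import numpy as np


def top_n_accuracy(probabilities, true_labels, n):
    # Indices of the n highest-probability classes for each sample.
    top_n = np.argsort(probabilities, axis=1)[:, -n:]
    hits = [true_labels[i] in top_n[i] for i in range(len(true_labels))]
    return float(np.mean(hits))


# Three samples over four classes: the true class lands in the top 2
# for samples 0 and 2 only, so top-2 accuracy is 2/3.
probs = np.array([
    [0.10, 0.60, 0.25, 0.05],
    [0.70, 0.15, 0.10, 0.05],
    [0.05, 0.30, 0.40, 0.25],
])
labels = np.array([2, 3, 1])
print(round(top_n_accuracy(probs, labels, n=2), 3))  # 0.667
```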

### Box around seed

The `box` key stores the value for a specific box around a seed. This helps the frontend application build a red rectangle around every seed on the image.
The `box` key stores the value for a specific box around a seed. This helps the
frontend application build a red rectangle around every seed on the image.

![image](https://github.com/ai-cfia/nachet-backend/assets/96267006/469add8d-f40a-483f-b090-0ebcb7a8396b)
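
For illustration only, a box expressed in normalized coordinates could be
converted to pixel values before drawing; the key names below are an
assumption, not necessarily the actual schema.

```python
def box_to_pixels(box: dict, image_width: int, image_height: int) -> tuple:
    # Convert normalized [0, 1] coordinates to pixel values for the rectangle.
    left = int(box["topX"] * image_width)
    top = int(box["topY"] * image_height)
    right = int(box["bottomX"] * image_width)
    bottom = int(box["bottomY"] * image_height)
    return left, top, right, bottom


print(box_to_pixels({"topX": 0.1, "topY": 0.2, "bottomX": 0.4, "bottomY": 0.6}, 640, 480))
# (64, 96, 256, 288)
```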

@@ -149,7 +159,9 @@ A list of common errors that models return to the backend.
## Pipeline and model data

In order to dynamically build the pipeline in the backend from the model, the
following data structure was designed. For now, the pipelines will have two keys for their names (`model_name`, `piepline_name`) to support the frontend code until it is changed to get the name of the pipeline with the correct key.
following data structure was designed. For now, the pipelines will have two keys
for their names (`model_name`, `piepline_name`) to support the frontend code
until it is changed to get the name of the pipeline with the correct key.

```yaml
version:
```
