Updating documentation 03 24 (#46)
* Adding custom model documentation

* Adding run VIO locally doc

* Update tflite_serving Readme to add the conda env install

* Beautifying main README

* Update docs/running_vio_locally.md

Co-authored-by: Baptiste O'Jeanson <[email protected]>

* Modifications following requests: edit doc / local environment profile / .env file for docker connection

* linting

* Editing Hub/host differences

* Add parameter description

* Update parameters doc

* Adding small info about .env

* Add Adapters description

---------

Co-authored-by: gireg.roussel <[email protected]>
Co-authored-by: Baptiste O'Jeanson <[email protected]>
3 people authored Apr 11, 2024
1 parent 247c331 commit 370d88e
Showing 11 changed files with 407 additions and 15 deletions.
50 changes: 38 additions & 12 deletions README.md
@@ -1,14 +1,18 @@
# VIO - Visual Inspection Orchestrator
<div align="center">
<h1>VIO - Visual Inspection Orchestrator</h1>

![CI edge_orchestrator](https://github.com/octo-technology/VIO/actions/workflows/ci_edge_orchestrator.yml/badge.svg)
![CI edge_interface](https://github.com/octo-technology/VIO/actions/workflows/ci_edge_interface.yml/badge.svg)
![GitHub issues](https://img.shields.io/github/issues/octo-technology/VIO)

Visual Inspection Orchestrator is a modular framework made to ease the deployment of VI usecases.
🎥 Visual Inspection Orchestrator is a modular framework made to ease the deployment of VI use cases 🎥

*Use case example: Quality check of a product manufactured on an assembly line.*
</div>

VIO full documentation can be found [here](https://octo-technology.github.io/VIO/)
<h1></h1>

## 🏗️ Modular framework

The VIO modules are split between:

@@ -24,31 +28,35 @@ The VIO modules are split between:
- [The hub monitoring](docs/hub_monitoring.md)
- [The hub deployment playbook](docs/hub_deployment.md)

## Requirements
**VIO full documentation can be found [here](https://octo-technology.github.io/VIO/)**

## 🧱 Requirements

- `docker` installed
- `make` installed

## Install the framework
## 🚀 Getting started

### Install the framework

```shell
git clone [email protected]:octo-technology/VIO.git
```

## Run the stack
### Running the stack

To launch the stack you can use the [Makefile](../Makefile) at the root of the repository, which defines the different
targets based on the [docker-compose.yml](../docker-compose.yml):
targets based on the [docker-compose.yml](../docker-compose.yml) as described below, or [run the modules locally](docs/running_vio_locally.md).

### Start vio
#### Start vio

To start all edge services (orchestrator, model-serving, interface, db) with local hub monitoring (grafana):

```shell
make vio-edge-up
```

### Stop vio
#### Stop vio

To stop and delete all running services:

@@ -60,6 +68,8 @@ To check all services are up and running you can run the command `docker ps`, you

![stack-up-with-docker](docs/images/stack-up-with-docker.png)

### Accessing the services

Once all services are up and running you can access:

- the swagger of the edge orchestrator API (OrchestratorAPI): [http://localhost:8000/docs](http://localhost:8000/docs)
@@ -72,16 +82,32 @@ launch the following actions:

![vio-architecture-stack](docs/images/edge_orchestrator-actions.png)

## Releases
## 🛰️ Technology features

- 🏠 Hosting:
  - ☁️ Hub: Cloud possibilities with [Azure](https://portal.azure.com/#home) and [GCP](https://cloud.google.com/)
  - 🛸 Host: Using Raspberry Pis
  - 🐳 Host: [Docker](https://www.docker.com/) or locally with Anaconda
- 👮 Fleet management:
  - 📦 Fleet integration/deployment with [Ansible](https://docs.ansible.com/ansible/latest/index.html)
  - 🕵️ Fleet supervision/observability with [Grafana](https://grafana.com/) & [Open-Telemetry](https://opentelemetry.io/docs/)
- ⚡️ Backend API with [FastAPI](https://fastapi.tiangolo.com/)
- 📜 Frontend with [Vue.js](https://fr.vuejs.org/)
- 🏭 Continuous Integration & Continuous Deployment:
  - ♟️ Github actions
  - 📝️ Clean code with [Black](https://black.readthedocs.io/en/stable/index.html) & [Flake8](https://flake8.pycqa.org/en/latest/)
  - ✅ Tested with [Pytest](https://docs.pytest.org/en/8.0.x/)
  - 📈 [Grafana](https://grafana.com/) insight & dashboard

## 🏭 Releases

Build Type | Status | Artifacts
-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------
**Docker images** | [![Status](https://github.com/octo-technology/VIO/actions/workflows/publication_vio_images.yml/badge.svg)](https://github.com/octo-technology/VIO/actions/workflows/publication_vio_images.yml/badge.svg) | [Github registry](https://github.com/orgs/octo-technology/packages)

## License
## 📝 License

VIO is licensed under [Apache 2.0 License](docs/LICENSE.md)

## Contributing
## 🙋 Contributing

Learn more about how to get involved in the [CONTRIBUTING.md](docs/CONTRIBUTING.md) guide
39 changes: 38 additions & 1 deletion docs/CONTRIBUTING.md
@@ -122,7 +122,44 @@ class TestMyFunction:
```
- Don't mistake a stub for a mock. A mock is used to assert that it has been called (see above example). A stub
is used to simulate the returned value.


### Testing and docker images
- In order to run the tests, your docker instance needs to be connected to GitHub, allowing docker to pull the images.
Complete the `VIO/edge_orchestrator/tests/.env` file with your GitHub username and access token to make this connection possible. The token needs `read:packages`, `write:packages` and `delete:packages` permissions.
[More information here](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic)
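
As an illustration, such a file could look like the sketch below (the variable names are hypothetical — check the project's docker-compose/test setup for the exact ones expected):

```
# VIO/edge_orchestrator/tests/.env -- hypothetical variable names
GITHUB_USERNAME=<your_github_username>
GITHUB_TOKEN=<personal_access_token_with_packages_scopes>
```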

⚠ If you are not working with an M1 processor ⚠

You may encounter some deployment problems when starting the Orchestrator's tests.
To resolve them, build a docker image that fits your system using the Orchestrator's makefile, then modify
the `conftest.py` file to edit the `image_name` field.
```python
# VIO/edge_orchestrator/tests/conftest.py
EDGE_MODEL_SERVING = {
"image_name": --NEW_DOCKER_IMAGE--,
"container_volume_path": "/tf_serving",
"host_volume_path_suffix": "edge_model_serving",
}
```
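
For example, building the local image could look like the following (the tag and build context are assumptions — check the Orchestrator's Makefile for the actual target):

```shell
# build an image for your architecture (paths and tags are illustrative)
docker build -t edge_model_serving:local VIO/edge_model_serving
```

You would then reference that tag in the `image_name` field of `conftest.py`.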
You may need to change the `starting_log` parameter and remove the call to `check_image_presence_or_pull_it_from_registry`
from the `containers.py` file.

```python
# VIO/edge_orchestrator/tests/fixtures/containers.py
if tf_serving_host is None or tf_serving_port is None:
port_to_expose = 8501
container = TfServingContainer(
image=image_name,
port_to_expose=port_to_expose,
env={"MODEL_NAME": exposed_model_name},
host_volume_path=host_volume_path,
container_volume_path=container_volume_path,
)
container.start("INFO: Application startup complete.")
```

## Versioning strategy
- Git tutorial:
- [Basic git tutorial](http://rogerdudler.github.io/git-guide/)
57 changes: 57 additions & 0 deletions docs/adding_a_custom_model.md
@@ -0,0 +1,57 @@
# Adding a custom model to VIO

## Model export & VIO Configuration
### Model format
The edge model serving supports 3 types of models: TensorFlow, TensorFlow Lite and Torch.

This note presents how to add a custom TensorFlow Lite model to VIO. The process is similar for the two other types,
for which you can follow the respective README files ([Torch serving](../edge_model_serving/torch_serving/README.md) and
[TensorFlow serving](../edge_model_serving/tf_serving/README.md)) and work in their respective edge sub-folders.

Coming soon: Integration with [Hugging Face](https://huggingface.co/)

### Saving the model
The model has to be given to the Edge_serving module. Export your custom model to tflite and store it as
`VIO/edge_model_serving/models/tflite/<model_folder_name>/<model_name>.tflite`. (If needed, add a .txt file with the
labels/class names.)
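
As a minimal sketch of the export step (assuming a trained Keras model; paths and names are placeholders):

```python
import tensorflow as tf

# load your trained model (placeholder path)
model = tf.keras.models.load_model("path/to/my_custom_model")

# convert it to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# store it where the edge model serving expects it
output_path = "VIO/edge_model_serving/models/tflite/my_custom_model/my_custom_model.tflite"
with open(output_path, "wb") as f:
    f.write(tflite_model)
```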

The Edge_orchestrator has to know about the new model that is available. To do so, complete the inventory file
`VIO/edge_orchestrator/config/inventory.json` with all the information required for your model type under the
`models` category. Note that the model name variable should match the model folder name. You can refer to [this subsection](edge_orchestrator.md#add-a-new-model).


### Creating the configuration files
Now that all the components know about your new model, you will need to create a configuration that uses it. Create
a new JSON file in `VIO/edge_orchestrator/config/station_configs` with any config name. You can follow the structure
of this file described in the [Add a new configuration](edge_orchestrator.md#add-a-new-configuration-) subsection.

## Adapting the code to your model - Optional

There are two layers of post-processing that may need to be edited to integrate your model: at the Edge Serving
inference level and at the Edge Orchestrator reception level.

- Detection model

The implemented methods are designed to support the Mobilenet_SSD format, where the output of the model is
`List[List[Boxes], List[Classes], List[Scores]]` and the box format is `[ymin, xmin, ymax, xmax]`.

If your custom model doesn't fit this format, you can add custom post-processing methods.

The Edge Serving route calling the model, `VIO/edge_model_serving/tflite_serving/src/tflite_serving/api_routes.py`,
performs a first processing step: it splits the model's output tensor into a dictionary containing the final box
coordinates, classes and scores.

The results are then post-processed at the Orchestrator level, in `VIO/edge_orchestrator/edge_orchestrator/infrastructure/
model_forward/tf_serving_detection_wrapper.py`, to filter the detections of the desired classes, convert the box
coordinates to `Box: [xmin, ymin, xmax, ymax]`, and turn the information into a dictionary with one key per
detection.
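
A rough sketch of that Orchestrator-side step (simplified for illustration — not the actual wrapper code; the key and parameter names are assumptions):

```python
def post_process_detections(raw, class_names, classes_to_detect, objectness_threshold):
    """Filter SSD-style detections and convert boxes from [ymin, xmin, ymax, xmax]
    to [xmin, ymin, xmax, ymax], one dictionary entry per detection."""
    detections = {}
    boxes = raw["detection_boxes"]
    classes = raw["detection_classes"]
    scores = raw["detection_scores"]
    for box, cls, score in zip(boxes, classes, scores):
        label = class_names[int(cls)]
        if label not in classes_to_detect or score < objectness_threshold:
            continue  # keep only the desired classes above the threshold
        ymin, xmin, ymax, xmax = box  # Mobilenet_SSD box order
        detections[f"object_{len(detections) + 1}"] = {
            "label": label,
            "location": [xmin, ymin, xmax, ymax],
            "score": float(score),
        }
    return detections
```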

- Classification & other models

The process is exactly the same as for the detection model; the only difference is at the Orchestrator level.
Instead of modifying the `tf_serving_detection_wrapper.py` file, select the file that corresponds to your model and
modify the `classification` or `detection_and_classification` wrapper. You may also not have to handle box coordinates.




158 changes: 158 additions & 0 deletions docs/edge_orchestrator.md
@@ -201,6 +201,20 @@ Here's a template of a config file.
The comments are only here to guide you; you should delete them in your new json config.
Station config: Camera | Description
-----------------|-----------------------------------------------------------------------
`type` | Camera type can be `fake`, `pi_camera` or `usb_camera`. `pi_camera` is used for Raspberry Pi deployments. `usb_camera` is used when a camera or webcam connected to the edge must be found. A `fake` camera does not capture images but picks a random .jpg or .png file in the folder pointed to by the `input_images_folder` parameter, located in `edge_orchestrator/data/<input_images_folder>`.
`input_images_folder` | Used with `fake` cameras; path to the folder from which the pictures are taken.
`position` | Used for metadata, with the purpose of saving the camera parameters in the future
`exposition` | Used for metadata, with the purpose of saving the camera parameters in the future
`models_graph` | Pipeline of models used during inference. Dictionary of models containing their names, dependencies on other models and all their possible parameters.
`camera_rule` | Dictionary with a key `name` containing the rule name and a key `parameters` containing the selected rule's inputs
For the item rules, just provide the rule's `name` and `parameters` as a dictionary of the inputs.
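For illustration, a camera entry in a station config could look like the sketch below (values and the rule name are examples, not a canonical configuration):

```json
{
  "cameras": {
    "camera_id4": {
      "type": "fake",
      "input_images_folder": "people_dataset",
      "position": "back",
      "exposition": 100,
      "models_graph": {
        "model_id4": {
          "depends_on": []
        }
      },
      "camera_rule": {
        "name": "expected_label_rule",
        "parameters": {
          "expected_label": ["person"]
        }
      }
    }
  }
}
```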
## Add a new model
- All our models are in tflite format. In order to add an already trained model in the `tflite_serving` folder.
@@ -245,6 +259,21 @@ Inside this folder should be the .tflite model and if needed a .txt file with the
}
```
Model parameters | Description
------------------------|---------------------------------------------------------------------------
`category` | Model's category, can be `object_detection`, `classification` or `object_detection_with_classification`
`version` | Model's version, used in the API link; should be 1 (note: this is currently not used)
`model_type` | Type of model used, either `Mobilenet` or `yolo`. Mobilenet models return boxes as `[ymin, xmin, ymax, xmax]` and Yolo as `[x_center, y_center, width, height]`
`image_resolution` | List of ints corresponding to the x,y image size ingested by the model
`depends_on` | Used to design model pipelines; a list of model names
`class_names` | List of the label names as a list of strings
`class_names_path` | Path to the labels file; the file should be located under the `edge_orchestrator/data` folder
`class_to_detect` | List of label names that will be detected (for Mobilenet)
`output: detection_boxes` | For detection models, the name given to the predicted boxes
`output: detection_scores` | For detection models, the name given to the predicted scores
`output: detection_classes` | For detection models, the name given to the predicted classes
`output: detection_metadata` | For detection models, the name given to the predicted metadata
`objectness_threshold` | Score threshold under which an object won't be detected
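Putting these parameters together, a detection model entry under the `models` category of `inventory.json` could look like this sketch (illustrative values only):

```json
{
  "models": {
    "mobilenet_ssd_v2_coco": {
      "category": "object_detection",
      "version": 1,
      "model_type": "Mobilenet",
      "image_resolution": [300, 300],
      "depends_on": [],
      "class_names_path": "coco_labels.txt",
      "class_to_detect": ["person"],
      "output": {
        "detection_boxes": "detection_boxes",
        "detection_scores": "detection_scores",
        "detection_classes": "detection_classes"
      },
      "objectness_threshold": 0.5
    }
  }
}
```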
## Add new camera rule
In order to make a final decision, i.e. the item rule, we first need camera rules. Each camera gets a rule.
@@ -293,3 +322,132 @@ which gets the right method from the name of the item rule in the station config file
The camera and item rules are called in the edge_orchestrator method `edge_orchestrator/edge_orchestrator/domain/use_cases/edge_orchestrator.py`,
in the `apply_business_rules` function.
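
Conceptually, a camera rule reduces one camera's inferences to a local decision, as in this simplified sketch (the function name, decision values and exact interface are illustrative, not the actual implementation):

```python
def expected_label_rule(inferences: dict, expected_label: list) -> str:
    """Camera-level decision: 'OK' if at least one detected object carries an expected label."""
    labels = [
        detection["label"]
        for model_output in inferences.values()
        for detection in model_output.values()
    ]
    return "OK" if any(label in expected_label for label in labels) else "KO"
```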
## Adapters description
### Binary storage adapter
When an image is captured by any camera, VIO saves it in a storage backend. The binary storage adapter is responsible
for this process. 4 binary storage systems are implemented in VIO:
- File System Binary Storage: Saves the image in the filesystem under the `VIO/edge_orchestrator/data/storage` folder.
- Memory Binary Storage: Saves the image in memory as a dictionary.
- Azure Container Binary Storage: Saves the images in an Azure Blob Storage container.
- GCP Binary Storage: Saves the images in a Google Cloud Storage bucket.

These adapters are implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/binary_storage` folder and
the base class is defined in `edge_orchestrator/edge_orchestrator/domain/ports/binary_storage.py`.
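
All adapters follow the same port/adapter pattern: an abstract interface under `domain/ports` (or `domain/models`) with concrete implementations under `infrastructure`. A condensed sketch of the idea (method names and signatures are illustrative, not the actual code):

```python
from abc import ABC, abstractmethod


class BinaryStorage(ABC):
    """Port: how the domain saves captured images, whatever the backend."""

    @abstractmethod
    def save_item_binaries(self, item_id: str, camera_id: str, image: bytes) -> None:
        ...


class MemoryBinaryStorage(BinaryStorage):
    """Adapter: keeps images in a plain dictionary (handy for tests)."""

    def __init__(self) -> None:
        self.binaries: dict = {}

    def save_item_binaries(self, item_id: str, camera_id: str, image: bytes) -> None:
        self.binaries.setdefault(item_id, {})[camera_id] = image
```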
### Camera adapter
The camera adapter is responsible for locating the connected cameras and capturing images. 3 camera systems are
implemented in VIO and are chosen in the station configuration:
- Fake Camera: Picks a random .jpg or .png file in the folder pointed to by the `input_images_folder` parameter,
located in `edge_orchestrator/data/<input_images_folder>`.
- Pi Camera: Used for Raspberry Pi deployments to capture the images.
- USB Camera: Used to find the connected cameras or webcams and capture images.

These adapters are implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/camera` folder and the base
Camera class from which the adapters inherit is defined in `edge_orchestrator/edge_orchestrator/domain/models/camera.py`.
### Inventory adapter
Used to store the configuration settings. One adapter is available, for json configuration files.
- Json Inventory: Reads the configuration from a json file.

This adapter is implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/inventory` folder and the base
Inventory class is defined in `edge_orchestrator/edge_orchestrator/domain/ports/inventory.py`.
### Metadata storage adapter
When a task is done, the configuration and the results are saved in a metadata storage. An example of the stored data
is shown below:
<details><summary><b>Metadata json</b></summary><p>
```
"serial_number": "serial_number",
"category": "category",
"station_config": "yolo_coco_nano_with_1_fake_camera",
"cameras": {
"camera_id4": {
"brightness": null,
"exposition": 100,
"position": "back",
"source": "people_dataset"
}
},
"received_time": "2024-04-02 11:22:12",
"inferences": {
"camera_id4": {
"model_id4": {
"object_1": {
"label": "person",
"location": [
0.2731,
0.1679,
0.5308,
0.9438
],
"score": 0.9098637104034424,
"metadata": null
},
"object_2": {
"label": "person",
"location": [
0.1099,
0.351,
0.2252,
0.6945
],
"score": 0.559946596622467,
"metadata": null
}
}
}
},
"decision": "OK",
"state": "Done",
"error": null,
"id": "03a7adc7-59d5-4190-8160-4a71fd07cac5"
```
</p></details>
5 metadata storage systems are implemented in VIO:
- File System Metadata Storage: Saves the metadata in the filesystem under the `VIO/edge_orchestrator/edge_orchestrator/data/storage` folder.
- Memory Metadata Storage: Saves the metadata in memory as a dictionary.
- Azure Container Metadata Storage: Saves the metadata in an Azure Blob Storage container.
- GCP Metadata Storage: Saves the metadata in a Google Cloud Bucket.
- MongoDB Metadata Storage: Saves the metadata in a MongoDB database.
These adapters are implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/metadata_storage` folder and
the base class is defined in `edge_orchestrator/edge_orchestrator/domain/ports/metadata_storage.py`.
### Model forward adapter
The model forward adapter is responsible for the model inference: it performs the inference with the required pre- and
post-processing. 5 model forward systems are implemented in VIO:
- Fake Model Forward: Returns a random inference result.
- TF Serving Wrapper: Redirects the prediction task to one of the 3 following TensorFlow model forwarders.
- TF Serving Detection Wrapper: Performs the inference with a detection model.
- TF Serving Classification Wrapper: Performs the inference with a classification model.
- TF Serving Detection and Classification Wrapper: Performs the inference with a detection and classification model.

These adapters are implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/model_forward` folder and
the base class is defined in `edge_orchestrator/edge_orchestrator/domain/ports/model_forward.py`.
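
The wrapper's dispatch can be pictured as routing on the model's `category` from the configuration (a simplified sketch — the placeholder wrappers stand in for the real ones, which call TF Serving with the required pre- and post-processing):

```python
# placeholder wrappers -- the real ones call TF Serving with pre/post-processing
def detection_wrapper(model_config, image):
    return {"detection_boxes": [], "detection_classes": [], "detection_scores": []}

def classification_wrapper(model_config, image):
    return {"label": None, "score": None}

def detection_and_classification_wrapper(model_config, image):
    return {}

WRAPPERS = {
    "object_detection": detection_wrapper,
    "classification": classification_wrapper,
    "object_detection_with_classification": detection_and_classification_wrapper,
}

def forward(model_config: dict, image: bytes):
    """Route the prediction to the wrapper matching the model's category."""
    category = model_config["category"]
    if category not in WRAPPERS:
        raise ValueError(f"Unsupported model category: {category}")
    return WRAPPERS[category](model_config, image)
```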
### Station config adapter
Used to store the station configuration settings. One adapter is available, for json configuration files.

This adapter is implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/station_config` folder and the base
StationConfig class is defined in `edge_orchestrator/edge_orchestrator/domain/ports/station_config.py`.
### Telemetry sink adapter
Sends the telemetry data to a sink for further processing and analysis. 3 telemetry sink systems are implemented in VIO:
- Fake Telemetry Sink: Does nothing.
- Azure Telemetry Sink: Sends the telemetry data to an Azure IoT Hub module.
- Postgresql Telemetry Sink: Sends the telemetry data to a Postgresql database.

These adapters are implemented in the `edge_orchestrator/edge_orchestrator/infrastructure/telemetry_sink` folder and
the base class is defined in `edge_orchestrator/edge_orchestrator/domain/ports/telemetry_sink.py`.