Updating documentation 03 24 #46

Merged · 18 commits · Apr 11, 2024
50 changes: 38 additions & 12 deletions README.md
@@ -1,14 +1,18 @@
# VIO - Visual Inspection Orchestrator
<div align="center">
<h1>VIO - Visual Inspection Orchestrator</h1>

![CI edge_orchestrator](https://github.com/octo-technology/VIO/actions/workflows/ci_edge_orchestrator.yml/badge.svg)
![CI edge_interface](https://github.com/octo-technology/VIO/actions/workflows/ci_edge_interface.yml/badge.svg)
![GitHub issues](https://img.shields.io/github/issues/octo-technology/VIO)

Visual Inspection Orchestrator is a modular framework made to ease the deployment of VI use cases.
🎥 Visual Inspection Orchestrator is a modular framework made to ease the deployment of VI use cases 🎥

*Usecase example: Quality check of a product manufactured on an assembly line.*
</div>

VIO full documentation can be found [here](https://octo-technology.github.io/VIO/)
<h1></h1>

## 🏗️ Modular framework

The VIO modules are split between:

@@ -24,31 +28,35 @@ The VIO modules are split between:
- [The hub monitoring](docs/hub_monitoring.md)
- [The hub deployment playbook](docs/hub_deployment.md)

## Requirements
**VIO full documentation can be found [here](https://octo-technology.github.io/VIO/)**

## 🧱 Requirements

- `docker` installed
- `make` installed

## Install the framework
## 🚀 Getting started

### Install the framework

```shell
git clone [email protected]:octo-technology/VIO.git
```

## Run the stack
### Running the stack

To launch the stack you can use the [Makefile](../Makefile) at the root of the repository, which defines the different
targets based on the [docker-compose.yml](../docker-compose.yml):
targets based on the [docker-compose.yml](../docker-compose.yml) as described below, or [run the modules locally]().

### Start vio
#### Start vio

To start all edge services (orchestrator, model-serving, interface, db) with local hub monitoring (grafana):

```shell
make vio-edge-up
```

### Stop vio
#### Stop vio

To stop and delete all running services:

@@ -60,6 +68,8 @@ To check all services are up and running you can run the command `docker ps`

![stack-up-with-docker](docs/images/stack-up-with-docker.png)

### Accessing the services

Once all services are up and running you can access:

- the swagger of the edge orchestrator API (OrchestratorAPI): [http://localhost:8000/docs](http://localhost:8000/docs)
@@ -72,16 +82,32 @@ launch the following actions:

![vio-architecture-stack](docs/images/edge_orchestrator-actions.png)

## Releases
## 🛰️ Technology features
- 🏠 Hosting:
  - ☁️ Hub: cloud possibilities with [Azure](https://portal.azure.com/#home) and [GCP](https://cloud.google.com/)
  - 🛸 Host: on Raspberry Pi devices
  - 🐳 Host: with [Docker](https://www.docker.com/) or locally with Anaconda
- 👮 Fleet management:
  - 📦 Fleet integration/deployment with [Ansible](https://docs.ansible.com/ansible/latest/index.html)
  - 🕵️ Fleet supervision/observability with [Grafana](https://grafana.com/) & [Open-Telemetry](https://opentelemetry.io/docs/)
- ⚡️ Backend API with [FastAPI](https://fastapi.tiangolo.com/)
- 📜 Frontend with [Vue.js](https://fr.vuejs.org/)
- 🏭 Continuous Integration & Continuous Delivery:
  - ♟️ GitHub Actions
  - 📝️ Clean code with [Black](https://black.readthedocs.io/en/stable/index.html) & [Flake8](https://flake8.pycqa.org/en/latest/)
  - ✅ Tested with [Pytest](https://docs.pytest.org/en/8.0.x/)
  - 📈 [Grafana](https://grafana.com/) insight & dashboards

## 🏭 Releases

Build Type | Status | Artifacts
-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------
**Docker images** | [![Status](https://github.com/octo-technology/VIO/actions/workflows/publication_vio_images.yml/badge.svg)](https://github.com/octo-technology/VIO/actions/workflows/publication_vio_images.yml/badge.svg) | [Github registry](https://github.com/orgs/octo-technology/packages)

## License
## 📝 License

VIO is licensed under [Apache 2.0 License](docs/LICENSE.md)

## Contributing
## 🙋 Contributing

Learn more about how to get involved in the [CONTRIBUTING.md](docs/CONTRIBUTING.md) guide
39 changes: 38 additions & 1 deletion docs/CONTRIBUTING.md
@@ -122,7 +122,44 @@ class TestMyFunction:
```
- Don't mistake a stub for a mock. A mock is used to assert that it has been called (see above example). A stub
is used to simulate the returned value.


### Testing and docker images
- In order to run the tests, your docker instance will need to be connected to GitHub, allowing docker to pull the images.
Complete the `VIO/edge_orchestrator/tests/.env` file with your GitHub username and access token, following
[these steps](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic) to make this connection possible.
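For reference, the authentication from the linked steps boils down to a single command; this is a sketch assuming a classic personal access token with the `read:packages` scope stored in `$GITHUB_TOKEN`:

```shell
# Log docker in to the GitHub container registry so it can pull the test images
echo $GITHUB_TOKEN | docker login ghcr.io -u <your-github-username> --password-stdin
```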

⚠ If you are not working with an M1 processor ⚠

You may encounter some deployment problems when starting the Orchestrator's tests.
To resolve them, you will have to build a docker image that fits your system using the Orchestrator's Makefile and
modify the `conftest.py` file to edit the `image_name` field:
```python
# VIO/edge_orchestrator/tests/conftest.py

EDGE_MODEL_SERVING = {
    "image_name": "<your-locally-built-image>",  # replace with the image built for your platform
    "container_volume_path": "/tf_serving",
    "host_volume_path_suffix": "edge_model_serving",
}
```
You will also need to change the `starting_log` parameter and remove the call to `check_image_presence_or_pull_it_from_registry`
in the `containers.py` file:

```python
# VIO/edge_orchestrator/tests/fixtures/containers.py

if tf_serving_host is None or tf_serving_port is None:
    port_to_expose = 8501
    container = TfServingContainer(
        image=image_name,
        port_to_expose=port_to_expose,
        env={"MODEL_NAME": exposed_model_name},
        host_volume_path=host_volume_path,
        container_volume_path=container_volume_path,
    )
    container.start("INFO: Application startup complete.")
```

## Versioning strategy
- Git tutorial:
- [Basic git tutorial](http://rogerdudler.github.io/git-guide/)
57 changes: 57 additions & 0 deletions docs/adding_a_custom_model.md
@@ -0,0 +1,57 @@
# Adding a custom model to VIO

## Model export & VIO Configuration
### Model format
The edge model serving supports models of 3 types: TensorFlow, TensorFlowLite and Torch.

This note presents how to add a custom TensorFlowLite model to VIO. The process is similar for the two other types,
for which you can follow the respective ReadMe files ([Torch serving](../edge_model_serving/torch_serving/README.md) and
[TensorFlow serving](../edge_model_serving/tf_serving/README.md)) and work in their respective edge sub-folder.

Coming soon: Integration with [Hugging Face](https://huggingface.co/)

### Saving the model
The model has to be given to the Edge_serving module. Export your custom model to tflite and store it as
`VIO/edge_model_serving/models/tflite/<model_folder_name>/<model_name>.tflite`. (If needed, add a .txt file with the
labels/class names.)

The Edge_orchestrator has to know about the new model that is available. To do so, complete the inventory file
`VIO/edge_orchestrator/config/inventory.json` under the `models` category with all the information required for your
model type, as sketched below. Note that the model name variable should fit the model folder name. You can refer to [this subsection](edge_orchestrator.md#add-a-new-model).
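As an illustration, a minimal entry for a custom TFLite classification model could look like the following; `my_tflite_model` and all values are placeholders, and the field names follow the model parameters table in [edge_orchestrator.md](edge_orchestrator.md#add-a-new-model):

```json
{
  "models": {
    "my_tflite_model": {
      "category": "classification",
      "version": 1,
      "image_resolution": [224, 224],
      "class_names": ["OK", "KO"]
    }
  }
}
```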


### Creating the configuration files
Now that all the components know about your new model, you will need to create a configuration that uses your custom
model. Create a new JSON file in `VIO/edge_orchestrator/config/station_configs` with any config name. You can follow the
structure of this file described in the [Add a new configuration](edge_orchestrator.md#add-a-new-configuration-) subsection.

## Adapting the code to your model - Optional

There are two layers of post-processing that may need to be edited to integrate your model: at the Edge Serving
inference level and at the Edge Orchestrator reception level.

- Detection model

The implemented methods are designed to support Mobilenet_SSD format, where the output of the model is
`List[List[Boxes], List[Classes], List[Scores]]` and box format is `[ymin, xmin, ymax, xmax]`.

If your custom model doesn't fit this format, you can add custom post-processing methods.

The Edge Serving code calling the model, in `VIO/edge_model_serving/tflite_serving/src/tflite_serving/api_routes.py`, does a first
treatment. Its purpose is to separate the model's output tensor into a dictionary of the final box coordinates, classes and
scores.

The results are then post-processed at the Orchestrator level, in `VIO/edge_orchestrator/edge_orchestrator/infrastructure/
model_forward/tf_serving_detection_wrapper.py`, to filter the detections of the desired classes and convert the box
coordinates to `Box: [xmin, ymin, xmax, ymax]`, then turn the information into a dictionary having a key for each
detection.
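As a rough sketch of that Orchestrator-side step (the function name and signature are illustrative, not the actual ones in `tf_serving_detection_wrapper.py`):

```python
# Illustrative sketch: filter Mobilenet_SSD detections and convert the box order.
def filter_and_convert_detections(boxes, classes, scores, classes_to_detect, objectness_threshold):
    detections = {}
    for index, (box, label, score) in enumerate(zip(boxes, classes, scores)):
        # Keep only the desired classes above the score threshold
        if label not in classes_to_detect or score < objectness_threshold:
            continue
        ymin, xmin, ymax, xmax = box  # Mobilenet_SSD box order
        detections[f"object_{index + 1}"] = {
            "location": [xmin, ymin, xmax, ymax],  # orchestrator box order
            "class": label,
            "score": score,
        }
    return detections
```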

- Classification & other models

The process is exactly the same as for the detection model; the only difference is at the Orchestrator level.
Instead of modifying the `tf_serving_detection_wrapper.py` file, select the file that corresponds to your model,
modifying the `classification` or `detection_and_classification` wrappers. You may also not have to handle box coordinates.




30 changes: 30 additions & 0 deletions docs/edge_orchestrator.md
@@ -201,6 +201,20 @@ Here's a template of a config file.

The comments are only here to guide you; you should delete them in your new JSON config.

Station config: Camera | Description
-----------------------|-----------------------------------------------------------------------
`type` | Camera type, can be `fake`, `pi_camera` or `usb_camera`. `pi_camera` will be used for Raspberry Pi deployment. `usb_camera` is used when it is required to find a camera or webcam connected to the edge. A `fake` camera will not capture images but pick a random .jpg or .png file in the folder pointed to by the `input_images_folder` parameter, which will be located in `edge_orchestrator/data/<input_images_folder>`.
`input_images_folder` | Used with `fake` cameras, the path to the folder from which the pictures are taken.
`position` | Used for metadata
`exposition` | Used for metadata
`models_graph` | Pipeline of models used during inference. Dictionary of models, containing their names, dependencies on other models and all their possible parameters.
`camera_rule` | Dictionary with key `name` containing the rule name and key `parameters` containing the selected rule's inputs

For the item rules, just provide the rule's `name` and `parameters` as a dictionary of the inputs.
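For illustration, a hypothetical camera entry combining the fields above might look like the sketch below; the rule name and all values are placeholders, so mirror an existing file in `station_configs` for the exact structure:

```json
{
  "camera_1": {
    "type": "fake",
    "input_images_folder": "my_images",
    "position": "front",
    "exposition": 100,
    "models_graph": {
      "my_tflite_model": {"depends_on": []}
    },
    "camera_rule": {
      "name": "expected_label_rule",
      "parameters": {"expected_label": ["OK"]}
    }
  }
}
```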




## Add a new model

- All our models are in tflite format. In order to add an already trained model in the `tflite_serving` folder.
@@ -245,6 +259,22 @@ Inside this folder should be the .tflite model and if needed a .txt file with the labels
}
```

Model parameters | Description
------------------------|---------------------------------------------------------------------------
`category` | Model's category, can be `object_detection`, `classification` or `object_detection_with_classification`
`version` | Model's version, used in the API link; should be 1 (currently not used)
`model_type` | Type of model used, `Mobilenet` or `yolo`. Mobilenet models return boxes as `[ymin, xmin, ymax, xmax]` and Yolo as `[x_center, y_center, width, height]`
`image_resolution` | List of ints corresponding to the x,y image size ingested by the model
`depends_on` | Used to design model pipelines, a list of models' names
`class_names` | List of the label names as a list of strings
`class_names_path` | Path to the labels file, which should be located under the `edge_orchestrator/data` folder
`class_to_detect` | List of label names that will be detected (for Mobilenet)
`number_of_boxes` | Unused?
`output: detection_boxes` | For detection models, name which will be given to the predicted boxes
`output: detection_scores` | For detection models, name which will be given to the predicted scores
`output: detection_classes` | For detection models, name which will be given to the predicted classes
`output: detection_metadata` | For detection models, name which will be given to the predicted metadata
`objectness_threshold` | Score threshold under which an object won't be detected
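Putting these parameters together, a hypothetical `inventory.json` entry for a Mobilenet detection model could look like this (all names and values are illustrative):

```json
"my_detection_model": {
  "category": "object_detection",
  "version": 1,
  "model_type": "Mobilenet",
  "image_resolution": [300, 300],
  "depends_on": [],
  "class_names_path": "my_detection_model/labels.txt",
  "class_to_detect": ["person"],
  "objectness_threshold": 0.5,
  "output": {
    "detection_boxes": "boxes",
    "detection_scores": "scores",
    "detection_classes": "classes"
  }
}
```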

## Add new camera rule
In order to make a final decision, i.e. the item rule, we first need camera rules. Each camera gets a rule.
51 changes: 51 additions & 0 deletions docs/running_vio_locally.md
@@ -0,0 +1,51 @@
# Running VIO locally

In order to use VIO locally, we are going to start 3 modules in 3 different terminals. Some of them will need conda installed.
The most direct way to install conda on macOS is via Homebrew:
```
brew update
brew install --cask miniconda
```

## Running the edge model serving
The edge model serving is the module that is going to do the inference computing using the stored models (_it does the_ `.predict()`). It is called
by the edge orchestrator.

You can follow the conda environment installation from the
[edge model serving's ReadMe](../edge_model_serving/tflite_serving/README.md) file. Once that is done, you can start the
server using the make command:

```
make run_tflite_serving
```

## Running the edge orchestrator
The edge orchestrator administrates the configuration, image captures, storage and communication with the edge
models for inference, then applies the business rules.
The following commands will create a package of the orchestrator environment as [described here](edge_orchestrator.md):
```
cd edge_orchestrator
make conda_env
make install
pip install -e .[dev]
```

It is now required to configure the environment for a local run.
In the `VIO/edge_orchestrator/edge_orchestrator/api_config.py` file, set the local profile on line 7:

```
def load_config():
configuration = os.environ.get("API_CONFIG", "local")
```
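Alternatively, since `load_config` reads the `API_CONFIG` environment variable, you can leave the file untouched and export the variable in the terminal that will run the orchestrator:

```shell
export API_CONFIG=local
```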

Now start the server:

```
python -m edge_orchestrator
```
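To verify the orchestrator is up, one option is to query its swagger page, assuming the default port from the README:

```shell
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/docs   # expect 200
```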

## Running the interface
The interface is connected to the edge model serving, facilitating its usage.

You can follow the Edge Interface's [ReadMe file](../edge_interface/README.md) commands to run this part.

2 changes: 1 addition & 1 deletion edge_interface/README.md
@@ -1,14 +1,14 @@
# edge_interface

## Development setup

```
brew update
brew install node
```

## Project setup
```
cd edge_interface
npm install
```

13 changes: 12 additions & 1 deletion edge_model_serving/tflite_serving/README.md
@@ -4,14 +4,25 @@ Expose tensorflow-lite models via a rest API. Currently object, face & scene detection
## Setup
In this process we create a virtual environment (venv), then install tensorflow-lite [as per these instructions](https://www.tensorflow.org/lite/guide/python), which is platform specific, and finally install the remaining requirements. **Note**: on an RPi (only) it is necessary to install pip3, numpy and pillow system-wide.

All instructions for mac:
All instructions for mac using venv:
```
python3 -m venv venv
source venv/bin/activate
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-macosx_10_14_x86_64.whl
pip3 install -r requirements.txt
```

Or using the makefiles to set up a conda env:

```
cd edge_model_serving/tflite_serving

make tflite_serving
conda activate tflite_serving
make install_tflite_mac
pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime
```

## Models
For convenience a couple of models are included in this repo and used by default. A description of each model is included in its directory. Additional models are available [here](https://github.com/google-coral/edgetpu/tree/master/test_data).

4 changes: 4 additions & 0 deletions edge_orchestrator/edge_orchestrator/api_config.py
@@ -28,6 +28,10 @@ def load_config():
from edge_orchestrator.environment.docker import Docker

configuration_class = Docker
elif configuration == "local":
from edge_orchestrator.environment.local import Local

configuration_class = Local
elif configuration == "default":
from edge_orchestrator.environment.default import Default
