From 3e41fb5bb1f3287ea0fdad4439a2e286d2749dd7 Mon Sep 17 00:00:00 2001
From: Maxence Guindon
Date: Wed, 28 Feb 2024 11:55:54 -0500
Subject: [PATCH] fixes #51: Update sequence diagram to reflect change in code

fixes #51: Update sequence diagram
fixes #51: Update doc string
fixes #51: update README and TESTING
---
 README.md                              |  39 +++++++---
 TESTING.md                             |   8 +-
 docs/nachet-inference-documentation.md | 101 +++++++++++++------------
 model_request/model_request.py         |   6 +-
 4 files changed, 88 insertions(+), 66 deletions(-)

diff --git a/README.md b/README.md
index 2be756c1..4a9fab5a 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,7 @@
 # nachet-backend
 
 ## High level sequence diagram
+
 ![SD_1 drawio (2)](https://github.com/ai-cfia/nachet-backend/assets/19809069/272f37dc-f4ec-449b-ba82-950c54b9f856)
 
 ### Details
@@ -12,41 +13,57 @@
 - Inference results from model endpoint are directly handled in `model_inference/inference.py`
 
 ****
-
+
 ### RUNNING NACHET-BACKEND FROM DEVCONTAINER
+
 When you are developing, you can run the program while in the devcontainer by using this command:
+
 ```bash
 hypercorn -b :8080 app:app
 ```
 
 ### RUNNING NACHET-BACKEND AS A DOCKER CONTAINER
-If you want to run the program as a Docker container (e.g., for production), use:
+
+If you want to run the program as a Docker container (e.g., for production), use:
+
 ```bash
 docker build -t nachet-backend .
 docker run -p 8080:8080 -v $(pwd):/app nachet-backend
 ```
 
 ### TESTING NACHET-BACKEND
-To test the program, use this command:
+
+To test the program, use this command:
+
 ```bash
 python -m unittest discover -s tests
 ```
 
 ****
+
 ### ENVIRONMENT VARIABLES
+
 Start by making a copy of `.env.template` and renaming it `.env`. For the
 backend to function, you will need to add the missing values:
 
-* **NACHET_AZURE_STORAGE_CONNECTION_STRING**: Connection string to access
-  external storage (Azure Blob Storage).
-* **NACHET_MODEL_ENDPOINT_REST_URL**: Endpoint to communicate with deployed
-  model for inferencing.
-* **NACHET_MODEL_ENDPOINT_ACCESS_KEY**: Key used when consuming online endpoint.
-* **NACHET_DATA**: Url to access nachet-data repository
-* **NACHET_HEALTH_MESSAGE**: Health check message for the server.
+- **NACHET_AZURE_STORAGE_CONNECTION_STRING**: Connection string to access
+  external storage (Azure Blob Storage).
+- **NACHET_MODEL_ENDPOINT_REST_URL**: Endpoint to communicate with the
+  deployed model for inferencing.
+- **NACHET_MODEL_ENDPOINT_ACCESS_KEY**: Key used when consuming the online endpoint.
+- **NACHET_DATA**: URL to access the nachet-data repository.
+- **NACHET_SUBSCRIPTION_ID**
+- **NACHET_RESOURCE_GROUP**
+- **NACHET_WORKSPACE**
+- **NACHET_MODEL**
+- **NACHET_BLOB_PIPELINE_NAME**
+- **NACHET_BLOB_PIPELINE_VERSION**
+- **NACHET_BLOB_PIPELINE_DECRYPTION_KEY**
 
 ****
-### DEPLOYING NACHET
+
+### DEPLOYING NACHET
+
 If you need help deploying Nachet for your own needs, please contact us at
-cfia.ai-ia.acia@inspection.gc.ca.
+<cfia.ai-ia.acia@inspection.gc.ca>.
diff --git a/TESTING.md b/TESTING.md
index bc96f6e8..eebf9bdd 100644
--- a/TESTING.md
+++ b/TESTING.md
@@ -1,6 +1,12 @@
 # Testing documentation
 
-To test the backend, you can either use the automatic test in run_test.py or
+To run the automated tests, you can use the following command:
+
+```bash
+python -m unittest discover -s tests
+```
+
+You also have the option to run the automatic tests in run_test.py or
 manually test the functionality with the frontend.
 
 [See frontend testing documentation](https://github.com/ai-cfia/nachet-frontend/blob/main/TESTING.md)
diff --git a/docs/nachet-inference-documentation.md b/docs/nachet-inference-documentation.md
index 9ef7c70f..da194c06 100644
--- a/docs/nachet-inference-documentation.md
+++ b/docs/nachet-inference-documentation.md
@@ -54,7 +54,7 @@ to a model and receive the result.
 *Suggestion: we could call the pipeline a method if we don't want to mix
 terms.*
 
-# Sequence Diagram for inference request 1.1.0
+# Sequence Diagram for inference request 1.2.1
 
 ```mermaid
 sequenceDiagram
@@ -65,20 +65,21 @@ sequenceDiagram
     participant Blob storage
     participant Model
 
+    Backend-)+Backend: run()
     Note over Backend,Blob storage: initialisation
     Backend-)Backend: before_serving()
-    Backend-)Backend: get_pipelines_models()
+    Backend-)Backend: get_pipelines()
     alt
-        Backend-)Blob storage: HTTP POST req.
-        Blob storage--)Backend: return pipelines_models.json
+        Backend-)+Blob storage: HTTP POST req.
+        Blob storage--)-Backend: return pipelines_models.json
     else
-        Backend-)Frontend: error 400 No pipeline found
+        Backend-)Frontend: error 500 Failed to retrieve data from the repository
     end
     Note over Backend,Blob storage: end of initialisation
-    Client->>Frontend: applicationStart()
+    Client->>+Frontend: applicationStart()
     Frontend-)Backend: HTTP POST req.
-    Backend-)Backend: get_pipelines_names()
+    Backend-)Backend: get_model_endpoints_metadata()
     Backend--)Frontend: Pipelines names res.
     Note left of Backend: return pipelines names and metadata
 
@@ -86,51 +87,51 @@ sequenceDiagram
     Client-->>Frontend: client ask action from specific pipeline
     Frontend-)Backend: HTTP POST req.
     Backend-)Backend: inference_request(pipeline_name, folder_name, container_name, imageDims, image)
-    alt missing argument and image and pipeline validation
-        Backend--)Frontend: Error 400 missing arguments
-        Backend--)Frontend: Error 400 Model not found
-        Backend--)Frontend: Error 400 Invalid image header
-    else no missing argument and validation pass
-        Backend-)Backend: mount_container(connection_string(Environnement Variable, container_name))
-        Backend-)Blob storage: HTTP POST req.
-        Blob storage--)Backend: container_client
-
-        Backend-)Backend: upload_image(container_client, folder_name, image_bytes, hash_value)
-        Backend-)Blob storage: HTTP POST req.
-        Blob storage--)Backend: blob_name
-
-        Backend-)Backend: get_blob(container_client, blob_name)
-        Backend-)Blob storage: HTTP POST req.
-        Blob storage--)Backend: blob
-
-        loop for every model in pipeline
-            Backend-)Backend: model.entry_function(model, previous_result)
-            note over Backend, Blob storage: Every model has is own entry_function
-            Backend-)Backend: request_factory(previous_result, model)
-            Backend-)Backend: urllib.request.Request(endpoint_url, body, header)
-            Backend-)Model: HTTP POST req.
-            Model--)Backend: Result res.
-            alt if model has process_inference_function
-                Backend-) Backend: model.inference_function(previous_result, result_json)
-            end
-            alt next model is not None
-                note over Backend, Blob storage: restart the loop process
-                Backend-)Backend: record_result(model, result)
-                Backend-)Blob storage: HTTP POST req.
-                note over Backend, Blob storage: record the result produced by the model
-
-            end
-        end
-
-        par Backend to Frontend
-            Backend-)Backend: inference.process_inference_results(data, imageDims)
-            Backend--)Frontend: Processed result res.
-        and Backend to Blob storage
-            Backend-)Backend: upload_inference_result(container_client, folder_name, result_json_string, hash_value)
-            Backend-)Blob storage: HTTP POST req.
+    alt missing arguments
+        Backend-)Frontend: Error 400 missing arguments
+    end
+    alt wrong pipeline name
+        Backend-)Frontend: Error 400 wrong pipeline name
+    end
+    alt wrong header
+        Backend-)Frontend: Error 400 wrong header on file
+    end
+
+    Backend-)Backend: mount_container(connection_string(Environment Variable, container_name))
+    Backend-)+Blob storage: HTTP POST req.
+    Blob storage--)-Backend: container_client
+
+    Backend-)Backend: Generate Hash(image_bytes)
+
+    Backend-)Backend: upload_image(container_client, folder_name, image_bytes, hash_value)
+    Backend-)+Blob storage: HTTP POST req.
+    Blob storage--)-Backend: blob_name
+
+    Backend-)Backend: get_blob(container_client, blob_name)
+    Backend-)+Blob storage: HTTP POST req.
+    Blob storage--)-Backend: blob
+
+    loop for every model in pipeline
+        Backend-)Backend: model.entry_function(model, previous_result)
+        note over Backend, Blob storage: Every model has its own entry_function
+        Backend-)Backend: request_factory(previous_result, model)
+        Backend-)Backend: urllib.request.Request(endpoint_url, body, header)
+        Backend-)+Model: HTTP POST req.
+        Model--)-Backend: Result res.
+        alt if model has process_inference_function
+            Backend-)Backend: model.inference_function(previous_result, result_json)
             end
         end
-        Frontend--)Client: display result
+    note over Backend, Blob storage: End of the loop
+    par Backend to Frontend
+        Backend-)Backend: inference.process_inference_results(result_json, imageDims)
+        Backend--)Frontend: Processed result res.
+        Frontend--)-Client: display result
+    and Backend to Blob storage
+        note over Backend, Blob storage: record the result produced by the model
+        Backend-)Backend: upload_inference_result(container_client, folder_name, result_json_string, hash_value)
+        Backend-)-Blob storage: HTTP POST req.
+    end
 ```
 
 ![footer_for_diagram](https://github.com/ai-cfia/nachet-backend/assets/96267006/cf378d6f-5b20-4e1d-8665-2ba65ed54f8e)
diff --git a/model_request/model_request.py b/model_request/model_request.py
index ca73c17c..6ecd6097 100644
--- a/model_request/model_request.py
+++ b/model_request/model_request.py
@@ -6,10 +6,8 @@ async def request_factory(img_bytes: str | bytes, model: namedtuple) -> Request:
     """
     Args:
         img_bytes (str | bytes): The image data as either a string or bytes.
-        endpoint_url (str): The URL of the AI model endpoint.
-        api_key (str): The API key for accessing the AI model.
-        model_name (str): The name of the AI model.
-
+        model (namedtuple): A namedtuple containing all the information
+            necessary to request the model inference.
     Returns:
         Request: The request object for calling the AI model.
     """
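
Read together with the updated docstring, a minimal sketch of a `request_factory` along these lines could look as follows. The `Model` namedtuple fields (`name`, `endpoint`, `api_key`) and the JSON body layout are illustrative assumptions, not the repository's actual definitions; only the `urllib.request.Request(endpoint_url, body, header)` call is taken from the diagram.

```python
# Hypothetical sketch -- the namedtuple fields and body schema are assumed.
import base64
import json
from collections import namedtuple
from urllib.request import Request

# Assumed shape; the real namedtuple in the repository may differ.
Model = namedtuple("Model", ["name", "endpoint", "api_key"])


async def request_factory(img_bytes: str | bytes, model: Model) -> Request:
    """Build the HTTP POST request for one model endpoint."""
    if isinstance(img_bytes, str):
        img_bytes = img_bytes.encode("utf-8")
    # Assumed body layout: a single base64-encoded image.
    body = json.dumps(
        {"input_data": {"data": [base64.b64encode(img_bytes).decode("utf-8")]}}
    ).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {model.api_key}",
    }
    # Mirrors the diagram step: urllib.request.Request(endpoint_url, body, header)
    return Request(model.endpoint, body, headers)
```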
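The diagram's `loop for every model in pipeline` can be read as each model's output feeding the next model's input. A rough sketch under that reading; only `entry_function` and `inference_function` come from the diagram, while the pipeline structure and the awaited calls are assumptions:

```python
# Rough sketch of the inference loop in the sequence diagram above.
async def run_pipeline(pipeline: list, image_bytes: bytes):
    previous_result = image_bytes
    for model in pipeline:
        # Every model has its own entry_function (see the diagram note).
        result_json = await model.entry_function(model, previous_result)
        # Optional per-model post-processing, as in the diagram's alt block.
        if getattr(model, "inference_function", None) is not None:
            result_json = await model.inference_function(previous_result, result_json)
        # The output of one model becomes the input of the next.
        previous_result = result_json
    return previous_result
```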
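Stepping back to the initialisation phase at the top of the diagram, `get_pipelines()` fetches `pipelines_models.json` from blob storage before the app starts serving. A minimal sketch is shown below; the container name, blob name, and JSON keys are assumptions (the real code may differ), and the error message mirrors the diagram's 500 path.

```python
# Hedged sketch of the initialisation step: fetch pipelines_models.json
# from Azure Blob Storage and index the pipelines by name.
import json
import os

from azure.storage.blob import BlobServiceClient


def get_pipelines() -> dict:
    try:
        service = BlobServiceClient.from_connection_string(
            os.environ["NACHET_AZURE_STORAGE_CONNECTION_STRING"]
        )
        blob = service.get_blob_client(
            container="pipelines", blob="pipelines_models.json"
        )
        data = json.loads(blob.download_blob().readall())
        # Index pipelines by name so inference_request can validate pipeline_name.
        return {p["pipeline_name"]: p for p in data.get("pipelines", [])}
    except Exception as exc:
        # Surfaces as "error 500 Failed to retrieve data from the repository".
        raise RuntimeError("Failed to retrieve data from the repository") from exc
```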