fixes #51: Update sequence diagram to reflect change in code
fixes #51: Update sequence diagram

fixes #51: Update doc string

fixes #51: update README and TESTING
Maxence Guindon committed Apr 8, 2024
1 parent f33b541 commit 3e41fb5
Showing 4 changed files with 88 additions and 66 deletions.
39 changes: 28 additions & 11 deletions README.md
# nachet-backend

## High level sequence diagram

![SD_1 drawio (2)](https://github.com/ai-cfia/nachet-backend/assets/19809069/272f37dc-f4ec-449b-ba82-950c54b9f856)

### Details
- Inference results from the model endpoint are handled directly in `model_inference/inference.py`

****

### RUNNING NACHET-BACKEND FROM DEVCONTAINER

When you are developing, you can run the program from inside the devcontainer by
using this command:

```bash
hypercorn -b :8080 app:app
```

### RUNNING NACHET-BACKEND AS A DOCKER CONTAINER

If you want to run the program as a Docker container (e.g., for production), use:

```bash
docker build -t nachet-backend .
docker run -p 8080:8080 -v $(pwd):/app nachet-backend
```

### TESTING NACHET-BACKEND

To test the program, use this command:

```bash
python -m unittest discover -s tests
```
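The discover runner collects every `unittest.TestCase` under the `tests` folder whose filename matches `test*.py`. A minimal sketch of such a test (the file name and the test itself are illustrative, not part of the repository):

```python
# tests/test_example.py -- illustrative only; the real suite lives in the
# repository's tests/ folder.
import unittest


class TestExample(unittest.TestCase):
    """A minimal test case that `discover` would pick up."""

    def test_addition(self):
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    unittest.main()
```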

****

### ENVIRONMENT VARIABLES

Start by making a copy of `.env.template` and renaming it `.env`. For the
backend to function, you will need to add the missing values:

- **NACHET_AZURE_STORAGE_CONNECTION_STRING**: Connection string to access
  external storage (Azure Blob Storage).
- **NACHET_MODEL_ENDPOINT_REST_URL**: Endpoint to communicate with the deployed
  model for inferencing.
- **NACHET_MODEL_ENDPOINT_ACCESS_KEY**: Key used when consuming the online endpoint.
- **NACHET_DATA**: URL to access the nachet-data repository.
- **NACHET_SUBSCRIPTION_ID**
- **NACHET_RESOURCE_GROUP**
- **NACHET_WORKSPACE**
- **NACHET_MODEL**
- **NACHET_BLOB_PIPELINE_NAME**
- **NACHET_BLOB_PIPELINE_VERSION**
- **NACHET_BLOB_PIPELINE_DECRYPTION_KEY**
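Since the backend cannot function without these values, a small startup check can surface a missing variable early. A sketch using only the standard library (the helper name is ours, not part of the codebase):

```python
import os

# The variables the backend reads from .env (names taken from the list above).
REQUIRED_VARS = [
    "NACHET_AZURE_STORAGE_CONNECTION_STRING",
    "NACHET_MODEL_ENDPOINT_REST_URL",
    "NACHET_MODEL_ENDPOINT_ACCESS_KEY",
    "NACHET_DATA",
    "NACHET_SUBSCRIPTION_ID",
    "NACHET_RESOURCE_GROUP",
    "NACHET_WORKSPACE",
    "NACHET_MODEL",
    "NACHET_BLOB_PIPELINE_NAME",
    "NACHET_BLOB_PIPELINE_VERSION",
    "NACHET_BLOB_PIPELINE_DECRYPTION_KEY",
]


def missing_variables(environ=os.environ) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]


if __name__ == "__main__":
    missing = missing_variables()
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```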

****

### DEPLOYING NACHET

If you need help deploying Nachet for your own needs, please contact us at
<[email protected]>.
8 changes: 7 additions & 1 deletion TESTING.md
# Testing documentation

To start the automatic test, you can use the following command:

```bash
python -m unittest discover -s tests
```

You also have the option to run the automatic tests in run_test.py or
manually test the functionality with the frontend. [See frontend testing
documentation](https://github.com/ai-cfia/nachet-frontend/blob/main/TESTING.md)

101 changes: 51 additions & 50 deletions docs/nachet-inference-documentation.md
to a model and receive the result.

*Suggestion: we could call the pipeline a method if we don't want to mix terms.*

# Sequence Diagram for inference request 1.2.1

```mermaid
sequenceDiagram
actor Client
participant Frontend
participant Backend
participant Blob storage
participant Model
Backend-)+Backend: run()
Note over Backend,Blob storage: initialisation
Backend-)Backend: before_serving()
Backend-)Backend: get_pipelines()
alt
Backend-)+Blob storage: HTTP POST req.
Blob storage--)-Backend: return pipelines_models.json
else
Backend-)Frontend: error 500 Failed to retrieve data from the repository
end
Note over Backend,Blob storage: end of initialisation
Client->>+Frontend: applicationStart()
Frontend-)Backend: HTTP POST req.
Backend-)Backend: get_pipelines_names()
Backend-)Backend: get_model_endpoints_metadata()
Backend--)Frontend: Pipelines names res.
Note left of Backend: return pipelines names and metadata
Frontend->>Client: application is ready
Client-->>Frontend: client asks for an action from a specific pipeline
Frontend-)Backend: HTTP POST req.
Backend-)Backend: inference_request(pipeline_name, folder_name, container_name, imageDims, image)
alt missing arguments
Backend-)Frontend: Error 400 missing arguments
end
alt wrong pipeline name
Backend-)Frontend: Error 400 wrong pipeline name
end
alt wrong header
Backend-)Frontend: Error 400 wrong header on file
end
Backend-)Backend: mount_container(connection_string(Environment Variable, container_name))
Backend-)+Blob storage: HTTP POST req.
Blob storage--)-Backend: container_client
Backend-)Backend: Generate Hash(image_bytes)
Backend-)Backend: upload_image(container_client, folder_name, image_bytes, hash_value)
Backend-)+Blob storage: HTTP POST req.
Blob storage--)-Backend: blob_name
Backend-)Backend: get_blob(container_client, blob_name)
Backend-)+Blob storage: HTTP POST req.
Blob storage--)-Backend: blob
loop for every model in pipeline
Backend-)Backend: model.entry_function(model, previous_result)
note over Backend, Blob storage: Every model has its own entry_function
Backend-)Backend: request_factory(previous_result, model)
Backend-)Backend: urllib.request.Request(endpoint_url, body, header)
Backend-)+Model: HTTP POST req.
Model--)-Backend: Result res.
alt if model has process_inference_function
Backend-)Backend: model.inference_function(previous_result, result_json)
end
end
note over Backend, Blob storage: End of the loop
par Backend to Frontend
Backend-)Backend: inference.process_inference_results(result_json, imageDims)
Backend--)Frontend: Processed result res.
Frontend--)-Client: display result
and Backend to Blob storage
note over Backend, Blob storage: record the result produced by the model
Backend-)Backend: upload_inference_result(container_client, folder_name, result_json_string, hash_value)
Backend-)-Blob storage: HTTP POST req.
end
```
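The `loop for every model in pipeline` section of the diagram is essentially a fold: each model's `entry_function` receives the previous model's output. A schematic sketch, where the dictionary shape and field names are illustrative rather than the backend's actual data structures:

```python
# Schematic version of the pipeline loop shown in the diagram; the
# "entry_function" key is an assumed shape, not the backend's real model type.
def run_pipeline(pipeline: list, image_bytes):
    """Feed each model's output into the next model's entry_function."""
    previous_result = image_bytes
    for model in pipeline:
        previous_result = model["entry_function"](model, previous_result)
    return previous_result
```

For example, a two-model pipeline (a detector followed by a classifier) would chain the detector's output into the classifier before the combined result is processed and recorded.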

![footer_for_diagram](https://github.com/ai-cfia/nachet-backend/assets/96267006/cf378d6f-5b20-4e1d-8665-2ba65ed54f8e)
6 changes: 2 additions & 4 deletions model_request/model_request.py
async def request_factory(img_bytes: str | bytes, model: namedtuple) -> Request:
"""
Args:
img_bytes (str | bytes): The image data as either a string or bytes.
model (namedtuple): A tuple containing all the information necessary
to get the model inference.
Returns:
Request: The request object for calling the AI model.
"""
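As a rough sketch of what such a factory assembles, assuming the namedtuple exposes `endpoint_url` and `api_key` fields — our guess at the shape for illustration, not the repository's actual definition:

```python
import json
from collections import namedtuple
from urllib.request import Request

# Illustrative shape only; the backend's real model namedtuple may differ.
Model = namedtuple("Model", ["name", "endpoint_url", "api_key"])


def build_request(img_bytes, model: Model) -> Request:
    """Assemble a POST request carrying the image payload for the endpoint."""
    if isinstance(img_bytes, bytes):
        img_bytes = img_bytes.decode("utf-8")
    body = json.dumps({"image": img_bytes}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {model.api_key}",
    }
    # Passing a body to urllib.request.Request makes it a POST by default.
    return Request(model.endpoint_url, body, headers)
```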
