Yolo tflite serving #45

Merged · 34 commits · Apr 2, 2024
631f593
Adding rocket-burn-detection configuration
ThibautLeibel Feb 29, 2024
4541a80
Adding unexpected label rule for camera rules
ThibautLeibel Feb 29, 2024
33ca084
Adding post/pre processing for the yolo model & adding opencv (to be …
ThibautLeibel Feb 29, 2024
e22e0a9
Moving box size adaptation to app vue, scaling to the page's image di…
ThibautLeibel Mar 5, 2024
192a710
Editing file save system to gather captures from same Session/Date/Co…
ThibautLeibel Mar 11, 2024
ccd8e82
Adding a 'Capture Video' button to capture several shots in a row
ThibautLeibel Mar 11, 2024
d9b41be
Removing opencv dependency
ThibautLeibel Mar 13, 2024
5cbfa31
Adding a model_type variable to config, used to separate yolo models …
ThibautLeibel Mar 13, 2024
fc38e77
Adding a burn severity calculation and a global severity parameter
ThibautLeibel Mar 13, 2024
86bb955
Fixing small mistakes & cleaning edge_serving code
ThibautLeibel Mar 14, 2024
6158d2c
Correcting failed unit tests & fixing small issues
ThibautLeibel Mar 15, 2024
230d2fd
Linting files
ThibautLeibel Mar 15, 2024
864bd5b
Linting and Editing burn severity to be a function of darkness only
ThibautLeibel Mar 15, 2024
d82ee63
All detection boxes are normalized coordinates
ThibautLeibel Mar 21, 2024
713a7d1
Updating file system storages and corresponding tests
ThibautLeibel Mar 21, 2024
e082f2f
Adding config name used by the BinaryStorage systems instead of the f…
ThibautLeibel Mar 21, 2024
fe56c0b
Increasing consistency between yolo postprocess & mobilenet, using cl…
ThibautLeibel Mar 21, 2024
3d4ed70
Fixing some unit test, linting
ThibautLeibel Mar 21, 2024
4141736
Fixing tests
ThibautLeibel Mar 22, 2024
30fa36f
Forgotten lint
ThibautLeibel Mar 22, 2024
7c25ab1
Fix the unit test
ThibautLeibel Mar 22, 2024
75739db
Fixing integration test?
ThibautLeibel Mar 22, 2024
eca6ed4
Fixing test for good
ThibautLeibel Mar 22, 2024
1c081b9
Fixing/Adding tests
ThibautLeibel Mar 27, 2024
256a755
Small mistake
ThibautLeibel Mar 27, 2024
da19b13
Renaming detection parameters & nms
ThibautLeibel Mar 28, 2024
20b69b6
lint
ThibautLeibel Mar 28, 2024
9797e54
Revert default changes
ThibautLeibel Mar 28, 2024
0cabdef
lint
ThibautLeibel Mar 28, 2024
8f3ff2b
Editing Metadata storage to add config name to the path
ThibautLeibel Mar 29, 2024
133b1f6
linting
ThibautLeibel Mar 29, 2024
519be66
Adding a test yolo model
ThibautLeibel Apr 2, 2024
ab17339
Removing test that can't be True if docker image not updated
ThibautLeibel Apr 2, 2024
e56f320
correct last functional
ThibautLeibel Apr 2, 2024
54 changes: 36 additions & 18 deletions edge_interface/src/components/Inference.vue
@@ -1,6 +1,6 @@
<template>
<div class="mr-4 container">
<v-btn color="blue-grey" class="ma-2 white--text" @click="trigger">
<v-btn color="blue-grey" class="ma-2 white--text" @click="trigger" @call-trigger="trigger" id="trigger-button">
Trigger
<v-icon right dark>
mdi-cloud-upload
@@ -23,27 +23,32 @@
</p>
<div v-for="(object, index) in predictedItem" :key="index">
<h3>{{ object.cameraId }}</h3>
<div>
<img class="img-responsive" :src="object.image_url" />
<div class="inference-image">
<img class="img-responsive" ref="image" :src="object.image_url" @load="on_image_loaded" />
<div v-for="(inference, model_id) in object.inferences" :key="model_id">
<div v-for="(result, object_id) in inference" :key="object_id">
<div v-if="'location' in result">
<Box
:x-min="result['location'][0]"
:y-min="result['location'][1]"
:x-max="result['location'][2]"
:y-max="result['location'][3]"
/>
<div v-if="inference !== 'NO_DECISION'">
<div v-for="(result, object_id) in inference" :key="object_id">
<div v-if="'location' in result">
<Box
v-if="imgLoaded"
v-bind:x-min="xoffset + result['location'][0] * width"
v-bind:y-min="yoffset + result['location'][1] * height"
v-bind:x-max="xoffset + result['location'][2] * width"
v-bind:y-max="yoffset + result['location'][3] * height"
/>
</div>
</div>
</div>
</div>
</div>
<div v-for="(inference, model_id) in object.inferences" :key="model_id">
<h4>{{ model_id }}</h4>
<div v-for="(result, object_id) in inference" :key="object_id">
<span>{{ object_id }}</span>
<div v-for="(value, key) in result" :key="key">
<span>{{ key }}: {{ value }}</span>
<div v-if="inference !== 'NO_DECISION'">
<div v-for="(result, object_id) in inference" :key="object_id">
<span>{{ object_id }}</span>
<div v-for="(value, key) in result" :key="key">
<span>{{ key }}: {{ value }}</span>
</div>
</div>
</div>
</div>
@@ -68,9 +73,23 @@ export default {
itemId: null,
statusList: null,
state: undefined,
decision: undefined
decision: undefined,
imgLoaded: false,
height: null,
width: null,
xoffset: null,
yoffset: null
}),
methods: {
on_image_loaded() {
const img = this.$refs.image[0]
this.height = img.height
this.width = img.width
this.xoffset = img.offsetLeft
this.yoffset = img.offsetTop
console.log('Image size : ', this.height, this.width)
this.imgLoaded = true
},
getColor(status) {
if (this.statusList[status] > this.statusList[this.state]) {
return 'red'
@@ -139,8 +158,7 @@ export default {
<style lang="scss" scoped>
.result {
display: inline-block;
vertical-align: top;
padding: 0 5rem 0 5rem;
position: relative;
}

.container {
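The `Inference.vue` change above scales the model's normalized `[0, 1]` box coordinates into page pixels using the rendered image's width, height, and offsets. A minimal sketch of that arithmetic (function name and signature are hypothetical, not part of the PR):

```python
def to_pixels(box, width, height, xoffset=0, yoffset=0):
    # box = [x_min, y_min, x_max, y_max] in normalized [0, 1] coordinates,
    # as produced by the detection backend in this PR.
    x1, y1, x2, y2 = box
    return [
        xoffset + x1 * width,
        yoffset + y1 * height,
        xoffset + x2 * width,
        yoffset + y2 * height,
    ]

# A box covering the center-right of a 640x480 image rendered at offset (10, 20)
assert to_pixels([0.25, 0.5, 0.75, 1.0], 640, 480, 10, 20) == [170.0, 260.0, 490.0, 500.0]
```

This mirrors the `xoffset + result['location'][i] * width/height` bindings passed to the `Box` component.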
15 changes: 15 additions & 0 deletions edge_interface/src/views/UploadView.vue
@@ -50,6 +50,9 @@
<v-btn v-if="!isStart" color="error" class="mr-4" @click="onStart">
Start Camera
</v-btn>
<v-btn v-if="isStart" color="error" class="mr-4" @click="onStartVid">
Start Video Capture
</v-btn>
</div>
</div>
</div>
@@ -151,6 +154,18 @@ export default {
this.isStart = true
this.$refs.webcam.start()
},
onStartVid() {
const execTime = 60 * 1000 // 1 minute
const intervalID = setInterval(this.captureTrigger, 2000)
setTimeout(() => {
clearInterval(intervalID)
}, execTime)
},
captureTrigger() {
this.onCapture()
const button = document.getElementById('trigger-button')
button.click()
},
checkDeviceId(device) {
return device.deviceId === this.deviceId
},
Binary file not shown.
@@ -3,6 +3,11 @@

import numpy as np
from fastapi import APIRouter, HTTPException, Request
from tflite_serving.utils.yolo_postprocessing import (
yolo_extract_boxes_information,
non_max_suppression,
compute_severities,
)

JSONObject = Dict[AnyStr, Any]
JSONArray = List[Any]
@@ -44,8 +49,13 @@ async def predict(
input_data = payload[b"inputs"]
input_array = np.array(input_data, dtype=input_dtype)

interpreter.set_tensor(input_details[0]["index"], input_array)
model_type = None
if b"model_type" in payload.keys():
model_type = payload[b"model_type"]
if model_type == "yolo":
input_array /= 255

interpreter.set_tensor(input_details[0]["index"], input_array)
interpreter.invoke()
# Process image and get predictions
prediction = {}
@@ -67,8 +77,32 @@
"detection_boxes": boxes.tolist(),
"detection_classes": classes.tolist(),
"detection_scores": scores.tolist(),
"severities": [None],
}
}
elif model_type == "yolo":
outputs = interpreter.get_tensor(output_details[0]["index"])[0]

# Rotate the tensor
temp_output = []
for i in range(len(outputs[0]), 0, -1):
temp_output.append(list(map(lambda x: x[i - 1], outputs)))
outputs = np.array(temp_output)

# Extracting the boxes information to select only the most relevant ones
boxes, scores, class_ids = yolo_extract_boxes_information(outputs)
boxes, scores, class_ids = non_max_suppression(boxes, scores, class_ids)
severities = compute_severities(input_array[0], boxes)

prediction = {
"outputs": {
"detection_boxes": [boxes],
"detection_classes": [class_ids],
"detection_scores": [scores],
"severities": [severities],
}
}

elif len(output_details) == 1:
scores = interpreter.get_tensor(output_details[0]["index"])
logging.warning(
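The "Rotate the tensor" loop in the route above turns the `(channels, detections)` YOLO output into one row per detection, additionally reversing the row order (which the downstream NMS does not depend on). A standalone sketch of the equivalence on a small stand-in array (real YOLO outputs are much larger, e.g. 84 channels by 8400 detections):

```python
import numpy as np

# Small stand-in for a (channels, num_detections) YOLO output tensor
outputs = np.arange(12, dtype=np.float32).reshape(3, 4)

# The loop from the route handler, reproduced verbatim:
temp_output = []
for i in range(len(outputs[0]), 0, -1):
    temp_output.append(list(map(lambda x: x[i - 1], outputs)))
loop_result = np.array(temp_output)

# Equivalent vectorized form: transpose, then reverse the row order
assert np.array_equal(loop_result, outputs.T[::-1])
```

Since box/score extraction treats detections independently, `outputs.T` alone would also suffice; the reversal is a side effect of iterating the columns backwards.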
@@ -0,0 +1,144 @@
import numpy as np
from typing import List


def yolo_extract_boxes_information(outputs):
rows = outputs.shape[0]

boxes = []
scores = []
class_ids = []

for i in range(rows):
classes_scores = outputs[i][4:]
max_score, max_class_id = max((v, i) for i, v in enumerate(classes_scores))

box = [
float(outputs[i][0] - (0.5 * outputs[i][2])),
float(outputs[i][1] - (0.5 * outputs[i][3])),
float(outputs[i][0] + (0.5 * outputs[i][2])),
float(outputs[i][1] + (0.5 * outputs[i][3])),
]
boxes.append(box)
scores.append(float(max_score))
class_ids.append(float(max_class_id))

return boxes, scores, class_ids


def compute_severities(image: np.ndarray, boxes: List):
severities = []
for box in boxes:
severities.append(compute_box_severity(image, box))
return severities


def compute_box_severity(image: np.ndarray, box: List):
x1_pixel_index = int(box[0] * len(image))
y1_pixel_index = int(box[1] * len(image[0]))
x2_pixel_index = int(box[2] * len(image))
y2_pixel_index = int(box[3] * len(image[0]))

# Reshape to only the pixels in the detection box & as a list of pixels instead of a 2D array of them
image_detection = image[
x1_pixel_index:x2_pixel_index, y1_pixel_index:y2_pixel_index, :
]
image_detection = image_detection.reshape(-1, 3)

# Filtering out light pixels
mask_dark_pixels = np.all(image_detection < 0.5, axis=1)
# Looking at severity as mean darkness
dark_colors = image_detection[mask_dark_pixels].flatten()
if dark_colors.size == 0:
return 0.1
else:
severity = round((0.5 - dark_colors.mean()) * 2, 2)
return severity


def non_max_suppression(
boxes, scores, class_ids, score_threshold=0.4, iou_threshold=0.45
):
non_max_suppression_parameters_checks(score_threshold, iou_threshold)

nms_result_boxes = []
nms_result_scores = []
nms_result_classes = []

# Cut values with low confidence
scores = np.array(scores)
mask_low_confidence = scores > score_threshold
scores = scores[mask_low_confidence].tolist()
boxes = np.array(boxes)[mask_low_confidence].tolist()
class_ids = np.array(class_ids)[mask_low_confidence].tolist()

# Performing the Non-max suppression loop
while len(boxes) != 0:
delete_index_list = []

# Locating & Saving max confidence box
highest_confidence_index = scores.index(max(scores))
highest_confidence_box = boxes[highest_confidence_index]

nms_result_boxes.append(highest_confidence_box)
nms_result_scores.append(scores[highest_confidence_index])
nms_result_classes.append(class_ids[highest_confidence_index])
delete_index_list.append(highest_confidence_index)

# Iterating to analyse the iou scores
for index_box, box in enumerate(boxes):
iou = compute_iou(highest_confidence_box, box)
if iou > iou_threshold:
delete_index_list.append(index_box)

# Rebuild the box list to remove the boxes that were close to the last max score
boxes = [
box
for index_box, box in enumerate(boxes)
if index_box not in delete_index_list
]
scores = [
score
for index_score, score in enumerate(scores)
if index_score not in delete_index_list
]
class_ids = [
class_id
for index_class, class_id in enumerate(class_ids)
if index_class not in delete_index_list
]

return nms_result_boxes, nms_result_scores, nms_result_classes


def non_max_suppression_parameters_checks(conf_thres, iou_thres):
assert (
0 <= conf_thres <= 1
), f"Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0"
assert (
0 <= iou_thres <= 1
), f"Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0"


def compute_iou(box1, box2):
box1_width = box1[2] - box1[0]
box1_height = box1[3] - box1[1]
box2_width = box2[2] - box2[0]
box2_height = box2[3] - box2[1]

# Check if centers of boxes are close enough to be intersected
if (
abs((box1[0] + box1_width / 2) - (box2[0] + box2_width / 2))
< 0.5 * (box2_width + box1_width)
) & (
abs((box1[1] + box1_height / 2) - (box2[1] + box2_height / 2))
< 0.5 * (box2_height + box1_height)
):
# Overlap rectangle: from the larger min corner to the smaller max corner
intersection_area = (min(box1[2], box2[2]) - max(box1[0], box2[0])) * (
min(box1[3], box2[3]) - max(box1[1], box2[1])
)
union_area = (
box1_width * box1_height + box2_width * box2_height - intersection_area
)
return intersection_area / union_area
return 0
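For reference, axis-aligned IoU is conventionally computed from the overlap rectangle, which spans from the larger of the two min corners to the smaller of the two max corners. A minimal self-contained sketch, independent of the helpers above, using the same `[x1, y1, x2, y2]` normalized box convention:

```python
def iou(box1, box2):
    # Boxes are [x1, y1, x2, y2]; intersection is the overlap rectangle,
    # clamped to zero when the boxes do not overlap.
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

assert iou([0, 0, 1, 1], [0, 0, 1, 1]) == 1.0          # identical boxes
assert iou([0, 0, 0.5, 0.5], [0.5, 0.5, 1, 1]) == 0.0  # touching corners only
# Half-width box inside a unit box: intersection 0.5, union 1.0
assert abs(iou([0, 0, 1, 1], [0, 0, 0.5, 1]) - 0.5) < 1e-9
```

The clamp via `max(0.0, ...)` makes the explicit center-distance pre-check in `compute_iou` unnecessary, since non-overlapping boxes yield an intersection of zero directly.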