From d78318713db17de0a8501f2b82f22a1fe9e26206 Mon Sep 17 00:00:00 2001
From: Peik Etzell
Date: Sat, 11 Nov 2023 23:41:02 +0200
Subject: [PATCH] More to readme

---
 README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4815bab..8ef319c 100644
--- a/README.md
+++ b/README.md
@@ -45,7 +45,10 @@ We take concerns about social media seriously and design our app in a way that a
 
 We used state-of-the-art computer vision models to extract pose data from video data. With this we can compare user submissions to a reference choreography, and give user scores depending on how accurately they follow the original moves.
 
-We used the YOLOv8n-pose model to extract pose data efficiently.
+We used the _YOLOv8n-pose_ model to extract pose data efficiently on the server after streaming webcam footage from the client. The pose data is compared using _dynamic time warping_, which can alleviate syncing issues between the submission and reference.
+
+The frontend is implemented in Vite, while the backend is a mixture of Go and Python together with Pocketbase as a database.
+
 ## References
 
 
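The _dynamic time warping_ comparison mentioned in the added README text could look roughly like the sketch below. This is illustrative only: the function name, the frame representation (flattened keypoint coordinates per frame), and the Euclidean frame distance are assumptions, not taken from the project's actual scoring code.

```python
import math


def dtw_distance(ref, sub):
    """Classic dynamic-time-warping cost between two pose sequences.

    `ref` and `sub` are lists of per-frame feature vectors (e.g. flattened
    keypoint coordinates). DTW aligns the two sequences non-linearly in
    time, which is what absorbs small syncing offsets between a user's
    submission and the reference choreography.
    """
    n, m = len(ref), len(sub)
    inf = float("inf")
    # dp[i][j] = minimal cost of aligning ref[:i] with sub[:j]
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(ref[i - 1], sub[j - 1])  # per-frame distance
            dp[i][j] = cost + min(
                dp[i - 1][j],      # submission lingers on a frame
                dp[i][j - 1],      # reference lingers on a frame
                dp[i - 1][j - 1],  # frames advance together
            )
    return dp[n][m]
```

A lower cost means the submission tracks the reference more closely; a score could then be derived by normalizing this cost by the alignment length.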