From 12e6f63a20321831cd5190c36d6cf6ec333fe809 Mon Sep 17 00:00:00 2001
From: sumn2u
Date: Sun, 7 Jan 2024 21:34:24 -0600
Subject: [PATCH] fix image indent

---
 paper/paper.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/paper/paper.md b/paper/paper.md
index da5af52..fb3810f 100644
--- a/paper/paper.md
+++ b/paper/paper.md
@@ -34,18 +34,19 @@ Recent advancements leverage deep learning models to streamline waste sorting an
 Integration of machine learning models with mobile devices presents a promising avenue for precise waste management [@narayan_deepwaste:_2021]. The use of optimized deep learning techniques in an app demonstrates potential, achieving an accuracy of 0.881 in waste classification. However, limitations persist, prompting the introduction of Deep Waste, a mobile app employing computer vision to classify waste into ten types. Using transfer learning [@5288526], Deep Waste attains a remarkable 96.41% precision on the test set, functioning both online and offline.

-The model was trained with Tesla T4 GPU and uses EfficientNetV2 [@tan2021efficientnetv2] model as a base model with addition of agumentation layer. Adam was used as an optmizer with intital learning rate of 0.01. Which was later optmised using [optuna](https://optuna.org/) to create more accurate optimization parameters. The training and validation loss is shown in \autoref{fig:training_vs_val_loss} whereas \autoref{fig:training_vs_val_accuracy} shows training and validation accuracy on the performed experiment[^2].
+The model was trained on a Tesla T4 GPU and uses EfficientNetV2 [@tan2021efficientnetv2] as the base model with an additional augmentation layer. Adam was used as the optimizer with an initial learning rate of 0.01, which was later tuned using [optuna](https://optuna.org/) to find better optimization parameters.
-![Training and Validation loss at different epochs\label{fig:training_vs_val_loss}](training_vs_val_loss.png){width="60%"}
-![Training and Validation accuracy at different epochs\label{fig:training_vs_val_accuracy}](training_vs_val_accuracy.png){width="60%"}
+![Training and Validation loss at different epochs\label{fig:training_vs_val_loss}](training_vs_val_loss.png)
+![Training and Validation accuracy at different epochs\label{fig:training_vs_val_accuracy}](training_vs_val_accuracy.png)
+
 The training and validation loss is shown in \autoref{fig:training_vs_val_loss}, whereas \autoref{fig:training_vs_val_accuracy} shows the training and validation accuracy for the performed [experiment](https://www.kaggle.com/code/sumn2u/garbage-classification-transfer-learning).

 The confusion matrix of the model is shown in \autoref{fig:confusion_matrix}.

-![Confusion Matrix\label{fig:confusion_matrix}](confusion_matrix.png){width="60%"}
-[^2]: [https://www.kaggle.com/code/sumn2u/garbage-classification-transfer-learning](https://www.kaggle.com/code/sumn2u/garbage-classification-transfer-learning).
+![Confusion Matrix\label{fig:confusion_matrix}](confusion_matrix.png)
+
 # Workflow
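The patched paragraph mentions tuning the initial learning rate of 0.01 with Optuna. As a toy, library-free illustration of the underlying idea, the sketch below performs a log-uniform random search over the learning rate, the same search-space shape as Optuna's `trial.suggest_float("lr", low, high, log=True)` (Optuna's TPE sampler is smarter than plain random sampling). The objective function here is a synthetic stand-in, not the paper's EfficientNetV2 model.

```python
import math
import random


def validation_loss(lr):
    """Synthetic stand-in objective with a minimum near lr = 1e-3.

    In the real experiment this would be the validation loss of the
    EfficientNetV2 transfer-learning model after a few training epochs.
    """
    return (math.log10(lr) + 3.0) ** 2


def tune_learning_rate(n_trials=50, low=1e-5, high=1e-1, seed=0):
    """Log-uniform random search over the learning rate.

    Samples uniformly in log10-space so that 1e-5..1e-4 gets as many
    trials as 1e-2..1e-1, which matters for scale-sensitive parameters
    like learning rates.
    """
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(math.log10(low), math.log10(high))
        loss = validation_loss(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss


best_lr, best_loss = tune_learning_rate()
```

With 50 trials the search reliably finds a learning rate whose (synthetic) loss beats the hand-picked 0.01, which mirrors why the authors moved from a fixed initial learning rate to an Optuna-tuned one.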