From ec9b87df679ff06262583d9593402f4b36583991 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Gy=C3=B6rgy=20M=C3=A1rk=20Kis?=
Date: Mon, 8 Nov 2021 12:33:27 +0100
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ceb6b5b..a607838 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@ This code and approach was written and tested on a Hungarian media sentiment cor
 Instead of fine-tuning a BERT model, we extract contextual embeddings from the hidden layers and use those as classical inputs for ML approaches.
 
 ## Results
-The code was benchmarked against a fine-tuned XLM-Roberta on the same corpus, and reached the following topline results (Roberta result in brackets): 8-way sentiment classification weighted F1: 0.65 [0.73], with a range of category-level F1s of 0.35-0.72 [0.51-0.79]; 3-way classification weighted F1: 0.77 [0.82], 0.58-0.82 [0.51-0.87]. The code was run in a Google Colab GPU-supported free notebook.
+The approach was benchmarked against embeddings from a non-fine-tuned XLM-Roberta, Hilbert, and a fine-tuned XLM-Roberta on the same corpus, and reached the following topline results (fine-tuned XLM-Roberta results in brackets): 8-way sentiment classification weighted F1: 0.65 [0.73], with category-level F1s of 0.35-0.72 [0.51-0.79]; 3-way classification weighted F1: 0.77 [0.82], with category-level F1s of 0.58-0.82 [0.51-0.87]. The code was run in a free GPU-backed Google Colab notebook.
 
 ![image](https://user-images.githubusercontent.com/23291101/140734165-1ef1e008-b3f9-4b6d-ba19-0454ecf8d510.png)
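
For context, a minimal sketch of the extract-embeddings-then-classify pipeline the README describes: pull hidden-layer states from a transformer, pool them into fixed-size vectors, and hand those to a classical classifier. The model name `xlm-roberta-base`, mean pooling over the last hidden layer, and `LogisticRegression` are illustrative assumptions, not necessarily the repository's exact configuration.

```python
# Sketch: contextual embeddings as features for a classical ML classifier.
# Model, pooling, and classifier choices here are assumptions for illustration.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_hidden_states=True)
model.eval()

def embed(texts):
    """Mean-pool the last hidden layer into one fixed-size vector per text."""
    feats = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
            hidden = model(**enc).hidden_states[-1]   # shape: (1, seq_len, hidden_dim)
            feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.vstack(feats)

# The pooled vectors are ordinary feature matrices, so any classical model fits.
# train_texts / train_labels / test_texts stand in for your labeled corpus.
X_train = embed(train_texts)
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
preds = clf.predict(embed(test_texts))
```

Because no transformer weights are updated, only a lightweight classifier is trained, which is what makes this approach feasible in a free Colab GPU notebook.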