My Bachelor's Thesis: Reviewing the Consistency of Semantical Capabilities of Large Language Models - a Word-in-Context Benchmark Evaluation Framework and Utility Library


Fabbernat/Thesis



You can test the semantic sentence-understanding capabilities of any* Hugging Face model.

src/Framework - The module where it happens


Input:

  • WiC benchmark instances: a target word and two context sentences per instance.

Output:

  • Detailed statistics and analytics of the model's answers to the input.

* Almost any. You need to write your own scripts to test unsupported models. src/Framework has been thoroughly tested on Qwen/Qwen2.5-0.5B-Instruct, though, so this and similar models are guaranteed to work.
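To give an idea of what "testing a model on WiC" involves, here is a minimal sketch of a yes/no prompt for an instruction-tuned model and of mapping its completion back to a binary label. The prompt wording and the helper names (`build_wic_prompt`, `parse_answer`) are my own illustrations, not the framework's actual API.

```python
def build_wic_prompt(target: str, context1: str, context2: str) -> str:
    """Build a yes/no Word-in-Context prompt for an instruction-tuned model."""
    return (
        f'Does the word "{target}" have the same meaning in both sentences?\n'
        f"Sentence 1: {context1}\n"
        f"Sentence 2: {context2}\n"
        "Answer with exactly one word: yes or no."
    )

def parse_answer(completion: str) -> bool:
    """Map a model completion to the WiC binary label (True = same meaning)."""
    return completion.strip().lower().startswith("yes")

prompt = build_wic_prompt(
    "bank",
    "She sat on the bank of the river.",
    "He deposited money at the bank.",
)
print(parse_answer("No, the meanings differ."))  # -> False
```

Such a prompt could then be fed to a model, e.g. via `transformers.pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")`; robust answer parsing (handling refusals, verbose completions, etc.) is where most of the real work lies.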

The paper [outdated]:

Analyzing the Consistency of Semantical Capabilities of Large Language Models

PDF / TeX source:

GitHub

My home page:

Bernát Fábián

Word in Context (WiC) Task

By design, word embeddings are unable to model the dynamic nature of word semantics, i.e., the property of words to correspond to potentially different meanings. To address this limitation, dozens of specialized meaning representation techniques, such as sense or contextualized embeddings, have been proposed. However, despite the popularity of research on this topic, very few evaluation benchmarks exist that specifically focus on the dynamic semantics of words. Pilehvar and Camacho-Collados showed that existing models have surpassed the performance ceiling of the standard evaluation dataset for this purpose, the Stanford Contextual Word Similarity dataset, and highlighted its shortcomings. To address the lack of a suitable benchmark, they put forward a large-scale Word in Context dataset, called WiC, based on annotations curated by experts, for the generic evaluation of context-sensitive representations. WiC is available at https://pilehvar.github.io/wic/.

This repository contains an algorithm that aims for the highest possible accuracy on the WiC binary classification task. Each instance in WiC has a target word w for which two contexts are provided, each invoking a specific meaning of w. The task is to determine whether the occurrences of w in the two contexts share the same meaning, which clearly requires the ability to identify the word's semantic category. The WiC task is defined over supersenses (Pilehvar and Camacho-Collados, 2019): negative examples use a word in two different supersenses, while positive examples use it in the same supersense.
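Concretely, a WiC instance can be represented as a small record. The sketch below assumes the dataset's tab-separated line format (target word, POS tag, token indices, and the two contexts), with gold labels ("T"/"F") stored separately; the class and field names are illustrative, not taken from this repository.

```python
from dataclasses import dataclass

@dataclass
class WiCInstance:
    target: str    # the target word w
    pos: str       # part-of-speech tag of w (noun or verb)
    indices: str   # token positions of w in the two contexts, e.g. "8-9"
    context1: str  # first context sentence
    context2: str  # second context sentence
    label: bool    # True = same meaning (gold label "T")

def parse_line(data_line: str, gold_label: str) -> WiCInstance:
    """Parse one tab-separated WiC data line plus its gold label (T/F)."""
    target, pos, indices, ctx1, ctx2 = data_line.rstrip("\n").split("\t")
    return WiCInstance(target, pos, indices, ctx1, ctx2,
                       gold_label.strip() == "T")

sample = ("bed\tN\t8-9\t"
          "There's a lot of trash on the bed of the river .\t"
          "I keep a glass of water next to my bed when I sleep .")
inst = parse_line(sample, "F")
print(inst.target, inst.label)  # -> bed False
```

A negative instance like this one uses "bed" in two different supersenses (riverbed vs. furniture), which is exactly the distinction the model must recover from context alone.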

How to use the app?

  1. Clone the repo. A Python interpreter is needed; PyCharm is recommended.
  2. Navigate to src/Framework/ModelInputPreparer/main.py and run main() (in PyCharm, just click the green triangle).
  3. Check the results in the .out files.
  4. Do the same with the HuggingFaceModelInferencer and ModelOutputProcessor modules, or simply run src/Framework/globalMain.py to execute all three modules at once.
  5. Check the results in the .out files.
  6. That's it!

Example result:

[screenshot of an example result]
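The "detailed statistics" the framework reports ultimately come down to comparing the model's binary answers with the gold labels. A minimal sketch of that bookkeeping (the function and key names are my own, not the framework's API):

```python
def score(predictions: list[bool], gold: list[bool]) -> dict[str, float]:
    """Compute simple accuracy statistics for WiC binary predictions."""
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return {
        "accuracy": correct / len(gold),
        # Share of "same meaning" answers; a value near 1.0 or 0.0 would
        # suggest the model always gives the same answer regardless of input.
        "predicted_true_rate": sum(predictions) / len(predictions),
    }

stats = score([True, False, True, True], [True, False, False, True])
print(stats["accuracy"])  # -> 0.75
```

Tracking the predicted-true rate alongside accuracy matters on WiC because the dataset is roughly balanced, so a degenerate always-"yes" model would still score about 50% accuracy.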

The Colab Notebook [outdated]:

WiC POS Tagging Word Comparison Notebook

Illustration of results [outdated]:

[image: large language models comparison table]

Usage of the scripts [outdated]:

[screenshots of script usage]
