Commit cd7f001

EMNLP publishing

spaidataiga committed Oct 11, 2024
1 parent 4811e28 commit cd7f001

Showing 2 changed files with 3 additions and 2 deletions.
4 changes: 2 additions & 2 deletions _posts/2024-07-24-ConfirmationBias.md
@@ -1,6 +1,6 @@
---
layout: post
-title: From Internal Conflict to Contextual Adaptation of Language Models.
+title: DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models
subtitle: SV Marjanović, H Yu, P Atanasova, M Maistro, C Lioma, I Augenstein
# cover-img: /assets/img/path.jpg
# thumbnail-img: /assets/img/thumb.png
@@ -9,4 +9,4 @@ tags: [knowledge conflict, intra-memory conflict, dataset, uncertainty]
# author: Sharon Smith and Barry Simpson
---

-Knowledge-intensive language understanding tasks require Language Models (LMs) to integrate relevant context, mitigating their inherent weaknesses, such as incomplete or outdated knowledge. Nevertheless, studies indicate that LMs often ignore the provided context as it can conflict with the pre-existing LM's memory learned during pre-training. Moreover, conflicting knowledge can already be present in the LM's parameters, termed intra-memory conflict. Existing works have studied the two types of knowledge conflicts only in isolation. We conjecture that the (degree of) intra-memory conflicts can in turn affect LM's handling of context-memory conflicts. To study this, we introduce the DYNAMICQA dataset, which includes facts with a temporal dynamic nature where a fact can change with a varying time frequency and disputable dynamic facts, which can change depending on the viewpoint. DYNAMICQA is the first to include real-world knowledge conflicts and provide context to study the link between the different types of knowledge conflicts. With the proposed dataset, we assess the use of uncertainty for measuring the intra-memory conflict and introduce a novel Coherent Persuasion (CP) score to evaluate the context's ability to sway LM's semantic output. In this [preprint](https://arxiv.org/abs/2407.17023), our extensive experiments reveal that static facts, which are unlikely to change, are more easily updated with additional context, relative to temporal and disputable facts.
+Knowledge-intensive language understanding tasks require Language Models (LMs) to integrate relevant context, mitigating their inherent weaknesses, such as incomplete or outdated knowledge. However, conflicting knowledge can be present in the LM's parameters, termed intra-memory conflict, which can affect a model's propensity to accept contextual knowledge. To study the effect of intra-memory conflict on an LM's ability to accept relevant context, we utilize two knowledge conflict measures and a novel dataset containing inherently conflicting data, DynamicQA. This dataset includes facts with a temporal dynamic nature where facts can change over time and disputable dynamic facts, which can change depending on the viewpoint. DynamicQA is the first to include real-world knowledge conflicts and provide context to study the link between the different types of knowledge conflicts. We also evaluate several measures on their ability to reflect the presence of intra-memory conflict: semantic entropy and a novel coherent persuasion score. With our extensive experiments, in this [EMNLP 2024 paper](https://arxiv.org/abs/2407.17023), we verify that LMs exhibit a greater degree of intra-memory conflict with dynamic facts compared to facts that have a single truth value. Furthermore, we reveal that facts with intra-memory conflict are harder to update with context, suggesting that retrieval-augmented generation will struggle with the most commonly adapted facts.
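
For intuition on the semantic entropy measure named in the abstract, here is a minimal sketch, assuming several answers are sampled per question and clustered by meaning; the function `semantic_entropy`, the helper `same_meaning`, and the toy data are illustrative assumptions, not the DynamicQA implementation:

```python
# A minimal sketch (not the paper's code) of a semantic-entropy-style
# uncertainty measure: sample several answers to one question, cluster
# answers that mean the same thing, and take the entropy over clusters.
# `same_meaning` is a hypothetical, caller-supplied equivalence check
# (in practice, e.g., bidirectional NLI entailment between answers).
import math
from collections import Counter

def semantic_entropy(answers, same_meaning):
    """Entropy over clusters of semantically equivalent sampled answers."""
    reps = []    # one representative answer per meaning cluster
    labels = []  # cluster index assigned to each sampled answer
    for ans in answers:
        for i, rep in enumerate(reps):
            if same_meaning(ans, rep):
                labels.append(i)
                break
        else:
            reps.append(ans)
            labels.append(len(reps) - 1)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

# Toy usage: three samples agree, one diverges -> nonzero entropy,
# hinting at conflicting parametric knowledge for this question.
samples = ["Paris", "Paris is the capital", "Paris", "Lyon"]
print(semantic_entropy(samples, lambda a, b: ("Paris" in a) == ("Paris" in b)))
```

Higher entropy means the sampled answers split across more meaning clusters, which is one way uncertainty can surface an intra-memory conflict.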
1 change: 1 addition & 0 deletions other.md
@@ -4,6 +4,7 @@ title: News
subtitle: Recent attention on work that I have produced
---
## 2024
+* My paper "DYNAMICQA: Tracing Internal Knowledge Conflicts in Language Models" has been accepted to EMNLP Findings 2024 in Miami, USA.
* I've been invited to present at the *Implicit Biases in Humans and Machines* Workshop at DTU.
* I attended ACL 2024 in Bangkok, Thailand to present my paper "Investigating the Impact of Model Instability on Explanations and Uncertainty."
* I was a student volunteer at LREC-COLING 2024 in Torino, Italy.
