Commit 7747f95

Author: zhanbojun1
Commit message: update news
1 parent 8d5dc4b commit 7747f95

File tree

6 files changed: +69 -0 lines changed


_posts/2025-01-12-cskt.md

Lines changed: 22 additions & 0 deletions
---
layout: post
title: 'csKT: Addressing Cold-start Problem in Knowledge Tracing via Kernel Bias and Cone Attention'
date: 2025-01-12T16:00:00.000+00:00
tags: model
categories: []
author: ''
post_image: "/assets/images/posts/cskt.jpg"
post_format: ''
trending: true

---
We added csKT to our pyKT package.

The model documentation is [here](https://pykt-toolkit.readthedocs.io/en/latest/models.html#cskt) and the API reference is [here](https://pykt-toolkit.readthedocs.io/en/latest/pykt.models.html#module-pykt.models.cskt).

The original paper is [Bai Y, Li X, Liu Z, et al. "csKT: Addressing Cold-start Problem in Knowledge Tracing via Kernel Bias and Cone Attention." Expert Systems with Applications, 2025.](https://www.sciencedirect.com/science/article/pii/S0957417424028550)

Title: csKT: Addressing Cold-start Problem in Knowledge Tracing via Kernel Bias and Cone Attention

Abstract: Knowledge tracing (KT) is the task of predicting students’ future performance based on their past interactions in online learning systems. When new students enter the system with short interaction sequences, the cold-start problem commonly arises in KT. Although existing deep learning based KT models exhibit impressive performance, it remains challenging for these models to be trained on short student interaction sequences while maintaining stable prediction accuracy as the number of student interactions increases. In this paper, we propose cold-start KT (csKT) to address this problem. Specifically, csKT employs kernel bias to guide learning from short sequences and ensure accurate predictions for longer ones, and it introduces cone attention to better capture complex hierarchical relationships between knowledge components in cold-start scenarios. We evaluate csKT on four public real-world educational datasets, where it demonstrates superior performance over a broad range of deep learning based KT models under common evaluation metrics in cold-start scenarios. Additionally, we conduct ablation studies and produce visualizations to verify the effectiveness of csKT.
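To give a concrete feel for the kernel-bias idea, below is a minimal, hypothetical PyTorch sketch, not pyKT's csKT implementation: it adds a distance-decay kernel term to causal attention logits so that predictions on short (cold-start) prefixes lean on nearby interactions. The class name `KernelBiasAttention`, the Gaussian kernel, and the `bandwidth` parameter are illustrative assumptions; cone attention, the paper's other component, is omitted here.

```python
# Hypothetical sketch of kernel-bias attention (illustrative, not pyKT code).
import math
import torch
import torch.nn as nn

class KernelBiasAttention(nn.Module):
    """Single-head causal attention with a Gaussian distance-decay bias."""

    def __init__(self, d_model: int, bandwidth: float = 10.0):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.bandwidth = bandwidth  # assumed hyperparameter, not from the paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), embedded student interactions
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        n = x.size(1)
        pos = torch.arange(n, device=x.device)
        dist = (pos[:, None] - pos[None, :]).abs().float()
        # Kernel bias: logits decay smoothly with distance, steering short
        # cold-start prefixes toward nearby interactions.
        bias = -dist.pow(2) / (2.0 * self.bandwidth ** 2)
        future = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        scores = (scores + bias).masked_fill(future, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

# Example: 8 students, 20 interactions each, 64-dimensional embeddings.
attn = KernelBiasAttention(d_model=64)
out = attn(torch.randn(8, 20, 64))  # shape: (8, 20, 64)
```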

_posts/2025-02-15-fluckt.md

Lines changed: 22 additions & 0 deletions
---
layout: post
title: 'Cognitive Fluctuations Enhanced Attention Network for Knowledge Tracing'
date: 2025-02-15T16:00:00.000+00:00
tags: model
categories: []
author: ''
post_image: "/assets/images/posts/fluckt.png"
post_format: ''
trending: true

---
We added flucKT to our pyKT package.

The model documentation is [here](https://pykt-toolkit.readthedocs.io/en/latest/models.html#fluckt) and the API reference is [here](https://pykt-toolkit.readthedocs.io/en/latest/pykt.models.html#module-pykt.models.fluckt).

The original paper is [Hou M, Li X, Guo T, et al. "Cognitive Fluctuations Enhanced Attention Network for Knowledge Tracing." Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence, 2025.](https://drive.google.com/file/d/1tRW2j5cmjj5asYMwNvd2z6BEB313QcSg/view)

Title: Cognitive Fluctuations Enhanced Attention Network for Knowledge Tracing

Abstract: Knowledge tracing (KT) involves using the historical records of student-learning interactions to anticipate their performance on forthcoming questions. Central to this process is the modeling of human cognition to gain deeper insights into how knowledge is acquired and retained. Human cognition is characterized by two key features: long-term cognitive trends, reflecting the gradual accumulation and stabilization of knowledge over time, and short-term cognitive fluctuations, which arise from transient factors such as forgetting or momentary lapses in attention. Although existing attention-based KT models effectively capture long-term cognitive trends, they often fail to adequately address short-term cognitive fluctuations. These limitations lead to overly smoothed cognitive features and reduced model performance, especially when the test data length exceeds the training data length. To address these problems, we propose FlucKT, a novel short-term cognitive fluctuations enhanced attention network for KT tasks. FlucKT improves the attention mechanism in two ways: first, by using a decomposition-based layer with causal convolution to separate and dynamically reweight long-term and short-term cognitive features; second, by introducing a kernelized bias attention score penalty to enhance focus on short-term fluctuations, improving length generalization capabilities. Our contributions are validated through extensive experiments on three real-world datasets, demonstrating significant improvements in length generalization and prediction performance.
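As a rough illustration of the decomposition idea only, here is a hedged PyTorch sketch, not pyKT's flucKT code: a causal depthwise convolution smooths the hidden sequence into a long-term trend, the residual serves as the short-term fluctuation, and a learned gate reweights the two at every position. The class name `CausalDecomposition`, the kernel size, and the gating scheme are assumptions for illustration; the kernelized bias attention penalty is not shown.

```python
# Hypothetical sketch of trend/fluctuation decomposition (not pyKT's flucKT code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDecomposition(nn.Module):
    """Split a hidden sequence into a long-term trend and a short-term residual."""

    def __init__(self, d_model: int, kernel_size: int = 5):
        super().__init__()
        self.kernel_size = kernel_size  # assumed value, not from the paper
        # Depthwise causal convolution acts as a learnable smoother per channel.
        self.trend_conv = nn.Conv1d(d_model, d_model, kernel_size, groups=d_model)
        self.gate = nn.Linear(2 * d_model, 2)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model), e.g. outputs of an attention layer
        x = h.transpose(1, 2)                       # (batch, d_model, seq_len)
        x = F.pad(x, (self.kernel_size - 1, 0))     # left-pad only => causal
        trend = self.trend_conv(x).transpose(1, 2)  # long-term cognitive trend
        fluct = h - trend                           # short-term fluctuation
        # Dynamically reweight the two components at every position.
        w = torch.softmax(self.gate(torch.cat([trend, fluct], dim=-1)), dim=-1)
        return w[..., :1] * trend + w[..., 1:] * fluct

# Example: decompose a batch of 8 sequences of length 50 with 64-dim features.
layer = CausalDecomposition(d_model=64)
out = layer(torch.randn(8, 50, 64))  # shape: (8, 50, 64)
```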

_posts/2025-02-15-lefokt.md

Lines changed: 25 additions & 0 deletions
---
layout: post
title: 'Rethinking and Improving Student Learning and Forgetting Processes for Attention Based Knowledge Tracing Models'
date: 2025-02-15T16:00:00.000+00:00
tags: model
categories: []
author: ''
post_image: "/assets/images/posts/lefokt.png"
post_format: ''
trending: true

---
We added LefoKT to our pyKT package.

The model documentation is [here](https://pykt-toolkit.readthedocs.io/en/latest/models.html#lefokt) and the API reference is [here](https://pykt-toolkit.readthedocs.io/en/latest/pykt.models.html#module-pykt.models.lefokt).

The original paper is [Bai Y, Li X, Liu Z, et al. "Rethinking and Improving Student Learning and Forgetting Processes for Attention Based Knowledge Tracing Models." Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence, 2025.](https://aaai.org/conference/aaai/aaai-25/)

Title: Rethinking and Improving Student Learning and Forgetting Processes for Attention Based Knowledge Tracing Models

Abstract: Knowledge tracing (KT) models students’ knowledge states and predicts their future performance based on their historical interaction data. However, attention based KT models struggle to accurately capture diverse forgetting behaviors in ever-growing interaction sequences. First, existing models use uniform time decay matrices, conflating forgetting representations with problem relevance. Second, the fixed-length window prediction paradigm fails to model continuous forgetting processes in expanding sequences. To address these challenges, this paper introduces LefoKT, a unified architecture that enhances attention based KT models by incorporating the proposed relative forgetting attention. LefoKT improves forgetting modeling through relative forgetting attention, decoupling forgetting patterns from problem relevance, and enhances attention based KT models’ length extrapolation capability for capturing continuous forgetting processes in ever-growing interaction sequences. Extensive experimental results on three datasets validate the effectiveness of LefoKT.
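For intuition about decoupling forgetting from relevance, the sketch below, a hypothetical PyTorch example rather than pyKT's LefoKT code, adds a per-head learned decay over the relative gap between interactions as a separate additive term next to the content-based relevance logits. The class name, the softplus-parameterized decay, and the single-layer setup are illustrative assumptions.

```python
# Hypothetical sketch of relative forgetting attention (not pyKT's LefoKT code).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeForgettingAttention(nn.Module):
    """Causal attention whose logits add a content-independent forgetting term."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        # One learnable forgetting rate per head, independent of content.
        self.decay = nn.Parameter(torch.rand(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def heads(t: torch.Tensor) -> torch.Tensor:
            return t.view(b, n, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = heads(q), heads(k), heads(v)
        relevance = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        pos = torch.arange(n, device=x.device)
        gap = (pos[:, None] - pos[None, :]).clamp(min=0).float()
        # Forgetting term: older interactions are penalized per head by their
        # relative gap, kept separate from the content-based relevance above.
        forget = -F.softplus(self.decay)[:, None, None] * gap
        future = pos[None, :] > pos[:, None]
        scores = (relevance + forget).masked_fill(future, float("-inf"))
        out = torch.softmax(scores, dim=-1) @ v
        return out.transpose(1, 2).reshape(b, n, self.n_heads * self.d_head)

# Example: 8 students, 50 interactions, 64-dim embeddings, 4 heads.
attn = RelativeForgettingAttention(d_model=64)
out = attn(torch.randn(8, 50, 64))  # shape: (8, 50, 64)
```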

assets/images/posts/cskt.jpg

528 KB

assets/images/posts/fluckt.png

59.5 KB

assets/images/posts/lefokt.png

270 KB
