Add new paper: Understanding Knowledge Hijack Mechanism in In-context Learning through Associative Memory #37

Closed
wyzh0912 opened this issue Dec 20, 2024 · 0 comments

Comments

@wyzh0912
Contributor

Title: Understanding Knowledge Hijack Mechanism in In-context Learning through Associative Memory
Head: Induction Head
Published: arXiv
Summary:

  • Innovation: Investigated how a transformer prioritizes in-context knowledge versus global (pretrained) knowledge when generating outputs, using both theoretical analysis and experimental evaluation.
  • Tasks: Analyzed two key aspects of induction heads: the impact of positional encoding on whether in-context knowledge is overlooked, and the respective contributions of in-context information and pretrained knowledge during inference.
  • Significant Result: An induction head learned by a transformer with relative positional encoding (RPE) can avoid overlooking in-context knowledge, within the framework of associative memory (a toy sketch of this associative-memory view follows below).
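As a rough illustration of the associative-memory view referenced above, the sketch below stores in-context "token → next token" pairs as a sum of outer products and retrieves the stored continuation for a repeated token, which is the copy behavior usually attributed to induction heads. This is a minimal sketch, not the paper's construction; the embedding dimension, the random near-orthogonal token embeddings, and the retrieval rule are illustrative assumptions.

```python
# Toy associative-memory lookup (illustrative; not the paper's construction).
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension (assumed for illustration)
vocab = ["A", "B", "C", "D"]
# Random embeddings are near-orthogonal in high dimension, so they can serve as keys.
emb = {t: rng.standard_normal(d) / np.sqrt(d) for t in vocab}

# Store in-context "token -> next token" pairs as a sum of outer products:
# W = sum_i value_i key_i^T, so W @ key_i ~ value_i when keys are near-orthogonal.
context_pairs = [("A", "B"), ("C", "D")]
W = sum(np.outer(emb[nxt], emb[cur]) for cur, nxt in context_pairs)

# Induction-head-style retrieval: query with a repeated token ("A") and recover
# the token that followed it earlier in the context ("B").
retrieved = W @ emb["A"]
scores = {t: float(retrieved @ emb[t]) for t in vocab}
print(max(scores, key=scores.get))  # expected: "B"
```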
@fan2goa1 closed this as completed Jan 7, 2025