
Why does MEMO work in LLM unlearning? #1

Closed
Yuda-Jin opened this issue Oct 11, 2024 · 2 comments

Comments

@Yuda-Jin

In my opinion, MEMO only provides a set of inverted facts as training data.

@Carol-gutianle
Owner

Thank you for your attention. MEMO not only provides a set of inverted facts, it also provides memory signals for unlearning. Because too many inverted facts make the unlearned model more inclined toward ordinary wrong answers, we need to find the smallest subset of facts that still satisfies the unlearning requirement, and MEMO provides an anchor point for that search.
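
A minimal sketch of how memory signals could anchor this subset search, assuming a per-fact `memo_score` and a `meets_unlearn_target` check; both are hypothetical placeholders for illustration, not the repository's actual API:

```python
# Hypothetical illustration: greedily grow the set of inverted facts,
# preferring those the model memorizes most strongly (highest MEMO signal),
# and stop as soon as the unlearning criterion is satisfied.

def select_minimal_inverted_facts(inverted_facts, memo_score, meets_unlearn_target):
    """Return a small subset of inverted facts sufficient for unlearning."""
    ranked = sorted(inverted_facts, key=memo_score, reverse=True)
    subset = []
    for fact in ranked:
        subset.append(fact)
        if meets_unlearn_target(subset):  # e.g. forget-set accuracy below a threshold
            return subset
    return subset  # fall back to all facts if the target is never met
```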

@Carol-gutianle
Owner

Carol-gutianle commented Oct 15, 2024

And why do memory signals work? I think models differ in how readily they master new knowledge depending on the level of memory involved. For example, llama2 is more likely to learn inverted facts that carry a high level of memory.
