Skip to content
/ SAGE Public

The official implementation of our ACL 2025 paper "Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement".

License

Notifications You must be signed in to change notification settings

NJUNLP/SAGE

Folders and files

NameName
Last commit message
Last commit date

Latest commit

ย 

History

14 Commits
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 
ย 

Repository files navigation

SAGE

The data and defense methods used in our ACL 2025 (Findings) paper "Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement".

๐Ÿ“ Abstract

Large Language Models (LLMs) have shown impressive capabilities across various tasks but remain vulnerable to meticulously crafted jailbreak attacks. In this paper, we identify a critical safety gap: while LLMs are adept at detecting jailbreak prompts, they often produce unsafe responses when directly processing these inputs. Inspired by this insight, we propose SAGE (Self-Aware Guard Enhancement), a training-free defense strategy designed to align LLMs' strong safety discrimination performance with their relatively weaker safety generation ability. SAGE consists of two core components: a Discriminative Analysis Module and a Discriminative Response Module, enhancing resilience against sophisticated jailbreak attempts through flexible safety discrimination instructions. Extensive experiments demonstrate SAGE's effectiveness and robustness across various open-source and closed-source LLMs of different sizes and architectures, achieving an average 99% defense success rate against numerous complex and covert jailbreak methods while maintaining helpfulness on general benchmarks. We further conduct mechanistic interpretability analysis through hidden states and attention distributions, revealing the underlying mechanisms of this detection-generation discrepancy. Our work thus contributes to developing future LLMs with coherent safety awareness and generation behavior.

๐Ÿ“ง Contact

If you have any questions about our work, please feel free to contact us via the following email:

Peng Ding: [email protected]

Jun Kuang: [email protected]

Shujian Huang: [email protected]

๐Ÿ“š Citation

If you find this work useful in your own research, please feel free to leave a starโญ๏ธ and cite our paper:

@article{ding2025not,
  title={Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement},
  author={Ding, Peng and Kuang, Jun and Wang, Zongyu and Cao, Xuezhi and Cai, Xunliang and Chen, Jiajun and Huang, Shujian},
  journal={arXiv preprint arXiv:2505.12060},
  year={2025}
}

About

The official implementation of our ACL 2025 paper "Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement".

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Languages