
Commit a064cc8 — Merge pull request #1522 from kvcache-ai/Atream-patch-9

Update README with Citation link

2 parents: 44d4265 + 8ef6111

File tree: 1 file changed (+2 −2 lines)


README.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@

 </p>
 <h3>A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations</h3>
-<strong><a href="#show-cases">🌟 Show Cases</a> | <a href="#quick-start">🚀 Quick Start</a> | <a href="#tutorial">📃 Tutorial</a> | <a href="https://github.com/kvcache-ai/ktransformers/discussions">💬 Discussion </a>|<a href="#FAQ"> 🙋 FAQ</a> </strong>
+<strong><a href="#show-cases">🌟 Show Cases</a> | <a href="#quick-start">🚀 Quick Start</a> | <a href="#tutorial">📃 Tutorial</a> | <a href="#Citation">🔥 Citation </a> | <a href="https://github.com/kvcache-ai/ktransformers/discussions">💬 Discussion </a>|<a href="#FAQ"> 🙋 FAQ</a> </strong>
 </div>

 <h2 id="intro">🎉 Introduction</h2>
@@ -185,7 +185,7 @@ You can find example rule templates for optimizing DeepSeek-V2 and Qwen2-57B-A14

 If you are interested in our design principles and the implementation of the injection framework, please refer to the [design document](doc/en/deepseek-v2-injection.md).

-## Citation
+<h2 id="Citation">🔥 Citation</h2>

 If you use KTransformers for your research, please cite our [paper](https://madsys.cs.tsinghua.edu.cn/publication/ktransformers-unleashing-the-full-potential-of-cpu/gpu-hybrid-inference-for-moe-models/):

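Why swap the Markdown heading for raw HTML? GitHub auto-generates anchors for Markdown headings, and those slugs are typically lowercased, so `## Citation` would get the anchor `#citation` rather than the `#Citation` used in the nav bar; an explicit `<h2 id="Citation">` keeps the exact-case id. A minimal sketch of that slug behavior, using a simplified approximation of GitHub's rule (lowercase, strip punctuation, spaces to hyphens — not the exact algorithm):

```python
import re

def github_slug(heading: str) -> str:
    """Simplified GitHub-style heading slug: lowercase the text,
    drop punctuation, turn spaces into hyphens. Approximation only."""
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)  # strip punctuation/emoji
    return slug.replace(" ", "-")

# A Markdown heading's auto-generated anchor is lowercased:
print(github_slug("Citation"))    # citation
print(github_slug("Show Cases"))  # show-cases
# An explicit <h2 id="Citation"> keeps its exact case, so a link to
# "#Citation" relies on the hand-written id, not the auto slug.
```

This is why the commit also adds the `href="#Citation"` entry to the nav line in the same change: the link target and the hand-written id match case-for-case.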
