From 2b32b81ae5a2bdbfb9bd0d44b932547d55adb308 Mon Sep 17 00:00:00 2001
From: whyiug
Date: Fri, 8 Nov 2024 12:44:58 +0800
Subject: [PATCH] [Doc] Update FAQ links in spec_decode.rst (#9662)

Signed-off-by: whyiug
Signed-off-by: Sumit Dubey
---
 docs/source/models/spec_decode.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/models/spec_decode.rst b/docs/source/models/spec_decode.rst
index b02c80aebec69..d57ffec53215d 100644
--- a/docs/source/models/spec_decode.rst
+++ b/docs/source/models/spec_decode.rst
@@ -182,7 +182,7 @@ speculative decoding, breaking down the guarantees into three key areas:
 3. **vLLM Logprob Stability**
    - vLLM does not currently guarantee stable token log probabilities (logprobs). This can result in different outputs for the same
      request across runs. For more details, see the FAQ section
-     titled *Can the output of a prompt vary across runs in vLLM?* in the `FAQs <../serving/faq.rst>`_.
+     titled *Can the output of a prompt vary across runs in vLLM?* in the `FAQs <../serving/faq>`_.
 
 **Conclusion**
 
@@ -197,7 +197,7 @@ can occur due to following factors:
 
 **Mitigation Strategies**
 
-For mitigation strategies, please refer to the FAQ entry *Can the output of a prompt vary across runs in vLLM?* in the `FAQs <../serving/faq.rst>`_.
+For mitigation strategies, please refer to the FAQ entry *Can the output of a prompt vary across runs in vLLM?* in the `FAQs <../serving/faq>`_.
 
 Resources for vLLM contributors
 -------------------------------