[Question] Restricting prior internal knowledge for RAG #32
Comments
Hi @JasonIsaac, I am also working with Mistral-7B-Instruct-v0.2 and also have a lot of questions about the prompt format, but I can't find answers — sorry, so I can't answer your question either. Maybe we can discuss the topic a bit more? Do you use the chat template? The problem for me is that this link seems to be the only documentation on how to define a prompt, and it is very sparse: it only says that the prompt format is very important. From another discussion I also learned that newlines matter a lot inside a prompt. Are you using the line breaks in your example deliberately?
Hi @rsoika, thanks for your reply. Yes, I came to know we can use the Llama-2 format, which uses the `[INST] ... [/INST]` instruction markers (see the sketch after this comment).
With the above prompt, the model is able to:
Restricting answers to out-of-scope questions is still an open problem for me. Can you give an example of a complex prompt? Yes, the line breaks are deliberate; I read about that somewhere else as well.
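For reference, here is a minimal sketch of what the Llama-2 style instruction format looks like when built with the `transformers` chat template for Mistral-7B-Instruct-v0.2. The context passage, question, and instruction wording are made-up placeholders, not the prompt actually used in the comments above.

```python
from transformers import AutoTokenizer

# Tokenizer for Mistral-7B-Instruct-v0.2; it ships with a chat template
# that wraps user turns in the Llama-2 style [INST] ... [/INST] markers.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Made-up RAG content: a retrieved context passage plus the user question.
context = "Our return policy allows refunds within 30 days of purchase."
question = "How long do I have to return a product?"

user_message = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)

# Render the final prompt string. For this model it comes out roughly as
# "<s>[INST] Answer the question using only the context below. ... [/INST]"
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": user_message}],
    tokenize=False,
)
print(prompt)
```

Using `apply_chat_template` instead of concatenating the markers by hand avoids subtle spacing and newline mistakes, which seems relevant given the earlier point that whitespace in the prompt matters.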
Hi @JasonIsaac, yes, your prompt template is very interesting too :-) In the beginning I also started with that format. Let me show you my current prompt, which I use to analyze/summarize the content of business documents (e.g. an invoice document).
Have you seen this discussion? It brings up, at least for me, some new insights.
One more question: how long is your final prompt? In my case it can be more than 8 KB.
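On the prompt-length question: it may be worth measuring tokens rather than bytes, since the model's context limit is defined in tokens. A rough sketch, assuming the final prompt is available as a text file (the file name is a placeholder):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Hypothetical final prompt, read from wherever it is assembled.
with open("final_prompt.txt", encoding="utf-8") as f:
    prompt = f.read()

# Count how many tokens the prompt occupies in the model's context window.
token_ids = tokenizer(prompt)["input_ids"]
print(f"{len(prompt)} characters -> {len(token_ids)} tokens")
```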
Hello Everyone,
I am building a RAG application with the Mistral-7B-Instruct-v0.2 model. It works well for questions related to the knowledge content, but I want to prevent the LLM from answering any out-of-scope questions. This is the current prompt I am using:
What is an effective way to stop the LLM from answering questions that are out of scope?
Can we restrict it with a prompt like the one above, or is there a better way? (One possible pattern is sketched after this post.)
Thanks in Advance
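For what it's worth, one common pattern for keeping the model inside the retrieved context is to state the refusal behaviour explicitly in the instruction and give the model a fixed fallback answer to emit. A minimal sketch, assuming the `[INST]` format discussed above; the wording, helper name, context, and fallback sentence are illustrative, not a verified solution:

```python
FALLBACK = "I can only answer questions about the provided documents."

def build_prompt(context: str, question: str) -> str:
    """Build a context-restricted RAG prompt in the [INST] format."""
    return (
        "<s>[INST] You are an assistant that answers strictly from the provided context.\n"
        f'If the answer is not contained in the context, reply exactly with: "{FALLBACK}"\n'
        "Do not use any prior knowledge.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question} [/INST]"
    )

# An out-of-scope question like this one should trigger the fallback answer.
print(build_prompt(
    "Our return policy allows refunds within 30 days of purchase.",
    "Who won the 2018 World Cup?",
))
```

A fixed fallback sentence also makes it easier to detect refusals downstream, since the application can check for that exact string instead of parsing free-form apologies.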