The CAPPr library can compute the probability of each candidate completion. Even though these probabilities are not reliable as a measure of an LLM's certainty, I have a strong need to access them.
Currently, we can use this code to get categorical answers:
```python
import outlines

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

prompt = """You are a sentiment-labelling assistant.
Is the following review positive or negative?
Review: This restaurant is just awesome!
"""

generator = outlines.generate.choice(model, ["Positive", "Negative"])
answer = generator(prompt)
print(answer)  # Positive
```
I want this "probability" method:
```python
import outlines

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

prompt = """You are a sentiment-labelling assistant.
Is the following review positive or negative?
Review: This restaurant is just awesome!
"""

generator = outlines.generate.probability(model, ["Positive", "Negative"])
answer = generator(prompt)
print(answer)  # [0.8, 0.2]
```
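For context on what such a method would do internally: each choice can be scored by the sum of the log-probabilities of its tokens conditioned on the prompt (this is essentially what CAPPr computes), and the scores can then be normalized with a softmax so they sum to 1. A minimal sketch of that normalization step, where `log_prob_fn` is a hypothetical stand-in for the model's token-level scoring:

```python
import math

def completion_probabilities(log_prob_fn, prompt, choices):
    """Turn per-choice log-probabilities into a normalized distribution.

    `log_prob_fn(prompt, choice)` is a placeholder for the model call: it
    should return the total log-probability of `choice`'s tokens
    conditioned on `prompt`.
    """
    scores = [log_prob_fn(prompt, choice) for choice in choices]
    m = max(scores)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores standing in for a real model's log-probabilities:
toy_scores = {"Positive": -1.0, "Negative": -2.0}
probs = completion_probabilities(
    lambda p, c: toy_scores[c], "some prompt", ["Positive", "Negative"]
)
print(probs)  # the two values sum to 1, with "Positive" ranked higher
```

With a real model, `log_prob_fn` would run one forward pass over `prompt + choice` and sum the log-softmax values at the choice's token positions.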