Add a small content warning to the end of AI chats to discourage users from taking AI responses at face value.
Claude has this "Claude can make mistakes" line.
I want something like this, but I also want to convey confidence in our model (well, our prompt actually, but we're abstracting that). And I'm not so worried about hallucinations or straight-up errors - I'm worried about style, about subtle side effects, about whether the user has really engaged with the problem. Because the hardest part of most software is the design, not the syntax.
Note that the warning should only appear on the latest chat, so it's sort of pinned to the last reply.
This was pencilled in here: https://github.com/OpenFn/lightning/pull/2478/files (search for "Read, review and verify") - you can reuse that template to save time. Although it appears on each message there, it doesn't have the "sticky" behaviour.
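For the "sticky" part, here's a minimal HEEx sketch of one way to do it, assuming a `@messages` assign, a precomputed `@last_id`, and a hypothetical `message_content` component (these names are illustrative, not the actual Lightning template's):

```heex
<%!-- The disclaimer renders only under the latest assistant reply,
      so it "moves down" as new replies arrive instead of repeating
      under every message. --%>
<div :for={message <- @messages} class="chat-message">
  <.message_content message={message} />
  <p
    :if={message.role == :assistant and message.id == @last_id}
    class="mt-1 text-xs text-gray-500"
  >
    Read, review and verify. This response was generated by an algorithm.
  </p>
</div>
```

Computing `@last_id` once in the LiveView (e.g. `assign(socket, :last_id, List.last(messages).id)`) keeps the comparison out of the render loop and makes the pinning explicit.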
Suggestions:

- Read, Review, Verify
- This response was generated by an algorithm (still my favourite)
- AI-generated response
- ?
I'm still not sure about this phrase - "review" and "verify" basically mean the same thing, so I'm still working on the wording. Keen to get the disclaimer added to the assistant ASAP, though.
If someone picks up this ticket, can you google around to see if anyone has written any kind of guide to responsible LLM use, and link it here?
I'm thinking of articles, guidelines, or blog posts that provide gentle guidance to users on how to use LLMs: how to ask the right question, how to consider responses, how to be a little bit skeptical, and how not to take things at face value.
I don't know if any such thing exists, but it would save me a lot of time if it does :)