
AI Assistant: Read, Review and Verify #2492

Open
josephjclark opened this issue Sep 11, 2024 · 2 comments · May be fixed by #2762
josephjclark commented Sep 11, 2024

Add a small content warning to the end of AI chats to discourage users from taking AI responses at face value.

Claude has this "Claude can make mistakes" line:

[Image: screenshot of Claude's "Claude can make mistakes" disclaimer]

I want something like this, but I also want to show confidence in our model (well, in our prompt actually, but we are abstracting that). I'm also not so worried about hallucinations or straight-up errors - I'm worried about style, about subtle side effects, and about whether the user has really engaged with the problem, because the hardest part of most software is the design, rather than the syntax.

Note that the warning should only appear on the latest chat. So it's sort of pinned to the last reply.
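The "pinned to the last reply" behaviour could be sketched like this (a hypothetical TypeScript illustration, not code from the Lightning codebase - the `ChatMessage` shape and `withDisclaimer` helper are invented for the example):

```typescript
// Illustrative only: attach the disclaimer flag to the final assistant
// message, so the warning "sticks" to the bottom of the chat and moves
// when a new reply arrives.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  showDisclaimer?: boolean;
}

function withDisclaimer(messages: ChatMessage[]): ChatMessage[] {
  // Find the index of the last assistant reply; only it gets the warning.
  const lastAssistant = messages.map((m) => m.role).lastIndexOf("assistant");
  return messages.map((m, i) => ({
    ...m,
    showDisclaimer: i === lastAssistant,
  }));
}
```

On re-render after each new reply, only the most recent assistant message carries the flag, so earlier copies of the warning disappear.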

This was penciled in here: https://github.com/OpenFn/lightning/pull/2478/files (search for "Read, review and verify") - you can re-use that template to save time. Note that in that PR the warning appears on every message and doesn't have the "sticky" behaviour.

Suggestions:

  • Read, Review, Verify
  • This response was generated by an algorithm (still my favourite)
  • AI-generated response
  • ?
@github-project-automation github-project-automation bot moved this to New Issues in v2 Sep 11, 2024
josephjclark (author) commented:

I'm still not sure about this phrase - "review" and "verify" basically mean the same thing. I'm still working on it, but I'm keen to get the disclaimer added to the assistant ASAP.

josephjclark (author) commented:

If someone picks up this ticket, could you search around to see if anyone has written any kind of responsible-use guide for LLMs, and link it here?

I'm thinking of articles or guidelines or blog posts that provide gentle guidance to users on how to use LLMs. How to ask the right question, how to consider responses, how to be a little bit skeptical, how to not take things at face value.

I don't know if any such thing exists, but it would save me a lot of time if it does :)

@josephjclark josephjclark moved this from Backlog to Ready in v2 Dec 4, 2024
@elias-ba elias-ba self-assigned this Dec 7, 2024
@elias-ba elias-ba moved this from Ready to In review in v2 Dec 7, 2024
@elias-ba elias-ba linked a pull request Dec 7, 2024 that will close this issue
@theroinaochieng theroinaochieng moved this from In review to Ready in v2 Dec 20, 2024