
How to chat with history? #37

redthing1 opened this issue Apr 6, 2023 · 4 comments

Comments

@redthing1

I want to use this to run a chat session so that future generate calls have the previous context, but I don't want to re-tokenize and re-run the old tokens every time, right? Is there an easy way to do this?

@absadiki
Collaborator

absadiki commented Apr 7, 2023

@redthing1, yes, you can use interactive mode in the generate function; it behaves the same as the main example of llama.cpp. I will add a CLI soon so you can see how it works, and I will make it more convenient!

model.generate("hi", new_text_callback=callback, n_predict=25, interactive=True)
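
For illustration, a minimal multi-turn sketch could look like the following. This is an assumption-laden example, not the library's confirmed API: the import path, constructor argument name, and model path are placeholders; only generate's keyword arguments come from the snippet above.

# Sketch of a multi-turn chat; import path, constructor argument, and model path are assumptions.
from pyllamacpp.model import Model

def callback(text):
    print(text, end="", flush=True)

model = Model(ggml_model="./models/ggml-model-q4_0.bin")  # assumed argument name and path

# With interactive=True the previous context is kept across calls, so the second
# prompt is answered without re-tokenizing and re-running the first one.
model.generate("Hi, my name is Alice.", new_text_callback=callback, n_predict=25, interactive=True)
model.generate("And what is my name?", new_text_callback=callback, n_predict=25, interactive=True)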

@redthing1
Author

Oh perfect, thank you!

@mattorp

mattorp commented Apr 14, 2023

What's the approach for this when using the langchain wrapper?

https://github.com/hwchase17/langchain/blob/master/langchain/llms/gpt4all.py

@absadiki
Collaborator

absadiki commented May 2, 2023

Hi @redthing1,

The history is now kept as long as the model variable is alive (see the Interactive Dialogue example in the README).

You can call model.reset() to reset the history.
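
As a hedged sketch of that behavior, reusing the hypothetical model and callback from the earlier example and assuming the newer default where history persists across generate calls:

# History persists across generate() calls until reset() (assumed newer behavior).
model.generate("My favorite color is blue.", new_text_callback=callback, n_predict=25)
model.generate("What is my favorite color?", new_text_callback=callback, n_predict=25)  # can use the prior turn

model.reset()  # drops the accumulated history
model.generate("What is my favorite color?", new_text_callback=callback, n_predict=25)  # fresh conversation, no prior context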
