feat: explore methods for retaining the chat history
cindyli committed Jun 21, 2024
1 parent ead77be commit 50758d2
Showing 5 changed files with 286 additions and 7 deletions.
35 changes: 30 additions & 5 deletions README.md
@@ -58,39 +58,64 @@ with generating new Bliss symbols etc.

### Llama2

- Conclusion: useful
+ **Conclusion**: useful

See the [Llama2FineTuning.md](./docs/Llama2FineTuning.md) in the [documentation](./docs) folder for details
on how to fine-tune, the evaluation results, and the conclusion about how useful it is.

### StyleGAN3

- Conclusion: not useful
+ **Conclusion**: not useful

See the [TrainStyleGAN3Model.md](./docs/TrainStyleGAN3Model.md) in the [documentation](./docs) folder for details
on how to train this model, the training results, and the conclusion about how useful it is.

### StyleGAN2-ADA

- Conclusion: shows promise
+ **Conclusion**: shows promise

See the [StyleGAN2-ADATraining.md](./docs/StyleGAN2-ADATraining.md) in the [documentation](./docs) folder for details
on how to train this model and the training results.

### Textual Inversion

- Conclusion: not useful
+ **Conclusion**: not useful

See the [Textual Inversion documentation](./notebooks/README.md) for details.

## Preserving Information

### RAG (Retrieval-augmented generation)

**Conclusion**: useful

The RAG (Retrieval-augmented generation) technique is explored to resolve ambiguities by retrieving relevant contextual
information from external sources, enabling the language model to generate more accurate and reliable responses.

See [RAG.md](./docs/RAG.md) for more details.

### Reflection over Chat History

**Conclusion**: useful

When users have a back-and-forth conversation, the application requires a form of "memory" to retain and incorporate past interactions into its current processing. Two methods are explored to achieve this:

1. Summarizing the chat history and providing it as contextual input.
2. Using prompt engineering to instruct the language model to consider the past conversation.

The second method, prompt engineering, yields more desirable responses than summarizing the chat history.

See [ReflectChatHistory.md](./docs/ReflectChatHistory.md) for more details.

## Notebooks

The [`/notebooks`](./notebooks/) directory contains all notebooks used for training or fine-tuning various models.
Each notebook usually comes with an accompanying `dockerfile.yml` describing the environment that the notebook was
run in.

## Jobs
- [`/jobs`](./jobs/) directory contains all jobs used for training or fine-tuning various models.
+ [`/jobs`](./jobs/) directory contains all jobs and scripts used for training or fine-tuning various models, as well
+ as other explorations with RAG (Retrieval-augmented generation) and preserving chat history.

## Utility Scripts

6 changes: 4 additions & 2 deletions docs/RAG.md
@@ -7,7 +7,7 @@ training data, potentially leading to factual errors or inconsistencies. Read
[What Is Retrieval-Augmented Generation, aka RAG?](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/)
for more information.

- In a co-design session with an AAC (Augmentative and Alternative Communication)) user, RAG can
+ In a co-design session with an AAC (Augmentative and Alternative Communication) user, RAG can
be particularly useful. When the user expressed a desire to invite "Roy nephew" to her birthday
party, the ambiguity occurred as to whether "Roy" and "nephew" referred to the same person or
different individuals. Traditional language models might interpret this statement inconsistently,
@@ -18,8 +18,10 @@ containing relevant information about the user's family members and their relati
retrieving and incorporating this contextual information into the language model's input, RAG
can disambiguate the user's intent and generate a more accurate response.

- The RAG experiments are located in the `jobs/RAG` directory. It contains these scripts:
+ The RAG experiment is located in the `jobs/RAG` directory. It contains these scripts:

* `requirements.txt`: contains the Python dependencies for setting up the environment to run
  the Python script.
* `rag.py`: uses RAG to address the "Roy nephew" issue described above.
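
For illustration only, here is a minimal, hypothetical sketch of the retrieve-and-inject pattern that `rag.py` applies. It is not the actual script: the knowledge base is a hard-coded list, the retrieval step is a toy keyword match rather than a vector store, and it assumes Ollama is serving the `llama3` model locally.

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Toy knowledge base about the user's family (hypothetical content).
documents = [
    "Roy is the name of the user's nephew.",
    "The user's birthday party is planned for next Saturday.",
]

message = "invite Roy nephew to birthday party"

# Toy retrieval: keep any document that shares a word with the message.
words = set(message.lower().split())
context = "\n".join(doc for doc in documents if words & set(doc.lower().split()))

# Inject the retrieved context into the prompt so the model can disambiguate.
prompt = ChatPromptTemplate.from_template(
    "Use the context to expand the telegraphic message into a full sentence.\n"
    "Context:\n{context}\nMessage: {message}"
)

chain = prompt | ChatOllama(model="llama3") | StrOutputParser()
print(chain.invoke({"context": context, "message": message}))
```

With the family context retrieved, the model can expand "Roy nephew" to "my nephew, Roy" instead of guessing whether the words name one person or two.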

## Run Scripts Locally
77 changes: 77 additions & 0 deletions docs/ReflectChatHistory.md
@@ -0,0 +1,77 @@
# Reflection over Chat History

When users have a back-and-forth conversation, the application requires a form of "memory" to retain and incorporate
past interactions into its current processing. Two methods are explored to achieve this:

1. Summarizing the chat history and providing it as contextual input.
2. Using prompt engineering to instruct the language model to consider the past conversation.

The second method, prompt engineering, yields more desirable responses than summarizing the chat history.

The scripts for this experiment are located in the `jobs/RAG` directory.

## Method 1: Summarizing the Chat History

### Steps

1. Summarize the past conversation and include it in the prompt as contextual information.
2. Include a specified number of the most recent conversation exchanges in the prompt for additional context.
3. Instruct the language model to convert the telegraphic replies from the AAC user into full sentences to continue
the conversation.
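
The following is a condensed, hypothetical sketch of these steps (it assumes Ollama is serving the `llama3` model locally; `chat_history_with_summary.py` implements the full version):

```python
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="llama3")

# Hypothetical split: everything except the last exchange gets summarized.
earlier_chats = [
    "John: Have you heard about the new Italian restaurant downtown?",
    "Elaine: Yes! Sarah said the pasta there is amazing.",
]
recent_chat = "John: Maybe Sarah can bring back some authentic recipes."

# Step 1: summarize the earlier conversation.
summarize = (
    ChatPromptTemplate.from_template("Summarize this chat history:\n{chat_history}")
    | llm | StrOutputParser()
)
summary = summarize.invoke({"chat_history": "\n".join(earlier_chats)})

# Steps 2-3: pass the summary and the recent exchange as context, then convert.
convert = (
    ChatPromptTemplate.from_template(
        "Chat summary:\n{summary}\nMost recent exchange:\n{recent_chat}\n"
        "Convert Elaine's telegraphic reply into full sentences:\n{message}"
    )
    | llm | StrOutputParser()
)
print(convert.invoke({
    "summary": summary,
    "recent_chat": recent_chat,
    "message": "she love cooking like share recipes",
}))
```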

### Result

The conversion process struggles to effectively utilize the provided summary, often resulting in inaccurate full
sentences.

### Scripts

* `requirements.txt`: Lists the Python dependencies needed to set up the environment.
* `chat_history_with_summary.py`: Implements the steps described above and displays the output.

## Method 2: Using Prompt Engineering

### Steps

1. Include the past conversation in the prompt as contextual information.
2. Instruct the language model to reference this context when converting the telegraphic replies from the AAC user
into full sentences to continue the conversation.

### Result

The converted sentences are more accurate and appropriate compared to those generated using Method 1.

### Scripts

* `requirements.txt`: Lists the Python dependencies needed to set up the environment.
* `chat_history_with_prompt.py`: Implements the steps described above and displays the output.

## Run Scripts Locally

### Prerequisites

* [Ollama](https://github.com/ollama/ollama) to run language models locally
* Follow [README](https://github.com/ollama/ollama?tab=readme-ov-file#customize-a-model) to
install and run Ollama on a local computer.
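  * The scripts in this repository set `model = "llama3"`; if that model is not already present locally, `ollama pull llama3` downloads it.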
* If you are currently in an activated virtual environment, deactivate it.

### Create/Activate Virtual Environment
* Go to the RAG scripts directory
- `cd jobs/RAG`

* [Create the virtual environment](https://docs.python.org/3/library/venv.html)
(one time setup):
- `python -m venv .venv`

* Activate (every command-line session):
- Windows: `.\.venv\Scripts\activate`
- Mac/Linux: `source .venv/bin/activate`

* Install Python dependencies (one time setup):
- `pip install -r requirements.txt`

### Run Scripts
* Run `chat_history_with_summary.py` or `chat_history_with_prompt.py`
- `python chat_history_with_summary.py` or `python chat_history_with_prompt.py`
- The last two responses in the execution result show the language model's output
  with and without the contextual information.
72 changes: 72 additions & 0 deletions jobs/RAG/chat_history_with_prompt.py
@@ -0,0 +1,72 @@
# Copyright (c) 2024, Inclusive Design Institute
#
# Licensed under the BSD 3-Clause License. You may not use this file except
# in compliance with this License.
#
# You may obtain a copy of the BSD 3-Clause License at
# https://github.com/inclusive-design/baby-bliss-bot/blob/main/LICENSE

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# from langchain_core.globals import set_debug
# set_debug(True)

# Define the Ollama model to use
model = "llama3"

# Telegraphic reply to be translated
message_to_convert = "she love cooking like share recipes"

# Conversation history
chat_history = [
    "John: Have you heard about the new Italian restaurant downtown?",
    "Elaine: Yes, I did! Sarah mentioned it to me yesterday. She said the pasta there is amazing.",
    "John: I was thinking of going there this weekend. Want to join?",
    "Elaine: That sounds great! Maybe we can invite Sarah too.",
    "John: Good idea. By the way, did you catch the latest episode of that mystery series we were discussing last week?",
    "Elaine: Oh, the one with the detective in New York? Yes, I watched it last night. It was so intense!",
    "John: I know, right? I didn't expect that plot twist at the end. Do you think Sarah has seen it yet?",
    "Elaine: I'm not sure. She was pretty busy with work the last time we talked. We should ask her when we see her at the restaurant.",
    "John: Definitely. Speaking of Sarah, did she tell you about her trip to Italy next month?",
    "Elaine: Yes, she did. She's so excited about it! She's planning to visit a lot of historical sites.",
    "John: I bet she'll have a great time. Maybe she can bring back some authentic Italian recipes for us to try.",
]

# Instantiate the chat model
llm = ChatOllama(model=model)

# Create prompt template
prompt_template_with_context = """
Elaine prefers to talk using telegraphic messages.
Given a chat history and Elaine's latest response, which
might reference context in the chat history, convert
Elaine's response into full sentences. Only respond with the
converted full sentences.
Chat history:
{chat_history}
Elaine's response:
{message_to_convert}
"""

prompt = ChatPromptTemplate.from_template(prompt_template_with_context)

# Using the LangChain Expression Language (LCEL) chain syntax
chain = prompt | llm | StrOutputParser()

print("====== Response without chat history ======")

print(chain.invoke({
    "chat_history": "",
    "message_to_convert": message_to_convert
}) + "\n")

print("====== Response with chat history ======")

print(chain.invoke({
    "chat_history": "\n".join(chat_history),
    "message_to_convert": message_to_convert
}) + "\n")
103 changes: 103 additions & 0 deletions jobs/RAG/chat_history_with_summary.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,103 @@
# Copyright (c) 2024, Inclusive Design Institute
#
# Licensed under the BSD 3-Clause License. You may not use this file except
# in compliance with this License.
#
# You may obtain a copy of the BSD 3-Clause License at
# https://github.com/inclusive-design/baby-bliss-bot/blob/main/LICENSE

from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# from langchain_core.globals import set_debug
# set_debug(True)

# Define the Ollama model to use
model = "llama3"

# Define the number of the most recent chat exchanges to pass in verbatim.
# The chats before these are summarized and passed in as a separate context element.
num_of_recent_chat = 1

# Telegraphic reply to be translated
message_to_convert = "she love cooking like share recipes"

# Chat history
chat_history = [
    "John: Have you heard about the new Italian restaurant downtown?",
    "Elaine: Yes, I did! Sarah mentioned it to me yesterday. She said the pasta there is amazing.",
    "John: I was thinking of going there this weekend. Want to join?",
    "Elaine: That sounds great! Maybe we can invite Sarah too.",
    "John: Good idea. By the way, did you catch the latest episode of that mystery series we were discussing last week?",
    "Elaine: Oh, the one with the detective in New York? Yes, I watched it last night. It was so intense!",
    "John: I know, right? I didn't expect that plot twist at the end. Do you think Sarah has seen it yet?",
    "Elaine: I'm not sure. She was pretty busy with work the last time we talked. We should ask her when we see her at the restaurant.",
    "John: Definitely. Speaking of Sarah, did she tell you about her trip to Italy next month?",
    "Elaine: Yes, she did. She's so excited about it! She's planning to visit a lot of historical sites.",
    "John: I bet she'll have a great time. Maybe she can bring back some authentic Italian recipes for us to try.",
]
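# Buckets for splitting the chat history into the most recent exchanges
# and the earlier exchanges to be summarized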
recent_chat_array = []
earlier_chat_array = []

# 1. Instantiate the chat model and split the chat history
llm = ChatOllama(model=model)

if len(chat_history) > num_of_recent_chat:
    recent_chat_array = chat_history[-num_of_recent_chat:]
    earlier_chat_array = chat_history[:-num_of_recent_chat]
else:
    recent_chat_array = chat_history
    earlier_chat_array = []

# 2. Summarize the earlier chat
# (initialize the summary so the final invocation works even when there is no earlier chat)
summary = ""
if len(earlier_chat_array) > 0:
    summarizer_prompt = ChatPromptTemplate.from_template("Summarize the following chat history. Provide only the summary, without any additional comments or context. \nChat history: {chat_history}")
    chain = summarizer_prompt | llm | StrOutputParser()
    summary = chain.invoke({
        "chat_history": "\n".join(earlier_chat_array)
    })
    print("====== Summary ======")
    print(f"{summary}\n")

# 3. Concatenate the recent chat into a string
recent_chat_string = "\n".join(recent_chat_array)
print("====== Recent Chat ======")
print(f"{recent_chat_string}\n")

# Create prompt template
prompt_template_with_context = """
### Elaine prefers to talk using telegraphic messages. Help convert Elaine's reply to the chat into full sentences in the first person. Only respond with the converted full sentences.
### This is the chat summary:
{summary}
### This is the most recent chat between Elaine and others:
{recent_chat}
### This is Elaine's most recent response to continue the chat. Please convert:
{message_to_convert}
"""

prompt = ChatPromptTemplate.from_template(prompt_template_with_context)

# Using the LangChain Expression Language (LCEL) chain syntax
chain = prompt | llm | StrOutputParser()

print("====== Response without chat history ======")

print(chain.invoke({
    "summary": "",
    "recent_chat": recent_chat_string,
    "message_to_convert": message_to_convert
}) + "\n")

print("====== Response with chat history ======")

print(chain.invoke({
    "summary": summary,
    "recent_chat": recent_chat_string,
    "message_to_convert": message_to_convert
}) + "\n")
