
How can I get the answer without adding memory again? #1976

Open
water-in-stone opened this issue Oct 21, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@water-in-stone

🐛 Describe the bug

import os
from openai import OpenAI
from mem0 import Memory
from extract_info import extract_structured_info
from extract_info import read_file
from extract_info import get_file_address

os.environ["OPENAI_API_KEY"] = "sk-test"

config = {
    "http_client_proxies"
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-large"}},
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "embedding_model_dims": 3072,
        },
    },
    "version": "v1.1",
}


class BrowserAIAgent:
    def __init__(self):
        self.client = OpenAI()
        self.memory = Memory.from_config(config)
        self.messages = [
            {
                "role": "system",
                "content": "You are a personal AI Assistant for Browser.",
            }
        ]

    def ask_question(self, question, user_id):
        # Fetch previous related memories
        previous_memories = self.search_memories(question, user_id=user_id)
        prompt = question
        if previous_memories:
            prompt = f"User input: {question}\n Previous memories: {previous_memories}"
        # enable multiple round chat
        self.messages.append({"role": "user", "content": prompt})

        response = self.client.chat.completions.create(
            model="gpt-4o", messages=self.messages
        )
        answer = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": answer})

        return answer

    def get_memories(self, user_id):
        memories = self.memory.get_all(user_id=user_id)
        return [m["memory"] for m in memories["memories"]]

    def search_memories(self, query, user_id):
        memories = self.memory.search(query, user_id=user_id)
        return [m["memory"] for m in memories["memories"]]

    def add_memory(self, memory, user_id, metadata):
        self.memory.add(memory, user_id=user_id, metadata=metadata)


user_id = "browser"
ai_assistant = BrowserAIAgent()


def add_memories_for_browser():
    add_browser_docs("/Users/browser/docs/document")


def add_browser_docs(directory_path):
    files_address = get_file_address(directory_path)

    for file_address in files_address:
        file_path = os.path.join(directory_path, file_address)
        print("try to analyze ", file_address, ", file path is ", file_path)
        markdown_text = read_file(file_path)
        structured_info = extract_structured_info(markdown_text)

        ai_assistant.add_memory(
            markdown_text, user_id=user_id, metadata=structured_info
        )


def main():
    # I have added some documents for the Browser before, and I don't want to add them again.
    # add_memories_for_browser()

    while True:
        question = input("Question: ")
        if question.lower() in ["q", "exit"]:
            print("Exiting...")
            break

        answer = ai_assistant.ask_question(question, user_id=user_id)
        print(f"Answer: {answer}")
        memories = ai_assistant.get_memories(user_id=user_id)
        print("Memories:")
        for memory in memories:
            print(f"- {memory}")
        print("-----")


if __name__ == "__main__":
    main()

My source code is shown above and is stored in a file named agent.py. When I run python agent.py, I can type a question and get the correct answer. However, when I run agent.py again and ask the same question, I no longer get the correct answer. Is Mem0 only able to store data in memory, and not on disk?

water-in-stone changed the title from “How can I get answer without add memory again?” to “How can I get the answer without adding memory again?” on Oct 21, 2024
@spike-spiegel-21
Collaborator

Hi @water-in-stone, it looks like you are not persisting your vector store. The vector store configuration you are currently using:

"vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "embedding_model_dims": 3072,
        },
    }

will create a temporary collection under tmp/qdrant, which is lost as soon as you close your program agent.py. I recommend running Qdrant in a Docker container using these commands:

docker pull qdrant/qdrant

docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
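
The -v flag mounts ./qdrant_storage from your working directory into the container, so the collection data survives container restarts.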

and then pointing mem0 at the running server with this configuration:

"vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
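
For reference, your full config would then look something like this, keeping your existing collection_name and embedding_model_dims in the Qdrant config:

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-large"}},
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "embedding_model_dims": 3072,
            # point at the Dockerized Qdrant server so memories survive restarts of agent.py
            "host": "localhost",
            "port": 6333,
        },
    },
    "version": "v1.1",
}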

@water-in-stone
Author

Good answer! I will use the configuration you provided.

@water-in-stone
Author

@spike-spiegel-21 For some complicated reasons, I cannot use Docker. How can I use a local Qdrant database without Docker?
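
Would something like this work? I see that qdrant-client has an embedded on-disk mode (QdrantClient(path=...)), so if mem0's Qdrant config accepts a path option, pointing it at a directory outside tmp/qdrant might persist the collection without a server. This is just a guess, not something I have verified:

"vector_store": {
    "provider": "qdrant",
    "config": {
        "collection_name": "test",
        "embedding_model_dims": 3072,
        # assumed option: store the collection on disk in the project folder
        # instead of the default temporary tmp/qdrant location
        "path": "./qdrant_data",
    },
},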
