
Unable to add and retrieve memory #1971

Open
JadeVexo opened this issue Oct 17, 2024 · 14 comments
Assignees: Dev-Khant
Labels: bug (Something isn't working)

Comments

@JadeVexo

🐛 Describe the bug

This is the code that I am running.

from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add a memory
m.add("I'm visiting Paris", user_id="john")

# Retrieve memories
memories = m.get_all()
print(memories)

But the output is like this:

{'results': []}
@zinyando

Not everything is saved to memory; in a chat conversation you might need a few back-and-forth exchanges before anything is extracted and saved.

See a similar issue #1970
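
For reference, mem0 extracts facts from conversational messages rather than storing every input verbatim, so a multi-turn exchange gives the extraction LLM more to work with. A minimal sketch of what that might look like (reusing the m instance from the config above; the message contents are made up for illustration):

messages = [
    {"role": "user", "content": "Hi, I'm planning a trip next month."},
    {"role": "assistant", "content": "Sounds fun! Where are you headed?"},
    {"role": "user", "content": "I'm visiting Paris, and I love art museums."},
]

# add() also accepts a list of role/content message dicts instead of a string
m.add(messages, user_id="john")

print(m.get_all(user_id="john"))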

@JadeVexo
Author

Not everything is saved to memory; in a chat conversation you might need a few back-and-forth exchanges before anything is extracted and saved.

See a similar issue #1970

I tried this

from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
        "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1"
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add a memory
m.add("I'm visiting Paris", user_id="john")
m.add("I'm visiting lebanon", user_id="john")
m.add("I'm visiting China", user_id="john")
m.add("I'm visiting India", user_id="john")
m.add("I'm visiting Japan", user_id="john")
m.add("I'm visiting USA", user_id="john")
m.add("I'm visiting Canada", user_id="john")
m.add("I'm visiting Mexico", user_id="john")
m.add("I'm visiting London", user_id="john")

# Retrieve memories
memories = m.get_all()
print(memories)

I still get the same result. I don't understand what you mean by a chat conversation.

@zinyando

Try adding the user_id to the get_all() call.

...

memories = m.get_all(user_id="john", output_format="v1.0")
print(memories)

@JadeVexo
Author

 memories = m.get_all(user_id="john", output_format="v1.0")
TypeError: Memory.get_all() got an unexpected keyword argument 'output_format'

I just did

# Retrieve memories
memories = m.get_all(user_id="john")
print(memories)

and I get the same result. What am I doing wrong?

@deshraj
Collaborator

deshraj commented Oct 17, 2024

Thanks for reporting the issue, @JadeVexo. We will look into it ASAP.

deshraj assigned Dev-Khant and unassigned prateekchhikara on Oct 17, 2024
@JadeVexo
Author

@Dev-Khant Any idea why the issue appears?

@Dev-Khant
Collaborator

Dev-Khant commented Oct 19, 2024

Hey @JadeVexo, I'm checking the issue and will share an update here soon. Also, could you please let us know the size of the Llama model you are using?

@ketangangal
Contributor

Interesting issue!

@spike-spiegel-21
Collaborator

This is the same issue I faced initially. I was using ollama run llama3.1 (8B parameters) with nomic-embed-text:latest (v1.5). It looks like the model is not able to process the prompts. The issue does not occur with the OpenAI APIs.
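
One way to confirm it's a model-capability problem rather than a mem0 bug is to swap only the LLM provider to OpenAI while keeping the local embedder. A rough sketch, assuming an OPENAI_API_KEY is set in the environment (the model name is just an example):

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",  # example; any capable OpenAI model should do
            "temperature": 0,
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}

If memories show up with this config but not with the 8B Llama model, the extraction prompt is simply too hard for the smaller model.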

@JadeVexo
Author

@spike-spiegel-21 I do want to run these using Ollama locally. Would this work if I use a more powerful model?

@spike-spiegel-21
Collaborator

spike-spiegel-21 commented Oct 21, 2024

@spike-spiegel-21 I do want to run these using Ollama locally. Would this work if I use a more powerful model?

Ideally it should process the prompts and work fine with a more powerful model. Unfortunately I don't have the GPU to run one :(

@JadeVexo
Author

JadeVexo commented Oct 21, 2024

I've tried using a more powerful model, @spike-spiegel-21, with this code and I get an error:

from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:70b",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
        "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1"
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add a memory
m.add("I'm visiting London", user_id="john")

# Retrieve memories
memories = m.get_all(user_id="john")
print(memories)

I think it has something to do with the embedder, so I tried using different embedders, but it doesn't seem to be working.

Traceback (most recent call last):
  File "/mnt/d_disk/ed21b069/Sentient-AI/sentient_ai_neural_engine/mem0/ollama_setup.py", line 28, in <module>
    m.add("I'm visiting London", user_id="john")
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/mem0/memory/main.py", line 109, in add
    vector_store_result = future1.result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/mem0/memory/main.py", line 155, in _add_to_vector_store
    existing_memories = self.vector_store.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/mem0/vector_stores/qdrant.py", line 143, in search
    hits = self.client.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 387, in search
    return self._client.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/qdrant_local.py", line 204, in search
    return collection.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/local_collection.py", line 573, in search
    scores = calculate_distance(query_vector, vectors, distance)
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/distances.py", line 152, in calculate_distance
    return cosine_similarity(query, vectors)
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/distances.py", line 94, in cosine_similarity
    return np.dot(vectors, query)
ValueError: shapes (0,1536) and (1024,) not aligned: 1536 (dim 1) != 1024 (dim 0)

@parshvadaftari
Contributor

I've tried using a more powerful model, @spike-spiegel-21, with this code and I get an error:

...

ValueError: shapes (0,1536) and (1024,) not aligned: 1536 (dim 1) != 1024 (dim 0)

@JadeVexo If you change the embedder, its default dimensions change as well. What happened here is that the existing vector store collection was created with 1536 dimensions, while your current embedder produces 1024-dimensional vectors, so the two don't match. Either configure the vector store's dimensions to match your embedder, or delete the existing storage (a local Qdrant collection, per the traceback) or switch to a new collection name so it gets recreated.
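
For the first option, something like the following should work. This is a sketch that assumes mem0's embedding_model_dims option on the Qdrant vector store config (the 1536 in the error is likely mem0's default for that setting), and assumes nomic-embed-text's 768-dimensional output (snowflake-arctic-embed produces 1024-dimensional vectors, which would explain the 1024 in the traceback):

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            # fresh collection name so the old 1536-dim collection isn't reused
            "collection_name": "mem0_nomic",
            # must match the embedder's output size: 768 for nomic-embed-text
            "embedding_model_dims": 768,
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    # ... llm config as before ...
    "version": "v1.1",
}

Alternatively, delete the on-disk Qdrant storage directory that mem0 created (its location depends on your setup) so the collection is recreated with the new dimensions.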

@Dev-Khant
Collaborator

@spike-spiegel-21 I do want to run these using Ollama locally. Would this work if I use a more powerful model?

No, it should work with smaller models as well. We are working on updating the prompt and will post an update here once it's fixed.

Dev-Khant added the bug (Something isn't working) label on Nov 8, 2024