Unable to add and retrieve memory #1971
Comments
Not everything is saved to memory, so in a chat conversation you might need a few back-and-forth exchanges before something is saved. See the similar issue #1970.
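To illustrate what "a chat conversation" means here, a minimal sketch of passing a whole exchange instead of isolated statements, assuming `add()` accepts OpenAI-style role/content message lists as mem0's docs describe (the `Memory` instance and Ollama server are not created here):

```python
# A back-and-forth exchange; the LLM extracts facts from the whole
# conversation, which tends to yield more saved memories than a lone line.
messages = [
    {"role": "user", "content": "Hi, I'm planning a trip next month."},
    {"role": "assistant", "content": "Sounds fun! Where are you headed?"},
    {"role": "user", "content": "I'm visiting Paris."},
]

# With a configured Memory instance `m` (requires a running Ollama server):
# m.add(messages, user_id="john")
```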
I tried this:

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add memories
m.add("I'm visiting Paris", user_id="john")
m.add("I'm visiting Lebanon", user_id="john")
m.add("I'm visiting China", user_id="john")
m.add("I'm visiting India", user_id="john")
m.add("I'm visiting Japan", user_id="john")
m.add("I'm visiting USA", user_id="john")
m.add("I'm visiting Canada", user_id="john")
m.add("I'm visiting Mexico", user_id="john")
m.add("I'm visiting London", user_id="john")

# Retrieve memories
memories = m.get_all()
print(memories)
```

I still get the same result. I don't understand what you mean by a chat conversation.
Try adding the `user_id` argument to `get_all()`.
I tried:

```python
memories = m.get_all(user_id="john", output_format="v1.0")
```

but that raises:

```
TypeError: Memory.get_all() got an unexpected keyword argument 'output_format'
```

So I just did:

```python
# Retrieve memories
memories = m.get_all(user_id="john")
print(memories)
```

and I get the same result. What am I doing wrong?
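The `TypeError` above means the installed version simply does not accept that keyword. A small, general-purpose sketch (not mem0 code; `get_all` below is a stand-in) for checking whether a function supports a keyword argument before calling it:

```python
import inspect

def supports_kwarg(func, name):
    """Return True if `func` accepts a keyword argument called `name`."""
    params = inspect.signature(func).parameters
    if name in params:
        return True
    # A **kwargs catch-all also accepts arbitrary keyword arguments.
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

# Stand-in mimicking the installed Memory.get_all signature from the traceback:
def get_all(user_id=None):
    return {"results": []}

print(supports_kwarg(get_all, "user_id"))        # True
print(supports_kwarg(get_all, "output_format"))  # False
```

This avoids relying on a `TypeError` at call time when the installed library version is uncertain.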
Thanks for reporting the issue @JadeVexo. We will look into it ASAP.
@Dev-Khant Any idea why the issue appears?
Hey @JadeVexo, I'm checking the issue and will share an update here soon. Also, could you let us know the size of the Llama model you are using?
Interesting issue!
This is the same issue I faced initially with the model that I was using.
@spike-spiegel-21 I do want to run these using Ollama locally. Would this work if I use a more powerful model?
Ideally it should process the prompts and work fine with a more powerful model. Unfortunately I don't have the GPU to run one. :(
@spike-spiegel-21 I've tried to use a more powerful model with this code and I get an error:

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:70b",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this URL is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # Alternatively, you can use "snowflake-arctic-embed:latest"
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}

# Initialize Memory with the configuration
m = Memory.from_config(config)

# Add a memory
m.add("I'm visiting London", user_id="john")

# Retrieve memories
memories = m.get_all(user_id="john")
print(memories)
```

I think it has something to do with the embedder, so I tried using different embedders, but it doesn't seem to work:

```
Traceback (most recent call last):
  File "/mnt/d_disk/ed21b069/Sentient-AI/sentient_ai_neural_engine/mem0/ollama_setup.py", line 28, in <module>
    m.add("I'm visiting London", user_id="john")
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/mem0/memory/main.py", line 109, in add
    vector_store_result = future1.result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/mem0/memory/main.py", line 155, in _add_to_vector_store
    existing_memories = self.vector_store.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/mem0/vector_stores/qdrant.py", line 143, in search
    hits = self.client.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 387, in search
    return self._client.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/qdrant_local.py", line 204, in search
    return collection.search(
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/local_collection.py", line 573, in search
    scores = calculate_distance(query_vector, vectors, distance)
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/distances.py", line 152, in calculate_distance
    return cosine_similarity(query, vectors)
  File "/mnt/d_disk/ed21b069/sentient_ai_playgrounds/lib/python3.10/site-packages/qdrant_client/local/distances.py", line 94, in cosine_similarity
    return np.dot(vectors, query)
ValueError: shapes (0,1536) and (1024,) not aligned: 1536 (dim 1) != 1024 (dim 0)
```
@JadeVexo If you change the embedder, its default dimensions change as well. What happens here is that the existing vector store collection was created for 1536-dimensional embeddings, while the new embedder produces 1024-dimensional ones, so the shapes don't match. You can either configure both to use the same dimensions, or delete the existing vector store storage (or point the config at a new collection name) so it is recreated with the new embedder's dimensions.
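The mismatch in the traceback can be reproduced directly with NumPy. This is only an illustration of why the local cosine-similarity call fails, not mem0 code:

```python
import numpy as np

# The (empty) collection was created for 1536-dim vectors, but the new
# embedder produces a 1024-dim query -- exactly the traceback's shapes.
stored = np.zeros((0, 1536))  # vectors already in the collection
query = np.zeros(1024)        # embedding from the newly configured model

try:
    np.dot(stored, query)  # what the local cosine similarity computes
except ValueError as exc:
    print(exc)  # shapes (0,1536) and (1024,) not aligned

# A simple guard: compare dimensions before searching, and recreate the
# collection (or fix the embedder config) when they differ.
dims_match = stored.shape[1] == query.shape[0]
print("dimensions match:", dims_match)
```

The fix is to make these two numbers agree, either on the embedder side or by recreating the collection.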
No, it should work with smaller models as well. We are working on updating the prompt and will update here once it's fixed.
🐛 Describe the bug
This is the code that I am running.
But the output is like this:

```python
{'results': []}
```