🐛 Describe the bug
Hi there,
Below are the steps I followed to run the Python code:
```python
import os

from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "hello world"

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://localhost:7687",
            "username": "neo4j",
            "password": "Whatisup2024"
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "llama3.2:1b",
                "temperature": 0.2,
                "max_tokens": 4096,
                "ollama_base_url": "http://localhost:11434",
            },
        }
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.2:1b",
            "temperature": 0.2,
            "max_tokens": 1024,
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "ollama_base_url": "http://localhost:11434",
            "model": "nomic-embed-text:latest"
        },
    },
    "version": "v1.1"
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```
Running this produced the error message below:
Below is my test code with Qdrant as the vector store:
```python
import os

from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "hello world"

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0",
            "host": "localhost",
            "embedding_model_dims": 768,
            "port": 6333,
        }
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.2:1b",
            "temperature": 0.2,
            "max_tokens": 1024,
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "ollama_base_url": "http://localhost:11434",
            "embedding_dims": 768,
            "model": "nomic-embed-text:latest"
        },
    }
    # "version": "v1.1"
}

print(config)
m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
```
This run produced the error below:
Any feedback is appreciated.
BR
Kimi