use models specific to LightRAG/NanoGraphRAG processing #512
vap0rtranz
started this conversation in
Ideas
Replies: 1 comment
-
Just to clarify: it looks like the model used for the graph processing is hardcoded. I've tried a few reconfigurations in the UI thinking they would change the model, like changing the default LLM, but LightRAG always uses Llama3.2 1B. Oddly enough, my local Ollama doesn't even have Llama-3.2-1B-Instruct pulled to run. .... Logs:
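For reference, upstream LightRAG already takes the model as constructor arguments, so the fix on Kotaemon's side would presumably be to pass its configured LLM through instead of a hardcoded name. A minimal sketch, assuming LightRAG's own Ollama helpers from its example scripts (the model names, host URL, and embedding dimension here are placeholders, not Kotaemon's actual wiring):

```python
# Minimal sketch based on LightRAG's Ollama example; not Kotaemon code.
# Model names, host URL, and embedding_dim are placeholder assumptions.
from lightrag import LightRAG, QueryParam
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

rag = LightRAG(
    working_dir="./lightrag_index",
    llm_model_func=ollama_model_complete,  # delegate graph-extraction calls to Ollama
    llm_model_name="qwen2",                # any model actually pulled locally
    llm_model_kwargs={"host": "http://localhost:11434"},
    embedding_func=EmbeddingFunc(
        embedding_dim=768,                 # must match the embed model's output size
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts, embed_model="nomic-embed-text", host="http://localhost:11434"
        ),
    ),
)

rag.insert("Kotaemon is an open-source RAG UI.")  # builds the entity/relation graph
print(rag.query("What is Kotaemon?", param=QueryParam(mode="hybrid")))
```

If the UI's "default LLM" setting were threaded into llm_model_name here, changing it would change the graph-processing model too.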
-
How could models trained for GraphRAG/LightRAG/NanoGraphRAG processing be integrated with Kotaemon?
I've found that generic models, like Llama3 and Qwen2, are slow at this even on GPU.
Models have been trained specifically for GraphRAG processing; Triplex, for example, whose authors claim it is far more efficient.
How can the app's settings be changed to add a GraphRAG-specific model that runs locally?
I'm thinking of a workflow like the Embeddings setup, which lets you point at embedding-specific models via Ollama; see the sketch below for the kind of call I have in mind.
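If Triplex is published as an Ollama model (SciPhi's docs suggest a sciphi/triplex tag, though that tag and the prompt shape here are assumptions, not anything Kotaemon supports today), the extraction call could look roughly like this minimal sketch using the ollama Python client:

```python
# Sketch: calling a GraphRAG-specific extraction model through Ollama.
# Assumes `ollama pull sciphi/triplex` has been run; the model tag and
# prompt shape follow SciPhi's published usage and may differ in practice.
import json
import ollama

def extract_triples(text: str, entity_types: list[str], predicates: list[str]) -> str:
    """Ask Triplex to emit knowledge-graph triples for the given text."""
    prompt = (
        "Perform Named Entity Recognition (NER) and extract knowledge graph "
        "triplets from the text.\n\n"
        f"Entity Types: {json.dumps(entity_types)}\n"
        f"Predicates: {json.dumps(predicates)}\n\n"
        f"Text: {text}"
    )
    response = ollama.chat(
        model="sciphi/triplex",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

print(extract_triples(
    "Kotaemon integrates LightRAG for graph-based retrieval.",
    entity_types=["SOFTWARE", "ORGANIZATION"],
    predicates=["INTEGRATES", "DEVELOPED_BY"],
))
```

The appeal is the same as the Embeddings flow: the graph-extraction step gets its own model slot instead of inheriting the chat LLM.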