Large Language Models typically handle context through a linear sequence of tokens, leading to several limitations:
- Limited Memory Span: Context windows have a fixed size, causing distant information to be forgotten
- No Prioritization: All context receives equal importance regardless of relevance
- Linear Structure: Relationships between concepts aren't explicitly captured
- Inconsistency: LLMs can lose track of their own reasoning across long conversations
Graph of Thoughts reimagines LLM memory as a dynamic, structured knowledge graph that enables:
- Semantic Retrieval: Fetch only the most relevant context based on semantic similarity
- Priority-Based Decay: More important or recent information persists longer
- Relationship Preservation: Explicitly capture connections between concepts
- Reasoning Traceability: Follow the model's thought process through graph exploration
Instead of a token window, we store information in a directed graph where:
- Nodes represent concepts, facts, or user inputs
- Edges capture relationships and dependencies between nodes
- Embeddings enable semantic similarity search
- Importance scores determine which nodes to keep or prune
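A minimal sketch of what such a graph might look like in Python; the class and field names here are illustrative assumptions, not the project's actual data model:

```python
import time
from dataclasses import dataclass, field

# Illustrative sketch only -- names and fields are assumptions,
# not the project's actual API.
@dataclass
class Node:
    id: str
    content: str                    # concept, fact, or user input
    embedding: list[float]          # vector used for semantic similarity search
    importance: float = 1.0         # decays over time; drives pruning decisions
    last_access: float = field(default_factory=time.time)
    edges: dict[str, str] = field(default_factory=dict)  # target node id -> relation label

@dataclass
class ThoughtGraph:
    nodes: dict[str, Node] = field(default_factory=dict)

    def add_edge(self, src: str, dst: str, relation: str) -> None:
        # Directed edge: src relates to / depends on dst.
        self.nodes[src].edges[dst] = relation
```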
The LLM itself participates in creating its memory structure by:
- Generating structured JSON representing its reasoning process
- Identifying key concepts and relationships
- Updating the graph with new knowledge
- Following chains of thought through the graph
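For example, the model can be prompted to emit a graph update as structured JSON. The exact schema below is a hypothetical illustration, not the project's specification:

```python
import json

# Hypothetical structured output from the LLM; the schema (node/edge
# keys, "relation" labels) is an assumption for illustration.
llm_output = """
{
  "nodes": [
    {"id": "n1", "content": "User wants streaming responses", "importance": 0.9},
    {"id": "n2", "content": "Server uses chunked transfer encoding", "importance": 0.7}
  ],
  "edges": [
    {"source": "n2", "target": "n1", "relation": "supports"}
  ]
}
"""

update = json.loads(llm_output)
for node in update["nodes"]:
    print(node["id"], "->", node["content"])
```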
The system maintains context relevance by:
- Retrieving the most semantically similar nodes for each query
- Automatically decaying node importance over time
- Pruning less relevant information when the graph grows too large
- Preserving critical paths of reasoning even as details fade
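A sketch of how retrieval, decay, and pruning could fit together, assuming each node tracks an embedding, an importance score, and a `last_access` timestamp as above; the half-life and size cap are placeholder parameters, not the project's tuned values:

```python
import math
import time

def cosine(a: list[float], b: list[float]) -> float:
    # Plain cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def decayed_importance(importance: float, last_access: float,
                       half_life_s: float = 3600.0) -> float:
    # Exponential decay: importance halves every `half_life_s` seconds
    # since the node was last accessed (assumed decay schedule).
    age = time.time() - last_access
    return importance * 0.5 ** (age / half_life_s)

def retrieve(nodes, query_embedding, k=5):
    # Rank nodes by semantic similarity weighted by decayed importance,
    # returning only the top-k as context for the next query.
    scored = sorted(
        nodes,
        key=lambda n: cosine(n.embedding, query_embedding)
                      * decayed_importance(n.importance, n.last_access),
        reverse=True,
    )
    return scored[:k]

def prune(nodes, max_nodes=1000):
    # Once the graph exceeds its cap, keep only the most important nodes.
    if len(nodes) <= max_nodes:
        return nodes
    return sorted(nodes,
                  key=lambda n: decayed_importance(n.importance, n.last_access),
                  reverse=True)[:max_nodes]
```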
Install the dependencies:

```bash
pip install -r requirements.txt
```
I'm actively exploring:
- Reinforcement learning for optimizing decay functions
- Multi-modal graphs incorporating images and code
- Hierarchical summarization for pruning without information loss
- Knowledge distillation between graph instances
## 📄 License

MIT License

## 🤝 Contributing

Contributions welcome! See CONTRIBUTING.md for details.