GRAMI-AI: Dynamic AI Agent Framework


📋 Table of Contents

  • Overview
  • Key Features
  • Installation
  • API Key Setup
  • Quick Start
  • Provider Examples
  • Memory Management
  • Streaming Capabilities
  • Development Roadmap
  • TODO List
  • Contributing
  • License
  • Links
  • Support

🌟 Overview

GRAMI-AI is a cutting-edge, async-first AI agent framework designed for building sophisticated AI applications. With support for multiple LLM providers, advanced memory management, and streaming capabilities, GRAMI-AI enables developers to create powerful, context-aware AI systems.

Why GRAMI-AI?

  • Async-First: Built for high-performance asynchronous operations
  • Provider Agnostic: Support for Gemini, OpenAI, Anthropic, and Ollama
  • Advanced Memory: LRU and Redis-based memory management
  • Streaming Support: Efficient token-by-token streaming responses
  • Enterprise Ready: Production-grade security and scalability

🚀 Key Features

LLM Providers

  • Gemini (Google's latest LLM)
  • OpenAI (GPT models)
  • Anthropic (Claude)
  • Ollama (Local models)

Memory Management

  • LRU Memory (In-memory caching)
  • Redis Memory (Distributed caching)
  • Custom memory providers

Communication

  • Synchronous messaging
  • Asynchronous streaming
  • WebSocket support
  • Custom interfaces

💻 Installation

pip install grami-ai

🔑 API Key Setup

Before using GRAMI-AI, set an API key for the provider you plan to use. The simplest option is environment variables:

export GEMINI_API_KEY="your-gemini-api-key"
# Or for other providers:
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"

Or using a .env file:

GEMINI_API_KEY=your-gemini-api-key
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
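
A .env file is not picked up automatically by Python. One common approach (an assumption here, not something GRAMI-AI requires) is to load it with the python-dotenv package before constructing a provider:

# pip install python-dotenv
import os

from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from .env into os.environ

api_key = os.getenv("GEMINI_API_KEY")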

🎯 Quick Start

Here's a simple example of how to create an AI agent using GRAMI-AI:

from grami.agents import AsyncAgent
from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory
import asyncio
import os

async def main():
    # Initialize memory and provider
    memory = LRUMemory(capacity=5)
    provider = GeminiProvider(
        api_key=os.getenv("GEMINI_API_KEY"),
        generation_config={
            "temperature": 0.9,
            "top_p": 0.9,
            "top_k": 40,
            "max_output_tokens": 1000,
            "candidate_count": 1
        }
    )
    
    # Create agent
    agent = AsyncAgent(
        name="MyAssistant",
        llm=provider,
        memory=memory,
        system_instructions="You are a helpful AI assistant."
    )
    
    # Example: Using streaming responses
    message = "Tell me a short story about AI."
    async for chunk in agent.stream_message(message):
        print(chunk, end="", flush=True)
    print("\n")
    
    # Example: Using non-streaming responses
    response = await agent.send_message("What's the weather like today?")
    print(f"Response: {response}")

if __name__ == "__main__":
    asyncio.run(main())

📚 Provider Examples

Gemini Provider

from grami.providers.gemini_provider import GeminiProvider
from grami.memory.lru import LRUMemory

# Initialize with memory
provider = GeminiProvider(
    api_key="YOUR_API_KEY",
    model="gemini-pro",  # Optional, defaults to gemini-pro
    generation_config={   # Optional
        "temperature": 0.7,
        "top_p": 0.8,
        "top_k": 40
    }
)

# Add memory provider
memory = LRUMemory(capacity=100)
provider.set_memory_provider(memory)

# Regular message
response = await provider.send_message("What is AI?")

# Streaming response
async for chunk in provider.stream_message("Tell me a story"):
    print(chunk, end="", flush=True)
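
Other Providers

The remaining providers are intended to follow the same call pattern. The snippet below is an illustration only: the import path, class name, and model parameter are assumptions, so check the installed package for the exact names.

import os

# Hypothetical import path and class name -- verify against the package
from grami.providers.openai_provider import OpenAIProvider

provider = OpenAIProvider(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",  # assumed parameter name and model id
)

# Same call surface as the Gemini example above
response = await provider.send_message("What is AI?")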

🧠 Memory Management

LRU Memory

from grami.memory.lru import LRUMemory

# Initialize with capacity
memory = LRUMemory(capacity=100)

# Add to agent
agent = AsyncAgent(
    name="MemoryAgent",
    llm=provider,
    memory=memory
)

Redis Memory

from grami.memory.redis import RedisMemory

# Initialize Redis memory
memory = RedisMemory(
    host="localhost",
    port=6379,
    capacity=1000
)

# Add to provider
provider.set_memory_provider(memory)
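
Custom Memory Providers

Custom memory backends are also listed as a feature. The base interface is not documented here, so the class below is only a sketch of the idea, assuming a minimal async store/retrieve contract rather than the library's actual API:

# Sketch only: the method names are assumptions, not the real grami-ai interface
class DictMemory:
    """Toy in-process memory keyed by conversation id."""

    def __init__(self):
        self._store = {}

    async def store(self, key, value):
        self._store.setdefault(key, []).append(value)

    async def retrieve(self, key):
        return self._store.get(key, [])

# Assumes the provider accepts any duck-typed memory object
provider.set_memory_provider(DictMemory())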

🌊 Streaming Capabilities

Basic Streaming

async def stream_example():
    async for chunk in provider.stream_message("Generate a story"):
        print(chunk, end="", flush=True)

Streaming with Memory

async def stream_with_memory():
    # First message
    response = await provider.send_message("My name is Alice")
    
    # Stream follow-up (will remember context)
    async for chunk in provider.stream_message("What's my name?"):
        print(chunk, end="", flush=True)
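
Streaming with a Timeout

Streaming calls can stall or fail mid-stream, so it can help to bound them. Below is a minimal sketch using asyncio.timeout (Python 3.11+); the 30-second limit and the handling policy are illustrative choices, not library defaults:

import asyncio

async def stream_with_timeout():
    try:
        async with asyncio.timeout(30):  # cancel the stream if it runs longer than 30s
            async for chunk in provider.stream_message("Generate a story"):
                print(chunk, end="", flush=True)
    except TimeoutError:
        print("\n[stream timed out]")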

🗺 Development Roadmap

Core Framework Design

  • Implement AsyncAgent base class with dynamic configuration
  • Create flexible system instruction definition mechanism
  • Design abstract LLM provider interface
  • Develop dynamic role and persona assignment system
  • Comprehensive async example configurations
    • Memory with streaming
    • Memory without streaming
    • No memory with streaming
    • No memory without streaming
  • Implement multi-modal agent capabilities (text, image, video)

LLM Provider Abstraction

  • Unified interface for diverse LLM providers (a rough sketch follows this list)
  • Google Gemini integration
    • Basic message sending
    • Streaming support
    • Memory integration
  • OpenAI ChatGPT integration
    • Basic message sending
    • Streaming implementation
    • Memory support
  • Anthropic Claude integration
  • Ollama local LLM support
  • Standardize function/tool calling across providers
  • Dynamic prompt engineering support
  • Provider-specific configuration handling
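
To make the unified-interface item concrete, here is a rough sketch of what such an abstraction could look like. The method names mirror the examples above; the class itself is hypothetical and not the actual GRAMI-AI base class.

from abc import ABC, abstractmethod
from typing import AsyncIterator

class BaseLLMProvider(ABC):  # hypothetical name, for illustration only
    """Minimal contract every provider could implement."""

    @abstractmethod
    async def send_message(self, message: str) -> str:
        """Return a complete response for a single message."""

    @abstractmethod
    def stream_message(self, message: str) -> AsyncIterator[str]:
        """Yield response chunks as they arrive."""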

Communication Interfaces

  • WebSocket real-time communication
  • REST API endpoint design
  • Kafka inter-agent communication
  • gRPC support
  • Event-driven agent notification system
  • Secure communication protocols

Memory and State Management

  • Pluggable memory providers
  • In-memory state storage (LRU)
  • Redis distributed memory
  • DynamoDB scalable storage
  • S3 content storage
  • Conversation and task history tracking
  • Global state management for agent crews
  • Persistent task and interaction logs
  • Advanced memory indexing
  • Memory compression techniques

Tool and Function Ecosystem

  • Extensible tool integration framework
  • Default utility tools
    • Kafka message publisher
    • Web search utility
    • Content analysis tool
  • Provider-specific function calling support
  • Community tool marketplace
  • Easy custom tool development

Agent Crew Collaboration

  • Inter-agent communication protocol
  • Workflow and task delegation mechanisms
  • Approval and review workflows
  • Notification and escalation systems
  • Dynamic team composition
  • Shared context and memory management

Use Case Implementations

  • Digital Agency workflow template
    • Growth Manager agent
    • Content Creator agent
    • Trend Researcher agent
    • Media Creation agent
  • Customer interaction management
  • Approval and revision cycles

Security and Compliance

  • Secure credential management
  • Role-based access control
  • Audit logging
  • Compliance with data protection regulations

Performance and Scalability

  • Async-first design
  • Horizontal scaling support
  • Performance benchmarking
  • Resource optimization

Testing and Quality

  • Comprehensive unit testing
  • Integration testing for agent interactions
  • Mocking frameworks for LLM providers
  • Continuous integration setup

Documentation and Community

  • Detailed API documentation
  • Comprehensive developer guides
  • Example use case implementations
  • Contribution guidelines
  • Community tool submission process
  • Regular maintenance and updates

Future Roadmap

  • Payment integration solutions
  • Advanced agent collaboration patterns
  • Specialized industry-specific agents
  • Enhanced security features
  • Extended provider support

📝 TODO List

  • Add support for Gemini provider
  • Implement advanced caching strategies (LRU)
  • Add WebSocket support for real-time communication
  • Create comprehensive test suite
  • Add support for function calling
  • Implement conversation branching
  • Add support for multi-modal inputs
  • Enhance error handling and logging
  • Add rate limiting and quota management
  • Create detailed API documentation
  • Add support for custom prompt templates
  • Implement conversation summarization
  • Add support for multiple languages
  • Implement fine-tuning capabilities
  • Add support for model quantization
  • Create a web-based demo
  • Add support for batch processing
  • Implement conversation history export/import
  • Add support for custom model hosting
  • Create visualization tools for conversation flows
  • Implement automated testing pipeline
  • Add support for conversation analytics
  • Create deployment guides for various platforms
  • Implement automated documentation generation
  • Add support for model performance monitoring
  • Create benchmarking tools
  • Implement A/B testing capabilities
  • Add support for custom tokenizers
  • Create model evaluation tools
  • Implement conversation templates
  • Add support for conversation routing
  • Create debugging tools
  • Implement conversation validation
  • Add support for custom memory backends
  • Create conversation backup/restore features
  • Implement conversation filtering
  • Add support for conversation tagging
  • Create conversation search capabilities
  • Implement conversation versioning
  • Add support for conversation merging
  • Create conversation export formats
  • Implement conversation import validation
  • Add support for conversation scheduling
  • Create conversation monitoring tools
  • Implement conversation archiving
  • Add support for conversation encryption
  • Create conversation access control
  • Implement conversation rate limiting
  • Add support for conversation quotas
  • Create conversation usage analytics
  • Implement conversation cost tracking
  • Add support for conversation billing
  • Create conversation audit logs
  • Implement conversation compliance checks
  • Add support for conversation retention policies
  • Create conversation backup strategies
  • Implement conversation recovery procedures
  • Add support for conversation migration
  • Create conversation optimization tools
  • Implement conversation caching strategies
  • Add support for conversation compression
  • Create conversation performance metrics
  • Implement conversation health checks
  • Add support for conversation monitoring
  • Create conversation alerting system
  • Implement conversation debugging tools
  • Add support for conversation profiling
  • Create conversation testing framework
  • Implement conversation documentation
  • Add support for conversation examples
  • Create conversation tutorials
  • Implement conversation guides
  • Add support for conversation best practices
  • Create conversation security guidelines

🀝 Contributing

We welcome contributions! Please feel free to submit a Pull Request.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🔗 Links

📧 Support

For support, email [email protected] or create an issue on GitHub.
