To Do

Below is a rough outline of proposed features and outstanding issues being tracked. The list is not final, but items are generally sorted from highest to lowest priority.

Core

  • Migrate Chat Stream to Llama-Index
  • Implement Llama-Index Chat Engine with Memory (see the first sketch after this list)
  • Swap Chatbox UI to Llama-Index Chat Engine
  • Function to Handle File Embeddings
  • Allow Switching of Embedding Model & Settings
  • Delete Files after Index Created/Failed
  • Support Additional Import Options
    • GitHub Repos
    • Websites
  • Export Data (Chat History, ...)
  • Docker Support
    • Windows Support
  • Extract Metadata and Load into Index
  • Faster Document Embeddings (Cuda, Batch Size, ...)
  • Swap to OpenAI-compatible Endpoints
  • Allow Usage of Ollama-hosted Embeddings (see the second sketch after this list)
  • Enable support for additional LLM backends
    • LocalAI
    • TabbyAPI
  • Remove File Type Limitations for Uploads?
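
For the chat engine items above, a minimal sketch of what the Llama-Index migration could look like, assuming the llama-index >= 0.10 package layout; "docs" and the prompt are placeholders, and an LLM/embedding model must already be configured:

```python
# A minimal sketch of a Llama-Index chat engine with memory and streaming
# (assumes the llama-index >= 0.10 package layout; "docs" is a placeholder path).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.memory import ChatMemoryBuffer

documents = SimpleDirectoryReader("docs").load_data()
index = VectorStoreIndex.from_documents(documents)

# The memory buffer carries prior turns into each request, trimmed to a token budget.
memory = ChatMemoryBuffer.from_defaults(token_limit=4096)
chat_engine = index.as_chat_engine(chat_mode="context", memory=memory)

# stream_chat yields tokens as they arrive, which maps onto a streaming chat UI.
streaming_response = chat_engine.stream_chat("Summarize the uploaded files.")
for token in streaming_response.response_gen:
    print(token, end="", flush=True)
```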
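
For the OpenAI-compatible endpoint and Ollama-hosted embedding items, a sketch of wiring both through Llama-Index settings; the model names and URLs are assumptions rather than the project's actual configuration:

```python
# A sketch of pointing Llama-Index at Ollama for embeddings and at an
# OpenAI-compatible chat endpoint; model names and URLs are assumptions.
from llama_index.core import Settings
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.openai_like import OpenAILike

Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
)
Settings.llm = OpenAILike(
    model="llama3",
    api_base="http://localhost:11434/v1",  # Ollama exposes an OpenAI-compatible API here
    api_key="unused",                      # required by the client, ignored by Ollama
    is_chat_model=True,
)
```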

User Experience

  • Show Loaders in UI (File Uploads, Conversions, ...)
  • View and Manage Imported Files
  • About Tab in Sidebar w/ Resources
  • Enable Caching
  • Allow Users to Set LLM Settings (see the sketch after this list)
    • System Prompt
    • Chat Mode
    • Temperature
    • top_k
    • chunk_size
    • chunk_overlap (needs to be proportional to chunk_size?)
  • Additional Error Handling
    • Starting a chat without an Ollama model set
    • Non-existent GitHub repos
    • Non-existent Embedding models
    • Non-existent Websites
    • System Level Errors (CUDA OOM, Hugging Face downtime, ...)
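
For the LLM settings item above, one way chunk_overlap could be kept proportional to chunk_size rather than exposed as an independent value; the 10% ratio and the defaults are assumptions:

```python
# A sketch of applying user-chosen chunking settings, with chunk_overlap
# derived as a fixed fraction of chunk_size; the ratio and defaults are assumptions.
from llama_index.core import Settings
from llama_index.core.node_parser import SentenceSplitter

def apply_chunk_settings(chunk_size: int = 1024, overlap_ratio: float = 0.1) -> None:
    chunk_overlap = int(chunk_size * overlap_ratio)  # stays proportional to chunk_size
    Settings.node_parser = SentenceSplitter(
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap,
    )

apply_chunk_settings(chunk_size=512)  # yields a chunk_overlap of 51 tokens
```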

Code Quality

  • Refactor main.py into submodules
  • Refactor file processing logic
  • Refactor README
  • Implement Log Library (see the sketch after this list)
  • Improve Logging
  • Re-write Docstrings
  • Tests
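
For the logging items above, a minimal sketch of a shared setup using the standard logging library; the logger name and format are assumptions:

```python
# A minimal sketch of a shared logging setup using the standard library;
# the logger name and format are assumptions.
import logging
import sys

def get_logger(name: str = "local_rag") -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:  # guard against duplicate handlers on re-import/rerun
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

log = get_logger()
log.info("embedding run started")
```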

Known Issues & Bugs

  • Upon sending a chat message, the File Processing expander appears to re-run itself (likely a component not using Streamlit session state correctly; see the sketch below)
  • Refreshing the page loses all state (expected Streamlit behavior; need to implement local-storage)
  • Files can be uploaded before Ollama config is set, leading to embedding errors
  • When Ollama is hosted on localhost, models are automatically loaded and selected, but the dropdown does not render the selected option
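
For the expander re-run issue, a sketch of gating file processing behind st.session_state so that chat-triggered reruns skip it; process_files is a hypothetical stand-in for the project's processing logic:

```python
# A sketch of gating file processing behind session state so Streamlit's
# rerun-on-every-interaction does not repeat it; process_files is hypothetical.
import streamlit as st

def process_files(files) -> None:
    """Hypothetical stand-in for the embedding/indexing step."""
    ...

if "files_processed" not in st.session_state:
    st.session_state.files_processed = False

with st.expander("File Processing", expanded=not st.session_state.files_processed):
    uploaded = st.file_uploader("Upload documents", accept_multiple_files=True)
    if uploaded and not st.session_state.files_processed:
        process_files(uploaded)
        st.session_state.files_processed = True
```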

Other

  • Investigate R2R backend support/migration
  • ROCm Support -- Wanted: AMD Testers! 🔍🔴
  • Improved Windows / Windows + Docker Support