
Integrates LiteLLM for Unified Access to Multiple LLM Models #5925

Open
akash-plane wants to merge 11 commits into base: preview

Conversation


@akash-plane akash-plane commented Oct 29, 2024

Model Replacement:

  • gpt-3.5-turbo has been replaced with gpt-4o-mini due to the upcoming deprecation of the former model as announced in the OpenAI deprecation notice.

Performance and Cost Improvements:

  • gpt-4o-mini offers enhanced performance compared to gpt-3.5-turbo, as detailed in the OpenAI announcement on gpt-4o-mini.
  • gpt-4o-mini is more cost-efficient, providing better value for users.

Summary by CodeRabbit

  • New Features

    • Introduced a new base class for LLM providers and specific classes for OpenAI, Anthropic, and Gemini (sketched after this list).
    • Added configuration options for LLM API keys, providers, and models.
  • Bug Fixes

    • Corrected the variable name for retrieving the Unsplash access key.
  • Chores

    • Updated project dependencies by replacing the OpenAI package with Litellm.
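
For orientation, the provider abstraction the summary refers to looks roughly like this, condensed from the review excerpts further down in this thread (only the OpenAI subclass is reproduced; the Anthropic and Gemini subclasses follow the same shape):

from typing import List

class LLMProvider:
    """Base class for LLM provider configurations"""
    name: str = ""
    models: List[str] = []
    api_key_env: str = ""
    default_model: str = ""

class OpenAIProvider(LLMProvider):
    name = "OpenAI"
    models = ["gpt-3.5-turbo", "gpt-4o-mini", "gpt-4o", "o1-mini", "o1-preview"]
    api_key_env = "OPENAI_API_KEY"
    default_model = "gpt-4o-mini"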


coderabbitai bot commented Oct 29, 2024

Walkthrough

The pull request introduces a significant refactoring of the LLM (Large Language Model) integration in the application. The changes involve creating a more flexible and extensible approach to handling different LLM providers. A new base class LLMProvider is introduced, with specific provider classes for OpenAI, Anthropic, and Gemini. The project moves from a direct OpenAI implementation to using the litellm library, which allows for easier multi-provider support. Configuration management has been updated to support multiple LLM providers, with new configuration keys and improved error handling.
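
At its core, litellm exposes a single completion call that routes on the model string, which is what makes the multi-provider support cheap to add. A minimal sketch (the model names and inline key are illustrative placeholders):

import litellm

# One call shape for every provider; only the model string changes
# (some providers, such as Gemini, take a "provider/" prefix).
response = litellm.completion(
    model="gpt-4o-mini",  # e.g. "claude-3-haiku-20240307" or "gemini/gemini-1.5-flash"
    messages=[{"role": "user", "content": "Say hello."}],
    api_key="sk-placeholder",  # normally read from instance configuration
)
print(response.choices[0].message.content)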

Changes

File Change Summary

apiserver/plane/app/views/external/base.py
  • Added LLMProvider base class
  • Created provider-specific classes (OpenAI, Anthropic, Gemini)
  • Implemented get_llm_config() and get_llm_response() functions
  • Refactored GPT integration endpoints to use the new LLM provider logic

apiserver/requirements/base.txt
  • Removed openai==1.25.0
  • Added litellm==1.51.0

apiserver/plane/license/management/commands/configure_instance.py
  • Updated configuration keys
  • Added LLM_API_KEY, LLM_PROVIDER, LLM_MODEL
  • Marked GPT_ENGINE as deprecated
  • Corrected UNSPLASH_ACCESS_KEY retrieval

Sequence Diagram

sequenceDiagram
    participant Client
    participant GPTIntegrationEndpoint
    participant get_llm_config
    participant get_llm_response
    participant LLMProvider

    Client->>GPTIntegrationEndpoint: Send request
    GPTIntegrationEndpoint->>get_llm_config: Retrieve API configuration
    get_llm_config-->>GPTIntegrationEndpoint: Return API key, model, provider
    GPTIntegrationEndpoint->>get_llm_response: Request LLM response
    get_llm_response->>LLMProvider: Select appropriate provider
    LLMProvider-->>get_llm_response: Generate response
    get_llm_response-->>GPTIntegrationEndpoint: Return response
    GPTIntegrationEndpoint-->>Client: Send response
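In code, the diagram corresponds roughly to the endpoint shape below, condensed from the diffs reviewed later in this thread (imports, permission classes, and helper names are as used elsewhere in the PR; task validation is abbreviated):

class GPTIntegrationEndpoint(BaseAPIView):
    @allow_permission([ROLE.ADMIN, ROLE.MEMBER])
    def post(self, request, slug, project_id):
        # Resolve provider, model, and key from instance configuration
        api_key, model, provider = get_llm_config()
        if not api_key or not model or not provider:
            return Response(
                {"error": "LLM provider API key and model are required"},
                status=status.HTTP_400_BAD_REQUEST,
            )
        # Delegate the completion call to the provider-agnostic helper
        text, error = get_llm_response(
            request.data.get("task"), request.data.get("prompt", ""),
            api_key, model, provider,
        )
        if not text and error:
            return Response(
                {"error": "An internal error has occurred."},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )
        return Response(
            {"response": text, "response_html": text.replace("\n", "<br/>")},
            status=status.HTTP_200_OK,
        )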

Poem

🐰 A Rabbit's Ode to LLM Delight 🤖

Providers dance, a flexible scene,
OpenAI, Anthropic, Gemini's gleam
litellm weaves its magic spell
One key to rule them, integration's swell!
Hop along, code rabbit's glee! 🚀



CLAassistant commented Oct 29, 2024

CLA assistant check
All committers have signed the CLA.

@akash-plane akash-plane marked this pull request as ready for review October 29, 2024 12:07

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
apiserver/requirements/base.txt (1)

40-40: Consider upgrading to latest litellm version.

The current version (1.51.0) is a few versions behind the latest release (1.55.0). While the chosen version should work fine, newer versions include additional features and bug fixes.

-litellm==1.51.0
+litellm==1.55.0
apiserver/plane/app/views/external/base.py (1)

59-61: Add rate limiting and token limits.

Consider implementing these security measures:

  1. Add rate limiting to prevent abuse
  2. Set maximum token limits to control costs

Example implementation:

from django.core.cache import cache
from django.conf import settings

def get_rate_limit_key(request):
    return f"gpt_rate_limit_{request.user.id}"

def check_rate_limit(request):
    key = get_rate_limit_key(request)
    current = cache.get(key, 0)
    if current >= settings.GPT_RATE_LIMIT:
        return False
    # cache.incr() raises ValueError for a missing key, so seed the counter
    # with an expiry window first (60 s here; tune to the rate-limit policy).
    cache.add(key, 0, timeout=60)
    cache.incr(key)
    return True

# In your view:
if not check_rate_limit(request):
    return Response(
        {"error": "Rate limit exceeded"},
        status=status.HTTP_429_TOO_MANY_REQUESTS,
    )

# Add to completion call:
response = completion(
    model=GPT_ENGINE,
    messages=[{"role": "user", "content": final_text}],
    api_key=OPENAI_API_KEY,
    max_tokens=settings.GPT_MAX_TOKENS,  # Add this
)

Also applies to: 119-121

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between b4bbe3a and c3f0327.

📒 Files selected for processing (2)
  • apiserver/plane/app/views/external/base.py (5 hunks)
  • apiserver/requirements/base.txt (1 hunks)
🔇 Additional comments (4)
apiserver/requirements/base.txt (1)

40-40: LGTM! Verify removal of openai package.

The switch to litellm is appropriate for the stated objectives. However, let's verify that no other packages depend on openai.

apiserver/plane/app/views/external/base.py (3)

1-17: LGTM! Import statements are well-organized.

The imports follow the standard pattern (stdlib → third-party → local) and correctly include the required litellm package.


94-94: ⚠️ Potential issue

Apply the same fixes as GPTIntegrationEndpoint.

The same issues are present in this endpoint:

  1. Model validation
  2. Global API key setting
  3. Missing error handling

Please apply the same fixes as suggested for the GPTIntegrationEndpoint.

Also applies to: 118-121


Line range hint 1-161: Verify the migration from OpenAI to LiteLLM across the codebase.

Let's ensure all OpenAI client usage has been migrated to LiteLLM.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
apiserver/plane/app/views/external/base.py (3)

21-37: Add type hints and improve docstring.

The configuration helper function is well-structured, but could benefit from type hints and a more detailed docstring.

-def get_gpt_config():
+def get_gpt_config() -> tuple[str | None, str | None]:
     """Helper to get GPT configuration values
+
+    Returns:
+        tuple: A tuple containing (api_key, engine) where both values could be None
+        if configuration is missing
     """

39-51: Enhance error handling and type safety.

The response helper has good error handling but could be improved with type hints and more specific error handling.

-def get_gpt_response(task, prompt, api_key, engine):
+def get_gpt_response(task: str, prompt: str | None, api_key: str, engine: str) -> tuple[str | None, str | None]:
     """Helper to get GPT completion response
+
+    Args:
+        task: The main task description
+        prompt: Additional prompt context (optional)
+        api_key: The LiteLLM API key
+        engine: The LLM engine to use
+
+    Returns:
+        tuple: (response_text, error_message) where one will always be None
     """
     final_text = task + "\n" + prompt
     try:
         response = litellm.completion(
             model=engine,
             messages=[{"role": "user", "content": final_text}],
             api_key=api_key,
         )
         text = response.choices[0].message.content.strip()
         return text, None
-    except Exception as e:
+    except litellm.exceptions.OpenAIError as e:
         return None, str(e)

Line range hint 54-136: Consider architectural improvements.

  1. The GPT integration endpoints share significant code. Consider extracting common logic into a base class or mixin.
  2. There's no rate limiting for the LLM API calls, which could lead to excessive costs.

Example base class approach:

class BaseGPTEndpoint(BaseAPIView):
    def handle_gpt_request(self, request, additional_response_data=None):
        OPENAI_API_KEY, GPT_ENGINE = get_gpt_config()
        
        if not OPENAI_API_KEY or not GPT_ENGINE:
            return Response(
                {"error": "OpenAI API key and engine is required"},
                status=status.HTTP_400_BAD_REQUEST,
            )

        task = request.data.get("task", False)
        if not task:
            return Response(
                {"error": "Task is required"},
                status=status.HTTP_400_BAD_REQUEST,
            )

        text, error = get_gpt_response(task, request.data.get("prompt", False), 
                                     OPENAI_API_KEY, GPT_ENGINE)
        if error:
            return Response(
                {"error": "An error occurred while processing your request"},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )

        response_data = {
            "response": text,
            "response_html": text.replace("\n", "<br/>"),
        }
        if additional_response_data:
            response_data.update(additional_response_data)
            
        return Response(response_data, status=status.HTTP_200_OK)

Would you like me to provide a complete implementation of the rate limiting solution?
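
For reference, a short sketch of how an endpoint would consume the proposed base class (the decorator and route arguments mirror the PR's existing endpoints; the rest is illustrative):

class GPTIntegrationEndpoint(BaseGPTEndpoint):
    @allow_permission([ROLE.ADMIN, ROLE.MEMBER])
    def post(self, request, slug, project_id):
        # Validation, the LLM call, and error shaping all live in the base class
        return self.handle_gpt_request(request)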

🧰 Tools
🪛 GitHub Check: CodeQL

[warning] 83-83: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.


[warning] 91-96: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between c3f0327 and e68ab40.

📒 Files selected for processing (1)
  • apiserver/plane/app/views/external/base.py (2 hunks)
🧰 Additional context used
🪛 GitHub Check: CodeQL
apiserver/plane/app/views/external/base.py

[warning] 83-83: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.


[warning] 125-125: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.

🔇 Additional comments (1)
apiserver/plane/app/views/external/base.py (1)

1-19: LGTM! Clean import organization.

The imports are well-organized and include all necessary dependencies for the LiteLLM integration.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
apiserver/plane/app/views/external/base.py (2)

21-36: Move model validation to configuration level.

Consider validating the GPT_ENGINE value when retrieving configuration to fail fast if an unsupported model is configured.

 def get_gpt_config():
     """Helper to get GPT configuration values"""
+    SUPPORTED_MODELS = ["gpt-4o-mini", "gpt-4o"]
     OPENAI_API_KEY, GPT_ENGINE = get_configuration_value([
         {
             "key": "OPENAI_API_KEY",
             "default": os.environ.get("OPENAI_API_KEY", None),
         },
         {
             "key": "GPT_ENGINE", 
             "default": os.environ.get("GPT_ENGINE", "gpt-4o-mini"),
         },
     ])
     
     if not OPENAI_API_KEY or not GPT_ENGINE:
         return None, None
+    if GPT_ENGINE not in SUPPORTED_MODELS:
+        return None, None
     return OPENAI_API_KEY, GPT_ENGINE

References to gpt-3.5-turbo need to be updated in configuration files

The following files still contain references to the deprecated gpt-3.5-turbo model and need to be updated to use gpt-4o-mini:

  • apiserver/plane/license/management/commands/configure_instance.py: Default value for GPT_ENGINE is set to "gpt-3.5-turbo"
  • admin/app/ai/form.tsx: Uses "gpt-3.5-turbo" in placeholder text
  • ENV_SETUP.md: Contains deprecated example showing "gpt-3.5-turbo"
🔗 Analysis chain

Line range hint 1-165: Verify removal of all gpt-3.5-turbo references.

Let's ensure all references to the deprecated model have been removed from the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any remaining references to gpt-3.5-turbo
echo "Searching for gpt-3.5-turbo references..."
rg -l "gpt-3.5-turbo"

# Search for any OpenAI direct client usage that might need updating
echo "Searching for OpenAI client usage..."
rg -l "openai\.ChatCompletion"
rg -l "openai\.Completion"

Length of output: 360


Script:

#!/bin/bash
# Check the content of files that still reference gpt-3.5-turbo
echo "=== Content of configure_instance.py ==="
cat apiserver/plane/license/management/commands/configure_instance.py

echo -e "\n=== Content of form.tsx ==="
cat admin/app/ai/form.tsx

echo -e "\n=== Content of ENV_SETUP.md ==="
cat ENV_SETUP.md

Length of output: 18565

🧰 Tools
🪛 GitHub Check: CodeQL

[warning] 91-96: Information exposure through an exception
Stack trace information flows to this location and may be exposed to an external user.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between e68ab40 and 30b45bf.

📒 Files selected for processing (1)
  • apiserver/plane/app/views/external/base.py (2 hunks)
🔇 Additional comments (2)
apiserver/plane/app/views/external/base.py (2)

1-19: LGTM! Clean import organization.

The imports are well-organized and include all necessary dependencies for the LiteLLM integration.


58-65: 🛠️ Refactor suggestion

Avoid duplicating model validation logic.

The supported models list is hardcoded and duplicated. Consider moving it to a shared constant or configuration.

+# At module level
+SUPPORTED_MODELS = ["gpt-4o-mini", "gpt-4o"]
+
 class GPTIntegrationEndpoint(BaseAPIView):
     @allow_permission([ROLE.ADMIN, ROLE.MEMBER])
     def post(self, request, slug, project_id):
         OPENAI_API_KEY, GPT_ENGINE = get_gpt_config()
         
-        supported_models = ["gpt-4o-mini", "gpt-4o"]
-        if GPT_ENGINE not in supported_models:
+        if GPT_ENGINE not in SUPPORTED_MODELS:
             return Response(
-                {"error": f"Unsupported model. Please use one of: {', '.join(supported_models)}"},
+                {"error": f"Unsupported model. Please use one of: {', '.join(SUPPORTED_MODELS)}"},
                 status=status.HTTP_400_BAD_REQUEST,
             )

Likely invalid or redundant comment.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
apiserver/plane/app/views/external/base.py (2)

4-7: Consider organizing imports by category

The imports could be better organized by grouping them into standard library, third-party, and local imports with consistent spacing.

# Python imports
import os

# Third party imports
import litellm
import requests
from rest_framework import status
from rest_framework.response import Response

# Module imports
from plane.app.permissions import ROLE, allow_permission

78-83: Consider extracting common error handling

Both endpoints have duplicate error handling logic. Consider extracting this into a helper function.

+def handle_gpt_error(error):
+    """Helper to handle GPT errors consistently"""
+    return Response(
+        {"error": "An internal error has occurred."},
+        status=status.HTTP_500_INTERNAL_SERVER_ERROR,
+    )

 # In both endpoints:
-        if not text and error:
-            return Response(
-                {"error": "An internal error has occurred."},
-                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
-            )
+        if not text and error:
+            return handle_gpt_error(error)

Also applies to: 116-121

🧰 Tools
🪛 Ruff (0.8.0)

78-78: Line too long (107 > 88)

(E501)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between bcd37bc and 6226613.

📒 Files selected for processing (1)
  • apiserver/plane/app/views/external/base.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.0)
apiserver/plane/app/views/external/base.py

62-62: Line too long (98 > 88)

(E501)


78-78: Line too long (107 > 88)

(E501)


116-116: Line too long (107 > 88)

(E501)

🔇 Additional comments (2)
apiserver/plane/app/views/external/base.py (2)

39-51: ⚠️ Potential issue

Enhance error handling security in get_gpt_response

The function currently exposes raw error messages which could leak sensitive information.

 def get_gpt_response(task, prompt, api_key, engine):
     """Helper to get GPT completion response"""
     final_text = task + "\n" + prompt
     try:
         response = litellm.completion(
             model=engine,
             messages=[{"role": "user", "content": final_text}],
             api_key=api_key,
         )
         text = response.choices[0].message.content.strip()
         return text, None
     except Exception as e:
-        return None, str(e)
+        # Log the actual error for debugging
+        import logging
+        logging.error(f"LiteLLM API error: {str(e)}")
+        return None, "Failed to process request"

57-64: 🛠️ Refactor suggestion

Move model validation to configuration helper

The model validation should be moved to get_gpt_config to ensure consistent validation across all endpoints. This is currently missing in WorkspaceGPTIntegrationEndpoint.

🧰 Tools
🪛 Ruff (0.8.0)

62-62: Line too long (98 > 88)

(E501)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (1)
apiserver/plane/app/views/external/base.py (1)

80-85: Consolidate error handling.

Both endpoints have duplicate error handling logic. Consider creating a helper function.

+def handle_gpt_error(error):
+    """Helper to handle GPT errors consistently"""
+    return Response(
+        {"error": "An internal error has occurred."},
+        status=status.HTTP_500_INTERNAL_SERVER_ERROR,
+    )

 # In both endpoints:
-        if not text and error:
-            return Response(
-                {"error": "An internal error has occurred."},
-                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
-            )
+        if not text and error:
+            return handle_gpt_error(error)

Also applies to: 118-123

🧰 Tools
🪛 Ruff (0.8.0)

80-80: Line too long (107 > 88)

(E501)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 6226613 and 954fd2f.

📒 Files selected for processing (1)
  • apiserver/plane/app/views/external/base.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.0)
apiserver/plane/app/views/external/base.py

64-64: Line too long (98 > 88)

(E501)


80-80: Line too long (107 > 88)

(E501)


118-118: Line too long (107 > 88)

(E501)

🔇 Additional comments (4)
apiserver/plane/app/views/external/base.py (4)

1-20: LGTM! Clean import organization.

The imports are well-organized into logical categories and the new litellm dependency is correctly imported.


61-66: Remove duplicate model validation.

This validation should be moved to get_gpt_config() as suggested earlier. Once moved, this code block can be removed.

🧰 Tools
🪛 Ruff (0.8.0)

64-64: Line too long (98 > 88)

(E501)


22-37: 🛠️ Refactor suggestion

Move model validation to configuration helper.

The model validation logic should be centralized in get_gpt_config() instead of being duplicated in endpoints.

 def get_gpt_config():
     """Helper to get GPT configuration values"""
+    SUPPORTED_MODELS = ["gpt-4o-mini", "gpt-4o"]
     OPENAI_API_KEY, GPT_ENGINE = get_configuration_value([
         {
             "key": "OPENAI_API_KEY",
             "default": os.environ.get("OPENAI_API_KEY", None),
         },
         {
             "key": "GPT_ENGINE",
             "default": os.environ.get("GPT_ENGINE", "gpt-4o-mini"),
         },
     ])
     
+    if GPT_ENGINE not in SUPPORTED_MODELS:
+        return None, None
     if not OPENAI_API_KEY or not GPT_ENGINE:
         return None, None
     return OPENAI_API_KEY, GPT_ENGINE

Likely invalid or redundant comment.


104-123: ⚠️ Potential issue

Add missing model validation in WorkspaceGPTIntegrationEndpoint.

The model validation present in GPTIntegrationEndpoint is missing here, leading to inconsistent behavior.

         OPENAI_API_KEY, GPT_ENGINE = get_gpt_config()
         
+        supported_models = ["gpt-4o-mini", "gpt-4o"]
+        if GPT_ENGINE not in supported_models:
+            return Response(
+                {"error": f"Unsupported model. Please use one of: {', '.join(supported_models)}"},
+                status=status.HTTP_400_BAD_REQUEST,
+            )
+
         if not OPENAI_API_KEY or not GPT_ENGINE:

Note: This reinforces the need to move model validation to get_gpt_config().

Likely invalid or redundant comment.

🧰 Tools
🪛 Ruff (0.8.0)

118-118: Line too long (107 > 88)

(E501)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (2)
.env.example (1)

36-38: Consider adding validation hints for API keys.

While the structure is good, it would be helpful to add comments indicating the expected format or length of API keys for each provider.

Apply this diff to add validation hints:

-OPENAI_API_KEY=your-openai-api-key
-ANTHROPIC_API_KEY=your-anthropic-api-key
-GEMINI_API_KEY=your-gemini-api-key
+OPENAI_API_KEY=your-openai-api-key  # Format: sk-... (51 characters)
+ANTHROPIC_API_KEY=your-anthropic-api-key  # Format: sk-ant-... (32+ characters)
+GEMINI_API_KEY=your-gemini-api-key  # Format: AI... (39 characters)
apiserver/plane/app/views/external/base.py (1)

38-69: Consider adding provider-specific configuration validation.

While the provider implementations are good, they could benefit from additional validation specific to each provider.

Consider adding a validation method to the base class:

 class LLMProvider:
     name: str = ""
     models: List[str] = []
     api_key_env: str = ""
     default_model: str = ""
+
+    @classmethod
+    def validate_api_key(cls, api_key: str) -> bool:
+        """Validate provider-specific API key format"""
+        return True

 class OpenAIProvider(LLMProvider):
     name = "OpenAI"
     models = ["gpt-3.5-turbo", "gpt-4o-mini", "gpt-4o", "o1-mini", "o1-preview"]
     api_key_env = "OPENAI_API_KEY"
     default_model = "gpt-4o-mini"
+
+    @classmethod
+    def validate_api_key(cls, api_key: str) -> bool:
+        return api_key.startswith("sk-") and len(api_key) >= 51
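
A sketch of where this hook could plug in. The helper below is hypothetical; LLMProvider, log_exception, and the api_key_env attribute come from this PR:

import os

def get_validated_key(provider: type) -> str | None:
    """Hypothetical helper: fetch the provider's key and reject malformed ones."""
    api_key = os.environ.get(provider.api_key_env)
    if not api_key or not provider.validate_api_key(api_key):
        log_exception(ValueError(f"Missing or malformed API key for {provider.name}"))
        return None
    return api_key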
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 954fd2f and 65acf0c.

📒 Files selected for processing (2)
  • .env.example (1 hunks)
  • apiserver/plane/app/views/external/base.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
apiserver/plane/app/views/external/base.py

112-112: Line too long (109 > 88)

(E501)


149-149: Line too long (105 > 88)

(E501)


187-187: Line too long (105 > 88)

(E501)

🔇 Additional comments (3)
.env.example (1)

32-34: LGTM: Clear configuration for LLM provider selection.

The configuration allows for flexible provider selection with clear documentation of supported options.

apiserver/plane/app/views/external/base.py (2)

3-7: LGTM: Clean import organization.

The imports are well-organized with clear separation between Python standard library, third-party packages, and module imports.


23-36: LGTM: Well-designed base class for LLM providers.

The LLMProvider base class provides a clean interface with all necessary attributes and methods for provider implementations.
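
The lookup that get_llm_config performs on the provider key is presumably a plain registry; a minimal sketch (the dict and the AnthropicProvider/GeminiProvider class names are assumptions based on the walkthrough above):

PROVIDERS = {
    "openai": OpenAIProvider,
    "anthropic": AnthropicProvider,
    "gemini": GeminiProvider,
}

def resolve_provider(provider_key: str):
    # Returns None for unknown keys, matching get_llm_config's error path
    return PROVIDERS.get(provider_key.lower())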


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (3)
apiserver/plane/app/views/external/base.py (3)

23-69: Consider enhancing provider configuration management.

The provider implementation is good, but could benefit from these improvements:

  1. Add validation for required class attributes
  2. Consider adding a method to validate API key format
  3. Add documentation for each provider's specific requirements
 class LLMProvider:
     """Base class for LLM provider configurations"""
     name: str = ""
     models: List[str] = []
     api_key_env: str = ""
     default_model: str = ""
 
+    def __init_subclass__(cls) -> None:
+        """Validate required attributes are set"""
+        required = ['name', 'models', 'api_key_env', 'default_model']
+        for attr in required:
+            if not getattr(cls, attr):
+                raise ValueError(f"{cls.__name__} must define {attr}")
+        if cls.default_model not in cls.models:
+            raise ValueError(f"Default model {cls.default_model} not in supported models")
+
+    @classmethod
+    def validate_api_key(cls, api_key: str) -> bool:
+        """Validate API key format"""
+        return bool(api_key and isinstance(api_key, str))

71-114: Improve error message handling in get_llm_config.

The error handling is good, but the error messages could be more user-friendly and consistent.

     if not provider:
         log_exception(ValueError(f"Unsupported provider: {provider_key}"))
-        return None, None, None
+        return None, None, "unsupported_provider"
 
     api_key, _ = get_configuration_value([
         {
             "key": provider.api_key_env,
             "default": os.environ.get(provider.api_key_env, None),
         }
     ])
 
     if not api_key:
         log_exception(ValueError(f"Missing API key for provider: {provider.name}"))
-        return None, None, None
+        return None, None, "missing_api_key"

116-140: Enhance error handling and improve code formatting.

The error handling is good, but could be improved with rate limiting information and better formatting.

     except Exception as e:
         log_exception(e)
         error_type = e.__class__.__name__
+        base_error = f"Error from {provider}"
         if error_type == "AuthenticationError":
-            return None, f"Invalid API key for {provider}"
+            return None, f"{base_error}: Invalid API key"
         elif error_type == "RateLimitError":
-            return None, f"Rate limit exceeded for {provider}"
+            return None, (
+                f"{base_error}: Rate limit exceeded. "
+                "Please try again in a few minutes"
+            )
         else:
-            return None, f"Error occurred while generating response from {provider}"
+            return None, f"{base_error}: Unexpected error occurred"
🧰 Tools
🪛 Ruff (0.8.2)

116-116: Line too long (109 > 88)

(E501)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65acf0c and 2360af5.

📒 Files selected for processing (1)
  • apiserver/plane/app/views/external/base.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
apiserver/plane/app/views/external/base.py

116-116: Line too long (109 > 88)

(E501)


158-158: Line too long (105 > 88)

(E501)


196-196: Line too long (105 > 88)

(E501)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
apiserver/plane/license/management/commands/configure_instance.py (1)

Line range hint 164-168: Fix GPT_ENGINE configuration.

The GPT_ENGINE configuration has two issues:

  1. It still defaults to "gpt-3.5-turbo" which contradicts the PR objective of transitioning to "gpt-4o-mini"
  2. The category is incorrectly set to "SMTP" instead of "AI"

Apply this diff to fix the configuration:

             {
                 "key": "GPT_ENGINE",
-                "value": os.environ.get("GPT_ENGINE", "gpt-3.5-turbo"),
-                "category": "SMTP",
+                "value": os.environ.get("GPT_ENGINE", "gpt-4o-mini"),
+                "category": "AI",
                 "is_encrypted": False,
             },
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2360af5 and 411e16a.

📒 Files selected for processing (1)
  • apiserver/plane/license/management/commands/configure_instance.py (2 hunks)
🔇 Additional comments (3)
apiserver/plane/license/management/commands/configure_instance.py (3)

140-151: LGTM! New AI provider keys are properly configured.

The new AI provider keys (ANTHROPIC_API_KEY, GEMINI_API_KEY) are:

  • Properly encrypted for security
  • Follow the existing configuration pattern
  • Grouped under the "AI" category

152-163: LGTM! LLM configuration looks good.

The LLM configuration (LLM_PROVIDER, LLM_MODEL):

  • Has sensible defaults (openai, gpt-4o-mini)
  • Properly unencrypted as they're not sensitive
  • Grouped under the "AI" category

137-137: Verify the impact of category change.

The OPENAI_API_KEY configuration's category has been changed from "OPENAI" to "AI". While this change makes sense for supporting multiple AI providers, it might affect existing configurations.

Run the following script to check for any existing configurations that might be affected:

✅ Verification successful

Category change from "OPENAI" to "AI" is part of a broader AI provider integration

The change is safe and intentional. The search results show that:

  1. The category change is part of a broader refactoring to support multiple AI providers (OpenAI, Anthropic, Gemini).
  2. All AI-related configurations (OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, LLM_PROVIDER, LLM_MODEL) are consistently using the "AI" category.
  3. There are no references to "OPENAI" as a category in the codebase, indicating this is a new configuration structure.
  4. The OPENAI_API_KEY is properly handled in the instance configuration and form components with the new categorization.
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for existing OPENAI_API_KEY configurations and their categories

# Search for any code that might be relying on the "OPENAI" category
rg -l "category.*OPENAI" --type py

# Search for any migrations or SQL files that might contain the old category
rg -l "OPENAI.*category" --type sql --type py

Length of output: 84


Script:

#!/bin/bash
# Let's try a broader search to understand the configuration structure and usage

# Search for OPENAI_API_KEY references
rg "OPENAI_API_KEY" -A 3 -B 3

# Search for category-related configurations
rg "category.*AI" -A 3 -B 3

# Search for any instance configuration related code
fd "configure.*\.py|instance.*\.py"

Length of output: 9039


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
apiserver/plane/app/views/external/base.py (1)

23-65: LGTM! Well-structured provider abstraction.

The provider class hierarchy is well-designed with good separation of concerns. The implementation allows for easy addition of new providers.

Consider adding docstrings to the provider classes to document their specific configurations and any provider-specific behaviors.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 411e16a and 2d597b8.

📒 Files selected for processing (2)
  • apiserver/plane/app/views/external/base.py (2 hunks)
  • apiserver/plane/license/management/commands/configure_instance.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
apiserver/plane/app/views/external/base.py

111-111: Line too long (109 > 88)

(E501)


153-153: Line too long (105 > 88)

(E501)


191-191: Line too long (105 > 88)

(E501)

🔇 Additional comments (1)
apiserver/plane/app/views/external/base.py (1)

67-109: LGTM! Robust configuration management.

The function properly handles configuration retrieval and validation with comprehensive error handling and logging.

Comment on lines +139 to +158

+        api_key, model, provider = get_llm_config()

         # Get the configuration value
         # Check the keys
-        if not OPENAI_API_KEY or not GPT_ENGINE:
+        if not api_key or not model or not provider:
             return Response(
-                {"error": "OpenAI API key and engine is required"},
+                {"error": "LLM provider API key and model are required"},
                 status=status.HTTP_400_BAD_REQUEST,
             )

         prompt = request.data.get("prompt", False)
         task = request.data.get("task", False)

         if not task:
             return Response(
                 {"error": "Task is required"}, status=status.HTTP_400_BAD_REQUEST
             )

-        final_text = task + "\n" + prompt
-
-        client = OpenAI(api_key=OPENAI_API_KEY)
-
-        response = client.chat.completions.create(
-            model=GPT_ENGINE, messages=[{"role": "user", "content": final_text}]
-        )
+        text, error = get_llm_response(task, request.data.get("prompt", False), api_key, model, provider)
+        if not text and error:
+            return Response(
+                {"error": "An internal error has occurred."},
+                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
+            )


🛠️ Refactor suggestion

Improve error handling in endpoints.

Both endpoints use the same generic error message. Consider providing more specific error messages based on the error type.

Apply this improvement to both endpoint handlers:

         if not text and error:
+            # Map error messages to user-friendly responses
+            error_mapping = {
+                "Authentication failed": ("Service configuration error", status.HTTP_503_SERVICE_UNAVAILABLE),
+                "Service temporarily unavailable": ("Please try again later", status.HTTP_429_TOO_MANY_REQUESTS),
+            }
+            error_msg, error_status = error_mapping.get(
+                error, 
+                ("An unexpected error occurred", status.HTTP_500_INTERNAL_SERVER_ERROR)
+            )
             return Response(
-                {"error": "An internal error has occurred."},
-                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
+                {"error": error_msg},
+                status=error_status,
             )

Also applies to: 177-196

🧰 Tools
🪛 Ruff (0.8.2)

153-153: Line too long (105 > 88)

(E501)

Comment on lines +111 to +134

def get_llm_response(task, prompt, api_key: str, model: str, provider: str) -> Tuple[str | None, str | None]:
    """Helper to get LLM completion response"""
    final_text = task + "\n" + prompt
    try:
        # For Gemini, prepend provider name to model
        if provider.lower() == "gemini":
            model = f"gemini/{model}"

        response = litellm.completion(
            model=model,
            messages=[{"role": "user", "content": final_text}],
            api_key=api_key,
        )
        text = response.choices[0].message.content.strip()
        return text, None
    except Exception as e:
        log_exception(e)
        error_type = e.__class__.__name__
        if error_type == "AuthenticationError":
            return None, f"Invalid API key for {provider}"
        elif error_type == "RateLimitError":
            return None, f"Rate limit exceeded for {provider}"
        else:
            return None, f"Error occurred while generating response from {provider}"


⚠️ Potential issue

Enhance error message security.

While the error handling is good, the error messages could potentially leak sensitive information about the provider configuration.

Apply this diff to make error messages more generic:

-            return None, f"Invalid API key for {provider}"
-        elif error_type == "RateLimitError":
-            return None, f"Rate limit exceeded for {provider}"
-        else:
-            return None, f"Error occurred while generating response from {provider}"
+            return None, "Authentication failed"
+        elif error_type == "RateLimitError":
+            return None, "Service temporarily unavailable"
+        else:
+            return None, "An error occurred while processing your request"
🧰 Tools
🪛 Ruff (0.8.2)

111-111: Line too long (109 > 88)

(E501)

Comment on lines +135 to 157

                "key": "LLM_API_KEY",
                "value": os.environ.get("LLM_API_KEY"),
                "category": "AI",
                "is_encrypted": True,
            },
            {
-               "key": "GPT_ENGINE",
+               "key": "LLM_PROVIDER",
                "value": os.environ.get("LLM_PROVIDER", "openai"),
                "category": "AI",
                "is_encrypted": False,
            },
            {
                "key": "LLM_MODEL",
                "value": os.environ.get("LLM_MODEL", "gpt-4o-mini"),
                "category": "AI",
                "is_encrypted": False,
            },
            # Deprecated, use LLM_MODEL
            {
                "key": "GPT_ENGINE",
                "value": os.environ.get("GPT_ENGINE", "gpt-3.5-turbo"),
                "category": "SMTP",
                "is_encrypted": False,


⚠️ Potential issue

Fix category for deprecated GPT_ENGINE configuration.

The deprecated GPT_ENGINE configuration is incorrectly categorized under "SMTP" instead of "AI".

Apply this fix:

             {
                 "key": "GPT_ENGINE",  
                 "value": os.environ.get("GPT_ENGINE", "gpt-3.5-turbo"),
-                "category": "SMTP",
+                "category": "AI",
                 "is_encrypted": False,
             },