
AI Automation Suggester 1.2.1

Release Date: 2025-01-05

Highlights

  • Improved Prompt Management & Token Handling

    • Added approximate token counting and truncation for both OpenAI and Google calls to avoid sending overly large prompts (see the sketch after this list).
    • Ensures requests stay within each provider’s maximum token limit (e.g., 30,720 tokens for Google, 32,768 for OpenAI).
  • Model-Specific Parameter Adjustments

    • For OpenAI:
      • Standard models continue to use max_tokens.
      • New or special models (e.g., gpt-4o, o1, o1-mini, o1-preview) that require max_completion_tokens are now properly supported.
      • Prevents the “Unsupported parameter: 'max_tokens'” error on models that do not accept max_tokens.
    • For Google:
      • Simplified the request to use maxOutputTokens only; removed checks for non-Google models (e.g., gpt-4o, o1-preview).
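
For reference, the new token handling follows the general pattern below. This is a minimal illustrative sketch, assuming a rough 4-characters-per-token estimate; the helper names and the heuristic are assumptions, not the exact code in coordinator.py.

```python
import logging

_LOGGER = logging.getLogger(__name__)

# Provider limits quoted in this release.
MAX_TOKENS_GOOGLE = 30_720
MAX_TOKENS_OPENAI = 32_768


def estimate_tokens(text: str) -> int:
    """Roughly estimate the token count (~4 characters per token)."""
    return len(text) // 4


def truncate_prompt(prompt: str, max_tokens: int) -> str:
    """Trim the prompt so its estimated token count stays within the provider limit."""
    estimated = estimate_tokens(prompt)
    if estimated <= max_tokens:
        return prompt
    _LOGGER.warning(
        "Prompt (~%d tokens) exceeds the %d-token limit; truncating", estimated, max_tokens
    )
    return prompt[: max_tokens * 4]
```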

Detailed Changes

  1. OpenAI Fixes

    • Added logic to detect specific models needing max_completion_tokens instead of max_tokens (e.g., gpt-4o, o1, o1-mini, o1-preview); see the request-building sketch after this list.
    • When prompt size is too large, we truncate it to stay within a safe limit (default ~32K tokens).
  2. Google API Updates

    • Introduced token counting to cap request size at 30,720 tokens, addressing the INVALID_ARGUMENT errors returned by the Google API.
    • Removed references to non-Google models in the Google request code.
  3. General Resilience

    • Safer request-building steps guard against sending huge prompts or invalid parameters.
    • Logging improvements to warn when prompts exceed size limits.
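
For reference, the provider-specific request building follows the general pattern below. The helper names and payload shapes are illustrative assumptions; only the parameter names (max_tokens, max_completion_tokens, maxOutputTokens) and the affected model names come from this release.

```python
# Models that reject max_tokens and require max_completion_tokens instead.
MODELS_USING_MAX_COMPLETION_TOKENS = {"gpt-4o", "o1", "o1-mini", "o1-preview"}


def build_openai_body(model: str, messages: list, output_budget: int) -> dict:
    """Choose the output-limit parameter that the selected OpenAI model accepts."""
    body = {"model": model, "messages": messages}
    if model in MODELS_USING_MAX_COMPLETION_TOKENS:
        body["max_completion_tokens"] = output_budget
    else:
        body["max_tokens"] = output_budget
    return body


def build_google_body(prompt: str, output_budget: int) -> dict:
    """Google requests always use maxOutputTokens; no model-specific branching."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"maxOutputTokens": output_budget},
    }
```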

Upgrading

  • No special steps required for upgrading from 1.2.0 to 1.2.1.
  • Restart Home Assistant after updating to ensure the changes take effect.

Notes

  • If you continue to see truncation warnings, consider refining your prompts or summarizing content.
  • If you use custom models not listed here (beyond gpt-4o, o1, o1-mini, o1-preview), you can extend the model checks in coordinator.py in the same way to handle their parameters; see the sketch after this list.
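
For example, a hypothetical extension could look like the following; the set name is illustrative and may not match the actual structure of coordinator.py.

```python
# Hypothetical: add any custom model that rejects max_tokens to the check.
MODELS_USING_MAX_COMPLETION_TOKENS = {
    "gpt-4o",
    "o1",
    "o1-mini",
    "o1-preview",
    "my-custom-model",  # illustrative placeholder for your model's name
}
```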

Thank you for using AI Automation Suggester!