AI Automation Suggester 1.2.1
Release Date: 2025-01-05
Highlights
- Improved Prompt Management & Token Handling
  - Added approximate token counting and truncation for both OpenAI and Google calls to avoid sending overly large prompts (see the sketch after this list).
  - Ensures requests stay within each provider’s maximum token limit (e.g., 30,720 tokens for Google, 32,768 for OpenAI).
- Model-Specific Parameter Adjustments
  - For OpenAI:
    - Standard models continue to use `max_tokens`.
    - New or special models (e.g., `gpt-4o`, `o1`, `o1-mini`, `o1-preview`) that require `max_completion_tokens` are now properly supported.
    - Prevents the “Unsupported parameter: 'max_tokens'” error on models that do not accept `max_tokens`.
  - For Google:
    - Simplified approach that uses `maxOutputTokens` only; removed checks for non-Google models (e.g., `gpt-4o`, `o1-preview`).
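The truncation behavior can be pictured roughly as below. This is a minimal sketch, not the integration's actual code: the ~4-characters-per-token heuristic and the names `estimate_tokens`, `truncate_prompt`, `MAX_PROMPT_TOKENS_OPENAI`, and `MAX_PROMPT_TOKENS_GOOGLE` are illustrative assumptions; only the limits (32,768 and 30,720 tokens) and the truncation warning come from the notes above.

```python
import logging

_LOGGER = logging.getLogger(__name__)

# Limits quoted in the release notes; the constant names are illustrative.
MAX_PROMPT_TOKENS_OPENAI = 32_768
MAX_PROMPT_TOKENS_GOOGLE = 30_720


def estimate_tokens(text: str) -> int:
    """Approximate the token count (assumes roughly 4 characters per token)."""
    return len(text) // 4 + 1


def truncate_prompt(prompt: str, max_tokens: int) -> str:
    """Trim the prompt so its estimated token count stays at or below max_tokens."""
    if estimate_tokens(prompt) <= max_tokens:
        return prompt
    _LOGGER.warning(
        "Prompt is ~%d tokens; truncating to ~%d tokens",
        estimate_tokens(prompt), max_tokens,
    )
    # Keep roughly max_tokens worth of characters.
    return prompt[: max_tokens * 4]
```

The same helper can be applied with either provider's limit before a request is built.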
Detailed Changes
- OpenAI Fixes
  - Added logic to detect specific models needing `max_completion_tokens` instead of `max_tokens` (e.g., `gpt-4o`, `o1`, `o1-mini`, `o1-preview`); see the parameter-selection sketch after this list.
  - When prompt size is too large, we truncate it to stay within a safe limit (default ~32K tokens).
- Google API Updates
  - Introduced token counting to cap request size at 30,720 tokens, addressing the `INVALID_ARGUMENT` errors; see the request-body sketch after this list.
  - Removed references to non-Google models in the Google request code.
- General Resilience
  - Safer request-building steps guard against sending huge prompts or invalid parameters.
  - Logging improvements to warn when prompts exceed size limits.
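The OpenAI parameter switch can be sketched as follows. The set name and the `build_openai_payload` helper are hypothetical illustrations rather than the code in `coordinator.py`; the model names and the two parameter names come from the notes above.

```python
# Models that, per the notes above, reject "max_tokens" and expect
# "max_completion_tokens" instead. The set name is illustrative.
MODELS_REQUIRING_MAX_COMPLETION_TOKENS = {"gpt-4o", "o1", "o1-mini", "o1-preview"}


def build_openai_payload(model: str, messages: list, token_limit: int) -> dict:
    """Build a chat-completions request body with the token parameter the model accepts."""
    payload = {"model": model, "messages": messages}
    if model in MODELS_REQUIRING_MAX_COMPLETION_TOKENS:
        payload["max_completion_tokens"] = token_limit
    else:
        payload["max_tokens"] = token_limit
    return payload
```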
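The Google side can be pictured in a similar way. The `build_google_payload` helper, its 1,024-token output default, and the 4-characters-per-token estimate are assumptions made for illustration; the 30,720-token input cap and the use of `maxOutputTokens` alone are what the notes above describe.

```python
MAX_GOOGLE_INPUT_TOKENS = 30_720  # input cap noted above


def build_google_payload(prompt: str, max_output_tokens: int = 1024) -> dict:
    """Build a generateContent-style body that sets only maxOutputTokens."""
    # Rough 4-characters-per-token estimate, mirroring the truncation sketch earlier.
    if len(prompt) // 4 + 1 > MAX_GOOGLE_INPUT_TOKENS:
        prompt = prompt[: MAX_GOOGLE_INPUT_TOKENS * 4]
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"maxOutputTokens": max_output_tokens},
    }
```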
Upgrading
- No special steps required for upgrading from 1.2.0 to 1.2.1.
- Restart Home Assistant after updating to ensure the changes take effect.
Notes
- If you continue to see truncation warnings, consider refining your prompts or summarizing content.
- If you use custom models not listed here (beyond `gpt-4o`, `o1`, `o1-mini`, `o1-preview`), you can extend the logic in `coordinator.py` similarly to handle model-specific parameters; a hypothetical example follows this list.
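As a purely hypothetical example of that extension, a custom model could be added to whatever collection drives the `max_completion_tokens` check in `coordinator.py` (the set name and the custom model name below are made up):

```python
MODELS_REQUIRING_MAX_COMPLETION_TOKENS = {
    "gpt-4o",
    "o1",
    "o1-mini",
    "o1-preview",
    "my-custom-model",  # hypothetical entry for a custom model
}
```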
Thank you for using AI Automation Suggester!