Right now, the model is fixed (`gpt-4o-mini` for OpenAI, `mistral` for Ollama), but LLMs are evolving quickly, and everyone has different preferences based on performance, cost, or personal choice. It'd be great to have an option to select a different model.
Would it be possible to add a `--model` option (or something similar) to let users choose? This would make things way more flexible and future-proof.
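For illustration, here's a rough sketch of what I have in mind, using commander for argument parsing. The `--provider` flag and variable names are just guesses at the current internals, and the defaults mirror the models that are hard-coded today:

```ts
// Hypothetical sketch, not the tool's actual code.
import { Command } from "commander";

const program = new Command();

program
  .option("--provider <provider>", "LLM provider (openai | ollama)", "openai")
  .option("--model <name>", "model to use for the chosen provider");

program.parse(process.argv);

const { provider, model } = program.opts();

// Fall back to the current hard-coded defaults when --model is omitted.
const defaults: Record<string, string> = {
  openai: "gpt-4o-mini",
  ollama: "mistral",
};

const selectedModel = model ?? defaults[provider];
console.log(`Using ${provider} with model ${selectedModel}`);
```

Keeping the existing models as defaults would make this fully backward-compatible for current users.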
Thanks for the awesome tool! 🚀
Hi! Thank you for your contribution and improvement ideas!
I agree that model selection would be valuable. Unfortunately, I've lost access to the NPM account used for publishing this package, which has diminished my motivation to develop it further.
Currently, people mainly use direct installation rather than the NPM version, which complicates onboarding and limits the product's potential.
Nevertheless, I actively monitor open pull requests and welcome contributions adding support for any model. I believe any LLM could make a script modification like this in just a few clicks.
Additionally, we could integrate the Vercel AI SDK for a unified codebase and access to all underlying models, though this might be more complex.
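For reference, a minimal sketch of what the Vercel AI SDK route might look like, assuming the `ai` and `@ai-sdk/openai` packages; the wiring into this tool is hypothetical:

```ts
// Hedged sketch of the Vercel AI SDK approach; how this plugs into
// the existing script is an assumption, not the current code.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

async function run(modelName: string, prompt: string): Promise<string> {
  // The same generateText call accepts any provider's model object,
  // which is what would unify the OpenAI and Ollama code paths.
  const { text } = await generateText({
    model: openai(modelName), // e.g. "gpt-4o-mini"
    prompt,
  });
  return text;
}
```

A community Ollama provider (e.g. the `ollama-ai-provider` package) should slot into the same `generateText` call, so the `--model` flag would only need to pick the provider and model name.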