I've purchased a lifetime subscription to msty with the hope of simplifying LLM management, particularly for local models and basic client functionality. However, I'm encountering some challenges due to a lack of comprehensive documentation. Here are my main concerns:
Instructions: There are no clear instructions explaining how the core features are supposed to work, especially for users running local models.
LocalAI Tuning: While there are numerous options for tuning LocalAI, there's insufficient information on which settings to adjust for optimal results.
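For context, here is the kind of guidance I'm hoping the documentation could provide. Since LocalAI exposes an OpenAI-compatible API, I assume the tuning options map onto the usual sampling parameters; the sketch below shows those parameters with illustrative values (my guesses, not values recommended anywhere in msty's tooltips):

```python
import json

# Common sampling parameters accepted by OpenAI-compatible backends such as
# LocalAI. The values below are illustrative starting points only, not
# settings documented or recommended by msty.
def build_completion_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,   # lower = more deterministic output
        "top_p": 0.9,         # nucleus sampling cutoff
        "max_tokens": 512,    # cap on generated response length
    }

payload = build_completion_request("llama-3-8b", "Summarize this issue.")
print(json.dumps(payload, indent=2))
```

Even a short table explaining which of these knobs msty surfaces, and sensible ranges for each, would go a long way.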
Web Search Functionality: Despite enabling web search for real-time data, the AI models often insist they don't have internet access, even when web results appear in the chat. This inconsistency persists across several locally hosted models, despite appropriate prompting.
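My guess is that the web search feature injects the fetched results into the prompt, and the local models simply aren't being told to treat them as live data. Since the implementation is undocumented, the sketch below is purely an assumption on my part (all names and message wording are hypothetical) about what I'd expect the injection step to look like:

```python
# Hypothetical sketch of how a client like msty might pass web search results
# to a local model. The real implementation is undocumented; the function
# name, result fields, and system-prompt wording are assumptions.
def inject_search_results(question: str, results: list[dict]) -> list[dict]:
    # Flatten each result into a one-line bullet for the model to read.
    context = "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)
    system = (
        "You have been provided live web search results below. "
        "Treat them as current information; do not claim you lack "
        "internet access.\n\nSearch results:\n" + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = inject_search_results(
    "Who won the race today?",
    [{"title": "Race results", "snippet": "Team A won by 2 seconds."}],
)
print(msgs[0]["content"])
```

If something like this is what happens under the hood, documenting it would let users diagnose whether the problem is the injection step or the model ignoring the system prompt.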
Documentation: The current documentation seems limited to tooltips. More comprehensive baseline documentation would greatly improve the tool's usability and effectiveness.
Implementation Details: More information on the underlying implementation would be helpful for troubleshooting issues like the web search inconsistency mentioned above.
In summary, while the tool shows promise, the lack of detailed documentation makes it hard to use effectively. At a minimum, I'd request baseline documentation that goes beyond the current tooltips. If such documentation already exists and I've overlooked it, please point me in the right direction.
Thank you for your attention to these concerns. I believe addressing them would significantly improve the user experience for those of us working with local models and trying to leverage msty's full potential.