This issue will be updated over time, as the list of changes is not exhaustive.
Hello everyone,
We have recently launched the SmartGPT UI as a small, exclusive preview for AI Insiders. This was intended to gather some initial feedback, as well as to test the online deployment without overwhelming our initial capacity.
Already, I think that SmartGPT showcases an exciting way to unlock the potential of even the current generation of large language models. Utilizing their capabilities to generate diverse outputs, and crucially to reflect on those generations, helps produce higher-quality output. More importantly, SmartGPT prompting can, as we have already highlighted, enable LLMs to solve tasks that they could not solve with standard forward text generation.
But we are not done with SmartGPT. We have plans for further expansion.
Features
Right now, SmartGPT UI already has some great features:
Generate smart answers to your requests using multiple initial requests, researcher prompts, and a resolver prompt
Ability to utilize multiple models, in particular using different models for the original requests and for the researcher/resolver prompts
Optimized default prompts (system, assistant, researcher prompts, ...), but also the ability to use your own prompts
Save and load prompt configurations
A history of conversations, including the option to import and export conversations
An easy way to have conversations with models from different providers via the normal chat interface
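For readers unfamiliar with the technique, the SmartGPT flow behind the features above can be sketched in a few lines: the request is answered several times, a "researcher" prompt critiques the candidate answers, and a "resolver" prompt merges them into a final answer. This is only an illustrative sketch, not the UI's actual implementation; `call_model`, the function names, and the prompt wording are all assumptions standing in for real model calls and the tuned default prompts.

```python
from typing import Callable, List

def smartgpt(
    request: str,
    call_model: Callable[[str], str],  # placeholder for a real LLM API call
    n_candidates: int = 3,
) -> str:
    # Step 1: generate several diverse candidate answers to the request.
    candidates: List[str] = [
        call_model(f"Question: {request}\nAnswer:") for _ in range(n_candidates)
    ]
    numbered = "\n".join(f"Answer {i + 1}: {c}" for i, c in enumerate(candidates))

    # Step 2: the researcher prompt reflects on the candidates,
    # listing the flaws in each one.
    critique = call_model(
        f"Question: {request}\n{numbered}\n"
        "List the flaws and faulty logic of each answer."
    )

    # Step 3: the resolver prompt combines the candidates and the critique
    # into a single improved final answer.
    return call_model(
        f"Question: {request}\n{numbered}\nCritique: {critique}\n"
        "Resolve the critiques and print the single best improved answer."
    )

# Example with a trivial stub model (a real deployment would call an LLM API,
# possibly a different model per stage, as the feature list mentions):
final = smartgpt("What is 2 + 2?", call_model=lambda prompt: "4")
```

Note that nothing forces the three stages to use the same model; passing separate callables for the initial, researcher, and resolver stages is a straightforward extension.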
Roadmap
We are working on some improvements and even major new features. And we would love to see your support.
Improved error messages (right now the error messages are often empty)
Better formatting of the output (e.g. coloring for different models)
Integration of additional models (Gemini API planned)
Using different models for the initial model calls
Enable proper chatting after the SmartGPT phase, i.e. continuing the conversation after the initial prompts
Multi-modality: read images as input, and potentially even generate images as intermediate outputs with a similar researcher/resolver scheme
Document reading / file upload
Variable reasoning graphs: SmartGPT currently uses a fixed structure of initial calls, researcher, and resolver; more flexible structures (similar to Graph of Thoughts) can give improved results in some cases
Automated prompt tuning
and potentially more...