⚠️ Please check that this feature request hasn't been suggested before.
I searched previous Ideas in Discussions and didn't find any similar feature requests.
I searched previous Issues and didn't find any similar feature requests.
🔖 Feature description
I am proposing to add image generation models as a new type of agent.
For instance:
```yaml
workflows:
  - bedrock:
      model: bedrock/stability.stable-diffusion-xl-v0
      name: Image generator assistant
      intro: 👋 Hi there! How can I help search for answers on the internet.
```
Adding this as a new tool is another option, but I don't think it makes sense to call an LLM inference API just to decide to call the tool. If we want to generate an image, we will always choose the image generation tool anyway, so routing through the LLM only adds latency and cost. That's why I think adding image generation as an agent type is the more viable option.
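For illustration, here is a minimal sketch (not tied to the current codebase) of what such an agent type could do internally when invoked: call the Bedrock runtime directly with the model ID from the workflow config, with no LLM routing step in between. It assumes boto3 and the Bedrock `invoke_model` API; the `generate_image` helper and the parameter values are hypothetical, and the payload shape follows the Stability SDXL request format on Bedrock.

```python
import base64
import json

import boto3

# Sketch: the image-generation agent calls the Bedrock runtime directly
# with the model configured in the workflow, skipping any LLM tool-routing call.
bedrock = boto3.client("bedrock-runtime")  # region/credentials assumed to be configured


def generate_image(prompt: str, model_id: str = "stability.stable-diffusion-xl-v0") -> bytes:
    # Payload follows the Stability SDXL format on Bedrock;
    # cfg_scale/steps are illustrative defaults, not project settings.
    body = json.dumps({
        "text_prompts": [{"text": prompt}],
        "cfg_scale": 7,
        "steps": 30,
    })
    response = bedrock.invoke_model(
        modelId=model_id,
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    payload = json.loads(response["body"].read())
    # SDXL on Bedrock returns base64-encoded images under "artifacts".
    return base64.b64decode(payload["artifacts"][0]["base64"])
```

Because the agent type is declared in the workflow config, the request can go straight to the image model, which is exactly the latency and cost saving argued for above.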
Acknowledgements
My issue title is concise and descriptive.
I have searched the existing issues to make sure this feature has not been requested yet.
I have provided enough information for the maintainers to understand and evaluate this request.