Feature Request: OpenAI-compatible API #64
Comments
Re: OpenAI-compatible API: excellent feature request; this is a must for wide adoption. Would appreciate it if you could share the specific API spec you are referring to, and we should be able to push it pretty quickly. Thanks for sharing your detailed use case with us.
I think Podcastfy can go with the OpenAI TTS API format; the OpenAI TTS API Reference includes Python, curl, and Node examples.
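For concreteness, the request shape under discussion looks roughly like this (a minimal Python sketch of a call to OpenAI's /v1/audio/speech endpoint; the model, voice, and output filename are illustrative only):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate speech from a finished transcript via the OpenAI TTS endpoint.
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello and welcome to today's episode...",
)

# Save the returned audio bytes to disk.
response.write_to_file("speech.mp3")
```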
we should support ollama for starters.
@ralyodio Podcastfy now supports running local LLMs via llamafiles: https://github.com/souzatharsis/podcastfy/blob/main/usage/local_llm.md
What would be the value add of supporting Ollama given that? We can move the Ollama discussion to a separate issue if there's value in it, and keep this one focused on the OpenAI interface request. Curious about your experience.
A lot of people already have Ollama running, and it exposes a REST API, so apps (like this one) can easily integrate with it.
@taowang1993 I feel this API would be a bit awkward, as it focuses on only a selection of the arguments that Podcastfy normally uses. It's technically feasible (as an instance method of a new class, or even as a closure), and it would mesh very well with projects that want to expose this as a REST API, but it won't be too useful for integrating Podcastfy into larger Python projects, and it would be something we have to maintain over time. I think the documentation could instead present a recipe for creating a FastAPI endpoint that more or less respects the OpenAI openapi.json, with a real code snippet leveraging the current abstractions. That would be even more helpful for the kind of need you have, while not adding a new interface to maintain in the codebase. That's my 2cts anyway :)
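A rough sketch of what such a documentation recipe could look like is below. The `podcastfy.client.generate_podcast` import and its `text` parameter are assumptions for illustration, not a confirmed Podcastfy interface, and the endpoint only loosely mirrors the OpenAI /audio/speech shape:

```python
# Hypothetical FastAPI wrapper exposing Podcastfy behind an OpenAI-style route.
from fastapi import FastAPI
from fastapi.responses import FileResponse
from pydantic import BaseModel

from podcastfy.client import generate_podcast  # assumed entry point and signature

app = FastAPI()


class SpeechRequest(BaseModel):
    model: str = "podcastfy"  # accepted for OpenAI-client compatibility, otherwise ignored
    voice: str = "default"    # same as above
    input: str                # raw text/document Podcastfy will turn into a podcast


@app.post("/v1/audio/speech")
def create_speech(req: SpeechRequest) -> FileResponse:
    # Here "input" is source material, not a finished transcript.
    audio_path = generate_podcast(text=req.input)  # assumed to return a path to the audio file
    return FileResponse(audio_path, media_type="audio/mpeg")
```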
In the context of Podcastfy, the "input" would be the raw text (or document) that is fed to Podcastfy to convert into a podcast. In the context of OpenAI TTS, the "input" is the transcript that users want to convert into speech. The reason I propose an "OpenAI-compatible" approach is that many AI systems already have the OpenAI client SDK built in. If Podcastfy exposes an OpenAI-compatible API, it would be very easy to integrate into other systems, leading to wider adoption. If the OpenAI API format is not a good fit for Podcastfy, any other API format would also work.
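For illustration, here is a minimal sketch of how a system that bundles the OpenAI client SDK could target a Podcastfy deployment just by overriding the base URL (the host, port, route, and parameter handling below are hypothetical):

```python
from openai import OpenAI

# Point the stock OpenAI SDK at a hypothetical Podcastfy server that exposes
# an OpenAI-compatible /v1/audio/speech endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.audio.speech.create(
    model="podcastfy",  # passed through; the server may ignore it
    voice="default",    # likewise present only for compatibility
    input="Raw article text or a document to convert into a podcast...",
)
response.write_to_file("podcast.mp3")
```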
Thanks for the explanations. That's convincing!
A Docker image has been created: https://github.com/souzatharsis/podcastfy/blob/main/usage/docker.md
I've updated this issue to focus solely on enabling an OpenAI-type API.
Hi, Podcastfy Team.
I would like to make a feature request for easier deployment and integration with other systems.
Docker Support: It would be much easier to have a Dockerfile so that users can build a Docker image and deploy it to the cloud. It would be even better if a prebuilt official Docker image were provided on Docker Hub.
OpenAI-compatible API: It would be very easy to integrate Podcastfy into other systems if Podcastfy provides an API that is compatible with the OpenAI /audio/speech API format.
For example:
In Dify (an agent-building platform, similar to LangChain), I can currently generate a script and convert it into audio with TTS models by clicking the play button.
I would like to integrate Podcastfy with Dify so that I can generate podcasts.
This will open up many opportunities.
For example, I can build an AI teacher that helps people learn new languages.
Podcastfy can be used to help students improve English listening comprehension.
As you can see in the screenshot, Dify can call any OpenAI-compatible API.
Thank you.