
Feature request: support the new createEmbedding call #54

Open
grmatthews opened this issue Mar 13, 2022 · 0 comments
grmatthews commented Mar 13, 2022

Apparently there's a new createEmbedding function whose purpose is to let you create embeddings for a chunk of data with a cheaper engine, e.g. babbage, and then use those embeddings in calls to davinci.

It's a processing optimization and, especially, a cost optimization.

Initial info from the OpenAI team is here: https://openai.com/blog/introducing-text-and-code-embeddings/

I think in JavaScript it is called like this, but I haven't been able to get it to work because the library 'illegally' sets a User-Agent header, so XHR doesn't actually make the createEmbedding call from the browser.

const getOpenAI = () => {
    const { Configuration, OpenAIApi } = require("openai")
    const configuration = new Configuration({
        apiKey: process.env.REACT_APP_OPEN_AI_KEY,
    })
    return new OpenAIApi(configuration)
}

// The OpenAI team's suggestion is, depending on the scenario, to create and store
// embeddings using a cheaper engine like babbage, and then use the embeddings in
// a call to davinci.
const handleCreateEmbeddings = async () => {
    const openai = getOpenAI()
    // The embeddings endpoint takes an `input` field, not the completion
    // parameters (`prompt`, `max_tokens`), and the embeddings engines from the
    // blog post are named like "text-similarity-babbage-001" rather than
    // "text-babbage-001".
    const response = await openai.createEmbedding("text-similarity-babbage-001", {
        input: "Say this is a test",
    })

    console.log("response", response)
}

That's one reason why I've been using this openai-api wrapper library.
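As a hypothetical workaround for the User-Agent problem, the embeddings endpoint could be called directly with fetch instead of going through the SDK, so the browser sets its own User-Agent. This sketch only builds the request; the per-engine URL and body shape follow the embeddings announcement linked above, and buildEmbeddingRequest is an illustrative helper, not part of any library:

```javascript
// Sketch: build a raw request to the per-engine embeddings endpoint.
// Assumptions: the endpoint is POST /v1/engines/{engine}/embeddings with a
// JSON body of { input }, per OpenAI's embeddings announcement; verify against
// the current API docs before relying on it.
const buildEmbeddingRequest = (engine, input, apiKey) => ({
    url: `https://api.openai.com/v1/engines/${engine}/embeddings`,
    options: {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({ input }),
    },
})

// Usage in the browser would then be:
//   const { url, options } = buildEmbeddingRequest(
//       "text-similarity-babbage-001", "Say this is a test", key)
//   const response = await fetch(url, options)
```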

Copy-pasted text from Juston at OpenAI:
"As an alternative, we'd recommend looking into implementing an Embeddings + Completion call workflow, where you embed all of your documents/information and store the embeddings. Afterwards, you embed your query/question, compare it to your stored embeddings to find the nearest neighbors, and then use the nearest neighbors to provide "context" for your completion call. With this method, you're only charged for the cost of embedding the documents a single time vs every time with a search call. Here's an example that illustrates this -> https://beta.openai.com/playground/p/TLxOrLWyAY8fsO1G0XXt72GN?model=text-davinci-001"
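The workflow Juston describes needs no further API calls once the document embeddings are stored: at query time you embed the question, compare it to the stored embeddings by similarity, and feed the nearest neighbors to the completion call as context. A minimal sketch of the comparison step, assuming cosine similarity over equal-length vectors (cosineSimilarity and nearestNeighbors are illustrative helpers, not library functions):

```javascript
// Cosine similarity between two embedding vectors of equal length.
const cosineSimilarity = (a, b) => {
    let dot = 0, normA = 0, normB = 0
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Given a query embedding and stored { text, embedding } records,
// return the k records most similar to the query, highest score first.
const nearestNeighbors = (queryEmbedding, stored, k) =>
    stored
        .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
        .sort((x, y) => y.score - x.score)
        .slice(0, k)

// The nearest neighbors' text would then be prepended as "context" to the
// prompt of a single davinci completion call, so the documents are only
// embedded (and paid for) once.
```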
