Apparently there's a new createEmbedding function whose purpose is to let you create embeddings for a chunk of data with a cheaper engine, e.g. babbage, and then use those embeddings in calls to davinci.
It's a processing optimization and, especially, a cost optimization.
I think in JavaScript it is called like this, but I haven't been able to get it to work because the library 'illegally' sets a User-Agent header, so XHR never actually makes the 'createEmbedding' call from the browser.
const getOpenAI = () => {
  const { Configuration, OpenAIApi } = require("openai")
  const configuration = new Configuration({
    apiKey: process.env.REACT_APP_OPEN_AI_KEY,
  })
  return new OpenAIApi(configuration)
}

// The OpenAI team's suggestion is, depending on the scenario, to create and
// store embeddings using a cheaper engine like babbage, and then use the
// embeddings in a call to davinci.
const handleCreateEmbeddings = async () => {
  const openai = getOpenAI()
  // Note: the embeddings endpoint takes `input`, not the completion
  // parameters `prompt`/`max_tokens`, and it expects an embeddings engine
  // such as "text-similarity-babbage-001" rather than the completion
  // engine "text-babbage-001".
  const response = await openai.createEmbedding("text-similarity-babbage-001", {
    input: "Say this is a test",
  })
  console.log("response", response)
}
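Since the blocker above is the SDK trying to set a forbidden User-Agent header, one browser-side workaround is to skip the SDK and call the REST endpoint directly with fetch, which lets the browser supply its own User-Agent. This is a sketch, not part of the openai package: the endpoint shape and model name are assumptions that have varied across API versions, and shipping an API key in browser code exposes it, so a server-side proxy is safer.

```javascript
// Pure helper so the request body is easy to inspect; the model name is an
// assumption — swap in whichever embeddings engine you're actually using.
const buildEmbeddingRequest = (input, model = "text-similarity-babbage-001") => ({
  model,
  input,
})

// Call the embeddings REST endpoint directly instead of going through the
// SDK's axios client (which is what attempts to set the forbidden header).
const createEmbeddingViaFetch = async (input) => {
  const response = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.REACT_APP_OPEN_AI_KEY}`,
    },
    body: JSON.stringify(buildEmbeddingRequest(input)),
  })
  return response.json()
}
```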
That's one reason why I've been using this openai-api wrapper library.
Copy-pasted text from Juston at OpenAI:
"As an alternative, we'd recommend looking into implementing an Embeddings + Completion call workflow, where you embed all of your documents/information and store the embeddings. Afterwards, you embed your query/question, compare it to your stored embeddings to find the nearest neighbors, and then use the nearest neighbors to provide "context" for your completion call. With this method, you're only charged for the cost of embedding the documents a single time vs every time with a search call. Here's an example that illustrates this -> https://beta.openai.com/playground/p/TLxOrLWyAY8fsO1G0XXt72GN?model=text-davinci-001"
Initial info from the OpenAI team is here: https://openai.com/blog/introducing-text-and-code-embeddings/