feat: New updates on the SDK (#12)
* feat: updated the code to the new API structure

* feat: adding the embeddings api

* feat: adding test cases

* feat: embedding APIs

* fix: minor changes on tsconfig

* fix: minor changes

* feat: adding the new prompts API

* feat: adding the embeddings api

* fix: updating the embeddings API

* fix: updating the baseurl and routes

* feat: post api for proxy and minor changes on the urls

* fix: adding the post method file

* fix: updating the test cases

* fix: adding yarn.lock file

* feat: adding the CI pipeline for publishing to NPM on release

* fix: making portkey the default export

* docs: adding an example docs

* feat: adding commit checker

* fix: updating the commit checker

* fix: updating the commit checker

* fix: updating the commit checker

* fix: updating the commit checker

* feat: adding feedback routes

* fix: adding streaming to feedbacks and override configs in APIs

* feat: override of the params in chat and completions routes

* fix: feedbacks to accept array as input

* fix: config override

* fix: adding the embeddings api in client

* fix: changing traceID to trace_id

* fix: adding traceid type

* fix: adding the trace_id correction

* feat: added the getHeaders function for each response in non-streaming mode

* feat: added the completions method on embeddings

* feat: added Llamaindex integration in portkey

* fix: removing the langchain folder

* fix: fixed override of configs with empty updated config. Also added an isempty check function

* fix: added langchainjs

* fix: updating the type

* fix: overwriting the config every time

* fix: removing empty values on the headers

* fix: embeddings and prompts completion

* fix: handling last package in streaming mode

* fix: minor fix

* fix: update on the url of prompts api

* fix: adding stream to prompts body

* fix: updating the promptId to promptID

* feat: adding authorization param in the client

* fix: updating the post method

* fix: fixing the camel case issue in the body

* doc: Updating the readme file with the latest changes

* doc: URL update and contributing guidelines added

* feat: version upgrade to 1.0.0
noble-varghese authored Dec 8, 2023
1 parent 9b4d860 commit df889c8
Showing 43 changed files with 15,673 additions and 1,757 deletions.
13 changes: 13 additions & 0 deletions .eslintrc.js
@@ -0,0 +1,13 @@
module.exports = {
  root: true,
  parser: "@typescript-eslint/parser",
  plugins: ["@typescript-eslint"],
  extends: [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "prettier",
  ],
  rules: {
    // Disable the base rule; the @typescript-eslint preset ships its own
    // TypeScript-aware no-unused-vars check.
    'no-unused-vars': 'off',
  }
};
11 changes: 11 additions & 0 deletions .github/workflows/commit-checker.yml
@@ -0,0 +1,11 @@
name: verify-conventional-commits

on: [pull_request]

jobs:
  conventional-commits-checker:
    runs-on: ubuntu-latest
    steps:
      - name: verify conventional commits
        uses: taskmedia/action-conventional-commits@v1 # exact version tag assumed; obscured in the source

18 changes: 18 additions & 0 deletions .github/workflows/node-publish.yml
@@ -0,0 +1,18 @@
name: Publish Package to npmjs
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Setup .npmrc file to publish to npm
      - uses: actions/setup-node@v3
        with:
          node-version: '20.x'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm run build && npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
114 changes: 10 additions & 104 deletions README.md
@@ -36,17 +36,18 @@ $ export PORTKEY_API_KEY="PORTKEY_API_KEY"
#### Now, let's make a request with GPT-4

```diff
-import { Portkey } from "portkey-ai";
+import Portkey from 'portkey-ai';

 // Construct a client with a virtual key
 const portkey = new Portkey({
-  mode: "single",
-  llms: [{ provider: "openai", virtual_key: "open-ai-xxx" }]
-});
+  apiKey: "PORTKEY_API_KEY",
+  virtualKey: "VIRTUAL_KEY"
+})

 async function main() {
   const chatCompletion = await portkey.chat.completions.create({
     messages: [{ role: 'user', content: 'Say this is a test' }],
-    model: 'gpt-4'
+    model: 'gpt-3.5-turbo',
   });

   console.log(chatCompletion.choices);
```
@@ -58,104 +59,6 @@ main();
Portkey fully adheres to the OpenAI SDK signature. This means that you can instantly switch to Portkey and start using Portkey's advanced production features right out of the box.
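
For example, streaming uses the same OpenAI-style `stream: true` flag and async-iterator pattern. A minimal sketch, assuming the new client yields OpenAI-style delta chunks the way the repository's earlier demo did:

```js
import Portkey from 'portkey-ai';

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  virtualKey: "VIRTUAL_KEY"
});

async function streamDemo() {
  const stream = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
    stream: true,
  });
  // Each chunk carries an incremental delta, as in the OpenAI SDK.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

streamDemo();
```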


## **🪜 Detailed Integration Guide**

**There are 4️⃣ Steps to Integrate Portkey**
1. Setting your Portkey API key and your virtual key for AI providers.
2. Constructing your LLM with Portkey features, provider features (and prompt!).
3. Constructing the Portkey client and setting usage mode.
4. Making your request!

Let's dive in! If you are an advanced user and want to directly jump to various full-fledged examples, [click here](https://github.com/Portkey-AI/portkey-node-sdk/blob/main/examples).

---

### **Step 1️⃣: Get your Portkey API Key and your Virtual Keys for AI providers**

**Portkey API Key:** Log into [Portkey here](https://app.portkey.ai/), then click on the profile icon on top left and “Copy API Key”.
```bash
export PORTKEY_API_KEY="PORTKEY_API_KEY"
```
**Virtual Keys:** Navigate to the "Virtual Keys" page on [Portkey](https://app.portkey.ai/) and hit the "Add Key" button. Choose your AI provider and assign a unique name to your key. Your virtual key is ready!

### **Step 2️⃣: Construct your LLM, add Portkey features, provider features, and prompt**

**Portkey Features**:
You can find a comprehensive [list of Portkey features here](#📔-list-of-portkey-features). This includes settings for caching, retries, metadata, and more.

**Provider Features**:
Portkey is designed to be flexible. All the features you're familiar with from your LLM provider, like `top_p`, `top_k`, and `temperature`, can be used seamlessly. Check out the [complete list of provider features here](https://github.com/Portkey-AI/portkey-node-sdk/blob/539021dcae8fa0945cf7f0b8c27fc26a7dd56092/src/_types/portkeyConstructs.ts#L34).

**Setting the Prompt Input**:
This param lets you override any prompt that is passed during the completion call: set a model-specific prompt here to optimise model performance. You can set the input in two ways. For completion models like Claude and GPT3, use `prompt` (a string); for chat models like GPT3.5 & GPT4, use `messages` (an array). A completion-style sketch follows the combined example below.

Here's how you can combine everything:

```js
import { LLMOptions } from "portkey-ai";

// Portkey Config
const provider = "openai";
const virtual_key = "open-ai-xxx";
const trace_id = "portkey_sdk_test";
const cache_status = "semantic";

// Model Params
const model = "gpt-4";
const temperature = 1;

// Prompt
const messages = [{"role": "user", "content": "Who are you?"}];

const llm_a: LLMOptions = {
provider: provider,
virtual_key: virtual_key,
cache_status: cache_status,
trace_id: trace_id,
model: model,
temperature: temperature,
messages: messages
};

```
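
For a completion-style model, the same construct would carry a `prompt` string instead of a `messages` array. A hedged sketch (the model name is illustrative; the fields follow the description above):

```js
import { LLMOptions } from "portkey-ai";

// Completion-style LLM: `prompt` replaces `messages` (see the note above).
const llm_b: LLMOptions = {
  provider: "openai",
  virtual_key: "open-ai-xxx",
  trace_id: "portkey_sdk_test",
  model: "text-davinci-003", // illustrative completion model
  temperature: 1,
  prompt: "Who are you?"
};
```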

### **Step 3️⃣: Construct the Portkey Client**

The Portkey client's config takes 3 params: `api_key`, `mode`, and `llms`.

* `api_key`: You can set your Portkey API key here or with `$ export` as done above.
* `mode`: There are **3** modes - Single, Fallback, Loadbalance.
  * **Single** - This is the standard mode. Use it if you do not want the Fallback or Loadbalance features.
  * **Fallback** - Set this mode if you want to enable the Fallback feature (see the sketch after the example below).
  * **Loadbalance** - Set this mode if you want to enable the Loadbalance feature.
* `llms`: This is an array where we pass our LLMs constructed using the LLMOptions interface.

```js
import { Portkey } from "portkey-ai";

const portkey = new Portkey({ mode: "single", llms: [llm_a] });
```
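
To enable fallbacks instead, set `mode: "fallback"` and pass the LLMs in priority order; this mirrors the repository's `examples/fallback.ts`:

```js
import { Portkey } from "portkey-ai";

// Fallback mode: requests go to llm_a first, then llm_b if llm_a fails.
const portkey = new Portkey({ mode: "fallback", llms: [llm_a, llm_b] });
```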

### **Step 4️⃣: Call the Portkey Client!**

The Portkey client can do `ChatCompletions` and `Completions` calls.

Since our LLM is GPT4, we will use ChatCompletions:

```js
async function main() {
const response = await portkey.chatCompletions.create({
messages: [{ "role": "user", "content": "Who are you?" }]
});
console.log(response.choices[0].message);
};

main();
```
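
For a completion-style LLM (one constructed with `prompt` rather than `messages`), the request would go through the Completions route instead. A sketch that assumes a `completions.create` method parallel to `chatCompletions.create`; the exact signature is not shown in this diff:

```js
async function completeDemo() {
  // Assumed method name; only the chatCompletions route appears above.
  const response = await portkey.completions.create({
    prompt: "Who are you?"
  });
  console.log(response.choices[0].text);
}

completeDemo();
```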

You have integrated Portkey's Node SDK in just 4 steps!

---


## **📔 List of Portkey Features**
@@ -189,7 +92,10 @@ You can set all of these features while constructing your LLMOptions object.

---

-#### [📝 Full Documentation](https://docs.portkey.ai/) | [🛠️ Integration Requests](https://github.com/Portkey-AI/portkey-node-sdk/issues) |
+#### [📝 Full Documentation](https://docs.portkey.ai/docs) | [🛠️ Integration Requests](https://github.com/Portkey-AI/portkey-node-sdk/issues) |

+<a href="https://twitter.com/intent/follow?screen_name=portkeyai"><img src="https://img.shields.io/twitter/follow/portkeyai?style=social&logo=twitter" alt="follow on Twitter"></a>
+<a href="https://discord.gg/sDk9JaNfK8" target="_blank"><img src="https://img.shields.io/discord/1143393887742861333?logo=discord" alt="Discord"></a>

## **🛠️ Contributing**
Get started by checking out Github issues. Feel free to open an issue, or reach out if you would like to add to the project!
6 changes: 6 additions & 0 deletions babel.config.js
@@ -0,0 +1,6 @@
module.exports = {
  presets: [
    ['@babel/preset-env', { targets: { node: 'current' } }],
    '@babel/preset-typescript',
  ],
};
27 changes: 27 additions & 0 deletions build
@@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -exuo pipefail

node scripts/check-version.cjs

# Build into dist and publish the package from there,
# so that src/resources/foo.ts becomes <package root>/resources/foo.js.
# This way importing from `"portkey-ai/apis/foo"` works
# even with `"moduleResolution": "node"`.

rm -rf dist; mkdir dist

# Copy src to dist/src and build from dist/src into dist, so that
# the source map for index.js.map will refer to ./src/index.ts etc.
cp -rp src README.md dist

# Copy the changelog and license files to dist.
for file in LICENSE CHANGELOG.md; do
  if [ -e "${file}" ]; then cp "${file}" dist; fi
done

# This converts the export map paths for the dist directory
# and does a few other minor things.
node scripts/make-dist-package-json.cjs > dist/package.json

# Build to .js/.mjs/.d.ts files.
npm exec tsc-multi
48 changes: 18 additions & 30 deletions examples/demo.ts
@@ -1,35 +1,23 @@
-import { Portkey } from "../src";
+import { config } from 'dotenv';
+import Portkey from '../src';

-const client = new Portkey({
-    mode: "fallback",
-    llms: [{
-        provider: "openai",
-        virtual_key: "openai-v"
-    }]
-});
+config({ override: true })

-const messages = [
-    { content: "You want to talk in rhymes.", role: "system" },
-    { content: "Hello, world!", role: "user" },
-    { content: "Hello!", role: "assistant" },
-    {
-        content:
-            "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
-        role: "user",
-    },
-]
+// Initialize the Portkey client
+const portkey = new Portkey({
+    apiKey: process.env["PORTKEY_API_KEY"] ?? "",
+    baseURL: "https://api.portkeydev.com/v1",
+    provider: "openai",
+    virtualKey: process.env["OPENAI_VIRTUAL_KEY"] ?? ""
+});

-const prompt = "write a story about a king"
+// Generate a text completion
+async function getTextCompletion() {
+    const completion = await portkey.completions.create({
+        prompt: "Say this is a test",
+        model: "gpt-3.5-turbo-instruct",
+    });

-async function main() {
-    const params = {}
-    const res = await client.chatCompletions.create({ messages, ...params, stream: true })
-    for await (const completion of res) {
-        process.stdout.write(completion.choices[0]?.delta?.content || "");
-    }
+    console.log(completion.choices[0]?.text);
 }

-main().catch((err) => {
-    console.error(err);
-    process.exit(1);
-});
+getTextCompletion();
32 changes: 16 additions & 16 deletions examples/fallback.ts
@@ -1,20 +1,20 @@
-import { Portkey } from "../src";
+// import { Portkey } from "../src";

-const portkey = new Portkey({
-    apiKey: "your-portkey-api-key",
-    mode: "fallback",
-    llms: [
-        { provider: "openai", virtual_key: "open-ai-key-1234", trace_id: "1234", metadata: { hello: "world" } },
-        { provider: "cohere", virtual_key: "cohere-api-key-1234", trace_id: "1234", metadata: { hello: "world" } },
-    ]
-});
+// const portkey = new Portkey({
+//     apiKey: "your-portkey-api-key",
+//     mode: "fallback",
+//     llms: [
+//         { provider: "openai", virtual_key: "open-ai-key-1234", trace_id: "1234", metadata: { hello: "world" } },
+//         { provider: "cohere", virtual_key: "cohere-api-key-1234", trace_id: "1234", metadata: { hello: "world" } },
+//     ]
+// });

-async function main() {
-    const chatCompletion = await portkey.chatCompletions.create({
-        messages: [{ role: 'user', content: 'Say this is a test' }],
-    });
+// async function main() {
+//     const chatCompletion = await portkey.chatCompletions.create({
+//         messages: [{ role: 'user', content: 'Say this is a test' }],
+//     });

-    console.log(chatCompletion.choices);
-};
+//     console.log(chatCompletion.choices);
+// };

-main();
+// main();
24 changes: 12 additions & 12 deletions examples/promptGeneration.ts
@@ -1,16 +1,16 @@
-import { Portkey } from "../src";
+// import { Portkey } from "../src";

-const portkey = new Portkey({
-    mode: "fallback"
-});
+// const portkey = new Portkey({
+//     mode: "fallback"
+// });

-async function main() {
-    const chatCompletion = await portkey.generations.create({
-        promptId: "your-prompt-id",
-        // variables: {hello: "world"} # Add variables if required
-    });
+// async function main() {
+//     const chatCompletion = await portkey.generations.create({
+//         promptId: "your-prompt-id",
+//         // variables: {hello: "world"} # Add variables if required
+//     });

-    console.log(chatCompletion.data);
-};
+//     console.log(chatCompletion.data);
+// };

-main();
+// main();