diff --git a/README.md b/README.md
index 44c3fed..da83a38 100644
--- a/README.md
+++ b/README.md
@@ -1,53 +1,74 @@
-
-
-## Ship Ambitious Gen AI Apps with Portkey's full-stack LLMOps Platform
+
+## Control Panel for AI Apps
```bash
npm install portkey-ai
```
-
-## **💡 Features**
-
-**🚪 AI Gateway:**
-* **Unified API Signature**: If you've used OpenAI, you already know how to use Portkey with any other provider.
-* **Interoperability**: Write once, run with any provider. Switch between _any model_ from _any provider_ seamlessly.
-* **Automated Fallbacks & Retries**: Ensure your application remains functional even if a primary service fails.
-* **Load Balancing & A/B Testing**: Efficiently distribute incoming requests among multiple models and run A/B tests at scale.
-* **Semantic Caching**: Reduce costs and latency by intelligently caching results.
-
-**🔬 Observability:**
-* **Logging**: Keep track of all requests for monitoring and debugging.
-* **Requests Tracing**: Understand the journey of each request for optimization.
-* **Custom Tags**: Segment and categorize requests for better insights.
-
+## Features
+
+### AI Gateway
+
+* **Unified API Signature**: If you've used OpenAI, you already know how to use Portkey with any other provider.
+* **Interoperability**: Write once, run with any provider. Switch between _any model_ from _any provider_ seamlessly.
+* **Automated Fallbacks & Retries**: Ensure your application remains functional even if a primary service fails (see the config sketch below).
+* **Load Balancing**: Efficiently distribute incoming requests among multiple models.
+* **Semantic Caching**: Reduce costs and latency by intelligently caching results.
+* **Virtual Keys**: Secure your LLM API keys by storing them in Portkey's vault and using disposable virtual keys.
+* **Request Timeouts**: Manage unpredictable LLM latencies by setting custom request timeouts on requests.
+
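+Fallbacks and load balancing are driven by a gateway config passed to the client. A minimal sketch, assuming a config object with a fallback `strategy` and an ordered list of `targets` (the virtual key names here are placeholders):
+
+```js
+import Portkey from 'portkey-ai';
+
+// A hedged sketch: try the first target, and fall back to the second
+// if it fails. "openai-virtual-key" and "anthropic-virtual-key" are
+// placeholder names for virtual keys stored in your Portkey vault.
+const portkey = new Portkey({
+  apiKey: "PORTKEY_API_KEY",
+  config: {
+    strategy: { mode: "fallback" },
+    targets: [
+      { virtual_key: "openai-virtual-key" },
+      { virtual_key: "anthropic-virtual-key" },
+    ],
+  },
+});
+```
+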
+### Observability
+
+* **Logging**: Keep track of all requests for monitoring and debugging.
+* **Request Tracing**: Understand the journey of each request for optimization (see the sketch below).
+* **Custom Metadata**: Segment and categorize requests for better insights.
+* **Feedback**: Collect and analyze weighted feedback on requests from users.
+* **Analytics**: Track your app and LLM performance with 40+ production-critical metrics in a single place.
+
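+Tracing and metadata are attached when constructing the client. A minimal sketch, assuming `traceID` and `metadata` client options (check the Portkey docs for the exact option names; all values here are placeholders):
+
+```js
+import Portkey from 'portkey-ai';
+
+// A hedged sketch: tag requests so they can be traced and segmented
+// in the Portkey dashboard. All values below are placeholders.
+const portkey = new Portkey({
+  apiKey: "PORTKEY_API_KEY",
+  virtualKey: "VIRTUAL_KEY",
+  traceID: "checkout-flow-42",                     // groups related requests into one trace
+  metadata: { _user: "user-123", env: "staging" }, // searchable tags on every request
+});
+```
+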
+## Usage
+#### Prerequisites
+1. [Sign up on Portkey](https://app.portkey.ai/) and grab your Portkey API Key
+2. Add your [OpenAI key](https://platform.openai.com/api-keys) to Portkey's Virtual Keys page and keep it handy
-## **🚀 Quick Start**
-
-#### First, install the SDK & export Portkey API Key
-[Get Portkey API key here.](https://app.portkey.ai/signup)
```bash
+# Install the SDK and export your Portkey API key
+
$ npm install portkey-ai
$ export PORTKEY_API_KEY="PORTKEY_API_KEY"
```
-#### Now, let's make a request with GPT-4
-
+#### Making a Request to OpenAI
+* Portkey fully adheres to the OpenAI SDK signature. You can instantly switch to Portkey and start using our production features right out of the box.
+* Just replace `import OpenAI from 'openai'` with `import Portkey from 'portkey-ai'`:
```js
import Portkey from 'portkey-ai';
-// Construct a client with a virtual key
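+// Construct a client with a virtual key; the Portkey API key is read
+// from the PORTKEY_API_KEY environment variable exported above.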
const portkey = new Portkey({
- apiKey: "PORTKEY_API_KEY",
virtualKey: "VIRTUAL_KEY"
})
async function main() {
const chatCompletion = await portkey.chat.completions.create({
messages: [{ role: 'user', content: 'Say this is a test' }],
- model: 'gpt-3.5-turbo',
+ model: 'gpt-4',
});
console.log(chatCompletion.choices);
@@ -56,46 +77,10 @@ async function main() {
main();
```
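+Because the signature stays the same, switching providers is just a matter of pointing at a different virtual key. A hedged sketch (the virtual key is a placeholder, and note that Anthropic's API requires `max_tokens`):
+
+```js
+import Portkey from 'portkey-ai';
+
+// Same call shape, different provider: an Anthropic virtual key this time.
+// "ANTHROPIC_VIRTUAL_KEY" is a placeholder for a key from your Portkey vault.
+const anthropic = new Portkey({ virtualKey: "ANTHROPIC_VIRTUAL_KEY" });
+
+const reply = await anthropic.chat.completions.create({
+  messages: [{ role: 'user', content: 'Say this is a test' }],
+  model: 'claude-2',
+  max_tokens: 64,
+});
+console.log(reply.choices);
+```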
-Portkey fully adheres to the OpenAI SDK signature. This means that you can instantly switch to Portkey and start using Portkey's advanced production features right out of the box.
-
-
-
-
-## **📔 List of Portkey Features**
-
-You can set all of these features while constructing your LLMOptions object.
-
-
-| Feature | Config Key | Value(Type) | Required |
-|--|--|--|--|
-| API Key OR Virtual Key | `api_key` OR `virtual_key` | `string` | ✅ Required |
-| Provider Name | `provider` | `openai`, `cohere`, `anthropic`, `azure-openai` | ✅ Required |
-| Model Name | `model` | The relevant model name from the provider. For example, `gpt-3.5-turbo` OR `claude-2` | ❔ Optional |
-| Weight (For Loadbalance) | `weight` | `integer` | ❔ Optional |
-| Cache Type | `cache_status` | `simple`, `semantic` | ❔ Optional |
-| Force Cache Refresh | `cache_force_refresh` | `True`, `False` (Boolean) | ❔ Optional |
-| Cache Age | `cache_age` | `integer` (in seconds) | ❔ Optional |
-| Trace ID | `trace_id` | `string` | ❔ Optional |
-| Retries | `retry` | `integer` [0,5] | ❔ Optional |
-| Metadata | `metadata` | `json object` [More info](https://docs.portkey.ai/key-features/custom-metadata) | ❔ Optional |
-| All Model Params | As per the model/provider | This is params like `top_p`, `temperature`, etc | ❔ Optional |
-
-## **🤝 Supported Providers**
-
-|| Provider | Support Status | Supported Endpoints |
-|---|---|---|---|
-| | OpenAI | ✅ Supported | `/completion`, `/chatcompletion` |
-| | Azure OpenAI | ✅ Supported | `/completion`, `/chatcompletion` |
-| | Anthropic | ✅ Supported | `/complete` |
-| | Cohere | ✅ Supported | `generate` |
-
-
----
-
-#### [📝 Full Documentation](https://docs.portkey.ai/docs) | [🛠️ Integration Requests](https://github.com/Portkey-AI/portkey-node-sdk/issues) |
+#### [Check out Portkey docs for the full list of supported providers](https://portkey.ai/docs/welcome/what-is-portkey#ai-providers-supported)
-## **🛠️ Contributing**
-Get started by checking out Github issues. Feel free to open an issue, or reach out if you would like to add to the project!
\ No newline at end of file
+## Contributing
+Get started by checking out the open GitHub issues. Email us at support@portkey.ai or ping us on Discord to chat.