docs: DOC-259: Enhance Prompt and other fixes #6591

Merged 5 commits on Nov 5, 2024
7 changes: 4 additions & 3 deletions docs/source/guide/prompts_create.md
@@ -14,12 +14,12 @@ date: 2024-06-11 16:53:16

## Prerequisites

* An OpenAI API key or an Azure OpenAI key.
* An API key for your LLM.
* A project that meets the [criteria noted below](#Create-a-Prompt).

## Model provider API keys

You can specify one OpenAI API key and/or multiple Azure OpenAI keys per organization. Keys only need to be added once.
You can specify one OpenAI API key and/or multiple custom and Azure OpenAI keys per organization. Keys only need to be added once.

Click **API Keys** in the top right of the Prompts page to open the **Model Provider API Keys** window:

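If you prefer to add keys programmatically rather than through the UI, the sketch below posts a key to the Label Studio API using Python's `requests`. The endpoint path and payload fields are assumptions for illustration only; check your instance's API reference for the exact schema.

```python
# Hypothetical sketch: registering a model provider key via the Label Studio API.
# The endpoint path and payload fields are assumptions -- verify them against your
# instance's API documentation before relying on this.
import requests

LS_URL = "https://app.humansignal.com"    # your Label Studio URL (placeholder)
LS_TOKEN = "YOUR_LABEL_STUDIO_TOKEN"      # your personal access token (placeholder)

response = requests.post(
    f"{LS_URL}/api/model-provider-connections",     # assumed endpoint
    headers={"Authorization": f"Token {LS_TOKEN}"},
    json={
        "provider": "OpenAI",   # assumed value; e.g. "OpenAI" or "AzureOpenAI"
        "api_key": "sk-...",    # the model provider key you are adding
    },
)
response.raise_for_status()
print(response.json())
```
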
@@ -120,10 +120,11 @@ From the Prompts page, click **Create Prompt** in the upper right and then compl
* For text classification, this means that the labeling configuration for the project must use `Choice` tags.
* For NER, this means that the labeling configuration for the project must use `Label` tags.
* The project must have one output type (`Choice` or `Label`) and not a mix of both.
* The project cannot include multiple `Choices` or `Labels` blocks in its labeling configuration.
* The project must include text data. While it can include other data types such as images or video, it must include `<Text>`.
* You must have access to the project. If you are in the Manager role, you need to be added to the project to have access.
* The project cannot be located in your Personal Sandbox workspace.
* While projects connected to an ML backend will still appear in the list of eligible projects, we do not recommend using Prompts with an ML backend.
* While projects connected to an ML backend will still appear in the list of eligible projects, we do not recommend using Prompts with an ML backend as this can interfere with how accuracy and score are calculated when evaluating the prompt.
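
For reference, the sketch below creates a project that satisfies these criteria for text classification: a single `<Text>` input and a single `Choices` block. It uses the legacy Label Studio SDK client; the URL, API key, field names, and class values are placeholders.

```python
# Minimal sketch of a Prompt-eligible text classification project, created with the
# legacy Label Studio SDK client. URL, API key, field names, and classes are placeholders.
from label_studio_sdk import Client

# One <Text> input and one <Choices> output block -- no <Labels> and no second <Choices>.
LABEL_CONFIG = """
<View>
  <Text name="review" value="$review"/>
  <Choices name="sentiment" toName="review">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
  </Choices>
</View>
"""

ls = Client(url="https://app.humansignal.com", api_key="YOUR_API_KEY")
project = ls.start_project(title="Prompt-eligible demo", label_config=LABEL_CONFIG)
print(project.id)
```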

## Types

35 changes: 32 additions & 3 deletions docs/source/guide/prompts_draft.md
@@ -18,12 +18,16 @@ With your [Prompt created](prompts_create), you can begin drafting your prompt c

1. Select your base model.

The models that appear depend on the [API keys](prompts_create#Model-provider-API-keys) that you have configured for your organization. If you have added an OpenAI key, then you will see all supported OpenAI models. If you have added Azure OpenAI keys, then you will see one model per each deployment that you have added.
The models that appear depend on the [API keys](prompts_create#Model-provider-API-keys) that you have configured for your organization. If you have added an OpenAI key, then you will see all supported OpenAI models. If you have other API keys, then you will see one model for each deployment that you have added.

For a description of all OpenAI models, see [OpenAI's models overview](https://platform.openai.com/docs/models/models-overview).
2. In the **Prompt** field, enter your prompt. Keep in mind the following:
* You must include the text class. (In the demo below, this is the `review` class.) Click the text class name to insert it into the prompt.
* You must include the text classes. These appear directly above the prompt field. (In the demo below, this is the `review` class.) Click the text class name to insert it into the prompt.
* Although not strictly required, you should provide definitions for each class to ensure prediction accuracy and to help [add context](#Add-context).

!!! info Tip
You can generate an initial draft by simply adding the text classes and then [clicking **Enhance Prompt**](#Enhance-prompt).

3. Select your baseline:
* **All Project Tasks** - Generate predictions for all tasks in the project. Depending on the size of your project, this might take some time to process. This does not generate an accuracy score for the prompt.

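To make step 2 above concrete, here is an illustrative prompt for a sentiment project, written as a Python string. The `review` variable, class names, and definitions are placeholders, and the exact variable syntax used by the Prompts editor may differ from what is shown.

```python
# Illustrative prompt for a sentiment classification project. The {review} variable,
# class names, and definitions are placeholders; insert the real text class from the UI.
prompt = """
Classify the following product review.

Review: {review}

Classes and their definitions:
- Positive: the reviewer is satisfied and would recommend the product.
- Negative: the reviewer is dissatisfied or reports a problem.

Respond with exactly one class name.
"""
print(prompt)
```
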
@@ -162,12 +166,37 @@ NER
</td>
<td>

The cost to run the prompt evaluation based on the number of tokens required.
The cost to run the prompt based on the number of tokens required.

</td>
</tr>
</table>
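
As a rough illustration of how a token-based cost estimate works, the sketch below multiplies token counts by per-1,000-token prices. The prices are placeholders, not Label Studio's or any provider's actual rates.

```python
# Rough token-based cost estimate. The per-1K-token prices below are placeholders.
PRICE_PER_1K_INPUT = 0.005    # $ per 1,000 input tokens (placeholder)
PRICE_PER_1K_OUTPUT = 0.015   # $ per 1,000 output tokens (placeholder)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return an estimated cost in dollars for a single prompt run."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Example: 2,000 prompt-plus-task tokens in, 200 predicted tokens out.
print(f"${estimate_cost(2000, 200):.4f}")   # -> $0.0130
```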

## Enhance prompt

You can use **Enhance Prompt** to help you construct and auto-refine your prompts.

At minimum, you need to insert the text classes first. (The text classes appear above the prompt field; click a class name to insert it into the prompt.)

From the **Enhance Prompt** window, select the **Teacher Model** that you want to use to write your prompt. As you auto-refine your prompt, you'll get the following:

* A new prompt displayed next to the previous prompt.
* An explanation of the changes made.
* The estimated cost spent auto-refining your prompt.

![Screenshot of enhance prompt modal](../images/prompts/enhance.png)

**How it works**

The **Task Subset** is used as the context when auto-refining the prompt. If you have ground truth data available, that will serve as the task subset. Otherwise, a sample of up to 10 project tasks is used.

Auto-refinement uses the Teacher Model to apply your initial prompt to the task subset (ground truth tasks or a sample dataset) and generate predictions. If ground truth is available, these predictions are then compared against it for accuracy.

Your Teacher Model evaluates the initial prompt’s predictions against the ground truth (or sample task output) and identifies areas for improvement. It then suggests a refined prompt, aimed at achieving closer alignment with the desired outcomes.
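
To sketch the flow described above, the snippet below applies an initial prompt to a small task subset with a teacher model and then asks that same model to propose a refined prompt. This is a conceptual illustration rather than Label Studio's actual implementation; it assumes the `openai` package, an `OPENAI_API_KEY` environment variable, and placeholder model and task data.

```python
# Conceptual sketch of prompt auto-refinement -- not Label Studio's implementation.
# Assumes the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
TEACHER_MODEL = "gpt-4o"  # placeholder teacher model name

def predict(prompt: str, task_text: str) -> str:
    """Apply the current prompt to one task and return the model's prediction."""
    response = client.chat.completions.create(
        model=TEACHER_MODEL,
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": task_text},
        ],
    )
    return response.choices[0].message.content.strip()

def refine(prompt: str, task_subset: list[dict]) -> str:
    """Ask the teacher model for a refined prompt based on where predictions miss."""
    report = "\n\n".join(
        f"Task: {t['text']}\nPredicted: {predict(prompt, t['text'])}\n"
        f"Expected: {t.get('ground_truth', 'n/a')}"
        for t in task_subset
    )
    response = client.chat.completions.create(
        model=TEACHER_MODEL,
        messages=[
            {"role": "system", "content": "You improve classification prompts."},
            {
                "role": "user",
                "content": (
                    f"Current prompt:\n{prompt}\n\nResults on the task subset:\n{report}\n\n"
                    "Suggest a refined prompt and briefly explain the changes."
                ),
            },
        ],
    )
    return response.choices[0].message.content

# Ground truth tasks if available, otherwise a small sample of project tasks (up to 10).
task_subset = [{"text": "Great battery life, would buy again!", "ground_truth": "Positive"}]
print(refine("Classify each {review} as Positive or Negative.", task_subset))
```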




## Drafting effective prompts

For a comprehensive guide to drafting prompts, see [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608) or OpenAI's guide to [Prompt Engineering](https://platform.openai.com/docs/guides/prompt-engineering).
Expand Down
2 changes: 1 addition & 1 deletion docs/source/guide/prompts_overview.md
@@ -13,7 +13,7 @@ date: 2024-05-15 14:30:14

Use Prompts to evaluate and refine your LLM prompts and then generate predictions to automate your labeling process.

All you need to get started is an OpenAI API key and a project.
All you need to get started is an API key for your LLM and a project.

With Prompts, you can:

Binary file added docs/themes/v2/source/images/prompts/enhance.png