bugfix/fix issue unbounded func #1

Open · wants to merge 127 commits into base: main

Changes from 1 commit (of 127 commits)
0e22be2
Implement Lares Smart Home Assistant Using llmclient (#23)
TYRONEMICHAEL Jun 17, 2024
3a9daf0
feat: added prefix Ax across type namespace
dosco Jun 17, 2024
a027acc
Updated README
dosco Jun 17, 2024
349f5d4
Update README.md
dosco Jun 17, 2024
a170b55
feat: multi-modal dsp, use images in input fields
dosco Jun 19, 2024
12dd181
Update README.md
dosco Jun 19, 2024
95a0680
feat: added multi-modal support to anthropic api and other fixes
dosco Jun 20, 2024
7dd8e19
refactor: change axAI function to AxAI class
dosco Jun 20, 2024
5974284
[breaking change]: is now , added claude 3.5 sonnet and other fixes
dosco Jun 21, 2024
a8f7b7b
feat: ai balancer to pick the cheapest llm with automatic fallback
dosco Jun 21, 2024
ad0cccd
chore: start of official releases
dosco Jun 21, 2024
18b305c
chore(release): 9.1.0
dosco Jun 21, 2024
4df306f
Create npm-publish.yml
dosco Jun 21, 2024
015c334
chore: testing npm publish from github
dosco Jun 21, 2024
699cb9b
Update npm-publish.yml
dosco Jun 21, 2024
747fec6
Update package.json
dosco Jun 21, 2024
e2fa4c0
fix: openai and anthropic function calling issues
dosco Jun 22, 2024
45fe357
fix: google gemini function calling fix
dosco Jun 22, 2024
a839c04
fix: cohere and gemini function calling
dosco Jun 22, 2024
e114dd7
chore: release stuff
dosco Jun 22, 2024
610c2e2
chore: release 9.0.8
dosco Jun 22, 2024
ceae901
fix: build issue
dosco Jun 22, 2024
8d74f9e
fix: added default ratelimiter to groq
dosco Jun 23, 2024
47f78d6
fix: build issue
dosco Jun 23, 2024
5800ff0
fix: several issues with agents
dosco Jun 25, 2024
f91c044
Update README.md
dosco Jun 25, 2024
f180616
Update README.md
dosco Jun 25, 2024
e24f197
fix: build fixes
dosco Jun 25, 2024
a25286a
fix: build issue
dosco Jun 25, 2024
9f54a8b
fix: issue with function results
dosco Jun 25, 2024
a77f46a
fix: Export hugging face data Loader and fix summarize example (#32)
TYRONEMICHAEL Jun 26, 2024
8b9a190
docs: ax presentation
dosco Jun 27, 2024
4932b1f
fix: input fields type image rejected #33 (#34)
marcbuils Jun 27, 2024
d4a4cd2
chore: version upgrade
dosco Jun 27, 2024
122dc2d
fix: issues with prompt tuning
dosco Jun 28, 2024
079b792
fix: build issue
dosco Jun 28, 2024
85ffdd3
feat: gemma 2
dosco Jun 28, 2024
29a7e9c
feat: gemini code execution added
dosco Jun 29, 2024
6b0a894
Fix: Remove casting to a string value in the loader (#37)
TYRONEMICHAEL Jun 30, 2024
56949d2
Create static.yml
dosco Jul 1, 2024
5290713
docs: new website
dosco Jul 1, 2024
a03abe5
docs: new website
dosco Jul 1, 2024
eb1829c
docs: site update
dosco Jul 1, 2024
791715d
docs: site update
dosco Jul 1, 2024
20042cc
docs: more updates
dosco Jul 1, 2024
bf1e08b
docs: more updates
dosco Jul 1, 2024
b758dee
Add monorepo (#39)
karol-f Jul 3, 2024
00cd53f
chore: cleanup
dosco Jul 3, 2024
2e3c8d8
feat: add multi format build with CJS build for compatibility (#40)
karol-f Jul 3, 2024
d167f64
fix: removed publish from examples
dosco Jul 3, 2024
a86e6fe
fix: minor fixes
dosco Jul 3, 2024
4d35f06
fix: increased tests timeout
dosco Jul 3, 2024
51400be
Ava test timeout and "prepare" script changes (#42)
karol-f Jul 3, 2024
13be299
Test improvements - Fix Ava runs on CI (#43)
karol-f Jul 3, 2024
fdd96c2
Release process improvements (#44)
karol-f Jul 3, 2024
9b4e1cc
chore: release v9.0.20
dosco Jul 3, 2024
64f661f
fix: Accessing Stream Chunks (Streamed generation) #36
dosco Jul 4, 2024
1cba1e7
chore: release v9.0.21
dosco Jul 4, 2024
acb773a
fix: release files (#45)
karol-f Jul 4, 2024
ca42387
fix: minor fix
dosco Jul 4, 2024
64e7b6c
chore: release v9.0.22
dosco Jul 4, 2024
3392a70
Fix node imports (#48)
karol-f Jul 5, 2024
b6f935b
docs: design update
dosco Jul 5, 2024
f7e8669
chore: release v9.0.23
dosco Jul 5, 2024
8753689
docs: design update
dosco Jul 5, 2024
49a4cfc
docs: design update
dosco Jul 5, 2024
3b07010
docs: design update
dosco Jul 6, 2024
48e3b21
docs: design update
dosco Jul 6, 2024
f1b6c71
docs: updated
dosco Jul 7, 2024
aa5b2ec
docs: update
dosco Jul 7, 2024
d738c15
docs: update
dosco Jul 7, 2024
a3f27a6
fix: with and without signature base program classes
dosco Jul 7, 2024
e337f89
chore: release v9.0.24
dosco Jul 7, 2024
802b288
fix: refactor functions
dosco Jul 7, 2024
6d532cd
chore: release v9.0.25
dosco Jul 7, 2024
a423c45
feat: docker sandbox function
dosco Jul 8, 2024
a753a82
chore: release v9.0.26
dosco Jul 8, 2024
1a40587
docs: updated
dosco Jul 8, 2024
ab29d6e
fix: model map issue
dosco Jul 9, 2024
a552713
chore: release v9.0.27
dosco Jul 9, 2024
acfb41b
Auto-add release changelog after the release (#49)
karol-f Jul 9, 2024
d33d9be
fix: issue with model map feature
dosco Jul 9, 2024
f9e39e5
chore: release v9.0.28
dosco Jul 9, 2024
7049914
fix: redesigned model map feature
dosco Jul 9, 2024
176128f
chore: release v9.0.29
dosco Jul 9, 2024
b01fcb7
fix: more fixes related to model mapping
dosco Jul 9, 2024
e4fac9d
chore: release v9.0.30
dosco Jul 9, 2024
e8aa618
docs: how to use in production
dosco Jul 12, 2024
2a0900d
docs: fix links in apidocs
dosco Jul 14, 2024
bec28e4
chore: release v9.0.31
dosco Jul 14, 2024
69c733b
docs: minor link fix
dosco Jul 15, 2024
d1a733e
fix: corrected embeddings endpoint (#51)
polydeuxes Jul 18, 2024
b0ae470
feat: added new models for mistral and openai
dosco Jul 18, 2024
2a06efb
chore: release v9.0.32
dosco Jul 18, 2024
2007510
Update README.md
dosco Jul 18, 2024
711a700
Fix: Add terms to spellcheck dictionary (#54)
polydeuxes Jul 20, 2024
e77dce4
Fix: Convert Ollama API files from JS to TS (#52)
polydeuxes Jul 20, 2024
bc461d5
Revert "Fix: Convert Ollama API files from JS to TS (#52)"
dosco Jul 21, 2024
60e646f
feat: new ai-sdk-provider
dosco Jul 23, 2024
edb9971
chore: release v9.0.33
dosco Jul 23, 2024
2f1b72e
fix: package.json for publishing
dosco Jul 24, 2024
58c40c9
chore: release v9.0.34
dosco Jul 24, 2024
bdc7c18
chore(docs): fix grammar (#55)
HelloAlexPan Jul 24, 2024
e574f3a
fix: build issues
dosco Jul 24, 2024
5c46f1b
chore: release v9.0.35
dosco Jul 24, 2024
cbe25eb
fix: spelling
dosco Jul 24, 2024
da8aff7
chore: release v9.0.36
dosco Jul 24, 2024
192adac
fix: streaming fix in ai sdk provider
dosco Jul 26, 2024
3e3ccc6
chore: release v9.0.37
dosco Jul 26, 2024
9f030c0
feat: added a ai sdk agent provider
dosco Jul 27, 2024
2c69ae2
chore: release v9.0.38
dosco Jul 27, 2024
096ad0c
fix: ai sdk agent provider update
dosco Jul 27, 2024
53ee6f8
chore: release v9.0.39
dosco Jul 27, 2024
7ea8600
fix: automatic zod schema creation for ai sdk provider tools
dosco Jul 28, 2024
8969119
chore: release v9.0.40
dosco Jul 28, 2024
b87bf02
fix: ax ai provider
dosco Jul 28, 2024
1c53d13
chore: release v9.0.41
dosco Jul 28, 2024
ca62b91
fix: updates to ai sdk provider
dosco Jul 28, 2024
37acf4d
chore: release v9.0.42
dosco Jul 28, 2024
148e692
fix: updates to the ai sdk provider
dosco Jul 29, 2024
4bf9ebf
chore: release v9.0.43
dosco Jul 29, 2024
ff77e1f
fix: updates to the ai sdk provider
dosco Jul 29, 2024
01fca9a
fix: updates to the ai sdk provider
dosco Jul 29, 2024
162eeb1
chore: release v9.0.44
dosco Jul 29, 2024
ef1f267
feat: added reka models
dosco Jul 31, 2024
8cbad04
chore: release v9.0.45
dosco Jul 31, 2024
bd3df43
Fix: Bind agent functions to instances for correct 'this' context
TYRONEMICHAEL Aug 15, 2024
Updated README
dosco committed Jun 17, 2024
commit a027accda3e05e54b83cd5760b3ec73d4e9454f6
82 changes: 42 additions & 40 deletions README.md
@@ -10,7 +10,7 @@ Build intelligent agents with ease, inspired by the power of "Agentic workflows"

## Our focus on agents

-We've renamed from "llmclient" to "ax" to highlight our focus on powering agentic workflows. We agree with many experts like "Andrew Ng" that agentic workflows are the key to unlocking the true power of large language models and what can be achieved with in-context learning. Also we are big fans of the Stanford DSP paper and this library is the result of all of this coming together to build a powerful framework for you to build with.
+We've renamed from "llmclient" to "ax" to highlight our focus on powering agentic workflows. We agree with many experts like "Andrew Ng" that agentic workflows are the key to unlocking the true power of large language models and what can be achieved with in-context learning. Also we are big fans of the Stanford DSP paper and this library is the result of all of this coming together to build a powerful framework for you to build with.

![image](https://github.com/ax-llm/ax/assets/832235/801b8110-4cba-4c50-8ec7-4d5859121fe5)

@@ -63,13 +63,13 @@ yarn add @ax-llm/ax
## Example: Using chain-of-thought to summarize text

```typescript
-import { AI, ChainOfThought, OpenAIArgs } from '@ax-llm/ax';
+import { axAI, AxChainOfThought } from '@ax-llm/ax';

const textToSummarize = `
The technological singularity—or simply the singularity[1]—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.[2][3] ...`;

-const ai = AI('openai', { apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
-const gen = new ChainOfThought(
+const ai = axAI('openai', { apiKey: process.env.OPENAI_APIKEY });
+const gen = new AxChainOfThought(
ai,
`textToSummarize -> shortSummary "summarize in 5 to 10 words"`
);
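// Sketch of the presumable continuation, not part of this diff: elsewhere in
// this README programs are run via forward(), with output fields named by the
// signature above.
const res = await gen.forward({ textToSummarize });
console.log('Summary:', res.shortSummary);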
@@ -85,19 +85,19 @@ Use the agent prompt (framework) to build agents that work with other agents to
```typescript
# npm run tsx ./src/examples/agent.ts

-const researcher = new Agent(ai, {
+const researcher = new AxAgent(ai, {
name: 'researcher',
description: 'Researcher agent',
signature: `physicsQuestion "physics questions" -> answer "reply in bullet points"`
});

-const summarizer = new Agent(ai, {
+const summarizer = new AxAgent(ai, {
name: 'summarizer',
description: 'Summarizer agent',
signature: `text "text so summarize" -> shortSummary "summarize in 5 to 10 words"`
});

-const agent = new Agent(ai, {
+const agent = new AxAgent(ai, {
name: 'agent',
description: 'A an agent to research complex topics',
signature: `question -> answer`,
@@ -116,25 +116,25 @@ Use the Router to efficiently route user queries to specific routes designed to
```typescript
# npm run tsx ./src/examples/routing.ts

-const customerSupport = new Route('customerSupport', [
+const customerSupport = new AxRoute('customerSupport', [
'how can I return a product?',
'where is my order?',
'can you help me with a refund?',
'I need to update my shipping address',
'my product arrived damaged, what should I do?'
]);

-const technicalSupport = new Route('technicalSupport', [
+const technicalSupport = new AxRoute('technicalSupport', [
'how do I install your software?',
'I’m having trouble logging in',
'can you help me configure my settings?',
'my application keeps crashing',
'how do I update to the latest version?'
]);

-const ai = AI('openai', { apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
+const ai = axAI('openai', { apiKey: process.env.OPENAI_APIKEY });

-const router = new Router(ai);
+const router = new AxRouter(ai);
await router.setRoutes(
[customerSupport, technicalSupport],
{ filename: 'router.json' }
@@ -166,7 +166,7 @@ Vector databases are critical to building LLM workflows. We have clean abstracti
const ret = await this.ai.embed({ texts: 'hello world' });

// Create an in memory vector db
-const db = new DB('memory');
+const db = new AxDB('memory');

// Insert into vector db
await this.db.upsert({
@@ -182,11 +182,11 @@ const matches = await this.db.query({
});
```
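The hunk above truncates the `upsert` and `query` calls. A minimal sketch of the same pattern under the renamed API (the `table` name and id here are illustrative, not taken from this diff):

```typescript
const ret = await ai.embed({ texts: 'hello world' });

const db = new AxDB('memory');

// Store the embedding, then query with the same vector.
await db.upsert({ id: 'abc', table: 'products', values: ret.embeddings[0] });
const matches = await db.query({ table: 'products', values: ret.embeddings[0] });
```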

-Alternatively you can use the `DBManager` which handles smart chunking, embedding and querying everything
+Alternatively you can use the `AxDBManager` which handles smart chunking, embedding and querying everything
for you, it makes things almost too easy.

```typescript
-const manager = new DBManager({ ai, db });
+const manager = new AxDBManager({ ai, db });
await manager.insert(text);

const matches = await manager.query(
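// (truncated in this hunk; per the Tika section below, query() takes the
// query text directly, e.g. manager.query('Find some text'))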
@@ -205,13 +205,13 @@ Launch Apache Tika
docker run -p 9998:9998 apache/tika
```

-Convert documents to text and embed them for retrieval using the `DBManager` it also supports a reranker and query rewriter. Two default implementations `DefaultResultReranker` and `DefaultQueryRewriter` are available to use.
+Convert documents to text and embed them for retrieval using the `AxDBManager` it also supports a reranker and query rewriter. Two default implementations `AxDefaultResultReranker` and `AxDefaultQueryRewriter` are available to use.

```typescript
-const tika = new ApacheTika();
+const tika = new AxApacheTika();
const text = await tika.convert('/path/to/document.pdf');

-const manager = new DBManager({ ai, db });
+const manager = new AxDBManager({ ai, db });
await manager.insert(text);

const matches = await manager.query('Find some text');
@@ -224,7 +224,7 @@ We support parsing output fields and function execution while streaming. This al

```typescript
// setup the prompt program
-const gen = new ChainOfThought(
+const gen = new AxChainOfThought(
ai,
`startNumber:number -> next10Numbers:number[]`
);
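// Sketch of the presumable continuation, not shown in this hunk: output fields
// are parsed and validated while the response streams, and the awaited result
// is the fully parsed object.
const res = await gen.forward({ startNumber: 1 });
console.log(res.next10Numbers);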
@@ -285,12 +285,12 @@ trace.setGlobalTracerProvider(provider);

const tracer = trace.getTracer('test');

-const ai = AI('ollama', {
+const ai = axAI('ollama', {
model: 'nous-hermes2',
options: { tracer }
} as unknown as OllamaArgs);

-const gen = new ChainOfThought(
+const gen = new AxChainOfThought(
ai,
`text -> shortSummary "summarize in 5 to 10 words"`
);
@@ -324,36 +324,37 @@ const res = await gen.forward({ text });

## Tuning the prompts (programs)

-You can tune your prompts using a larger model to help them run more efficiently and give you better results. This is done by using an optimizer like `BootstrapFewShot` with and examples from the popular `HotPotQA` dataset. The optimizer generates demonstrations `demos` which when used with the prompt help improve its efficiency.
+You can tune your prompts using a larger model to help them run more efficiently and give you better results. This is done by using an optimizer like `AxBootstrapFewShot` with and examples from the popular `HotPotQA` dataset. The optimizer generates demonstrations `demos` which when used with the prompt help improve its efficiency.

```typescript
// Download the HotPotQA dataset from huggingface
-const hf = new HFDataLoader();
+const hf = new AxHFDataLoader();
const examples = await hf.getData<{ question: string; answer: string }>({
dataset: 'hotpot_qa',
split: 'train',
count: 100,
fields: ['question', 'answer']
});

-const ai = AI('openai', { apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
+const ai = axAI('openai', { apiKey: process.env.OPENAI_APIKEY });

// Setup the program to tune
-const program = new ChainOfThought<{ question: string }, { answer: string }>(
+const program = new AxChainOfThought<{ question: string }, { answer: string }>(
ai,
`question -> answer "in short 2 or 3 words"`
);

// Setup a Bootstrap Few Shot optimizer to tune the above program
-const optimize = new BootstrapFewShot<{ question: string }, { answer: string }>(
-  {
-    program,
-    examples
-  }
-);
+const optimize = new AxBootstrapFewShot<
+  { question: string },
+  { answer: string }
+>({
+  program,
+  examples
+});

// Setup a evaluation metric em, f1 scores are a popular way measure retrieval performance.
-const metricFn: MetricFn = ({ prediction, example }) =>
+const metricFn: AxMetricFn = ({ prediction, example }) =>
emScore(prediction.answer as string, example.answer as string);

// Run the optimizer and save the result
@@ -365,10 +366,10 @@ await optimize.compile(metricFn, { filename: 'demos.json' });
And to use the generated demos with the above `ChainOfThought` program

```typescript
-const ai = AI('openai', { apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
+const ai = axAI('openai', { apiKey: process.env.OPENAI_APIKEY });

// Setup the program to use the tuned data
-const program = new ChainOfThought<{ question: string }, { answer: string }>(
+const program = new AxChainOfThought<{ question: string }, { answer: string }>(
ai,
`question -> answer "in short 2 or 3 words"`
);
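// Sketch of the step this hunk omits: load the tuned demos and attach them to
// the program. setDemos is assumed here, not confirmed by this diff.
// (assumes: import fs from 'node:fs' at the top of the file)
const demos = JSON.parse(fs.readFileSync('demos.json', 'utf8'));
program.setDemos(demos);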
@@ -408,6 +409,7 @@ OPENAI_APIKEY=openai_key npm run tsx ./src/examples/marketing.ts
| qna-use-tuned.ts | Use the optimized tuned prompts |
| streaming1.ts | Output fields validation while streaming |
| streaming2.ts | Per output field validation while streaming |
+| smart-hone.ts | Agent looks for dog in smart home |

## Built-in Functions

@@ -426,7 +428,7 @@ Large language models (LLMs) are getting really powerful and have reached a poin

```ts
// Pick a LLM
-const ai = new OpenAI({ apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
+const ai = new AxOpenAI({ apiKey: process.env.OPENAI_APIKEY } as AxOpenAIArgs);
```

### 2. Create a prompt signature based on your usecase
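The diff skips the body of this step. Signatures elsewhere in this README follow the `inputField:type "hint" -> outputField:type "hint"` shape, so an illustrative example (field names invented, not from this diff) would be:

```ts
const signature = `customerMessage:string -> category:string "billing, technical or other"`;
```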
@@ -488,13 +490,13 @@ const functions = [
### 2. Pass the functions to a prompt

```ts
-const cot = new ReAct(ai, `question:string -> answer:string`, { functions });
+const cot = new AxReAct(ai, `question:string -> answer:string`, { functions });
```

## Enable debug logs

```ts
-const ai = new OpenAI({ apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
+const ai = new AxOpenAI({ apiKey: process.env.OPENAI_APIKEY } as AxOpenAIArgs);
ai.setOptions({ debug: true });
```

@@ -515,20 +517,20 @@ You can pass a configuration object as the second parameter when creating a new

```ts
const apiKey = process.env.OPENAI_APIKEY;
-const conf = OpenAIBestConfig();
-const ai = new OpenAI({ apiKey, conf } as OpenAIArgs);
+const conf = AxOpenAIBestConfig();
+const ai = new AxOpenAI({ apiKey, conf } as AxOpenAIArgs);
```

## 3. My prompt is too long and can I change the max tokens

```ts
-const conf = OpenAIDefaultConfig(); // or OpenAIBestOptions()
+const conf = axOpenAIDefaultConfig(); // or OpenAIBestOptions()
conf.maxTokens = 2000;
```

## 4. How do I change the model say I want to use GPT4

```ts
-const conf = OpenAIDefaultConfig(); // or OpenAIBestOptions()
+const conf = axOpenAIDefaultConfig(); // or OpenAIBestOptions()
conf.model = OpenAIModel.GPT4Turbo;
```
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "@ax-llm/ax",
"version": "9.0.0",
"version": "9.0.1",
"type": "module",
"description": "The best library to work with LLMs",
"typings": "build/module/src/index.d.ts",
28 changes: 13 additions & 15 deletions src/examples/lares-smart-home.ts → src/examples/smart-home.ts
@@ -8,8 +8,8 @@
* https://interconnected.org/more/2024/lares/
*/

-import { Agent, AI } from '../index.js';
-import type { FunctionJSONSchema, OpenAIArgs } from '../index.js';
+import { AxAgent, axAI } from '../index.js';
+import type { AxFunctionJSONSchema, AxOpenAIArgs } from '../index.js';

interface RoomState {
light: boolean;
@@ -31,9 +31,11 @@ const state: HomeState = {
dogLocation: 'livingRoom'
};

-const ai = AI('openai', { apiKey: process.env.OPENAI_APIKEY } as OpenAIArgs);
+const ai = axAI('openai', {
+  apiKey: process.env.OPENAI_APIKEY
+} as AxOpenAIArgs);

-const agent = new Agent(ai, {
+const agent = new AxAgent(ai, {
name: 'lares',
description: 'Lares smart home assistant',
signature: `instruction -> room:string "the room where the dog is found"`,
@@ -47,7 +49,7 @@ const agent = new Agent(ai, {
room: { type: 'string', description: 'Room to toggle light' }
},
required: ['room']
-} as FunctionJSONSchema,
+} as AxFunctionJSONSchema,
func: async (args: Readonly<{ room: string }>) => {
const roomState = state.rooms[args.room];
if (roomState) {
@@ -73,7 +75,7 @@ const agent = new Agent(ai, {
destination: { type: 'string', description: 'Destination room' }
},
required: ['destination']
-} as FunctionJSONSchema,
+} as AxFunctionJSONSchema,
func: async (args: Readonly<{ destination: string }>) => {
if (state.rooms[args.destination]) {
state.robotLocation = args.destination;
@@ -90,7 +92,7 @@ const agent = new Agent(ai, {
parameters: {
type: 'object',
properties: {}
-} as FunctionJSONSchema,
+} as AxFunctionJSONSchema,
func: async () => {
const location = state.robotLocation;
const room = state.rooms[location];
@@ -110,17 +112,13 @@ const agent = new Agent(ai, {
]
});

-async function main() {
-  // Initial state prompt for the LLM
-  const instruction = `
+// Initial state prompt for the LLM
+const instruction = `
You are controlling a smart home with the following rooms: kitchen, livingRoom, bedroom.
Each room has a light that can be toggled on or off. There is a robot that can move between rooms.
Your task is to find the dog. You can turn on lights in rooms to see inside them, and move the robot to different rooms.
The initial state is: ${JSON.stringify({ ...state, dogLocation: 'unknown' })}.
`;

-  const res = await agent.forward({ instruction });
-  console.log('Response:', res);
-}
-
-main().catch(console.error);
+const res = await agent.forward({ instruction });
+console.log('Response:', res);
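To run the reshaped example, the examples table earlier in this README suggests the usual invocation: `OPENAI_APIKEY=openai_key npm run tsx ./src/examples/smart-home.ts`.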