This provides a natural language way to interact with a web browser:

- Manage and automate tasks on project management platforms (like JIRA) by filtering issues, easing the workflow for users.
- Provide personal shopping assistance, suggesting products based on the user's needs, such as storage options for game cards.

While Agent-E is growing, it is already equipped to handle a versatile range of tasks, but the best task is the one that you come up with. So, take it for a spin and tell us what you were able to do with it. For more information, see our [blog article](https://www.emergence.ai/blog/distilling-the-web-for-multi-agent-automation).
## Quick Start
- A `.env` file in the project root is needed with the following (a sample `.env-example` is included for convenience); a sketch is shown after this list:
  - Follow the directions in the sample file
  - You will need to set `AUTOGEN_MODEL_NAME` (for example `gpt-4-turbo-preview`) and `AUTOGEN_MODEL_API_KEY`
  - If you are using a model other than OpenAI, you need to set `AUTOGEN_MODEL_BASE_URL`, for example `https://api.groq.com/openai/v1` or `https://<REPLACE_AI_SERVICES>.openai.azure.com` on [Azure](https://azure.microsoft.com/).
  - For [Azure](https://azure.microsoft.com/), you will also need to configure the `AUTOGEN_MODEL_API_TYPE=azure` and `AUTOGEN_MODEL_API_VERSION` (for example `2023-03-15-preview`) variables.
  - If you want to use your local Chrome browser instead of the Playwright browser, go to chrome://version/ in Chrome, find the path to your profile, and set `BROWSER_STORAGE_DIR` to that path
7. Build the documentation: from the `docs` directory, run `sphinx-build -b html . _build`
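For reference, the `.env` mentioned above might look like the sketch below. Every value here is an illustrative placeholder rather than a project default, so treat `.env-example` as the authoritative template:

```bash
# Illustrative .env sketch -- all values below are placeholders, not project defaults
AUTOGEN_MODEL_NAME=gpt-4-turbo-preview
AUTOGEN_MODEL_API_KEY=<your-api-key>

# Only for non-OpenAI providers (e.g. Groq or Azure OpenAI):
# AUTOGEN_MODEL_BASE_URL=https://api.groq.com/openai/v1

# Azure only:
# AUTOGEN_MODEL_API_TYPE=azure
# AUTOGEN_MODEL_API_VERSION=2023-03-15-preview

# Optional: reuse your local Chrome profile instead of the Playwright-managed browser
# BROWSER_STORAGE_DIR=/path/to/your/chrome/profile
```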
## Open-source models
Using open-source models is possible through LiteLLM with Ollama. Ollama allows users to run language models locally on their machines, and LiteLLM translates OpenAI-format inputs to the local models' endpoints. To use open-source models as the Agent-E backbone, follow the steps below:

1. Install LiteLLM
   ```bash
   pip install 'litellm[proxy]'
   ```
2. Install Ollama
   * For Mac and Windows, download [Ollama](https://ollama.com/download).
   * For Linux:
     ```bash
     curl -fsSL https://ollama.com/install.sh | sh
     ```
3. Pull Ollama models
   Before you can use a model, you need to download it from the library. The list of available models is [here](https://ollama.com/library). Here, we use Mistral v0.3:
   ```bash
   ollama pull mistral:v0.3
   ```
4. Run LiteLLM
   To run the downloaded model with LiteLLM as a proxy, run:
   ```bash
   litellm --model ollama_chat/mistral:v0.3
   ```
5. Configure the model in Autogen
   Configure the `.env` file as follows. Note that the model name and API key are not needed since the local model is already running.
   ```bash
   AUTOGEN_MODEL_NAME=NotRequired
   AUTOGEN_MODEL_API_KEY=NotRequired
   AUTOGEN_MODEL_BASE_URL=http://0.0.0.0:4000
   ```
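As an optional sanity check, you can confirm that the proxy answers OpenAI-style requests before pointing Agent-E at it. The command below assumes LiteLLM is listening on its default port 4000 and serving the Mistral model pulled above:

```bash
# Optional check: the LiteLLM proxy exposes an OpenAI-compatible /chat/completions route.
# Assumes the default proxy port 4000 and the model pulled in the earlier step.
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ollama_chat/mistral:v0.3",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```

If this returns a JSON completion, the `AUTOGEN_MODEL_BASE_URL` configured above should work as-is.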
## TODO
- Action verification: respond from every skill with the changes that took place in the DOM (via Mutation Observers) so that the LLM can judge whether the skill executed properly or not