Commit
Merge pull request #142 from polywrap/nerfzael/autogen
Autogen integration
dOrgJelli authored Apr 9, 2024
2 parents b9f32b0 + 7750228 commit 5ef1d24
Showing 28 changed files with 1,066 additions and 4,353 deletions.
28 changes: 20 additions & 8 deletions .env.example
Original file line number Diff line number Diff line change
@@ -1,15 +1,27 @@
# https://openai.com/ API Key
#########################
### REQUIRED ###
#########################

# LLM API Key (ex: https://platform.openai.com/account/api-keys)
OPENAI_API_KEY=
# LLM API Base URL (OpenAI API compatibility only) (Default: https://api.openai.com/v1)
OPENAI_API_BASE=https://api.openai.com/v1
OPENAI_BASE_URL=https://api.openai.com/v1
# LLM Model Name (Default: gpt-4-turbo-preview)

# LLM Model Name
OPENAI_MODEL_NAME=gpt-4-turbo-preview

# RPC URL of the blockchain to fork from. Used for offline tx simulation.
CHAIN_RPC_URL=
# (optional) Connect an existing smart account (ex: safe address).
CHAIN_RPC_URL=https://mainnet.infura.io/v3/f1f688077be642c190ac9b28769daecf

#########################
### OPTIONAL ###
#########################

# Connect an existing smart account (ex: safe address).
# If undefined, an offline test account is generated and used.
SMART_ACCOUNT_ADDRESS=

# https://www.coingecko.com/ API Key
COINGECKO_API_KEY =
COINGECKO_API_KEY=

# LLM API Base URL (OpenAI-API-compatible endpoints only; see https://ollama.com/blog/openai-compatibility)
OPENAI_API_BASE=https://api.openai.com/v1
OPENAI_BASE_URL=https://api.openai.com/v1
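The reorganized `.env.example` now separates required keys (`OPENAI_API_KEY`, `CHAIN_RPC_URL`) from optional ones with documented defaults. A minimal startup check in that spirit might look like this (an illustrative sketch — the helper and constant names are hypothetical, not AutoTx's actual code):

```python
REQUIRED = ["OPENAI_API_KEY", "CHAIN_RPC_URL"]
OPTIONAL_DEFAULTS = {
    "OPENAI_BASE_URL": "https://api.openai.com/v1",
    "OPENAI_MODEL_NAME": "gpt-4-turbo-preview",
}

def check_env(env: dict) -> dict:
    """Fail fast on missing required keys; fill defaults for unset optional ones."""
    missing = [key for key in REQUIRED if not env.get(key)]
    if missing:
        raise RuntimeError("Missing required env vars: " + ", ".join(missing))
    # Optional keys fall back to their documented defaults
    return {**OPTIONAL_DEFAULTS, **{k: v for k, v in env.items() if v}}
```

Failing fast at startup is generally preferable to discovering a missing key mid-run, after the agent loop has already started.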
8 changes: 3 additions & 5 deletions README.md
@@ -47,15 +47,15 @@ Please install the following:
1. Clone the repository via `git clone https://github.com/polywrap/AutoTx` and `cd AutoTx` into the directory.
2. Create a new .env file via `cp .env.example .env`
3. Find the line that says OPENAI_API_KEY=, and add your unique OpenAI API Key `OPENAI_API_KEY=sk-...`
4. Find the line that says CHAIN_RPC_URL=, and add your unique Ethereum RPC URL `CHAIN_RPC_URL=https://mainnet.infura.io/v3/...` (see https://www.infura.io/)
5. Find the line that says COINGECKO_API_KEY=, and add your Coingecko API Key `COINGECKO_API_KEY=CG-...` (see https://docs.coingecko.com/reference/setting-up-your-api-key)
4. (Optional) If you have an Infura API key, find the line that says `CHAIN_RPC_URL=`, and replace the default Infura key with your own: `CHAIN_RPC_URL=https://mainnet.infura.io/v3/YOUR_INFURA_KEY` (see https://www.infura.io/).
5. (Optional) If you have a Coingecko API Key, find the line that says `COINGECKO_API_KEY=`, and add it `COINGECKO_API_KEY=CG-...` (see [Coingecko API Documentation](https://docs.coingecko.com/reference/setting-up-your-api-key)). Note: Without the Coingecko API Key, the Token Research Agent will not be added to the agent's execution loop.
6. Start a new poetry shell `poetry shell`
7. Install python dependencies `poetry install`

## Run The Agent

1. AutoTx requires a fork of the blockchain network you want to transact with. You can start the fork by running `poetry run start-fork`, and stop it with `poetry run stop-fork`. This command requires Docker to be running on your computer.
2. Run `poetry run ask` and provide a prompt for AutoTx to work on solving for you (example: `Send 1 ETH to vitalik.eth`). The `--prompt "..."` option can be used for non-interactive startup. The `--non-interactive` (or `-n`) flag will disable all requests for user input, including the final approval of the transaction plan.
2. Run `poetry run ask` and provide a prompt for AutoTx to work on solving for you (example: `Send 1 ETH to vitalik.eth`). You can also provide the prompt as an argument for non-interactive startup. The `--non-interactive` (or `-n`) flag will disable all requests for user input, including the final approval of the transaction plan. The `--verbose` (or `-v`) flag will enable verbose logging.
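The command-line interface described above (a positional prompt plus `-n`/`--non-interactive` and `-v`/`--verbose` flags) can be modeled with `argparse`. This is an illustrative sketch of the described interface, not AutoTx's actual entry point:

```python
import argparse

parser = argparse.ArgumentParser(prog="ask")
parser.add_argument("prompt", nargs="?", help="Task for AutoTx to solve")
parser.add_argument("-n", "--non-interactive", action="store_true",
                    help="Disable all requests for user input")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="Enable verbose logging")

args = parser.parse_args(["Send 1 ETH to vitalik.eth", "-n", "-v"])
print(args.prompt, args.non_interactive, args.verbose)
# → Send 1 ETH to vitalik.eth True True
```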

### Test Offline
By default, if the `SMART_ACCOUNT_ADDRESS` environment variable is not defined, AutoTx will create and execute transactions within an offline test environment. This test environment includes a new smart account, as well as a development address with test ETH for tx execution.
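The fallback behavior described above — use `SMART_ACCOUNT_ADDRESS` when set, otherwise fall back to a generated test account — can be sketched as a standalone helper (hypothetical; the real account generation lives in AutoTx's Ethereum utilities):

```python
import os
import secrets

def resolve_smart_account(env=None):
    """Return (address, is_offline_test_account)."""
    env = os.environ if env is None else env
    addr = env.get("SMART_ACCOUNT_ADDRESS")
    if addr:
        # Connect to the user's existing smart account
        return addr, False
    # No address configured: generate a throwaway 20-byte test address
    return "0x" + secrets.token_hex(20), True
```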
@@ -64,9 +64,7 @@ By default, if the `SMART_ACCOUNT_ADDRESS` environment variable is not defined,
AutoTx can be connected to your existing smart account by doing the following:

1. Set the `SMART_ACCOUNT_ADDRESS` to the address of your smart account in your `.env`. This tells AutoTx which account it should interact with.

2. AutoTx's agent address, which it generates locally, must be set as a signer in your Safe's configuration to allow it to create transactions on behalf of the smart account. To get this address, run `poetry run agent address`.

3. Update the `CHAIN_RPC_URL` value in your `.env` with the correct RPC URL of the network where your smart account is deployed.
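Putting the three steps together, a connected `.env` might look like this (placeholder values, not real credentials):

```
SMART_ACCOUNT_ADDRESS=0xYourSafeAddress
CHAIN_RPC_URL=https://mainnet.infura.io/v3/YOUR_INFURA_KEY
```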


118 changes: 82 additions & 36 deletions autotx/AutoTx.py
@@ -1,73 +1,119 @@
from typing import Optional, Callable
from textwrap import dedent
from typing import Any, Dict, Optional, Callable
from dataclasses import dataclass
from autogen import UserProxyAgent, AssistantAgent, GroupChat, GroupChatManager
from termcolor import cprint
from typing import Optional
from crewai import Agent, Crew, Process, Task
from autogen.io import IOStream
from autotx.autotx_agent import AutoTxAgent
from autotx.utils.PreparedTx import PreparedTx
from autotx.utils.agent.build_goal import build_goal
from autotx.utils.agent.define_tasks import define_tasks
from langchain_core.tools import StructuredTool
from crewai import Agent, Crew, Process, Task
from autotx.utils.ethereum import SafeManager
from autotx.utils.ethereum.networks import NetworkInfo
from autotx.utils.llm import open_ai_llm
from autotx.utils.io_silent import IOConsole, IOSilent

@dataclass(kw_only=True)
class Config:
verbose: bool

class AutoTx:
manager: SafeManager
agents: list[Agent]
config: Config = Config(verbose=False)
transactions: list[PreparedTx] = []
network: NetworkInfo
get_llm_config: Callable[[], Optional[Dict[str, Any]]]
user_proxy: UserProxyAgent
agents: list[AutoTxAgent]

def __init__(
self, manager: SafeManager, network: NetworkInfo, agent_factories: list[Callable[['AutoTx'], Agent]], config: Optional[Config]
self, manager: SafeManager, network: NetworkInfo, agents: list[AutoTxAgent], config: Optional[Config],
get_llm_config: Callable[[], Optional[Dict[str, Any]]]
):
self.manager = manager
self.network = network
self.get_llm_config = get_llm_config
if config:
self.config = config
self.agents = [factory(self) for factory in agent_factories]
self.agents = agents

def run(self, prompt: str, non_interactive: bool):
print(f"Defining goal for prompt: '{prompt}'")

agents_information = self.get_agents_information()
def run(self, prompt: str, non_interactive: bool, silent: bool = False):
print("Running AutoTx with the following prompt: ", prompt)

user_proxy = UserProxyAgent(
name="user_proxy",
is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
human_input_mode="NEVER",
max_consecutive_auto_reply=20,
system_message=f"You are a user proxy. You will be interacting with the agents to accomplish the tasks.",
llm_config=self.get_llm_config(),
code_execution_config=False,
)

agents_information = self.get_agents_information(self.agents)

goal = build_goal(prompt, agents_information, self.manager.address, non_interactive)

print(f"Defining tasks for goal: '{goal}'")
tasks: list[Task] = define_tasks(goal, agents_information, self.agents)

self.run_for_tasks(tasks, non_interactive)

def run_for_tasks(self, tasks: list[Task], non_interactive: bool):
print(f"Running tasks...")
Crew(
agents=self.agents,
tasks=tasks,
verbose=self.config.verbose,
process=Process.sequential,
function_calling_llm=open_ai_llm,
).kickoff()

self.manager.send_tx_batch(self.transactions, require_approval=not non_interactive)
verifier_agent = AssistantAgent(
name="verifier",
is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
system_message=dedent(
"""
Verifier is an expert in verifying if user goals are met.
Verifier analyzes chat and responds with TERMINATE if the goal is met.
Verifier can consider the goal met if the other agents have prepared the necessary transactions.
"""
),
llm_config=self.get_llm_config(),
human_input_mode="NEVER",
code_execution_config=False,
)

autogen_agents = [agent.build_autogen_agent(self, user_proxy, self.get_llm_config()) for agent in self.agents]

groupchat = GroupChat(
agents=autogen_agents + [user_proxy, verifier_agent],
messages=[],
max_round=20,
select_speaker_prompt_template = (
"""
Read the above conversation. Then select the next role from {agentlist} to play. Only return the role and NOTHING else.
"""
)
)
manager = GroupChatManager(groupchat=groupchat, llm_config=self.get_llm_config())

if silent:
IOStream.set_global_default(IOSilent())
else:
IOStream.set_global_default(IOConsole())

user_proxy.initiate_chat(manager, message=dedent(
f"""
My goal is: {prompt}
Advisor reworded: {goal}
"""
))

try:
self.manager.send_tx_batch(self.transactions, require_approval=not non_interactive)
except Exception as e:
cprint(e, "red")

self.transactions.clear()


def get_agents_information(self) -> str:
def get_agents_information(self, agents: list[AutoTxAgent]) -> str:
agent_descriptions = []
for agent in self.agents:
agent_default_tools: list[StructuredTool] = agent.tools
for agent in agents:
tools_available = "\n".join(
[
f" - Name: {tool.name}\n - Description: {tool.description} \n"
for tool in agent_default_tools
f"\n- {tool}"
for tool in agent.tool_descriptions
]
)
description = f"Agent name: {agent.name}\nRole: {agent.role}\nTools available:\n{tools_available}"
description = f"Agent name: {agent.name}\nTools available:{tools_available}"
agent_descriptions.append(description)

agents_information = "\n".join(agent_descriptions)
return agents_information

return agents_information
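Both `user_proxy` and the new verifier agent use the same inline termination check: the chat ends once a message's content ends with `TERMINATE`. Extracted as a standalone helper (illustrative — the diff inlines it as a lambda in each agent's `is_termination_msg`):

```python
def is_termination_msg(message: dict) -> bool:
    """True if the message content ends with TERMINATE, ignoring trailing whitespace."""
    content = message.get("content") or ""
    return bool(content) and content.rstrip().endswith("TERMINATE")
```

One subtlety: the inline lambda `x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE")` returns the empty string (falsy, but not `False`) when content is missing; wrapping the result in `bool()` normalizes the return type.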
5 changes: 5 additions & 0 deletions autotx/__init__.py
@@ -0,0 +1,5 @@
from autotx.AutoTx import AutoTx
from autotx.autotx_agent import AutoTxAgent
from autotx.autotx_tool import AutoTxTool

__all__ = ['AutoTx', 'AutoTxAgent', 'AutoTxTool']
67 changes: 29 additions & 38 deletions autotx/agents/ExampleAgent.py
@@ -1,51 +1,42 @@
from typing import Callable
from textwrap import dedent
from crewai import Agent
from autotx.AutoTx import AutoTx
from autotx.auto_tx_agent import AutoTxAgent
from autotx.auto_tx_tool import AutoTxTool
from typing import Annotated, Callable
from autotx import AutoTx, AutoTxAgent, AutoTxTool

name = "example-agent"

system_message = f"""
Example of an agent system message.
...
"""

class ExampleTool(AutoTxTool):
name: str = "Example tool that does something useful"
name: str = "example_tool"
description: str = dedent(
"""
This tool does something very useful.
Args:
amount (float): Amount of something.
receiver (str): The receiver of something.
Returns:
The result of the useful tool in a useful format.
"""
)

def _run(
self, amount: float, receiver: str
) -> str:
def build_tool(self, autotx: AutoTx) -> Callable:
def run(
amount: Annotated[float, "Amount of something."],
receiver: Annotated[str, "The receiver of something."]
) -> str:
# TODO: do something useful
print(f"ExampleTool run: {amount} {receiver}")

# NOTE: you can add transactions to AutoTx's current bundle
# autotx.transactions.append(tx)

# TODO: do something useful
print(f"ExampleTool run: {amount} {receiver}")

# NOTE: you can add transactions to AutoTx's current bundle
# self.autotx.transactions.append(tx)
return f"Something useful has been done with {amount} to {receiver}"

return f"Something useful has been done with {amount} to {receiver}"
return run

class ExampleAgent(AutoTxAgent):
def __init__(self, autotx: AutoTx):
super().__init__(
name="example-agent",
role="Example agent role",
goal="Example agent goal",
backstory="Example agent backstory",
tools=[
ExampleTool(autotx),
# AnotherTool(...),
# AndAnotherTool(...)
],
)

def build_agent_factory() -> Callable[[AutoTx], Agent]:
def agent_factory(autotx: AutoTx) -> ExampleAgent:
return ExampleAgent(autotx)
return agent_factory
name=name
system_message=dedent(system_message)
tools=[
ExampleTool(),
# AnotherTool(...),
# AndAnotherTool(...)
]
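The new tool shape — `build_tool` returning a closure whose parameters are documented via `typing.Annotated` — can be exercised in isolation. A self-contained sketch mirroring `ExampleTool`, with the AutoTx context stubbed out as `None`:

```python
from typing import Annotated, Any, Callable

class ExampleTool:
    name: str = "example_tool"
    description: str = "This tool does something very useful."

    def build_tool(self, autotx: Any) -> Callable[..., str]:
        def run(
            amount: Annotated[float, "Amount of something."],
            receiver: Annotated[str, "The receiver of something."],
        ) -> str:
            # A real tool would append a prepared tx to autotx.transactions here
            return f"Something useful has been done with {amount} to {receiver}"
        return run

tool = ExampleTool().build_tool(autotx=None)
print(tool(1.5, "vitalik.eth"))
# → Something useful has been done with 1.5 to vitalik.eth
```

Binding `autotx` through a closure (rather than storing it on the tool instance, as the removed `ExampleTool(autotx)` constructor did) keeps tool classes stateless and lets one tool instance be built against different AutoTx contexts.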
