Trae Agent


Trae Agent is an LLM-based agent for general-purpose software engineering tasks. It provides a powerful command-line interface that can understand natural language instructions and execute complex software engineering workflows using various tools and LLM providers.

For technical details, please refer to our technical report.

Project Status: The project is under active development. Please refer to docs/roadmap.md and CONTRIBUTING.md if you would like to help us improve Trae Agent.

Difference from Other CLI Agents: Trae Agent offers a transparent, modular architecture that researchers and developers can easily modify, extend, and analyze, making it an ideal platform for studying AI agent architectures, conducting ablation studies, and developing novel agent capabilities. This research-friendly design enables the academic and open-source communities to contribute to and build upon the foundational agent framework, fostering innovation in the rapidly evolving field of AI agents.

✨ Features

  • 🌊 Lakeview: Provides short, concise summaries of agent steps
  • 🤖 Multi-LLM Support: Works with OpenAI, Anthropic, Doubao, Azure, OpenRouter, Ollama, and Google Gemini APIs
  • 🛠️ Rich Tool Ecosystem: File editing, bash execution, sequential thinking, and more
  • 🎯 Interactive Mode: Conversational interface for iterative development
  • 📊 Trajectory Recording: Detailed logging of all agent actions for debugging and analysis
  • ⚙️ Flexible Configuration: YAML-based configuration with environment variable support
  • 🚀 Easy Installation: Simple pip-based installation

🚀 Installation

Requirements

  • Python 3.12+
  • uv (used below to sync dependencies and manage the virtual environment)
  • An API key for at least one supported LLM provider

Setup

git clone https://github.com/bytedance/trae-agent.git
cd trae-agent
uv sync --all-extras
source .venv/bin/activate

βš™οΈ Configuration

YAML Configuration (Recommended)

  1. Copy the example configuration file:

    cp trae_config.yaml.example trae_config.yaml

  2. Edit trae_config.yaml with your API credentials and preferences:

agents:
  trae_agent:
    enable_lakeview: true
    model: trae_agent_model  # the model configuration name for Trae Agent
    max_steps: 200  # max number of agent steps
    tools:  # tools used with Trae Agent
      - bash
      - str_replace_based_edit_tool
      - sequentialthinking
      - task_done

model_providers:  # model providers configuration
  anthropic:
    api_key: your_anthropic_api_key
    provider: anthropic
  openai:
    api_key: your_openai_api_key
    provider: openai

models:
  trae_agent_model:
    model_provider: anthropic
    model: claude-sonnet-4-20250514
    max_tokens: 4096
    temperature: 0.5

Note: The trae_config.yaml file is ignored by git to protect your API keys.

Environment Variables (Alternative)

You can also configure API keys using environment variables, exported in your shell or stored in a .env file:

export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GOOGLE_API_KEY="your-google-api-key"
export OPENROUTER_API_KEY="your-openrouter-api-key"
export DOUBAO_API_KEY="your-doubao-api-key"
export DOUBAO_BASE_URL="https://ark.cn-beijing.volces.com/api/v3/"
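
If you use a .env file, the variables are typically written without the export keyword; a minimal sketch, assuming the common KEY=value dotenv format (adjust to the loader your setup uses):

# .env (keep this file out of version control)
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key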

MCP Services (Optional)

To enable Model Context Protocol (MCP) services, add an mcp_servers section to your configuration:

mcp_servers:
  playwright:
    command: npx
    args:
      - "@playwright/[email protected]"

Configuration Priority: Command-line arguments > Configuration file > Environment variables > Default values
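
For example, flags passed on the command line take precedence over whatever model is set in trae_config.yaml; a quick sketch using the --provider and --model flags documented below:

# trae_config.yaml may point at Anthropic, but this run uses OpenAI
trae-cli run "Create a hello world Python script" --provider openai --model gpt-4o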

Legacy JSON Configuration: If using the older JSON format, see docs/legacy_config.md. We recommend migrating to YAML.

📖 Usage

Basic Commands

# Simple task execution
trae-cli run "Create a hello world Python script"

# Check configuration
trae-cli show-config

# Interactive mode
trae-cli interactive

Provider-Specific Examples

# OpenAI
trae-cli run "Fix the bug in main.py" --provider openai --model gpt-4o

# Anthropic
trae-cli run "Add unit tests" --provider anthropic --model claude-sonnet-4-20250514

# Google Gemini
trae-cli run "Optimize this algorithm" --provider google --model gemini-2.5-flash

# OpenRouter (access to multiple providers)
trae-cli run "Review this code" --provider openrouter --model "anthropic/claude-3-5-sonnet"
trae-cli run "Generate documentation" --provider openrouter --model "openai/gpt-4o"

# Doubao
trae-cli run "Refactor the database module" --provider doubao --model doubao-seed-1.6

# Ollama (local models)
trae-cli run "Comment this code" --provider ollama --model qwen3

Advanced Options

# Custom working directory
trae-cli run "Add tests for utils module" --working-dir /path/to/project

# Save execution trajectory
trae-cli run "Debug authentication" --trajectory-file debug_session.json

# Force patch generation
trae-cli run "Update API endpoints" --must-patch

# Interactive mode with custom settings
trae-cli interactive --provider openai --model gpt-4o --max-steps 30

Interactive Mode Commands

In interactive mode, you can type any task description to execute it, or use one of the following commands:

  • status - Show agent information
  • help - Show available commands
  • clear - Clear the screen
  • exit or quit - End the session

πŸ› οΈ Advanced Features

Available Tools

Trae Agent provides a comprehensive toolkit for software engineering tasks including file editing, bash execution, structured thinking, and task completion. For detailed information about all available tools and their capabilities, see docs/tools.md.

Trajectory Recording

Trae Agent automatically records detailed execution trajectories for debugging and analysis:

# Auto-generated trajectory file
trae-cli run "Debug the authentication module"
# Saves to: trajectories/trajectory_YYYYMMDD_HHMMSS.json

# Custom trajectory file
trae-cli run "Optimize database queries" --trajectory-file optimization_debug.json

Trajectory files contain LLM interactions, agent steps, tool usage, and execution metadata. For more details, see docs/TRAJECTORY_RECORDING.md.
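
Because trajectories are plain JSON, they can be inspected with standard tooling; a quick sketch (the file name below is illustrative):

# Pretty-print the start of a recorded trajectory
python -m json.tool trajectories/trajectory_20250101_120000.json | head -50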

🔧 Development

Contributing

For contribution guidelines, please refer to CONTRIBUTING.md.

Troubleshooting

Import Errors: run the CLI with the repository root on your PYTHONPATH:

PYTHONPATH=. trae-cli run "your task"

API Key Issues:

# Verify API keys
echo $OPENAI_API_KEY
trae-cli show-config

Command Not Found: invoke the CLI through uv:

uv run trae-cli run "your task"

Permission Errors: grant execute permission on the project path:

chmod +x /path/to/your/project

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

✍️ Citation

@article{traeresearchteam2025traeagent,
      title={Trae Agent: An LLM-based Agent for Software Engineering with Test-time Scaling},
      author={Trae Research Team and Pengfei Gao and Zhao Tian and Xiangxin Meng and Xinchen Wang and Ruida Hu and Yuanan Xiao and Yizhou Liu and Zhao Zhang and Junjie Chen and Cuiyun Gao and Yun Lin and Yingfei Xiong and Chao Peng and Xia Liu},
      year={2025},
      eprint={2507.23370},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2507.23370},
}

πŸ™ Acknowledgments

We thank Anthropic for building the anthropic-quickstart project that served as a valuable reference for the tool ecosystem.
