MCPHost Anywhere: Remote Ollama & Standardized Provider Interface #48

Open · wants to merge 5 commits into base: main

177 changes: 148 additions & 29 deletions README.md
@@ -7,24 +7,27 @@ Discuss the Project on [Discord](https://discord.gg/RqSS2NQVsY)
## Overview 🌟

MCPHost acts as a host in the MCP client-server architecture, where:

- **Hosts** (like MCPHost) are LLM applications that manage connections and interactions
- **Clients** maintain 1:1 connections with MCP servers
- **Servers** provide context, tools, and capabilities to the LLMs

This architecture allows language models to:

- Access external tools and data sources 🛠️
- Maintain consistent context across interactions 🔄
- Execute commands and retrieve information safely 🔒

Currently supports:

- Claude 3.5 Sonnet (claude-3-5-sonnet-20240620)
- Any Ollama-compatible model with function calling support
- Google Gemini models
- Any OpenAI-compatible local or online model with function calling support

## Features ✨

- Interactive conversations with supported models
- Support for multiple concurrent MCP servers
- Dynamic tool discovery and integration
- Tool calling capabilities for both model types
@@ -43,29 +46,39 @@ Currently supports:
## Environment Setup 🔧

1. Anthropic API Key (for Claude):

```bash
export ANTHROPIC_API_KEY='your-api-key'
```

2. Ollama Setup:

- Install Ollama from https://ollama.ai
- Pull your desired model:

```bash
ollama pull mistral
```

- Ensure Ollama is running:

```bash
ollama serve
```

You can configure Ollama to use either a local or remote server:

- Use the `--llm-url` flag to specify a custom Ollama API endpoint (default: http://localhost:11434)
- Example with remote server: `mcphost -m ollama:llama3 --llm-url http://remote-server:11434`

3. Google API Key (for Gemini):

```bash
export GOOGLE_API_KEY='your-api-key'
```

4. OpenAI-compatible API setup:

- Obtain your API server base URL, API key, and model name (see the example below)
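
For example, a minimal invocation against an OpenAI-compatible endpoint might look like the following sketch (the URL, API key, and model name are placeholders, not real values):

```bash
# Point MCPHost at an OpenAI-compatible endpoint (placeholder values)
mcphost -m openai:your-model-name \
  --llm-url https://your-provider.example.com/v1 \
  --api-key your-api-key
```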

## Installation 📦
@@ -77,57 +90,53 @@ go install github.com/mark3labs/mcphost@latest
## Configuration ⚙️

### MCP-server

MCPHost will automatically create a configuration file at `~/.mcp.json` if it doesn't exist. You can also specify a custom location using the `--config` flag.
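
For instance, to point MCPHost at a project-local configuration file (the path below is only an example):

```bash
# Use a custom config file instead of the default ~/.mcp.json
mcphost --config ./my-project.mcp.json
```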

#### STDIO

The configuration for an STDIO MCP-server should be defined as the following:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/tmp/foo.db"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```

Each STDIO entry requires:

- `command`: The command to run (e.g., `uvx`, `npx`)
- `args`: Array of arguments for the command:
  - For SQLite server: `mcp-server-sqlite` with the database path
  - For filesystem server: `@modelcontextprotocol/server-filesystem` with the directory path

### Server-Sent Events (SSE)

For SSE the following config should be used:

```json
{
  "mcpServers": {
    "server_name": {
      "url": "http://some_host:8000/sse",
      "headers": ["Authorization: Bearer my-token"]
    }
  }
}
```

Each SSE entry requires:

- `url`: The URL where the MCP server is accessible.
- `headers`: (Optional) Array of headers that will be attached to the requests

### System-Prompt
@@ -136,57 +145,165 @@ You can specify a custom system prompt using the `--system-prompt` flag.

```json
{
  "systemPrompt": "You're a cat. Name is Neko"
}
```

Usage:

```bash
mcphost --system-prompt ./my-system-prompt.json
```


## Usage 🚀

MCPHost is a CLI tool that lets you interact with a range of AI models through a unified interface and gives them access to external tools via MCP servers.

### Available Models

Models can be specified using the `--model` (`-m`) flag:

- Anthropic Claude (default): `anthropic:claude-3-5-sonnet-latest`
- OpenAI or OpenAI-compatible: `openai:gpt-4`
- Ollama models: `ollama:modelname`
- Google: `google:gemini-2.0-flash`

### Examples

```bash
# Use Ollama with Qwen model
mcphost -m ollama:qwen2.5:3b

# Use Ollama with a remote server
mcphost -m ollama:llama3 --llm-url http://remote-server:11434

# Use OpenAI's GPT-4
mcphost -m openai:gpt-4

# Use OpenAI-compatible model with custom URL
mcphost --model openai:<your-model-name> \
  --llm-url <your-base-url> \
  --api-key <your-api-key>

# Run in HTTP server mode on port 8080
mcphost --server --port 8080
```

### Flags

#### Generic arguments for all LLMs

- `--api-key string`: API key for LLM providers (required for Anthropic, OpenAI and Google)
- `--llm-url string`: Base URL for LLM API (used for all providers including Ollama, uses defaults if not provided)

#### Provider-specific arguments

- `--google-api-key string`: Google API key (can also be set via GOOGLE_API_KEY environment variable)

> Note: The previous provider-specific URL and API key flags (`--anthropic-url`, `--openai-url`, `--ollama-url`, `--anthropic-api-key`, `--openai-api-key`) have been replaced by the generic `--llm-url` and `--api-key` flags above.

#### Other arguments

- `--config string`: Config file location (default is $HOME/.mcp.json)
- `--system-prompt string`: System prompt file location
- `--debug`: Enable debug logging
- `--message-window int`: Number of messages to keep in context (default: 10)
- `-m, --model string`: Model to use (format: provider:model) (default "anthropic:claude-3-5-sonnet-latest")
- `--server`: Run in HTTP server mode instead of interactive mode
- `--port int`: HTTP server port (default: 8080, only used with --server)
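
As a sketch of how these flags combine (the host and paths are placeholders), the following runs an Ollama model against a remote endpoint with a custom config, a larger context window, and debug logging:

```bash
# Combine the generic and other flags in a single invocation (placeholder values)
mcphost -m ollama:qwen2.5:3b \
  --llm-url http://remote-server:11434 \
  --config ./my-project.mcp.json \
  --message-window 20 \
  --debug
```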

### Server Mode API Endpoints

When running MCPHost in server mode, the following REST API endpoints are available:

#### POST /chat

Send a message to the AI. For new conversations, omit the `referenceId` field. For continuing a conversation, include the `conversationId` from the previous response as `referenceId`.

Request body:

```json
{
"message": "Your message here",
"referenceId": "optional-conversation-id"
}
```

Response:

```json
{
"conversationId": "conversation-uuid",
"message": {
"role": "assistant",
"content": [
{
"type": "text",
"text": "AI response"
}
]
}
}
```

#### DELETE /conversation/{id}

Close a conversation when you're done with it.

Response: 204 No Content

### Server Mode Examples with curl

#### Start mcphost in server mode:

```bash
# Start the server with Claude model on port 8080
mcphost --server --port 8080 --model anthropic:claude-3-5-sonnet-latest
```

#### Start a new conversation:

```bash
curl -X POST http://localhost:8080/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello, what can you help me with today?"}'
```

Example response:

```json
{
"conversationId": "b7e9b0f0-1c2d-3e4f-5a6b-7c8d9e0f1a2b",
"message": {
"role": "assistant",
"content": [
{
"type": "text",
"text": "Hello! I'd be happy to help you today. I can assist with a variety of tasks such as answering questions, providing information, helping with problem-solving, or engaging in conversation on many different topics. Is there something specific you'd like to discuss or learn more about?"
}
]
}
}
```

#### Continue the conversation:

```bash
curl -X POST http://localhost:8080/chat \
-H "Content-Type: application/json" \
-d '{"message": "What are the main features of the MCP protocol?", "referenceId": "b7e9b0f0-1c2d-3e4f-5a6b-7c8d9e0f1a2b"}'
```

#### Close the conversation when finished:

```bash
curl -X DELETE http://localhost:8080/conversation/b7e9b0f0-1c2d-3e4f-5a6b-7c8d9e0f1a2b
```
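
Putting the three calls together, a minimal end-to-end script might look like the sketch below (it assumes `jq` is installed and mcphost is running locally on port 8080):

```bash
#!/usr/bin/env bash
# Full conversation lifecycle against a local mcphost server (requires jq)
BASE_URL="http://localhost:8080"

# 1. Start a new conversation and capture its ID
CONVERSATION_ID=$(curl -s -X POST "$BASE_URL/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, what can you help me with today?"}' | jq -r '.conversationId')

# 2. Continue the conversation, passing the captured ID as referenceId
curl -s -X POST "$BASE_URL/chat" \
  -H "Content-Type: application/json" \
  -d "{\"message\": \"What are the main features of the MCP protocol?\", \"referenceId\": \"$CONVERSATION_ID\"}" | jq .

# 3. Close the conversation when finished (returns 204 No Content)
curl -s -X DELETE "$BASE_URL/conversation/$CONVERSATION_ID"
```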

### Interactive Commands

While chatting, you can use:

- `/help`: Show available commands
- `/tools`: List all available tools
- `/servers`: List configured MCP servers
@@ -195,6 +312,7 @@ While chatting, you can use:
- `Ctrl+C`: Exit at any time

### Global Flags

- `--config`: Specify custom config file location
- `--message-window`: Set number of messages to keep in context (default: 10)

@@ -205,6 +323,7 @@ MCPHost can work with any MCP-compliant server.
## Contributing 🤝

Contributions are welcome! Feel free to:

- Submit bug reports or feature requests through issues
- Create pull requests for improvements
- Share your custom MCP servers