Merge pull request #135 from stacklok/ollama-support
Ollama provider
lukehinds authored Dec 2, 2024
2 parents c70b16d + 34ca65c commit 366bd6e
Showing 12 changed files with 599 additions and 6 deletions.
10 changes: 10 additions & 0 deletions docs/cli.md
@@ -61,6 +61,11 @@ codegate serve [OPTIONS]
- Base URL for Anthropic provider
- Overrides configuration file and environment variables

- `--ollama-url TEXT`: Ollama provider URL (default: http://localhost:11434)
- Optional
- Base URL for Ollama provider (/api path is added automatically)
- Overrides configuration file and environment variables

### show-prompts

Display the loaded system prompts:
@@ -120,6 +125,11 @@ Start server with custom vLLM endpoint:
codegate serve --vllm-url https://vllm.example.com
```

Start server with custom Ollama endpoint:
```bash
codegate serve --ollama-url http://localhost:11434
```

Show default system prompts:
```bash
codegate show-prompts
11 changes: 9 additions & 2 deletions docs/configuration.md
@@ -20,6 +20,7 @@ The configuration system in Codegate is managed through the `Config` class in `c
- vLLM: "http://localhost:8000"
- OpenAI: "https://api.openai.com/v1"
- Anthropic: "https://api.anthropic.com/v1"
- Ollama: "http://localhost:11434"

## Configuration Methods

@@ -41,6 +42,7 @@ provider_urls:
vllm: "https://vllm.example.com"
openai: "https://api.openai.com/v1"
anthropic: "https://api.anthropic.com/v1"
ollama: "http://localhost:11434"
```
### From Environment Variables
@@ -55,6 +57,7 @@ Environment variables are automatically loaded with these mappings:
- `CODEGATE_PROVIDER_VLLM_URL`: vLLM provider URL
- `CODEGATE_PROVIDER_OPENAI_URL`: OpenAI provider URL
- `CODEGATE_PROVIDER_ANTHROPIC_URL`: Anthropic provider URL
- `CODEGATE_PROVIDER_OLLAMA_URL`: Ollama provider URL

```python
config = Config.from_env()
@@ -72,21 +75,25 @@ Provider URLs can be configured in several ways:
vllm: "https://vllm.example.com" # /v1 path is added automatically
openai: "https://api.openai.com/v1"
anthropic: "https://api.anthropic.com/v1"
ollama: "http://localhost:11434" # /api path is added automatically
```

2. Via Environment Variables:
```bash
export CODEGATE_PROVIDER_VLLM_URL=https://vllm.example.com
export CODEGATE_PROVIDER_OPENAI_URL=https://api.openai.com/v1
export CODEGATE_PROVIDER_ANTHROPIC_URL=https://api.anthropic.com/v1
export CODEGATE_PROVIDER_OLLAMA_URL=http://localhost:11434
```

3. Via CLI Flags:
```bash
codegate serve --vllm-url https://vllm.example.com
codegate serve --vllm-url https://vllm.example.com --ollama-url http://localhost:11434
```

Note: For the vLLM provider, the /v1 path is automatically appended to the base URL if not present.
Note:
- For the vLLM provider, the /v1 path is automatically appended to the base URL if not present.
- For the Ollama provider, the /api path is automatically appended to the base URL if not present.
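
The rule itself is just "strip any trailing slash, then append the suffix if it is not already there". As an illustrative sketch only (the real logic lives inside each provider's adapter, not in a shared helper):

```python
# Illustrative sketch of the automatic path suffixing; not the actual codegate code.
def ensure_suffix(base_url: str, suffix: str) -> str:
    base = base_url.rstrip("/")
    return base if base.endswith(suffix) else f"{base}{suffix}"

print(ensure_suffix("https://vllm.example.com", "/v1"))     # https://vllm.example.com/v1
print(ensure_suffix("http://localhost:11434", "/api"))      # http://localhost:11434/api
print(ensure_suffix("http://localhost:11434/api", "/api"))  # already suffixed, unchanged
```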

### Log Levels

15 changes: 11 additions & 4 deletions docs/development.md
@@ -1,5 +1,3 @@
# Development Guide

This guide provides comprehensive information for developers working on the Codegate project.

## Project Overview
@@ -157,6 +155,13 @@ Codegate supports multiple AI providers through a modular provider system.
- Default URL: https://api.anthropic.com/v1
- Anthropic Claude API implementation

4. **Ollama Provider**
- Default URL: http://localhost:11434
- Endpoints:
* Native Ollama API: `/ollama/api/chat`
* OpenAI-compatible: `/ollama/chat/completions`
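
Both routes accept a chat-style JSON payload. The snippet below is a rough smoke test of the two routes; the proxy address and model name are assumptions, so substitute whatever your `codegate serve` instance and local Ollama actually use.

```python
# Rough smoke test for the two Ollama routes exposed by codegate.
# The proxy address and model name are assumptions; adjust to your setup.
import requests

CODEGATE = "http://localhost:8989"  # assumed `codegate serve` address
payload = {
    "model": "llama3.1",  # any model already pulled into your local Ollama
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": False,
}

# Native Ollama API shape
native = requests.post(f"{CODEGATE}/ollama/api/chat", json=payload, timeout=30)

# OpenAI-compatible shape
compat = requests.post(f"{CODEGATE}/ollama/chat/completions", json=payload, timeout=30)

print(native.status_code, compat.status_code)
```
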
### Configuring Providers
Provider URLs can be configured through:
@@ -167,18 +172,20 @@ Provider URLs can be configured through:
vllm: "https://vllm.example.com"
openai: "https://api.openai.com/v1"
anthropic: "https://api.anthropic.com/v1"
ollama: "http://localhost:11434" # /api path added automatically
```

2. Environment variables:
```bash
export CODEGATE_PROVIDER_VLLM_URL=https://vllm.example.com
export CODEGATE_PROVIDER_OPENAI_URL=https://api.openai.com/v1
export CODEGATE_PROVIDER_ANTHROPIC_URL=https://api.anthropic.com/v1
export CODEGATE_PROVIDER_OLLAMA_URL=http://localhost:11434
```

3. CLI flags:
```bash
codegate serve --vllm-url https://vllm.example.com
codegate serve --vllm-url https://vllm.example.com --ollama-url http://localhost:11434
```

### Implementing New Providers
@@ -276,4 +283,4 @@ codegate serve --prompts my-prompts.yaml
codegate serve --vllm-url https://vllm.example.com
```

See [CLI Documentation](cli.md) for detailed command information.
1 change: 1 addition & 0 deletions src/codegate/config.py
@@ -19,6 +19,7 @@
"openai": "https://api.openai.com/v1",
"anthropic": "https://api.anthropic.com/v1",
"vllm": "http://localhost:8000", # Base URL without /v1 path
"ollama": "http://localhost:11434", # Default Ollama server URL
}
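
The entry above is only a fallback. Per docs/cli.md, the `--ollama-url` flag overrides both the configuration file and environment variables. A hypothetical sketch of that resolution order follows; it is illustrative only, not the actual `Config` implementation, and the env-over-file ordering is an assumption.

```python
# Hypothetical resolution order for the Ollama URL: CLI flag, then environment
# variable, then config file, then the built-in default. Illustrative only;
# this is not the actual Config implementation.
import os

DEFAULT_PROVIDER_URLS = {"ollama": "http://localhost:11434"}


def resolve_ollama_url(cli_value: str | None = None, file_value: str | None = None) -> str:
    return (
        cli_value
        or os.environ.get("CODEGATE_PROVIDER_OLLAMA_URL")
        or file_value
        or DEFAULT_PROVIDER_URLS["ollama"]
    )


# resolve_ollama_url()                                 -> "http://localhost:11434"
# resolve_ollama_url(cli_value="http://gpu-box:11434") -> "http://gpu-box:11434"
```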


1 change: 1 addition & 0 deletions src/codegate/pipeline/extract_snippets/extract_snippets.py
@@ -13,6 +13,7 @@

logger = structlog.get_logger("codegate")


def ecosystem_from_filepath(filepath: str) -> Optional[str]:
"""
Determine language from filepath.
2 changes: 2 additions & 0 deletions src/codegate/providers/__init__.py
@@ -1,5 +1,6 @@
from codegate.providers.anthropic.provider import AnthropicProvider
from codegate.providers.base import BaseProvider
from codegate.providers.ollama.provider import OllamaProvider
from codegate.providers.openai.provider import OpenAIProvider
from codegate.providers.registry import ProviderRegistry
from codegate.providers.vllm.provider import VLLMProvider
@@ -10,4 +11,5 @@
"OpenAIProvider",
"AnthropicProvider",
"VLLMProvider",
"OllamaProvider",
]
3 changes: 3 additions & 0 deletions src/codegate/providers/ollama/__init__.py
@@ -0,0 +1,3 @@
from codegate.providers.ollama.provider import OllamaProvider

__all__ = ["OllamaProvider"]
86 changes: 86 additions & 0 deletions src/codegate/providers/ollama/adapter.py
@@ -0,0 +1,86 @@
from typing import Any, Dict

from litellm import ChatCompletionRequest

from codegate.providers.normalizer.base import ModelInputNormalizer, ModelOutputNormalizer


class OllamaInputNormalizer(ModelInputNormalizer):
def __init__(self):
super().__init__()

def normalize(self, data: Dict) -> ChatCompletionRequest:
"""
Normalize the input data to the format expected by Ollama.
"""
# Make a copy of the data to avoid modifying the original
normalized_data = data.copy()

# Format the model name
if "model" in normalized_data:
normalized_data["model"] = normalized_data["model"].strip()

# Convert messages format if needed
if "messages" in normalized_data:
messages = normalized_data["messages"]
converted_messages = []
for msg in messages:
if isinstance(msg.get("content"), list):
# Convert list format to string
content_parts = []
for part in msg["content"]:
if part.get("type") == "text":
content_parts.append(part["text"])
msg = msg.copy()
msg["content"] = " ".join(content_parts)
converted_messages.append(msg)
normalized_data["messages"] = converted_messages

# Ensure the base_url ends with /api if provided
if "base_url" in normalized_data:
base_url = normalized_data["base_url"].rstrip("/")
if not base_url.endswith("/api"):
normalized_data["base_url"] = f"{base_url}/api"

return ChatCompletionRequest(**normalized_data)

def denormalize(self, data: ChatCompletionRequest) -> Dict:
"""
Convert back to raw format for the API request
"""
return data


class OllamaOutputNormalizer(ModelOutputNormalizer):
def __init__(self):
super().__init__()

def normalize_streaming(
self,
model_reply: Any,
) -> Any:
"""
Pass through Ollama response
"""
return model_reply

def normalize(self, model_reply: Any) -> Any:
"""
Pass through Ollama response
"""
return model_reply

def denormalize(self, normalized_reply: Any) -> Any:
"""
Pass through Ollama response
"""
return normalized_reply

def denormalize_streaming(
self,
normalized_reply: Any,
) -> Any:
"""
Pass through Ollama response
"""
return normalized_reply
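
A quick usage sketch of `OllamaInputNormalizer` as defined above; the model name, messages, and URL are just example values:

```python
# Walk-through of OllamaInputNormalizer.normalize using the class defined above.
from codegate.providers.ollama.adapter import OllamaInputNormalizer

normalizer = OllamaInputNormalizer()
request = {
    "model": " codellama:7b ",  # example model name with stray whitespace
    "messages": [
        {
            "role": "user",
            # OpenAI-style list content is flattened to a single string
            "content": [
                {"type": "text", "text": "Review this function"},
                {"type": "text", "text": "and suggest improvements."},
            ],
        }
    ],
    "base_url": "http://localhost:11434",
}

normalized = normalizer.normalize(request)
# normalized["model"]    == "codellama:7b"
# normalized["messages"][0]["content"] == "Review this function and suggest improvements."
# normalized["base_url"] == "http://localhost:11434/api"
```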