Commit

adding ollama and o1 support
vprelovac committed Dec 18, 2024
1 parent b4069e2 commit 5ffff89
Showing 2 changed files with 21 additions and 16 deletions.
33 changes: 18 additions & 15 deletions README.md
@@ -172,27 +172,30 @@ To get a full list of supported models:

```diff
 model = llms.init()
-model.list()
+model.list() # list all models
 
+model.list("gpt") # lists only models with 'gpt' in name/provider name
```
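The substring filter that `model.list("gpt")` performs can be sketched as a case-insensitive match over provider and model names. This is an illustrative stand-in, not pyllms' actual registry or implementation; the `MODELS` data and `list_models` helper below are hypothetical:

```python
# Hypothetical sketch of the filtering behind model.list(query).
# MODELS is illustrative sample data, not pyllms' real model registry.
MODELS = [
    ("OpenAIProvider", "gpt-4o-mini"),
    ("OpenAIProvider", "o1"),
    ("AnthropicProvider", "claude-3-5-sonnet-20241022"),
    ("OllamaProvider", "phi3:latest"),
]

def list_models(query=None):
    """Return all (provider, model) pairs, or only those whose provider
    or model name contains `query`, compared case-insensitively."""
    if query is None:
        return MODELS
    q = query.lower()
    return [(p, m) for p, m in MODELS if q in p.lower() or q in m.lower()]

print(list_models("gpt"))  # [('OpenAIProvider', 'gpt-4o-mini')]
```

Matching against both the provider and the model name is what lets a query like `"ollama"` select every model served by that provider in one call.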
Currently supported models (may be outdated):

```diff
 | **Provider** | **Models** |
 |---------------------------|---------------------------------------------------------------------------------------------------------|
-| HuggingfaceHubProvider | hf_pythia, hf_falcon40b, hf_falcon7b, hf_mptinstruct, hf_mptchat, hf_llava, hf_dolly, hf_vicuna |
-| GroqProvider | llama-3.1-8b-instant, llama-3.1-405b-reasoning, llama-3.1-70b-versatile |
-| DeepSeekProvider | deepseek-chat, deepseek-coder |
-| MistralProvider | mistral-tiny, open-mistral-7b, open-mistral-nemo, mistral-small, open-mixtral-8x7b, mistral-small-latest, mistral-medium-latest, mistral-large-latest |
-| OpenAIProvider | gpt-4o-mini, gpt-3.5-turbo, gpt-3.5-turbo-1106, gpt-3.5-turbo-instruct, gpt-4o, gpt-4-1106-preview, gpt-4-turbo-preview, gpt-4-turbo |
-| GoogleProvider | gemini-1.5-pro-preview-0514, gemini-1.5-flash-preview-0514, chat-bison, text-bison, text-bison-32k, code-bison, code-bison-32k, codechat-bison, codechat-bison-32k, gemini-pro |
-| GoogleGenAIProvider | chat-bison-genai, text-bison-genai, gemini-1.5-pro-latest |
-| AnthropicProvider | claude-3-haiku-20240307, claude-instant-v1.1, claude-instant-v1, claude-instant-1, claude-instant-1.2, claude-3-sonnet-20240229, claude-3-5-sonnet-20240620, claude-2.1, claude-v1, claude-v1-100k, claude-3-opus-20240229 |
-| BedrockAnthropicProvider | anthropic.claude-3-haiku-20240307-v1:0, anthropic.claude-instant-v1, anthropic.claude-v1, anthropic.claude-v2, anthropic.claude-3-sonnet-20240229-v1:0 |
-| TogetherProvider | meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo |
-| RekaProvider | reka-edge, reka-flash, reka-core |
-| AlephAlphaProvider | luminous-base, luminous-extended, luminous-supreme, luminous-supreme-control |
-| AI21Provider | j2-grande-instruct, j2-jumbo-instruct, command, command-nightly |
-| CohereProvider | command, command-nightly |
+| OpenAIProvider | gpt-3.5-turbo, gpt-3.5-turbo-1106, gpt-3.5-turbo-instruct, gpt-4, gpt-4-1106-preview, gpt-4-turbo-preview, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-4o-2024-08-06, o1-preview, o1-mini, o1 |
+| AnthropicProvider | claude-instant-v1.1, claude-instant-v1, claude-v1, claude-v1-100k, claude-instant-1, claude-instant-1.2, claude-2.1, claude-3-haiku-20240307, claude-3-sonnet-20240229, claude-3-opus-20240229, claude-3-5-sonnet-20240620, claude-3-5-sonnet-20241022 |
+| BedrockAnthropicProvider | anthropic.claude-instant-v1, anthropic.claude-v1, anthropic.claude-v2, anthropic.claude-3-haiku-20240307-v1:0, anthropic.claude-3-sonnet-20240229-v1:0, anthropic.claude-3-5-sonnet-20240620-v1:0 |
+| AI21Provider | j2-grande-instruct, j2-jumbo-instruct |
+| CohereProvider | command, command-nightly |
+| AlephAlphaProvider | luminous-base, luminous-extended, luminous-supreme, luminous-supreme-control |
+| HuggingfaceHubProvider | hf_pythia, hf_falcon40b, hf_falcon7b, hf_mptinstruct, hf_mptchat, hf_llava, hf_dolly, hf_vicuna |
+| GoogleGenAIProvider | chat-bison-genai, text-bison-genai, gemini-1.5-pro, gemini-1.5-pro-latest, gemini-1.5-flash, gemini-1.5-flash-latest, gemini-1.5-pro-exp-0801 |
+| GoogleProvider | chat-bison, text-bison, text-bison-32k, code-bison, code-bison-32k, codechat-bison, codechat-bison-32k, gemini-pro, gemini-1.5-pro-preview-0514, gemini-1.5-flash-preview-0514 |
+| OllamaProvider | vanilj/Phi-4:latest, falcon3:10b, smollm2:latest, llama3.2:3b-instruct-q8_0, qwen2:1.5b, mistral:7b-instruct-v0.2-q4_K_S, phi3:latest, phi3:3.8b, phi:latest, tinyllama:latest, magicoder:latest, deepseek-coder:6.7b, deepseek-coder:latest, dolphin-phi:latest, stablelm-zephyr:latest |
+| DeepSeekProvider | deepseek-chat, deepseek-coder |
+| GroqProvider | llama-3.1-405b-reasoning, llama-3.1-70b-versatile, llama-3.1-8b-instant, gemma2-9b-it |
+| RekaProvider | reka-edge, reka-flash, reka-core |
+| TogetherProvider | meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo |
+| OpenRouterProvider | nvidia/llama-3.1-nemotron-70b-instruct, x-ai/grok-2, nousresearch/hermes-3-llama-3.1-405b:free, google/gemini-flash-1.5-exp, liquid/lfm-40b, mistralai/ministral-8b, qwen/qwen-2.5-72b-instruct |
+| MistralProvider | mistral-tiny, open-mistral-7b, mistral-small, open-mixtral-8x7b, mistral-small-latest, mistral-medium-latest, mistral-large-latest, open-mistral-nemo |
```



4 changes: 3 additions & 1 deletion setup.py
@@ -9,7 +9,7 @@

```diff
 setup(
     name="pyllms",
-    version="0.6.1",
+    version="0.7.0",
     description="Minimal Python library to connect to LLMs (OpenAI, Anthropic, Google, Mistral, OpenRouter, Reka, Groq, Together, Ollama, AI21, Cohere, Aleph-Alpha, HuggingfaceHub), with a built-in model performance benchmark.",
     long_description=long_description,
     long_description_content_type="text/markdown",
```
@@ -31,6 +31,8 @@
"google-generativeai",
"mistralai",
"ollama",
"reka-api",
"together",
],
extras_require={
"local": ["einops", "accelerate"]
