Integrate Yi model (#14353)
* first update

* modify

* support yi llm

* fix url error

* rm colab link

* pants and lint

* pants and lint

---------

Co-authored-by: Andrei Fajardo <[email protected]>
Yimi81 and nerdai authored Jun 25, 2024
1 parent 5142603 commit 5ddb2b2
Showing 12 changed files with 675 additions and 0 deletions.
232 changes: 232 additions & 0 deletions docs/docs/examples/llm/yi.ipynb
@@ -0,0 +1,232 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "2e33dced-e587-4397-81b3-d6606aa1738a",
"metadata": {},
"source": [
"# 01.AI LLM\n",
"\n",
"This notebook shows how to use `Yi` series LLMs.\n",
"\n",
"Visit https://platform.01.ai/ and sign up to get an API key."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5863dde9-84a0-4c33-ad52-cc767442f63f",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"id": "833bdb2b",
"metadata": {},
"source": [
"If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4aff387e",
"metadata": {},
"outputs": [],
"source": [
"%pip install llama-index-llms-yi"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9bbbc106",
"metadata": {},
"outputs": [],
"source": [
"!pip install llama-index"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ad297f19-998f-4485-aa2f-d67020058b7d",
"metadata": {},
"outputs": [],
"source": [
"from llama_index.llms.yi import Yi"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "152ced37-9a42-47be-9a39-4218521f5e72",
"metadata": {},
"outputs": [],
"source": [
"# set api key in env or in llm\n",
"# import os\n",
"# os.environ[\"YI_API_KEY\"] = \"your api key\"\n",
"\n",
"llm = Yi(model=\"yi-large\", api_key=\"your api key\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d61b10bb-e911-47fb-8e84-19828cf224be",
"metadata": {},
"outputs": [],
"source": [
"resp = llm.complete(\"Who is Paul Graham?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3bd14f4e-c245-4384-a471-97e4ddfcb40e",
"metadata": {},
"outputs": [],
"source": [
"print(resp)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3ba9503c-b440-43c6-a50c-676c79993813",
"metadata": {},
"source": [
"#### Call `chat` with a list of messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee8a4a55-5680-4dc6-a44c-fc8ad7892f80",
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.llms import ChatMessage\n",
"\n",
"messages = [\n",
" ChatMessage(\n",
" role=\"system\", content=\"You are a pirate with a colorful personality\"\n",
" ),\n",
" ChatMessage(role=\"user\", content=\"What is your name\"),\n",
"]\n",
"resp = llm.chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a9bfe53-d15b-4e75-9d91-8c5d024f4eda",
"metadata": {},
"outputs": [],
"source": [
"print(resp)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "25ad1b00-28fc-4bcd-96c4-d5b35605721a",
"metadata": {},
"source": [
"### Streaming"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "13c641fa-345a-4dce-87c5-ab1f6dcf4757",
"metadata": {},
"source": [
"Using `stream_complete` endpoint "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06da1ef1-2f6b-497c-847b-62dd2df11491",
"metadata": {},
"outputs": [],
"source": [
"response = llm.stream_complete(\"Who is Paul Graham?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b851def-5160-46e5-a30c-5a3ef2356b79",
"metadata": {},
"outputs": [],
"source": [
"for r in response:\n",
" print(r.delta, end=\"\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ca52051d-6b28-49d7-98f5-82e266a1c7a6",
"metadata": {},
"source": [
"Using `stream_chat` endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fe553190-52a9-436d-84ae-4dd99a1808f4",
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.llms import ChatMessage\n",
"\n",
"messages = [\n",
" ChatMessage(\n",
" role=\"system\", content=\"You are a pirate with a colorful personality\"\n",
" ),\n",
" ChatMessage(role=\"user\", content=\"What is your name\"),\n",
"]\n",
"resp = llm.stream_chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "154c503c-f893-4b6b-8a65-a9a27b636046",
"metadata": {},
"outputs": [],
"source": [
"for r in resp:\n",
" print(r.delta, end=\"\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
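For readers skimming the diff, the notebook above condenses to the following script. This is a minimal sketch, assuming the `YI_API_KEY` environment variable is set; it uses only the `Yi` methods demonstrated in the notebook cells (`complete`, `chat`, `stream_complete`).

import os

from llama_index.core.llms import ChatMessage
from llama_index.llms.yi import Yi

# Read the key from the environment instead of hard-coding it.
llm = Yi(model="yi-large", api_key=os.environ["YI_API_KEY"])

# One-shot completion.
print(llm.complete("Who is Paul Graham?"))

# Chat with a system prompt and a user message.
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="What is your name"),
]
print(llm.chat(messages))

# Streaming completion: print deltas as they arrive.
for chunk in llm.stream_complete("Who is Paul Graham?"):
    print(chunk.delta, end="")
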
153 changes: 153 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-yi/.gitignore
@@ -0,0 +1,153 @@
llama_index/_static
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
3 changes: 3 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-yi/BUILD
@@ -0,0 +1,3 @@
poetry_requirements(
name="poetry",
)