add optimum intel with ipex backend to llama-index-integration (#14553)
Showing 9 changed files with 607 additions and 0 deletions.
@@ -0,0 +1,223 @@
{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "978146e2",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/llm/openvino.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f717d3d4-942b-4d86-9435-fc44b3ac6d39",
   "metadata": {},
   "source": [
    "# Optimum Intel LLMs optimized with IPEX backend\n",
    "\n",
    "[Optimum Intel](https://github.com/huggingface/optimum-intel) accelerates Hugging Face pipelines on Intel architectures, leveraging [Intel Extension for PyTorch (IPEX)](https://github.com/intel/intel-extension-for-pytorch) optimizations.\n",
    "\n",
    "Optimum Intel models can be run locally through the `OptimumIntelLLM` entity wrapped by LlamaIndex:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "90cf0f2e-8d8d-4e42-81bf-866c759221e1",
   "metadata": {},
   "source": [
    "In the line below, we install the packages necessary for this demo:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f413f179",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-llms-optimum-intel"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3dac8f9f-7136-43f7-9e9f-de679e74d66e",
   "metadata": {},
   "source": [
    "Now that we're set up, let's play around:"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "2c577674",
   "metadata": {},
   "source": [
    "If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "86028752",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install llama-index"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0465029c-fe69-454a-9561-55f7a382b2e2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.optimum_intel import OptimumIntelLLM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49122583",
   "metadata": {},
   "outputs": [],
   "source": [
    "def messages_to_prompt(messages):\n",
    "    prompt = \"\"\n",
    "    for message in messages:\n",
    "        if message.role == \"system\":\n",
    "            prompt += f\"<|system|>\\n{message.content}</s>\\n\"\n",
    "        elif message.role == \"user\":\n",
    "            prompt += f\"<|user|>\\n{message.content}</s>\\n\"\n",
    "        elif message.role == \"assistant\":\n",
    "            prompt += f\"<|assistant|>\\n{message.content}</s>\\n\"\n",
    "\n",
    "    # ensure we start with a system prompt, insert blank if needed\n",
    "    if not prompt.startswith(\"<|system|>\\n\"):\n",
    "        prompt = \"<|system|>\\n</s>\\n\" + prompt\n",
    "\n",
    "    # add final assistant prompt\n",
    "    prompt = prompt + \"<|assistant|>\\n\"\n",
    "\n",
    "    return prompt\n",
    "\n",
    "\n",
    "def completion_to_prompt(completion):\n",
    "    return f\"<|system|>\\n</s>\\n<|user|>\\n{completion}</s>\\n<|assistant|>\\n\""
   ]
  },
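  {
   "cell_type": "markdown",
   "id": "prompt-template-check",
   "metadata": {},
   "source": [
    "For a quick sanity check of the template these helpers produce, a minimal sketch (`ChatMessage` comes from `llama_index.core.llms`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "prompt-template-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.llms import ChatMessage\n",
    "\n",
    "sample_messages = [\n",
    "    ChatMessage(role=\"system\", content=\"You are a helpful assistant.\"),\n",
    "    ChatMessage(role=\"user\", content=\"Hello!\"),\n",
    "]\n",
    "\n",
    "# Prints the neural-chat style template built by messages_to_prompt above\n",
    "print(messages_to_prompt(sample_messages))"
   ]
  },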
  {
   "cell_type": "markdown",
   "id": "d3e21cef-b3c3-4ddd-a70c-728de440648e",
   "metadata": {},
   "source": [
    "### Model Loading\n",
    "\n",
    "Models are loaded by passing the model parameters to the `OptimumIntelLLM` constructor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a27feba3-d027-4d10-b1af-1e130e764a67",
   "metadata": {},
   "outputs": [],
   "source": [
    "oi_llm = OptimumIntelLLM(\n",
    "    model_name=\"Intel/neural-chat-7b-v3-3\",\n",
    "    tokenizer_name=\"Intel/neural-chat-7b-v3-3\",\n",
    "    context_window=3900,\n",
    "    max_new_tokens=256,\n",
    "    generate_kwargs={\"temperature\": 0.7, \"top_k\": 50, \"top_p\": 0.95},\n",
    "    messages_to_prompt=messages_to_prompt,\n",
    "    completion_to_prompt=completion_to_prompt,\n",
    "    device_map=\"cpu\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e25c7162",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = oi_llm.complete(\"What is the meaning of life?\")\n",
    "print(str(response))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dda1be10",
   "metadata": {},
   "source": [
    "### Streaming\n",
    "\n",
    "Using the `stream_complete` endpoint:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "12e0f3c0",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = oi_llm.stream_complete(\"Who is Mother Teresa?\")\n",
    "for r in response:\n",
    "    print(r.delta, end=\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c87c383",
   "metadata": {},
   "source": [
    "Using the `stream_chat` endpoint:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2db801a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.llms import ChatMessage\n",
    "\n",
    "messages = [\n",
    "    ChatMessage(\n",
    "        role=\"system\",\n",
    "        content=\"You are an American chef in a small restaurant in New Orleans\",\n",
    "    ),\n",
    "    ChatMessage(role=\"user\", content=\"What is your dish of the day?\"),\n",
    "]\n",
    "resp = oi_llm.stream_chat(messages)\n",
    "\n",
    "for r in resp:\n",
    "    print(r.delta, end=\"\")"
   ]
  },
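  {
   "cell_type": "markdown",
   "id": "chat-endpoint-note",
   "metadata": {},
   "source": [
    "The non-streaming `chat` endpoint follows the same pattern; a minimal sketch reusing the `messages` list defined above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "chat-endpoint-note-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Blocking chat call; returns a single ChatResponse instead of a stream\n",
    "resp = oi_llm.chat(messages)\n",
    "print(resp)"
   ]
  }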
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
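For context on the backend this wrapper builds on: Optimum Intel's IPEX classes mirror the transformers API, so a rough sketch of the equivalent direct usage might look like the lines below (an illustrative assumption requiring `optimum[ipex]`, not the integration's exact internals):

from optimum.intel import IPEXModelForCausalLM
from transformers import AutoTokenizer, pipeline

# IPEXModelForCausalLM stands in for transformers' AutoModelForCausalLM,
# applying IPEX operator and graph optimizations on Intel hardware
model = IPEXModelForCausalLM.from_pretrained("Intel/neural-chat-7b-v3-3")
tokenizer = AutoTokenizer.from_pretrained("Intel/neural-chat-7b-v3-3")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("What is the meaning of life?")[0]["generated_text"])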
153 changes: 153 additions & 0 deletions
llama-index-integrations/llms/llama-index-llms-optimum-intel/.gitignore
@@ -0,0 +1,153 @@ | ||
llama_index/_static | ||
.DS_Store | ||
# Byte-compiled / optimized / DLL files | ||
__pycache__/ | ||
*.py[cod] | ||
*$py.class | ||
|
||
# C extensions | ||
*.so | ||
|
||
# Distribution / packaging | ||
.Python | ||
bin/ | ||
build/ | ||
develop-eggs/ | ||
dist/ | ||
downloads/ | ||
eggs/ | ||
.eggs/ | ||
etc/ | ||
include/ | ||
lib/ | ||
lib64/ | ||
parts/ | ||
sdist/ | ||
share/ | ||
var/ | ||
wheels/ | ||
pip-wheel-metadata/ | ||
share/python-wheels/ | ||
*.egg-info/ | ||
.installed.cfg | ||
*.egg | ||
MANIFEST | ||
|
||
# PyInstaller | ||
# Usually these files are written by a python script from a template | ||
# before PyInstaller builds the exe, so as to inject date/other infos into it. | ||
*.manifest | ||
*.spec | ||
|
||
# Installer logs | ||
pip-log.txt | ||
pip-delete-this-directory.txt | ||
|
||
# Unit test / coverage reports | ||
htmlcov/ | ||
.tox/ | ||
.nox/ | ||
.coverage | ||
.coverage.* | ||
.cache | ||
nosetests.xml | ||
coverage.xml | ||
*.cover | ||
*.py,cover | ||
.hypothesis/ | ||
.pytest_cache/ | ||
.ruff_cache | ||
|
||
# Translations | ||
*.mo | ||
*.pot | ||
|
||
# Django stuff: | ||
*.log | ||
local_settings.py | ||
db.sqlite3 | ||
db.sqlite3-journal | ||
|
||
# Flask stuff: | ||
instance/ | ||
.webassets-cache | ||
|
||
# Scrapy stuff: | ||
.scrapy | ||
|
||
# Sphinx documentation | ||
docs/_build/ | ||
|
||
# PyBuilder | ||
target/ | ||
|
||
# Jupyter Notebook | ||
.ipynb_checkpoints | ||
notebooks/ | ||
|
||
# IPython | ||
profile_default/ | ||
ipython_config.py | ||
|
||
# pyenv | ||
.python-version | ||
|
||
# pipenv | ||
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. | ||
# However, in case of collaboration, if having platform-specific dependencies or dependencies | ||
# having no cross-platform support, pipenv may install dependencies that don't work, or not | ||
# install all needed dependencies. | ||
#Pipfile.lock | ||
|
||
# PEP 582; used by e.g. github.com/David-OConnor/pyflow | ||
__pypackages__/ | ||
|
||
# Celery stuff | ||
celerybeat-schedule | ||
celerybeat.pid | ||
|
||
# SageMath parsed files | ||
*.sage.py | ||
|
||
# Environments | ||
.env | ||
.venv | ||
env/ | ||
venv/ | ||
ENV/ | ||
env.bak/ | ||
venv.bak/ | ||
pyvenv.cfg | ||
|
||
# Spyder project settings | ||
.spyderproject | ||
.spyproject | ||
|
||
# Rope project settings | ||
.ropeproject | ||
|
||
# mkdocs documentation | ||
/site | ||
|
||
# mypy | ||
.mypy_cache/ | ||
.dmypy.json | ||
dmypy.json | ||
|
||
# Pyre type checker | ||
.pyre/ | ||
|
||
# Jetbrains | ||
.idea | ||
modules/ | ||
*.swp | ||
|
||
# VsCode | ||
.vscode | ||
|
||
# pipenv | ||
Pipfile | ||
Pipfile.lock | ||
|
||
# pyright | ||
pyrightconfig.json |
3 changes: 3 additions & 0 deletions
llama-index-integrations/llms/llama-index-llms-optimum-intel/BUILD
@@ -0,0 +1,3 @@
poetry_requirements(
    name="poetry",
)
17 changes: 17 additions & 0 deletions
llama-index-integrations/llms/llama-index-llms-optimum-intel/Makefile
@@ -0,0 +1,17 @@ | ||
GIT_ROOT ?= $(shell git rev-parse --show-toplevel) | ||
|
||
help: ## Show all Makefile targets. | ||
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}' | ||
|
||
format: ## Run code autoformatters (black). | ||
pre-commit install | ||
git ls-files | xargs pre-commit run black --files | ||
|
||
lint: ## Run linters: pre-commit (black, ruff, codespell) and mypy | ||
pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files | ||
|
||
test: ## Run tests via pytest. | ||
pytest tests | ||
|
||
watch-docs: ## Build and watch documentation. | ||
sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/ |