Commit
Showing 27 changed files with 2,165 additions and 0 deletions.
@@ -0,0 +1,195 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "introduction",
   "metadata": {},
   "source": [
    "# Client of Baidu Intelligent Cloud's Qianfan LLM Platform\n",
    "\n",
    "Baidu Intelligent Cloud's Qianfan LLM Platform offers API services for all Baidu LLMs, such as ERNIE-3.5-8K and ERNIE-4.0-8K. It also provides a small number of open-source LLMs like Llama-2-70b-chat.\n",
    "\n",
    "Before using the chat client, you need to activate the LLM service on the Qianfan LLM Platform console's [online service](https://console.bce.baidu.com/qianfan/ais/console/onlineService) page. Then generate an Access Key and a Secret Key on the [Security Authentication](https://console.bce.baidu.com/iam/#/iam/accesslist) page of the console."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "installation",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "Install the necessary package:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "installation-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-llms-qianfan"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "initialization",
   "metadata": {},
   "source": [
    "## Initialization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "initialization-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.qianfan import Qianfan\n",
    "import asyncio\n",
    "\n",
    "access_key = \"XXX\"\n",
    "secret_key = \"XXX\"\n",
    "model_name = \"ERNIE-Speed-8K\"\n",
    "endpoint_url = \"https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie_speed\"\n",
    "context_window = 8192\n",
    "llm = Qianfan(access_key, secret_key, model_name, endpoint_url, context_window)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "sync-chat",
   "metadata": {},
   "source": [
    "## Synchronous Chat\n",
    "\n",
    "Generate a chat response synchronously using the `chat` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sync-chat-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.base.llms.types import ChatMessage\n",
    "\n",
    "messages = [\n",
    "    ChatMessage(role=\"user\", content=\"Tell me a joke.\"),\n",
    "]\n",
    "chat_response = llm.chat(messages)\n",
    "print(chat_response.message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "sync-stream-chat",
   "metadata": {},
   "source": [
    "## Synchronous Stream Chat\n",
    "\n",
    "Generate a streaming chat response synchronously using the `stream_chat` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sync-stream-chat-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    ChatMessage(role=\"system\", content=\"You are a helpful assistant.\"),\n",
    "    ChatMessage(role=\"user\", content=\"Tell me a story.\"),\n",
    "]\n",
    "content = \"\"\n",
    "for chat_response in llm.stream_chat(messages):\n",
    "    content += chat_response.delta\n",
    "    print(chat_response.delta, end=\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "async-chat",
   "metadata": {},
   "source": [
    "## Asynchronous Chat\n",
    "\n",
    "Generate a chat response asynchronously using the `achat` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "async-chat-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "async def async_chat():\n",
    "    messages = [\n",
    "        ChatMessage(role=\"user\", content=\"Tell me an async joke.\"),\n",
    "    ]\n",
    "    chat_response = await llm.achat(messages)\n",
    "    print(chat_response.message.content)\n",
    "\n",
    "\n",
    "asyncio.run(async_chat())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "async-stream-chat",
   "metadata": {},
   "source": [
    "## Asynchronous Stream Chat\n",
    "\n",
    "Generate a streaming chat response asynchronously using the `astream_chat` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "async-stream-chat-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "async def async_stream_chat():\n",
    "    messages = [\n",
    "        ChatMessage(role=\"system\", content=\"You are a helpful assistant.\"),\n",
    "        ChatMessage(role=\"user\", content=\"Tell me an async story.\"),\n",
    "    ]\n",
    "    content = \"\"\n",
    "    response = await llm.astream_chat(messages)\n",
    "    async for chat_response in response:\n",
    "        content += chat_response.delta\n",
    "        print(chat_response.delta, end=\"\")\n",
    "\n",
    "\n",
    "asyncio.run(async_stream_chat())"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
153 changes: 153 additions & 0 deletions
llama-index-integrations/llms/llama-index-llms-qianfan/.gitignore
@@ -0,0 +1,153 @@
llama_index/_static
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
llama-index-integrations/llms/llama-index-llms-qianfan/BUILD
@@ -0,0 +1,3 @@
poetry_requirements(
    name="poetry",
)
17 changes: 17 additions & 0 deletions
llama-index-integrations/llms/llama-index-llms-qianfan/Makefile
@@ -0,0 +1,17 @@
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help:	## Show all Makefile targets.
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format:	## Run code autoformatters (black).
	pre-commit install
	git ls-files | xargs pre-commit run black --files

lint:	## Run linters: pre-commit (black, ruff, codespell) and mypy
	pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

test:	## Run tests via pytest.
	pytest tests

watch-docs:	## Build and watch documentation.
	sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
26 changes: 26 additions & 0 deletions
llama-index-integrations/llms/llama-index-llms-qianfan/README.md
@@ -0,0 +1,26 @@
# LlamaIndex Llms Integration: Baidu Qianfan

Baidu Intelligent Cloud's Qianfan LLM Platform offers API services for all Baidu LLMs, such as ERNIE-3.5-8K and ERNIE-4.0-8K. It also provides a small number of open-source LLMs like Llama-2-70b-chat.

Before using the chat client, you need to activate the LLM service on the Qianfan LLM Platform console's [online service](https://console.bce.baidu.com/qianfan/ais/console/onlineService) page. Then generate an Access Key and a Secret Key on the [Security Authentication](https://console.bce.baidu.com/iam/#/iam/accesslist) page of the console.

## Installation

Install the necessary package:

```
pip install llama-index-llms-qianfan
```

## Initialization

```python
from llama_index.llms.qianfan import Qianfan

access_key = "XXX"
secret_key = "XXX"
model_name = "ERNIE-Speed-8K"
endpoint_url = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/ernie_speed"
context_window = 8192
llm = Qianfan(access_key, secret_key, model_name, endpoint_url, context_window)
```
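
## Usage

A minimal chat sketch, mirroring the example notebook added alongside this README; it assumes the `llm` client created in the Initialization section above.

```python
from llama_index.core.base.llms.types import ChatMessage

# Synchronous chat: send the message list and print the full reply.
messages = [
    ChatMessage(role="user", content="Tell me a joke."),
]
chat_response = llm.chat(messages)
print(chat_response.message.content)

# Synchronous streaming: print incremental deltas as they arrive.
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="")
```

Asynchronous variants (`achat`, `astream_chat`) are also available; see the example notebook for runnable `asyncio` versions.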