Logan/fix ibm docs again (#14305)

logan-markewich authored Jun 21, 2024
1 parent a7dd0d7 commit cae9303
Showing 31 changed files with 1,958 additions and 14 deletions.
Original file line number Diff line number Diff line change
@@ -1,5 +1,3 @@
# IBM watsonx.ai

::: llama_index.embeddings.ibm
options:
members:
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/ibm.md
@@ -0,0 +1,4 @@
::: llama_index.llms.ibm
options:
members:
- WatsonxLLM
16 changes: 8 additions & 8 deletions docs/docs/index.md
@@ -58,12 +58,12 @@ LlamaIndex imposes no restriction on how you use LLMs. You can use LLMs as auto-

Some popular use cases for LlamaIndex and context augmentation in general include:

- [Question-Answering](./use_cases/q_and_a/) (Retrieval-Augmented Generation aka RAG)
- [Chatbots](./use_cases/chatbots/)
- [Document Understanding and Data Extraction](./use_cases/extraction/)
- [Autonomous Agents](./use_cases/agents/) that can perform research and take actions
- [Multi-modal applications](./use_cases/multimodal/) that combine text, images, and other data types
- [Fine-tuning](./use_cases/fine_tuning/) models on data to improve performance
- [Question-Answering](./use_cases/q_and_a/index.md) (Retrieval-Augmented Generation aka RAG)
- [Chatbots](./use_cases/chatbots.md)
- [Document Understanding and Data Extraction](./use_cases/extraction.md)
- [Autonomous Agents](./use_cases/agents.md) that can perform research and take actions
- [Multi-modal applications](./use_cases/multimodal.md) that combine text, images, and other data types
- [Fine-tuning](./use_cases/fine_tuning.md) models on data to improve performance

Check out our [use cases](./use_cases/index.md) documentation for more examples and links to tutorials.

@@ -99,7 +99,7 @@ response = query_engine.query("Some question about the data should go here")
print(response)
```

If any part of this trips you up, don't worry! Check out our more comprehensive starter tutorials using [remote APIs like OpenAI](./getting_started/starter_example/) or [any model that runs on your laptop](./getting_started/starter_example_local/).
If any part of this trips you up, don't worry! Check out our more comprehensive starter tutorials using [remote APIs like OpenAI](./getting_started/starter_example.md) or [any model that runs on your laptop](./getting_started/starter_example_local.md).

## LlamaCloud

@@ -130,7 +130,7 @@ Need help? Have a feature suggestion? Join the LlamaIndex community:

### Contributing

We are open-source and always welcome contributions to the project! Check out our [contributing guide](./CONTRIBUTING) for full details on how to extend the core library or add an integration to a third party like an LLM, a vector store, an agent tool and more.
We are open-source and always welcome contributions to the project! Check out our [contributing guide](./CONTRIBUTING.md) for full details on how to extend the core library or add an integration to a third party like an LLM, a vector store, an agent tool and more.

## Related projects

4 changes: 1 addition & 3 deletions docs/mkdocs.yml
@@ -786,7 +786,6 @@ nav:
- ./api_reference/embeddings/huggingface_optimum.md
- ./api_reference/embeddings/huggingface_optimum_intel.md
- ./api_reference/embeddings/ibm.md
- ./api_reference/embeddings/ibm_watsonx.md
- ./api_reference/embeddings/index.md
- ./api_reference/embeddings/instructor.md
- ./api_reference/embeddings/ipex_llm.md
@@ -873,7 +872,6 @@ nav:
- ./api_reference/llms/huggingface.md
- ./api_reference/llms/huggingface_api.md
- ./api_reference/llms/ibm.md
- ./api_reference/llms/ibm_watsonx.md
- ./api_reference/llms/index.md
- ./api_reference/llms/ipex_llm.md
- ./api_reference/llms/konko.md
@@ -1856,7 +1854,6 @@ plugins:
- ../llama-index-integrations/question_gen/llama-index-question-gen-guidance
- ../llama-index-integrations/question_gen/llama-index-question-gen-openai
- ../llama-index-integrations/llms/llama-index-llms-palm
- ../llama-index-integrations/llms/llama-index-llms-watsonx
- ../llama-index-integrations/llms/llama-index-llms-cohere
- ../llama-index-integrations/llms/llama-index-llms-nvidia-triton
- ../llama-index-integrations/llms/llama-index-llms-ai21
@@ -2018,6 +2015,7 @@ plugins:
- ../llama-index-integrations/postprocessor/llama-index-postprocessor-mixedbreadai-rerank
- ../llama-index-integrations/readers/llama-index-readers-pdf-marker
- ../llama-index-integrations/llms/llama-index-llms-you
- ../llama-index-integrations/llms/llama-index-llms-watsonx
- redirects:
redirect_maps:
./api/llama_index.vector_stores.MongoDBAtlasVectorSearch.html: api_reference/storage/vector_store/mongodb.md
@@ -0,0 +1,153 @@
llama_index/_static
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
@@ -0,0 +1,3 @@
poetry_requirements(
name="poetry",
)
@@ -0,0 +1,17 @@
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help: ## Show all Makefile targets.
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format: ## Run code autoformatters (black).
pre-commit install
git ls-files | xargs pre-commit run black --files

lint: ## Run linters: pre-commit (black, ruff, codespell) and mypy
pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

test: ## Run tests via pytest.
pytest tests

watch-docs: ## Build and watch documentation.
sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
@@ -0,0 +1,106 @@
# LlamaIndex Embeddings Integration: IBM

This package provides the integration between LlamaIndex and IBM watsonx.ai through the `ibm-watsonx-ai` SDK.

## Installation

```bash
pip install llama-index-embeddings-ibm
```

## Usage

### Setting up

To use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:

1. **Obtain an API Key:** For more details on how to create and manage an API key, refer to IBM's [documentation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).
2. **Set the API Key as an Environment Variable:** For security reasons, it's recommended not to hard-code your API key directly in your scripts. Instead, set it as an environment variable. You can use the following code to prompt for the API key and store it as an environment variable:

```python
import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key
```

Alternatively, you can set the environment variable in your terminal.

- **Linux/macOS:** Open your terminal and execute the following command:

```bash
export WATSONX_APIKEY='your_ibm_api_key'
```

To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.

- **Windows:** For Command Prompt, use:
```cmd
set WATSONX_APIKEY=your_ibm_api_key
```

### Load the model

You might need to adjust embedding parameters for different tasks.

```python
truncate_input_tokens = 3
```
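For intuition, `truncate_input_tokens` caps how many tokens of each input are sent to the model. A toy sketch of the idea, where a whitespace split stands in for the service's real tokenizer (an assumption purely for illustration):

```python
# Toy illustration of input truncation; real tokenization is model-specific
# and happens on the watsonx.ai side.
truncate_input_tokens = 3


def truncate(text: str, max_tokens: int) -> str:
    # Whitespace split is a stand-in for the service's tokenizer.
    tokens = text.split()
    return " ".join(tokens[:max_tokens])


print(truncate("a longer input text to embed", truncate_input_tokens))
# → "a longer input"
```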

Initialize the `WatsonxEmbeddings` class with previously set parameters.

**Note**:

- To provide context for the API call, you must pass a `project_id` or `space_id`. For more information, see the [documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects).
- Depending on the region of your provisioned service instance, use one of the URLs listed [here](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).

In this example, we’ll use the `project_id` and the Dallas URL.

You need to specify the `model_id` of the model that will be used for inference.

```python
from llama_index.embeddings.ibm import WatsonxEmbeddings

watsonx_embedding = WatsonxEmbeddings(
model_id="ibm/slate-125m-english-rtrvr",
url="https://us-south.ml.cloud.ibm.com",
project_id="PASTE YOUR PROJECT_ID HERE",
truncate_input_tokens=truncate_input_tokens,
)
```

Alternatively, you can use Cloud Pak for Data credentials. For details, see the [documentation](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).

```python
watsonx_embedding = WatsonxEmbeddings(
model_id="ibm/slate-125m-english-rtrvr",
url="PASTE YOUR URL HERE",
username="PASTE YOUR USERNAME HERE",
password="PASTE YOUR PASSWORD HERE",
instance_id="openshift",
version="5.0",
project_id="PASTE YOUR PROJECT_ID HERE",
truncate_input_tokens=truncate_input_tokens,
)
```

### Embed query

```python
query = "Example query."
query_result = watsonx_embedding.get_query_embedding(query)
print(query_result[:5])
```

### Embed list of texts

```python
texts = ["This is the content of one document", "This is another document"]
doc_result = watsonx_embedding.get_text_embedding_batch(texts)
print(doc_result[0][:5])
```
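Downstream, these embeddings are typically used to rank documents by cosine similarity against the query embedding. A minimal, self-contained sketch of that ranking step, using made-up vectors in place of real watsonx.ai output:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Dummy stand-ins for get_query_embedding / get_text_embedding_batch output.
query_vec = [0.1, 0.9, 0.0]
doc_vecs = [[0.1, 0.8, 0.1], [0.9, 0.1, 0.0]]

# Index of the document most similar to the query.
best = max(range(len(doc_vecs)), key=lambda i: cosine_similarity(query_vec, doc_vecs[i]))
print(best)  # → 0
```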
@@ -0,0 +1 @@
python_sources()
@@ -0,0 +1,4 @@
from llama_index.embeddings.ibm.base import WatsonxEmbeddings


__all__ = ["WatsonxEmbeddings"]