doc: fix missing references to README.md (opea-project#860)
Signed-off-by: David B. Kinder <[email protected]>
dbkinder authored Sep 24, 2024
1 parent 5ed7767 commit 3e796ba
Showing 21 changed files with 32 additions and 32 deletions.
2 changes: 1 addition & 1 deletion AgentQnA/README.md
@@ -103,4 +103,4 @@ curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: app

## How to register your own tools with the agent

- You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain#5-customize-agent-strategy).
+ You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain/README.md#5-customize-agent-strategy).
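For reference, a tool definition of the shape used in this example is a yaml entry that points at a python callable. The sketch below is hypothetical: the tool name, file, and function are placeholders, and the exact schema should be verified against the linked "Provide your own tools" instructions.

```bash
# Hypothetical tool registration -- tool name, python file, and function are
# placeholders; verify the schema against the linked instructions.
mkdir -p tools
cat > tools/my_tools.yaml <<'EOF'
get_ticket_status:
  description: Look up the status of a support ticket by its id.
  callable_api: my_tools.py:get_ticket_status
  args_schema:
    ticket_id:
      type: str
      description: the ticket identifier
  return_output: ticket_status
EOF
```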
2 changes: 1 addition & 1 deletion AudioQnA/benchmark/accuracy/README.md
@@ -14,7 +14,7 @@ We evaluate the WER (Word Error Rate) metric of the ASR microservice.

### Launch ASR microservice

- Launch the ASR microservice with the following commands. For more details, please refer to the [doc](https://github.com/opea-project/GenAIComps/tree/main/comps/asr).
+ Launch the ASR microservice with the following commands. For more details, please refer to the [doc](https://github.com/opea-project/GenAIComps/tree/main/comps/asr/whisper/README.md).

```bash
git clone https://github.com/opea-project/GenAIComps
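Once the microservice is up, a quick smoke test can be run against it. The request below is only a sketch: the port (9099), route (/v1/asr), and payload field are assumptions based on the whisper component's defaults, so confirm them against the linked README; `sample.wav` stands in for any local audio file.

```bash
# Smoke-test sketch -- port 9099, route /v1/asr, and the "byte_str" field are
# assumptions; check the whisper README for the actual contract.
# sample.wav is a placeholder for any local audio file.
curl http://localhost:9099/v1/asr \
  -X POST \
  -H "Content-Type: application/json" \
  -d "{\"byte_str\": \"$(base64 -w 0 sample.wav)\"}"
```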
2 changes: 1 addition & 1 deletion AudioQnA/kubernetes/intel/README_gmc.md
@@ -4,7 +4,7 @@ This document outlines the deployment process for an AudioQnA application utilizi

The AudioQnA Service leverages a Kubernetes operator called genai-microservices-connector (GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file, in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running elsewhere in a public or private cloud.

- Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector). We will soon publish images to Docker Hub, at which point no builds will be required, simplifying install.
+ Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, simplifying install.


The AudioQnA application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks whether the microservices listed in the CR yaml file are running; if not, it starts them and then proceeds to connect them. When the AudioQnA pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use the "kubectl get pods" command, you will see all the component microservices, in particular `asr`, `tts`, and `llm`.
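As a sketch of that flow (the namespace and CR file name below are illustrative; use the AudioQnA CR yaml shipped in this directory):

```bash
# Illustrative GMC deployment -- namespace and CR file name are placeholders.
kubectl create ns audioqa
kubectl apply -f audioQnA_xeon.yaml -n audioqa
# watch the asr, tts, and llm component microservices come up
kubectl get pods -n audioqa
```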
4 changes: 2 additions & 2 deletions ChatQnA/README.md
@@ -240,7 +240,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions o

Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

- Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.
+ Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.
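A representative Helm flow, assuming you install from a local clone of GenAIInfra, is sketched below; the value name for the HuggingFace token is an assumption, so follow the linked chart README for the canonical steps.

```bash
# Representative install -- chart location and value names are assumptions;
# see the linked chart README for the canonical procedure.
git clone https://github.com/opea-project/GenAIInfra.git
cd GenAIInfra/helm-charts
helm dependency update chatqna
helm install chatqna ./chatqna --set global.HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN}
```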

### Deploy ChatQnA on AI PC

@@ -306,7 +306,7 @@ Two ways of consuming ChatQnA Service:

## Troubleshooting

- 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+ 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

```bash
http_proxy="" curl ${host_ip}:6006/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json'
2 changes: 1 addition & 1 deletion ChatQnA/benchmark/performance/README.md
@@ -90,7 +90,7 @@ find . -name '*.yaml' -type f -exec sed -i "s#\$(RERANK_MODEL_ID)#${RERANK_MODEL

### Benchmark tool preparation

- The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark) to run the performance test. We need to set up the benchmark tool on the Kubernetes master node (k8s-master).
+ The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark/README.md) to run the performance test. We need to set up the benchmark tool on the Kubernetes master node (k8s-master).

```bash
# on k8s-master node
4 changes: 2 additions & 2 deletions ChatQnA/kubernetes/intel/README_gmc.md
@@ -2,9 +2,9 @@

This document outlines the deployment process for a ChatQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline components on Intel Xeon server and Gaudi machines.

- The ChatQnA Service leverages a Kubernetes operator called genai-microservices-connector(GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere.
+ The ChatQnA Service leverages a Kubernetes operator called genai-microservices-connector (GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file in addition to allowing the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running in public or private cloud elsewhere.

- Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector). We will soon publish images to Docker Hub, at which point no builds will be required, simplifying install.
+ Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, simplifying install.


The ChatQnA application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks whether the microservices listed in the CR yaml file are running; if not, it starts them and then proceeds to connect them. When the ChatQnA RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use the "kubectl get pods" command, you will see all the component microservices, in particular `embedding`, `retriever`, `rerank`, and `llm`.
4 changes: 2 additions & 2 deletions CodeGen/README.md
@@ -106,7 +106,7 @@ Refer to the [Kubernetes Guide](./kubernetes/intel/README.md) for instructions o

Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

- Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.
+ Refer to the [CodeGen helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codegen/README.md) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.

## Consume CodeGen Service

@@ -128,7 +128,7 @@ Two ways of consuming CodeGen Service:

## Troubleshooting

- 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+ 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

```bash
http_proxy=""
2 changes: 1 addition & 1 deletion CodeGen/benchmark/accuracy/README.md
@@ -8,7 +8,7 @@ We evaluate accuracy by [bigcode-evaluation-harness](https://github.com/bigcode-

### Launch CodeGen microservice

- Please refer to [CodeGen Examples](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen) and follow the guide to deploy the CodeGen megaservice.
+ Please refer to [CodeGen Examples](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/README.md) and follow the guide to deploy the CodeGen megaservice.

Use the `curl` command to test the codegen service and ensure that it has started properly

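A representative request is sketched below; the megaservice port (7778) and the payload shape are assumptions, so verify them against the CodeGen README.

```bash
# Representative request -- port 7778 and the "messages" payload field are
# assumptions; verify against the CodeGen README.
curl http://${host_ip}:7778/v1/codegen \
  -H "Content-Type: application/json" \
  -d '{"messages": "Implement a function that computes the nth Fibonacci number."}'
```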
2 changes: 1 addition & 1 deletion CodeGen/kubernetes/intel/README_gmc.md
@@ -2,7 +2,7 @@

This document outlines the deployment process for a Code Generation (CodeGen) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines.

- Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.
+ Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

If you have only Intel Xeon machines, you can use the codegen_xeon.yaml file; if you have a Gaudi cluster, use codegen_gaudi.yaml.
In the example below, we illustrate deployment on Xeon.
4 changes: 2 additions & 2 deletions CodeTrans/README.md
@@ -99,7 +99,7 @@ Refer to the [Code Translation Kubernetes Guide](./kubernetes/intel/README.md)

Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

- Refer to the [CodeTrans helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans) for instructions on deploying CodeTrans into Kubernetes on Xeon & Gaudi.
+ Refer to the [CodeTrans helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/codetrans/README.md) for instructions on deploying CodeTrans into Kubernetes on Xeon & Gaudi.

## Consume Code Translation Service

@@ -121,7 +121,7 @@ By default, the UI runs on port 5173 internally.

## Troubleshooting

- 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+ 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

```bash
http_proxy=""
2 changes: 1 addition & 1 deletion CodeTrans/kubernetes/intel/README_gmc.md
@@ -2,7 +2,7 @@

This document outlines the deployment process for a Code Translation (CodeTrans) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines.

- Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.
+ Please install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

If you have only Intel Xeon machines, you can use the codetrans_xeon.yaml file; if you have a Gaudi cluster, use codetrans_gaudi.yaml.
In the example below, we illustrate deployment on Xeon.
4 changes: 2 additions & 2 deletions DocIndexRetriever/README.md
@@ -4,5 +4,5 @@ DocRetriever is the most widely adopted use case for leveraging the different m

## We provide DocRetriever with different deployment infra

- - [docker xeon version](docker_compose/intel/cpu/xeon/) => minimum endpoints, easy to set up
- - [docker gaudi version](docker_compose/intel/hpu/gaudi/) => with extra tei_gaudi endpoint, faster
+ - [docker xeon version](docker_compose/intel/cpu/xeon/README.md) => minimum endpoints, easy to set up
+ - [docker gaudi version](docker_compose/intel/hpu/gaudi/README.md) => with extra tei_gaudi endpoint, faster
6 changes: 3 additions & 3 deletions DocSum/README.md
@@ -21,7 +21,7 @@ Currently we support two ways of deploying Document Summarization services with
docker pull opea/docsum:latest
```

- 2. Start services using the docker images `built from source`: [Guide](./docker_compose)
+ 2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose)
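For either option, the typical compose flow looks like the sketch below; the Xeon directory is used as an illustrative target, and the environment variables required by the guide must be exported first.

```bash
# Typical compose flow -- the Xeon path is one example target; export the
# environment variables required by the guide before bringing services up.
cd GenAIExamples/DocSum/docker_compose/intel/cpu/xeon
docker compose up -d
```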

### Required Models

@@ -98,7 +98,7 @@ Refer to [Kubernetes deployment](./kubernetes/intel/README.md)

Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

- Refer to the [DocSum helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum) for instructions on deploying DocSum into Kubernetes on Xeon & Gaudi.
+ Refer to the [DocSum helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/docsum/README.md) for instructions on deploying DocSum into Kubernetes on Xeon & Gaudi.

### Workflow of the deployed Document Summarization Service

@@ -143,7 +143,7 @@ Two ways of consuming Document Summarization Service:

## Troubleshooting

- 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+ 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/DocSum/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

```bash
http_proxy=""
2 changes: 1 addition & 1 deletion DocSum/kubernetes/intel/README_gmc.md
@@ -3,7 +3,7 @@
This document outlines the deployment process for a Document Summary (DocSum) application that utilizes the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice components on Intel Xeon servers and Gaudi machines.
The DocSum Service leverages a Kubernetes operator called genai-microservices-connector (GMC). GMC supports connecting microservices to create pipelines based on the specification in the pipeline yaml file; in addition, it allows the user to dynamically control which model is used in a service such as an LLM or embedder. The underlying pipeline language also supports using external services that may be running elsewhere in public or private clouds.

- Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.
+ Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

The DocSum application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks whether the microservices listed in the CR yaml file are running; if not, it starts them and then proceeds to connect them. When the DocSum RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use the "kubectl get pods" command, you will see all the component microservices, in particular embedding, retriever, rerank, and llm.

2 changes: 1 addition & 1 deletion FaqGen/benchmark/accuracy/README.md
@@ -16,7 +16,7 @@ python get_context.py

### Launch FaQGen microservice

- Please refer to [FaQGen microservice](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/faq-generation/tgi) to set up a microservice endpoint.
+ Please refer to [FaQGen microservice](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/faq-generation/tgi/langchain/README.md) to set up a microservice endpoint.

```
export FAQ_ENDPOINT="http://${your_ip}:9000/v1/faqgen"
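With the endpoint exported, a request can be sent to it; the payload shape below is an assumption, so check the FaQGen microservice README for the exact schema.

```bash
# Hypothetical request -- the "query" field is an assumption; confirm the
# schema in the FaQGen microservice README.
curl ${FAQ_ENDPOINT} \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "Text from which to generate FAQs."}'
```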
4 changes: 2 additions & 2 deletions LEGAL_INFORMATION.md
@@ -9,9 +9,9 @@ Generative AI Examples is licensed under [Apache License Version 2.0](http://www
This software includes components that have separate copyright notices and licensing terms.
Your use of the source code for these components is subject to the terms and conditions of the following licenses.

- - [Third Party Programs](/third-party-programs.txt)
+ - [Third Party Programs](third-party-programs.txt)

- See the accompanying [license](/LICENSE) file for full license text and copyright notices.
+ See the accompanying [license](LICENSE) file for full license text and copyright notices.

## Citation

6 changes: 3 additions & 3 deletions README.md
@@ -30,8 +30,8 @@ Deployments are based on released docker images by default; check [docker image l
#### Prerequisite

- For Docker Compose based deployment, you should have docker compose installed. Refer to [docker compose install](https://docs.docker.com/compose/install/).
- - For Kubernetes based deployment, we provide 3 ways from the easiest manifests to powerful [GMC](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector) based deployment.
-   - You should have a kubernetes cluster ready for use. If not, you can refer to [k8s install](https://github.com/opea-project/docs/tree/main/guide/installation/k8s_install) to deploy one.
+ - For Kubernetes based deployment, we provide 3 ways from the easiest manifests to powerful [GMC](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector/README.md) based deployment.
+   - You should have a kubernetes cluster ready for use. If not, you can refer to [k8s install](https://github.com/opea-project/docs/tree/main/guide/installation/k8s_install/README.md) to deploy one.
- (Optional) You should have GMC installed to your kubernetes cluster if you want to try with GMC. Refer to [GMC install](https://github.com/opea-project/docs/blob/main/guide/installation/gmc_install/gmc_install.md) for more information.
- (Optional) You should have Helm (version >= 3.15) installed if you want to deploy with Helm Charts. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

@@ -68,4 +68,4 @@ Thank you for being a part of this journey. We can't wait to see what we can ach

- [Code of Conduct](https://github.com/opea-project/docs/tree/main/community/CODE_OF_CONDUCT.md)
- [Security Policy](https://github.com/opea-project/docs/tree/main/community/SECURITY.md)
- - [Legal Information](/LEGAL_INFORMATION.md)
+ - [Legal Information](LEGAL_INFORMATION.md)
4 changes: 2 additions & 2 deletions SearchQnA/README.md
@@ -32,7 +32,7 @@ Currently we support two ways of deploying SearchQnA services with docker compos
docker pull opea/searchqna:latest
```

- 2. Start services using the docker images `built from source`: [Guide](./docker_compose)
+ 2. Start services using the docker images `built from source`: [Guide](https://github.com/opea-project/GenAIExamples/tree/main/SearchQnA/docker_compose/)

### Setup Environment Variable

@@ -110,7 +110,7 @@ Two ways of consuming SearchQnA Service:

## Troubleshooting

- 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#validate-microservices) first. A simple example:
+ 1. If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon/README.md#validate-microservices) first. A simple example:

```bash
http_proxy=""