Commit 3319a09

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
1 parent 867508a commit 3319a09
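
The fixes in this commit (trailing-whitespace trimming, end-of-file newlines, Markdown table re-alignment, Python import sorting and blank-line normalization) are the signature of a typical pre-commit hook set. As a rough sketch only — the repository's actual `.pre-commit-config.yaml` and pinned `rev` values are not shown in this diff — a configuration producing fixes like these could look like:

```yaml
# Hypothetical .pre-commit-config.yaml; hook revs are illustrative, not the repo's.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace # strips trailing spaces (as in set_env.sh below)
      - id: end-of-file-fixer # enforces exactly one newline at end of file
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort # alphabetizes names in parenthesized imports
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black # normalizes blank lines around top-level defs and classes
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: v3.1.0
    hooks:
      - id: prettier # re-aligns Markdown table columns
        types_or: [markdown]
```

pre-commit.ci runs whichever hooks the repository declares and pushes the resulting fixes as a single commit like this one.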

11 files changed: 60 additions, 62 deletions


ArbPostHearingAssistant/README.md (4 additions, 6 deletions)

````diff
@@ -2,7 +2,6 @@
 
 The Arbitration Post-Hearing Assistant is a GenAI-based module designed to process and summarize post-hearing transcripts or arbitration-related documents. It intelligently extracts key entities and insights to assist arbitrators, legal teams, and case managers in managing case follow-ups efficiently.
 
-
 ## Table of contents
 
 1. [Architecture](#architecture)
@@ -20,15 +19,14 @@ The ArbPostHearingAssistant example is implemented using the component-level mic
 
 The table below lists currently available deployment options. They outline in detail the implementation of this example on selected hardware.
 
-| Category | Deployment Option | Description |
-| ---------------------- | ---------------------- | -------------------------------------------------------------- |
+| Category | Deployment Option | Description |
+| ---------------------- | ---------------------- | ------------------------------------------------------------------------------- |
 | On-premise Deployments | Docker Compose (Xeon) | [ArbPostHearingAssistant deployment on Xeon](./docker_compose/intel/cpu/xeon) |
 | | Docker Compose (Gaudi) | [ArbPostHearingAssistant deployment on Gaudi](./docker_compose/intel/hpu/gaudi) |
 
-
 ## Validated Configurations
 
-| **Deploy Method** | **LLM Engine** | **LLM Model** | **Hardware** |
-| ----------------- | -------------- | ----------------------------------- | ------------ |
+| **Deploy Method** | **LLM Engine** | **LLM Model** | **Hardware** |
+| ----------------- | -------------- | ---------------------------------- | ------------ |
 | Docker Compose | vLLM, TGI | mistralai/Mistral-7B-Instruct-v0.2 | Intel Gaudi |
 | Docker Compose | vLLM, TGI | mistralai/Mistral-7B-Instruct-v0.2 | Intel Xeon |
````
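
The table edits in this file change no content; the formatter only re-pads cells so that every row matches the widest entry in each column (visible in the widened separator rows). The padding rule can be sketched in a few lines of Python (a toy illustration, not the formatter's actual implementation):

```python
def align_markdown_table(rows: list[list[str]]) -> list[str]:
    """Pad each cell so all rows share per-column widths."""
    widths = [max(len(cell) for cell in col) for col in zip(*rows)]
    out = []
    for i, row in enumerate(rows):
        if i == 1:
            # Separator row: regenerate the dashes at the final column widths.
            cells = ["-" * w for w in widths]
        else:
            cells = [cell.ljust(w) for cell, w in zip(row, widths)]
        out.append("| " + " | ".join(cells) + " |")
    return out


# The "Validated Configurations" table from the diff above, unpadded.
table = [
    ["**Deploy Method**", "**LLM Engine**", "**LLM Model**", "**Hardware**"],
    ["---", "---", "---", "---"],
    ["Docker Compose", "vLLM, TGI", "mistralai/Mistral-7B-Instruct-v0.2", "Intel Gaudi"],
    ["Docker Compose", "vLLM, TGI", "mistralai/Mistral-7B-Instruct-v0.2", "Intel Xeon"],
]
for line in align_markdown_table(table):
    print(line)
```

After alignment every line has the same length, which is exactly the property the auto-fix commit restores.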

ArbPostHearingAssistant/README_miscellaneous.md (1 addition, 1 deletion)

````diff
@@ -42,4 +42,4 @@ Some HuggingFace resources, such as certain models, are only accessible if the d
 ```
 
 2. (Docker only) If all microservices work well, check the port ${host_ip}:7777, the port may be allocated by other users, you can modify the `compose.yaml`.
-3. (Docker only) If you get errors like "The container name is in use", change container name in `compose.yaml`.
+3. (Docker only) If you get errors like "The container name is in use", change container name in `compose.yaml`.
````

ArbPostHearingAssistant/arb_post_hearing_assistant.py (4 additions, 3 deletions)

````diff
@@ -12,11 +12,11 @@
 from comps import MegaServiceEndpoint, MicroService, ServiceOrchestrator, ServiceRoleType, ServiceType
 from comps.cores.mega.utils import handle_message
 from comps.cores.proto.api_protocol import (
+    ArbPostHearingAssistantChatCompletionRequest,
     ChatCompletionRequest,
     ChatCompletionResponse,
     ChatCompletionResponseChoice,
     ChatMessage,
-    ArbPostHearingAssistantChatCompletionRequest,
     UsageInfo,
 )
 from fastapi import Request
@@ -48,9 +48,11 @@ def align_inputs(self, inputs, cur_node, runtime_graph, llm_parameters_dict, **k
     del inputs["input"]
     return inputs
 
+
 def align_outputs(self, data, *args, **kwargs):
     return data
 
+
 class OpeaArbPostHearingAssistantService:
     def __init__(self, host="0.0.0.0", port=8000):
         self.host = host
@@ -73,7 +75,7 @@ def add_remote_service(self):
         self.megaservice.add(arb_post_hearing_assistant)
 
     async def handle_request(self, request: Request):
-        """Accept pure text"""
+        """Accept pure text."""
         if "application/json" in request.headers.get("content-type"):
             data = await request.json()
             chunk_size = data.get("chunk_size", -1)
@@ -144,4 +146,3 @@
 arbPostHearingAssistant = OpeaArbPostHearingAssistantService(port=MEGA_SERVICE_PORT)
 arbPostHearingAssistant.add_remote_service()
 arbPostHearingAssistant.start()
-
````
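The only reordering in this file is cosmetic: the import-sorting hook rearranges the names inside the parenthesized `from ... import (...)` block alphabetically, which moves `ArbPostHearingAssistantChatCompletionRequest` to the top. Plain lexicographic sorting reproduces the new order:

```python
# Names from the `comps.cores.proto.api_protocol` import block, in the old order.
names = [
    "ChatCompletionRequest",
    "ChatCompletionResponse",
    "ChatCompletionResponseChoice",
    "ChatMessage",
    "ArbPostHearingAssistantChatCompletionRequest",
    "UsageInfo",
]

# Sorting lexicographically yields the order the hook produced in the diff above.
for name in sorted(names):
    print(name)
```

(For this particular list, case-sensitive and case-insensitive sorting agree, so the exact sorter configuration does not matter.)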
ArbPostHearingAssistant/docker_compose/amd/cpu/epyc/README.md (7 additions, 6 deletions)

````diff
@@ -94,12 +94,12 @@ docker compose up -d
 
 Please refer to the table below to build different microservices from source:
 
-| Microservice | Deployment Guide |
-| ------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
-| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
-| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
-| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
-| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
+| Microservice | Deployment Guide |
+| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
+| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
+| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
+| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
 
 ### Check the Deployment Status
 
@@ -183,3 +183,4 @@ Users could follow previous section to testing vLLM microservice or Arbitration
 ## Conclusion
 
 This guide should enable developer to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.
+```
````

ArbPostHearingAssistant/docker_compose/amd/gpu/rocm/README.md (10 additions, 8 deletions)

````diff
@@ -122,13 +122,13 @@ Use AMD GPU driver utilities to determine the correct `cardN` and `renderN` IDs
 
 Please refer to the table below to build different microservices from source:
 
-| Microservice | Deployment Guide |
-| ------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
-| TGI | [TGI project](https://github.com/huggingface/text-generation-inference.git) |
-| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
-| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
-| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
-| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
+| Microservice | Deployment Guide |
+| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| TGI | [TGI project](https://github.com/huggingface/text-generation-inference.git) |
+| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
+| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
+| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
+| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
 
 ### Check the Deployment Status
 
@@ -149,6 +149,7 @@ CONTAINER ID IMAGE C
 32afc12de996 opea/llm-arb-post-hearing-assistant:latest "python comps/arb_po…" 2 hours ago Up 2 hours 0.0.0.0:9000->9000/tcp, [::]:9000->9000/tcp arb-post-hearing-assistant-xeon-llm-server
 c8e539360aff ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu "text-generation-lau…" 2 hours ago Up 2 hours (healthy) 0.0.0.0:8008->80/tcp, [::]:8008->80/tcp arb-post-hearing-assistant-xeon-tgi-server
 ```
+
 ### Test the Pipeline
 
 Once the Arbitration Post-Hearing Assistant services are running, test the pipeline using the following command:
@@ -212,4 +213,5 @@ Users could follow previous section to testing vLLM microservice or Arbitration
 
 ## Conclusion
 
-This guide should enable developer to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.
+This guide should enable developer to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.
+```
````

ArbPostHearingAssistant/docker_compose/intel/cpu/xeon/README.md (7 additions, 7 deletions)

````diff
@@ -56,12 +56,12 @@ docker compose up -d
 
 Please refer to the table below to build different microservices from source:
 
-| Microservice | Deployment Guide |
-| ------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
-| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
-| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
-| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
-| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
+| Microservice | Deployment Guide |
+| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
+| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
+| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
+| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
 
 ### Check the Deployment Status
 
@@ -103,7 +103,6 @@ docker compose -f compose.yaml down
 
 All the Arbitration Post-Hearing Assistant containers will be stopped and then removed on completion of the "down" command.
 
-
 ## arb-post-hearing-assistant Docker Compose Files
 
 In the context of deploying a arb-post-hearing-assistant pipeline on an Intel® Xeon® platform, we can pick and choose different large language model serving frameworks. The table below outlines the various configurations that are available as part of the application.
@@ -169,3 +168,4 @@ Users could follow previous section to testing vLLM microservice or Arbitration
 ## Conclusion
 
 This guide should enable developer to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.
+```
````

ArbPostHearingAssistant/docker_compose/intel/hpu/gaudi/README.md (7 additions, 6 deletions)

````diff
@@ -58,12 +58,12 @@ docker compose up -d
 
 Please refer to the table below to build different microservices from source:
 
-| Microservice | Deployment Guide |
-| ------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
-| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
-| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
-| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
-| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
+| Microservice | Deployment Guide |
+| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| vLLM | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
+| llm-arb-post-hearing-assistant | [LLM-ArbPostHearingAssistant build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/arb_post_hearing_assistant/src/#12-build-docker-image) |
+| MegaService | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
+| UI | [Basic UI build guide](../../../../README_miscellaneous.md#build-ui-docker-image) |
 
 ### Check the Deployment Status
 
@@ -147,3 +147,4 @@ Users could follow previous section to testing vLLM microservice or Arbitration
 ## Conclusion
 
 This guide should enable developer to deploy the default configuration or any of the other compose yaml files for different configurations. It also highlights the configurable parameters that can be set before deployment.
+```
````

ArbPostHearingAssistant/docker_compose/intel/set_env.sh (1 addition, 1 deletion)

````diff
@@ -32,7 +32,7 @@ export MEGA_SERVICE_HOST_IP=${host_ip} #Example: MEGA_SERVICE_HOST_IP="localhost
 export LLM_SERVICE_HOST_IP=${host_ip} #Example: LLM_SERVICE_HOST_IP="localhost"
 
 # uncomment below during development
-# export VLLM_SKIP_WARMUP=true 
+# export VLLM_SKIP_WARMUP=true
 
 export BACKEND_SERVICE_PORT=8888
 export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:${BACKEND_SERVICE_PORT}/v1/arb-post-hearing"
````
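
The only change in this file is the removal of trailing whitespace after `true`, which is what the `trailing-whitespace` hook does. A quick way to spot such lines yourself (a generic helper, not part of this repository):

```python
def trailing_whitespace_lines(text: str) -> list[int]:
    """Return the 1-based numbers of lines that end in spaces or tabs."""
    return [
        i
        for i, line in enumerate(text.splitlines(), start=1)
        if line != line.rstrip()
    ]


# Two lines from set_env.sh; the second one carries a trailing space.
script = "# uncomment below during development\n# export VLLM_SKIP_WARMUP=true \n"
print(trailing_whitespace_lines(script))  # -> [2]
```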

ArbPostHearingAssistant/tests/README.md (1 addition, 1 deletion)

````diff
@@ -12,4 +12,4 @@ On Intel Xeon with TGI:
 
 ```bash
 bash test_compose_tgi_on_xeon.sh
-```
+```
````

ArbPostHearingAssistant/ui/gradio/README.md (8 additions, 11 deletions)

````diff
@@ -1,4 +1,3 @@
-
 # Arbitration Post-Hearing Assistant
 
 The Arbitration Post-Hearing Assistant is a GenAI-based module designed to process and summarize post-hearing transcripts or arbitration-related documents. It intelligently extracts key entities and insights to assist arbitrators, legal teams, and case managers in managing case follow-ups efficiently.
@@ -14,7 +13,7 @@ Identifies and extracts essential details such as:
 - Hearing date and time
 - Next hearing schedule and purpose
 - Hearing outcomes and reasons
-
+
 ## Docker
 
 ### Build UI Docker Image
@@ -57,7 +56,6 @@ python arb_post_hearing_assistant_ui_gradio.py
 
 This command starts the frontend application using Python.
 
-
 ## 📸 Project Screenshots
 
 ![project-screenshot](../../assets/img/arbritation_post_hearing_ui_gradio_text.png)
@@ -68,17 +66,16 @@ Here are some of the project's features:
 
 ## Features
 
-- **Automated Case Extraction:** Extracts key arbitration details including case number, claimant/respondent, arbitrator, hearing dates, next hearing schedule, and outcome.
-- **Hearing Summarization:** Generates concise summaries of post-hearing proceedings.
-- **LLM-Powered Processing:** Integrates with vLLM or TGI backends for natural language understanding.
-- **Structured Output:** Returns all extracted information in JSON format for easy storage, display, or integration with case management systems.
+- **Automated Case Extraction:** Extracts key arbitration details including case number, claimant/respondent, arbitrator, hearing dates, next hearing schedule, and outcome.
+- **Hearing Summarization:** Generates concise summaries of post-hearing proceedings.
+- **LLM-Powered Processing:** Integrates with vLLM or TGI backends for natural language understanding.
+- **Structured Output:** Returns all extracted information in JSON format for easy storage, display, or integration with case management systems.
 - **Easy Deployment:** Containerized microservice, lightweight and reusable across legal workflows.
-- **Typical Flow:**
-  1. Upload or stream post-hearing transcript.
-  2. LLM backend analyzes text and extracts entities.
+- **Typical Flow:**
+  1. Upload or stream post-hearing transcript.
+  2. LLM backend analyzes text and extracts entities.
   3. Returns structured JSON with case details and summary.
 
-
 ## Additional Information
 
 ### Prerequisites
````
