## ArbPostHearingAssistant/README.md (+4 −6)
@@ -2,7 +2,6 @@
 The Arbitration Post-Hearing Assistant is a GenAI-based module designed to process and summarize post-hearing transcripts or arbitration-related documents. It intelligently extracts key entities and insights to assist arbitrators, legal teams, and case managers in managing case follow-ups efficiently.
-
 ## Table of contents

 1. [Architecture](#architecture)
@@ -20,15 +19,14 @@ The ArbPostHearingAssistant example is implemented using the component-level mic
 The table below lists the currently available deployment options. They describe in detail how this example is implemented on the selected hardware.
## ArbPostHearingAssistant/README_miscellaneous.md (+1 −1)
@@ -42,4 +42,4 @@ Some HuggingFace resources, such as certain models, are only accessible if the d
 ```
 2. (Docker only) If all microservices work well, check port ${host_ip}:7777; the port may already be allocated by another user, and you can modify it in `compose.yaml`.
-3. (Docker only) If you get errors like "The container name is in use", change the container name in `compose.yaml`.
+
+3. (Docker only) If you get errors like "The container name is in use", change the container name in `compose.yaml`.
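Before bringing the stack up, it can help to confirm that the published port is actually free. The sketch below is illustrative, not part of the project; the port number 7777 comes from the troubleshooting note above, and the helper name is ours:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP connection succeeds,
        # i.e. when some process is already bound to the port.
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    if port_in_use(7777):
        print("Port 7777 is taken; pick another port in compose.yaml")
    else:
        print("Port 7777 is free")
```

If the port is taken, change the host-side mapping in `compose.yaml` and redeploy.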
@@ -183,3 +183,4 @@ Users could follow the previous section to test the vLLM microservice or Arbitration
 ## Conclusion

 This guide should enable developers to deploy the default configuration or any of the other Compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.
32afc12de996   opea/llm-arb-post-hearing-assistant:latest                      "python comps/arb_po…"   2 hours ago   Up 2 hours             0.0.0.0:9000->9000/tcp, [::]:9000->9000/tcp   arb-post-hearing-assistant-xeon-llm-server
c8e539360aff   ghcr.io/huggingface/text-generation-inference:2.4.0-intel-cpu   "text-generation-lau…"   2 hours ago   Up 2 hours (healthy)   0.0.0.0:8008->80/tcp, [::]:8008->80/tcp       arb-post-hearing-assistant-xeon-tgi-server
```

### Test the Pipeline

Once the Arbitration Post-Hearing Assistant services are running, test the pipeline using the following command:
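The exact request is not shown in this excerpt. As a non-authoritative sketch, a test client might look like the following; the route `/v1/arb_post_hearing_assistant`, the port 9000 (from the container listing above), and the `messages` payload key are all assumptions to verify against the deployed service:

```python
import json
import urllib.request

HOST_IP = "localhost"  # substitute ${host_ip} from your environment
# Assumed route; check the service's actual API before relying on it.
ENDPOINT = f"http://{HOST_IP}:9000/v1/arb_post_hearing_assistant"

def build_request(transcript: str) -> urllib.request.Request:
    """Package a post-hearing transcript as a JSON POST request."""
    payload = json.dumps({"messages": transcript}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )

def summarize(transcript: str, timeout: float = 60.0) -> str:
    """Send the transcript to a running deployment and return the raw reply."""
    with urllib.request.urlopen(build_request(transcript), timeout=timeout) as resp:
        return resp.read().decode("utf-8")

# Example (requires the services to be up):
#   print(summarize("Hearing adjourned; next session on 12 March."))
```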
@@ -212,4 +213,5 @@ Users could follow the previous section to test the vLLM microservice or Arbitration
 ## Conclusion

-This guide should enable developers to deploy the default configuration or any of the other Compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.
+
+This guide should enable developers to deploy the default configuration or any of the other Compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.
In the context of deploying an arb-post-hearing-assistant pipeline on an Intel® Xeon® platform, we can choose among different large language model serving frameworks. The table below outlines the configurations available as part of the application.
@@ -169,3 +168,4 @@ Users could follow the previous section to test the vLLM microservice or Arbitration
 ## Conclusion

 This guide should enable developers to deploy the default configuration or any of the other Compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.
@@ -147,3 +147,4 @@ Users could follow the previous section to test the vLLM microservice or Arbitration
 ## Conclusion

 This guide should enable developers to deploy the default configuration or any of the other Compose YAML files for different configurations. It also highlights the configurable parameters that can be set before deployment.
## ArbPostHearingAssistant/ui/gradio/README.md (+8 −11)
@@ -1,4 +1,3 @@
-
 # Arbitration Post-Hearing Assistant

 The Arbitration Post-Hearing Assistant is a GenAI-based module designed to process and summarize post-hearing transcripts or arbitration-related documents. It intelligently extracts key entities and insights to assist arbitrators, legal teams, and case managers in managing case follow-ups efficiently.
@@ -14,7 +13,7 @@ Identifies and extracts essential details such as:
@@ -68,17 +66,16 @@ Here are some of the project's features:
 ## Features

 - **Automated Case Extraction:** Extracts key arbitration details including case number, claimant/respondent, arbitrator, hearing dates, next hearing schedule, and outcome.
 - **Hearing Summarization:** Generates concise summaries of post-hearing proceedings.
 - **LLM-Powered Processing:** Integrates with vLLM or TGI backends for natural language understanding.
 - **Structured Output:** Returns all extracted information in JSON format for easy storage, display, or integration with case management systems.
 - **Easy Deployment:** Containerized microservice, lightweight and reusable across legal workflows.
 - **Typical Flow:**
   1. Upload or stream a post-hearing transcript.
   2. The LLM backend analyzes the text and extracts entities.
   3. Returns structured JSON with case details and a summary.
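On the consuming side, the structured JSON output can be mapped into a typed record. The field names below are illustrative, inferred from the extraction targets listed under "Automated Case Extraction"; the service's actual keys may differ, so check a real response first:

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaseRecord:
    """Typed view of the assistant's JSON reply (field names assumed)."""
    case_number: str
    claimant: str
    respondent: str
    arbitrator: str
    next_hearing: Optional[str]  # may be absent if no session is scheduled
    outcome: Optional[str]       # may be absent for pending cases
    summary: str

def parse_case(raw: str) -> CaseRecord:
    """Turn the raw JSON string into a CaseRecord."""
    data = json.loads(raw)
    return CaseRecord(
        case_number=data["case_number"],
        claimant=data["claimant"],
        respondent=data["respondent"],
        arbitrator=data["arbitrator"],
        next_hearing=data.get("next_hearing"),
        outcome=data.get("outcome"),
        summary=data["summary"],
    )
```

A record like this can then be stored or handed to a case management system directly.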