@@ -3,8 +3,7 @@ title: How to use GraphRAG in the Arango Data Platform web interface
 menuTitle: Web Interface
 weight: 20
 description: >-
-  Learn how to create, configure, and run a full GraphRAG workflow in four steps
-  using the Platform web interface
+  Learn how to create, configure, and run a full GraphRAG workflow in just a few steps
 ---
 {{< tip >}}
 The Arango Data Platform & AI Suite are available as a pre-release. To get
@@ -17,9 +16,10 @@ the Arango team.
 The entire process is organized into sequential steps within a **Project**:
 
 1. Creating the importer service
-2. Uploading your file and exploring the generated Knowledge Graph
-3. Creating the retriever service
-4. Chatting with your Knowledge Graph
+2. Adding data sources
+3. Exploring the generated Knowledge Graph
+4. Creating the retriever service
+5. Chatting with your Knowledge Graph
 
 ## Create a GraphRAG project
 
@@ -33,12 +33,22 @@ To create a new GraphRAG project using the Arango Data Platform web interface, f
    a description for your project.
 5. Click the **Create project** button to finalize the creation.
 
+## Project Settings
+
+The **Project Settings** dialog allows you to configure and manage your
+Importer and Retriever services.
+
+You can open the **Project Settings** dialog in two ways:
+- In the **Data Sources** section, click **Add data source** and then click
+  the **Open project settings** button.
+- In the **Graph** section, click the gear icon.
+
 ## Configure the Importer service
 
-Configure a service to import, parse, and retrieve all the needed data from a
+Configure a service to import, parse, and extract all the needed data from a
 file. This service uses the LLM API provider and model of your choice.
 
-After clicking on a project name, you are taken to a screen where you can
+After opening the **Project Settings**, you are taken to a dialog where you can
 configure and start a new importer service job. Follow the steps below.
 
 {{< tabs "importer-service" >}}
@@ -49,24 +59,20 @@ configure and start a new importer service job. Follow the steps below.
    the service is using **O4 Mini**.
 3. Enter your **OpenAI API Key**.
 4. Click the **Start importer service** button.
-
-![Configure Importer service using OpenAI](../../images/graphrag-ui-configure-importer-openai.png)
 {{< /tab >}}
 
 {{< tab "OpenRouter" >}}
 1. Select **OpenRouter** from the **LLM API Provider** dropdown menu.
 2. Select the model you want to use from the **Model** dropdown menu. By default,
-   the service is using **Mistral AI - Mistral Nemo**.
-1. Enter your **OpenAI API Key**.
-2. Enter your **OpenRouter API Key**.
-3. Click the **Start importer service** button.
+   the service uses **Mistral AI - Mistral Nemo**.
+3. Enter your **OpenAI API Key**.
+4. Enter your **OpenRouter API Key**.
+5. Click the **Start importer service** button.
 
 {{< info >}}
-When using the OpenRouter option, the LLM responses are served via OpenRouter
-while OpenAI is used for the embedding model.
+When using OpenRouter, you need both API keys because the LLM responses are served
+via OpenRouter while OpenAI is used for the embedding model.
 {{< /info >}}
-
-![Configure Importer service using OpenRouter](../../images/graphrag-ui-configure-importer-openrouter.png)
 {{< /tab >}}
 
 {{< tab "Triton LLM Host" >}}
@@ -78,39 +84,59 @@ while OpenAI is used for the embedding model.
 Note that you must first register your model in MLflow. The [Triton LLM Host](../reference/triton-inference-server.md)
 service automatically downloads and loads models from the MLflow registry.
 {{< /info >}}
-
-![Configure Importer service using Triton](../../images/graphrag-ui-configure-importer-triton.png)
 {{< /tab >}}
 
 {{< /tabs >}}
 
-See also the [GraphRAG Importer](../reference/importer.md) service documentation.
+See also the [Importer](../reference/importer.md) service documentation.
8892
-## Upload your file
+## Add data source
 
-1. Upload a file by dragging and dropping it in the designated upload area.
-   The importer service you previously launched parses and creates the
-   Knowledge Graph automatically.
-2. Enter a file name.
-3. Click the **Start import** button.
+To add your first data source:
+
+1. In the **Data Sources** section, click the **Add data source** button.
+2. Upload a file by dragging and dropping it in the designated upload area.
+   The importer service you previously configured automatically parses the
+   file and creates the Knowledge Graph.
+3. Enter a descriptive name for your file.
+4. Click the **Start import** button.
 
 {{< info >}}
-You can only import a single file, either in `.md` or `.txt` format.
+Currently, you can import one file at a time in either Markdown (`.md`) or
+plain text (`.txt`) format. You can add more files later to update the
+Knowledge Graph.
 {{< /info >}}
 
 ![Upload file in GraphRAG web interface](../../images/graphrag-ui-upload-file.png)
102110
 ## Explore the Knowledge Graph
 
-You can open and explore the Knowledge Graph that has been generated by clicking
-on the **Explore in visualizer** button.
+After your file is processed, you can view and explore the generated
+Knowledge Graph in the **Graph** section.
+
+![Explore Knowledge Graph in GraphRAG web interface](../../images/graphrag-ui-explore-knowledge-graph.png)
+
+For a more detailed exploration, click the **Explore** button to open the
+Knowledge Graph in the dedicated Graph Visualizer.
 
 For more information, see the [Graph Visualizer](../../data-platform/graph-visualizer.md) documentation.
109121
+## Update the Knowledge Graph
+
+Once you have created your initial Knowledge Graph, you can update it by
+uploading additional files using the same process described in the
+[Add data source](#add-data-source) section.
+
+To update your Knowledge Graph:
+
+1. In the **Data Sources** section, click the **Add data source** button again.
+2. Upload a new file by dragging and dropping it in the designated upload area.
+3. The importer service processes the new file and updates the existing
+   Knowledge Graph and the underlying collections.
+
 ## Configure the Retriever service
 
-Creating the retriever service allows you to extract information from
-the generated Knowledge Graph. Follow the steps below to configure the service.
+The retriever service enables you to query and extract information from
+the generated Knowledge Graph. To configure the retriever service, open the
+**Project Settings** and follow the steps below.
 
 {{< tabs "retriever-service" >}}
 
@@ -120,8 +146,6 @@ the generated Knowledge Graph. Follow the steps below to configure the service.
    the service uses **O4 Mini**.
 3. Enter your **OpenAI API Key**.
 4. Click the **Start retriever service** button.
-
-![Configure Retriever Service using OpenAI](../../images/graphrag-ui-configure-retriever-openai.png)
 {{< /tab >}}
 
 {{< tab "OpenRouter" >}}
@@ -132,11 +156,9 @@ the generated Knowledge Graph. Follow the steps below to configure the service.
 4. Click the **Start retriever service** button.
 
 {{< info >}}
-When using the OpenRouter option, the LLM responses are served via OpenRouter
-while OpenAI is used for the embedding model.
+When using OpenRouter, the LLM responses are served via OpenRouter while OpenAI
+is used for the embedding model.
 {{< /info >}}
-
-![Configure Retriever Service using OpenRouter](../../images/graphrag-ui-configure-retriever-openrouter.png)
 {{< /tab >}}
 
@@ -148,27 +170,28 @@ while OpenAI is used for the embedding model.
 Note that you must first register your model in MLflow. The [Triton LLM Host](../reference/triton-inference-server.md)
 service automatically downloads and loads models from the MLflow registry.
 {{< /info >}}
-
-![Configure Retriever Service using Triton](../../images/graphrag-ui-configure-retriever-triton.png)
 {{< /tab >}}
 
 {{< /tabs >}}
 
-See also the [GraphRAG Retriever](../reference/retriever.md) documentation.
+See also the [Retriever](../reference/retriever.md) documentation.
 ## Chat with your Knowledge Graph
 
-The Retriever service provides two search methods:
-- [Local search](../reference/retriever.md#local-search): Local queries let you
-  explore specific nodes and their direct connections.
-- [Global search](../reference/retriever.md#global-search): Global queries uncover
-  broader patters and relationships across the entire Knowledge Graph.
-
-![Chat with your Knowledge Graph](../../images/graphrag-ui-chat.png)
+The chat interface provides two search methods:
+- **Instant search**: Instant queries return fast responses.
+- **Deep search**: Deep queries take longer to return a response.
 
 In addition to querying the Knowledge Graph, the chat service allows you to do the following:
-- Switch the search method from **Local Query** to **Global Query** and vice-versa
+- Switch the search method from **Instant search** to **Deep search** and vice versa
   directly in the chat
-- Change the retriever service
+- Change the retriever service or create a new one
 - Clear the chat
-- Integrate the Knowledge Graph chat service into your own applications
+
+## Integrate the Knowledge Graph chat service into your application
+
+To integrate any service into your own applications, go to **Project Settings**
+and use the copy button next to each service to copy its integration endpoint.
+You can make `POST` requests to the endpoints with your queries; the services
+accept `JSON` payloads and return structured responses for building custom
+interfaces.
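The integration flow above can be sketched as a minimal Python client. This is an illustrative sketch only: the endpoint URL is a placeholder (copy the real one from **Project Settings**), and the payload field names, `search_method` values, and bearer-token auth header are assumptions rather than the documented API schema.

```python
import json
import urllib.request

# Placeholder: replace with the endpoint copied from Project Settings.
RETRIEVER_ENDPOINT = "https://platform.example.com/graphrag/retriever/query"


def build_payload(query: str, deep: bool = False) -> dict:
    """Build a JSON payload for a retriever query.

    The field names "query" and "search_method" are illustrative
    assumptions, not the documented schema.
    """
    return {
        "query": query,
        "search_method": "deep" if deep else "instant",
    }


def ask(query: str, api_token: str, deep: bool = False) -> dict:
    """POST a query to the retriever service and return the parsed JSON response."""
    req = urllib.request.Request(
        RETRIEVER_ENDPOINT,
        data=json.dumps(build_payload(query, deep)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Auth scheme is an assumption; check your Platform deployment.
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A custom chat interface would call `ask()` once per user message and render the structured response, switching `deep=True` when the user selects the slower, more thorough search method.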