FHIR Pipelines Controller

The FHIR Pipelines Controller helps you schedule and manage the continuous transformation of data from a HAPI FHIR server to a collection of Apache Parquet files. It uses the FHIR Data Pipes pipeline to run either full or incremental transformations to a Parquet data warehouse.

The JDBC mode of the FHIR Pipelines Controller only works with HAPI FHIR servers. For an example of configuring a HAPI FHIR server to use Postgres, see the project documentation.

Usage

Setup

  1. Clone the fhir-data-pipes GitHub repository and open a terminal window.
  2. cd to the directory where you cloned it.
  3. Change to the controller directory: cd pipelines/controller/.

Later terminal commands will assume your working directory is the controller directory.
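
For example, the setup steps might look like the following; the repository URL assumes the main google/fhir-data-pipes GitHub location, so adjust it if you are working from a fork:

# Clone the repository and change into the controller directory.
git clone https://github.com/google/fhir-data-pipes.git
cd fhir-data-pipes/pipelines/controller/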

Next, configure the FHIR Pipelines Controller. It relies on several configuration files to run; edit them to match your environment and requirements.

Run the FHIR Pipelines Controller

There are two ways to run the FHIR Pipelines Controller.

Using Spring Boot:

mvn spring-boot:run

Running the JAR directly:

mvn clean install
java -jar ./target/controller-bundled.jar

Once the controller is running, open a web browser and visit http://localhost:8080. You should see the FHIR Pipelines Control Panel.
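
If you prefer to verify from the terminal, a quick request against the default port should return an HTTP 200 status once the control panel is up (8080 is assumed here; adjust if you changed the port):

# Print only the HTTP status code returned by the control panel.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080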

There are three ways to have the FHIR Pipelines Controller run the transformation pipeline:

  • Manually trigger a full run by clicking the Run Full button. This transforms all of the selected FHIR resource types to Parquet files. You must run Run Full once before using either of the incremental options below.
  • Manually trigger the Run Incremental option by clicking the button. This only outputs resources that are new or changed since the last run.
  • Automatically scheduled incremental runs, as specified by incrementalSchedule in the application.yaml file. You can see when the next scheduled run is near the top of the control panel.

After running the pipeline, look for the Parquet files created in the directory specified by dwhRootPrefix in the application.yaml file.
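
For illustration, a minimal sketch of how these two settings might appear in application.yaml; the fhirdata prefix, the cron format, and the path are assumptions here, so check the sample application.yaml shipped in this directory for the authoritative names and defaults:

fhirdata:
  # Root prefix for the Parquet data warehouse output (placeholder path).
  dwhRootPrefix: "/tmp/fhir-dwh/controller_DWH"
  # Schedule for automatic incremental runs, assumed to be a Spring-style
  # cron expression (placeholder: top of every hour).
  incrementalSchedule: "0 0 * * * *"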

Explore the configuration settings

The bottom area of the control panel shows the options being used by the FHIR Pipelines Controller.

Main configuration parameters

This section corresponds to the settings in the application.yaml file.

Batch pipeline non-default configurations

This section calls out FHIR Data Pipes batch pipeline settings that differ from their default values. These are also mostly derived from application.yaml. Use these settings if you want to run the batch pipeline manually, as in the sketch below.
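
As a rough sketch, a manual batch-pipeline invocation could look like the following, run from the controller directory; the jar path, flag names, and all values shown are illustrative assumptions, so copy the actual non-default settings reported by the control panel instead:

# Build the pipelines (from the repository root) so the batch jar exists,
# then run the batch pipeline with the settings from the control panel.
java -jar ../batch/target/batch-bundled.jar \
  --fhirServerUrl=http://localhost:8091/fhir \
  --outputParquetPath=/tmp/fhir-dwh/manual-run/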