
Sepsis Detection Demo - HAPI FHIR Server

1. Overview

The purpose of this project is to provide a reproducible demo of a hospital encounter for a patient at risk of sepsis. The demo uses many Red Hat technologies.

Note
This branch of the project makes use of the community HAPI FHIR server. If you prefer to use SmileCDR, please refer to the smilecdr branch of this project.

Sepsis is a costly and life-threatening condition resulting in multi-organ failure. Beating conditions like sepsis requires rapid detection and mitigation of risks. Recovery at home is often preferred, yet medical teams often lack the capability to perform constant surveillance for emerging risks across their patient cohorts, especially in rural settings. We will demonstrate an early warning system driven by Clinical AI at the Edge, fed by at-home post-operative monitoring and automated clinical care escalation and coordination processes.

1.1. Technical Discussion Vectors

  1. Operations

    1. Ansible for automated and repeatable deployment of Application Services to OpenShift

    2. Deployment of Red Hat Application Services products via Operator Lifecycle Manager

  2. Events in Motion

    1. Use of Red Hat AMQ Streams

    2. Use of Debezium for Change Data Capture on HAPI FHIR database

    3. Use of both Binary and Structured Cloud Events

    4. Visibility of events via Kafdrop

    5. Raw FHIR-related Server-Sent Events streamed to the Angular UI

  3. Process Automation

    1. RH-PAM embedded in SpringBoot as process automation engine

    2. BPMN models with process variables of type FHIR R4

    3. User Task centric process model with Task lifecycle driven by clients via KIE-Server API

    4. Angular based simple Task Inbox web app

    5. KIE-Server APIs extended with custom endpoints that allow for (un)marshalling of HAPI FHIR R4 resources

    6. jBPM Executor Service for asynchronous tasks

    7. Dashbuilder for process and task related monitoring and KPIs

  4. Web Security

    Please see OIDC Enabled Workflow Apps for details.

    1. Use of OIDC Authorization Code Flow protocol between RH-SSO and Angular web app to obtain JWT based access token

    2. The access token includes the user's roles so as to facilitate UserGroupCallback functionality in RH-PAM

    3. KIE-Server APIs of RH-PAM secured via RH-SSO as Bearer-Token endpoints

  5. Polyglot Frameworks

    1. Quarkus

    2. SpringBoot

    3. JBoss EAP

    4. Python

    5. KNative Serverless functions

    6. Angular

  6. Machine Learning

    1. Machine Learning algorithm to determine probability of Sepsis
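
The distinction between binary and structured Cloud Events (item 2.3 above) can be sketched in a few lines. This is an illustrative example, not the demo's actual messages; attribute names follow the CloudEvents v1.0 spec, while the event contents are invented:

```python
import json

# A CloudEvent has context attributes (specversion, id, source, type, ...)
# plus a data payload. The same event can travel in two content modes.
attributes = {
    "specversion": "1.0",
    "id": "risk-cmd-001",
    "source": "sepsisdetection-rhpam",
    "type": "generateRiskAssessmentCommand",
}
data = {"patientId": "Patient/42", "sepsisProbable": True}

# Binary mode: attributes ride as transport headers (prefixed ce_ on Kafka,
# ce- on HTTP); the message body is just the raw data payload.
binary_headers = {f"ce_{k}": v for k, v in attributes.items()}
binary_body = json.dumps(data)

# Structured mode: attributes and data are serialized together into a single
# JSON envelope, identified by a CloudEvents content type.
structured_body = json.dumps({**attributes, "data": data})

print(binary_headers["ce_type"])  # generateRiskAssessmentCommand
print(json.loads(structured_body)["data"]["patientId"])  # Patient/42
```

Binary mode keeps the payload opaque to intermediaries; structured mode keeps the whole event in one self-describing document.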

2. Order from RHPDS

The Red Hat Product Demo System (RHPDS) provides a wide variety of cloud-based labs and demos showcasing Red Hat software. One of the offerings from RHPDS is the HIMSS 2021 Sepsis Detection Demo.

NOTE: Expect the ordering process to take about 1.5 hours total.

  1. Log into RHPDS

    To utilize RHPDS, you will need the following:

    1. OPENTLC credentials.

      OPENTLC credentials are available only to Red Hat associates and Red Hat partners.

    2. SFDC Opportunity, Campaign ID or Partner Registration

  2. In the left panel, navigate to: Catalog → All Services → Multi-Product Demo → HIMSS 2021 Sepsis Detection Demo

    rhpds demo home
  3. Read through the overview and click Order at the bottom of the page.

  4. Fill in the details in the Lab Information tab

    Note
    Among the ordering options, there are several OpenShift cluster sizes to choose from. A size of Training is sufficient to support the demo.
  5. Click: Submit.

  6. Expect to receive an initial two emails providing progress updates within the first 20 minutes of ordering.

    While waiting, one suggestion is to skim through the following three sections of this doc: Demo Components, Demo Scenario, and Architecture.

  7. Expect a third email about another 40 minutes after the second email.

    The base OpenShift environment is now provisioned.

    This third email will provide details regarding this OpenShift environment that the demo will run on.

    Log into the OpenShift Console using the details provided in the email.

  8. After the third email arrives, wait about another 30 minutes for the HIMSS demo itself to fully provision on the base OpenShift.

    1. If you’ve logged into the new OpenShift environment at the command line, you can monitor demo installation progress by executing the following:

      $ oc logs -f -c manager $( oc get pod -n ansible-system | grep "^ansible" | awk '{print $1}' ) -n ansible-system
    2. Upon successful completion of the HIMSS demo, a log statement should appear similar to the following:

      ----- Ansible Task Status Event StdOut (cache.redhat.com/v1alpha1, Kind=HIMSS2021, himss2021-sample/ansible-system) -----
      
      
      PLAY RECAP *********************************************************************
      localhost                  : ok=233  changed=74   unreachable=0    failed=0    skipped=42   rescued=0    ignored=0
      
      
      ----------
  9. The following are OpenShift namespaces with functionality that supports the demo:

    $  oc get project | grep 'knative\|sepsis'
    
    knative-eventing
    knative-serving
    knative-serving-ingress
    sepsisdetection-sso
    user1-sepsisdetection

3. Demo Scenario

The demo scenario involves three different users, each with a different role.

An Administrator starts a business process. A doctor reviews the state of the business process and administers any tasks assigned to him or her. A provider then administers any tasks assigned to him or her.

3.1. Reset Demo

  1. In the OpenShift console, navigate to the routes in the user1-sepsisdetection namespace

    ocp console ui route
    1. Click the URL of the sepsisdetection-ui route.

      sepsisui login
  2. Authenticate using credentials of: pamAdmin / pam

  3. Click the Reset Demo button:

    sepsisui admin no process
  4. After a few seconds, there should be an active business process:

    sepsisui singleprocess
  5. Click the Log out button at the top right corner to log out as an Administrator.

3.2. Doctors: Administer Tasks

  1. Log back into the Sepsis Detection UI as a doctor.

    Use credentials of: eve / pam.

  2. Click Show/Hide Workflow:

    sepsisui risk assessment

    Notice the timer on the Primary Doctor Evaluates Risk task. For the purpose of the demo, this timer is set to 1 minute. If not administered within 1 minute of creation, the workflow will automatically route to the On Call Doctor Evaluates Risk task.

  3. Click the My Tasks tab:

    sepsisui singletask
  4. On any of the tasks, click the Open button and decide on an appropriate course of action.

    sepsisui risk evaluation
    1. Select one of the options from the Risk Evaluation Result drop-down.

    2. Click Submit.

  5. Click the Log out button at the top right corner to log out as a doctor.

3.3. Providers: Administer Tasks

  1. Log back into the Sepsis Detection UI as a provider.

    Use credentials of: bob / pam.

  2. Similar to what you already did as a simulated doctor, manage the lifecycle of any tasks assigned to a provider.

4. Demo Components

The purpose of this section is to highlight the major components of the demo.

4.1. Red Hat SSO

rh sso

Red Hat SSO is used as the OpenID Connect provider of access tokens needed by other demo components for authentication and authorization.

For the purpose of the demo, RH-SSO consists of a single SSO realm configured with an SSO client and multiple users and roles to facilitate the use case.

4.2. Hapi FHIR Server

hapi fhir home

The demo consists of a HAPI FHIR JPA server.

This server maintains the state of all FHIR resources involved in the sepsis detection use case.

The HAPI FHIR server is backed by a PostgreSQL database.
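
The kind of payload this server ingests can be sketched as a FHIR R4 transaction bundle. The bundle below is hand-built for illustration (names and values are invented; LOINC 8867-4 is the standard heart-rate code); the demo's real payload lives in sepsisdetection-rhpam/src/test/resources/fhir/DemoBundle.json:

```python
import json

# Minimal FHIR R4 resources of the kind the demo POSTs to the server.
patient = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-01-15",
}
observation = {
    "resourceType": "Observation",
    "status": "final",
    # LOINC 8867-4 = heart rate
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]},
    "valueQuantity": {"value": 104, "unit": "beats/minute"},
}

# A transaction bundle posts several resources in one request.
bundle = {
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
        {"resource": patient, "request": {"method": "POST", "url": "Patient"}},
        {"resource": observation, "request": {"method": "POST", "url": "Observation"}},
    ],
}

body = json.dumps(bundle)
# Equivalent to:
#   curl -X POST -H "Content-Type: application/fhir+json" $FHIR_SERVER_URL/fhir -d "$body"
print(bundle["entry"][0]["resource"]["resourceType"])  # Patient
```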

4.3. AMQ Streams / Debezium / Kafdrop

The architecture of the demo is primarily event driven.

As such, the demo makes use of Red Hat’s AMQ Streams and Debezium technologies.

For monitoring of Kafka topics in AMQ Streams, the demo provides an instance of Kafdrop.

kafdrop home

Kafdrop allows for introspection of messages in the Kafka topics.

kafdrop message

4.4. Sepsisdetection-rhpam

This service consists of the RH-PAM process engine embedded in SpringBoot.

  1. This service consumes messages from Red Hat AMQ Streams

    In particular, it consumes change events from the PostgreSQL database of the HAPI FHIR server.

    e.g.: When a new FHIR Patient resource is posted to the HAPI FHIR REST API, a record is added to the HAPI FHIR PostgreSQL database. Subsequently, a change event that captures this database record is sent to an AMQ Streams/Kafka topic.

  2. This service also exposes the following RESTful APIs:

    1. Standard RH-PAM KIE-Server REST APIs

      swagger ui
    2. FHIR Enabled REST APIs:

      Augments the RH-PAM KIE-Server with additional APIs that allow for handling FHIR related process and task variables
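
The change events this service consumes (item 1 above) carry the FHIR resource as the gzipped res_text column of public.hfj_res_ver. A minimal sketch of unpacking such a payload; the inline example stands in for a real Debezium message and assumes Debezium's JSON converter renders the bytea column as base64:

```python
import base64
import gzip
import json

# Stand-in for the res_text value of a new public.hfj_res_ver row as it
# would appear in a Debezium change event (bytea rendered as base64).
resource = {"resourceType": "Patient", "id": "42"}
res_text = base64.b64encode(gzip.compress(json.dumps(resource).encode("utf-8")))

# Consumer side: base64-decode, gunzip, parse JSON, then dispatch on type.
fhir_json = json.loads(gzip.decompress(base64.b64decode(res_text)))
if fhir_json["resourceType"] == "Patient":
    print(f"start sepsis-detection process for Patient/{fhir_json['id']}")
```

In the demo, it is this Patient-resource branch that kicks off the sepsis-detection business process in the embedded RH-PAM engine.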

4.5. Dashbuilder

For monitoring of business processes and user tasks the demo provides an instance of RH-PAM’s dashbuilder technology.

dashbuilder tasks

4.6. Sepsisdetection-UI

The demo provides a user interface written in AngularJS.

sepsisui home

This UI enforces role-based access control over various features based on the roles in the SSO access token of an authenticated user.
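
The role-based access control relies on the realm_access.roles claim inside the JWT access token (compare the jq example in section 8.4). A sketch of how a client extracts it; the token below is hand-built and unsigned, purely for illustration:

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a dict as JSON, without padding (JWT convention)."""
    raw = base64.urlsafe_b64encode(json.dumps(data).encode())
    return raw.rstrip(b"=").decode()

# A JWT is header.payload.signature, each part base64url-encoded.
token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"preferred_username": "eve",
            "realm_access": {"roles": ["doctor", "kie-server", "user"]}}),
    "fake-signature",
])

# Client side: take the payload segment, re-pad the base64url, read roles.
payload_seg = token.split(".")[1]
payload_seg += "=" * (-len(payload_seg) % 4)
claims = json.loads(base64.urlsafe_b64decode(payload_seg))
print(claims["realm_access"]["roles"])  # ['doctor', 'kie-server', 'user']
```

A real client must also verify the signature against RH-SSO's public key; the sketch only shows where the roles live.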

5. Architecture

5.1. Sepsis Detection Runtime processes:

  1. sepsisdetection parent process

    sepsisdetection svg
  2. highmediummitigation Subprocess:

    highmediummitigation svg

5.2. Deployment Architecture

reference architecture actual
  1. An external client POSTs a FHIR R4 bundle (with a Patient, Location and multiple Observation resources) to the RESTful API of the HAPI FHIR JPA server.

  2. The HAPI FHIR JPA server persists (using Hibernate) to its PostgreSQL database. FHIR resources are stored as gzip blobs in the following table of the HAPI FHIR database schema: public.hfj_res_ver.

  3. Debezium detects the new records in the public.hfj_res_ver table and puts them in motion by sending the raw gzip blobs to a Kafka topic: fhir.public.hfj_res_ver

  4. Messages in the fhir.public.hfj_res_ver topic can now be viewed via monitoring tools such as Kafdrop. The sepsisdetection-rhpam application is also a consumer of that topic. When a Patient resource is consumed, the RH-PAM process engine embedded in the sepsisdetection-rhpam application is invoked and a sepsis-detection business process is started.

    These business processes and corresponding human tasks can be monitored via tools such as RH-PAM's dashbuilder component.

    1. As part of the sepsis-detection business process, the RESTful API of the HAPI FHIR server is queried for a list of all Observation resources for the Patient in a given time period and a PatientVitals resource is created.

    2. As part of the sepsis-detection business process, the PatientVitals resource is used as the payload of an HTTP POST request to the sepsisdetection-ml function. The function responds with an indication of whether sepsis is likely or not.

    3. As part of the sepsis-detection business process, a generateRiskAssessmentCommand message is sent (as a Cloud Event) to Red Hat AMQ Streams.

  5. The SepsisDetection-Risk service consumes the generateRiskAssessmentCommand Cloud Event. A FHIR R4 RiskAssessment resource (which includes the data indicating likelihood of sepsis) is posted to the FHIR server via its RESTful APIs.

    Debezium detects the addition of the RiskAssessment resource in the HAPI FHIR database and forwards this event as a message to Red Hat AMQ Streams.

    RH-PAM picks up this change event with the Risk Assessment resource and advances the business process to the next task.

  6. A user with a set of roles (defined in RH-SSO) authenticates into the SepsisDetection-UI. The sepsisdetection-ui interacts with RH-SSO (as per the Authorization Code Flow protocol of OIDC) to obtain an access token. The sepsisdetection-ui then calls the RESTful KIE-Server APIs of sepsisdetection-rhpam (including the access token in each request) and renders a user interface that allows for management of the sepsis-detection business process and corresponding human tasks. Depending on the roles of the authenticated user, that user is presented with user tasks to work through their lifecycle.

  7. The sepsisdetection-ui pulls in an IFrame from HealthFlow.
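
From the workflow's point of view, the sepsisdetection-ml function in step 4.2 above is a black box: vitals in, a sepsis likelihood out. Purely as an illustrative stand-in (not the demo's actual model), a rule-based SIRS-style screen over a PatientVitals-like payload could look like:

```python
# Hypothetical stand-in for the sepsisdetection-ml function: counts SIRS
# criteria (temperature, heart rate, respiratory rate, WBC) and flags
# sepsis risk when two or more are met. The demo uses a trained model.
def sirs_screen(vitals: dict) -> dict:
    criteria = [
        vitals["temperature_c"] > 38.0 or vitals["temperature_c"] < 36.0,
        vitals["heart_rate"] > 90,
        vitals["respiratory_rate"] > 20,
        vitals["wbc_per_ul"] > 12_000 or vitals["wbc_per_ul"] < 4_000,
    ]
    met = sum(criteria)
    return {"criteria_met": met, "sepsis_probable": met >= 2}

result = sirs_screen({
    "temperature_c": 38.6,
    "heart_rate": 104,
    "respiratory_rate": 18,
    "wbc_per_ul": 13_500,
})
print(result)  # {'criteria_met': 3, 'sepsis_probable': True}
```

Counting SIRS criteria (two or more met suggests possible sepsis) is a classic bedside screen; the point of the demo's machine-learning approach is to replace this kind of fixed rule with a trained model.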

6. Deploy to OpenShift using Ansible

Ansible is included to deploy this application to OpenShift in a repeatable manner.

This section is only relevant if the desire is to provision the demo on your own OpenShift environment rather than order the demo from RHPDS.

6.1. Pre-reqs:

  1. OpenShift Container Platform

    The sepsis detection demo has been tested on the following versions of OCP:

    1. 4.10.26

    2. 4.11.2

  2. Resource requirements

    Resource requirements of the app itself (not including the resources OpenShift needs to support itself) are as follows:

    1. RAM: 6 GB

    2. CPU: 8

    3. Storage: 10 PVCs of type RWO (no RWX requirement) and each of size 5 GiB

  3. cluster-admin credentials to this OpenShift cluster are needed

  4. wildcard certificate for routes

    Out-of-the-box installs of OCP typically include self-signed certs to secure the cluster's routes. It is highly recommended that a wildcard cert issued by a well-known certificate authority (e.g. Let's Encrypt) be applied to the cluster. If not, the sepsis detection demo will provision successfully, but the sepsisdetection-ui (as rendered in your browser) will not function correctly. In particular, CORS settings typically break when the various routes that your browser needs access to are secured using a self-signed cert.

  5. oc utility (of a version corresponding to the OCP cluster) installed locally

    All versions of this utility are available at either of the following:

  6. ansible installed locally

    e.g.: dnf install ansible

    1. On the host machine that will run Ansible, ensure that both the kubernetes and jmespath Python packages are installed for the version of Python used by ansible:

      1. Check version of python used by ansible:

        $ ansible --version
        
        
        ansible [core 2.12.2]
          ...
        
          python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
        
          ...
      2. Install dependencies as root user:

        # python3.8 -m pip install kubernetes jmespath
  7. git installed locally

6.2. Procedure:

  1. Using the oc utility that corresponds to the version of OpenShift that you will deploy to, log into the cluster:

    $ oc login <OCP API Server url> -u <cluster-admin userId> -p <passwd>
  2. Clone the source code of this project:

    $ git clone https://github.com/redhat-naps-da/himss_2021_sepsis_detection
  3. Change to the ansible directory of this project:

    $ cd config_mgmt/ansible
  4. Deploy to OpenShift:

    Note
    If you are running the install from a Mac, it will be necessary to manually create the user1-sepsisdetection namespace prior to the step below.
    $ ansible-playbook playbooks/install.yml
    1. Deployment should complete in about 15 minutes.

    2. Notice the creation of a new OCP namespace where the application resides: user1-sepsisdetection

    3. At the completion of the installation, expect to see messages similar to the following:

      PLAY RECAP *******************************************************************************************************************************************************************************************************
          localhost                  :  ok=137  changed=77   unreachable=0    failed=0    skipped=14   rescued=0    ignored=0
  5. Optional: Uninstall from OpenShift:

    $ ansible-playbook playbooks/uninstall.yml

7. Local containerized environment

This project includes a docker-compose config file that allows for deployment of the application as containers in your local environment.

This section is only relevant to developers of the demo

  1. Start application pod with all linux containers:

    $ docker-compose -f etc/docker-compose.yaml up -d
    Note
    If the underlying Linux container engine in your local environment is podman, then follow this set-up guide.
  2. The following diagram depicts the containers instantiated as part of this pod:

    docker compose architecture
  3. Post Debezium configs to kafka_connect container:

    $ curl -X POST \
            -H "Accept:application/json" -H "Content-Type:application/json" \
            localhost:8083/connectors/ \
            -d "@etc/hapi-fhir/debezium-fhir-server-pgsql.json"
    Note
    This step is not needed when running the solution in OpenShift. It's only needed when running the solution in a local containerized environment (i.e. docker-compose).
  4. Stop application pod with all linux containers:

    $ docker-compose -f etc/docker-compose.yaml down

8. Test

This section is only relevant to developers of the demo

8.1. Environment Variables

Set the following environment variables with values similar to the following:

  1. If testing locally deployed application (via docker-compose):

    export RHSSO_HOST=sso.local
    export RHSSO_URL=http://$RHSSO_HOST:4080
    export RHSSO_MASTER_PASSWD=admin
    export REALM_ID=kc-demo
    export retrieve_token_url="$RHSSO_URL/realms/$REALM_ID/protocol/openid-connect/token"
    export SEPSISDETECTION_RHPAM_URL=http://localhost:9080
    export FHIR_SERVER_URL=http://localhost:8080
  2. Add the following entry to your /etc/hosts:

    127.0.0.1   sso.local
  3. If testing environment deployed to OpenShift:

    SEPSISDETECTION_RHPAM_URL=https://$(oc get route sepsisdetection-rhpam -n user1-sepsisdetection --template='{{ .spec.host }}')
    RHSSO_URL=https://$(oc get route sso -n sepsisdetection-sso --template='{{ .spec.host }}')/auth
    REALM_ID=user1-sepsis
    retrieve_token_url="$RHSSO_URL/realms/$REALM_ID/protocol/openid-connect/token"
    FHIR_SERVER_URL=https://$(oc get route fhir-server -n user1-sepsisdetection --template='{{ .spec.host }}')

8.2. RH-SSO

8.2.1. Master Realm

You can log in directly to the master realm of the RH-SSO server. Details as follows:

  1. userId : admin

  2. password : execute the following from the command line:

    $ echo -en "\n$(
         oc get secret credential-rhsso -o json -n sepsisdetection-sso \
         | jq .data.ADMIN_PASSWORD \
         | sed 's/"//g' \
         | base64 -d
      )\n"
  3. url:

    Using the credentials listed above, log into the master realm of the RH-SSO server at the following URL:

    $ echo -en "\n$RHSSO_URL\n"

8.2.2. Sepsis Detection SSO Realm

  1. You can also log in directly to the custom SSO realm used in the demo. Details as follows:

    1. URL

      $ echo -en "\n$RHSSO_URL/auth/admin/$REALM_ID/console\n"
    2. userId : ssoRealmAdmin

    3. password : pam

8.3. HAPI FHIR Server

The application includes a HAPI FHIR Server that exposes RESTful endpoints.

  1. Test HAPI FHIR Server CORS headers using a preflight request:

    $ curl -i -X OPTIONS -H "Origin: http://localhost:7080" \
        -H 'Access-Control-Request-Method: POST' \
        -H 'Access-Control-Request-Headers: Content-Type, Authorization' \
        "http://localhost:8080/fhir"
    
    HTTP/1.1 200
    Vary: Origin
    Vary: Access-Control-Request-Method
    Vary: Access-Control-Request-Headers
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS,PATCH,HEAD
    Access-Control-Allow-Headers: Content-Type, Authorization
    Access-Control-Expose-Headers: Location, Content-Location
  2. POST Demo Observation to FHIR server

    $ curl -X POST \
           -H "Content-Type:application/fhir+json" \
           $FHIR_SERVER_URL/fhir \
           -d "@sepsisdetection-rhpam/src/test/resources/fhir/DemoBundle.json"
  3. POST Demo RiskAssessment to FHIR server

    $ curl -X POST \
           -H "Content-Type:application/fhir+json" \
           $FHIR_SERVER_URL/fhir/RiskAssessment \
           -d "@sepsisdetection-risk/src/test/resources/fhir/RiskAssessment.json"

8.4. SepsisDetection RHPAM

The sepsisdetection-rhpam deployment is enabled with the KIE-Server as well as various endpoints that can consume FHIR payloads.

  1. Retrieve an OAuth2 token using the sepsisdetection SSO client of the pre-configured SSO realm:

    TKN=$(curl -X POST "$retrieve_token_url" \
                -H "Content-Type: application/x-www-form-urlencoded" \
                -d "username=pamAdmin" \
                -d "password=pam" \
                -d "grant_type=password" \
                -d "client_id=sepsisdetection" \
                | sed 's/.*access_token":"//g' | sed 's/".*//g')
    
    echo $TKN
  2. By setting fullScopeAllowed=true in the SSO client, all roles associated with an authenticated user will be included in the access token.

    These roles can be visualized as follows:

    $ jq -R 'split(".") | .[1] | @base64d | fromjson' <<< $TKN | jq .realm_access.roles
    
    [
      "interviewer",
      "kie-server",
      "user"
    ]
  3. Health Check Report

    $ curl -H "Authorization: Bearer $TKN" \
           -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/healthcheck?report=true
  4. View raw swagger json

    $ curl -H "Authorization: Bearer $TKN" $SEPSISDETECTION_RHPAM_URL/rest/swagger.json | jq .
  5. View swagger-ui:

    Point your browser to the output of the following:

    $ echo -en "\n$SEPSISDETECTION_RHPAM_URL/rest/api-docs/?url=$SEPSISDETECTION_RHPAM_URL/rest/swagger.json\n"
    swagger ui
  6. List KIE Containers

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET $SEPSISDETECTION_RHPAM_URL/rest/server/containers
  7. List process definitions in JSON representation:

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/containers/sepsisdetection-kjar/processes/
  8. List process instances for a deployment in JSON representation:

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/queries/containers/sepsisdetection-kjar-1.0.0/process/instances
  9. Identify active node of process instance:

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/containers/sepsisdetection-kjar-1.0.0/processes/instances/${pInstanceId}/nodes/instances | jq .[][0]
  10. List user tasks given a list of roles in access token:

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/queries/tasks/instances/pot-owners | jq .
  11. List user tasks as a Business Admin:

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/queries/tasks/instances/admins | jq .
  12. List cases in JSON representation:

    $ curl -H "Authorization: Bearer $TKN" \
           -X GET -H 'Accept:application/json' \
           $SEPSISDETECTION_RHPAM_URL/rest/server/queries/cases/

9. Additional Development Notes

9.1. HAPI FHIR Server Development & Customizations

  1. Start HAPI FHIR server in debug mode:

    $ JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=*:5005,server=y,suspend=n"
    $ mvn clean package -DskipTests -Pboot
    $ java $JAVA_OPTS -jar target/ROOT.war
  2. View bytea type in res_text field of public.hfj_res_ver table:

    fhir=# \d hfj_res_ver
                              Table "public.hfj_res_ver"
         Column     |            Type             | Collation | Nullable | Default
    ----------------+-----------------------------+-----------+----------+---------
     pid            | bigint                      |           | not null |
     partition_date | date                        |           |          |
     partition_id   | integer                     |           |          |
     res_deleted_at | timestamp without time zone |           |          |
     res_version    | character varying(7)        |           |          |
     has_tags       | boolean                     |           | not null |
     res_published  | timestamp without time zone |           | not null |
     res_updated    | timestamp without time zone |           | not null |
     res_encoding   | character varying(5)        |           | not null |
     res_text       | bytea                       |           |          |
     res_id         | bigint                      |           | not null |
     res_type       | character varying(40)       |           | not null |
     res_ver        | bigint                      |           | not null |

9.2. sepsisdetection-rhpam

  1. Build and install kjar project:

    $ cd sepsisdetection-kjar
    
    $ mvn clean install -DskipTests
  2. Build KIE-Server executable from this project:

    $ cd sepsisdetection-rhpam
    
    $ mvn clean package
  3. Build and Start app

    $ mvn clean package -DskipTests && \
             java -Dorg.kie.server.repo=../etc/sepsisdetection-rhpam/runtime_configs \
                  -jar target/sepsisdetection-rhpam-0.0.1.jar &> /tmp/sepsisdetection-rhpam.log &
  4. Optional: Create a kie-container in kie-server (the kie-container should already be registered as per the contents of etc/rhpam/sepsisdetection-rhpam.xml)

    $ export KJAR_VERSION=1.0.0
    $ export KIE_SERVER_CONTAINER_NAME=sepsisdetection-rhpam
    
    $ sed "s/{KIE_SERVER_CONTAINER_NAME}/$KIE_SERVER_CONTAINER_NAME/g" etc/rhpam/kie_container.json \
         | sed "s/{KJAR_VERSION}/$KJAR_VERSION/g" \
         > /tmp/kie_container.json && \
         curl -u "kieserver:kieserver" -X PUT -H 'Content-type:application/json' localhost:9080/rest/server/containers/$KIE_SERVER_CONTAINER_NAME-$KJAR_VERSION -d '@/tmp/kie_container.json'

11. Bug list

  1. Decide what to do about integrating with Healthflow.io. Maybe create a simulator.

    HealthFlow was initially supposed to have been deployed on the kubeframe as part of the demo, but they couldn't get it containerized in time, so it was relegated to being displayed in that iFrame. There's a container image out there for it, but it's monolithic and bulky, and we have shelved helping them with it for the time being. It's based on a project called Meteor and includes an embedded FHIR server with a database instance, as well as some other components. Pretty heavy duty.

    Example URL:

  2. Persisting a list of Observations as part of process instance variables caused problems when retrieving those pInstance variables and marshalling them to JSON (so as to be rendered in sepsisdetection-ui).

  3. KnativeEventing

    Knative Eventing is not currently used. However, if it were used, there seems to be a problem with starting multiple KnativeKafka installs in the default knative-eventing namespace when deploying in a shared cluster.

12. Operator notes

12.2. Monitoring

The HIMSS Demo operator can be monitored by tailing its log file as follows:

$ oc logs -f -c manager $( oc get pod -n ansible-system | grep "^ansible" | awk '{print $1}' ) -n ansible-system

12.3. Development

12.3.1. Base Operator

Note
HIMSS 2021 demo is available via RHPDS. For the purpose of updating the HIMSS 2021 operator (which is invoked when ordering the demo from RHPDS), execution of the steps in this section is all that is needed.
  1. At the root of the ansible directory of this project, modify the Makefile (as needed)

    Most likely, all you’ll need to do is increment the VERSION.

  2. Ensure you have permissions to push images to the HIMSS 2021 Sepsis Detection Operator image repo.

  3. Using the podman utility, log into quay.io.

  4. Build image and deploy to quay:

    $ make docker-build docker-push
  5. Change latest tag in quay:

    In order for the updated sepsisdetection-operator image to be picked up in RHPDS, you’ll need to change the latest tag in quay.io.

    1. In your browser, navigate to https://quay.io/repository/redhat_naps_da/sepsisdetection-operator?tab=tags

    2. Log into quay.io as admin of the redhat_naps_da organization.

    3. Modify the latest tag such that it is linked with the latest image that was previously pushed.

      quay link

12.3.2. Optional: Deploy operator to your own OCP cluster

  1. Deploy operator in OpenShift cluster:

    $ make deploy
    
    cd config/manager && /u01/labs/mw/redhat-naps-da/himss_interoperability_showcase_2021/ansible/bin/kustomize edit set image controller=quay.io/redhat_naps_da/sepsisdetection-operator:0.0.2
    /u01/labs/mw/redhat-naps-da/himss_interoperability_showcase_2021/ansible/bin/kustomize build config/default | kubectl apply -f -
    I0831 13:00:25.259384   30895 request.go:668] Waited for 1.075752563s due to client-side throttling, not priority and fairness, request: GET:https://api.cluster-3983.3983.sandbox362.opentlc.com:6443/apis/security.internal.openshift.io/v1?timeout=32s
    namespace/ansible-system created
    customresourcedefinition.apiextensions.k8s.io/himss2021s.cache.redhat.com created
    serviceaccount/ansible-controller-manager created
    role.rbac.authorization.k8s.io/ansible-leader-election-role created
    clusterrole.rbac.authorization.k8s.io/ansible-manager-role created
    clusterrole.rbac.authorization.k8s.io/ansible-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/ansible-proxy-role created
    rolebinding.rbac.authorization.k8s.io/ansible-leader-election-rolebinding created
    clusterrolebinding.rbac.authorization.k8s.io/ansible-manager-rolebinding created
    clusterrolebinding.rbac.authorization.k8s.io/ansible-proxy-rolebinding created
    configmap/ansible-manager-config created
    service/ansible-controller-manager-metrics-service created
    deployment.apps/ansible-controller-manager created
  2. Install HIMSS2021 resource

    $ oc apply -f config/samples/cache_v1alpha1_himss2021.yaml -n ansible-system
  3. Acquire needed configs for use in RHPDS:

    $ mkdir rhpds
    $ bin/kustomize build config/default > rhpds/sepsisdetection-operator-all-configs.yml
    $ cp config/samples/cache_v1alpha1_himss2021.yaml rhpds

12.3.3. OLM

  1. list status of existing OLM on RHPDS cluster

    $  operator-sdk olm status --olm-namespace openshift-operator-lifecycle-manager
  2. uninstall existing OLM on RHPDS cluster

    $  operator-sdk olm uninstall --version 0.17.0
  3. install latest OLM in olm namespace

    $ operator-sdk olm install
  1. Demo Onboarding request into RHPDS

  2. agnosticd pull request