diff --git a/README.md b/README.md index cedaf81..99d3ffd 100644 --- a/README.md +++ b/README.md @@ -3,6 +3,8 @@ The Alvearie Health Record Ingestion service: a common 'Deployment Ready Compone This repo contains the code for the Management API of the HRI, which uses [IBM Functions](https://cloud.ibm.com/docs/openwhisk?topic=cloud-functions-getting-started) (Serverless built on [OpenWhisk](https://openwhisk.apache.org/)) with [Golang](https://golang.org/doc/). Basically, this repo defines an API and maps endpoints to Golang executables packaged into 'actions'. IBM Functions takes care of standing up an API Gateway, executing & scaling the actions, and transmitting data between them. [mgmt-api-manifest.yml](mgmt-api-manifest.yml) defines the actions, API, and the mapping between them. A separate OpenAPI specification is maintained in [Alvearie/hri-api-spec](https://github.com/Alvearie/hri-api-spec) for external users' reference. Please Note: Any changes to this (RESTful) Management API for the HRI require changes in both the hri-api-spec repo and this hri-mgmt-api repo. +This version is compatible with HRI `v2.1`. + ## Communication * Please [join](https://alvearie.io/contributions/requestSlackAccess) our Slack channel for further questions: [#health-record-ingestion](https://alvearie.slack.com/archives/C01GM43LFJ6) * Please see recent contributors or [maintainers](MAINTAINERS.md) @@ -45,13 +47,16 @@ rm src/exec ## CI/CD Since this application must be deployed using IBM Functions in an IBM Cloud account, there isn't a way to launch and test the API & actions locally. So, we have set up GitHub Actions to automatically deploy every branch in its own IBM Function's namespace in our IBM Cloud account and run integration tests. They all share common Elasticsearch and Event Streams instances. Once it's deployed, you can perform manual testing with your namespace. You can also use the IBM Functions UI or IBM Cloud CLI to modify the actions or API in your namespace. 
When the GitHub branch is deleted, the associated IBM Function's namespace is also automatically deleted. +### Releases +Releases are created by tagging the repository in GitHub, which triggers a build that packages everything into a Docker image to deploy the Management API. See [docker/README.md](docker/README.md) for more details. + ### Docker image build -Images are published on every `develop` branch build with the tag `-timestamp`. +Images are published on every `develop` branch build with the tag `develop-timestamp`. ## Code Overview ### IBM Function Actions - Golang Mains -For each API endpoint, there is a Golang executable packaged into an IBM Function's 'action' to service the requests. There are several `.go` files in the base `src/` directory, one for each action and no others, each of which defines `func main()`. If you're familiar with Golang, you might be asking how there can be multiple files with different definitions of `func main()`. The Makefile takes care of compiling each one into a separate executable, and each file includes a [Build Constraint](https://golang.org/pkg/go/build/#hdr-Build_Constraints) to exclude it from unit tests. This also means these files are not unit tested and thus are kept as small as possible. Each one sets up any required clients and then calls an implementation method in a sub package. They also use `common.actionloopmin.Main()` to implement the OpenWhisk [action loop protocol](https://github.com/apache/openwhisk-runtime-go/blob/main/docs/ACTION.md). +For each API endpoint, there is a Golang executable packaged into an IBM Function's 'action' to service the requests. There are several `.go` files in the base `src/` directory, one for each action and no others, each of which defines `func main()`. If you're familiar with Golang, you might be asking how there can be multiple files with different definitions of `func main()`. 
The Makefile takes care of compiling each one into a separate executable, and each file includes a [Build Constraint](https://golang.org/pkg/go/build/#hdr-Build_Constraints) to exclude it from unit tests. This also means these files are not unit tested and thus are kept as small as possible. Each one sets up any required clients and then calls an implementation method in a sub package. They also use `common.actionloopmin.Main()` to implement the OpenWhisk [action loop protocol](https://github.com/apache/openwhisk-runtime-go/blob/master/docs/ACTION.md). The compiled binaries have to be named `exec` and put in a zip file. Additionally, an `exec.env` file has to be included, which contains the name of the Docker container to use when running the action. All the zip files are written to the `build` directory when running `make`. @@ -77,14 +82,15 @@ The goal is to have 90% code coverage with unit tests. The build automatically p The API that this repo implements is defined in [Alvearie/hri-api-spec](https://github.com/Alvearie/hri-api-spec) using OpenAPI 3.0. There are automated Dredd tests to make sure the implemented API meets the spec. If there are changes to the API, make them to the specification repo using a branch with the same name. Then the Dredd tests will run against the modified API specification. ### Authentication & Authorization -All endpoints (except the health check) require an OAuth 2.0 JWT bearer access token per [RFC8693](https://tools.ietf.org/html/rfc8693) in the `Authorization` header field. The Tenant and Stream endpoints require IAM tokens, but the Batch endpoints require a token with HRI and Tenant scopes for authorization. The Batch token issuer is configurable via a bound parameter, and must be OIDC compliant because the code dynamically uses the OIDC defined well know endpoints to validate tokens. Integration and testing have already been completed with App ID, the standard IBM Cloud solution. 
+All endpoints (except the health check) require an OAuth 2.0 JWT bearer access token per [RFC8693](https://tools.ietf.org/html/rfc8693) in the `Authorization` header field. The Tenant and Stream endpoints require IAM tokens, but the Batch endpoints require a token with HRI and Tenant scopes for authorization. The Batch token issuer is configurable via a bound parameter, and must be OIDC compliant because the code dynamically uses the OIDC-defined well-known endpoints to validate tokens. Integration and testing have already been completed with [App ID](https://cloud.ibm.com/docs/appid), the standard IBM Cloud solution. Batch JWT access token scopes: -- hri_data_integrator - Data Integrators can create, get, and change the status of batches, but only ones that they created. -- hri_consumer - Consumers can list and get Batches +- hri_data_integrator - Data Integrators can create, get, and call 'sendComplete' and 'terminate' endpoints for batches, but only ones that they created. +- hri_consumer - Consumers can list and get batches. +- hri_internal - For internal processing, can call batch 'processingComplete' and 'fail' endpoints. - tenant_ - provides access to this tenant's batches. This scope must use the prefix 'tenant_'. For example, if a data integrator tries to create a batch by making an HTTP POST call to `tenants/24/batches`, the token must contain scope `tenant_24`, where the `24` is the tenantId. -The scopes claim must contain one or more of the HRI roles ("hri_data_integrator", "hri_consumer") as well as the tenant id of the tenant being accessed. +The scopes claim must contain one or more of the HRI roles ("hri_data_integrator", "hri_consumer", "hri_internal") as well as the tenant id of the tenant being accessed. ## Contribution Guide Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us. 
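The scope rules above (an HRI role plus a matching `tenant_<tenantId>` scope) can be sketched in Go. This is a hypothetical illustration, not the repo's actual authorization code; the function name `hasAccess` and its signature are invented for this example:

```go
package main

import (
	"fmt"
	"strings"
)

// hasAccess checks a space-delimited OAuth "scope" claim the way the README
// describes: the token must carry at least one of the allowed HRI roles AND
// the tenant_<tenantId> scope for the tenant being accessed.
func hasAccess(scopeClaim, tenantId string, allowedRoles []string) bool {
	hasRole, hasTenant := false, false
	for _, s := range strings.Fields(scopeClaim) {
		for _, r := range allowedRoles {
			if s == r {
				hasRole = true
			}
		}
		if s == "tenant_"+tenantId {
			hasTenant = true
		}
	}
	return hasRole && hasTenant
}

func main() {
	// A consumer token for tenant 24 accessing tenants/24/batches.
	fmt.Println(hasAccess("hri_consumer tenant_24", "24",
		[]string{"hri_consumer", "hri_data_integrator"}))
}
```

For example, a POST to `tenants/24/batches` by a data integrator would pass only if the claim contains both `hri_data_integrator` and `tenant_24`.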
diff --git a/deploy.sh b/deploy.sh index f978671..aec5398 100755 --- a/deploy.sh +++ b/deploy.sh @@ -4,6 +4,8 @@ # # SPDX-License-Identifier: Apache-2.0 +2>&1 + set -eo pipefail echo "CLOUD_API_KEY: ****" @@ -16,7 +18,7 @@ echo "ELASTIC_SVC_ACCOUNT: $ELASTIC_SVC_ACCOUNT" echo "KAFKA_INSTANCE: $KAFKA_INSTANCE" echo "KAFKA_SVC_ACCOUNT: $KAFKA_SVC_ACCOUNT" echo "OIDC_ISSUER: $OIDC_ISSUER" -echo "JWT_AUDIENCE_ID: $JWT_AUDIENCE_ID" +echo "VALIDATION: $VALIDATION" # determine if IBM Cloud CLI is already installed set +e > /dev/null 2>&1 @@ -55,9 +57,19 @@ fi ibmcloud fn deploy --manifest mgmt-api-manifest.yml echo "Building OpenWhisk Parameters" -params="$(cat <`) | | APPID_PREFIX | (Optional) Prefix string to append to the AppId applications and roles created during deployment | -| SET_UP_APPID | (Optional) defaults to true. Set to false if you do not want the App ID set-up described [above](#using-app-id-for-oidc-authentication) enabled. | +| SET_UP_APPID | (Optional) defaults to true. Set to false if you do not want the App ID set-up enabled. | ## Implementation Details @@ -25,12 +24,12 @@ The image entrypoint is `run.sh`, which: 1. sets some environment variables 1. logs into the IBM Cloud CLI 1. calls `elastic.sh` - 1. calls `appid.sh` + 1. calls `appid.sh` 1. calls `deploy.sh` `elastic.sh` turns off automatic index creation and sets the default template for batch indexes. These are idempotent actions, so they can be executed multiple times. -`appid.sh` creates HRI application as well as HRI Consumer and HRI Data Integrator roles in AppId. +`appid.sh` creates HRI and HRI Internal applications and HRI Internal, HRI Consumer, and HRI Data Integrator roles in AppId. `deploy.sh` deploys the Management API to IBM Functions and runs smoke tests (by calling the health check endpoint). 
diff --git a/docker/appid.sh b/docker/appid.sh index c583cd4..b2fbffd 100755 --- a/docker/appid.sh +++ b/docker/appid.sh @@ -2,6 +2,8 @@ # # SPDX-License-Identifier: Apache-2.0 +#!/bin/bash + # Exit on errors set -e @@ -12,18 +14,17 @@ echo "issuer:$issuer" # Get IAM Token # Note, in this command and many below, the response is gathered and then sent to jq via echo (rather than piping directly) because if you pipe the response # directly to jq, the -f flag to fail if the curl command fails will not terminate the script properly. -echo echo "Requesting IAM token" response=$(curl -X POST -sS 'https://iam.cloud.ibm.com/identity/token' -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=${CLOUD_API_KEY}") iamToken=$(echo $response | jq -r '.access_token // "NO_TOKEN"') if [ $iamToken = "NO_TOKEN" ]; then echo "the_curl_response: $response" echo "Error getting IAM Token! Exiting!" - exit 1 + exit 1 fi # Create application -echo +echo echo "Creating HRI provider application" # Do not fail script if this call fails. We need to check if it failed because of a CONFLICT, in which case the script will exit 0. hriApplicationName="${APPID_PREFIX}HRI Management API" @@ -49,13 +50,12 @@ if [ -z $hriApplicationId ]; then response=$(curl -X GET -sS "${issuer}/applications" -H "Authorization: Bearer ${iamToken}") hriApplicationId=$(echo $response | jq -r --arg name "$hriApplicationName" '.applications[] | select(.name == $name) | .clientId') - echo - echo "hriApplicationId: $hriApplicationId" if [ -z $hriApplicationId ]; then echo "Failed to get existing HRI Management API application ID! Unable to set JWT_AUDIENCE_ID!" 
echo "the_curl_response: $response" exit 1 fi + echo echo "Setting JWT_AUDIENCE_ID to existing HRI Management API ID: $hriApplicationId" echo $hriApplicationId > JWT_AUDIENCE_ID exit 0 @@ -69,16 +69,34 @@ echo $hriApplicationId > JWT_AUDIENCE_ID # Assign scopes to application echo -echo "Assigning hri_consumer and hri_data_integrator scopes to HRI provider application" +echo "Assigning hri_internal, hri_consumer and hri_data_integrator scopes to HRI provider application" curl -X PUT -sS -f "${issuer}/applications/${hriApplicationId}/scopes" -H "Content-Type: application/json" -H "Authorization: Bearer ${iamToken}" -d @- << EOF { -"scopes": [ "hri_consumer", "hri_data_integrator"] +"scopes": [ "hri_internal", "hri_consumer", "hri_data_integrator"] } EOF # Create roles -echo +echo echo "Creating roles for each of the created scopes" +response=$(curl -X POST -sS "${issuer}/roles" -H "Authorization: Bearer ${iamToken}" -H "Content-Type: application/json" -d @- << EOF +{ +"name": "${APPID_PREFIX}HRI Internal", +"description": "HRI Internal Role", +"access": [ { + "application_id": "${hriApplicationId}", + "scopes": [ "hri_internal" ] +} ] +} +EOF +) +internalRoleId=$(echo $response | jq -r '.id // "REQUEST_FAILED"') +if [ $internalRoleId = "REQUEST_FAILED" ]; then + echo "Error Creating role: HRI Internal!" + echo "the_curl_response: $response" + exit 1 +fi + response=$(curl -X POST -sS "${issuer}/roles" -H "Authorization: Bearer ${iamToken}" -H "Content-Type: application/json" -d @- << EOF { "name": "${APPID_PREFIX}HRI Consumer", @@ -94,7 +112,7 @@ consumerRoleId=$(echo $response | jq -r '.id // "REQUEST_FAILED"') if [ $consumerRoleId = "REQUEST_FAILED" ]; then echo "Error Creating role: HRI Consumer Role!" 
echo "the_curl_response: $response" - exit 1 + exit 1 fi response=$(curl -X POST -sS "${issuer}/roles" -H "Authorization: Bearer ${iamToken}" -H "Content-Type: application/json" -d @- << EOF @@ -112,7 +130,34 @@ dataIntegratorRoleId=$(echo $response | jq -r '.id // "REQUEST_FAILED"') if [ $dataIntegratorRoleId = "REQUEST_FAILED" ]; then echo "Error Creating role: HRI Data Integrator Role!" echo "the_curl_response: $response" - exit 1 + exit 1 fi +# Create HRI Internal application. +echo +echo "Creating HRI Internal application" +response=$(curl -X POST -sS "${issuer}/applications" -H "Authorization: Bearer ${iamToken}" -H 'Content-Type: application/json' -d @- << EOF +{ +"name": "${APPID_PREFIX}HRI Internal", +"type": "regularwebapp" +} +EOF +) +internalApplicationId=$(echo $response | jq -r '.clientId // "REQUEST_FAILED"') +if [ $internalApplicationId = "REQUEST_FAILED" ]; then + echo "Error Creating HRI Internal application!" + echo "the_curl_response: $response" + exit 1 +fi + +# Assign roles to internal application. +echo +echo "Assigning internal and consumer roles to HRI Internal application" +curl -X PUT -sS -f "${issuer}/applications/${internalApplicationId}/roles" -H "Authorization: Bearer ${iamToken}" -H "Content-Type: application/json" -d @- << EOF +{ +"roles":{ + "ids":["${internalRoleId}", "${consumerRoleId}"] +}} +EOF + exit 0 diff --git a/docker/elastic.sh index e754ad3..f6dff31 100755 --- a/docker/elastic.sh +++ b/docker/elastic.sh @@ -23,22 +23,21 @@ echo "ES baseUrl: ${baseUrl/:\/\/*@/://}" rtn=0 # set auto-index creation off -echo echo "Setting ElasticSearch auto index creation to false" -curl -X PUT -sS -f $baseUrl/_cluster/settings -H 'Content-Type: application/json' -d' +curl -sS -f -X PUT $baseUrl/_cluster/settings -H 'Content-Type: application/json' -d' { "persistent": { "action.auto_create_index": "false" } -}' || { echo 'Setting ElasticSearch auto index creation failed!' 
; rtn=1; } +}' || { echo -e 'Setting ElasticSearch auto index creation failed!' ; rtn=1; } # upload batches index template echo -echo "Setting ElasticSearch Batches index template" -curl -X PUT -sS -f $baseUrl/_index_template/batches -H 'Content-Type: application/json' -d '@batches.json' || +echo -e "Setting ElasticSearch Batches index template" +curl -sS -f -X PUT $baseUrl/_index_template/batches -H 'Content-Type: application/json' -d '@batches.json' || { - echo -e '\nSetting ElasticSearch Batches index template failed!' ; rtn=1; + echo -e 'Setting ElasticSearch Batches index template failed!' ; rtn=1; } echo -echo "ElasticSearch configuration complete" +echo -e "ElasticSearch configuration complete" exit $rtn diff --git a/docker/template.env b/docker/template.env index 64c0228..a4d0875 100644 --- a/docker/template.env +++ b/docker/template.env @@ -1,3 +1,4 @@ +# Environment Variables for local testing IBM_CLOUD_API_KEY= IBM_CLOUD_REGION=ibm:yp:us-south RESOURCE_GROUP=MY_RESOURCE_GROUP diff --git a/document-store/index-templates/batches.json b/document-store/index-templates/batches.json index 47a6e3b..af069b4 100644 --- a/document-store/index-templates/batches.json +++ b/document-store/index-templates/batches.json @@ -17,26 +17,46 @@ }, "recordCount": { "type": "long", - "index": "false" + "index": false + }, + "expectedRecordCount": { + "type": "long", + "index": false + }, + "actualRecordCount": { + "type": "long", + "index": false }, "topic": { "type": "keyword", - "index": "false" + "index": false }, "dataType": { "type": "keyword", - "index": "false" + "index": false }, "startDate": { "type": "date" }, "endDate": { "type": "date", - "index": "false" + "index": false }, "metadata": { "type": "object", - "enabled": "false" + "enabled": false + }, + "invalidThreshold": { + "type": "long", + "index": false + }, + "invalidRecordCount": { + "type": "long", + "index": false + }, + "failureMessage": { + "type": "text", + "index": false } } } diff --git 
a/elastic-cert64 b/elastic-cert64 index 3d59122..20f6114 100644 --- a/elastic-cert64 +++ b/elastic-cert64 @@ -16,4 +16,4 @@ H+6i04hA9TkKT6ooLwMPc1LYYzqDljEkfKlLIPWCkOAozD3cyc26pV/35nG7WzAF xw7S3jAyB3WcJDlWlSWGTn58w3EHxzVXvKT6Y9eAdKp4SjUHyVFsL5xtSyjH8zpF pZKK8wWNUwgWQ66MNh8Ckq732JZ+so6RAfb4BbNj45I3s9fuZSYlvjkc5/+da3Ck Rp6anX5N6yIrzhVmAgefjQdBztYzdfPhsJBkS/TDnRmk ------END CERTIFICATE----- \ No newline at end of file +-----END CERTIFICATE----- diff --git a/mgmt-api-manifest.yml b/mgmt-api-manifest.yml index d5ef9fa..b0f1743 100644 --- a/mgmt-api-manifest.yml +++ b/mgmt-api-manifest.yml @@ -4,26 +4,26 @@ packages: hri_mgmt_api: - version: 1.2.5 + version: 2.1.5 actions: create_batch: function: build/batches_create-bin.zip runtime: go:1.15 web-export: true annotations: - require-whisk-auth: $FN_WEB_SECURE_KEY + require-whisk-auth: $FN_WEB_SECURE_KEY get_batches: function: build/batches_get-bin.zip runtime: go:1.15 web-export: true annotations: - require-whisk-auth: $FN_WEB_SECURE_KEY + require-whisk-auth: $FN_WEB_SECURE_KEY get_batch_by_id: function: build/batches_get_by_id-bin.zip runtime: go:1.15 web-export: true annotations: - require-whisk-auth: $FN_WEB_SECURE_KEY + require-whisk-auth: $FN_WEB_SECURE_KEY healthcheck: function: build/healthcheck_get-bin.zip runtime: go:1.15 @@ -36,12 +36,24 @@ packages: web-export: true annotations: require-whisk-auth: $FN_WEB_SECURE_KEY + processing_complete: + function: build/batches_processingcomplete-bin.zip + runtime: go:1.15 + web-export: true + annotations: + require-whisk-auth: $FN_WEB_SECURE_KEY terminate_batch: function: build/batches_terminate-bin.zip runtime: go:1.15 web-export: true annotations: require-whisk-auth: $FN_WEB_SECURE_KEY + fail_batch: + function: build/batches_fail-bin.zip + runtime: go:1.15 + web-export: true + annotations: + require-whisk-auth: $FN_WEB_SECURE_KEY create_tenant: function: build/tenants_create-bin.zip runtime: go:1.15 @@ -67,23 +79,23 @@ packages: annotations: require-whisk-auth: 
$FN_WEB_SECURE_KEY get_streams: - function: build/streams_get-bin.zip - runtime: go:1.15 - web-export: true - annotations: - require-whisk-auth: $FN_WEB_SECURE_KEY + function: build/streams_get-bin.zip + runtime: go:1.15 + web-export: true + annotations: + require-whisk-auth: $FN_WEB_SECURE_KEY create_stream: - function: build/streams_create-bin.zip - runtime: go:1.15 - web-export: true - annotations: - require-whisk-auth: $FN_WEB_SECURE_KEY + function: build/streams_create-bin.zip + runtime: go:1.15 + web-export: true + annotations: + require-whisk-auth: $FN_WEB_SECURE_KEY delete_stream: - function: build/streams_delete-bin.zip - runtime: go:1.15 - web-export: true - annotations: - require-whisk-auth: $FN_WEB_SECURE_KEY + function: build/streams_delete-bin.zip + runtime: go:1.15 + web-export: true + annotations: + require-whisk-auth: $FN_WEB_SECURE_KEY apis: hri-batches: hri: @@ -120,18 +132,26 @@ packages: send_complete: method: PUT response: http + tenants/{tenantId}/batches/{batchId}/action/processingComplete: + processing_complete: + method: PUT + response: http tenants/{tenantId}/batches/{batchId}/action/terminate: terminate_batch: method: PUT response: http + tenants/{tenantId}/batches/{batchId}/action/fail: + fail_batch: + method: PUT + response: http tenants/{tenantId}/streams: - get_streams: - method: GET - response: http + get_streams: + method: GET + response: http tenants/{tenantId}/streams/{streamId}: - create_stream: - method: POST - response: http - delete_stream: - method: DELETE - response: http + create_stream: + method: POST + response: http + delete_stream: + method: DELETE + response: http diff --git a/monitors/README.md b/monitors/README.md deleted file mode 100644 index e51e58b..0000000 --- a/monitors/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# New Relic Monitors - -This directory contains Javascript monitors for [New Relic](https://newrelic.com), the monitoring framework for Watson Foundation for Health solutions. 
These monitors allow an operations team to track service availability for instances of the Management API. diff --git a/monitors/new-relic-healthcheck.js b/monitors/new-relic-healthcheck.js deleted file mode 100644 index 4dda310..0000000 --- a/monitors/new-relic-healthcheck.js +++ /dev/null @@ -1,32 +0,0 @@ -/** - * (C) Copyright IBM Corp. 2020 - * - * SPDX-License-Identifier: Apache-2.0 - */ -$util.insights.set('service_name', 'wh-hri-mgmt-api'); -// Each IBM functions namespace has a unique API endpoint URL, which can be -// retrieved from the IBM Cloud dashboard. When creating a monitor in New Relic -// you will need to include the endpoint in the string below. -const HEALTH_CHECK_URL = '/healthcheck'; - -const assert = require('assert'); - -function evaluateHealth(err, response, body) { - if (err) { - assert.fail('Unable to contact endpoint.', err.message); - } - - assert.ok(response.statusCode == 200, 'Expected a 200 OK response, got ' + response.statusCode); -} - -const options = { - url: HEALTH_CHECK_URL, - headers: { - // API Key required for accessing HRI-API namespace; secret must be created - // in the New Relic web application - 'X-IBM-Client-Id': $secure.HRI_MGMT_API_KEY, - 'Content-Type': 'application/json', - 'Accept': 'application/json' - } -} -$http.get(options, evaluateHealth); diff --git a/run-dreddtests.sh b/run-dreddtests.sh index e8658ea..54fc4ca 100755 --- a/run-dreddtests.sh +++ b/run-dreddtests.sh @@ -1,4 +1,5 @@ #!/bin/bash + # (C) Copyright IBM Corp. 2020 # # SPDX-License-Identifier: Apache-2.0 @@ -15,7 +16,7 @@ exists=$(git show-ref refs/remotes/origin/${TRAVIS_BRANCH}) if [[ -n "$exists" ]]; then git checkout ${TRAVIS_BRANCH} else - git checkout support-1.x + git checkout support-2.x fi # convert API to swagger 2.0 diff --git a/run-ivttests.sh b/run-ivttests.sh index f2a525f..bdd106a 100755 --- a/run-ivttests.sh +++ b/run-ivttests.sh @@ -1,7 +1,16 @@ #!/usr/bin/env bash + # (C) Copyright IBM Corp. 
2020 # # SPDX-License-Identifier: Apache-2.0 -echo 'Run IVT Tests' -rspec test/spec --tag ~@broken --format documentation --format RspecJunitFormatter --out ivttest.xml +set -e + +echo 'Run IVT Tests Without Validation' +rspec test/spec/hri_management_api_no_validation_spec.rb --tag ~@broken --format documentation --format RspecJunitFormatter --out test/ivt_test_results/ivttest_no_validation.xml + +echo 'Turn On Validation' +./setValidation.sh true + +echo 'Run IVT Tests With Validation' +rspec test/spec/hri_management_api_validation_spec.rb --tag ~@broken --format documentation --format RspecJunitFormatter --out test/ivt_test_results/ivttest_validation.xml \ No newline at end of file diff --git a/run-smoketests.sh b/run-smoketests.sh index 20b8dc8..47e6ef8 100755 --- a/run-smoketests.sh +++ b/run-smoketests.sh @@ -1,4 +1,5 @@ #!/usr/bin/env bash + # (C) Copyright IBM Corp. 2020 # # SPDX-License-Identifier: Apache-2.0 @@ -10,6 +11,7 @@ failing=0 output="" # lookup the base API url for the current targeted functions namespace +# Note: this doesn't work on macOS due to differences in `sed` flags as compared to Linux. serviceUrl=$(ibmcloud fn api list -f | grep 'URL: ' | grep 'hri/healthcheck' -m 1 | sed -rn 's/^.*: (.*)\/hri.*/\1\/hri/p') echo 'Run Smoke Tests' diff --git a/run-unittests.sh b/run-unittests.sh index 01e95f5..0127023 100755 --- a/run-unittests.sh +++ b/run-unittests.sh @@ -1,4 +1,5 @@ #!/bin/bash + # (C) Copyright IBM Corp. 2020 # # SPDX-License-Identifier: Apache-2.0 diff --git a/setValidation.sh b/setValidation.sh new file mode 100755 index 0000000..2364541 --- /dev/null +++ b/setValidation.sh @@ -0,0 +1,83 @@ +#!/usr/bin/env bash + +# (C) Copyright IBM Corp. 2020 +# +# SPDX-License-Identifier: Apache-2.0 +# +# This script will set the 'validation' property on the Management API to 'true' or 'false'. This property changes the behavior of the API based on whether there is Flink Validation. 
The main use for this script is to support integration testing of both behaviors. +set -eo pipefail + +validation=$1 + +if [ -z "$validation" ] || [ "$validation" != "true" ] && [ "$validation" != "false" ]; +then + echo "Missing validation parameter or invalid value." + echo "Usage: ./setValidation.sh ( true | false )" + exit 1 +fi + +if [ -z "$ELASTIC_INSTANCE" ] +then + echo "Please set ELASTIC_INSTANCE environment variable" + exit 1 +fi + +if [ -z "$ELASTIC_SVC_ACCOUNT" ] +then + echo "Please set ELASTIC_SVC_ACCOUNT environment variable" + exit 1 +fi + +if [ -z "$KAFKA_INSTANCE" ] +then + echo "Please set KAFKA_INSTANCE environment variable" + exit 1 +fi + +if [ -z "$KAFKA_SVC_ACCOUNT" ] +then + echo "Please set KAFKA_SVC_ACCOUNT environment variable" + exit 1 +fi + +if [ -z "$OIDC_ISSUER" ] +then + echo "Please set OIDC_ISSUER environment variable" + exit 1 +fi + +if [ -z "$JWT_AUDIENCE_ID" ] +then + read -p "JWT_AUDIENCE_ID not set. Enter the Audience ID: " JWT_AUDIENCE_ID +fi + +echo "Setting validation to $validation" + +# NOTE: updating a package with parameters will overwrite all the existing parameters, including bound credentials. So if more parameters are ever added, they will need to be included in this script. And the service bind commands always need to be rerun at the end. 
+ +echo "Building OpenWhisk Parameters" +params="$(cat << EOF +{ + "issuer": "$OIDC_ISSUER", + "validation": $validation, + "jwtAudienceId": "$JWT_AUDIENCE_ID" +} +EOF +)" +echo $params + +# save OpenWhisk parameters to temp file +paramFile=$(mktemp) +echo $params > $paramFile + +echo "Created temp params file" + +# set config parameters, all of them have to be set in the same command +ibmcloud fn package update hri_mgmt_api --param-file $paramFile + +rm $paramFile + +# bind Elastic and Kafka service instances to hri-mgmt-api +ibmcloud fn service bind databases-for-elasticsearch hri_mgmt_api --instance "${ELASTIC_INSTANCE}" --keyname "${ELASTIC_SVC_ACCOUNT}" +ibmcloud fn service bind messagehub hri_mgmt_api --instance "${KAFKA_INSTANCE}" --keyname "${KAFKA_SVC_ACCOUNT}" + diff --git a/src/batches/conversion.go b/src/batches/conversion.go index f10db69..b673cea 100644 --- a/src/batches/conversion.go +++ b/src/batches/conversion.go @@ -19,6 +19,20 @@ const ( func EsDocToBatch(esDoc map[string]interface{}) map[string]interface{} { batch := esDoc["_source"].(map[string]interface{}) batch[param.BatchId] = esDoc[esparam.EsDocId] + batch = NormalizeBatchRecordCountValues(batch) + return batch +} + +// If the provided batch has either recordCount or expectedRecordCount (but not both), +// this method will set the unset property to match. This is temporary to support the deprecated +// recordCount, but expectedRecordCount is the property that will be used long-term. 
+func NormalizeBatchRecordCountValues(batch map[string]interface{}) map[string]interface{} { + var recordCount, expectedRecordCount = batch[param.RecordCount], batch[param.ExpectedRecordCount] + if recordCount != nil && expectedRecordCount == nil { + batch[param.ExpectedRecordCount] = recordCount + } else if recordCount == nil && expectedRecordCount != nil { + batch[param.RecordCount] = expectedRecordCount + } return batch } diff --git a/src/batches/conversion_test.go b/src/batches/conversion_test.go index c7ca3c8..46e8a09 100644 --- a/src/batches/conversion_test.go +++ b/src/batches/conversion_test.go @@ -35,6 +35,58 @@ func TestEsDocToBatch(t *testing.T) { "startDate": "2019-10-30T12:34:00Z", }, }, + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "recordCount": 100, + "expectedRecordCount": 100, + "startDate": "2019-10-30T12:34:00Z", + }, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + if got := EsDocToBatch(tt.esDoc); !reflect.DeepEqual(got, tt.want) { + t.Errorf("EsDocToBatch() = %v, want %v", got, tt.want) + } + }) + } +} + +func TestNormalizeBatchRecordCountValues(t *testing.T) { + tests := []struct { + name string + inputBatch map[string]interface{} + expected map[string]interface{} + }{ + {"set-record-count-from-expected-record-count", + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "expectedRecordCount": 100, + "startDate": "2019-10-30T12:34:00Z", + }, + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "recordCount": 100, + "expectedRecordCount": 100, + "startDate": "2019-10-30T12:34:00Z", + }, + }, + 
{"set-expected-record-count-from-record-count", map[string]interface{}{ "id": "1", "name": "batch-2019-10-07", @@ -45,12 +97,66 @@ func TestEsDocToBatch(t *testing.T) { "recordCount": 100, "startDate": "2019-10-30T12:34:00Z", }, + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "recordCount": 100, + "expectedRecordCount": 100, + "startDate": "2019-10-30T12:34:00Z", + }, + }, + {"no-change-when-neither-set", + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "startDate": "2019-10-30T12:34:00Z", + }, + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "startDate": "2019-10-30T12:34:00Z", + }, + }, + {"set-expected-record-count-from-record-count-0", + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "recordCount": 0, + "startDate": "2019-10-30T12:34:00Z", + }, + map[string]interface{}{ + "id": "1", + "name": "batch-2019-10-07", + "topic": "ingest.1.fhir", + "dataType": "claims", + "integratorId": "dataIntegrator1", + "status": "started", + "recordCount": 0, + "expectedRecordCount": 0, + "startDate": "2019-10-30T12:34:00Z", + }, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - if got := EsDocToBatch(tt.esDoc); !reflect.DeepEqual(got, tt.want) { - t.Errorf("EsDocToBatch() = %v, want %v", got, tt.want) + if got := NormalizeBatchRecordCountValues(tt.inputBatch); !reflect.DeepEqual(got, tt.expected) { + t.Errorf("NormalizeBatchRecordCountValues() = %v, want %v", got, tt.expected) } }) } diff --git a/src/batches/create.go index 
e94c885..18ec57e 100644 --- a/src/batches/create.go +++ b/src/batches/create.go @@ -79,10 +79,11 @@ func Create( ) // parse the response - body, errRes := elastic.DecodeBody(indexRes, err, tenantId, logger) - if errRes != nil { - return errRes + body, elasticErr := elastic.DecodeBody(indexRes, err) + if elasticErr != nil { + return elasticErr.LogAndBuildApiResponse(logger, "Batch creation failed") } + batchId := body[esparam.EsDocId].(string) // add batchId to info and publish to the notification topic @@ -106,12 +107,17 @@ func buildBatchInfo(args map[string]interface{}, integrator string) map[string]i currentTime := time.Now().UTC() info := map[string]interface{}{ - param.Name: args[param.Name].(string), - param.IntegratorId: integrator, - param.Topic: args[param.Topic].(string), - param.DataType: args[param.DataType].(string), - param.Status: status.Started.String(), - param.StartDate: currentTime.Format(elastic.DateTimeFormat), + param.Name: args[param.Name].(string), + param.IntegratorId: integrator, + param.Topic: args[param.Topic].(string), + param.DataType: args[param.DataType].(string), + param.Status: status.Started.String(), + param.StartDate: currentTime.Format(elastic.DateTimeFormat), + param.InvalidThreshold: args[param.InvalidThreshold], + } + + if info[param.InvalidThreshold] == nil { + info[param.InvalidThreshold] = -1 } if val, ok := args[param.Metadata]; ok { diff --git a/src/batches/create_test.go b/src/batches/create_test.go index e4b1951..ec4d1f7 100644 --- a/src/batches/create_test.go +++ b/src/batches/create_test.go @@ -34,41 +34,68 @@ func TestCreate(t *testing.T) { topicBase := "batchTopic" inputTopic := topicBase + inputSuffix batchMetadata := map[string]interface{}{"operation": "update"} + batchInvalidThreshold := 10 validArgs := map[string]interface{}{ - path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/batches", tenantId), - param.Name: batchName, - param.Topic: inputTopic, - param.DataType: batchDataType, - param.Metadata: batchMetadata, 
+ path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/batches", tenantId), + param.Name: batchName, + param.Topic: inputTopic, + param.DataType: batchDataType, + param.Metadata: batchMetadata, + param.InvalidThreshold: batchInvalidThreshold, } validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: integratorId} validBatchMetadata := map[string]interface{}{ - param.BatchId: batchId, - param.Name: batchName, - param.IntegratorId: integratorId, - param.Topic: inputTopic, - param.DataType: batchDataType, - param.Status: status.Started.String(), - param.StartDate: "ignored", - param.Metadata: batchMetadata, + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: inputTopic, + param.DataType: batchDataType, + param.Status: status.Started.String(), + param.StartDate: "ignored", + param.Metadata: batchMetadata, + param.InvalidThreshold: batchInvalidThreshold, } elasticIndexRequestBody, err := json.Marshal(map[string]interface{}{ - param.Name: batchName, - param.IntegratorId: integratorId, - param.Topic: inputTopic, - param.DataType: batchDataType, - param.Status: status.Started.String(), - param.StartDate: test.DatePattern, - param.Metadata: batchMetadata, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: inputTopic, + param.DataType: batchDataType, + param.Status: status.Started.String(), + param.StartDate: test.DatePattern, + param.Metadata: batchMetadata, + param.InvalidThreshold: batchInvalidThreshold, }) if err != nil { t.Fatal("Unable to marshal expected elastic Index request body") } + elasticIndexRequestBodyDefaultThreshold, err := json.Marshal(map[string]interface{}{ + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: inputTopic, + param.DataType: batchDataType, + param.Status: status.Started.String(), + param.StartDate: test.DatePattern, + param.Metadata: batchMetadata, + param.InvalidThreshold: -1, + }) + + invalidThresholdBody := map[string]interface{}{ + 
param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: inputTopic, + param.DataType: batchDataType, + param.Status: status.Started.String(), + param.StartDate: "ignored", + param.Metadata: batchMetadata, + param.InvalidThreshold: -1, + } + badParamResponse := map[string]interface{}{"bad": "param"} elasticErrMsg := "elasticErrMsg" @@ -80,6 +107,7 @@ func TestCreate(t *testing.T) { transport *test.FakeTransport writerError error expected map[string]interface{} + kafkaValue map[string]interface{} }{ { name: "unauthorized", @@ -102,6 +130,7 @@ func TestCreate(t *testing.T) { transport: test.NewFakeTransport(t), validatorResponse: badParamResponse, expected: badParamResponse, + kafkaValue: validBatchMetadata, }, { name: "missing-path", @@ -115,6 +144,7 @@ func TestCreate(t *testing.T) { expected: response.Error( http.StatusBadRequest, "Required parameter '__ow_path' is missing"), + kafkaValue: validBatchMetadata, }, { name: "bad-response", @@ -129,8 +159,9 @@ func TestCreate(t *testing.T) { ), expected: response.Error( http.StatusInternalServerError, - fmt.Sprintf("Elastic client error: %s", elasticErrMsg), + fmt.Sprintf("Batch creation failed: elasticsearch client error: %s", elasticErrMsg), ), + kafkaValue: validBatchMetadata, }, { name: "writer-error", @@ -148,6 +179,7 @@ func TestCreate(t *testing.T) { ), writerError: errors.New("Unable to write to Kafka"), expected: response.Error(http.StatusInternalServerError, "Unable to write to Kafka"), + kafkaValue: validBatchMetadata, }, { name: "good-request", @@ -160,7 +192,28 @@ func TestCreate(t *testing.T) { ResponseBody: fmt.Sprintf(`{"%s": "%s"}`, esparam.EsDocId, batchId), }, ), - expected: response.Success(http.StatusCreated, map[string]interface{}{param.BatchId: batchId}), + expected: response.Success(http.StatusCreated, map[string]interface{}{param.BatchId: batchId}), + kafkaValue: validBatchMetadata, + }, + { + name: "missing-invalid-threshold", + args: 
map[string]interface{}{ + path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/batches", tenantId), + param.Name: batchName, + param.Topic: inputTopic, + param.DataType: batchDataType, + param.Metadata: batchMetadata, + }, + claims: validClaims, + transport: test.NewFakeTransport(t).AddCall( + fmt.Sprintf("/%s-batches/_doc", tenantId), + test.ElasticCall{ + RequestBody: string(elasticIndexRequestBodyDefaultThreshold), + ResponseBody: fmt.Sprintf(`{"%s": "%s"}`, esparam.EsDocId, batchId), + }, + ), + expected: response.Success(http.StatusCreated, map[string]interface{}{param.BatchId: batchId}), + kafkaValue: invalidThresholdBody, }, } @@ -184,7 +237,7 @@ func TestCreate(t *testing.T) { T: t, ExpectedTopic: topicBase + notificationSuffix, ExpectedKey: batchId, - ExpectedValue: validBatchMetadata, + ExpectedValue: tc.kafkaValue, Error: tc.writerError, } diff --git a/src/batches/fail.go b/src/batches/fail.go new file mode 100644 index 0000000..650cefb --- /dev/null +++ b/src/batches/fail.go @@ -0,0 +1,63 @@ +/* + * (C) Copyright IBM Corp. 
2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "errors" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "log" + "reflect" + "time" +) + +type Fail struct{} + +const FailAction string = "fail" + +func (Fail) GetAction() string { + return FailAction +} + +func (Fail) CheckAuth(claims auth.HriClaims) error { + // Only internal code can call fail + if !claims.HasScope(auth.HriInternal) { + return errors.New(fmt.Sprintf(auth.MsgInternalRoleRequired, "failed")) + } + return nil +} + +func (Fail) GetUpdateScript(params map[string]interface{}, validator param.Validator, _ auth.HriClaims, logger *log.Logger) (map[string]interface{}, map[string]interface{}) { + errResp := validator.Validate( + params, + // golang receives numeric JSON values as Float64 + param.Info{param.ActualRecordCount, reflect.Float64}, + param.Info{param.InvalidRecordCount, reflect.Float64}, + param.Info{param.FailureMessage, reflect.String}, + ) + if errResp != nil { + logger.Printf("Bad input params: %s", errResp) + return nil, errResp + } + actualRecordCount := int(params[param.ActualRecordCount].(float64)) + invalidRecordCount := int(params[param.InvalidRecordCount].(float64)) + failureMessage := params[param.FailureMessage].(string) + currentTime := time.Now().UTC() + + // Can't fail a batch if status is already 'terminated' or 'failed' + updateScript := fmt.Sprintf("if (ctx._source.status != '%s' && ctx._source.status != '%s') {ctx._source.status = '%s'; ctx._source.actualRecordCount = %d; ctx._source.invalidRecordCount = %d; ctx._source.failureMessage = '%s'; ctx._source.endDate = '%s';} else {ctx.op = 'none'}", + status.Terminated, status.Failed, status.Failed, actualRecordCount, invalidRecordCount, failureMessage, currentTime.Format(elastic.DateTimeFormat)) + + updateRequest := 
map[string]interface{}{ + "script": map[string]interface{}{ + "source": updateScript, + }, + } + return updateRequest, nil +} diff --git a/src/batches/fail_test.go b/src/batches/fail_test.go new file mode 100644 index 0000000..711a4a3 --- /dev/null +++ b/src/batches/fail_test.go @@ -0,0 +1,257 @@ +/* + * (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "encoding/json" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/path" + "github.com/Alvearie/hri-mgmt-api/common/response" + "github.com/Alvearie/hri-mgmt-api/common/test" + "log" + "net/http" + "os" + "reflect" + "testing" +) + +func TestFail_AuthCheck(t *testing.T) { + tests := []struct { + name string + claims auth.HriClaims + expectedErr string + }{ + { + name: "With internal role, should return nil", + claims: auth.HriClaims{Scope: auth.HriInternal}, + }, + { + name: "With internal & Consumer role, should return nil", + claims: auth.HriClaims{Scope: auth.HriInternal + " " + auth.HriConsumer}, + }, + { + name: "Without internal role, should return error", + claims: auth.HriClaims{Scope: auth.HriIntegrator}, + expectedErr: "Must have hri_internal role to mark a batch as failed", + }, + } + + fail := Fail{} + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + err := fail.CheckAuth(tt.claims) + if (err == nil && tt.expectedErr != "") || (err != nil && err.Error() != tt.expectedErr) { + t.Errorf("GetAuth() = '%v', expected '%v'", err, tt.expectedErr) + } + }) + } +} + +func TestFail_GetUpdateScript(t *testing.T) { + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + // Note that the following chars must be escaped because expectedScript is used as a regex pattern: ., ), ( + expectedRequest 
map[string]interface{} + expectedErr map[string]interface{} + }{ + { + name: "success", + params: map[string]interface{}{ + param.ActualRecordCount: float64(10), + param.InvalidRecordCount: float64(2), + param.FailureMessage: "Batch Failed", + }, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status != 'terminated' && ctx\._source\.status != 'failed'\) {ctx\._source\.status = 'failed'; ctx\._source\.actualRecordCount = 10; ctx\._source\.invalidRecordCount = 2; ctx\._source\.failureMessage = 'Batch Failed'; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}`, + }, + }, + }, + { + name: "Missing Actual Record Count param", + params: map[string]interface{}{ + param.InvalidRecordCount: float64(2), + param.FailureMessage: "Batch Failed", + }, + expectedErr: response.MissingParams(param.ActualRecordCount), + }, + { + name: "Missing Invalid Record Count param", + params: map[string]interface{}{ + param.ActualRecordCount: float64(10), + param.FailureMessage: "Batch Failed", + }, + expectedErr: response.MissingParams(param.InvalidRecordCount), + }, + { + name: "Missing Failure Message param", + params: map[string]interface{}{ + param.ActualRecordCount: float64(10), + param.InvalidRecordCount: float64(2), + }, + expectedErr: response.MissingParams(param.FailureMessage), + }, + } + + fail := Fail{} + logger := log.New(os.Stdout, fmt.Sprintf("batches/%s: ", fail.GetAction()), log.Llongfile) + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + request, errResp := fail.GetUpdateScript(tt.params, param.ParamValidator{}, tt.claims, logger) + if !reflect.DeepEqual(errResp, tt.expectedErr) { + t.Errorf("GetUpdateScript().errResp = '%v', expected '%v'", errResp, tt.expectedErr) + } else if tt.expectedRequest != nil { + if err := RequestCompareScriptTest(tt.expectedRequest, request); err != nil { + t.Errorf("GetUpdateScript().updateRequest = \n\t'%s' \nDoesn't match expected 
\n\t'%s'\n%v", request, tt.expectedRequest, err) + } + } + + }) + } + +} + +func TestUpdateStatus_Fail(t *testing.T) { + activationId := "activationId" + _ = os.Setenv(response.EnvOwActivationId, activationId) + + const ( + scriptFail string = `{"script":{"source":"if \(ctx\._source\.status != 'terminated' && ctx\._source\.status != 'failed'\) {ctx\._source\.status = 'failed'; ctx\._source\.actualRecordCount = 10; ctx\._source\.invalidRecordCount = 2; ctx\._source\.failureMessage = 'Batch Failed'; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}"}}` + "\n" + ) + + validClaims := auth.HriClaims{Scope: auth.HriInternal, Subject: "internalId"} + + failedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Failed.String(), + param.StartDate: batchStartDate, + param.RecordCount: batchExpectedRecordCount, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.ActualRecordCount: batchActualRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + param.InvalidRecordCount: batchInvalidRecordCount, + } + failedJSON, err := json.Marshal(failedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + terminatedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Terminated.String(), + param.StartDate: batchStartDate, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + } + terminatedJSON, err := json.Marshal(terminatedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + ft *test.FakeTransport + writerError error 
+ expectedNotification map[string]interface{} + expectedResponse map[string]interface{} + }{ + { + name: "simple fail", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/fail", + param.ActualRecordCount: batchActualRecordCount, + param.InvalidRecordCount: batchInvalidRecordCount, + param.FailureMessage: batchFailureMessage, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptFail, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "updated", + "get": { + "_source": %s + } + }`, failedJSON), + }, + ), + expectedNotification: failedBatch, + expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), + }, + { + name: "'fail' fails on terminated batch", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/fail", + param.ActualRecordCount: batchActualRecordCount, + param.InvalidRecordCount: batchInvalidRecordCount, + param.FailureMessage: batchFailureMessage, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptFail, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "noop", + "get": { + "_source": %s + } + }`, terminatedJSON), + }, + ), + expectedResponse: response.Error(http.StatusConflict, "The 'fail' endpoint failed, batch is in 'terminated' state"), + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + esClient, err := elastic.ClientFromTransport(tt.ft) + if err != nil { + t.Error(err) + } + writer := test.FakeWriter{ + T: t, + ExpectedTopic: InputTopicToNotificationTopic(batchTopic), + ExpectedKey: batchId, + ExpectedValue: 
tt.expectedNotification, + Error: tt.writerError, + } + + if result := UpdateStatus(tt.params, param.ParamValidator{}, tt.claims, Fail{}, esClient, writer); !reflect.DeepEqual(result, tt.expectedResponse) { + t.Errorf("UpdateStatus() = \n\t%v, expected: \n\t%v", result, tt.expectedResponse) + } + }) + } +} diff --git a/src/batches/get.go b/src/batches/get.go index bf99e11..339b4e4 100644 --- a/src/batches/get.go +++ b/src/batches/get.go @@ -87,6 +87,7 @@ func appendRange(params map[string]interface{}, paramName string, gteParam strin func Get(params map[string]interface{}, claims auth.HriClaims, client *elasticsearch.Client) map[string]interface{} { logger := log.New(os.Stdout, "batches/get: ", log.Llongfile) + // Data Integrators and Consumers can use this endpoint, so either scope allows access if !claims.HasScope(auth.HriConsumer) && !claims.HasScope(auth.HriIntegrator) { errMsg := auth.MsgAccessTokenMissingScopes logger.Println(errMsg) @@ -171,26 +172,9 @@ func Get(params map[string]interface{}, claims auth.HriClaims, client *elasticse client.Search.WithTrackTotalHits(true), ) - body, errResp := elastic.DecodeBody(res, err, tenantId, logger) - if errResp != nil { - // check for parse exceptions - if body != nil { - if body["error"].(map[string]interface{})["type"] == "search_phase_execution_exception" { - causes, ok := body["error"].(map[string]interface{})["root_cause"].([]interface{}) - if !ok { - logger.Println("unable to decode error.root_cause") - return errResp - } - for _, value := range causes { - entry := value.(map[string]interface{}) - if entry["type"].(string) == "parse_exception" { - return response.Error(http.StatusBadRequest, fmt.Sprintf("%v: %v", entry["type"], entry["reason"])) - } - } - } - } - - return errResp + body, elasticErr := elastic.DecodeBody(res, err) + if elasticErr != nil { + return elasticErr.LogAndBuildApiResponse(logger, "Could not retrieve batches") } hits := body["hits"].(map[string]interface{})["hits"].([]interface{}) diff 
--git a/src/batches/get_by_id.go b/src/batches/get_by_id.go index b64a5c0..2f30c09 100644 --- a/src/batches/get_by_id.go +++ b/src/batches/get_by_id.go @@ -19,6 +19,7 @@ import ( ) const msgMissingStatusElem = "Error: Elastic Search Result body does Not have the expected '_source' Element" +const msgDocNotFound string = "The document for tenantId: %s with document (batch) ID: %s was not found" func GetById(params map[string]interface{}, claims auth.HriClaims, client *elasticsearch.Client) map[string]interface{} { logger := log.New(os.Stdout, "batches/GetById: ", log.Llongfile) @@ -49,9 +50,16 @@ func GetById(params map[string]interface{}, claims auth.HriClaims, client *elast res, err := client.Get(index, batchId) - resultBody, errResp := elastic.DecodeBody(res, err, tenantId, logger) - if errResp != nil { - return errResp + resultBody, elasticErr := elastic.DecodeBody(res, err) + if elasticErr != nil { + if elasticErr.ErrorObj == nil && resultBody != nil && docNotFound(resultBody) { + msg := fmt.Sprintf(msgDocNotFound, tenantId, batchId) + logger.Println(msg) + return response.Error(http.StatusNotFound, msg) + } + + return elasticErr.LogAndBuildApiResponse(logger, + fmt.Sprintf("Could not retrieve batch with id: %s", batchId)) } errResponse := checkBatchAuthorization(claims, resultBody) @@ -62,6 +70,13 @@ func GetById(params map[string]interface{}, claims auth.HriClaims, client *elast return response.Success(http.StatusOK, EsDocToBatch(resultBody)) } +func docNotFound(resultBody map[string]interface{}) bool { + found, ok := resultBody["found"].(bool) + return ok && !found +} + +// Data Integrators and Consumers can call this endpoint, but the behavior is slightly different. Consumers can see +// all Batches, but Data Integrators are only allowed to see Batches they created. 
func checkBatchAuthorization(claims auth.HriClaims, resultBody map[string]interface{}) map[string]interface{} { if claims.HasScope(auth.HriConsumer) { //= Always Authorized return nil // return nil Error for Authorized diff --git a/src/batches/get_by_id_test.go b/src/batches/get_by_id_test.go index 70fda6d..7f6662c 100644 --- a/src/batches/get_by_id_test.go +++ b/src/batches/get_by_id_test.go @@ -6,7 +6,6 @@ package batches import ( - "errors" "github.com/Alvearie/hri-mgmt-api/common/auth" "github.com/Alvearie/hri-mgmt-api/common/elastic" "github.com/Alvearie/hri-mgmt-api/common/path" @@ -65,7 +64,7 @@ func TestGetById(t *testing.T) { }`, }, ), - expected: response.Success(http.StatusOK, map[string]interface{}{"id": "batch7j3", "name": "monkeyBatch", "status": "started", "startDate": "2019-12-13", "dataType": "claims", "topic": "ingest-test", "recordCount": float64(1)}), + expected: response.Success(http.StatusOK, map[string]interface{}{"id": "batch7j3", "name": "monkeyBatch", "status": "started", "startDate": "2019-12-13", "dataType": "claims", "topic": "ingest-test", "recordCount": float64(1), "expectedRecordCount": float64(1)}), }, { name: "batch not found", @@ -164,65 +163,7 @@ func TestGetById(t *testing.T) { ), expected: response.Error( http.StatusNotFound, - "index_not_found_exception: no such index"), - }, - { - name: "bad-ES-response-body-EOF", - args: validPathArg, - claims: auth.HriClaims{Scope: auth.HriConsumer}, - transport: test.NewFakeTransport(t).AddCall( - "/tenant12x-batches/_doc/batch7j3", - test.ElasticCall{ - ResponseStatusCode: http.StatusNotFound, - ResponseBody: ``, - }, - ), - expected: response.Error( - http.StatusInternalServerError, - "Error parsing the Elastic search response body: EOF"), - }, - { - name: "body decode error on ES OK Response", - args: validPathArg, - claims: auth.HriClaims{Scope: auth.HriConsumer}, - transport: test.NewFakeTransport(t).AddCall( - "/tenant12x-batches/_doc/batch7j3", - test.ElasticCall{ - ResponseBody: 
`{bad json message : "`, - }, - ), - expected: response.Error( - http.StatusInternalServerError, - "Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"), - }, - { - name: "body decode error on ES Response: 400 Bad Request", - args: validPathArg, - claims: auth.HriClaims{Scope: auth.HriConsumer}, - transport: test.NewFakeTransport(t).AddCall( - "/tenant12x-batches/_doc/batch7j3", - test.ElasticCall{ - ResponseStatusCode: http.StatusBadRequest, - ResponseBody: `{bad json message : "`, - }, - ), - expected: response.Error( - http.StatusInternalServerError, - "Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"), - }, - { - name: "client error", - args: validPathArg, - claims: auth.HriClaims{Scope: auth.HriConsumer}, - transport: test.NewFakeTransport(t).AddCall( - "/tenant12x-batches/_doc/batch7j3", - test.ElasticCall{ - ResponseErr: errors.New("some client error"), - }, - ), - expected: response.Error( - http.StatusInternalServerError, - "Elastic client error: some client error"), + "Could not retrieve batch with id: batch7j3: index_not_found_exception: no such index"), }, { name: "integrator role integrator id matches sub claim", @@ -252,7 +193,7 @@ func TestGetById(t *testing.T) { }`, }, ), - expected: response.Success(http.StatusOK, map[string]interface{}{"id": "batch7j3", "integratorId": "dataIntegrator1", "name": "monkeyBatch", "status": "started", "startDate": "2019-12-13", "dataType": "claims", "topic": "ingest-test", "recordCount": float64(1)}), + expected: response.Success(http.StatusOK, map[string]interface{}{"id": "batch7j3", "integratorId": "dataIntegrator1", "name": "monkeyBatch", "status": "started", "startDate": "2019-12-13", "dataType": "claims", "topic": "ingest-test", "recordCount": float64(1), "expectedRecordCount": float64(1)}), }, { name: "integrator role integrator id Does NOT Match sub claim", diff --git 
a/src/batches/get_test.go b/src/batches/get_test.go index 47defda..a42ecab 100644 --- a/src/batches/get_test.go +++ b/src/batches/get_test.go @@ -128,88 +128,6 @@ func TestGet(t *testing.T) { test.NewFakeTransport(t), response.Error(http.StatusBadRequest, "Error parsing 'from' parameter: strconv.Atoi: parsing \"b2\": invalid syntax"), }, - {"bad gteDate value", - map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches", "gteDate": "2019-aaaef-01"}, - auth.HriClaims{Scope: auth.HriConsumer}, - test.NewFakeTransport(t).AddCall( - "/1234-batches/_search", - test.ElasticCall{ - RequestQuery: "from=0&size=10&track_total_hits=true", - // Note that ] and [ must be escaped because RequestBody is used as a regex pattern - RequestBody: `{"query":{"bool":{"must":\[{"range":{"startDate":{"gte":"2019-aaaef-01"}}}\]}}}` + "\n", - ResponseStatusCode: http.StatusBadRequest, - ResponseBody: ` - { - "error" : { - "root_cause" : [ - { - "type" : "parse_exception", - "reason" : "failed to parse date field [2019-aaaef-01] with format [strict_date_optional_time||epoch_millis]" - } - ], - "type" : "search_phase_execution_exception", - "reason" : "all shards failed", - "phase" : "query", - "grouped" : true, - "failed_shards" : [ - { - "shard" : 0, - "index" : "test-batches", - "node" : "PfG7NJ8qSGGnNre4aczgPQ", - "reason" : { - "type" : "parse_exception", - "reason" : "failed to parse date field [2019-aaaef-01] with format [strict_date_optional_time||epoch_millis]", - "caused_by" : { - "type" : "illegal_argument_exception", - "reason" : "Unrecognized chars at the end of [2019-aaaef-01]: [-aaaef-01]" - } - } - } - ] - }, - "status" : 400 - }`, - }, - ), - response.Error(http.StatusBadRequest, "parse_exception: failed to parse date field [2019-aaaef-01] with format [strict_date_optional_time||epoch_millis]"), - }, { - "invalid error Json in Response body", - map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, - auth.HriClaims{Scope: auth.HriConsumer}, - 
test.NewFakeTransport(t).AddCall( - "/1234-batches/_search", - test.ElasticCall{ - RequestQuery: "from=0&size=10&track_total_hits=true", - ResponseStatusCode: http.StatusBadRequest, - ResponseBody: ` - { - "error" : { - "type" : "search_phase_execution_exception", - "reason" : "all shards failed", - "phase" : "query", - "grouped" : true, - "failed_shards" : [ - { - "shard" : 0, - "index" : "test-monkee-batches", - "node" : "XX-ZZ-top", - "reason" : { - "type" : "parse_exception", - "reason" : "failed to parse date field [2019-aaaef-01] with format [strict_date_optional_time||epoch_millis]", - "caused_by" : { - "type" : "illegal_argument_exception", - "reason" : "Unrecognized chars at the end of [2019-aaaef-01]: [-aaaef-01]" - } - } - } - ] - }, - "status" : 400 - }`, - }, - ), - response.Error(http.StatusBadRequest, "search_phase_execution_exception: all shards failed"), - }, {"bad name param_prohibited character", map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches", "name": "{[]//zzx[]}"}, auth.HriClaims{Scope: auth.HriConsumer}, @@ -228,7 +146,6 @@ func TestGet(t *testing.T) { test.NewFakeTransport(t), response.Error(http.StatusBadRequest, "query parameters may not contain these characters: \"[]{}"), }, - {"client error", map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, auth.HriClaims{Scope: auth.HriConsumer}, @@ -239,85 +156,8 @@ func TestGet(t *testing.T) { ResponseErr: errors.New("client error"), }, ), - response.Error(http.StatusInternalServerError, "Elastic client error: client error"), - }, - {"response error", - map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, - auth.HriClaims{Scope: auth.HriConsumer}, - test.NewFakeTransport(t).AddCall( - "/1234-batches/_search", - test.ElasticCall{ - RequestQuery: "from=0&size=10&track_total_hits=true", - ResponseStatusCode: http.StatusBadRequest, - ResponseBody: ` - { - "error": { - "type": "bad query", - "reason": "missing closing '}'" - } - }`, - }, - ), - 
response.Error(http.StatusBadRequest, "bad query: missing closing '}'"), - }, - {"body decode error on OK", - map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, - auth.HriClaims{Scope: auth.HriConsumer}, - test.NewFakeTransport(t).AddCall( - "/1234-batches/_search", - test.ElasticCall{ - RequestQuery: "from=0&size=10&track_total_hits=true", - ResponseBody: `{bad json message " "`, - }, - ), - response.Error(http.StatusInternalServerError, "Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"), - }, - {"body decode error on 400", - map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, - auth.HriClaims{Scope: auth.HriConsumer}, - test.NewFakeTransport(t).AddCall( - "/1234-batches/_search", - test.ElasticCall{ - RequestQuery: "from=0&size=10&track_total_hits=true", - ResponseStatusCode: http.StatusBadRequest, - ResponseBody: `{bad json message " "`, - }, - ), - response.Error(http.StatusInternalServerError, "Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"), - }, - {"bad tenantId", - map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, - auth.HriClaims{Scope: auth.HriConsumer}, - test.NewFakeTransport(t).AddCall( - "/1234-batches/_search", - test.ElasticCall{ - RequestQuery: "from=0&size=10&track_total_hits=true", - ResponseStatusCode: http.StatusNotFound, - ResponseBody: ` - { - "error": { - "root_cause" : [ - { - "type" : "index_not_found_exception", - "reason" : "no such index", - "resource.type" : "index_or_alias", - "resource.id" : "1234-batches", - "index_uuid" : "_na_", - "index" : "1234-batches" - } - ], - "type" : "index_not_found_exception", - "reason" : "no such index", - "resource.type" : "index_or_alias", - "resource.id" : "1234-batches", - "index_uuid" : "_na_", - "index" : "1234-batches" - }, - "status" : 404 - }`, - }, - ), - response.Error(http.StatusNotFound, 
"index_not_found_exception: no such index"), + response.Error(http.StatusInternalServerError, + "Could not retrieve batches: elasticsearch client error: client error"), }, {"Missing scopes", map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches"}, diff --git a/src/batches/helper_test.go b/src/batches/helper_test.go new file mode 100644 index 0000000..deaee88 --- /dev/null +++ b/src/batches/helper_test.go @@ -0,0 +1,40 @@ +/* + * (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "fmt" + "reflect" + "regexp" +) + +func RequestCompareScriptTest(expectedUpdateRequest map[string]interface{}, actualUpdateRequest map[string]interface{}) error { + + matches, err := regexp.MatchString(expectedUpdateRequest["script"].(map[string]interface{})["source"].(string), actualUpdateRequest["script"].(map[string]interface{})["source"].(string)) + if err != nil { + return err + } + if !matches { + return fmt.Errorf("expected script source pattern does not match actual script source") + } + + return nil //update requests are equivalent + +} + +func RequestCompareWithMetadataTest(expectedUpdateRequest map[string]interface{}, actualUpdateRequest map[string]interface{}) error { + + if !reflect.DeepEqual(expectedUpdateRequest["script"].(map[string]interface{})["lang"].(string), actualUpdateRequest["script"].(map[string]interface{})["lang"].(string)) { + return fmt.Errorf("expected lang field of script does not match actual lang field of script") + } + + if !reflect.DeepEqual(expectedUpdateRequest["script"].(map[string]interface{})["params"].(map[string]interface{}), actualUpdateRequest["script"].(map[string]interface{})["params"].(map[string]interface{})) { + return fmt.Errorf("expected params field of script does not match actual params field of script") + } + + return nil //update requests are equivalent + +} diff --git a/src/batches/processing_complete.go b/src/batches/processing_complete.go new file mode 100644 index 0000000..49d06e9 --- /dev/null +++ b/src/batches/processing_complete.go @@ -0,0 +1,60 @@ +/* + 
* (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "errors" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "log" + "reflect" + "time" +) + +type ProcessingComplete struct{} + +const ProcessingCompleteAction string = "processingComplete" + +func (ProcessingComplete) GetAction() string { + return ProcessingCompleteAction +} + +func (ProcessingComplete) CheckAuth(claims auth.HriClaims) error { + // Only internal code can call processing_complete + if !claims.HasScope(auth.HriInternal) { + return errors.New(fmt.Sprintf(auth.MsgInternalRoleRequired, "processing complete")) + } + return nil +} + +func (ProcessingComplete) GetUpdateScript(params map[string]interface{}, validator param.Validator, _ auth.HriClaims, logger *log.Logger) (map[string]interface{}, map[string]interface{}) { + errResp := validator.Validate( + params, + // golang receives numeric JSON values as Float64 + param.Info{param.ActualRecordCount, reflect.Float64}, + param.Info{param.InvalidRecordCount, reflect.Float64}, + ) + if errResp != nil { + logger.Printf("Bad input params: %s", errResp) + return nil, errResp + } + actualRecordCount := int(params[param.ActualRecordCount].(float64)) + invalidRecordCount := int(params[param.InvalidRecordCount].(float64)) + currentTime := time.Now().UTC() + + updateScript := fmt.Sprintf("if (ctx._source.status == '%s') {ctx._source.status = '%s'; ctx._source.actualRecordCount = %d; ctx._source.invalidRecordCount = %d; ctx._source.endDate = '%s';} else {ctx.op = 'none'}", + status.SendCompleted, status.Completed, actualRecordCount, invalidRecordCount, currentTime.Format(elastic.DateTimeFormat)) + + updateRequest := map[string]interface{}{ + "script": map[string]interface{}{ + "source": updateScript, + }, + } + return updateRequest, nil +} diff 
--git a/src/batches/processing_complete_test.go b/src/batches/processing_complete_test.go new file mode 100644 index 0000000..816bfc6 --- /dev/null +++ b/src/batches/processing_complete_test.go @@ -0,0 +1,246 @@ +/* + * (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "encoding/json" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/path" + "github.com/Alvearie/hri-mgmt-api/common/response" + "github.com/Alvearie/hri-mgmt-api/common/test" + "log" + "net/http" + "os" + "reflect" + "testing" +) + +func TestProcessingComplete_AuthCheck(t *testing.T) { + tests := []struct { + name string + claims auth.HriClaims + expectedErr string + }{ + { + name: "With internal role, should return nil", + claims: auth.HriClaims{Scope: auth.HriInternal}, + }, + { + name: "With internal & Consumer role, should return nil", + claims: auth.HriClaims{Scope: auth.HriInternal + " " + auth.HriConsumer}, + }, + { + name: "Without internal role, should return error", + claims: auth.HriClaims{Scope: auth.HriIntegrator}, + expectedErr: "Must have hri_internal role to mark a batch as processing complete", + }, + } + + processingComplete := ProcessingComplete{} + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + err := processingComplete.CheckAuth(tt.claims) + if (err == nil && tt.expectedErr != "") || (err != nil && err.Error() != tt.expectedErr) { + t.Errorf("GetAuth() = '%v', expected '%v'", err, tt.expectedErr) + } + }) + } +} + +func TestProcessingComplete_GetUpdateScript(t *testing.T) { + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + // Note that the following chars must be escaped because expectedScript is used as a regex pattern: ., ), ( + expectedRequest 
map[string]interface{} + expectedErr map[string]interface{} + }{ + { + name: "success", + params: map[string]interface{}{ + param.ActualRecordCount: float64(10), + param.InvalidRecordCount: float64(2), + }, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'sendCompleted'\) {ctx\._source\.status = 'completed'; ctx\._source\.actualRecordCount = 10; ctx\._source\.invalidRecordCount = 2; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}`, + }, + }, + }, + { + name: "Missing Actual Record Count param", + params: map[string]interface{}{ + param.InvalidRecordCount: float64(2), + }, + expectedErr: response.MissingParams(param.ActualRecordCount), + }, + { + name: "Missing Invalid Record Count param", + params: map[string]interface{}{ + param.ActualRecordCount: float64(10), + }, + expectedErr: response.MissingParams(param.InvalidRecordCount), + }, + } + + processingComplete := ProcessingComplete{} + logger := log.New(os.Stdout, fmt.Sprintf("batches/%s: ", processingComplete.GetAction()), log.Llongfile) + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + request, errResp := processingComplete.GetUpdateScript(tt.params, param.ParamValidator{}, tt.claims, logger) + if !reflect.DeepEqual(errResp, tt.expectedErr) { + t.Errorf("GetUpdateScript().errResp = '%v', expected '%v'", errResp, tt.expectedErr) + } else if tt.expectedRequest != nil { + if err := RequestCompareScriptTest(tt.expectedRequest, request); err != nil { + t.Errorf("GetUpdateScript().updateRequest = \n\t'%s' \nDoesn't match expected \n\t'%s'\n%v", request, tt.expectedRequest, err) + } + } + }) + } + +} + +func TestUpdateStatus_ProcessingComplete(t *testing.T) { + activationId := "activationId" + _ = os.Setenv(response.EnvOwActivationId, activationId) + + const ( + scriptProcessingComplete string = `{"script":{"source":"if \(ctx\._source\.status == 'sendCompleted'\) {ctx\._source\.status = 
'completed'; ctx\._source\.actualRecordCount = 10; ctx\._source\.invalidRecordCount = 2; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}"}}` + "\n" + ) + + validClaims := auth.HriClaims{Scope: auth.HriInternal, Subject: "internalId"} + + completedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Completed.String(), + param.StartDate: batchStartDate, + param.RecordCount: batchExpectedRecordCount, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.ActualRecordCount: batchActualRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + param.InvalidRecordCount: batchInvalidRecordCount, + } + completedJSON, err := json.Marshal(completedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + failedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Failed.String(), + param.StartDate: batchStartDate, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.ActualRecordCount: batchActualRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + param.InvalidRecordCount: batchInvalidRecordCount, + param.FailureMessage: batchFailureMessage, + } + failedJSON, err := json.Marshal(failedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + ft *test.FakeTransport + writerError error + expectedNotification map[string]interface{} + expectedResponse map[string]interface{} + }{ + { + name: "simple processingComplete", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/processingComplete", + 
param.ActualRecordCount: batchActualRecordCount, + param.InvalidRecordCount: batchInvalidRecordCount, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptProcessingComplete, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "updated", + "get": { + "_source": %s + } + }`, completedJSON), + }, + ), + expectedNotification: completedBatch, + expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), + }, + { + name: "processingComplete fails on failed batch", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/processingComplete", + param.ActualRecordCount: batchActualRecordCount, + param.InvalidRecordCount: batchInvalidRecordCount, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptProcessingComplete, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "noop", + "get": { + "_source": %s + } + }`, failedJSON), + }, + ), + expectedResponse: response.Error(http.StatusConflict, "The 'processingComplete' endpoint failed, batch is in 'failed' state"), + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + esClient, err := elastic.ClientFromTransport(tt.ft) + if err != nil { + t.Error(err) + } + writer := test.FakeWriter{ + T: t, + ExpectedTopic: InputTopicToNotificationTopic(batchTopic), + ExpectedKey: batchId, + ExpectedValue: tt.expectedNotification, + Error: tt.writerError, + } + + if result := UpdateStatus(tt.params, param.ParamValidator{}, tt.claims, ProcessingComplete{}, esClient, writer); !reflect.DeepEqual(result, tt.expectedResponse) { + t.Errorf("UpdateStatus() = \n\t%v, 
expected: \n\t%v", result, tt.expectedResponse) + } + }) + } +} diff --git a/src/batches/send_complete.go b/src/batches/send_complete.go new file mode 100644 index 0000000..c001e58 --- /dev/null +++ b/src/batches/send_complete.go @@ -0,0 +1,118 @@ +/* + * (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "errors" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/response" + "log" + "reflect" + "time" +) + +type SendComplete struct{} + +func (SendComplete) GetAction() string { + return "sendComplete" +} + +func (SendComplete) CheckAuth(claims auth.HriClaims) error { + // Only Integrators can call sendComplete + if !claims.HasScope(auth.HriIntegrator) { + return errors.New(fmt.Sprintf(auth.MsgIntegratorRoleRequired, "update")) + } + return nil +} + +func (SendComplete) GetUpdateScript(params map[string]interface{}, validator param.Validator, claims auth.HriClaims, logger *log.Logger) (map[string]interface{}, map[string]interface{}) { + errResp := validator.Validate( + params, + // golang receives numeric JSON values as Float64 + param.Info{param.Validation, reflect.Bool}, + ) + if errResp != nil { + logger.Printf("Bad input params: %s", errResp) + return nil, errResp + } + + errResp = validator.ValidateOptional( + params, + param.Info{param.ExpectedRecordCount, reflect.Float64}, + param.Info{param.RecordCount, reflect.Float64}, + param.Info{param.Metadata, reflect.Map}, + ) + if errResp != nil { + logger.Printf("Bad input optional params: %s", errResp) + return nil, errResp + } + + var expectedRecordCount int + if _, present := params[param.ExpectedRecordCount]; present { + expectedRecordCount = int(params[param.ExpectedRecordCount].(float64)) + } else if _, present := params[param.RecordCount]; present { + 
expectedRecordCount = int(params[param.RecordCount].(float64)) + } else { + return nil, response.MissingParams(param.ExpectedRecordCount) + } + validation := params[param.Validation].(bool) + + metadata := params[param.Metadata] + + var updateScript string + if validation { + // When validation is enabled + // - change the status to 'sendCompleted' + // - set the record count + + if metadata == nil { + updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.expectedRecordCount = %d;} else {ctx.op = 'none'}", + status.Started, claims.Subject, status.SendCompleted, expectedRecordCount) + } else { + updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.expectedRecordCount = %d; ctx._source.metadata = params.metadata;} else {ctx.op = 'none'}", + status.Started, claims.Subject, status.SendCompleted, expectedRecordCount) + } + + } else { + // When validation is not enabled + // - change the status to 'completed' + // - set the record count + // - set the end date + currentTime := time.Now().UTC() + + if metadata == nil { + updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.expectedRecordCount = %d; ctx._source.endDate = '%s';} else {ctx.op = 'none'}", + status.Started, claims.Subject, status.Completed, expectedRecordCount, currentTime.Format(elastic.DateTimeFormat)) + } else { + updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.expectedRecordCount = %d; ctx._source.endDate = '%s'; ctx._source.metadata = params.metadata;} else {ctx.op = 'none'}", + status.Started, claims.Subject, status.Completed, expectedRecordCount, currentTime.Format(elastic.DateTimeFormat)) + } + } + + var updateRequest = map[string]interface{}{} + + if metadata == nil { + 
updateRequest = map[string]interface{}{ + "script": map[string]interface{}{ + "source": updateScript, + }, + } + } else { + updateRequest = map[string]interface{}{ + "script": map[string]interface{}{ + "source": updateScript, + "lang": "painless", + "params": map[string]interface{}{"metadata": metadata}, + }, + } + } + + return updateRequest, nil +} diff --git a/src/batches/send_complete_test.go b/src/batches/send_complete_test.go new file mode 100644 index 0000000..6336f0f --- /dev/null +++ b/src/batches/send_complete_test.go @@ -0,0 +1,389 @@ +/* + * (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "encoding/json" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/path" + "github.com/Alvearie/hri-mgmt-api/common/response" + "github.com/Alvearie/hri-mgmt-api/common/test" + "log" + "net/http" + "os" + "reflect" + "testing" +) + +func TestSendComplete_AuthCheck(t *testing.T) { + tests := []struct { + name string + claims auth.HriClaims + expectedErr string + }{ + { + name: "With DI role, should return nil", + claims: auth.HriClaims{Scope: auth.HriIntegrator}, + }, + { + name: "With DI & Consumer role, should return nil", + claims: auth.HriClaims{Scope: auth.HriIntegrator + " " + auth.HriConsumer}, + }, + { + name: "Without DI role, should return error", + claims: auth.HriClaims{}, + expectedErr: "Must have hri_data_integrator role to update a batch", + }, + } + + sendComplete := SendComplete{} + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + err := sendComplete.CheckAuth(tt.claims) + if (err == nil && tt.expectedErr != "") || (err != nil && err.Error() != tt.expectedErr) { + t.Errorf("GetAuth() = '%v', expected '%v'", err, tt.expectedErr) + } + }) + } +} + +func 
TestSendComplete_GetUpdateScript(t *testing.T) { + + validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: "integratorId"} + + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + // Note that the following chars must be escaped because expectedScript is used as a regex pattern: ., ), (, [, ] + expectedRequest map[string]interface{} + expectedErr map[string]interface{} + metadata bool + }{ + { + name: "no metadata with validation", + params: map[string]interface{}{ + param.Validation: true, + param.ExpectedRecordCount: float64(200), + }, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'sendCompleted'; ctx\._source\.expectedRecordCount = 200;} else {ctx\.op = 'none'}`, + }, + }, + metadata: false, + }, + { + name: "no metadata without validation", + params: map[string]interface{}{ + param.Validation: false, + param.ExpectedRecordCount: float64(200), + }, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'completed'; ctx\._source\.expectedRecordCount = 200; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}`, + }, + }, + metadata: false, + }, + { + name: "metadata with validation", + params: map[string]interface{}{ + param.Validation: true, + param.ExpectedRecordCount: float64(200), + param.Metadata: map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": -5}, + }, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 
'sendCompleted'; ctx\._source\.expectedRecordCount = 200; ctx\._source\.metadata = params\.metadata;} else {ctx\.op = 'none'}`, + "lang": "painless", + "params": map[string]interface{}{"metadata": map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": -5}}, + }, + }, + metadata: true, + }, + { + name: "metadata without validation", + params: map[string]interface{}{ + param.Validation: false, + param.ExpectedRecordCount: float64(200), + param.Metadata: map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": 3}, + }, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'completed'; ctx\._source\.expectedRecordCount = 200; ctx\._source\.endDate = '` + test.DatePattern + `'; ctx\._source\.metadata = params\.metadata;} else {ctx\.op = 'none'}`, + "lang": "painless", + "params": map[string]interface{}{"metadata": map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": 3}}, + }, + }, + metadata: true, + }, + { + name: "with deprecated recordCount field", + params: map[string]interface{}{ + param.Validation: true, + param.RecordCount: float64(200), + }, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'sendCompleted'; ctx\._source\.expectedRecordCount = 200;} else {ctx\.op = 'none'}`, + }, + }, + metadata: false, + }, + { + name: "Missing Validation param", + params: map[string]interface{}{ + param.ExpectedRecordCount: float64(200), + }, + claims: validClaims, + expectedErr: response.MissingParams(param.Validation), + }, + { + name: "Missing Record Count param", + params: map[string]interface{}{ + 
param.Validation: false, + }, + claims: validClaims, + expectedErr: response.MissingParams(param.ExpectedRecordCount), + }, + { + name: "Bad Metadata type", + params: map[string]interface{}{ + param.Validation: false, + param.ExpectedRecordCount: float64(200), + param.Metadata: "nil", + }, + claims: validClaims, + expectedErr: response.InvalidParams("metadata must be a map, got string instead."), + }, + { + name: "Missing claim.Subject", + params: map[string]interface{}{ + param.Validation: true, + param.ExpectedRecordCount: float64(200), + }, + claims: auth.HriClaims{}, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == ''\) {ctx\._source\.status = 'sendCompleted'; ctx\._source\.expectedRecordCount = 200;} else {ctx\.op = 'none'}`, + }, + }, + metadata: false, + }, + } + + sendComplete := SendComplete{} + logger := log.New(os.Stdout, fmt.Sprintf("batches/%s: ", sendComplete.GetAction()), log.Llongfile) + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + updateRequest, errResp := sendComplete.GetUpdateScript(tt.params, param.ParamValidator{}, tt.claims, logger) + if !reflect.DeepEqual(errResp, tt.expectedErr) { + t.Errorf("GetUpdateScript().errResp = '%v', expected '%v'", errResp, tt.expectedErr) + } else if tt.expectedRequest != nil { + if err := RequestCompareScriptTest(tt.expectedRequest, updateRequest); err != nil { + t.Errorf("GetUpdateScript().updateRequest = \n\t'%s' \nDoesn't match expected \n\t'%s'\n%v", updateRequest, tt.expectedRequest, err) + } else if tt.metadata { + if err := RequestCompareWithMetadataTest(tt.expectedRequest, updateRequest); err != nil { + t.Errorf("GetUpdateScript().updateRequest = \n\t'%s' \nDoesn't match expected \n\t'%s'\n%v", updateRequest, tt.expectedRequest, err) + } + } + } + + }) + } + +} + +func TestUpdateStatus_SendComplete(t *testing.T) { + activationId := "activationId" + _ = 
os.Setenv(response.EnvOwActivationId, activationId) + + const ( + scriptSendComplete string = `{"script":{"source":"if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'sendCompleted'; ctx\._source\.expectedRecordCount = 10;} else {ctx\.op = 'none'}"}}` + "\n" + scriptSendCompleteWithMetadata string = `{"script":{"lang":"painless","params":{"metadata":{"compression":"gzip","userMetaField1":"metadata","userMetaField2":-5}},"source":"if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'sendCompleted'; ctx\._source\.expectedRecordCount = 10; ctx\._source\.metadata = params\.metadata;} else {ctx\.op = 'none'}"}}` + "\n" + ) + + validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: "integratorId"} + + sendCompletedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.SendCompleted.String(), + param.StartDate: batchStartDate, + param.RecordCount: batchExpectedRecordCount, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + } + sendCompletedJSON, err := json.Marshal(sendCompletedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + sendCompletedBatchWithMetadata := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.SendCompleted.String(), + param.StartDate: batchStartDate, + param.RecordCount: batchExpectedRecordCount, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + } + + sendCompletedBatchWithMetadataJSON, err := json.Marshal(sendCompletedBatchWithMetadata) + if err != nil { + t.Errorf("Unable to 
create batch JSON string: %s", err.Error()) + } + + terminatedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Terminated.String(), + param.StartDate: batchStartDate, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + param.Metadata: map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": -5}, + } + terminatedJSON, err := json.Marshal(terminatedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + ft *test.FakeTransport + writerError error + expectedNotification map[string]interface{} + expectedResponse map[string]interface{} + }{ + { + name: "simple sendComplete, with validation", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", + param.Validation: true, + param.ExpectedRecordCount: batchExpectedRecordCount, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptSendComplete, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "updated", + "get": { + "_source": %s + } + }`, sendCompletedJSON), + }, + ), + expectedNotification: sendCompletedBatch, + expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), + }, + { + name: "simple sendComplete, with validation and metadata", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", + param.Validation: true, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.Metadata: 
map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": -5}, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptSendCompleteWithMetadata, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "updated", + "get": { + "_source": %s + } + }`, sendCompletedBatchWithMetadataJSON), + }, + ), + expectedNotification: sendCompletedBatchWithMetadata, + expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), + }, + { + name: "sendComplete fails on terminated batch", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", + param.Validation: true, + param.ExpectedRecordCount: batchExpectedRecordCount, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptSendComplete, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "noop", + "get": { + "_source": %s + } + }`, terminatedJSON), + }, + ), + expectedResponse: response.Error(http.StatusConflict, "The 'sendComplete' endpoint failed, batch is in 'terminated' state"), + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + esClient, err := elastic.ClientFromTransport(tt.ft) + if err != nil { + t.Error(err) + } + writer := test.FakeWriter{ + T: t, + ExpectedTopic: InputTopicToNotificationTopic(batchTopic), + ExpectedKey: batchId, + ExpectedValue: tt.expectedNotification, + Error: tt.writerError, + } + + if result := UpdateStatus(tt.params, param.ParamValidator{}, tt.claims, SendComplete{}, esClient, writer); !reflect.DeepEqual(result, tt.expectedResponse) { + t.Errorf("UpdateStatus() = 
\n\t%v,\nexpected: \n\t%v", result, tt.expectedResponse) + } + }) + } +} diff --git a/src/batches/status/status.go b/src/batches/status/status.go index 59cbfbc..ae871d6 100644 --- a/src/batches/status/status.go +++ b/src/batches/status/status.go @@ -10,11 +10,12 @@ type BatchStatus int const ( Unknown BatchStatus = iota Started + SendCompleted Completed Failed Terminated ) func (s BatchStatus) String() string { - return [...]string{"unknown", "started", "completed", "failed", "terminated"}[s] + return [...]string{"unknown", "started", "sendCompleted", "completed", "failed", "terminated"}[s] } diff --git a/src/batches/terminate.go b/src/batches/terminate.go new file mode 100644 index 0000000..6fd7a9b --- /dev/null +++ b/src/batches/terminate.go @@ -0,0 +1,70 @@ +/* + * (C) Copyright IBM Corp. 2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "errors" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "log" + "reflect" + "time" +) + +type Terminate struct{} + +func (Terminate) GetAction() string { + return "terminate" +} + +func (Terminate) CheckAuth(claims auth.HriClaims) error { + // Only Integrators can call terminate + if !claims.HasScope(auth.HriIntegrator) { + return errors.New(fmt.Sprintf(auth.MsgIntegratorRoleRequired, "update")) + } + return nil +} + +func (Terminate) GetUpdateScript(params map[string]interface{}, validator param.Validator, claims auth.HriClaims, logger *log.Logger) (map[string]interface{}, map[string]interface{}) { + errResp := validator.ValidateOptional( + params, + param.Info{param.Metadata, reflect.Map}, + ) + if errResp != nil { + logger.Printf("Bad input optional params: %s", errResp) + return nil, errResp + } + metadata := params[param.Metadata] + + currentTime := time.Now().UTC() + + if metadata == nil { + updateScript := fmt.Sprintf("if 
(ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.endDate = '%s';} else {ctx.op = 'none'}", + status.Started, claims.Subject, status.Terminated, currentTime.Format(elastic.DateTimeFormat)) + + updateRequest := map[string]interface{}{ + "script": map[string]interface{}{ + "source": updateScript, + }, + } + return updateRequest, nil + } else { + updateScript := fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.endDate = '%s'; ctx._source.metadata = params.metadata;} else {ctx.op = 'none'}", + status.Started, claims.Subject, status.Terminated, currentTime.Format(elastic.DateTimeFormat)) + + updateRequest := map[string]interface{}{ + "script": map[string]interface{}{ + "source": updateScript, + "lang": "painless", + "params": map[string]interface{}{"metadata": metadata}, + }, + } + return updateRequest, nil + } +} diff --git a/src/batches/terminate_test.go b/src/batches/terminate_test.go new file mode 100644 index 0000000..bda4175 --- /dev/null +++ b/src/batches/terminate_test.go @@ -0,0 +1,244 @@ +/* + * (C) Copyright IBM Corp. 
2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ +package batches + +import ( + "encoding/json" + "fmt" + "github.com/Alvearie/hri-mgmt-api/batches/status" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/path" + "github.com/Alvearie/hri-mgmt-api/common/response" + "github.com/Alvearie/hri-mgmt-api/common/test" + "log" + "net/http" + "os" + "reflect" + "testing" +) + +func TestTerminate_CheckAuth(t *testing.T) { + tests := []struct { + name string + claims auth.HriClaims + expectedErr string + }{ + { + name: "With DI role, should return nil", + claims: auth.HriClaims{Scope: auth.HriIntegrator}, + }, + { + name: "With DI & Consumer role, should return nil", + claims: auth.HriClaims{Scope: auth.HriIntegrator + " " + auth.HriConsumer}, + }, + { + name: "Without DI role, should return error", + claims: auth.HriClaims{}, + expectedErr: "Must have hri_data_integrator role to update a batch", + }, + } + + terminate := Terminate{} + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + err := terminate.CheckAuth(tt.claims) + if (err == nil && tt.expectedErr != "") || (err != nil && err.Error() != tt.expectedErr) { + t.Errorf("CheckAuth() = '%v', expected '%v'", err, tt.expectedErr) + } + }) + } +} + +func TestTerminate_GetUpdateScript(t *testing.T) { + validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: "integratorId"} + + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + // Note that the following chars must be escaped because expectedScript is used as a regex pattern: ., ), ( + expectedRequest map[string]interface{} + expectedErr map[string]interface{} + }{ + { + name: "UpdateScript returns expected script without metadata", + params: map[string]interface{}{}, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": 
map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'terminated'; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}`, + }, + }, + }, + { + name: "UpdateScript returns expected script with metadata", + params: map[string]interface{}{ + param.Metadata: map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": -5}, + }, + claims: validClaims, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'terminated'; ctx\._source\.endDate = '` + test.DatePattern + `'; ctx\._source\.metadata = params\.metadata;} else {ctx\.op = 'none'}`, + "lang": "painless", + "params": map[string]interface{}{"metadata": map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata", "userMetaField2": -5}}, + }, + }, + }, + { + name: "Missing claim.Subject", + params: map[string]interface{}{}, + claims: auth.HriClaims{}, + expectedRequest: map[string]interface{}{ + "script": map[string]interface{}{ + "source": `if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == ''\) {ctx\._source\.status = 'terminated'; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}`, + }, + }, + }, + { + name: "Bad Metadata type", + params: map[string]interface{}{ + param.Metadata: "nil", + }, + claims: auth.HriClaims{}, + expectedErr: response.InvalidParams("metadata must be a map, got string instead."), + }, + } + + terminate := Terminate{} + logger := log.New(os.Stdout, fmt.Sprintf("batches/%s: ", terminate.GetAction()), log.Llongfile) + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + + request, errResp := terminate.GetUpdateScript(tt.params, param.ParamValidator{}, tt.claims, logger) + if !reflect.DeepEqual(errResp, tt.expectedErr) { + 
t.Errorf("GetUpdateScript().errResp = '%v', expected '%v'", errResp, tt.expectedErr) + } else if tt.expectedRequest != nil { + if err := RequestCompareScriptTest(tt.expectedRequest, request); err != nil { + t.Errorf("GetUpdateScript().updateRequest = \n\t'%s' \nDoesn't match expected \n\t'%s'\n%v", request, tt.expectedRequest, err) + } else if tt.params[param.Metadata] != nil { + if err := RequestCompareWithMetadataTest(tt.expectedRequest, request); err != nil { + t.Errorf("GetUpdateScript().updateRequest = \n\t'%s' \nDoesn't match expected \n\t'%s'\n%v", request, tt.expectedRequest, err) + } + } + } + }) + } + +} + +func TestUpdateStatus_Terminate(t *testing.T) { + activationId := "activationId" + _ = os.Setenv(response.EnvOwActivationId, activationId) + + const ( + scriptTerminate string = `{"script":{"source":"if \(ctx\._source\.status == 'started' && ctx\._source\.integratorId == 'integratorId'\) {ctx\._source\.status = 'terminated'; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}"}}` + "\n" + ) + + validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: "integratorId"} + + terminatedBatch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Terminated.String(), + param.StartDate: batchStartDate, + param.RecordCount: batchExpectedRecordCount, + param.ExpectedRecordCount: batchExpectedRecordCount, + } + terminatedJSON, err := json.Marshal(terminatedBatch) + if err != nil { + t.Errorf("Unable to create batch JSON string: %s", err.Error()) + } + + tests := []struct { + name string + params map[string]interface{} + claims auth.HriClaims + ft *test.FakeTransport + writerError error + expectedNotification map[string]interface{} + expectedResponse map[string]interface{} + }{ + { + name: "simple Terminate", + params: map[string]interface{}{ + path.ParamOwPath: 
"/hri/tenants/1234/batches/test-batch/action/terminate", + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptTerminate, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "updated", + "get": { + "_source": %s + } + }`, terminatedJSON), + }, + ), + expectedNotification: terminatedBatch, + expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), + }, + { + name: "Terminate fails on terminated batch", + params: map[string]interface{}{ + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/terminate", + param.Validation: true, + param.ExpectedRecordCount: batchExpectedRecordCount, + }, + claims: validClaims, + ft: test.NewFakeTransport(t).AddCall( + "/1234-batches/_doc/test-batch/_update", + test.ElasticCall{ + RequestQuery: transportQueryParams, + RequestBody: scriptTerminate, + ResponseBody: fmt.Sprintf(` + { + "_index": "1234-batches", + "_type": "_doc", + "_id": "test-batch", + "result": "noop", + "get": { + "_source": %s + } + }`, terminatedJSON), + }, + ), + expectedResponse: response.Error(http.StatusConflict, "The 'terminate' endpoint failed, batch is in 'terminated' state"), + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + esClient, err := elastic.ClientFromTransport(tt.ft) + if err != nil { + t.Error(err) + } + writer := test.FakeWriter{ + T: t, + ExpectedTopic: InputTopicToNotificationTopic(batchTopic), + ExpectedKey: batchId, + ExpectedValue: tt.expectedNotification, + Error: tt.writerError, + } + + if result := UpdateStatus(tt.params, param.ParamValidator{}, tt.claims, Terminate{}, esClient, writer); !reflect.DeepEqual(result, tt.expectedResponse) { + t.Errorf("UpdateStatus() = \n\t%v,\nexpected: \n\t%v", result, tt.expectedResponse) + } + }) + } +} diff --git a/src/batches/update_status.go 
b/src/batches/update_status.go index 060412e..5c0a5bd 100644 --- a/src/batches/update_status.go +++ b/src/batches/update_status.go @@ -8,7 +8,6 @@ package batches import ( "context" "fmt" - "github.com/Alvearie/hri-mgmt-api/batches/status" "github.com/Alvearie/hri-mgmt-api/common/auth" "github.com/Alvearie/hri-mgmt-api/common/elastic" "github.com/Alvearie/hri-mgmt-api/common/kafka" @@ -19,8 +18,6 @@ import ( "log" "net/http" "os" - "reflect" - "time" ) const ( @@ -30,21 +27,30 @@ const ( msgUpdateResultNotReturned string = "Update result not returned in Elastic response" ) +type StatusUpdater interface { + GetAction() string + + // Return an error if the request is not authorized to perform this action + CheckAuth(auth.HriClaims) error + + // NOTE: whenever the Elastic document is NOT updated, set the ctx.op = 'none' + // flag. Elastic will use this flag in the response so we can check if the update took place. + GetUpdateScript(map[string]interface{}, param.Validator, auth.HriClaims, *log.Logger) (map[string]interface{}, map[string]interface{}) +} + func UpdateStatus( params map[string]interface{}, validator param.Validator, claims auth.HriClaims, - targetStatus status.BatchStatus, + statusUpdater StatusUpdater, client *elasticsearch.Client, kafkaWriter kafka.Writer) map[string]interface{} { - logger := log.New(os.Stdout, fmt.Sprintf("batches/%s: ", targetStatus), log.Llongfile) + logger := log.New(os.Stdout, fmt.Sprintf("batches/%s: ", statusUpdater.GetAction()), log.Llongfile) - // validate that caller has sufficient permissions - if !claims.HasScope(auth.HriIntegrator) { - msg := fmt.Sprintf(fmt.Sprintf(auth.MsgIntegratorRoleRequired, "update")) - logger.Printf(msg) - return response.Error(http.StatusUnauthorized, msg) + if err := statusUpdater.CheckAuth(claims); err != nil { + logger.Println(err.Error()) + return response.Error(http.StatusUnauthorized, err.Error()) } // validate that required input params are present @@ -59,76 +65,11 @@ func UpdateStatus( 
return response.Error(http.StatusBadRequest, err.Error()) } - errResp := validator.ValidateOptional( - params, - param.Info{param.Metadata, reflect.Map}, - ) - if errResp != nil { - logger.Printf("Bad input optional params: %s", errResp) - return errResp - } - metadata := params[param.Metadata] - index := elastic.IndexFromTenantId(tenantId) - // Elastic conditional update query - var updateScript string - currentTime := time.Now().UTC() - if targetStatus == status.Completed { - // recordCount is required for Completed - errResp := validator.Validate( - params, - // golang receives numeric JSON values as Float64 - param.Info{param.RecordCount, reflect.Float64}, - ) - if errResp != nil { - logger.Printf("Bad input params: %s", errResp) - return errResp - } - recordCount := int(params[param.RecordCount].(float64)) - - // NOTE: whenever the Elastic document is NOT updated, set the ctx.op = 'none' - // flag. Elastic will use this flag in the response so we can check if the update took place. 
- if metadata == nil { - updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.recordCount = %d; ctx._source.endDate = '%s';} else {ctx.op = 'none'}", - status.Started, claims.Subject, targetStatus, recordCount, currentTime.Format(elastic.DateTimeFormat)) - } else { - updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.recordCount = %d; ctx._source.endDate = '%s'; ctx._source.metadata = params.metadata;} else {ctx.op = 'none'}", - status.Started, claims.Subject, targetStatus, recordCount, currentTime.Format(elastic.DateTimeFormat)) - } - - } else if targetStatus == status.Terminated { - - if metadata == nil { - updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.endDate = '%s';} else {ctx.op = 'none'}", - status.Started, claims.Subject, targetStatus, currentTime.Format(elastic.DateTimeFormat)) - } else { - updateScript = fmt.Sprintf("if (ctx._source.status == '%s' && ctx._source.integratorId == '%s') {ctx._source.status = '%s'; ctx._source.endDate = '%s'; ctx._source.metadata = params.metadata;} else {ctx.op = 'none'}", - status.Started, claims.Subject, targetStatus, currentTime.Format(elastic.DateTimeFormat)) - } - - } else { - // this method was somehow invoked with an invalid batch status - errMsg := fmt.Sprintf("Cannot update batch to status '%s', only '%s' and '%s' are acceptable", targetStatus, status.Completed, status.Terminated) - logger.Println(errMsg) - return response.Error(http.StatusUnprocessableEntity, errMsg) - } - - var updateRequest map[string]interface{} - if metadata == nil { - updateRequest = map[string]interface{}{ - "script": map[string]interface{}{ - "source": updateScript, - }, - } - } else { - updateRequest = map[string]interface{}{ - "script": map[string]interface{}{ - "source": updateScript, - 
"lang": "painless", - "params": map[string]interface{}{"metadata": metadata}, - }, - } + updateRequest, errResp := statusUpdater.GetUpdateScript(params, validator, claims, logger) + if errResp != nil { + return errResp } encodedQuery, err := elastic.EncodeQueryBody(updateRequest) @@ -146,10 +87,12 @@ func UpdateStatus( client.Update.WithSource("true"), // return updated batch in response ) - decodedUpdateResponse, elasticResponseErr := elastic.DecodeBody(updateResponse, updateErr, tenantId, logger) - if elasticResponseErr != nil { - return elasticResponseErr + decodedUpdateResponse, elasticErr := elastic.DecodeBody(updateResponse, updateErr) + if elasticErr != nil { + return elasticErr.LogAndBuildApiResponse(logger, + fmt.Sprintf("Could not update the status of batch %s", batchId)) } + // read elastic response and verify the batch was updated updateResult, hasUpdateResult := decodedUpdateResponse[elasticResultKey].(string) if !hasUpdateResult { @@ -167,6 +110,7 @@ func UpdateStatus( // successful update; publish update notification to Kafka updatedBatch[param.BatchId] = batchId notificationTopic := InputTopicToNotificationTopic(updatedBatch[param.Topic].(string)) + updatedBatch = NormalizeBatchRecordCountValues(updatedBatch) err = kafkaWriter.Write(notificationTopic, batchId, updatedBatch) if err != nil { logger.Println(err.Error()) @@ -174,7 +118,7 @@ func UpdateStatus( } return response.Success(http.StatusOK, map[string]interface{}{}) } else if updateResult == elasticResultNoop { - statusCode, errMsg := determineCause(targetStatus, claims.Subject, updatedBatch) + statusCode, errMsg := determineCause(statusUpdater, claims.Subject, updatedBatch) logger.Println(errMsg) return response.Error(statusCode, errMsg) } else { @@ -184,21 +128,15 @@ func UpdateStatus( } } -func determineCause(targetStatus status.BatchStatus, subject string, batch map[string]interface{}) (int, string) { - if subject != batch[param.IntegratorId] { - // update resulted in no-op, due to 
insuffient permissions - return http.StatusUnauthorized, fmt.Sprintf( - "Batch status was not updated to '%s'. Requested by '%s' but owned by '%s'", - targetStatus, - subject, - batch[param.IntegratorId], - ) +func determineCause(statusUpdater StatusUpdater, subject string, batch map[string]interface{}) (int, string) { + // The OAuth subject has to match the batch's integratorId only for the sendComplete and terminate endpoints. + if subject != batch[param.IntegratorId] && statusUpdater.GetAction() != ProcessingCompleteAction && statusUpdater.GetAction() != FailAction { + // update resulted in no-op, due to insufficient permissions + errMsg := fmt.Sprintf("Batch status was not updated to '%s'. Requested by '%s' but owned by '%s'", statusUpdater.GetAction(), subject, batch[param.IntegratorId]) + return http.StatusUnauthorized, errMsg } else { // update resulted in no-op, due to previous batch status - return http.StatusConflict, fmt.Sprintf( - "Batch status was not updated to '%s', batch is already in '%s' state", - targetStatus, - batch[param.Status], - ) + errMsg := fmt.Sprintf("The '%s' endpoint failed, batch is in '%s' state", statusUpdater.GetAction(), batch[param.Status].(string)) + return http.StatusConflict, errMsg } } diff --git a/src/batches/update_status_test.go b/src/batches/update_status_test.go index 1af6c8b..9afbff9 100644 --- a/src/batches/update_status_test.go +++ b/src/batches/update_status_test.go @@ -16,6 +16,7 @@ import ( "github.com/Alvearie/hri-mgmt-api/common/path" "github.com/Alvearie/hri-mgmt-api/common/response" "github.com/Alvearie/hri-mgmt-api/common/test" + "log" "net/http" "os" "reflect" @@ -23,58 +24,66 @@ import ( ) const ( - batchId string = "test-batch" - batchName string = "batchName" - batchTopic string = "test.batch.in" - batchDataType string = "batchDataType" - batchStartDate string = "ignored" - batchRecordCount float64 = float64(1) - integratorId string = "integratorId" - // Note that the following chars must be escaped 
because RequestBody is used as a regex pattern: ., ), ( - scriptSendComplete string = `{"script":{"source":"if \(ctx\._source\.status == 'started' \\u0026\\u0026 ctx._source.integratorId == '` + integratorId + `'\) {ctx\._source\.status = 'completed'; ctx\._source\.recordCount = 1; ctx\._source\.endDate = '` + test.DatePattern + `';} else {ctx\.op = 'none'}"}}` + "\n" - scriptSendCompleteMetadata string = `{"script":{"lang":"painless","params":{"metadata":{"compression":"gzip","userMetaField1":"metadata"}},"source":"if \(ctx._source.status == 'started' \\u0026\\u0026 ctx._source.integratorId == '` + integratorId + `'\) {ctx._source.status = 'completed'; ctx._source.recordCount = 1; ctx._source.endDate = '` + test.DatePattern + `'; ctx\._source\.metadata = params\.metadata;} else {ctx.op = 'none'}"}}` + "\n" - // Note that the following chars must be escaped because RequestBody is used as a regex pattern: ., ), ( - scriptTerminated string = `{"script":{"source":"if \(ctx._source.status == 'started' \\u0026\\u0026 ctx._source.integratorId == '` + integratorId + `'\) {ctx._source.status = 'terminated'; ctx._source.endDate = '` + test.DatePattern + `';} else {ctx.op = 'none'}"}}` + "\n" - scriptTerminatedMetadata string = `{"script":{"lang":"painless","params":{"metadata":{"compression":"gzip","userMetaField1":"metadata"}},"source":"if \(ctx\._source\.status == 'started' \\u0026\\u0026 ctx._source.integratorId == '` + integratorId + `'\) {ctx\._source\.status = 'terminated'; ctx\._source\.endDate = '` + test.DatePattern + `'; ctx\._source\.metadata = params\.metadata;} else {ctx\.op = 'none'}"}}` + "\n" - transportQueryParams string = "_source=true" + batchId string = "test-batch" + batchName string = "batchName" + integratorId string = "integratorId" + batchTopic string = "test.batch.in" + batchDataType string = "batchDataType" + batchStartDate string = "ignored" + batchExpectedRecordCount float64 = float64(10) + batchActualRecordCount float64 = float64(10) + 
batchInvalidThreshold float64 = float64(5) + batchInvalidRecordCount float64 = float64(2) + batchFailureMessage string = "Batch Failed" + transportQueryParams string = "_source=true" ) -var batchMetadata = map[string]interface{}{"compression": "gzip", "userMetaField1": "metadata"} +type TestStatusUpdater struct { + UpdateRequest map[string]interface{} + AuthResp error + ScriptErrResp map[string]interface{} +} + +func (TestStatusUpdater) GetAction() string { + return "testAction" +} + +func (t TestStatusUpdater) CheckAuth(_ auth.HriClaims) error { + return t.AuthResp +} + +func (t TestStatusUpdater) GetUpdateScript(_ map[string]interface{}, _ param.Validator, _ auth.HriClaims, _ *log.Logger) (map[string]interface{}, map[string]interface{}) { + return t.UpdateRequest, t.ScriptErrResp +} func TestUpdateStatus(t *testing.T) { activationId := "activationId" _ = os.Setenv(response.EnvOwActivationId, activationId) - validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: integratorId} - - // create some example batches in different states - sendCompleteBatch := createBatch(status.Completed) - sendCompleteJSON, err := json.Marshal(sendCompleteBatch) - if err != nil { - t.Errorf("Unable to create batch JSON string: %s", err.Error()) - } - - terminatedBatch := createBatch(status.Terminated) - terminatedJSON, err := json.Marshal(terminatedBatch) - if err != nil { - t.Errorf("Unable to create batch JSON string: %s", err.Error()) + batch := map[string]interface{}{ + param.BatchId: batchId, + param.Name: batchName, + param.IntegratorId: integratorId, + param.Topic: batchTopic, + param.DataType: batchDataType, + param.Status: status.Completed.String(), + param.StartDate: batchStartDate, + param.RecordCount: batchExpectedRecordCount, + param.ExpectedRecordCount: batchExpectedRecordCount, + param.ActualRecordCount: batchActualRecordCount, + param.InvalidThreshold: batchInvalidThreshold, + param.InvalidRecordCount: batchInvalidRecordCount, } - - failedBatch := 
createBatch(status.Failed) - failedJSON, err := json.Marshal(failedBatch) + completedJSON, err := json.Marshal(batch) if err != nil { t.Errorf("Unable to create batch JSON string: %s", err.Error()) } - startedBatch := createBatch(status.Started) - startedJSON, err := json.Marshal(startedBatch) - if err != nil { - t.Errorf("Unable to create batch JSON string: %s", err.Error()) - } + validClaims := auth.HriClaims{Scope: auth.HriIntegrator, Subject: integratorId} tests := []struct { name string - targetStatus status.BatchStatus + statusUpdater StatusUpdater params map[string]interface{} claims auth.HriClaims ft *test.FakeTransport @@ -83,40 +92,37 @@ func TestUpdateStatus(t *testing.T) { expectedResponse map[string]interface{} }{ { - name: "invalid openwhisk path", - targetStatus: status.Completed, + name: "invalid openwhisk path", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234", }, claims: validClaims, ft: test.NewFakeTransport(t), expectedResponse: response.Error(http.StatusBadRequest, "The path is shorter than the requested path parameter; path: [ hri tenants 1234], requested index: 5"), }, { - name: "return error for missing tenantId path param", - targetStatus: status.Completed, + name: "return error for missing tenantId path param", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants", }, claims: validClaims, ft: test.NewFakeTransport(t), expectedResponse: response.Error(http.StatusBadRequest, "The path is shorter than the requested path parameter; path: [ hri tenants], requested index: 3"), }, { - name: "simple sendComplete", - targetStatus: status.Completed, + name: "success", + statusUpdater: TestStatusUpdater{UpdateRequest: map[string]interface{}{"script": map[string]interface{}{"source": 
"update script"}}}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/1234-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, + RequestBody: "update script", ResponseBody: fmt.Sprintf(` { "_index": "1234-batches", @@ -126,130 +132,35 @@ func TestUpdateStatus(t *testing.T) { "get": { "_source": %s } - }`, sendCompleteJSON), + }`, completedJSON), }, ), - expectedNotification: sendCompleteBatch, + expectedNotification: batch, expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), }, { - name: "sendComplete with metadata", - targetStatus: status.Completed, + name: "StatusUpdater error", + statusUpdater: TestStatusUpdater{ScriptErrResp: response.MissingParams("test-param")}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/" + batchId + "/action/sendComplete", - param.RecordCount: batchRecordCount, - param.Metadata: batchMetadata, - }, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptSendCompleteMetadata, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", - "_type": "_doc", - "_id": "test-batch", - "result": "updated", - "get": { - "_source": %s - } - }`, sendCompleteJSON), - }, - ), - expectedNotification: sendCompleteBatch, - expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), - }, - { - name: "sendComplete fails on terminated batch", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, - }, - claims: 
validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", - "_type": "_doc", - "_id": "test-batch", - "result": "noop", - "get": { - "_source": %s - } - }`, terminatedJSON), - }, - ), - expectedResponse: response.Error(http.StatusConflict, "Batch status was not updated to 'completed', batch is already in 'terminated' state"), - }, - { - name: "sendComplete fails on missing record count parameter", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t), - expectedResponse: response.Error(http.StatusBadRequest, "Missing required parameter(s): [recordCount]"), - }, - { - name: "sendComplete fails on bad metadata parameter", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/" + batchId + "/action/sendComplete", - param.RecordCount: batchRecordCount, - param.Metadata: "not json object", - }, - claims: validClaims, - ft: test.NewFakeTransport(t), - expectedResponse: response.Error(http.StatusBadRequest, "Invalid parameter type(s): [metadata must be a map, got string instead.]"), - }, - { - name: "sendComplete fails when batch already in completed state", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, - }, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", 
- "_type": "_doc", - "_id": "test-batch", - "result": "noop", - "get": { - "_source": %s - } - }`, sendCompleteJSON), - }, - ), - expectedResponse: response.Error(http.StatusConflict, "Batch status was not updated to 'completed', batch is already in 'completed' state"), + expectedResponse: response.MissingParams("test-param"), }, + // Can't find a way to make the script encoding fail -> elastic.EncodeQueryBody(updateRequest) + // So there's no unit test for the failure check. { - name: "fail when update result not returned by elastic", - targetStatus: status.Completed, + name: "fail when update result not returned by elastic", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/1234-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, ResponseBody: fmt.Sprintf(` { "_index": "1234-batches", @@ -258,24 +169,22 @@ func TestUpdateStatus(t *testing.T) { "get": { "_source": %s } - }`, sendCompleteJSON), + }`, completedJSON), }, ), expectedResponse: response.Error(http.StatusInternalServerError, "Update result not returned in Elastic response"), }, { - name: "fail when updated document not returned by elastic", - targetStatus: status.Completed, + name: "fail when updated document not returned by elastic", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/1234-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - 
RequestBody: scriptSendComplete, ResponseBody: ` { "_index": "1234-batches", @@ -288,18 +197,16 @@ func TestUpdateStatus(t *testing.T) { expectedResponse: response.Error(http.StatusInternalServerError, "Updated document not returned in Elastic response: error extracting the get section of the JSON"), }, { - name: "fail when elastic result is unrecognized or invalid", - targetStatus: status.Completed, + name: "fail when elastic result is unrecognized or invalid", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/1234-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, ResponseBody: fmt.Sprintf(` { "_index": "1234-batches", @@ -309,42 +216,22 @@ func TestUpdateStatus(t *testing.T) { "get": { "_source": %s } - }`, sendCompleteJSON), + }`, completedJSON), }, ), expectedResponse: response.Error(http.StatusInternalServerError, "An unexpected error occurred updating the batch, Elastic update returned result 'MOnkeez-bad-result'"), }, { - name: "invalid elastic response", - targetStatus: status.Completed, + name: "fail on nonexistent tenant", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, - }, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, - ResponseBody: `{"_index": "1234-batches",`, - }, - ), - expectedResponse: response.Error(http.StatusInternalServerError, "Error parsing the Elastic search response body: unexpected EOF"), - }, - { - 
name: "fail on nonexistent tenant", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/tenant-that-doesnt-exist/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/tenant-that-doesnt-exist/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/tenant-that-doesnt-exist-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, ResponseStatusCode: http.StatusNotFound, ResponseBody: ` { @@ -357,102 +244,47 @@ func TestUpdateStatus(t *testing.T) { }`, }, ), - expectedResponse: response.Error(http.StatusNotFound, "index_not_found_exception: no such index and [action.auto_create_index] is [false]"), - }, - { - name: "fail when updating nonexistent batch", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/batch-that-doesnt-exist/action/sendComplete", - param.RecordCount: batchRecordCount, - }, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/batch-that-doesnt-exist/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, - ResponseStatusCode: http.StatusNotFound, - ResponseBody: ` - { - "error": { - "type": "document_missing_exception", - "reason": "[_doc][batch-that-doesnt-exist]: document missing", - "index": "1234-batches" - }, - "status": 404 - }`, - }, - ), - expectedResponse: response.Error(http.StatusNotFound, "document_missing_exception: [_doc][batch-that-doesnt-exist]: document missing"), + expectedResponse: response.Error(http.StatusNotFound, + "Could not update the status of batch test-batch: index_not_found_exception: no such index and [action.auto_create_index] is [false]"), }, { - name: "fail when unable to send notification", - targetStatus: status.Completed, + name: "fail when result is 
noop", + statusUpdater: TestStatusUpdater{UpdateRequest: map[string]interface{}{"script": map[string]interface{}{"source": "update script"}}}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/1234-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - RequestBody: scriptSendComplete, + RequestBody: "update script", ResponseBody: fmt.Sprintf(` { "_index": "1234-batches", "_type": "_doc", "_id": "test-batch", - "result": "updated", - "get": { - "_source": %s - } - }`, sendCompleteJSON), - }, - ), - expectedNotification: sendCompleteBatch, - writerError: errors.New("Unable to write to Kafka"), - expectedResponse: response.Error(http.StatusInternalServerError, "Unable to write to Kafka"), - }, - { - name: "simple terminate", - targetStatus: status.Terminated, - params: map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/terminate"}, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptTerminated, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", - "_type": "_doc", - "_id": "test-batch", - "result": "updated", + "result": "noop", "get": { "_source": %s } - }`, terminatedJSON), + }`, completedJSON), }, ), - expectedNotification: terminatedBatch, - expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), + expectedNotification: batch, + expectedResponse: response.Error(http.StatusConflict, "The 'testAction' endpoint failed, batch is in 'completed' state"), }, { - name: "terminate with metadata", - targetStatus: status.Terminated, + name: "fail when unable to send notification", + statusUpdater: 
TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/terminate", - param.Metadata: batchMetadata, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t).AddCall( "/1234-batches/_doc/test-batch/_update", test.ElasticCall{ RequestQuery: transportQueryParams, - RequestBody: scriptTerminatedMetadata, ResponseBody: fmt.Sprintf(` { "_index": "1234-batches", @@ -462,88 +294,28 @@ func TestUpdateStatus(t *testing.T) { "get": { "_source": %s } - }`, terminatedJSON), + }`, completedJSON), }, ), - expectedNotification: terminatedBatch, - expectedResponse: response.Success(http.StatusOK, map[string]interface{}{}), - }, - { - name: "terminate fails on failed batch", - targetStatus: status.Terminated, - params: map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/terminate"}, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptTerminated, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", - "_type": "_doc", - "_id": "test-batch", - "result": "noop", - "get": { - "_source": %s - } - }`, failedJSON), - }, - ), - expectedResponse: response.Error(http.StatusConflict, "Batch status was not updated to 'terminated', batch is already in 'failed' state"), - }, - { - name: "terminate fails on previously terminated batch", - targetStatus: status.Terminated, - params: map[string]interface{}{path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/terminate"}, - claims: validClaims, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - RequestBody: scriptTerminated, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", - "_type": "_doc", - "_id": "test-batch", - "result": 
"noop", - "get": { - "_source": %s - } - }`, terminatedJSON), - }, - ), - expectedResponse: response.Error(http.StatusConflict, "Batch status was not updated to 'terminated', batch is already in 'terminated' state"), + expectedNotification: batch, + writerError: errors.New("Unable to write to Kafka"), + expectedResponse: response.Error(http.StatusInternalServerError, "Unable to write to Kafka"), }, { - name: "return error response for Unknown status", - targetStatus: status.Unknown, + name: "AuthCheck fails", + statusUpdater: TestStatusUpdater{AuthResp: errors.New("not authorized")}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/blargBlarg", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: validClaims, ft: test.NewFakeTransport(t), - expectedResponse: response.Error(http.StatusUnprocessableEntity, "Cannot update batch to status 'unknown', only 'completed' and 'terminated' are acceptable"), - }, - { - name: "invalid role", - targetStatus: status.Completed, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, - }, - claims: auth.HriClaims{Scope: auth.HriConsumer, Subject: integratorId}, - ft: test.NewFakeTransport(t), - expectedResponse: response.Error(http.StatusUnauthorized, fmt.Sprintf(auth.MsgIntegratorRoleRequired, "update")), + expectedResponse: response.Error(http.StatusUnauthorized, "not authorized"), }, { - name: "complete fails on bad integrator id", - targetStatus: status.Completed, + name: "Update fails on bad integrator id", + statusUpdater: TestStatusUpdater{}, params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - param.RecordCount: batchRecordCount, + path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/testAction", }, claims: auth.HriClaims{Scope: 
auth.HriIntegrator, Subject: "bad integrator"}, ft: test.NewFakeTransport(t).AddCall( @@ -559,35 +331,10 @@ func TestUpdateStatus(t *testing.T) { "get": { "_source": %s } - }`, startedJSON), + }`, completedJSON), }, ), - expectedResponse: response.Error(http.StatusUnauthorized, fmt.Sprintf("Batch status was not updated to 'completed'. Requested by 'bad integrator' but owned by '%s'", integratorId)), - }, - { - name: "terminate fails on bad integrator id", - targetStatus: status.Terminated, - params: map[string]interface{}{ - path.ParamOwPath: "/hri/tenants/1234/batches/test-batch/action/sendComplete", - }, - claims: auth.HriClaims{Scope: auth.HriIntegrator, Subject: "bad integrator"}, - ft: test.NewFakeTransport(t).AddCall( - "/1234-batches/_doc/test-batch/_update", - test.ElasticCall{ - RequestQuery: transportQueryParams, - ResponseBody: fmt.Sprintf(` - { - "_index": "1234-batches", - "_type": "_doc", - "_id": "test-batch", - "result": "noop", - "get": { - "_source": %s - } - }`, startedJSON), - }, - ), - expectedResponse: response.Error(http.StatusUnauthorized, fmt.Sprintf("Batch status was not updated to 'terminated'. Requested by 'bad integrator' but owned by '%s'", integratorId)), + expectedResponse: response.Error(http.StatusUnauthorized, fmt.Sprintf("Batch status was not updated to 'testAction'. 
Requested by 'bad integrator' but owned by '%s'", integratorId)), }, } @@ -605,23 +352,9 @@ func TestUpdateStatus(t *testing.T) { Error: tt.writerError, } - if result := UpdateStatus(tt.params, param.ParamValidator{}, tt.claims, tt.targetStatus, esClient, writer); !reflect.DeepEqual(result, tt.expectedResponse) { + if result := UpdateStatus(tt.params, param.ParamValidator{}, tt.claims, tt.statusUpdater, esClient, writer); !reflect.DeepEqual(result, tt.expectedResponse) { t.Errorf("UpdateStatus() = %v, expected %v", result, tt.expectedResponse) } }) } } - -func createBatch(status status.BatchStatus) map[string]interface{} { - return map[string]interface{}{ - param.BatchId: batchId, - param.Name: batchName, - param.Topic: batchTopic, - param.DataType: batchDataType, - param.Status: status.String(), - param.StartDate: batchStartDate, - param.RecordCount: batchRecordCount, - param.Metadata: batchMetadata, - param.IntegratorId: integratorId, - } -} diff --git a/src/batches_fail.go b/src/batches_fail.go new file mode 100644 index 0000000..57ce3ee --- /dev/null +++ b/src/batches_fail.go @@ -0,0 +1,54 @@ +// +build !tests + +/* + * (C) Copyright IBM Corp. 
2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ + +package main + +import ( + "github.com/Alvearie/hri-mgmt-api/batches" + "github.com/Alvearie/hri-mgmt-api/common/actionloopmin" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/kafka" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/response" + "github.com/coreos/go-oidc" + "log" + "net/http" + "os" + "time" +) + +func main() { + actionloopmin.Main(failMain) +} + +func failMain(params map[string]interface{}) map[string]interface{} { + logger := log.New(os.Stdout, "batches/fail: ", log.Llongfile) + start := time.Now() + logger.Printf("start failMain, %s \n", start) + + claims, errResp := auth.GetValidatedClaims(params, auth.AuthValidator{}, oidc.NewProvider) + if errResp != nil { + return errResp + } + + esClient, err := elastic.ClientFromParams(params) + if err != nil { + return response.Error(http.StatusInternalServerError, err.Error()) + } + + kafkaWriter, err := kafka.NewWriterFromParams(params) + if err != nil { + return response.Error(http.StatusInternalServerError, err.Error()) + } + + resp := batches.UpdateStatus(params, param.ParamValidator{}, claims, batches.Fail{}, esClient, kafkaWriter) + + logger.Printf("processing time failMain, %d milliseconds \n", time.Since(start).Milliseconds()) + return resp +} diff --git a/src/batches_processingcomplete.go b/src/batches_processingcomplete.go new file mode 100644 index 0000000..871f23c --- /dev/null +++ b/src/batches_processingcomplete.go @@ -0,0 +1,54 @@ +// +build !tests + +/* + * (C) Copyright IBM Corp. 
2020 + * + * SPDX-License-Identifier: Apache-2.0 + */ + +package main + +import ( + "github.com/Alvearie/hri-mgmt-api/batches" + "github.com/Alvearie/hri-mgmt-api/common/actionloopmin" + "github.com/Alvearie/hri-mgmt-api/common/auth" + "github.com/Alvearie/hri-mgmt-api/common/elastic" + "github.com/Alvearie/hri-mgmt-api/common/kafka" + "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/response" + "github.com/coreos/go-oidc" + "log" + "net/http" + "os" + "time" +) + +func main() { + actionloopmin.Main(processingCompleteMain) +} + +func processingCompleteMain(params map[string]interface{}) map[string]interface{} { + logger := log.New(os.Stdout, "batches/processingComplete: ", log.Llongfile) + start := time.Now() + logger.Printf("start processingCompleteMain, %s \n", start) + + claims, errResp := auth.GetValidatedClaims(params, auth.AuthValidator{}, oidc.NewProvider) + if errResp != nil { + return errResp + } + + esClient, err := elastic.ClientFromParams(params) + if err != nil { + return response.Error(http.StatusInternalServerError, err.Error()) + } + + kafkaWriter, err := kafka.NewWriterFromParams(params) + if err != nil { + return response.Error(http.StatusInternalServerError, err.Error()) + } + + resp := batches.UpdateStatus(params, param.ParamValidator{}, claims, batches.ProcessingComplete{}, esClient, kafkaWriter) + + logger.Printf("processing time processingCompleteMain, %d milliseconds \n", time.Since(start).Milliseconds()) + return resp +} diff --git a/src/batches_sendcomplete.go b/src/batches_sendcomplete.go index 61df411..516944a 100644 --- a/src/batches_sendcomplete.go +++ b/src/batches_sendcomplete.go @@ -10,7 +10,6 @@ package main import ( "github.com/Alvearie/hri-mgmt-api/batches" - "github.com/Alvearie/hri-mgmt-api/batches/status" "github.com/Alvearie/hri-mgmt-api/common/actionloopmin" "github.com/Alvearie/hri-mgmt-api/common/auth" "github.com/Alvearie/hri-mgmt-api/common/elastic" @@ -29,7 +28,7 @@ func main() 
{ } func sendCompleteMain(params map[string]interface{}) map[string]interface{} { - logger := log.New(os.Stdout, "batches/UpdateStatus: ", log.Llongfile) + logger := log.New(os.Stdout, "batches/sendComplete: ", log.Llongfile) start := time.Now() logger.Printf("start sendCompleteMain, %s \n", start) @@ -42,12 +41,13 @@ func sendCompleteMain(params map[string]interface{}) map[string]interface{} { if err != nil { return response.Error(http.StatusInternalServerError, err.Error()) } + kafkaWriter, err := kafka.NewWriterFromParams(params) if err != nil { return response.Error(http.StatusInternalServerError, err.Error()) } - resp := batches.UpdateStatus(params, param.ParamValidator{}, claims, status.Completed, esClient, kafkaWriter) + resp := batches.UpdateStatus(params, param.ParamValidator{}, claims, batches.SendComplete{}, esClient, kafkaWriter) logger.Printf("processing time sendCompleteMain, %d milliseconds \n", time.Since(start).Milliseconds()) return resp diff --git a/src/batches_terminate.go b/src/batches_terminate.go index eccefcf..b292ac0 100644 --- a/src/batches_terminate.go +++ b/src/batches_terminate.go @@ -10,7 +10,6 @@ package main import ( "github.com/Alvearie/hri-mgmt-api/batches" - "github.com/Alvearie/hri-mgmt-api/batches/status" "github.com/Alvearie/hri-mgmt-api/common/actionloopmin" "github.com/Alvearie/hri-mgmt-api/common/auth" "github.com/Alvearie/hri-mgmt-api/common/elastic" @@ -29,7 +28,7 @@ func main() { } func terminateMain(params map[string]interface{}) map[string]interface{} { - logger := log.New(os.Stdout, "batches/UpdateStatus: ", log.Llongfile) + logger := log.New(os.Stdout, "batches/terminate: ", log.Llongfile) start := time.Now() logger.Printf("start terminateMain, %s \n", start) @@ -42,12 +41,13 @@ func terminateMain(params map[string]interface{}) map[string]interface{} { if err != nil { return response.Error(http.StatusInternalServerError, err.Error()) } + kafkaWriter, err := kafka.NewWriterFromParams(params) if err != nil { return 
response.Error(http.StatusInternalServerError, err.Error()) } - resp := batches.UpdateStatus(params, param.ParamValidator{}, claims, status.Terminated, esClient, kafkaWriter) + resp := batches.UpdateStatus(params, param.ParamValidator{}, claims, batches.Terminate{}, esClient, kafkaWriter) logger.Printf("processing time terminateMain, %d milliseconds \n", time.Since(start).Milliseconds()) return resp diff --git a/src/common/auth/constants.go b/src/common/auth/constants.go index d392736..8f6ff50 100644 --- a/src/common/auth/constants.go +++ b/src/common/auth/constants.go @@ -7,10 +7,12 @@ package auth const ( HriIntegrator string = "hri_data_integrator" + HriInternal string = "hri_internal" HriConsumer string = "hri_consumer" TenantScopePrefix string = "tenant_" MsgAccessTokenMissingScopes = "The access token must have one of these scopes: hri_consumer, hri_data_integrator" MsgIntegratorSubClaimNoMatch = "The token's sub claim (clientId): %s does not match the data integratorId: %s" MsgIntegratorRoleRequired = "Must have hri_data_integrator role to %s a batch" + MsgInternalRoleRequired = "Must have hri_internal role to mark a batch as %s" MsgSubClaimRequiredInJwt = "JWT access token 'sub' claim must be populated" ) diff --git a/src/common/auth/validate_test.go b/src/common/auth/validate_test.go index bd70b00..1f345ba 100644 --- a/src/common/auth/validate_test.go +++ b/src/common/auth/validate_test.go @@ -367,7 +367,7 @@ func TestGetValidatedClaimsExtractionError(t *testing.T) { claims, err := GetValidatedClaims(params, mockValidator, nil) - // we expect to get back an empty set of claims and a bad token error + // we expect to get back an empty set of claims and a bad claims error expClaims := HriClaims{} expErr := response.Error(http.StatusUnauthorized, badClaimsHolderErr) diff --git a/src/common/elastic/client.go b/src/common/elastic/client.go index 31c04ec..b0b6f97 100644 --- a/src/common/elastic/client.go +++ b/src/common/elastic/client.go @@ -15,8 +15,10 @@ 
import ( "errors" "fmt" "github.com/Alvearie/hri-mgmt-api/common/param" + "github.com/Alvearie/hri-mgmt-api/common/response" service "github.com/IBM/resource-controller-go-sdk-generator/build/generated" "github.com/elastic/go-elasticsearch/v7" + "log" "net/http" "strings" ) @@ -24,10 +26,43 @@ import ( // magical reference date must be used for some reason const DateTimeFormat string = "2006-01-02T15:04:05Z" +type ResponseError struct { + ErrorObj error + + // error code that should be returned by the hri-mgmt-api endpoint + Code int + + // Elastic error type + ErrorType string + + // Elastic root_cause type + RootCause string +} + +func (elasticError ResponseError) Error() string { + if elasticError.ErrorObj == nil { + // An error was built, but no error information was provided. + // Return a generic error message with the statusCode + err := fmt.Errorf(msgUnexpectedErr, elasticError.Code) + return err.Error() + } + + return elasticError.ErrorObj.Error() +} + +func (elasticError ResponseError) LogAndBuildApiResponse(logger *log.Logger, message string) map[string]interface{} { + err := fmt.Errorf("%s: %w", message, elasticError) + logger.Printf("%v", err) + return response.Error(elasticError.Code, fmt.Sprintf("%v", err)) +} + // encodes a map[string]interface{} query body into a byte buffer for the elastic client func EncodeQueryBody(queryBody map[string]interface{}) (*bytes.Buffer, error) { var encodedBuffer bytes.Buffer - err := json.NewEncoder(&encodedBuffer).Encode(queryBody) + encoder := json.NewEncoder(&encodedBuffer) + encoder.SetEscapeHTML(false) + + err := encoder.Encode(queryBody) if err != nil { return nil, errors.New(fmt.Sprintf("Unable to encode query body as byte buffer: %s", err.Error())) } diff --git a/src/common/elastic/client_test.go b/src/common/elastic/client_test.go index f3aa66a..5abadfe 100644 --- a/src/common/elastic/client_test.go +++ b/src/common/elastic/client_test.go @@ -10,13 +10,18 @@ import ( "crypto/tls" "encoding/json" "errors" + 
"fmt" + "github.com/Alvearie/hri-mgmt-api/common/response" "github.com/Alvearie/hri-mgmt-api/common/test" "github.com/IBM/resource-controller-go-sdk-generator/build/generated" "github.com/elastic/go-elasticsearch/v7" "github.com/golang/mock/gomock" "github.com/stretchr/testify/assert" "io/ioutil" + "log" "net/http" + "os" + "reflect" "strings" "testing" ) @@ -29,6 +34,41 @@ func (t *mockTransport) RoundTrip(req *http.Request) (*http.Response, error) { return &http.Response{Body: ioutil.NopCloser(strings.NewReader(MOCK_RESPONSE))}, nil } +func TestResponseError(t *testing.T) { + logger := log.New(os.Stdout, "client_test/testResponseError: ", log.Llongfile) + responseMessage := "response message" + + testCases := []struct { + name string + code int + error error + expectedResponse map[string]interface{} + }{ + { + name: "Elastic Error Code and Error in Response", + code: http.StatusNotFound, + error: fmt.Errorf("error message"), + expectedResponse: response.Error(http.StatusNotFound, "response message: error message"), + }, { + name: "Elastic Error Code and No Error in Response", + code: http.StatusNotFound, + expectedResponse: response.Error(http.StatusNotFound, + fmt.Sprintf(responseMessage+": "+msgUnexpectedErr, http.StatusNotFound)), + }, + } + + for _, tc := range testCases { + + t.Run(tc.name, func(t *testing.T) { + actual := &ResponseError{ErrorObj: tc.error, Code: tc.code} + actualResponse := actual.LogAndBuildApiResponse(logger, responseMessage) + if !reflect.DeepEqual(tc.expectedResponse, actualResponse) { + t.Errorf("actual response %v, expected response %v", actualResponse, tc.expectedResponse) + } + }) + } +} + func TestClientFromTransport(t *testing.T) { client, err := ClientFromTransport(&mockTransport{}) if err != nil { @@ -437,12 +477,6 @@ func TestFromConfigError(t *testing.T) { assert.NotNil(t, client) } -func TestIndexFromTenantId(t *testing.T) { - tenant1 := "fakeTenant" - rtnIndex := IndexFromTenantId(tenant1) - assert.Equal(t, 
"fakeTenant-batches", rtnIndex) -} - func TestTenantIdFromIndex(t *testing.T) { tenant1 := "fakeTenant-batches" rtnIndex := TenantIdFromIndex(tenant1) diff --git a/src/common/elastic/decoder.go b/src/common/elastic/decoder.go index a8f3ef9..62ab048 100644 --- a/src/common/elastic/decoder.go +++ b/src/common/elastic/decoder.go @@ -8,112 +8,115 @@ package elastic import ( "encoding/json" "fmt" - "github.com/Alvearie/hri-mgmt-api/common/response" "github.com/elastic/go-elasticsearch/v7/esapi" - "log" "net/http" ) -const msgClientErr string = "Elastic client error: %s" -const MsgNilResponse string = "Elastic client returned nil response without an error" -const msgParseErr string = "Error parsing the Elastic search response body: %s" -const msgDocNotFound string = "The document for tenantId: %s with document (batch) ID: %s was not found" -const msgResponseErr string = "%s: %s" -const msgUnexpectedErr string = "Unexpected Elastic Error - %s" -const msgTransformResult string = "Error transforming result -> %s" - -func handleNotFound(body map[string]interface{}, tenantId string, logger *log.Logger) (string, map[string]interface{}) { - var msg string - var rtnBody map[string]interface{} - if foundFlag, ok := body["found"]; ok { - if foundFlag == false { //handle HTTP 404/NotFound errors - logger.Println("got 'found=false' -> Find batchId...") - batchId := body["_id"] - msg = fmt.Sprintf(msgDocNotFound, tenantId, batchId) - rtnBody = map[string]interface{}{"error": fmt.Sprintf(msgDocNotFound, tenantId, batchId)} - } - } - return msg, rtnBody -} +const msgClientErr string = "elasticsearch client error: %w" +const MsgNilResponse string = "elasticsearch client returned nil response without an error" +const msgParseErr string = "error parsing the Elasticsearch response body: %w" +const msgUnexpectedErr string = "unexpected Elasticsearch %d error" +const msgEmptyResultErr string = "unexpected empty result" -// If error is nil, attempts to parse the json body of the Response. 
-// If successful, returns parsed body and optionally an error map, otherwise returns nil body and an error map.
-// Note that this function will close the Response Body before returning.
-func DecodeBody(res *esapi.Response, err error, tenantId string, logger *log.Logger) (map[string]interface{}, map[string]interface{}) {
-	respErr := handleRespErr(err, logger, res)
-	if respErr != nil {
-		return nil, respErr
+func DecodeBody(res *esapi.Response, elasticClientError error) (map[string]interface{}, *ResponseError) {
+	err := checkForClientErr(res, elasticClientError)
+	if err != nil {
+		return nil, err
 	}
 	defer res.Body.Close()
 	var body map[string]interface{}
 	if err := json.NewDecoder(res.Body).Decode(&body); err != nil {
-		msg := fmt.Sprintf(msgParseErr, err.Error())
-		logger.Println(msg)
-		return nil, response.Error(http.StatusInternalServerError, msg)
+		err = fmt.Errorf(msgParseErr, err)
+		return nil, &ResponseError{ErrorObj: err, Code: http.StatusInternalServerError}
 	}
 	if res.IsError() {
-		var msg string
 		if errBody, ok := body["error"].(map[string]interface{}); ok {
-			if rootCause, ok := errBody["root_cause"].(map[string]interface{}); ok {
-				msg = fmt.Sprintf(msgResponseErr, rootCause["type"], rootCause["reason"])
-			} else {
-				msg = fmt.Sprintf(msgResponseErr, errBody["type"], errBody["reason"])
-			}
-		} else if body["found"] != nil {
-			msg, body = handleNotFound(body, tenantId, logger)
-		} else { //Default: err handler -> No "error" element in body
-			msg = fmt.Sprintf(msgUnexpectedErr, body)
-			return nil, response.Error(http.StatusInternalServerError, msg)
+			return nil, getErrorFromElasticErrorResponse(errBody, res.StatusCode)
+		} else if errMsg, ok := body["error"].(string); ok {
+			return nil, &ResponseError{ErrorObj: fmt.Errorf(errMsg), Code: res.StatusCode}
 		}
-		logger.Println(msg)
-		return body, response.Error(res.StatusCode, msg)
+		// Elastic returned an error code, but no error information was returned in the response. This is normal Elastic
+		// behavior for some endpoints, most notably when a document isn't found using the Get Document API.
+		return body, &ResponseError{ErrorObj: nil, Code: res.StatusCode}
 	}
 	return body, nil
 }
-func DecodeBodyFromJsonArray(res *esapi.Response, err error, logger *log.Logger) ([]map[string]interface{}, map[string]interface{}) {
-	respErr := handleRespErr(err, logger, res)
-	var body []map[string]interface{}
-	if respErr != nil {
-		return body, respErr
+func getErrorFromElasticErrorResponse(elasticError map[string]interface{}, statusCode int) *ResponseError {
+	errorType := elasticError["type"].(string)
+	err := fmt.Errorf("%s: %s", errorType, elasticError["reason"].(string))
+
+	if rootCauses, ok := elasticError["root_cause"].([]interface{}); ok && len(rootCauses) > 0 {
+		// The elastic error returned a root_cause section with more detailed error information.
+
+		// Elastic's root_cause field contains a list of root causes. However, only one root cause is usually present,
+		// so only the first root cause will be returned in the response.
+		rootCause := rootCauses[0].(map[string]interface{})
+		rootCauseType := rootCause["type"].(string)
+
+		if rootCauseType != errorType {
+			// Elastic will often duplicate the error "type" and "reason" in the root_cause section, so the additional error
+			// information is not added if both error and root_cause types are identical.
+			rootCauseErr := fmt.Errorf("%s: %s", rootCauseType, rootCause["reason"])
+			return &ResponseError{ErrorObj: fmt.Errorf("%v: %w", err, rootCauseErr), Code: statusCode,
+				ErrorType: errorType, RootCause: rootCauseType}
+		}
 	}
-	if err := json.NewDecoder(res.Body).Decode(&body); err != nil {
-		msg := fmt.Sprintf(msgParseErr, err.Error())
-		logger.Println(msg)
-		return nil, response.Error(http.StatusInternalServerError, msg)
+	return &ResponseError{ErrorObj: err, Code: statusCode, ErrorType: errorType}
+}
+
+func DecodeBodyFromJsonArray(res *esapi.Response, err error) ([]map[string]interface{}, *ResponseError) {
+	clientErr := checkForClientErr(res, err)
+	if clientErr != nil {
+		return nil, clientErr
+	}
+
+	defer res.Body.Close()
+
+	if res.IsError() {
+		// If an error occurred, then the response is (probably) an elastic error
+		_, elasticErr := DecodeBody(res, err)
+		if elasticErr != nil && len(elasticErr.ErrorType) != 0 {
+			// DecodeBody was able to extract error information
+			return nil, elasticErr
+		}
+
+		// DecodeBody wasn't able to extract error information, or the response was in some unanticipated format.
+		return nil, &ResponseError{ErrorObj: fmt.Errorf(msgUnexpectedErr, res.StatusCode), Code: http.StatusInternalServerError}
+	}
+
+	var body []map[string]interface{}
+	if parseErr := json.NewDecoder(res.Body).Decode(&body); parseErr != nil {
+		return nil, &ResponseError{ErrorObj: fmt.Errorf(msgParseErr, parseErr), Code: http.StatusInternalServerError}
 	}
 	return body, nil
 }
-func DecodeFirstArrayElement(res *esapi.Response, err error, logger *log.Logger) (map[string]interface{}, map[string]interface{}) {
-	decoded, errResp := DecodeBodyFromJsonArray(res, err, logger)
+func DecodeFirstArrayElement(res *esapi.Response, err error) (map[string]interface{}, *ResponseError) {
+	decoded, errResp := DecodeBodyFromJsonArray(res, err)
 	if errResp != nil {
 		return nil, errResp
 	}
 	if len(decoded) == 0 {
-		msg := fmt.Sprintf(msgTransformResult, "Uh-Oh: we got no Object inside this ElasticSearch Results Array")
-		logger.Println(msg)
-		return nil, response.Error(http.StatusInternalServerError, msg)
+		return nil, &ResponseError{ErrorObj: fmt.Errorf(msgEmptyResultErr), Code: http.StatusInternalServerError}
 	}
-	return decoded[0], errResp
+
+	return decoded[0], nil
 }
-func handleRespErr(err error, logger *log.Logger, res *esapi.Response) map[string]interface{} {
+func checkForClientErr(res *esapi.Response, err error) *ResponseError {
 	if err != nil {
-		msg := fmt.Sprintf(msgClientErr, err.Error())
-		logger.Println(msg)
-		return response.Error(http.StatusInternalServerError, msg)
+		return &ResponseError{ErrorObj: fmt.Errorf(msgClientErr, err), Code: http.StatusInternalServerError}
 	} else if res == nil {
-		logger.Print(MsgNilResponse)
-		return response.Error(http.StatusInternalServerError, MsgNilResponse)
+		return &ResponseError{ErrorObj: fmt.Errorf(MsgNilResponse), Code: http.StatusInternalServerError}
 	}
 	return nil
 }
diff --git a/src/common/elastic/decoder_test.go b/src/common/elastic/decoder_test.go
index 3382168..5c283b1 100644
--- a/src/common/elastic/decoder_test.go
+++ 
b/src/common/elastic/decoder_test.go @@ -9,174 +9,327 @@ import ( "bytes" "errors" "fmt" - "github.com/Alvearie/hri-mgmt-api/common/response" "github.com/elastic/go-elasticsearch/v7/esapi" "io/ioutil" - "log" "net/http" "reflect" "testing" ) func TestDecodeBody(t *testing.T) { - errText := "errText" - errType := "errType" - errReason := "errReason" - errStatus := 300 - tenantId := "tenant123" - batchId := "batch_no_exist" - indexName := tenantId + "-" + batchId - logger := log.New(ioutil.Discard, "responses/test: ", log.Llongfile) - testCases := []struct { - name string - res *esapi.Response - err error - expectedBody map[string]interface{} - expectedErr map[string]interface{} + name string + res *esapi.Response + clientError error + expectedBody map[string]interface{} + expectedResponseError *ResponseError }{ { - name: "has-err", - err: errors.New(errText), - expectedErr: response.Error(http.StatusInternalServerError, fmt.Sprintf(msgClientErr, errText)), - }, - { - name: "nil-response", - expectedErr: response.Error(http.StatusInternalServerError, MsgNilResponse), - }, - { - name: "bad-json", - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`{bad json: "`)))}, - expectedErr: response.Error(http.StatusInternalServerError, fmt.Sprintf(msgParseErr, "invalid character 'b' looking for beginning of object key string")), - }, - { - name: "has-root-cause", - res: &esapi.Response{StatusCode: errStatus, Body: ioutil.NopCloser(bytes.NewReader([]byte(fmt.Sprintf(`{"error": {"root_cause": {"type": "%s", "reason": "%s"}}}`, errType, errReason))))}, - expectedErr: response.Error(errStatus, fmt.Sprintf(msgResponseErr, errType, errReason)), - expectedBody: map[string]interface{}{"error": map[string]interface{}{"root_cause": map[string]interface{}{"type": errType, "reason": errReason}}}, - }, - { - name: "no-root-cause", - res: &esapi.Response{StatusCode: errStatus, Body: ioutil.NopCloser(bytes.NewReader([]byte(fmt.Sprintf(`{"error": {"type": "%s", "reason": 
"%s"}}`, errType, errReason))))}, - expectedErr: response.Error(errStatus, fmt.Sprintf(msgResponseErr, errType, errReason)), - expectedBody: map[string]interface{}{"error": map[string]interface{}{"type": errType, "reason": errReason}}, - }, - { - name: "batch-not-found", - res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(fmt.Sprintf(`{"_index": "%s", "_type":"_doc", "_id": "%s", "found":false}`, indexName, batchId))))}, - expectedErr: response.Error(http.StatusNotFound, fmt.Sprintf(msgDocNotFound, tenantId, batchId)), - expectedBody: map[string]interface{}{"error": fmt.Sprintf(msgDocNotFound, tenantId, batchId)}, - }, - { - name: "unexpected-err", - res: &esapi.Response{StatusCode: http.StatusServiceUnavailable, Body: ioutil.NopCloser(bytes.NewReader([]byte(`{"unexpected response": " we did not expect this response:"}`)))}, - expectedErr: response.Error(http.StatusInternalServerError, fmt.Sprintf(msgUnexpectedErr, map[string]interface{}{"unexpected response: we did not expect this response": ""})), - }, - { - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`{"good": "json"}`)))}, - expectedBody: map[string]interface{}{"good": "json"}, + name: "200 OK Response", + res: &esapi.Response{StatusCode: http.StatusOK, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + {"good": "json"} + `)))}, + expectedBody: map[string]interface{}{ + "good": "json", + }, + }, { + name: "Elastic Client Error", + clientError: errors.New("client Error"), + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgClientErr, errors.New("client Error")), + Code: http.StatusInternalServerError, + }, + }, { + name: "Elastic No Response No Client Error", + expectedResponseError: &ResponseError{ + ErrorObj: errors.New(MsgNilResponse), + Code: http.StatusInternalServerError, + }, + }, { + name: "Bad Json Response", + res: &esapi.Response{StatusCode: http.StatusBadRequest, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + 
{"bad": "json" + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgParseErr, errors.New("unexpected EOF")), + Code: http.StatusInternalServerError, + }, + }, { + name: "Elastic Error Response", + res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + { + "error": { + "type": "index_not_found_exception", + "reason": "no such index", + "index_uuid": "_na_", + "resource.type": "index_or_alias", + "resource.id": "tenant-batches", + "index": "tenant-batches" + }, + "status": 404 + } + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf("%s: %s", "index_not_found_exception", "no such index"), + Code: http.StatusNotFound, + ErrorType: "index_not_found_exception"}, + }, { + name: "Elastic Error Response with Identical Root Cause", + res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + { + "error": { + "root_cause": [ + { + "type": "index_not_found_exception", + "reason": "no such index", + "index_uuid": "_na_", + "resource.type": "index_or_alias", + "resource.id": "tenant-batches", + "index": "tenant-batches" + } + ], + "type": "index_not_found_exception", + "reason": "no such index", + "index_uuid": "_na_", + "resource.type": "index_or_alias", + "resource.id": "tenant-batches", + "index": "tenant-batches" + }, + "status": 404 + } + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf("%s: %s", "index_not_found_exception", "no such index"), + Code: http.StatusNotFound, ErrorType: "index_not_found_exception"}, + }, { + name: "Elastic Error Response with Different Root Cause", + res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + { + "error": { + "root_cause": [ + { + "type": "some_other_error", + "reason": "some other reason", + "index_uuid": "_na_", + "resource.type": "index_or_alias", + "resource.id": "tenant-batches", + "index": "tenant-batches" + },{ + "type": 
"additional errors", + "reason": "will be ignored" + } + ], + "type": "index_not_found_exception", + "reason": "no such index", + "index_uuid": "_na_", + "resource.type": "index_or_alias", + "resource.id": "tenant-batches", + "index": "tenant-batches" + }, + "status": 404 + } + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf("%v: %w", + fmt.Errorf("%s: %s", "index_not_found_exception", "no such index"), + fmt.Errorf("%s: %s", "some_other_error", "some other reason"), + ), + Code: http.StatusNotFound, + ErrorType: "index_not_found_exception", + RootCause: "some_other_error", + }, + }, { + name: "Elastic Error Message", + res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + { + "error": "alias [test-batches] missing", + "status": 404 + } + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf("alias [test-batches] missing"), + Code: http.StatusNotFound}, + }, { + name: "Elastic No Error Context", + // + res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + { + "_index": "test-batches", + "_type": "_doc", + "_id": "missing_doc_id", + "found": false + } + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: nil, + Code: http.StatusNotFound, + }, + expectedBody: map[string]interface{}{ + "_index": "test-batches", + "_type": "_doc", + "_id": "missing_doc_id", + "found": false, + }, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - body, err := DecodeBody(tc.res, tc.err, tenantId, logger) + body, responseErr := DecodeBody(tc.res, tc.clientError) if !reflect.DeepEqual(body, tc.expectedBody) { t.Errorf("Unexpected Body. Expected: [%v], Actual: [%v]", tc.expectedBody, body) } - if !reflect.DeepEqual(err, tc.expectedErr) { - t.Errorf("Unexpected Error. Expected: [%v], Actual: [%v]", tc.expectedErr, err) + if !reflect.DeepEqual(responseErr, tc.expectedResponseError) { + t.Errorf("Unexpected ResponseError. 
Expected: [%v], Actual: [%v]", + tc.expectedResponseError, responseErr) } }) } } func TestDecodeBodyFromJsonArray(t *testing.T) { - errText := "arrayErrText" - logger := log.New(ioutil.Discard, "decodeBodyFromArray/test: ", log.Llongfile) - var validResult []map[string]interface{} - validValuesMap := map[string]interface{}{ - "valid": "json-array", - "status": "green", - } - validResult = append(validResult, validValuesMap) - testCases := []struct { - name string - res *esapi.Response - err error - expectedBody []map[string]interface{} - expectedErr map[string]interface{} + name string + res *esapi.Response + clientError error + expectedBody []map[string]interface{} + expectedResponseError *ResponseError }{ { - name: "success-case", - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`[{"valid": "json-array", "status": "green"}]`)))}, - expectedBody: validResult, - }, - { - name: "has-err", - err: errors.New(errText), - expectedErr: response.Error(http.StatusInternalServerError, fmt.Sprintf(msgClientErr, errText)), - }, - { - name: "nil-response", - expectedErr: response.Error(http.StatusInternalServerError, MsgNilResponse), - }, - { - name: "decoder-error", - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`{bad json:[]][ "`)))}, - expectedErr: response.Error(http.StatusInternalServerError, fmt.Sprintf(msgParseErr, "invalid character 'b' looking for beginning of object key string")), + name: "success-case", + res: &esapi.Response{StatusCode: http.StatusOK, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + [ + {"valid": "json-array", "status": "green"} + ] + `)))}, + expectedBody: []map[string]interface{}{ + map[string]interface{}{ + "valid": "json-array", + "status": "green", + }, + }, + }, { + name: "Elastic Client Error", + clientError: errors.New("client Error"), + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgClientErr, errors.New("client Error")), + Code: http.StatusInternalServerError, + }, + }, { + 
name: "Elastic No Response No Client Error", + expectedResponseError: &ResponseError{ + ErrorObj: errors.New(MsgNilResponse), + Code: http.StatusInternalServerError, + }, + }, { + name: "Bad Json Response", + res: &esapi.Response{StatusCode: http.StatusOK, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + [{"bad": "json" + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgParseErr, errors.New("unexpected EOF")), + Code: http.StatusInternalServerError, + }, + }, { + name: "Elastic Error Response", + res: &esapi.Response{StatusCode: http.StatusNotFound, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + { + "error": { + "root_cause": [ + { + "type": "illegal_argument_exception", + "reason": "request [/_cat/indices/*-batches] contains unrecognized parameter: [badParam]" + } + ], + "type": "illegal_argument_exception", + "reason": "request [/_cat/indices/*-batches] contains unrecognized parameter: [badParam]" + }, + "status": 400 + } + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf("%s: %s", + "illegal_argument_exception", + "request [/_cat/indices/*-batches] contains unrecognized parameter: [badParam]", + ), + Code: http.StatusNotFound, + ErrorType: "illegal_argument_exception", + }, + }, { + name: "Unrecognized Elastic Error Response", + res: &esapi.Response{StatusCode: http.StatusBadRequest, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + [ + {"valid": "json-array", "status": "green"} + ] + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgUnexpectedErr, http.StatusBadRequest), + Code: http.StatusInternalServerError, + }, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - body, err := DecodeBodyFromJsonArray(tc.res, tc.err, logger) + body, responseErr := DecodeBodyFromJsonArray(tc.res, tc.clientError) if !reflect.DeepEqual(body, tc.expectedBody) { t.Errorf("Unexpected Body. 
Expected: [%v], Actual: [%v]", tc.expectedBody, body) } - if !reflect.DeepEqual(err, tc.expectedErr) { - t.Errorf("Unexpected Error. Expected: [%v], Actual: [%v]", tc.expectedErr, err) + if !reflect.DeepEqual(responseErr, tc.expectedResponseError) { + t.Errorf("Unexpected ResponseError. Expected: [%v], Actual: [%v]", + tc.expectedResponseError, responseErr) } }) } } -func TestDecodeFirstArrayElement(t *testing.T) { - logger := log.New(ioutil.Discard, "decodeBodyFromArrayAsMap/test: ", log.Llongfile) - +func TestDecodeFirstArrayElement(t *testing.T) { testCases := []struct { - name string - res *esapi.Response - err error - expectedBody map[string]interface{} - expectedErr map[string]interface{} + name string + res *esapi.Response + clientError error + expectedBody map[string]interface{} + expectedResponseError *ResponseError }{ { - name: "success-case", - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`[{"valid": "json-map-from-array", "status": "porcupine"}]`)))}, - expectedBody: map[string]interface{}{"valid": "json-map-from-array", "status": "porcupine"}, - }, - { - name: "empty-array", - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`[]`)))}, - expectedErr: response.Error(http.StatusInternalServerError, "Error transforming result -> Uh-Oh: we got no Object inside this ElasticSearch Results Array"), - }, - { - name: "decoded-body-from-array-error", - res: &esapi.Response{Body: ioutil.NopCloser(bytes.NewReader([]byte(`{bad json:[]][ "`)))}, - expectedErr: response.Error(http.StatusInternalServerError, fmt.Sprintf(msgParseErr, "invalid character 'b' looking for beginning of object key string")), + name: "success-case", + res: &esapi.Response{StatusCode: http.StatusOK, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + [ + {"valid": "json-array", "status": "green"} + ] + `)))}, + expectedBody: map[string]interface{}{ + "valid": "json-array", + "status": "green", + }, + }, { + name: "Elastic Client Error", +
clientError: errors.New("client Error"), + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgClientErr, errors.New("client Error")), + Code: http.StatusInternalServerError, + }, + }, { + name: "Empty Result", + res: &esapi.Response{StatusCode: http.StatusOK, Body: ioutil.NopCloser(bytes.NewReader([]byte(` + [] + `)))}, + expectedResponseError: &ResponseError{ + ErrorObj: fmt.Errorf(msgEmptyResultErr), + Code: http.StatusInternalServerError, + }, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - body, err := DecodeFirstArrayElement(tc.res, tc.err, logger) + body, responseErr := DecodeFirstArrayElement(tc.res, tc.clientError) if !reflect.DeepEqual(body, tc.expectedBody) { t.Errorf("Unexpected Body. Expected: [%v], Actual: [%v]", tc.expectedBody, body) } - if !reflect.DeepEqual(err, tc.expectedErr) { - t.Errorf("Unexpected Error. Expected: [%v], Actual: [%v]", tc.expectedErr, err) + if !reflect.DeepEqual(responseErr, tc.expectedResponseError) { + t.Errorf("Unexpected ResponseError. Expected: [%v], Actual: [%v]", + tc.expectedResponseError, responseErr) } }) } diff --git a/src/common/eventstreams/client_utils.go b/src/common/eventstreams/client_utils.go index 3441626..7772905 100644 --- a/src/common/eventstreams/client_utils.go +++ b/src/common/eventstreams/client_utils.go @@ -11,12 +11,16 @@ const ( TopicPrefix string = "ingest." 
InSuffix string = ".in" NotificationSuffix string = ".notification" + OutSuffix string = ".out" + InvalidSuffix string = ".invalid" ) //See documentation on HRI topic naming conventions: https://alvearie.io/HRI/admin.html#onboarding-new-data-integrators -func CreateTopicNames(tenantId string, streamId string) (string, string) { +func CreateTopicNames(tenantId string, streamId string) (string, string, string, string) { baseTopicName := strings.Join([]string{tenantId, streamId}, ".") inTopicName := TopicPrefix + baseTopicName + InSuffix notificationTopicName := TopicPrefix + baseTopicName + NotificationSuffix - return inTopicName, notificationTopicName + outTopicName := TopicPrefix + baseTopicName + OutSuffix + invalidTopicName := TopicPrefix + baseTopicName + InvalidSuffix + return inTopicName, notificationTopicName, outTopicName, invalidTopicName } diff --git a/src/common/eventstreams/client_utils_test.go b/src/common/eventstreams/client_utils_test.go index 93b5d64..d6ceac3 100644 --- a/src/common/eventstreams/client_utils_test.go +++ b/src/common/eventstreams/client_utils_test.go @@ -14,16 +14,20 @@ func TestCreateTopicNames(t *testing.T) { tenantId := "tenant1" streamId := "dataIntegrator1.qualifier1" - inTopicName, notificationTopicName := CreateTopicNames(tenantId, streamId) + inTopicName, notificationTopicName, outTopicName, invalidTopicName := CreateTopicNames(tenantId, streamId) assert.Equal(t, "ingest."+tenantId+"."+streamId+".in", inTopicName) assert.Equal(t, "ingest."+tenantId+"."+streamId+".notification", notificationTopicName) + assert.Equal(t, "ingest."+tenantId+"."+streamId+".out", outTopicName) + assert.Equal(t, "ingest."+tenantId+"."+streamId+".invalid", invalidTopicName) } func TestCreateTopicNamesNoQualifier(t *testing.T) { tenantId := "tenant1" streamId := "dataIntegrator1" - inTopicName, notificationTopicName := CreateTopicNames(tenantId, streamId) + inTopicName, notificationTopicName, outTopicName, invalidTopicName := CreateTopicNames(tenantId, 
streamId) assert.Equal(t, "ingest."+tenantId+"."+streamId+".in", inTopicName) assert.Equal(t, "ingest."+tenantId+"."+streamId+".notification", notificationTopicName) + assert.Equal(t, "ingest."+tenantId+"."+streamId+".out", outTopicName) + assert.Equal(t, "ingest."+tenantId+"."+streamId+".invalid", invalidTopicName) } diff --git a/src/common/param/parameters.go b/src/common/param/parameters.go index 374a4b8..43f42ed 100644 --- a/src/common/param/parameters.go +++ b/src/common/param/parameters.go @@ -16,15 +16,22 @@ const ( OidcIssuer string = "issuer" JwtAudienceId string = "jwtAudienceId" - BatchId string = "id" - DataType string = "dataType" - Metadata string = "metadata" - Name string = "name" - Status string = "status" - StartDate string = "startDate" - Topic string = "topic" - RecordCount string = "recordCount" - IntegratorId string = "integratorId" + Validation string = "validation" + + BatchId string = "id" + DataType string = "dataType" + Metadata string = "metadata" + Name string = "name" + IntegratorId string = "integratorId" + Status string = "status" + StartDate string = "startDate" + Topic string = "topic" + RecordCount string = "recordCount" // deprecated + ExpectedRecordCount string = "expectedRecordCount" + ActualRecordCount string = "actualRecordCount" + InvalidThreshold string = "invalidThreshold" + InvalidRecordCount string = "invalidRecordCount" + FailureMessage string = "failureMessage" TenantId string = "tenantId" diff --git a/src/go.sum b/src/go.sum index 63b29b0..fa7b5fe 100644 --- a/src/go.sum +++ b/src/go.sum @@ -1,5 +1,4 @@ cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= github.com/DataDog/zstd v1.4.0 h1:vhoV+DUHnRZdKW1i5UMjAk2G4JY8wN4ayRfYDNdEhwo= github.com/DataDog/zstd v1.4.0/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo= github.com/IBM/event-streams-go-sdk-generator v1.0.0 
h1:XLm+MsdH6Dod4uXdiRhdbwOI8gEVnQ4YZrSLRQ4wMbY= @@ -20,7 +19,6 @@ github.com/golang/mock v1.5.0 h1:jlYHihg//f7RRwuPfptm04yp4s7O6Kw8EZiVYIGcH0g= github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8= github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4= github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/pierrec/lz4 v2.0.5+incompatible h1:2xWsjqPFWcplujydGg4WmhC/6fZqK42wMM8aXeqhl0I= @@ -51,12 +49,10 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20210614182718-04defd469f4e h1:XpT3nA5TvE525Ne3hInMh6+GETgn27Zfm9dxsThnX2Q= golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw= golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -74,7 +70,6 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/square/go-jose.v2 v2.4.1 h1:H0TmLt7/KmzlrDOpa1F+zr0Tk90PbJYBfsVUmRLrf9Y= diff --git a/src/healthcheck/get.go b/src/healthcheck/get.go index af83776..2274909 100644 --- a/src/healthcheck/get.go +++ b/src/healthcheck/get.go @@ -30,9 +30,9 @@ func Get(params map[string]interface{}, client *elasticsearch.Client, partReader //1. 
Do ElasticSearch healthCheck call resp, err := client.Cat.Health(client.Cat.Health.WithFormat("json")) - respBody, errResp := elastic.DecodeFirstArrayElement(resp, err, logger) - if errResp != nil { - return errResp + respBody, elasticErr := elastic.DecodeFirstArrayElement(resp, err) + if elasticErr != nil { + return elasticErr.LogAndBuildApiResponse(logger, "Could not perform elasticsearch health check") } var isErr bool = false diff --git a/src/healthcheck/get_test.go b/src/healthcheck/get_test.go index 3e724cb..5f1e974 100644 --- a/src/healthcheck/get_test.go +++ b/src/healthcheck/get_test.go @@ -132,29 +132,17 @@ func TestHealthcheck(t *testing.T) { kafkaReader: defaultKafkaReader, expected: response.Error(http.StatusServiceUnavailable, "HRI Service Temporarily Unavailable | error Detail: ElasticSearch status: red, clusterId: NotReported, unixTimestamp: NotReported"), }, - { - name: "bad-ES-response-body-EOF", - transport: test.NewFakeTransport(t).AddCall( - "/_cat/health", - test.ElasticCall{ - ResponseErr: errors.New("client error"), - }, - ), - kafkaReader: defaultKafkaReader, - expected: response.Error(http.StatusInternalServerError, "Elastic client error: client error"), - }, { name: "ES-client-error", transport: test.NewFakeTransport(t).AddCall( "/_cat/health", test.ElasticCall{ - ResponseBody: test.ReaderToString(ioutil.NopCloser(bytes.NewReader([]byte(``)))), + ResponseErr: errors.New("client error"), }, ), kafkaReader: defaultKafkaReader, - expected: response.Error( - http.StatusInternalServerError, - "Error parsing the Elastic search response body: EOF"), + expected: response.Error(http.StatusInternalServerError, + "Could not perform elasticsearch health check: elasticsearch client error: client error"), }, { name: "Kafka-connection-returns-err", @@ -183,7 +171,7 @@ func TestHealthcheck(t *testing.T) { kafkaReader: test.FakePartitionReader{ T: t, Partitions: nil, - Err: errors.New("Error contacting Kafka cluster: could not read partitions"), + Err: 
errors.New("ResponseError contacting Kafka cluster: could not read partitions"), }, expected: response.Error(http.StatusServiceUnavailable, "HRI Service Temporarily Unavailable | error Detail: Kafka status: Kafka Connection/Read Partition failed"), }, @@ -214,7 +202,7 @@ func TestHealthcheck(t *testing.T) { kafkaReader: test.FakePartitionReader{ T: t, Partitions: nil, - Err: errors.New("Error contacting Kafka cluster: could not read partitions"), + Err: errors.New("ResponseError contacting Kafka cluster: could not read partitions"), }, expected: response.Error(http.StatusServiceUnavailable, "HRI Service Temporarily Unavailable | error Detail: ElasticSearch status: red, clusterId: 8165307e-6130-4581-942d-20fcfc4e795d, unixTimestamp: 1578512886| Kafka status: Kafka Connection/Read Partition failed"), }, diff --git a/src/streams/create.go b/src/streams/create.go index 5df1c6d..8f1be0a 100644 --- a/src/streams/create.go +++ b/src/streams/create.go @@ -18,6 +18,7 @@ import ( "os" "reflect" "strconv" + "strings" ) const ( @@ -26,6 +27,8 @@ const ( invalidCleanupPolicy int32 = 42240 cleanupFailureMsg string = ", cleanup of associated topic [%s] also failed: %s" + + onePartition int64 = 1 ) func Create( @@ -40,6 +43,7 @@ func Create( args, param.Info{param.NumPartitions, reflect.Float64}, param.Info{param.RetentionMs, reflect.Float64}, + param.Info{param.Validation, reflect.Bool}, ) if errResp != nil { logger.Printf("Bad input params: %s", errResp) @@ -81,7 +85,7 @@ func Create( return response.Error(http.StatusBadRequest, err.Error()) } - inTopicName, notificationTopicName := eventstreams.CreateTopicNames(tenantId, streamId) + inTopicName, notificationTopicName, outTopicName, invalidTopicName := eventstreams.CreateTopicNames(tenantId, streamId) //numPartitions and retentionTime configs are required, the rest are optional numPartitions := int64(args[param.NumPartitions].(float64)) @@ -92,9 +96,11 @@ func Create( PartitionCount: numPartitions, Configs: topicConfigs, } + + 
//create notification topic with only 1 Partition b/c of the small expected msg volume for this topic notificationTopicRequest := es.TopicCreateRequest{ Name: notificationTopicName, - PartitionCount: numPartitions, + PartitionCount: onePartition, Configs: topicConfigs, } @@ -109,13 +115,59 @@ func Create( if notificationErr != nil { logger.Printf("Unable to create new topic [%s]. %s", notificationTopicName, notificationErr.Error()) logger.Printf("Deleting associated 'in' topic [%s]...", inTopicName) - _, _, deleteErr := service.DeleteTopic(context.Background(), inTopicName) - var deleteErrorMsg = "" - if deleteErr != nil { - logger.Printf("Unable to delete topic [%s]. %s", inTopicName, deleteErr.Error()) - deleteErrorMsg = fmt.Sprintf(cleanupFailureMsg, inTopicName, deleteErr.Error()) + inTopicDeleted := deleteTopic(context.Background(), inTopicName, service, logger) //value will be bool, true if deleted successfully without errors + + var deleteTopicsErrorMsg = createDeleteErrorMsg(inTopicName, inTopicDeleted, true, true) + return getCreateResponseError(notificationResponse, service.HandleModelError(notificationErr), deleteTopicsErrorMsg) + } + + validation := args[param.Validation].(bool) + logger.Printf("Value of validation is [%t]", validation) + + // if validation is enabled, create the out and invalid topics for the given tenant and stream pairing + if validation { + outTopicRequest := es.TopicCreateRequest{ + Name: outTopicName, + PartitionCount: numPartitions, + Configs: topicConfigs, + } + + //create invalid topic with only 1 Partition b/c of the small expected msg volume for this topic + invalidTopicRequest := es.TopicCreateRequest{ + Name: invalidTopicName, + PartitionCount: onePartition, + Configs: topicConfigs, + } + + _, outResponse, outErr := service.CreateTopic(context.Background(), outTopicRequest) + if outErr != nil { + logger.Printf("Unable to create new topic [%s]. 
%s", outTopicName, outErr.Error()) + + //delete associated in and notification topics + logger.Printf("Deleting associated 'in' topic [%s]...", inTopicName) + inTopicDeleted := deleteTopic(context.Background(), inTopicName, service, logger) + logger.Printf("Deleting associated 'notification' topic [%s]...", notificationTopicName) + notificationTopicDeleted := deleteTopic(context.Background(), notificationTopicName, service, logger) + + var deleteTopicsErrorMsg = createDeleteErrorMsg(inTopicName, inTopicDeleted, notificationTopicDeleted, true) + return getCreateResponseError(outResponse, service.HandleModelError(outErr), deleteTopicsErrorMsg) + } + + _, invalidResponse, invalidErr := service.CreateTopic(context.Background(), invalidTopicRequest) + if invalidErr != nil { + logger.Printf("Unable to create new topic [%s]. %s", invalidTopicName, invalidErr.Error()) + + //delete associated in, notification, and out topics + logger.Printf("Deleting associated 'in' topic [%s]...", inTopicName) + inTopicDeleted := deleteTopic(context.Background(), inTopicName, service, logger) + logger.Printf("Deleting associated 'notification' topic [%s]...", notificationTopicName) + notificationTopicDeleted := deleteTopic(context.Background(), notificationTopicName, service, logger) + logger.Printf("Deleting associated 'out' topic [%s]...", outTopicName) + outTopicDeleted := deleteTopic(context.Background(), outTopicName, service, logger) + + var deleteTopicsErrorMsg = createDeleteErrorMsg(inTopicName, inTopicDeleted, notificationTopicDeleted, outTopicDeleted) + return getCreateResponseError(invalidResponse, service.HandleModelError(invalidErr), deleteTopicsErrorMsg) } - return getCreateResponseError(notificationResponse, service.HandleModelError(notificationErr), deleteErrorMsg) } // return the name of the created stream @@ -194,3 +246,40 @@ func getCreateResponseError(resp *http.Response, err *es.ModelError, deleteError } return response.Error(http.StatusInternalServerError, 
err.Message+deleteError) } + +func deleteTopic(ctx context.Context, topicName string, service eventstreams.Service, logger *log.Logger) bool { + _, _, deleteErr := service.DeleteTopic(ctx, topicName) + + if deleteErr != nil { + logger.Printf("Unable to delete topic [%s]. %s", topicName, deleteErr.Error()) + return false //topic was not deleted successfully and there was an error + } + return true //topic was deleted successfully +} + +func createDeleteErrorMsg(inTopicName string, inTopicDeleted bool, notificationTopicDeleted bool, outTopicDeleted bool) string { + if inTopicDeleted && notificationTopicDeleted && outTopicDeleted { + return "" //if all topics deleted successfully, return empty error message + } + + //if topics had errors when deleting, add them to the slice + var deleteErrorTopics []string + if !inTopicDeleted { + deleteErrorTopics = append(deleteErrorTopics, "in") + } + if !notificationTopicDeleted { + deleteErrorTopics = append(deleteErrorTopics, "notification") + } + if !outTopicDeleted { + deleteErrorTopics = append(deleteErrorTopics, "out") + } + + deleteError := fmt.Sprintf("failed to delete %s topic(s)", strings.Join(deleteErrorTopics, ",")) + + //get topic name without in suffix + inTopic := []rune(inTopicName) + endIndex := len(inTopic) - len(eventstreams.InSuffix) + topic := string(inTopic[0:endIndex]) + + return fmt.Sprintf(cleanupFailureMsg, topic, deleteError) +} diff --git a/src/streams/create_test.go b/src/streams/create_test.go index 455c5b6..3227677 100644 --- a/src/streams/create_test.go +++ b/src/streams/create_test.go @@ -65,39 +65,60 @@ func TestCreate(t *testing.T) { path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId1), param.NumPartitions: numPartitions, param.RetentionMs: retentionMs, + param.Validation: false, + } + validArgsWithValidation := map[string]interface{}{ + path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId1), + param.NumPartitions: numPartitions, + 
param.RetentionMs: retentionMs, + param.Validation: true, } validArgsNoQualifier := map[string]interface{}{ path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId2), param.NumPartitions: numPartitions, param.RetentionMs: retentionMs, + param.Validation: false, + } + validArgsNoQualifierWithValidation := map[string]interface{}{ + path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId2), + param.NumPartitions: numPartitions, + param.RetentionMs: retentionMs, + param.Validation: true, } missingPathArgs := map[string]interface{}{ param.NumPartitions: numPartitions, param.RetentionMs: retentionMs, + //param.Validation: false, } missingTenantArgs := map[string]interface{}{ path.ParamOwPath: fmt.Sprintf("/hri/tenants"), param.NumPartitions: numPartitions, param.RetentionMs: retentionMs, + //param.Validation: false, } missingStreamArgs := map[string]interface{}{ path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams", tenantId), param.NumPartitions: numPartitions, param.RetentionMs: retentionMs, + //param.Validation: false, } badParamResponse := map[string]interface{}{"bad": "param"} testCases := []struct { - name string - args map[string]interface{} - validatorResponse map[string]interface{} - modelInError *es.ModelError - modelNotificationError *es.ModelError - mockResponse *http.Response - deleteError error - expectedTopic string - expected map[string]interface{} + name string + args map[string]interface{} + validatorResponse map[string]interface{} + modelInError *es.ModelError + modelNotificationError *es.ModelError + modelOutError *es.ModelError + modelInvalidError *es.ModelError + mockResponse *http.Response + deleteInError error + deleteNotificationError error + deleteOutError error + expectedTopic string + expected map[string]interface{} }{ { name: "bad-param", @@ -171,14 +192,126 @@ func TestCreate(t *testing.T) { expectedTopic: baseTopicName, }, { - name: "notification-fail-and-delete-fail", + name: 
"out-topic-already-exists", + args: validArgsWithValidation, + modelOutError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-already-exists", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage), + expectedTopic: baseTopicName, + }, + { + name: "notification-topic-create-fail-and-delete-in-topic-fail", args: validArgs, modelNotificationError: &TopicAlreadyExistsError, mockResponse: &StatusUnprocessableEntity, - deleteError: errors.New("failed to delete in topic"), - expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName+eventstreams.InSuffix, "failed to delete in topic")), + deleteInError: errors.New("failed to delete in topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in topic(s)")), expectedTopic: baseTopicName, }, + { + name: "out-topic-create-fail-and-delete-in-topic-fail", + args: validArgsWithValidation, + modelOutError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteInError: errors.New("failed to delete in topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "out-topic-create-fail-and-delete-notification-topic-fail", + args: validArgsWithValidation, + modelOutError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteNotificationError: errors.New("failed to delete notification topic"), + expected: 
response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete notification topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-create-fail-and-delete-in-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteInError: errors.New("failed to delete in topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-create-fail-and-delete-notification-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteNotificationError: errors.New("failed to delete notification topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete notification topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-create-fail-and-delete-out-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteOutError: errors.New("failed to delete out topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete out topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "out-topic-create-fail-and-delete-in-topic-fail-and-delete-notification-topic-fail", + args: validArgsWithValidation, + modelOutError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteInError: errors.New("failed to delete in topic"), + deleteNotificationError: errors.New("failed to delete notification topic"),
+ expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in,notification topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-create-fail-and-delete-in-topic-fail-and-delete-notification-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteInError: errors.New("failed to delete in topic"), + deleteNotificationError: errors.New("failed to delete notification topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in,notification topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-create-fail-and-delete-in-topic-fail-and-delete-out-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteInError: errors.New("failed to delete in topic"), + deleteOutError: errors.New("failed to delete out topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in,out topic(s)")), + expectedTopic: baseTopicName, + }, + { + name: "invalid-topic-create-fail-and-delete-notification-topic-fail-and-delete-out-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteNotificationError: errors.New("failed to delete notification topic"), + deleteOutError: errors.New("failed to delete out topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete notification,out topic(s)")), + expectedTopic: baseTopicName, + }, + { + name:
"invalid-topic-create-fail-and-delete-in-topic-fail-and-delete-notification-topic-fail-and-delete-out-topic-fail", + args: validArgsWithValidation, + modelInvalidError: &TopicAlreadyExistsError, + mockResponse: &StatusUnprocessableEntity, + deleteInError: errors.New("failed to delete in topic"), + deleteNotificationError: errors.New("failed to delete notification topic"), + deleteOutError: errors.New("failed to out notification topic"), + expected: response.Error(http.StatusConflict, topicAlreadyExistsMessage+fmt.Sprintf(cleanupFailureMsg, eventstreams.TopicPrefix+baseTopicName, "failed to delete in,notification,out topic(s)")), + expectedTopic: baseTopicName, + }, { name: "invalid-cleanup-policy", args: validArgs, @@ -203,6 +336,22 @@ func TestCreate(t *testing.T) { expected: response.Error(http.StatusInternalServerError, kafkaConnectionMessage), expectedTopic: baseTopicName, }, + { + name: "out-conn-error", + args: validArgsWithValidation, + modelOutError: &OtherError, + mockResponse: &StatusUnprocessableEntity, + expected: response.Error(http.StatusInternalServerError, kafkaConnectionMessage), + expectedTopic: baseTopicName, + }, + { + name: "invalid-conn-error", + args: validArgsWithValidation, + modelInvalidError: &OtherError, + mockResponse: &StatusUnprocessableEntity, + expected: response.Error(http.StatusInternalServerError, kafkaConnectionMessage), + expectedTopic: baseTopicName, + }, { name: "good-request-qualifier", args: validArgs, @@ -215,6 +364,18 @@ func TestCreate(t *testing.T) { expected: response.Success(http.StatusCreated, map[string]interface{}{param.StreamId: streamId2}), expectedTopic: baseTopicNameNoQualifier, }, + { + name: "good-request-qualifier-with-validation", + args: validArgsWithValidation, + expected: response.Success(http.StatusCreated, map[string]interface{}{param.StreamId: streamId1}), + expectedTopic: baseTopicName, + }, + { + name: "good-request-no-qualifier-with-validation", + args: validArgsNoQualifierWithValidation, + 
expected: response.Success(http.StatusCreated, map[string]interface{}{param.StreamId: streamId2}), + expectedTopic: baseTopicNameNoQualifier, + }, } for _, tc := range testCases { @@ -223,6 +384,7 @@ func TestCreate(t *testing.T) { Required: []param.Info{ {param.NumPartitions, reflect.Float64}, {param.RetentionMs, reflect.Float64}, + {param.Validation, reflect.Bool}, }, Optional: []param.Info{ {param.CleanupPolicy, reflect.String}, @@ -240,12 +402,33 @@ func TestCreate(t *testing.T) { var mockInErr error var mockNotificationErr error + var mockOutErr error + var mockInvalidErr error if tc.modelInError != nil { mockInErr = errors.New(tc.modelInError.Message) } if tc.modelNotificationError != nil { mockNotificationErr = errors.New(tc.modelNotificationError.Message) } + if tc.modelOutError != nil { + mockOutErr = errors.New(tc.modelOutError.Message) + } + if tc.modelInvalidError != nil { + mockInvalidErr = errors.New(tc.modelInvalidError.Message) + } + + var mockDeleteInError error + var mockDeleteNotificationError error + var mockDeleteOutError error + if tc.deleteInError != nil { + mockDeleteInError = tc.deleteInError + } + if tc.deleteNotificationError != nil { + mockDeleteNotificationError = tc.deleteNotificationError + } + if tc.deleteOutError != nil { + mockDeleteOutError = tc.deleteOutError + } mockService. EXPECT(). @@ -259,6 +442,18 @@ func TestCreate(t *testing.T) { Return(nil, tc.mockResponse, mockNotificationErr). MaxTimes(1) + mockService. + EXPECT(). + CreateTopic(context.Background(), getTestTopicRequest(tc.expectedTopic, eventstreams.OutSuffix)). + Return(nil, tc.mockResponse, mockOutErr). + MaxTimes(1) + + mockService. + EXPECT(). + CreateTopic(context.Background(), getTestTopicRequest(tc.expectedTopic, eventstreams.InvalidSuffix)). + Return(nil, tc.mockResponse, mockInvalidErr). + MaxTimes(1) + mockService. EXPECT(). HandleModelError(mockInErr). @@ -271,10 +466,34 @@ func TestCreate(t *testing.T) { Return(tc.modelNotificationError). 
MaxTimes(1) + mockService. + EXPECT(). + HandleModelError(mockOutErr). + Return(tc.modelOutError). + MaxTimes(1) + + mockService. + EXPECT(). + HandleModelError(mockInvalidErr). + Return(tc.modelInvalidError). + MaxTimes(1) + mockService. EXPECT(). DeleteTopic(context.Background(), eventstreams.TopicPrefix+tc.expectedTopic+eventstreams.InSuffix). - Return(nil, nil, tc.deleteError). + Return(nil, nil, mockDeleteInError). + MaxTimes(1) + + mockService. + EXPECT(). + DeleteTopic(context.Background(), eventstreams.TopicPrefix+tc.expectedTopic+eventstreams.NotificationSuffix). + Return(nil, nil, mockDeleteNotificationError). + MaxTimes(1) + + mockService. + EXPECT(). + DeleteTopic(context.Background(), eventstreams.TopicPrefix+tc.expectedTopic+eventstreams.OutSuffix). + Return(nil, nil, mockDeleteOutError). MaxTimes(1) t.Run(tc.name, func(t *testing.T) { @@ -325,6 +544,69 @@ func TestSetUpTopicConfigs(t *testing.T) { assert.Equal(t, expectedConfigs, configs) } +func TestCreateDeleteErrorMsg(t *testing.T) { + tenantId := "tenant123" + streamId1 := "data-integrator123.qualifier_123" + baseTopicName := strings.Join([]string{tenantId, streamId1}, ".") + + testCases := []struct { + inTopicName string + inTopicDeleted bool + notificationTopicDeleted bool + outTopicDeleted bool + expected string + }{ + { + inTopicDeleted: true, + notificationTopicDeleted: true, + outTopicDeleted: true, + expected: "", + }, + { + inTopicDeleted: false, + notificationTopicDeleted: true, + outTopicDeleted: true, + expected: fmt.Sprintf(cleanupFailureMsg, baseTopicName, "failed to delete in topic(s)"), + }, + { + inTopicDeleted: true, + notificationTopicDeleted: false, + outTopicDeleted: true, + expected: fmt.Sprintf(cleanupFailureMsg, baseTopicName, "failed to delete notification topic(s)"), + }, + { + inTopicDeleted: false, + notificationTopicDeleted: false, + outTopicDeleted: true, + expected: fmt.Sprintf(cleanupFailureMsg, baseTopicName, "failed to delete in,notification topic(s)"), + }, + { + 
inTopicDeleted: true, + notificationTopicDeleted: true, + outTopicDeleted: false, + expected: fmt.Sprintf(cleanupFailureMsg, baseTopicName, "failed to delete out topic(s)"), + }, + { + inTopicDeleted: true, + notificationTopicDeleted: false, + outTopicDeleted: false, + expected: fmt.Sprintf(cleanupFailureMsg, baseTopicName, "failed to delete notification,out topic(s)"), + }, + { + inTopicDeleted: false, + notificationTopicDeleted: false, + outTopicDeleted: false, + expected: fmt.Sprintf(cleanupFailureMsg, baseTopicName, "failed to delete in,notification,out topic(s)"), + }, + } + + for _, tc := range testCases { + if actual := createDeleteErrorMsg(baseTopicName+eventstreams.InSuffix, tc.inTopicDeleted, tc.notificationTopicDeleted, tc.outTopicDeleted); !reflect.DeepEqual(tc.expected, actual) { + t.Error(fmt.Sprintf("Expected: [%v], actual: [%v]", tc.expected, actual)) + } + } +} + func TestSetUpTopicConfigsNoExtras(t *testing.T) { expectedConfigs := []es.ConfigCreate{ { diff --git a/src/streams/delete.go b/src/streams/delete.go index 2b12c9d..5f27ecd 100644 --- a/src/streams/delete.go +++ b/src/streams/delete.go @@ -15,14 +15,26 @@ import ( "log" "net/http" "os" + "reflect" ) func Delete( args map[string]interface{}, + validator param.Validator, service eventstreams.Service) map[string]interface{} { logger := log.New(os.Stdout, "streams/delete: ", log.Llongfile) + // validate that required input params are present + errResp := validator.Validate( + args, + param.Info{param.Validation, reflect.Bool}, + ) + if errResp != nil { + logger.Printf("Bad input params: %s", errResp) + return errResp + } + // extract tenantId and streamId path params from URL tenantId, err := path.ExtractParam(args, param.TenantIndex) if err != nil { @@ -35,7 +47,7 @@ func Delete( return response.Error(http.StatusBadRequest, err.Error()) } - inTopicName, notificationTopicName := eventstreams.CreateTopicNames(tenantId, streamId) + inTopicName, notificationTopicName, outTopicName, 
invalidTopicName := eventstreams.CreateTopicNames(tenantId, streamId) // delete the in and notification topics for the given tenant and data integrator pairing _, inResp, inErr := service.DeleteTopic(context.Background(), inTopicName) @@ -50,6 +62,23 @@ func Delete( return getDeleteResponseError(notificationResp, service.HandleModelError(notificationErr)) } + validation := args[param.Validation].(bool) + + //if validation is enabled, delete the out and invalid topics that were created + if validation { + _, outResp, outErr := service.DeleteTopic(context.Background(), outTopicName) + if outErr != nil { + logger.Printf("Unable to delete topic [%s]. %s", outTopicName, outErr.Error()) + return getDeleteResponseError(outResp, service.HandleModelError(outErr)) + } + + _, invalidResp, invalidErr := service.DeleteTopic(context.Background(), invalidTopicName) + if invalidErr != nil { + logger.Printf("Unable to delete topic [%s]. %s", invalidTopicName, invalidErr.Error()) + return getDeleteResponseError(invalidResp, service.HandleModelError(invalidErr)) + } + } + return response.Success(http.StatusOK, map[string]interface{}{}) } diff --git a/src/streams/delete_test.go b/src/streams/delete_test.go index 34cc38a..25e9bc9 100644 --- a/src/streams/delete_test.go +++ b/src/streams/delete_test.go @@ -10,6 +10,7 @@ import ( "errors" "fmt" "github.com/Alvearie/hri-mgmt-api/common/eventstreams" + "github.com/Alvearie/hri-mgmt-api/common/param" "github.com/Alvearie/hri-mgmt-api/common/path" "github.com/Alvearie/hri-mgmt-api/common/response" "github.com/Alvearie/hri-mgmt-api/common/test" @@ -37,9 +38,19 @@ func TestDelete(t *testing.T) { validArgs := map[string]interface{}{ path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId1), + param.Validation: false, + } + validArgsWithValidation := map[string]interface{}{ + path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId1), + param.Validation: true, } validArgsNoQualifier := 
map[string]interface{}{ path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId2), + param.Validation: false, + } + validArgsNoQualifierWithValidation := map[string]interface{}{ + path.ParamOwPath: fmt.Sprintf("/hri/tenants/%s/streams/%s", tenantId, streamId2), + param.Validation: true, } missingPathArgs := map[string]interface{}{} missingTenantArgs := map[string]interface{}{ @@ -60,6 +71,8 @@ func TestDelete(t *testing.T) { validatorResponse map[string]interface{} modelInError *es.ModelError modelNotificationError *es.ModelError + modelOutError *es.ModelError + modelInvalidError *es.ModelError mockResponse *http.Response expectedStream string expected map[string]interface{} @@ -111,6 +124,22 @@ func TestDelete(t *testing.T) { expected: response.Error(http.StatusInternalServerError, kafkaConnectionMessage), expectedStream: streamName, }, + { + name: "out-conn-error", + args: validArgsWithValidation, + modelOutError: &OtherError, + mockResponse: &StatusUnprocessableEntity, + expected: response.Error(http.StatusInternalServerError, kafkaConnectionMessage), + expectedStream: streamName, + }, + { + name: "invalid-conn-error", + args: validArgsWithValidation, + modelInvalidError: &OtherError, + mockResponse: &StatusUnprocessableEntity, + expected: response.Error(http.StatusInternalServerError, kafkaConnectionMessage), + expectedStream: streamName, + }, { name: "in-topic-not-found", args: validArgs, @@ -127,6 +156,22 @@ func TestDelete(t *testing.T) { expected: response.Error(http.StatusNotFound, topicNotFoundMessage), expectedStream: streamName, }, + { + name: "out-topic-not-found", + args: validArgsWithValidation, + modelOutError: &NotFoundError, + mockResponse: &StatusNotFound, + expected: response.Error(http.StatusNotFound, topicNotFoundMessage), + expectedStream: streamName, + }, + { + name: "invalid-topic-not-found", + args: validArgsWithValidation, + modelInvalidError: &NotFoundError, + mockResponse: &StatusNotFound, + expected: 
response.Error(http.StatusNotFound, topicNotFoundMessage), + expectedStream: streamName, + }, { name: "good-request-qualifier", args: validArgs, @@ -139,22 +184,50 @@ func TestDelete(t *testing.T) { expected: response.Success(http.StatusOK, map[string]interface{}{}), expectedStream: streamNameNoQualifier, }, + { + name: "good-request-qualifier-with-validation", + args: validArgsWithValidation, + expected: response.Success(http.StatusOK, map[string]interface{}{}), + expectedStream: streamName, + }, + { + name: "good-request-no-qualifier-with-validation", + args: validArgsNoQualifierWithValidation, + expected: response.Success(http.StatusOK, map[string]interface{}{}), + expectedStream: streamNameNoQualifier, + }, } for _, tc := range testCases { + validator := test.FakeValidator{ + T: t, + Required: []param.Info{ + {param.Validation, reflect.Bool}, + }, + Response: tc.validatorResponse, + } + controller := gomock.NewController(t) defer controller.Finish() mockService := test.NewMockService(controller) var mockInErr error var mockNotificationErr error + var mockOutErr error + var mockInvalidErr error if tc.modelInError != nil { mockInErr = errors.New(tc.modelInError.Message) } if tc.modelNotificationError != nil { mockNotificationErr = errors.New(tc.modelNotificationError.Message) } + if tc.modelOutError != nil { + mockOutErr = errors.New(tc.modelOutError.Message) + } + if tc.modelInvalidError != nil { + mockInvalidErr = errors.New(tc.modelInvalidError.Message) + } mockService. EXPECT(). @@ -166,6 +239,16 @@ func TestDelete(t *testing.T) { DeleteTopic(context.Background(), eventstreams.TopicPrefix+tc.expectedStream+eventstreams.NotificationSuffix). Return(nil, tc.mockResponse, mockNotificationErr). MaxTimes(1) + mockService. + EXPECT(). + DeleteTopic(context.Background(), eventstreams.TopicPrefix+tc.expectedStream+eventstreams.OutSuffix). + Return(nil, tc.mockResponse, mockOutErr). + MaxTimes(1) + mockService. + EXPECT(). 
+ DeleteTopic(context.Background(), eventstreams.TopicPrefix+tc.expectedStream+eventstreams.InvalidSuffix). + Return(nil, tc.mockResponse, mockInvalidErr). + MaxTimes(1) mockService. EXPECT(). @@ -177,9 +260,19 @@ func TestDelete(t *testing.T) { HandleModelError(mockNotificationErr). Return(tc.modelNotificationError). MaxTimes(1) + mockService. + EXPECT(). + HandleModelError(mockOutErr). + Return(tc.modelOutError). + MaxTimes(1) + mockService. + EXPECT(). + HandleModelError(mockInvalidErr). + Return(tc.modelInvalidError). + MaxTimes(1) t.Run(tc.name, func(t *testing.T) { - if actual := Delete(tc.args, mockService); !reflect.DeepEqual(tc.expected, actual) { + if actual := Delete(tc.args, validator, mockService); !reflect.DeepEqual(tc.expected, actual) { t.Error(fmt.Sprintf("Expected: [%v], actual: [%v]", tc.expected, actual)) } }) diff --git a/src/streams/get.go b/src/streams/get.go index 691b7f4..76f98a2 100644 --- a/src/streams/get.go +++ b/src/streams/get.go @@ -66,6 +66,8 @@ func GetStreamNames(topics []es.TopicDetail, tenantId string) []map[string]inter streamId := strings.TrimPrefix(topicName, eventstreams.TopicPrefix+tenantId+".") streamId = strings.TrimSuffix(streamId, eventstreams.InSuffix) streamId = strings.TrimSuffix(streamId, eventstreams.NotificationSuffix) + streamId = strings.TrimSuffix(streamId, eventstreams.OutSuffix) + streamId = strings.TrimSuffix(streamId, eventstreams.InvalidSuffix) //take unique stream names, we don't want duplicates due to a stream's multiple topics (in/notification) if _, seen := seenStreamIds[streamId]; !seen { @@ -79,5 +81,6 @@ func GetStreamNames(topics []es.TopicDetail, tenantId string) []map[string]inter func validTopicName(topicName string) bool { return strings.HasPrefix(topicName, eventstreams.TopicPrefix) && - (strings.HasSuffix(topicName, eventstreams.InSuffix) || strings.HasSuffix(topicName, eventstreams.NotificationSuffix)) + (strings.HasSuffix(topicName, eventstreams.InSuffix) || strings.HasSuffix(topicName, 
eventstreams.NotificationSuffix) || + strings.HasSuffix(topicName, eventstreams.OutSuffix) || strings.HasSuffix(topicName, eventstreams.InvalidSuffix)) } diff --git a/src/streams/get_test.go b/src/streams/get_test.go index 0c01798..b006bfb 100644 --- a/src/streams/get_test.go +++ b/src/streams/get_test.go @@ -168,12 +168,76 @@ func TestGetStreamNames(t *testing.T) { topics: []es.TopicDetail{ {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.InvalidSuffix}, + }, + tenantId: tenantId1, + expected: []map[string]interface{}{ + {param.StreamId: streamId}, + {param.StreamId: streamIdNoQualifier}, + }, + }, + { + name: "with-optional-qualifier-in-only", + topics: []es.TopicDetail{ + {Name: 
eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.InSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.InSuffix}, + }, + tenantId: tenantId1, + expected: []map[string]interface{}{ + {param.StreamId: streamId}, + {param.StreamId: streamIdNoQualifier}, + }, + }, + { + name: "with-optional-qualifier-out-only", + topics: []es.TopicDetail{ + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.OutSuffix}, + }, + tenantId: tenantId1, + expected: []map[string]interface{}{ + {param.StreamId: streamId}, + {param.StreamId: streamIdNoQualifier}, + }, + }, + { + name: "with-optional-qualifier-invalid-only", + topics: []es.TopicDetail{ + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InvalidSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InvalidSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.InvalidSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.InvalidSuffix}, + }, + tenantId: tenantId1, + expected: []map[string]interface{}{ + {param.StreamId: streamId}, + {param.StreamId: streamIdNoQualifier}, + }, + }, + { + name: "with-optional-qualifier-notification-only", + topics: []es.TopicDetail{ + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + 
tenant2NoQualifier + eventstreams.NotificationSuffix}, }, tenantId: tenantId1, expected: []map[string]interface{}{ @@ -187,10 +251,16 @@ func TestGetStreamNames(t *testing.T) { topics: []es.TopicDetail{ {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.InvalidSuffix}, }, tenantId: tenantId0, expected: []map[string]interface{}{{param.StreamId: streamId}}, @@ -200,8 +270,12 @@ func TestGetStreamNames(t *testing.T) { topics: []es.TopicDetail{ {Name: eventstreams.TopicPrefix + tenant1ExtraPeriod + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1ExtraPeriod + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1ExtraPeriod + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1ExtraPeriod + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.OutSuffix}, + 
{Name: eventstreams.TopicPrefix + tenant0WithQualifier + eventstreams.InvalidSuffix}, }, tenantId: tenantId1, expected: []map[string]interface{}{{param.StreamId: streamIdExtraPeriod}}, @@ -211,12 +285,20 @@ func TestGetStreamNames(t *testing.T) { topics: []es.TopicDetail{ {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1WithQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant2WithQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant2NoQualifier + eventstreams.InvalidSuffix}, }, tenantId: tenantId0, expected: []map[string]interface{}{}, @@ -226,10 +308,16 @@ func TestGetStreamNames(t *testing.T) { topics: []es.TopicDetail{ {Name: tenant1WithQualifier + eventstreams.InSuffix}, {Name: tenant1WithQualifier + eventstreams.NotificationSuffix}, + {Name: tenant1WithQualifier + eventstreams.OutSuffix}, + {Name: tenant1WithQualifier + 
eventstreams.InvalidSuffix}, {Name: "bad-prefix" + tenant1WithQualifier + eventstreams.InSuffix}, {Name: "bad-prefix" + tenant1WithQualifier + eventstreams.NotificationSuffix}, + {Name: "bad-prefix" + tenant1WithQualifier + eventstreams.OutSuffix}, + {Name: "bad-prefix" + tenant1WithQualifier + eventstreams.InvalidSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InvalidSuffix}, }, tenantId: tenantId1, expected: []map[string]interface{}{{param.StreamId: streamIdNoQualifier}}, @@ -243,6 +331,8 @@ func TestGetStreamNames(t *testing.T) { {Name: eventstreams.TopicPrefix + tenant1WithQualifier + "bad-suffix"}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InSuffix}, {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.NotificationSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.OutSuffix}, + {Name: eventstreams.TopicPrefix + tenant1NoQualifier + eventstreams.InvalidSuffix}, }, tenantId: tenantId1, expected: []map[string]interface{}{{param.StreamId: streamIdNoQualifier}}, diff --git a/src/streams_delete.go b/src/streams_delete.go index 2ad0dd9..fd6c846 100644 --- a/src/streams_delete.go +++ b/src/streams_delete.go @@ -11,6 +11,7 @@ package main import ( "github.com/Alvearie/hri-mgmt-api/common/actionloopmin" "github.com/Alvearie/hri-mgmt-api/common/eventstreams" + "github.com/Alvearie/hri-mgmt-api/common/param" "github.com/Alvearie/hri-mgmt-api/streams" "log" "os" @@ -31,7 +32,7 @@ func deleteStreamMain(params map[string]interface{}) map[string]interface{} { return err } - resp := streams.Delete(params, service) + resp := streams.Delete(params, param.ParamValidator{}, service) logger.Printf("processing time deleteStreamMain, %d 
milliseconds \n", time.Since(start).Milliseconds()) return resp } diff --git a/src/tenants/create.go b/src/tenants/create.go index f9f5442..25e7fb2 100644 --- a/src/tenants/create.go +++ b/src/tenants/create.go @@ -6,6 +6,7 @@ package tenants import ( + "fmt" "github.com/Alvearie/hri-mgmt-api/common/elastic" "github.com/Alvearie/hri-mgmt-api/common/param" "github.com/Alvearie/hri-mgmt-api/common/path" @@ -37,15 +38,11 @@ func Create( //create new index indexRes, err := esClient.Indices.Create(elastic.IndexFromTenantId(tenantId)) - if err != nil { - logger.Printf("Unable to publish new tenant [%s]. %s", tenantId, err.Error()) - return response.Error(http.StatusInternalServerError, err.Error()) - } - // parse the response - _, errRes := elastic.DecodeBody(indexRes, err, tenantId, logger) - if errRes != nil { - return errRes + _, elasticErr := elastic.DecodeBody(indexRes, err) + if elasticErr != nil { + return elasticErr.LogAndBuildApiResponse(logger, + fmt.Sprintf("Unable to publish new tenant [%s]", tenantId)) } // return the ID of the newly created tenant diff --git a/src/tenants/create_test.go b/src/tenants/create_test.go index 7abd120..4793676 100644 --- a/src/tenants/create_test.go +++ b/src/tenants/create_test.go @@ -63,22 +63,10 @@ func TestCreate(t *testing.T) { ), expected: response.Error( http.StatusInternalServerError, - fmt.Sprintf("%s", elasticErrMsg), + fmt.Sprintf( + "Unable to publish new tenant [tenant-123_]: elasticsearch client error: %s", elasticErrMsg), ), }, - { - name: "bad-response no error", - args: validArgs, - transport: test.NewFakeTransport(t).AddCall( - fmt.Sprintf("/%s-batches", tenantId), - test.ElasticCall{ - ResponseBody: `{bad json message : "`, - }, - ), - expected: response.Error( - http.StatusInternalServerError, - "Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"), - }, { name: "good-request", args: validArgs, diff --git a/src/tenants/delete.go 
b/src/tenants/delete.go index 2b091e3..3a8511b 100644 --- a/src/tenants/delete.go +++ b/src/tenants/delete.go @@ -6,6 +6,7 @@ package tenants import ( + "fmt" // "fmt" "github.com/Alvearie/hri-mgmt-api/common/elastic" "github.com/Alvearie/hri-mgmt-api/common/param" @@ -31,16 +32,10 @@ func Delete(params map[string]interface{}, client *elasticsearch.Client) map[str //make call to elastic to delete tenant res, err2 := client.Indices.Delete(index) - if err2 != nil { - logger.Println(err2.Error()) - return response.Error(http.StatusInternalServerError, err2.Error()) - } - - _, errResp := elastic.DecodeBody(res, err2, tenantId, logger) - if errResp != nil { - logger.Printf("Error in decode: %v", errResp) - return errResp + _, elasticErr := elastic.DecodeBody(res, err2) + if elasticErr != nil { + return elasticErr.LogAndBuildApiResponse(logger, fmt.Sprintf("Could not delete tenant [%s]", tenantId)) } return map[string]interface{}{ diff --git a/src/tenants/delete_test.go b/src/tenants/delete_test.go index 43aa73e..b66a3d5 100644 --- a/src/tenants/delete_test.go +++ b/src/tenants/delete_test.go @@ -54,22 +54,9 @@ func TestDelete(t *testing.T) { ), expected: response.Error( http.StatusInternalServerError, - fmt.Sprintf("%s", elasticErrMsg), + fmt.Sprintf("Could not delete tenant [tenant123]: elasticsearch client error: %s", elasticErrMsg), ), }, - { - name: "body decode error on ES OK Response", - args: validArgs, - transport: test.NewFakeTransport(t).AddCall( - fmt.Sprintf("/%s-batches", tenantId), - test.ElasticCall{ - ResponseBody: `{bad json message : "`, - }, - ), - expected: response.Error( - http.StatusInternalServerError, - "Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"), - }, { name: "good-request", args: validArgs, diff --git a/src/tenants/get.go b/src/tenants/get.go index 2c799af..25cd70d 100644 --- a/src/tenants/get.go +++ b/src/tenants/get.go @@ -19,14 +19,10 @@ func Get(client 
 *elasticsearch.Client) map[string]interface{} {
 	//Use elastic to return the list of indices
 	res, err := client.Cat.Indices(client.Cat.Indices.WithH("index"), client.Cat.Indices.WithFormat("json"))
-	if err != nil {
-		logger.Println(err.Error())
-		return response.Error(http.StatusInternalServerError, err.Error())
-	}
-	body, errResp := elastic.DecodeBodyFromJsonArray(res, err, logger)
-	if errResp != nil {
-		return errResp
+	body, elasticErr := elastic.DecodeBodyFromJsonArray(res, err)
+	if elasticErr != nil {
+		return elasticErr.LogAndBuildApiResponse(logger, "Could not retrieve tenants")
 	}

 	//sort through result for tenantIds and add to an array
diff --git a/src/tenants/get_by_id.go b/src/tenants/get_by_id.go
index 9fe4577..220e6b8 100644
--- a/src/tenants/get_by_id.go
+++ b/src/tenants/get_by_id.go
@@ -6,6 +6,7 @@ package tenants

 import (
+	"fmt"
 	// "fmt"
 	"github.com/Alvearie/hri-mgmt-api/common/elastic"
 	"github.com/Alvearie/hri-mgmt-api/common/param"
@@ -32,19 +33,15 @@ func GetById(params map[string]interface{}, client *elasticsearch.Client) map[st
 	index := elastic.IndexFromTenantId(tenantId)

 	var res, err2 = client.Cat.Indices(client.Cat.Indices.WithIndex(index), client.Cat.Indices.WithFormat("json"))
-	if err2 != nil {
-		logger.Println(err2.Error())
-		return response.Error(http.StatusInternalServerError, err2.Error())
-	}
+	resultBody, elasticErr := elastic.DecodeBodyFromJsonArray(res, err2)
+	if elasticErr != nil {
+		if elasticErr.Code == http.StatusNotFound {
+			msg := "Tenant: " + tenantId + " not found"
+			logger.Println(msg)
+			return response.Error(http.StatusNotFound, msg)
+		}

-	if res.StatusCode == http.StatusNotFound {
-		logger.Println("Tenant not found")
-		return response.Error(http.StatusNotFound, "Tenant: "+tenantId+" not found")
-	}
-	// Decode response
-	resultBody, errResp := elastic.DecodeBodyFromJsonArray(res, err2, logger)
-	if errResp != nil {
-		return errResp
+		return elasticErr.LogAndBuildApiResponse(logger, fmt.Sprintf("Could not retrieve tenant %s", tenantId))
 	}

 	return response.Success(http.StatusOK, resultBody[0])
diff --git a/src/tenants/get_by_id_test.go b/src/tenants/get_by_id_test.go
index 52380c5..014c6fa 100644
--- a/src/tenants/get_by_id_test.go
+++ b/src/tenants/get_by_id_test.go
@@ -64,7 +64,7 @@ func TestGetById(t *testing.T) {
 			),
 			expected: response.Error(
 				http.StatusInternalServerError,
-				fmt.Sprintf("%s", elasticErrMsg),
+				fmt.Sprintf("Could not retrieve tenant pi001: elasticsearch client error: %s", elasticErrMsg),
 			),
 		},
 		{
@@ -103,33 +103,6 @@ func TestGetById(t *testing.T) {
 			),
 			expected: response.Error(http.StatusNotFound, "Tenant: bad-tenant not found"),
 		},
-		{
-			name: "body decode error on ES OK Response",
-			args: validPathArg,
-			transport: test.NewFakeTransport(t).AddCall(
-				"/_cat/indices/pi001-batches",
-				test.ElasticCall{
-					ResponseBody: `{bad json message : "`,
-				},
-			),
-			expected: response.Error(
-				http.StatusInternalServerError,
-				"Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"),
-		},
-		{
-			name: "body decode error on ES Response: 400 Bad Request",
-			args: validPathArg,
-			transport: test.NewFakeTransport(t).AddCall(
-				"/_cat/indices/pi001-batches",
-				test.ElasticCall{
-					ResponseStatusCode: http.StatusBadRequest,
-					ResponseBody:       `{bad json message : "`,
-				},
-			),
-			expected: response.Error(
-				http.StatusInternalServerError,
-				"Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"),
-		},
 		{
 			name: "success-case",
 			args: validPathArg,
diff --git a/src/tenants/get_test.go b/src/tenants/get_test.go
index e01538e..3a7b05b 100644
--- a/src/tenants/get_test.go
+++ b/src/tenants/get_test.go
@@ -49,34 +49,9 @@ func TestGet(t *testing.T) {
 			),
 			expected: response.Error(
 				http.StatusInternalServerError,
-				fmt.Sprintf("%s", elasticErrMsg),
+				fmt.Sprintf("Could not retrieve tenants: elasticsearch client error: %s", elasticErrMsg),
 			),
 		},
-		{
-			name: "body decode error on ES OK Response",
-			ft: test.NewFakeTransport(t).AddCall(
-				"/_cat/indices",
-				test.ElasticCall{
-					ResponseBody: `{bad json message : "`,
-				},
-			),
-			expected: response.Error(
-				http.StatusInternalServerError,
-				"Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"),
-		},
-		{
-			name: "body decode error on ES Response: 400 Bad Request",
-			ft: test.NewFakeTransport(t).AddCall(
-				"/_cat/indices",
-				test.ElasticCall{
-					ResponseStatusCode: http.StatusBadRequest,
-					ResponseBody:       `{bad json message : "`,
-				},
-			),
-			expected: response.Error(
-				http.StatusInternalServerError,
-				"Error parsing the Elastic search response body: invalid character 'b' looking for beginning of object key string"),
-		},
 		{"simple",
 			map[string]interface{}{path.ParamOwPath: "/hri/tenants/"},
 			test.NewFakeTransport(t).AddCall(
diff --git a/test/Gemfile.lock b/test/Gemfile.lock
index 98d784b..e4f13e4 100644
--- a/test/Gemfile.lock
+++ b/test/Gemfile.lock
@@ -1,44 +1,46 @@
 GEM
   remote: https://rubygems.org/
   specs:
-    diff-lcs (1.3)
-    digest-crc (0.5.1)
+    diff-lcs (1.4.4)
+    digest-crc (0.6.3)
+      rake (>= 12.0.0, < 14.0.0)
     domain_name (0.5.20190701)
       unf (>= 0.0.5, < 1.0.0)
     http-accept (1.7.0)
     http-cookie (1.0.3)
       domain_name (~> 0.5)
-    httplog (1.4.2)
+    httplog (1.4.3)
       rack (>= 1.0)
       rainbow (>= 2.0.0)
     mime-types (3.3.1)
       mime-types-data (~> 3.2015)
-    mime-types-data (3.2020.0425)
+    mime-types-data (3.2021.0212)
     net_http_ssl_fix (0.0.10)
     netrc (0.11.0)
-    rack (2.2.2)
+    rack (2.2.3)
     rainbow (3.0.0)
+    rake (13.0.3)
     rest-client (2.1.0)
       http-accept (>= 1.7.0, < 2.0)
       http-cookie (>= 1.0.2, < 2.0)
       mime-types (>= 1.16, < 4.0)
       netrc (~> 0.8)
-    rspec (3.9.0)
-      rspec-core (~> 3.9.0)
-      rspec-expectations (~> 3.9.0)
-      rspec-mocks (~> 3.9.0)
-    rspec-core (3.9.1)
-      rspec-support (~> 3.9.1)
-    rspec-expectations (3.9.1)
+    rspec (3.10.0)
+      rspec-core (~> 3.10.0)
+      rspec-expectations (~> 3.10.0)
+      rspec-mocks (~> 3.10.0)
+    rspec-core (3.10.1)
+      rspec-support (~> 3.10.0)
+    rspec-expectations (3.10.1)
       diff-lcs (>= 1.2.0, < 2.0)
-      rspec-support (~> 3.9.0)
-    rspec-mocks (3.9.1)
+      rspec-support (~> 3.10.0)
+    rspec-mocks (3.10.2)
       diff-lcs (>= 1.2.0, < 2.0)
-      rspec-support (~> 3.9.0)
-    rspec-support (3.9.2)
+      rspec-support (~> 3.10.0)
+    rspec-support (3.10.2)
     rspec_junit_formatter (0.4.1)
       rspec-core (>= 2, < 4, != 2.12.0)
-    ruby-kafka (1.0.0)
+    ruby-kafka (1.3.0)
       digest-crc
     unf (0.1.4)
       unf_ext
@@ -63,4 +65,4 @@ RUBY VERSION
    ruby 2.6.5p114

 BUNDLED WITH
-   2.1.4
+   2.2.8
diff --git a/test/README.md b/test/README.md
index 46995fc..f6fcd0b 100644
--- a/test/README.md
+++ b/test/README.md
@@ -6,7 +6,7 @@
    ```bash
    brew install gnupg gnupg2
    ```
-   NOTE: This is dependent on Hombrew to be completely installed first.
+   NOTE: This is dependent on Homebrew to be completely installed first.

 3. Install Ruby

@@ -15,10 +15,10 @@
    ```bash
    rvm list
    ```
-5. If they're not listed in the script response above, install ruby 2.5.0
+5. If they're not listed in the script response above, install ruby 2.6.5
    ```bash
-   rvm install ruby-2.5.0
-   rvm use --default ruby-2.5.0
+   rvm install ruby-2.6.5
+   rvm use --default ruby-2.6.5
    gem install bundler
    ```

@@ -31,33 +31,47 @@
 7. (Optional) If running in IntelliJ, configure this project as an RVM Ruby project
    * Install the Ruby plugin `IntelliJ IDEA > Preferences... > Plugins`
-   * Configure Project `File > Project Structure > Project Settings > Project` and select `RVM: ruby-2.5.0`.
+   * Configure Project `File > Project Structure > Project Settings > Project` and select `RVM: ruby-2.6.5`.
    * Configure Module `File > Project Structure > Project Settings > Modules` and reconfigure module with RVM Ruby.

    NOTE: Ensure that your Ruby versions match across terminal default, Gemfile, and Gemfile.lock. If using IntelliJ, Ruby version in your module should match as well.

-8. (Optional) To run tests locally
-   - Export these environment variables. You can get most of the values from GitHub actions. Check IBM cloud service credentials or 1password for secure ones.
-     * HRI_API_KEY
-     * ELASTIC_URL
-     * ELASTIC_USER
-     * ELASTIC_PASSWORD
-     * SASL_PLAIN_PASSWORD
-     * COS_URL
-     * IAM_CLOUD_URL
-     * CLOUD_API_KEY
-     * APPID_URL
-     * APPID_TENANT
-     * TRAVIS_BRANCH
-   - Get an unencrypted copy of `jwt_assertion_tokens.json`
-   - Login with the IBM Cloud CLI and set the Functions namespace to match the branch being tested:
-     ```ibmcloud fn property set --namespace ```
-   - Run the tests:
-     ```rspec spec --tag ~@broken```
+8. (Optional) To run tests locally, export these environment variables. You can get most of the values from GitHub actions. Check IBM cloud service credentials or our password manager for secure ones.
+
+   - ELASTIC_URL - Found in GitHub actions
+   - ELASTIC_USER - Found in GitHub actions
+   - ELASTIC_PASSWORD - IBM Cloud -> Elasticsearch service -> Service credentials -> elastic-search-credential -> "password" field
+   - SASL_PLAIN_PASSWORD
+   - COS_URL - Found in GitHub actions
+   - IAM_CLOUD_URL - Found in GitHub actions
+   - CLOUD_API_KEY - Password Manager
+   - EVENTSTREAMS_BROKERS - Found in GitHub actions
+   - APPID_URL - Found in GitHub actions
+   - APPID_TENANT - Found in GitHub actions
+   - JWT_AUDIENCE_ID - Found in GitHub actions
+
+   You will also need to set an environment variable called TRAVIS_BRANCH that corresponds to your current working branch.
+
+   Then, install the IBM Cloud CLI, the Functions CLI, and the Event Streams CLI. You can find the RESOURCE_GROUP in GitHub actions and the CLOUD_API_KEY in our password manager:
+   ```bash
+   curl -sL https://ibm.biz/idt-installer | bash
+   bx login --apikey {CLOUD_API_KEY}
+   bx target -g {RESOURCE_GROUP}
+   bx plugin install cloud-functions
+   bx fn property set --namespace {TRAVIS_BRANCH}
+   bx plugin install event-streams
+   bx es init
+   ```
+
+   Select the number corresponding to the KAFKA_INSTANCE in GitHub actions.
+
+   Then, from within the top directory of this project, run the integration tests with:
+
+   ```rspec test/spec --tag ~@broken```

 # Dredd Tests
 Dredd is used to verify the implemented API meets our published [specification](https://github.com/Alvearie/hri-api-spec/blob/main/management-api/management.yml).
-By default it generates a test for every endpoint, uses the example values for input, and verifies the response matches the 200 response schema. All other responses are skipped. Ruby 'hooks' are used to modify the default behavior and do setup/teardown.
+By default, it generates a test for every endpoint, uses the example values for input, and verifies the response matches the 200 response schema. All other responses are skipped. Ruby 'hooks' are used to modify the default behavior and do setup/teardown.
 Here are some helpful documentation links:
 * https://dredd.org/en/latest/hooks/ruby.html
 * https://dredd.org/en/latest/data-structures.html#transaction
diff --git a/test/env.rb b/test/env.rb
index 206a37d..39256af 100644
--- a/test/env.rb
+++ b/test/env.rb
@@ -24,8 +24,9 @@
 require_relative './spec/helper'
 require_relative './spec/elastic_helper'
+require_relative './spec/event_streams_helper'
 require_relative './spec/hri_helper'
 require_relative './spec/cos_helper'
 require_relative './spec/iam_helper'
 require_relative './spec/app_id_helper'
-require_relative './spec/event_streams_helper'
\ No newline at end of file
+require_relative './spec/slack_helper'
\ No newline at end of file
diff --git a/test/spec/app_id_helper.rb b/test/spec/app_id_helper.rb
index 43bbfed..751c988 100644
--- a/test/spec/app_id_helper.rb
+++ b/test/spec/app_id_helper.rb
@@ -20,13 +20,9 @@ def get_access_token(application_name, scopes, audience_override = nil)
       end
     end

-    if @credentials.nil?
-      raise "Unable to get AppID Application credentials for #{application_name}"
-    end
-
     response = @helper.rest_post("#{@app_id_url}/oauth/v4/#{ENV['APPID_TENANT']}/token", {'grant_type' => 'client_credentials', 'scope' => scopes, 'audience' => (audience_override.nil? ? ENV['JWT_AUDIENCE_ID'] : audience_override)}, {'Content-Type' => 'application/x-www-form-urlencoded', 'Accept' => 'application/json', 'Authorization' => "Basic #{@credentials}"})
     raise 'App ID token request failed' unless response.code == 200
     JSON.parse(response.body)['access_token']
   end
-end
+end
\ No newline at end of file
diff --git a/test/spec/cos_helper.rb b/test/spec/cos_helper.rb
index 82c14ae..f2cef47 100644
--- a/test/spec/cos_helper.rb
+++ b/test/spec/cos_helper.rb
@@ -16,4 +16,9 @@ def get_object_data(bucket_name, object_path)
     response.code == 200 ? response.body : File.read(File.join(File.dirname(__FILE__), 'fhir_practitioners.txt'))
   end

+  def upload_object_data(bucket_name, object_path, object)
+    response = @helper.rest_put("#{@cos_url}/#{bucket_name}/#{object_path}", object, {'Authorization' => "Bearer #{@iam_token}"})
+    raise 'Failed to upload object to COS' unless response.code == 200
+  end
+
 end
\ No newline at end of file
diff --git a/test/spec/dredd_hooks.rb b/test/spec/dredd_hooks.rb
index 6196578..62b4f92 100644
--- a/test/spec/dredd_hooks.rb
+++ b/test/spec/dredd_hooks.rb
@@ -6,7 +6,7 @@
 require_relative '../env'

 include DreddHooks::Methods
-DEFAULT_TENANT_ID = 'provider1234'
+DREDD_TENANT_ID = 'provider1234'
 TENANT_ID_TENANTS_STREAMS = "#{ENV['TRAVIS_BRANCH'].tr('.-', '')}".downcase
 TENANT_ID_BATCHES = 'test'

@@ -16,7 +16,8 @@
   puts 'before all'
   @iam_token = IAMHelper.new.get_access_token
   app_id_helper = AppIDHelper.new
-  @token_all_roles = app_id_helper.get_access_token('hri_integration_tenant_test_integrator_consumer', 'tenant_test hri_data_integrator hri_consumer')
+  @token_integrator = app_id_helper.get_access_token('hri_integration_tenant_test_data_integrator', 'tenant_test tenant_provider1234 hri_data_integrator')
+  @token_internal = app_id_helper.get_access_token('hri_integration_tenant_test_internal', 'tenant_test tenant_provider1234 hri_consumer hri_internal')
   @token_invalid_tenant = app_id_helper.get_access_token('hri_integration_tenant_test_invalid', 'tenant_test_invalid')

   # uncomment to print out all the transaction names
@@ -40,14 +41,14 @@
 before 'tenant > /tenants/{tenantId} > Create new tenant > 201 > application/json' do |transaction|
   puts 'before create tenant 201'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

 before 'tenant > /tenants/{tenantId} > Create new tenant > 401 > application/json' do |transaction|
   puts 'before create tenant 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
 end

 # POST /tenants/{tenant_id}/streams/{integrator_id}
@@ -55,14 +56,20 @@
 before 'stream > /tenants/{tenantId}/streams/{streamId} > Create new Stream for a Tenant > 201 > application/json' do |transaction|
   puts 'before create stream 201'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

+before 'stream > /tenants/{tenantId}/streams/{streamId} > Create new Stream for a Tenant > 401 > application/json' do |transaction|
+  puts 'before create stream 401'
+  transaction['skip'] = false
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+end
+
 before 'stream > /tenants/{tenantId}/streams/{streamId} > Create new Stream for a Tenant > 400 > application/json' do |transaction|
   puts 'before create stream 400'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
   transaction['request']['body'] = '{bad json string"'
 end

@@ -72,15 +79,21 @@
 before 'stream > /tenants/{tenantId}/streams/{streamId} > Delete a Stream for a Tenant > 200 > application/json' do |transaction|
   puts 'before delete stream 200'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

+before 'stream > /tenants/{tenantId}/streams/{streamId} > Delete a Stream for a Tenant > 401 > application/json' do |transaction|
+  puts 'before delete stream 401'
+  transaction['skip'] = false
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+end
+
 before 'stream > /tenants/{tenantId}/streams/{streamId} > Delete a Stream for a Tenant > 404 > application/json' do |transaction|
   puts 'before delete stream 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, 'missingTenant')
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, 'invalid')
 end

 # DELETE /tenants/{tenant_id}
@@ -88,21 +101,31 @@
 before 'tenant > /tenants/{tenantId} > Delete a specific tenant > 200 > application/json' do |transaction|
   puts 'before delete tenant 200'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['expected']['headers'].delete('Content-Type')
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

+after 'tenant > /tenants/{tenantId} > Delete a specific tenant > 200 > application/json' do |transaction|
+  puts 'after delete tenant 200'
+  unless @batch_id.nil?
+    response = elastic.es_delete_batch(TENANT_ID_BATCHES, @batch_id)
+    raise 'Batch was not deleted from Elastic' unless response.code == 200
+    parsed_response = JSON.parse(response.body)
+    raise 'Batch was not deleted from Elastic' unless parsed_response['_id'] == @batch_id && parsed_response['result'] == 'deleted'
+  end
+end
+
 before 'tenant > /tenants/{tenantId} > Delete a specific tenant > 401 > application/json' do |transaction|
   puts 'before delete tenant 401'
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['skip'] = false
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
 end

 before 'tenant > /tenants/{tenantId} > Delete a specific tenant > 404 > application/json' do |transaction|
   puts 'before delete tenant 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, 'invalid')
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, 'invalid')
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

@@ -111,14 +134,19 @@
 before 'stream > /tenants/{tenantId}/streams > Get all Streams for Tenant > 200 > application/json' do |transaction|
   puts 'before get streams 200'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

+before 'stream > /tenants/{tenantId}/streams > Get all Streams for Tenant > 401 > application/json' do |transaction|
+  puts 'before get streams 401'
+  transaction['skip'] = false
+end
+
 before 'stream > /tenants/{tenantId}/streams > Get all Streams for Tenant > 404 > application/json' do |transaction|
   puts 'before get streams 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, 'invalid')
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, 'invalid')
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

@@ -127,14 +155,14 @@
 before 'tenant > /tenants > Get a list of all tenantIds > 200 > application/json' do |transaction|
   puts 'before get tenants 200'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

 before 'tenant > /tenants > Get a list of all tenantIds > 401 > application/json' do |transaction|
   puts 'before get tenants 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
 end

 # GET /tenants/{tenant_id}
@@ -142,20 +170,20 @@
 before 'tenant > /tenants/{tenantId} > Get information on a specific elastic index > 200 > application/json' do |transaction|
   puts 'before get tenant 200'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

 before 'tenant > /tenants/{tenantId} > Get information on a specific elastic index > 401 > application/json' do |transaction|
   puts 'before get tenant 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_TENANTS_STREAMS)
 end

 before 'tenant > /tenants/{tenantId} > Get information on a specific elastic index > 404 > application/json' do |transaction|
   puts 'before get tenant 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, 'invalid')
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, 'invalid')
   transaction['request']['headers']['Authorization'] = "Bearer #{@iam_token}"
 end

@@ -164,15 +192,23 @@
 before 'batch > /tenants/{tenantId}/batches > Get Batches for Tenant > 200 > application/json' do |transaction|
   puts 'before get batches 200'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-  transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
 end

 before 'batch > /tenants/{tenantId}/batches > Get Batches for Tenant > 401 > application/json' do |transaction|
   puts 'before get batches 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+end
+
+before 'batch > /tenants/{tenantId}/batches > Get Batches for Tenant > 400 > application/json' do |transaction|
+  puts 'before get batches 400'
+  transaction['skip'] = false
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!('gteDate=', 'gteDate=INVALID')
 end

 # GET /tenants/{tenantId}/batches/{batchId}
@@ -183,9 +219,9 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
     transaction['fullPath'].gsub!('batch12345', @batch_id)
-    transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   end
 end

@@ -195,9 +231,9 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
     transaction['fullPath'].gsub!('batch12345', @batch_id)
     transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   end
 end

@@ -207,9 +243,9 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
     transaction['fullPath'].gsub!('batch12345', 'INVALID')
-    transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   end
 end

@@ -218,8 +254,8 @@
 before 'batch > /tenants/{tenantId}/batches > Create Batch > 201 > application/json' do |transaction|
   puts 'before create batch 201'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-  transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
 end

 after 'batch > /tenants/{tenantId}/batches > Create Batch > 201 > application/json' do |transaction|
@@ -232,68 +268,71 @@
 before 'batch > /tenants/{tenantId}/batches > Create Batch > 400 > application/json' do |transaction|
   puts 'before create batch 400'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['body'] = '{bad json string"'
 end

 before 'batch > /tenants/{tenantId}/batches > Create Batch > 401 > application/json' do |transaction|
   puts 'before create batch 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
 end

 # PUT /tenants/{tenantId}/batches/{batchId}/action/sendComplete
-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Update Batch status to Send Complete > 200 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Indicate the Batch has been sent > 200 > application/json' do |transaction|
   puts 'before sendComplete 200'
   transaction['skip'] = false
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
     transaction['fullPath'].gsub!('batch12345', @batch_id)
-    transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
-    update_batch_script = {
-      script: {
-        source: 'ctx._source.status = params.status',
-        lang: 'painless',
-        params: {
-          status: 'started'
-        }
-      }
-    }.to_json
-    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, update_batch_script)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
+      {
+        "script" : {
+          "source": "ctx._source.status = params.status",
+          "lang": "painless",
+          "params" : {
+            "status" : "started"
+          }
+        }
+      }')
   end
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Update Batch status to Send Complete > 400 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Indicate the Batch has been sent > 400 > application/json' do |transaction|
   puts 'before sendComplete 400'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['body'] = '{bad json string"'
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Update Batch status to Send Complete > 401 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Indicate the Batch has been sent > 401 > application/json' do |transaction|
   puts 'before sendComplete 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Update Batch status to Send Complete > 404 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Indicate the Batch has been sent > 404 > application/json' do |transaction|
   puts 'before sendComplete 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-  transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Update Batch status to Send Complete > 409 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/sendComplete > Indicate the Batch has been sent > 409 > application/json' do |transaction|
   puts 'before sendComplete 409'
   transaction['skip'] = false
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
+    transaction['fullPath'].gsub!('batch12345', @batch_id)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
     elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
     {
       "script" : {
@@ -304,9 +343,6 @@
       }
     }')
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-    transaction['fullPath'].gsub!('batch12345', @batch_id)
-    transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
   end
 end

@@ -318,39 +354,42 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-    transaction['fullPath'].gsub!("batch12345", @batch_id)
-    update_batch_script = {
-      script: {
-        source: 'ctx._source.status = params.status',
-        lang: 'painless',
-        params: {
-          status: 'sendCompleted'
-        }
-      }
-    }.to_json
-    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, update_batch_script)
+    transaction['fullPath'].gsub!('batch12345', @batch_id)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_internal}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
+      {
+        "script" : {
+          "source": "ctx._source.status = params.status",
+          "lang": "painless",
+          "params" : {
+            "status" : "sendCompleted"
+          }
+        }
+      }')
   end
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/processingComplete > Indicate the Batch has been processed (Internal) > 400 > application/json' do |transaction|
   puts 'before processingComplete 400'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['body'] = '{bad json string"'
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/processingComplete > Indicate the Batch has been processed (Internal) > 401 > application/json' do |transaction|
   puts 'before processingComplete 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/processingComplete > Indicate the Batch has been processed (Internal) > 404 > application/json' do |transaction|
   puts 'before processingComplete 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!('batch12345', 'invalid')
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_internal}"
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/processingComplete > Indicate the Batch has been processed (Internal) > 409 > application/json' do |transaction|
@@ -359,32 +398,33 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-    transaction['fullPath'].gsub!("batch12345", @batch_id)
-    update_batch_script = {
-      script: {
-        source: 'ctx._source.status = params.status',
-        lang: 'painless',
-        params: {
-          status: 'completed'
-        }
-      }
-    }.to_json
-    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, update_batch_script)
+    transaction['fullPath'].gsub!('batch12345', @batch_id)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_internal}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
+      {
+        "script" : {
+          "source": "ctx._source.status = params.status",
+          "lang": "painless",
+          "params" : {
+            "status" : "completed"
+          }
+        }
+      }')
   end
 end

 # PUT /tenants/{tenantId}/batches/{batchId}/action/terminate
-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate Batch > 200 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate the Batch > 200 > application/json' do |transaction|
   puts 'before terminate 200'
   transaction['skip'] = false
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
     transaction['fullPath'].gsub!('batch12345', @batch_id)
-    transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
     elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
     {
       "script" : {
@@ -398,26 +438,29 @@
   end
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate Batch > 401 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate the Batch > 401 > application/json' do |transaction|
   puts 'before terminate 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate Batch > 404 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate the Batch > 404 > application/json' do |transaction|
   puts 'before terminate 404'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-  transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
 end

-before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate Batch > 409 > application/json' do |transaction|
+before 'batch > /tenants/{tenantId}/batches/{batchId}/action/terminate > Terminate the Batch > 409 > application/json' do |transaction|
   puts 'before terminate 409'
   transaction['skip'] = false
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
+    transaction['fullPath'].gsub!('batch12345', @batch_id)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_integrator}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
     elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
     {
       "script" : {
@@ -428,9 +471,6 @@
       }
     }')
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-    transaction['fullPath'].gsub!('batch12345', @batch_id)
-    transaction['request']['headers']['Authorization'] = "Bearer #{@token_all_roles}"
   end
 end

@@ -442,6 +482,9 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
+    transaction['fullPath'].gsub!('batch12345', @batch_id)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_internal}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
     elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
     {
       "script" : {
@@ -452,29 +495,29 @@
       }
     }')
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-    transaction['fullPath'].gsub!('batch12345', @batch_id)
   end
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/fail > Fail the Batch (Internal) > 400 > application/json' do |transaction|
   puts 'before fail 400'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['body'] = '{bad json string"'
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/fail > Fail the Batch (Internal) > 401 > application/json' do |transaction|
   puts 'before fail 401'
   transaction['skip'] = false
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
   transaction['request']['headers']['Authorization'] = "Bearer #{@token_invalid_tenant}"
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/fail > Fail the Batch (Internal) > 404 > application/json' do |transaction|
   puts 'before fail 404'
-  transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
   transaction['skip'] = false
+  transaction['fullPath'].gsub!('batch12345', 'invalid')
+  transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+  transaction['request']['headers']['Authorization'] = "Bearer #{@token_internal}"
 end

 before 'batch > /tenants/{tenantId}/batches/{batchId}/action/fail > Fail the Batch (Internal) > 409 > application/json' do |transaction|
@@ -483,27 +526,18 @@
   if @batch_id.nil?
     transaction['fail'] = 'nil batch_id'
   else
-    transaction['fullPath'].gsub!(DEFAULT_TENANT_ID, TENANT_ID_BATCHES)
-    transaction['fullPath'].gsub!("batch12345", @batch_id)
-    update_batch_script = {
-      script: {
-        source: 'ctx._source.status = params.status',
-        lang: 'painless',
-        params: {
-          status: 'failed'
-        }
-      }
-    }.to_json
-    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, update_batch_script)
+    transaction['fullPath'].gsub!('batch12345', @batch_id)
+    transaction['request']['headers']['Authorization'] = "Bearer #{@token_internal}"
+    transaction['fullPath'].gsub!(DREDD_TENANT_ID, TENANT_ID_BATCHES)
+    elastic.es_batch_update(TENANT_ID_BATCHES, @batch_id, '
+      {
+        "script" : {
+          "source": "ctx._source.status = params.status",
+          "lang": "painless",
+          "params" : {
+            "status" : "failed"
+          }
+        }
+      }')
   end
 end
-
-after_all do |transactions|
-  puts 'after_all'
-  unless @batch_id.nil?
-    elastic.es_delete_batch(TENANT_ID_BATCHES, @batch_id)
-  end
-
-  # make sure the tenant index is deleted
-  elastic.delete_index(TENANT_ID_TENANTS_STREAMS)
-end
\ No newline at end of file
diff --git a/test/spec/event_streams_helper.rb b/test/spec/event_streams_helper.rb
index 443e961..edecc5d 100644
--- a/test/spec/event_streams_helper.rb
+++ b/test/spec/event_streams_helper.rb
@@ -12,4 +12,18 @@ def get_topics
     @helper.exec_command("bx es topics").split("\n").map { |t| t.strip }
   end

+  def create_topic(topic, partitions)
+    unless get_topics.include?(topic)
+      @helper.exec_command("ibmcloud es topic-create #{topic} -p #{partitions}")
+      Logger.new(STDOUT).info("Topic #{topic} created.")
+    end
+  end
+
+  def delete_topic(topic)
+    if get_topics.include?(topic)
+      @helper.exec_command("ibmcloud es topic-delete #{topic} -f")
+      Logger.new(STDOUT).info("Topic #{topic} deleted.")
+    end
+  end
+
 end
\ No newline at end of file
diff --git a/test/spec/hri_helper.rb b/test/spec/hri_helper.rb
index e8bf447..4251456 100644
--- a/test/spec/hri_helper.rb
+++ b/test/spec/hri_helper.rb
@@ -33,11 +33,11 @@ def hri_post_batch(tenant_id, request_body, override_headers = {})
     @helper.rest_post(url, request_body, headers)
   end

-  def hri_put_batch(tenant_id, batch_id, action, record_count = {}, override_headers = {})
+  def hri_put_batch(tenant_id, batch_id, action, additional_params = {}, override_headers = {})
     url = "#{@base_url}/tenants/#{tenant_id}/batches/#{batch_id}/action/#{action}"
     headers = { 'Accept' => 'application/json', 'Content-Type' => 'application/json' }.merge(override_headers)
-    @helper.rest_put(url, record_count.to_json, headers)
+    @helper.rest_put(url, additional_params.to_json, headers)
   end

   def hri_post_tenant(tenant_id, request_body = nil, override_headers = {})
diff --git a/test/spec/hri_management_api_spec.rb b/test/spec/hri_management_api_no_validation_spec.rb
similarity index 81%
rename from test/spec/hri_management_api_spec.rb
rename to
test/spec/hri_management_api_no_validation_spec.rb index 76568d5..37e878a 100644 --- a/test/spec/hri_management_api_spec.rb +++ b/test/spec/hri_management_api_no_validation_spec.rb @@ -4,7 +4,7 @@ require_relative '../env' -describe 'HRI Management API ' do +describe 'HRI Management API Without Validation' do INVALID_ID = 'INVALID' TENANT_ID = 'test' @@ -32,29 +32,26 @@ @batch_prefix = "rspec-#{ENV['TRAVIS_BRANCH'].delete('.')}" @batch_name = "#{@batch_prefix}-#{SecureRandom.uuid}" create_batch = { - name: @batch_name, - status: STATUS, - recordCount: 1, - dataType: DATA_TYPE, - topic: BATCH_INPUT_TOPIC, - startDate: @start_date, - metadata: { - rspec1: 'test1', - rspec2: 'test2', - rspec3: { - rspec3A: 'test3A', - rspec3B: 'test3B' - } + name: @batch_name, + status: STATUS, + recordCount: 1, + dataType: DATA_TYPE, + topic: BATCH_INPUT_TOPIC, + startDate: @start_date, + metadata: { + rspec1: 'test1', + rspec2: 'test2', + rspec3: { + rspec3A: 'test3A', + rspec3B: 'test3B' } + } }.to_json - @batch_id, @new_batch_id = '-', '-' - while @batch_id[-1] == '-' - response = @elastic.es_create_batch(TENANT_ID, create_batch) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @batch_id = parsed_response['_id'] - Logger.new(STDOUT).info("New Batch Created With ID: #{@batch_id}") - end + response = @elastic.es_create_batch(TENANT_ID, create_batch) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @batch_id = parsed_response['_id'] + Logger.new(STDOUT).info("New Batch Created With ID: #{@batch_id}") #Get AppId Access Tokens @token_invalid_tenant = @app_id_helper.get_access_token('hri_integration_tenant_test_invalid', 'tenant_test_invalid') @@ -76,7 +73,6 @@ context 'POST /tenants/{tenant_id}' do it 'Success' do - #Create New Tenant response = @hri_helper.hri_post_tenant(TEST_TENANT_ID) expect(response.code).to eq 201 parsed_response = JSON.parse(response.body) @@ -165,13 +161,6 @@ 
expect(parsed_response['errorDescription']).to eql 'Missing required parameter(s): [retentionMs]' end - it 'Invalid Stream Name' do - response = @hri_helper.hri_post_tenant_stream(TEST_TENANT_ID, ".#{TEST_INTEGRATOR_ID}.#{TEST_INTEGRATOR_ID}", @stream_info.to_json) - expect(response.code).to eq 400 - parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to include "StreamId: .#{TEST_INTEGRATOR_ID}.#{TEST_INTEGRATOR_ID} must be lower-case alpha-numeric, '-', or '_', and no more than one '.'." - end - it 'Invalid retentionMs' do @stream_info[:retentionMs] = '3600000' response = @hri_helper.hri_post_tenant_stream(TEST_TENANT_ID, TEST_INTEGRATOR_ID, @stream_info.to_json) @@ -180,6 +169,13 @@ expect(parsed_response['errorDescription']).to eql 'Invalid parameter type(s): [retentionMs must be a float64, got string instead.]' end + it 'Invalid Stream Name' do + response = @hri_helper.hri_post_tenant_stream(TEST_TENANT_ID, ".#{TEST_INTEGRATOR_ID}.#{TEST_INTEGRATOR_ID}", @stream_info.to_json) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to include "StreamId: .#{TEST_INTEGRATOR_ID}.#{TEST_INTEGRATOR_ID} must be lower-case alpha-numeric, '-', or '_', and no more than one '.'." 
+ end + it 'Invalid cleanupPolicy' do @stream_info[:cleanupPolicy] = 12345 response = @hri_helper.hri_post_tenant_stream(TEST_TENANT_ID, TEST_INTEGRATOR_ID, @stream_info.to_json) @@ -266,7 +262,7 @@ response = @hri_helper.hri_delete_tenant(INVALID_ID) expect(response.code).to eq 404 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql 'index_not_found_exception: no such index [INVALID-batches]' + expect(parsed_response['errorDescription']).to eql "Could not delete tenant [#{INVALID_ID}]: index_not_found_exception: no such index [#{INVALID_ID}-batches]" end it 'Delete - Unauthorized' do @@ -287,6 +283,29 @@ expect(parsed_response['results'][0]['id']).to eql INTEGRATOR_ID end + it 'Success With Invalid Topic Only' do + invalid_topic = "ingest.#{TENANT_ID}.#{TEST_INTEGRATOR_ID}.invalid" + @event_streams_helper.create_topic(invalid_topic, 1) + response = @hri_helper.hri_get_tenant_streams(TENANT_ID) + expect(response.code).to eq 200 + parsed_response = JSON.parse(response.body) + stream_found = false + parsed_response['results'].each do |integrator| + stream_found = true if integrator['id'] == TEST_INTEGRATOR_ID + end + raise "Tenant Stream Not Found: #{TEST_INTEGRATOR_ID}" unless stream_found + + Timeout.timeout(15, nil, "Timed out waiting for the '#{invalid_topic}' topic to be deleted") do + loop do + break if @event_streams_helper.get_topics.include?(invalid_topic) + end + loop do + @event_streams_helper.delete_topic(invalid_topic) + break unless @event_streams_helper.get_topics.include?(invalid_topic) + end + end + end + it 'Missing Authorization' do response = @hri_helper.hri_get_tenant_streams(TENANT_ID, {'Authorization' => nil}) expect(response.code).to eq 401 @@ -474,13 +493,12 @@ } } } - while @new_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = 
JSON.parse(response.body) - @new_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") - end + + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @new_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") #Modify Batch Integrator ID update_batch_script = { @@ -539,7 +557,7 @@ response = @hri_helper.hri_get_batches(TENANT_ID, nil, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant 'test' is not included in the authorized scopes: .") + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") end it 'Unauthorized - No Roles' do @@ -593,13 +611,11 @@ } } } - while @new_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @new_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @new_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") #Get Batch response = @hri_helper.hri_get_batch(TENANT_ID, @new_batch_id, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) @@ -640,13 +656,11 @@ } } } - while @new_batch_id[-1] == '-' - 
response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @new_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @new_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") #Modify Batch Integrator ID update_batch_script = { @@ -688,7 +702,7 @@ response = @hri_helper.hri_get_batch(TENANT_ID, @batch_id, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant 'test' is not included in the authorized scopes: .") + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. 
Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") end it 'Unauthorized - No Roles' do @@ -730,13 +744,11 @@ it 'Successful Batch Creation' do #Create Batch - while @new_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_all_roles}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @new_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_all_roles}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @new_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Batch Created With ID: #{@new_batch_id}") #Verify Batch in Elastic response = @elastic.es_get_batch(TENANT_ID, @new_batch_id) @@ -883,7 +895,7 @@ response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant 'test' is not included in the authorized scopes: .") + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. 
Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") end it 'Unauthorized - No Roles' do @@ -912,8 +924,8 @@ context 'PUT /tenants/{tenantId}/batches/{batchId}/action/sendComplete' do before(:all) do - @record_count = { - recordCount: 1, + @expected_record_count = { + expectedRecordCount: 1, metadata: { rspec1: 'test3', rspec2: 'test4', @@ -936,21 +948,77 @@ } } } - @send_complete_batch_id = '-' end it 'Success' do #Create Batch - while @send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") + + #Set Batch Complete + response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 200 + + #Verify Batch Complete + response = @hri_helper.hri_get_batch(TENANT_ID, @send_complete_batch_id, {'Authorization' => "Bearer #{@token_consumer_role_only}"}) + expect(response.code).to eq 200 + parsed_response = JSON.parse(response.body) + expect(parsed_response['status']).to eql 'completed' + expect(parsed_response['endDate']).to_not be_nil + + #Verify Kafka Message + Timeout.timeout(KAFKA_TIMEOUT) do + Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@send_complete_batch_id} and status: completed") + 
@kafka_consumer.each_message do |message| + parsed_message = JSON.parse(message.value) + if parsed_message['id'] == @send_complete_batch_id && parsed_message['status'] == 'completed' + @message_found = true + expect(parsed_message['dataType']).to eql(DATA_TYPE) + expect(parsed_message['id']).to eql(@send_complete_batch_id) + expect(parsed_message['name']).to eql(@batch_name) + expect(parsed_message['topic']).to eql(BATCH_INPUT_TOPIC) + expect(parsed_message['status']).to eql('completed') + expect(DateTime.parse(parsed_message['startDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d")) + expect(DateTime.parse(parsed_message['endDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d")) + expect(parsed_message['metadata']['rspec1']).to eql('test3') + expect(parsed_message['metadata']['rspec2']).to eql('test4') + expect(parsed_message['metadata']['rspec4']['rspec4A']).to eql('test4A') + expect(parsed_message['metadata']['rspec4']['rspec4B']).to eql('test4B') + expect(parsed_message['metadata']['rspec3']).to be_nil + expect(parsed_message['expectedRecordCount']).to eq 1 + expect(parsed_message['recordCount']).to eq 1 + break + end + end + expect(@message_found).to be true end + end + + it 'Success with recordCount' do + record_count = { + recordCount: 1, + metadata: { + rspec1: 'test3', + rspec2: 'test4', + rspec4: { + rspec4A: 'test4A', + rspec4B: 'test4B' + } + } + } + + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") #Set Batch Complete - response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + response = 
@hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) expect(response.code).to eq 200 #Verify Batch Complete @@ -979,6 +1047,8 @@ expect(parsed_message['metadata']['rspec4']['rspec4A']).to eql('test4A') expect(parsed_message['metadata']['rspec4']['rspec4B']).to eql('test4B') expect(parsed_message['metadata']['rspec3']).to be_nil + expect(parsed_message['expectedRecordCount']).to eq 1 + expect(parsed_message['recordCount']).to eq 1 break end end @@ -987,7 +1057,7 @@ end it 'Invalid Batch ID' do - response = @hri_helper.hri_put_batch(TENANT_ID, INVALID_ID, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_all_roles}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, INVALID_ID, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_all_roles}"}) expect(response.code).to eq 404 parsed_response = JSON.parse(response.body) expect(parsed_response['errorDescription']).to include('document_missing_exception') @@ -997,25 +1067,23 @@ response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', nil, {'Authorization' => "Bearer #{@token_all_roles}"}) expect(response.code).to eq 400 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [recordCount]') + expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [expectedRecordCount]') end it 'Invalid Record Count' do - response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', {recordCount: "1"}, {'Authorization' => "Bearer #{@token_all_roles}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', {expectedRecordCount: "1"}, {'Authorization' => "Bearer #{@token_all_roles}"}) expect(response.code).to eq 400 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): 
[recordCount must be a float64, got string instead.]') + expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [expectedRecordCount must be a float64, got string instead.]') end it 'Conflict: Batch with a status other than started' do #Create Batch - while @send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") #Update Batch to Terminated Status update_batch_script = { @@ -1040,10 +1108,10 @@ expect(parsed_response['_source']['status']).to eql('terminated') #Attempt to complete batch - response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) expect(response.code).to eq 409 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql "Batch status was not updated to 'completed', batch is already in 'terminated' state" + expect(parsed_response['errorDescription']).to eql "The 'sendComplete' endpoint failed, batch is in 'terminated' state" #Delete batch response = @elastic.es_delete_batch(TENANT_ID, @send_complete_batch_id) @@ 
-1052,13 +1120,11 @@ it 'Conflict: Batch that already has a completed status' do #Create Batch - while @send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") #Update Batch to Completed Status update_batch_script = { @@ -1082,22 +1148,20 @@ parsed_response = JSON.parse(response.body) expect(parsed_response['_source']['status']).to eql('completed') - #Attempt to terminate batch - response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + #Attempt to complete batch + response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) expect(response.code).to eq 409 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql "Batch status was not updated to 'completed', batch is already in 'completed' state" + expect(parsed_response['errorDescription']).to eql "The 'sendComplete' endpoint failed, batch is in 'completed' state" end it 'Integrator ID can not update batches created with a different Integrator ID' do #Create Batch - while @send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, 
@batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") #Modify Batch Integrator ID update_batch_script = { @@ -1122,37 +1186,35 @@ expect(parsed_response['_source']['integratorId']).to eql('modified-integrator-id') #Verify Batch Not Updated With Different Integrator ID - response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) expect(parsed_response['errorDescription']).to include("but owned by 'modified-integrator-id") end it 'Unauthorized - Missing Authorization' do - response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @record_count) + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @expected_record_count) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) expect(parsed_response['errorDescription']).to eql('Missing Authorization header') end it 'Unauthorized - Invalid Tenant ID' do - response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer 
#{@token_invalid_tenant}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) - expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant 'test' is not included in the authorized scopes: .") + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") end it 'Unauthorized - No Roles' do #Create Batch - while @send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_no_roles}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_no_roles}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) expect(parsed_response['errorDescription']).to eql('Must have hri_data_integrator role to update a batch') @@ -1160,15 +1222,13 @@ it 'Unauthorized - Consumer Role Can Not Update Batch Status' do #Create Batch - while 
@send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @send_complete_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_consumer_role_only}"}) + response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_consumer_role_only}"}) expect(response.code).to eq 401 parsed_response = JSON.parse(response.body) expect(parsed_response['errorDescription']).to eql('Must have hri_data_integrator role to update a batch') @@ -1176,15 +1236,13 @@ it 'Unauthorized - Invalid Audience' do #Create Batch - while @send_complete_batch_id[-1] == '-' - response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) - expect(response.code).to eq 201 - parsed_response = JSON.parse(response.body) - @send_complete_batch_id = parsed_response['id'] - Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}") - end + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = 
JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
-      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_invalid_audience}"})
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_invalid_audience}"})
       expect(response.code).to eq 401
       parsed_response = JSON.parse(response.body)
       expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
@@ -1219,18 +1277,15 @@
         }
       }
     }
-    @terminate_batch_id = '-'
   end
 
   it 'Success' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     #Terminate Batch
     response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', @terminate_metadata, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
@@ -1278,13 +1333,11 @@
 
   it 'Conflict: Batch with a status other than started' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     #Update Batch to Completed Status
     update_batch_script = {
@@ -1312,7 +1365,7 @@
     response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
     expect(response.code).to eq 409
     parsed_response = JSON.parse(response.body)
-    expect(parsed_response['errorDescription']).to eql "Batch status was not updated to 'terminated', batch is already in 'completed' state"
+    expect(parsed_response['errorDescription']).to eql "The 'terminate' endpoint failed, batch is in 'completed' state"
 
     #Delete batch
     response = @elastic.es_delete_batch(TENANT_ID, @terminate_batch_id)
@@ -1321,13 +1374,11 @@
 
   it 'Conflict: Batch that already has a terminated status' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_all_roles}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_all_roles}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     #Update Batch to Completed Status
     update_batch_script = {
@@ -1355,18 +1406,16 @@
     response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_all_roles}"})
     expect(response.code).to eq 409
     parsed_response = JSON.parse(response.body)
-    expect(parsed_response['errorDescription']).to eql "Batch status was not updated to 'terminated', batch is already in 'terminated' state"
+    expect(parsed_response['errorDescription']).to eql "The 'terminate' endpoint failed, batch is in 'terminated' state"
   end
 
   it 'Integrator ID can not update batches created with a different Integrator ID' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     #Modify Batch Integrator ID
     update_batch_script = {
@@ -1408,18 +1457,16 @@
     response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_invalid_tenant}"})
     expect(response.code).to eq 401
     parsed_response = JSON.parse(response.body)
-    expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant 'test' is not included in the authorized scopes: .")
+    expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
   end
 
   it 'Unauthorized - No Roles' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_no_roles}"})
     expect(response.code).to eq 401
@@ -1429,13 +1476,11 @@
 
   it 'Unauthorized - Consumer Role Can Not Update Batch Status' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_consumer_role_only}"})
     expect(response.code).to eq 401
@@ -1445,13 +1490,11 @@
 
   it 'Unauthorized - Invalid Audience' do
     #Create Batch
-    while @terminate_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @terminate_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @terminate_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}")
 
     response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_invalid_audience}"})
     expect(response.code).to eq 401
@@ -1464,7 +1507,6 @@
 context 'End to End Test Using COS Object Data' do
 
   it 'Create Batch, Produce Kafka Message with COS Object Data, Read Kafka Message, and Send Complete' do
-    @end_to_end_batch_id = '-'
     @input_data = COSHelper.new.get_object_data('spark-output-2', 'dev_test_of_2/f_drug_clm/schema.json')
 
     #Create Batch
@@ -1486,13 +1528,11 @@
         }
       }
     }
-    while @end_to_end_batch_id[-1] == '-'
-      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_all_roles}"})
-      expect(response.code).to eq 201
-      parsed_response = JSON.parse(response.body)
-      @end_to_end_batch_id = parsed_response['id']
-      Logger.new(STDOUT).info("End to End: Batch Created With ID: #{@end_to_end_batch_id}")
-    end
+    response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_all_roles}"})
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @end_to_end_batch_id = parsed_response['id']
+    Logger.new(STDOUT).info("End to End: Batch Created With ID: #{@end_to_end_batch_id}")
 
     #Verify Kafka Message
     Timeout.timeout(KAFKA_TIMEOUT) do
@@ -1532,10 +1572,10 @@
     Logger.new(STDOUT).info("End to End: Kafka message received for the new batch containing COS object data")
 
     #Set Batch Complete
-    @record_count = {
-      recordCount: 1
+    @expected_record_count = {
+      expectedRecordCount: 1
     }
-    response = @hri_helper.hri_put_batch(TENANT_ID, @end_to_end_batch_id, 'sendComplete', @record_count, {'Authorization' => "Bearer #{@token_all_roles}"})
+    response = @hri_helper.hri_put_batch(TENANT_ID, @end_to_end_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_all_roles}"})
     expect(response.code).to eq 200
 
     #Verify Batch Complete
@@ -1553,6 +1593,8 @@
         if parsed_message['id'] == @end_to_end_batch_id
           @message_found = true
           expect(parsed_message['status']).to eql('completed')
+          expect(parsed_message['expectedRecordCount']).to eq 1
+          expect(parsed_message['recordCount']).to eq 1
           break
         end
       end
@@ -1563,4 +1605,4 @@
   end
 
-end
+end
\ No newline at end of file
diff --git a/test/spec/hri_management_api_validation_spec.rb b/test/spec/hri_management_api_validation_spec.rb
new file mode 100644
index 0000000..03afd3e
--- /dev/null
+++ b/test/spec/hri_management_api_validation_spec.rb
@@ -0,0 +1,1353 @@
+# (C) Copyright IBM Corp. 2020
+#
+# SPDX-License-Identifier: Apache-2.0
+
+require_relative '../env'
+
+describe 'HRI Management API With Validation' do
+
+  INVALID_ID = 'INVALID'
+  TENANT_ID = 'test'
+  INTEGRATOR_ID = 'claims'
+  DATA_TYPE = 'rspec-batch'
+  BATCH_INPUT_TOPIC = "ingest.#{TENANT_ID}.#{INTEGRATOR_ID}.in"
+  KAFKA_TIMEOUT = 60
+  INVALID_THRESHOLD = 5
+  INVALID_RECORD_COUNT = 3
+  ACTUAL_RECORD_COUNT = 15
+  EXPECTED_RECORD_COUNT = 15
+  FAILURE_MESSAGE = 'Rspec Failure Message'
+
+  before(:all) do
+    @elastic = ElasticHelper.new
+    @app_id_helper = AppIDHelper.new
+    @hri_helper = HRIHelper.new(`bx fn api list`.scan(/https.*hri/).first)
+    @start_date = DateTime.now
+
+    #Initialize Kafka Consumer
+    @kafka = Kafka.new(ENV['EVENTSTREAMS_BROKERS'], sasl_plain_username: 'token', sasl_plain_password: ENV['SASL_PLAIN_PASSWORD'], ssl_ca_certs_from_system: true)
+    @kafka_consumer = @kafka.consumer(group_id: 'rspec-test-consumer')
+    @kafka_consumer.subscribe("ingest.#{TENANT_ID}.#{INTEGRATOR_ID}.notification")
+
+    #Create Batch
+    @batch_prefix = "rspec-#{ENV['TRAVIS_BRANCH'].delete('.')}"
+    @batch_name = "#{@batch_prefix}-#{SecureRandom.uuid}"
+    create_batch = {
+      name: @batch_name,
+      status: 'started',
+      recordCount: 1,
+      dataType: DATA_TYPE,
+      topic: BATCH_INPUT_TOPIC,
+      startDate: @start_date,
+      metadata: {
+        rspec1: 'test1',
+        rspec2: 'test2',
+        rspec3: {
+          rspec3A: 'test3A',
+          rspec3B: 'test3B'
+        }
+      }
+    }.to_json
+    response = @elastic.es_create_batch(TENANT_ID, create_batch)
+    expect(response.code).to eq 201
+    parsed_response = JSON.parse(response.body)
+    @batch_id = parsed_response['_id']
+    Logger.new(STDOUT).info("New Batch Created With ID: #{@batch_id}")
+
+    #Get AppId Access Tokens
+    @token_invalid_tenant = @app_id_helper.get_access_token('hri_integration_tenant_test_invalid', 'tenant_test_invalid')
+    @token_no_roles = @app_id_helper.get_access_token('hri_integration_tenant_test', 'tenant_test')
+    @token_integrator_role_only = @app_id_helper.get_access_token('hri_integration_tenant_test_data_integrator', 'tenant_test hri_data_integrator')
+    @token_consumer_role_only = @app_id_helper.get_access_token('hri_integration_tenant_test_data_consumer', 'tenant_test hri_consumer')
+    @token_all_roles = @app_id_helper.get_access_token('hri_integration_tenant_test_integrator_consumer', 'tenant_test hri_data_integrator hri_consumer')
+    @token_internal_role_only = @app_id_helper.get_access_token('hri_integration_tenant_test_internal', 'tenant_test hri_internal')
+    @token_invalid_audience = @app_id_helper.get_access_token('hri_integration_tenant_test_integrator_consumer', 'tenant_test hri_data_integrator hri_consumer', ENV['APPID_TENANT'])
+  end
+
+  after(:all) do
+    #Delete Batches
+    response = @elastic.es_delete_by_query(TENANT_ID, "name:rspec-#{ENV['TRAVIS_BRANCH']}*")
+    response.nil? ? (raise 'Elastic batch delete did not return a response') : (expect(response.code).to eq 200)
+    Logger.new(STDOUT).info("Delete test batches by query response #{response.body}")
+    @kafka_consumer.stop
+  end
+
+  context 'PUT /tenants/{tenantId}/batches/{batchId}/action/sendComplete' do
+
+    before(:all) do
+      @expected_record_count = {
+        expectedRecordCount: EXPECTED_RECORD_COUNT,
+        metadata: {
+          rspec1: 'test3',
+          rspec2: 'test4',
+          rspec4: {
+            rspec4A: 'test4A',
+            rspec4B: 'test4B'
+          }
+        }
+      }
+      @batch_template = {
+        name: @batch_name,
+        dataType: DATA_TYPE,
+        topic: BATCH_INPUT_TOPIC,
+        invalidThreshold: INVALID_THRESHOLD,
+        metadata: {
+          rspec1: 'test1',
+          rspec2: 'test2',
+          rspec3: {
+            rspec3A: 'test3A',
+            rspec3B: 'test3B'
+          }
+        }
+      }
+    end
+
+    it 'Success' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      #Set Batch to Send Completed
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 200
+
+      #Verify Batch Send Completed
+      response = @hri_helper.hri_get_batch(TENANT_ID, @send_complete_batch_id, {'Authorization' => "Bearer #{@token_consumer_role_only}"})
+      expect(response.code).to eq 200
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['status']).to eql 'sendCompleted'
+      expect(parsed_response['endDate']).to be_nil
+      expect(parsed_response['expectedRecordCount']).to eq EXPECTED_RECORD_COUNT
+      expect(parsed_response['recordCount']).to eq EXPECTED_RECORD_COUNT
+
+      #Verify Kafka Message
+      Timeout.timeout(KAFKA_TIMEOUT) do
+        Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@send_complete_batch_id} and status: sendCompleted")
+        @kafka_consumer.each_message do |message|
+          parsed_message = JSON.parse(message.value)
+          if parsed_message['id'] == @send_complete_batch_id && parsed_message['status'] == 'sendCompleted'
+            @message_found = true
+            expect(parsed_message['dataType']).to eql(DATA_TYPE)
+            expect(parsed_message['id']).to eql(@send_complete_batch_id)
+            expect(parsed_message['name']).to eql(@batch_name)
+            expect(parsed_message['topic']).to eql(BATCH_INPUT_TOPIC)
+            expect(parsed_message['invalidThreshold']).to eql(INVALID_THRESHOLD)
+            expect(parsed_message['expectedRecordCount']).to eq EXPECTED_RECORD_COUNT
+            expect(parsed_message['recordCount']).to eq EXPECTED_RECORD_COUNT
+            expect(DateTime.parse(parsed_message['startDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d"))
+            expect(parsed_message['metadata']['rspec1']).to eql('test3')
+            expect(parsed_message['metadata']['rspec2']).to eql('test4')
+            expect(parsed_message['metadata']['rspec4']['rspec4A']).to eql('test4A')
+            expect(parsed_message['metadata']['rspec4']['rspec4B']).to eql('test4B')
+            expect(parsed_message['metadata']['rspec3']).to be_nil
+            break
+          end
+        end
+        expect(@message_found).to be true
+      end
+    end
+
+    it 'Invalid Tenant ID' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_invalid_tenant}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
+    end
+
+    it 'Invalid Batch ID' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, INVALID_ID, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_all_roles}"})
+      expect(response.code).to eq 404
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to include('document_missing_exception')
+    end
+
+    it 'Missing Record Count' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', nil, {'Authorization' => "Bearer #{@token_all_roles}"})
+      expect(response.code).to eq 400
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [expectedRecordCount]')
+    end
+
+    it 'Invalid Record Count' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', {expectedRecordCount: "1"}, {'Authorization' => "Bearer #{@token_all_roles}"})
+      expect(response.code).to eq 400
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [expectedRecordCount must be a float64, got string instead.]')
+    end
+
+    it 'Conflict: Batch with a status of completed' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      #Update Batch to Completed Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'completed'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @send_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "completed"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @send_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('completed')
+
+      #Attempt to send complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'sendComplete' endpoint failed, batch is in 'completed' state"
+    end
+
+    it 'Conflict: Batch with a status of terminated' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      #Update Batch to Terminated Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'terminated'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @send_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "terminated"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @send_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('terminated')
+
+      #Attempt to send complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'sendComplete' endpoint failed, batch is in 'terminated' state"
+    end
+
+    it 'Conflict: Batch with a status of failed' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      #Update Batch to Failed Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'failed'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @send_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "failed"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @send_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('failed')
+
+      #Attempt to send complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'sendComplete' endpoint failed, batch is in 'failed' state"
+    end
+
+    it 'Conflict: Batch that already has a sendCompleted status' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      #Update Batch to Completed Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'sendCompleted'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @send_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "sendCompleted"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @send_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('sendCompleted')
+
+      #Attempt to send complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'sendComplete' endpoint failed, batch is in 'sendCompleted' state"
+    end
+
+    it 'Unauthorized - Missing Authorization' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @expected_record_count)
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Missing Authorization header')
+    end
+
+    it 'Unauthorized - Invalid Tenant ID' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_invalid_tenant}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
+    end
+
+    it 'Unauthorized - No Roles' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_no_roles}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Must have hri_data_integrator role to update a batch')
+    end
+
+    it 'Unauthorized - Consumer Role Can Not Update Batch Status' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_consumer_role_only}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Must have hri_data_integrator role to update a batch')
+    end
+
+    it 'Unauthorized - Invalid Audience' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @send_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Send Complete Batch Created With ID: #{@send_complete_batch_id}")
+
+      response = @hri_helper.hri_put_batch(TENANT_ID, @send_complete_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_invalid_audience}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
+    end
+
+  end
+
+  context 'PUT /tenants/{tenantId}/batches/{batchId}/action/processingComplete' do
+
+    before(:all) do
+      @record_counts = {
+        invalidRecordCount: INVALID_RECORD_COUNT,
+        actualRecordCount: ACTUAL_RECORD_COUNT
+      }
+      @batch_template = {
+        name: @batch_name,
+        dataType: DATA_TYPE,
+        topic: BATCH_INPUT_TOPIC,
+        invalidThreshold: INVALID_THRESHOLD,
+        metadata: {
+          rspec1: 'test1',
+          rspec2: 'test2',
+          rspec3: {
+            rspec3A: 'test3A',
+            rspec3B: 'test3B'
+          }
+        }
+      }
+    end
+
+    it 'Success' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @processing_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Processing Complete Batch Created With ID: #{@processing_complete_batch_id}")
+
+      #Update Batch to sendCompleted Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'sendCompleted'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @processing_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "sendCompleted"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @processing_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('sendCompleted')
+
+      #Set Batch to Completed
+      response = @hri_helper.hri_put_batch(TENANT_ID, @processing_complete_batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 200
+
+      #Verify Batch Processing Completed
+      response = @hri_helper.hri_get_batch(TENANT_ID, @processing_complete_batch_id, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 200
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['status']).to eql 'completed'
+      expect(parsed_response['endDate']).to_not be_nil
+      expect(parsed_response['invalidRecordCount']).to eq INVALID_RECORD_COUNT
+
+      #Verify Kafka Message
+      Timeout.timeout(KAFKA_TIMEOUT) do
+        Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@processing_complete_batch_id} and status: completed")
+        @kafka_consumer.each_message do |message|
+          parsed_message = JSON.parse(message.value)
+          if parsed_message['id'] == @processing_complete_batch_id && parsed_message['status'] == 'completed'
+            @message_found = true
+            expect(parsed_message['dataType']).to eql(DATA_TYPE)
+            expect(parsed_message['id']).to eql(@processing_complete_batch_id)
+            expect(parsed_message['name']).to eql(@batch_name)
+            expect(parsed_message['topic']).to eql(BATCH_INPUT_TOPIC)
+            expect(parsed_message['invalidThreshold']).to eql(INVALID_THRESHOLD)
+            expect(parsed_message['invalidRecordCount']).to eql(INVALID_RECORD_COUNT)
+            expect(parsed_message['actualRecordCount']).to eql(ACTUAL_RECORD_COUNT)
+            expect(DateTime.parse(parsed_message['startDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d"))
+            expect(DateTime.parse(parsed_message['endDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d"))
+            expect(parsed_message['metadata']['rspec1']).to eql('test1')
+            expect(parsed_message['metadata']['rspec2']).to eql('test2')
+            expect(parsed_message['metadata']['rspec3']['rspec3A']).to eql('test3A')
+            expect(parsed_message['metadata']['rspec3']['rspec3B']).to eql('test3B')
+            break
+          end
+        end
+        expect(@message_found).to be true
+      end
+    end
+
+    it 'Invalid Tenant ID' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_invalid_tenant}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
+    end
+
+    it 'Invalid Batch ID' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, INVALID_ID, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 404
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to include('document_missing_exception')
+    end
+
+    it 'Missing invalidRecordCount' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', {actualRecordCount: ACTUAL_RECORD_COUNT}, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 400
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [invalidRecordCount]')
+    end
+
+    it 'Invalid invalidRecordCount' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', {invalidRecordCount: "1", actualRecordCount: ACTUAL_RECORD_COUNT}, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 400
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [invalidRecordCount must be a float64, got string instead.]')
+    end
+
+    it 'Missing actualRecordCount' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', {invalidRecordCount: INVALID_RECORD_COUNT}, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 400
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [actualRecordCount]')
+    end
+
+    it 'Invalid actualRecordCount' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', {actualRecordCount: "1", invalidRecordCount: INVALID_RECORD_COUNT}, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 400
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [actualRecordCount must be a float64, got string instead.]')
+    end
+
+    it 'Conflict: Batch with a status of started' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @processing_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Processing Complete Batch Created With ID: #{@processing_complete_batch_id}")
+
+      #Attempt to process complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @processing_complete_batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'processingComplete' endpoint failed, batch is in 'started' state"
+    end
+
+    it 'Conflict: Batch with a status of terminated' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @processing_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Processing Complete Batch Created With ID: #{@processing_complete_batch_id}")
+
+      #Update Batch to Terminated Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'terminated'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @processing_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "terminated"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @processing_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('terminated')
+
+      #Attempt to process complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @processing_complete_batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'processingComplete' endpoint failed, batch is in 'terminated' state"
+    end
+
+    it 'Conflict: Batch with a status of failed' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @processing_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Processing Complete Batch Created With ID: #{@processing_complete_batch_id}")
+
+      #Update Batch to Failed Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'failed'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @processing_complete_batch_id, update_batch_script)
+      response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['result']).to eql('updated')
+      Logger.new(STDOUT).info('Batch status updated to "failed"')
+
+      #Verify Batch Status Updated
+      response = @elastic.es_get_batch(TENANT_ID, @processing_complete_batch_id)
+      response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('failed')
+
+      #Attempt to process complete batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @processing_complete_batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'processingComplete' endpoint failed, batch is in 'failed' state"
+    end
+
+    it 'Conflict: Batch that already has a completed status' do
+      #Create Batch
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @processing_complete_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("New Processing Complete Batch Created With ID: #{@processing_complete_batch_id}")
+
+      #Update Batch to Completed Status
+      update_batch_script = {
+        script: {
+          source: 'ctx._source.status = params.status',
+          lang: 'painless',
+          params: {
+            status: 'completed'
+          }
+        }
+      }.to_json
+      response = @elastic.es_batch_update(TENANT_ID, @processing_complete_batch_id, update_batch_script)
+      response.nil? ?
(raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "completed"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @processing_complete_batch_id) + response.nil? ? (raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('completed') + + #Attempt to process complete batch + response = @hri_helper.hri_put_batch(TENANT_ID, @processing_complete_batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 409 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql "The 'processingComplete' endpoint failed, batch is in 'completed' state" + end + + it 'Unauthorized - Missing Authorization' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', @record_counts) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Missing Authorization header') + end + + it 'Unauthorized - Invalid Authorization' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Must have hri_internal role to mark a batch as processing complete') + end + + it 'Unauthorized - Invalid Tenant ID' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) + expect(response.code).to eq 401 + 
parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") + end + + it 'Unauthorized - Invalid Audience' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_invalid_audience}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") + end + + end + + context 'PUT /tenants/{tenantId}/batches/{batchId}/action/terminate' do + + before(:all) do + @batch_template = { + name: @batch_name, + dataType: DATA_TYPE, + topic: BATCH_INPUT_TOPIC, + invalidThreshold: INVALID_THRESHOLD, + metadata: { + rspec1: 'test1', + rspec2: 'test2', + rspec3: { + rspec3A: 'test3A', + rspec3B: 'test3B' + } + } + } + @terminate_metadata = { + metadata: { + rspec1: 'test3', + rspec2: 'test4', + rspec4: { + rspec4A: 'test4A', + rspec4B: 'test4B' + } + } + } + end + + it 'Success' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + #Terminate Batch + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', @terminate_metadata, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 200 + + #Verify Batch Terminated + response = @hri_helper.hri_get_batch(TENANT_ID, @terminate_batch_id, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 200 + parsed_response = JSON.parse(response.body) + 
expect(parsed_response['status']).to eql 'terminated' + expect(parsed_response['endDate']).to_not be_nil + + #Verify Kafka Message + Timeout.timeout(KAFKA_TIMEOUT) do + Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@terminate_batch_id} and status: terminated") + @kafka_consumer.each_message do |message| + parsed_message = JSON.parse(message.value) + if parsed_message['id'] == @terminate_batch_id && parsed_message['status'] == 'terminated' + @message_found = true + expect(parsed_message['dataType']).to eql(DATA_TYPE) + expect(parsed_message['id']).to eql(@terminate_batch_id) + expect(parsed_message['name']).to eql(@batch_name) + expect(parsed_message['topic']).to eql(BATCH_INPUT_TOPIC) + expect(parsed_message['invalidThreshold']).to eql(INVALID_THRESHOLD) + expect(DateTime.parse(parsed_message['startDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d")) + expect(DateTime.parse(parsed_message['endDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d")) + expect(parsed_message['metadata']['rspec1']).to eql('test3') + expect(parsed_message['metadata']['rspec2']).to eql('test4') + expect(parsed_message['metadata']['rspec4']['rspec4A']).to eql('test4A') + expect(parsed_message['metadata']['rspec4']['rspec4B']).to eql('test4B') + expect(parsed_message['metadata']['rspec3']).to be_nil + break + end + end + expect(@message_found).to be true + end + end + + it 'Invalid Tenant ID' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. 
Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") + end + + it 'Invalid Batch ID' do + response = @hri_helper.hri_put_batch(TENANT_ID, INVALID_ID, 'terminate', nil, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 404 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to include('document_missing_exception') + end + + it 'Conflict: Batch with a status of sendCompleted' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + #Update Batch to sendCompleted Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'sendCompleted' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @terminate_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "sendCompleted"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @terminate_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('sendCompleted') + + #Attempt to terminate batch + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 409 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql "The 'terminate' endpoint failed, batch is in 'sendCompleted' state" + end + + it 'Conflict: Batch with a status of completed' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + #Update Batch to Completed Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'completed' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @terminate_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "completed"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @terminate_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('completed') + + #Attempt to terminate batch + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 409 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql "The 'terminate' endpoint failed, batch is in 'completed' state" + end + + it 'Conflict: Batch with a status of failed' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + #Update Batch to Failed Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'failed' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @terminate_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "failed"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @terminate_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('failed') + + #Attempt to terminate batch + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 409 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql "The 'terminate' endpoint failed, batch is in 'failed' state" + end + + it 'Conflict: Batch that already has a terminated status' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + #Update Batch to Terminated Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'terminated' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @terminate_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "terminated"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @terminate_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('terminated') + + #Attempt to terminate batch + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 409 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql "The 'terminate' endpoint failed, batch is in 'terminated' state" + end + + it 'Unauthorized - Missing Authorization' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'terminate', nil) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Missing Authorization header') + end + + it 'Unauthorized - Invalid Tenant ID' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. 
Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") + end + + it 'Unauthorized - No Roles' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_no_roles}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Must have hri_data_integrator role to update a batch') + end + + it 'Unauthorized - Consumer Role Can Not Update Batch Status' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: #{@terminate_batch_id}") + + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_consumer_role_only}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Must have hri_data_integrator role to update a batch') + end + + it 'Unauthorized - Invalid Audience' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @terminate_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Terminate Batch Created With ID: 
#{@terminate_batch_id}") + + response = @hri_helper.hri_put_batch(TENANT_ID, @terminate_batch_id, 'terminate', nil, {'Authorization' => "Bearer #{@token_invalid_audience}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") + end + + end + + context 'PUT /tenants/{tenantId}/batches/{batchId}/action/fail' do + + before(:all) do + @record_counts_and_message = { + actualRecordCount: ACTUAL_RECORD_COUNT, + failureMessage: FAILURE_MESSAGE, + invalidRecordCount: INVALID_RECORD_COUNT + } + @batch_template = { + name: @batch_name, + dataType: DATA_TYPE, + topic: BATCH_INPUT_TOPIC, + invalidThreshold: INVALID_THRESHOLD, + metadata: { + rspec1: 'test1', + rspec2: 'test2', + rspec3: { + rspec3A: 'test3A', + rspec3B: 'test3B' + } + } + } + end + + it 'Success' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @failed_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Failed Batch Created With ID: #{@failed_batch_id}") + + #Update Batch to Completed Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'completed' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @failed_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "completed"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @failed_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('completed') + + #Set Batch to Failed + response = @hri_helper.hri_put_batch(TENANT_ID, @failed_batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 200 + + #Verify Batch Failed + response = @hri_helper.hri_get_batch(TENANT_ID, @failed_batch_id, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 200 + parsed_response = JSON.parse(response.body) + expect(parsed_response['status']).to eql 'failed' + expect(parsed_response['endDate']).to_not be_nil + expect(parsed_response['invalidRecordCount']).to eq INVALID_RECORD_COUNT + + #Verify Kafka Message + Timeout.timeout(KAFKA_TIMEOUT) do + Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@failed_batch_id} and status: failed") + @kafka_consumer.each_message do |message| + parsed_message = JSON.parse(message.value) + if parsed_message['id'] == @failed_batch_id && parsed_message['status'] == 'failed' + @message_found = true + expect(parsed_message['dataType']).to eql(DATA_TYPE) + expect(parsed_message['id']).to eql(@failed_batch_id) + expect(parsed_message['name']).to eql(@batch_name) + expect(parsed_message['topic']).to eql(BATCH_INPUT_TOPIC) + expect(parsed_message['invalidThreshold']).to eql(INVALID_THRESHOLD) + expect(parsed_message['invalidRecordCount']).to eql(INVALID_RECORD_COUNT) + expect(parsed_message['actualRecordCount']).to eql(ACTUAL_RECORD_COUNT) + expect(parsed_message['failureMessage']).to eql(FAILURE_MESSAGE) + expect(DateTime.parse(parsed_message['startDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d")) + expect(DateTime.parse(parsed_message['endDate']).strftime("%Y-%m-%d")).to eq(Date.today.strftime("%Y-%m-%d")) + expect(parsed_message['metadata']['rspec1']).to 
eql('test1') + expect(parsed_message['metadata']['rspec2']).to eql('test2') + expect(parsed_message['metadata']['rspec3']['rspec3A']).to eql('test3A') + expect(parsed_message['metadata']['rspec3']['rspec3B']).to eql('test3B') + break + end + end + expect(@message_found).to be true + end + end + + it 'Invalid Tenant ID' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_invalid_tenant}"}) + expect(response.code).to eq 401 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .") + end + + it 'Invalid Batch ID' do + response = @hri_helper.hri_put_batch(TENANT_ID, INVALID_ID, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 404 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to include('document_missing_exception') + end + + it 'Missing invalidRecordCount' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', {actualRecordCount: ACTUAL_RECORD_COUNT, failureMessage: 'RSpec failure message'}, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [invalidRecordCount]') + end + + it 'Invalid invalidRecordCount' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', {invalidRecordCount: "1", actualRecordCount: ACTUAL_RECORD_COUNT, failureMessage: 'RSpec failure message'}, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [invalidRecordCount must be a float64, got 
string instead.]') + end + + it 'Missing actualRecordCount' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', {invalidRecordCount: INVALID_RECORD_COUNT, failureMessage: 'RSpec failure message'}, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [actualRecordCount]') + end + + it 'Invalid actualRecordCount' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', {actualRecordCount: "1", invalidRecordCount: INVALID_RECORD_COUNT, failureMessage: 'RSpec failure message'}, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [actualRecordCount must be a float64, got string instead.]') + end + + it 'Missing failureMessage' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', {actualRecordCount: ACTUAL_RECORD_COUNT, invalidRecordCount: INVALID_RECORD_COUNT}, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Missing required parameter(s): [failureMessage]') + end + + it 'Invalid failureMessage' do + response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', {invalidRecordCount: INVALID_RECORD_COUNT, actualRecordCount: ACTUAL_RECORD_COUNT, failureMessage: 10}, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 400 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql('Invalid parameter type(s): [failureMessage must be a string, got float64 instead.]') + end + + it 'Conflict: Batch with a status of terminated' do + #Create Batch + response = 
@hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @failed_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Failed Batch Created With ID: #{@failed_batch_id}") + + #Update Batch to Terminated Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'terminated' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @failed_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "terminated"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @failed_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['_source']['status']).to eql('terminated') + + #Attempt to fail batch + response = @hri_helper.hri_put_batch(TENANT_ID, @failed_batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_internal_role_only}"}) + expect(response.code).to eq 409 + parsed_response = JSON.parse(response.body) + expect(parsed_response['errorDescription']).to eql "The 'fail' endpoint failed, batch is in 'terminated' state" + end + + it 'Conflict: Batch that already has a failed status' do + #Create Batch + response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"}) + expect(response.code).to eq 201 + parsed_response = JSON.parse(response.body) + @failed_batch_id = parsed_response['id'] + Logger.new(STDOUT).info("New Failed Batch Created With ID: #{@failed_batch_id}") + + #Update Batch to Failed Status + update_batch_script = { + script: { + source: 'ctx._source.status = params.status', + lang: 'painless', + params: { + status: 'failed' + } + } + }.to_json + response = @elastic.es_batch_update(TENANT_ID, @failed_batch_id, update_batch_script) + response.nil? ? (raise 'Elastic batch update did not return a response') : (expect(response.code).to eq 200) + parsed_response = JSON.parse(response.body) + expect(parsed_response['result']).to eql('updated') + Logger.new(STDOUT).info('Batch status updated to "failed"') + + #Verify Batch Status Updated + response = @elastic.es_get_batch(TENANT_ID, @failed_batch_id) + response.nil? ? 
(raise 'Elastic get batch did not return a response') : (expect(response.code).to eq 200)
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['_source']['status']).to eql('failed')
+
+      #Attempt to fail batch
+      response = @hri_helper.hri_put_batch(TENANT_ID, @failed_batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 409
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql "The 'fail' endpoint failed, batch is in 'failed' state"
+    end
+
+    it 'Unauthorized - Missing Authorization' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', @record_counts_and_message)
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Missing Authorization header')
+    end
+
+    it 'Unauthorized - Invalid Authorization' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql('Must have hri_internal role to mark a batch as failed')
+    end
+
+    it 'Unauthorized - Invalid Tenant ID' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_invalid_tenant}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
+    end
+
+    it 'Unauthorized - Invalid Audience' do
+      response = @hri_helper.hri_put_batch(TENANT_ID, @batch_id, 'fail', @record_counts_and_message, {'Authorization' => "Bearer #{@token_invalid_audience}"})
+      expect(response.code).to eq 401
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['errorDescription']).to eql("Unauthorized tenant access. Tenant '#{TENANT_ID}' is not included in the authorized scopes: .")
+    end
+
+  end
+
+  context 'End to End Test Using COS Data' do
+
+    it 'Create Batch, Produce Kafka Message with COS Data, Read Kafka Message, Process Complete, and Send Complete' do
+      @input_data = COSHelper.new.get_object_data('spark-output-2', 'dev_test_of_2/f_drug_clm/schema.json')
+
+      #Create Batch
+      @batch_name = "#{@batch_prefix}-#{SecureRandom.uuid}"
+      @batch_template = {
+        name: "rspec-#{ENV['TRAVIS_BRANCH']}-end-to-end-batch",
+        dataType: DATA_TYPE,
+        topic: BATCH_INPUT_TOPIC,
+        metadata: {
+          rspec1: 'test1',
+          rspec2: 'test2',
+          rspec3: {
+            rspec3A: 'test3A',
+            rspec3B: 'test3B'
+          }
+        }
+      }
+      response = @hri_helper.hri_post_batch(TENANT_ID, @batch_template.to_json, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 201
+      parsed_response = JSON.parse(response.body)
+      @end_to_end_batch_id = parsed_response['id']
+      Logger.new(STDOUT).info("End to End: Batch Created With ID: #{@end_to_end_batch_id}")
+
+      #Verify Kafka Message
+      Timeout.timeout(KAFKA_TIMEOUT) do
+        Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@end_to_end_batch_id} and status: started")
+        @kafka_consumer.each_message do |message|
+          parsed_message = JSON.parse(message.value)
+          if parsed_message['id'] == @end_to_end_batch_id && parsed_message['status'] == 'started'
+            @message_found = true
+            expect(parsed_message['id']).to eql(@end_to_end_batch_id)
+            break
+          end
+        end
+        expect(@message_found).to be true
+      end
+      Logger.new(STDOUT).info("End to End: Kafka message received for the creation of batch with ID: #{@end_to_end_batch_id}")
+
+      #Produce Kafka Message
+      @kafka.deliver_message({name: 'end_to_end_test_message', data: @input_data}.to_json, key: '1', topic: "ingest.#{TENANT_ID}.#{INTEGRATOR_ID}.notification", headers: {'batchId': @end_to_end_batch_id})
+      Logger.new(STDOUT).info('End to End: Kafka message containing COS object data successfully written')
+
+      #Verify Kafka Message
+      Timeout.timeout(KAFKA_TIMEOUT) do
+        Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@end_to_end_batch_id} and status: started")
+        @kafka_consumer.each_message do |message|
+          unless message.headers.empty?
+            if message.headers['batchId'] == @end_to_end_batch_id
+              @message_found = true
+              parsed_message = JSON.parse(message.value)
+              expect(parsed_message['name']).to eql('end_to_end_test_message')
+              expect(parsed_message['data']).to eql @input_data
+              break
+            end
+          end
+        end
+        expect(@message_found).to be true
+      end
+      Logger.new(STDOUT).info("End to End: Kafka message received for the new batch containing COS object data")
+
+      #Set Batch Send Complete
+      @expected_record_count = {
+        expectedRecordCount: EXPECTED_RECORD_COUNT
+      }
+      response = @hri_helper.hri_put_batch(TENANT_ID, @end_to_end_batch_id, 'sendComplete', @expected_record_count, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 200
+
+      #Verify Batch Send Complete
+      response = @hri_helper.hri_get_batch(TENANT_ID, @end_to_end_batch_id, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 200
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['status']).to eql 'sendCompleted'
+      Logger.new(STDOUT).info("End to End: Status of batch #{@end_to_end_batch_id} updated to 'sendCompleted'")
+
+      #Verify Kafka Message
+      Timeout.timeout(KAFKA_TIMEOUT) do
+        Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@end_to_end_batch_id} and status: sendCompleted")
+        @kafka_consumer.each_message do |message|
+          parsed_message = JSON.parse(message.value)
+          if parsed_message['id'] == @end_to_end_batch_id && parsed_message['status'] == 'sendCompleted'
+            @message_found = true
+            expect(parsed_message['status']).to eql('sendCompleted')
+            expect(parsed_message['expectedRecordCount']).to eq EXPECTED_RECORD_COUNT
+            expect(parsed_message['recordCount']).to eq EXPECTED_RECORD_COUNT
+            break
+          end
+        end
+        expect(@message_found).to be true
+      end
+      Logger.new(STDOUT).info("End to End: Kafka message received for batch #{@end_to_end_batch_id} sendCompleted")
+
+      #Set Batch Processing Complete
+      @record_counts = {
+        invalidRecordCount: INVALID_RECORD_COUNT,
+        actualRecordCount: ACTUAL_RECORD_COUNT
+      }
+      response = @hri_helper.hri_put_batch(TENANT_ID, @end_to_end_batch_id, 'processingComplete', @record_counts, {'Authorization' => "Bearer #{@token_internal_role_only}"})
+      expect(response.code).to eq 200
+
+      #Verify Batch Processing Complete
+      response = @hri_helper.hri_get_batch(TENANT_ID, @end_to_end_batch_id, {'Authorization' => "Bearer #{@token_integrator_role_only}"})
+      expect(response.code).to eq 200
+      parsed_response = JSON.parse(response.body)
+      expect(parsed_response['status']).to eql 'completed'
+      Logger.new(STDOUT).info("End to End: Status of batch #{@end_to_end_batch_id} updated to 'completed'")
+
+      #Verify Kafka Message
+      Timeout.timeout(KAFKA_TIMEOUT) do
+        Logger.new(STDOUT).info("Waiting for a Kafka message with Batch ID: #{@end_to_end_batch_id} and status: completed")
+        @kafka_consumer.each_message do |message|
+          parsed_message = JSON.parse(message.value)
+          if parsed_message['id'] == @end_to_end_batch_id && parsed_message['status'] == 'completed'
+            @message_found = true
+            expect(parsed_message['status']).to eql('completed')
+            break
+          end
+        end
+        expect(@message_found).to be true
+      end
+      Logger.new(STDOUT).info("End to End: Kafka message received for batch #{@end_to_end_batch_id} completed")
+
+    end
+
+  end
+
+end
\ No newline at end of file
diff --git a/test/spec/send_slack_message.rb b/test/spec/send_slack_message.rb
new file mode 100755
index 0000000..1e6eedf
--- /dev/null
+++ b/test/spec/send_slack_message.rb
@@ -0,0 +1,17 @@
+#!/usr/bin/env ruby
+# (C) Copyright IBM Corp. 2020
+#
+# SPDX-License-Identifier: Apache-2.0
+
+require_relative '../env'
+
+logger = Logger.new(STDOUT)
+
+if ENV['TRAVIS_BRANCH'] == 'release-2.1-2.1'
+  logger.info("#{ARGV[0]} tests failed. Sending a message to Slack...")
+  SlackHelper.new.send_slack_message(ARGV[0])
+else
+  logger.info("#{ARGV[0]} tests failed, but a Slack message is only sent for the release-2.1-2.1 branch.")
+end
+
+exit 1
\ No newline at end of file
diff --git a/test/spec/slack_helper.rb b/test/spec/slack_helper.rb
new file mode 100644
index 0000000..0ee8d78
--- /dev/null
+++ b/test/spec/slack_helper.rb
@@ -0,0 +1,23 @@
+# (C) Copyright IBM Corp. 2020
+#
+# SPDX-License-Identifier: Apache-2.0
+
+class SlackHelper
+
+  def initialize
+    @helper = Helper.new
+    @slack_url = ENV['SLACK_WEBHOOK']
+  end
+
+  def send_slack_message(test_type)
+    message = {
+      text: "*#{test_type} Test Failure:*
+      Repository: #{ENV['TRAVIS_BUILD_DIR'].split('/').last},
+      Branch: #{ENV['TRAVIS_BRANCH']},
+      Time: #{(Time.now - 14400).strftime("%m/%d/%Y %H:%M")},
+      Build Link: #{ENV['TRAVIS_JOB_WEB_URL'].gsub('https:///', 'https://travis.ibm.com/')}"
+    }.to_json
+    @helper.rest_post(@slack_url, message)
+  end
+
+end
\ No newline at end of file
diff --git a/test/spec/upload_test_reports.rb b/test/spec/upload_test_reports.rb
new file mode 100755
index 0000000..e7e74f8
--- /dev/null
+++ b/test/spec/upload_test_reports.rb
@@ -0,0 +1,32 @@
+#!/usr/bin/env ruby
+# (C) Copyright IBM Corp. 2020
+#
+# SPDX-License-Identifier: Apache-2.0
+
+require_relative '../env'
+
+# This script uploads JUnit test reports to Cloud Object Storage to be used by the UnitTH application to generate HTML
+# test trend reports for the IVT and Dredd tests. More information on unitth can be found here: http://junitth.sourceforge.net/
+#
+# The 'ivttest.xml' and 'dreddtests.xml' JUnit reports are uploaded to the 'hri-test-reports' Cloud Object Storage bucket,
+# which is also mounted on the 'unitth' kubernetes pod. This bucket keeps 30 days of reports that will be used to generate a
+# historical HTML report when the UnitTH jar is run on the pod.
+
+cos_helper = COSHelper.new
+logger = Logger.new(STDOUT)
+time = Time.now.strftime '%Y%m%d%H%M%S'
+
+if ENV['TRAVIS_BRANCH'] == ARGV[0]
+  if ARGV[1] == 'IVT'
+    logger.info('Uploading ivttest.xml to COS')
+    `sed -i 's#test/ivt_test_results#rspec#g' ivttest.xml`
+    cos_helper.upload_object_data('wh-hri-dev1-test-reports', "mgmt-api/#{ENV['TRAVIS_BRANCH']}/ivt/#{time}/ivttest.xml", File.read(File.join(Dir.pwd, 'ivttest.xml')))
+  elsif ARGV[1] == 'Dredd'
+    logger.info('Uploading dreddtests.xml to COS')
+    cos_helper.upload_object_data('wh-hri-dev1-test-reports', "mgmt-api/#{ENV['TRAVIS_BRANCH']}/dredd/#{time}/dreddtests.xml", File.read(File.join(Dir.pwd, 'dreddtests.xml')))
+  else
+    raise "Invalid argument: #{ARGV[1]}. Valid arguments: 'IVT' or 'Dredd'"
+  end
+else
+  logger.info("Test reports are only generated for the #{ARGV[0]} branch. Exiting.")
+end
\ No newline at end of file