JavaScript (v3): FSA (#5255)
cpyle0819 authored Aug 15, 2023
1 parent 33e3086 commit ee4977b
Showing 28 changed files with 53,162 additions and 44,753 deletions.
33 changes: 33 additions & 0 deletions .doc_gen/cross-content/cross_FSA_JavaScript_block.xml
@@ -0,0 +1,33 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "file://zonbook/docbookx.dtd" [
<!ENTITY % phrases-shared SYSTEM "file://AWSShared/common/phrases-shared.ent">
%phrases-shared;
]>
<block>
<para>
This example application analyzes and stores customer feedback cards. Specifically,
it fulfills the need of a fictitious hotel in New York City. The hotel receives feedback
from guests in various languages in the form of physical comment cards. That feedback
is uploaded into the app through a web client.

After an image of a comment card is uploaded, the following steps occur:
</para>
<itemizedlist>
<listitem>
<para>Text is extracted from the image using &TEXTRACT;.</para>
</listitem>
<listitem>
<para>&CMP; determines the sentiment of the extracted text and its language.</para>
</listitem>
<listitem>
<para>The extracted text is translated to English using &TSL;.</para>
</listitem>
<listitem>
<para>&POL; synthesizes an audio file from the extracted text.</para>
</listitem>
</itemizedlist>
<para> The full app can be deployed with the &CDK;. For source code and deployment
instructions, see the project in <ulink
url="https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javascriptv3/example_code/cross-services/feedback-sentiment-analyzer">
GitHub</ulink>. </para>
</block>
4 changes: 4 additions & 0 deletions .doc_gen/metadata/cross_metadata.yaml
@@ -10,6 +10,10 @@ cross_FSA:
versions:
- sdk_version: 3
block_content: cross_FSA_Ruby_block.xml
JavaScript:
versions:
- sdk_version: 3
block_content: cross_FSA_JavaScript_block.xml
service_main: textract
services:
lambda:
3 changes: 2 additions & 1 deletion applications/feedback_sentiment_analyzer/README.md
@@ -121,7 +121,8 @@ This application is deployed using the [AWS Cloud Development Kit (AWS CDK)](htt

### Play the result

1. Wait. The image upload takes a minute or two to process.
1. Wait. The image upload takes a minute or two to process. CloudFront also caches API responses for five minutes. To speed things up, create a CloudFront invalidation (see the sketch after these steps).
2. Choose the `Refresh` button.

![refresh](docs/refresh.png)
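
If you prefer to script that invalidation, here is a minimal sketch using the AWS SDK for JavaScript (v3). The distribution ID is a placeholder; use the value from your CDK deployment output. You can also create the invalidation in the CloudFront console.

```
import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

// Placeholder: replace with the distribution ID from your deployment output.
const DISTRIBUTION_ID = "YOUR_DISTRIBUTION_ID";

const cloudFront = new CloudFrontClient({});

// Invalidate every cached path so the next request fetches fresh API responses.
await cloudFront.send(
  new CreateInvalidationCommand({
    DistributionId: DISTRIBUTION_ID,
    InvalidationBatch: {
      // A unique reference so repeated runs aren't treated as duplicate requests.
      CallerReference: `fsa-invalidation-${Date.now()}`,
      Paths: { Quantity: 1, Items: ["/*"] },
    },
  })
);
```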
27 changes: 11 additions & 16 deletions applications/feedback_sentiment_analyzer/SPECIFICATION.md
@@ -190,12 +190,12 @@ Following are the required inputs and outputs of each Lambda function.

### ExtractText

Uses the Amazon Textract [DetectDocumentText](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html)
Use the Amazon Textract [DetectDocumentText](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html)
method to extract text from an image and return a unified text representation.

#### **Input**

Uses the data available on the [Amazon S3 event object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html).
Use the data available on the [Amazon S3 event object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html).

For example:

@@ -221,12 +221,12 @@ CET HÔTEL ÉTAIT SUPER
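
As a rough sketch of the ExtractText step (not the code shipped in this commit), a handler might call Amazon Textract and join the detected lines into one string. The event shape assumes the EventBridge S3 notification fields shown later in this specification; treat the field names as assumptions.

```
import {
  TextractClient,
  DetectDocumentTextCommand,
} from "@aws-sdk/client-textract";

/**
 * Extract text from the uploaded comment card image.
 * The event shape is assumed to be the EventBridge S3 notification.
 *
 * @param {{ detail: { bucket: { name: string }, object: { key: string } } }} event
 */
export const handler = async (event) => {
  const textractClient = new TextractClient({});

  const { Blocks } = await textractClient.send(
    new DetectDocumentTextCommand({
      Document: {
        S3Object: {
          Bucket: event.detail.bucket.name,
          Name: event.detail.object.key,
        },
      },
    })
  );

  // Keep only LINE blocks and join them into one unified string.
  return Blocks.filter((block) => block.BlockType === "LINE")
    .map((block) => block.Text)
    .join(" ");
};
```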

### AnalyzeSentiment

Uses the Amazon Comprehend [DetectSentiment](https://docs.aws.amazon.com/comprehend/latest/APIReference/API_DetectSentiment.html)
Use the Amazon Comprehend [DetectSentiment](https://docs.aws.amazon.com/comprehend/latest/APIReference/API_DetectSentiment.html)
method to detect sentiment (`POSITIVE`, `NEUTRAL`, `MIXED`, or `NEGATIVE`).

#### **Input**

Uses the data available on the [Lambda event object](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-event).
Use the data available on the [Lambda event object](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-event).

For example:

@@ -254,7 +254,7 @@ For example:

### TranslateText

Uses the Amazon Translate [TranslateText](https://docs.aws.amazon.com/translate/latest/APIReference/API_TranslateText.html)
Use the Amazon Translate [TranslateText](https://docs.aws.amazon.com/translate/latest/APIReference/API_TranslateText.html)
method to translate text to English and return the translated text.

#### **Input**
@@ -273,20 +273,19 @@ For example:

#### **Output**

Returns a string representing the translated text.
Returns an object containing the translated text.

For example:

```
THIS HOTEL WAS GREAT
{ translated_text: "THIS HOTEL WAS GREAT" }
```
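
A minimal sketch of this function (not the implementation in this commit) might look like the following. The input field names `extracted_text` and `source_language_code` are assumptions, since the input example isn't shown here; the output shape matches the example above.

```
import {
  TranslateClient,
  TranslateTextCommand,
} from "@aws-sdk/client-translate";

/**
 * Translate the extracted text to English.
 * Input field names are assumptions for illustration.
 *
 * @param {{ extracted_text: string, source_language_code: string }} event
 */
export const handler = async (event) => {
  const translateClient = new TranslateClient({});

  const { TranslatedText } = await translateClient.send(
    new TranslateTextCommand({
      Text: event.extracted_text,
      SourceLanguageCode: event.source_language_code,
      TargetLanguageCode: "en",
    })
  );

  return { translated_text: TranslatedText };
};
```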

---

### SynthesizeAudio

Uses the Amazon Polly [SynthesizeAudio](https://docs.aws.amazon.com/polly/latest/dg/API_SynthesizeSpeech.html)
method to convert input text into life-like speech.
Use the Amazon Polly [SynthesizeSpeech](https://docs.aws.amazon.com/polly/latest/dg/API_SynthesizeSpeech.html) method to convert input text into life-like speech. Store the synthesized audio in the provided Amazon S3 bucket with a content type of "audio/mp3".

#### **Input**

@@ -305,13 +305,9 @@ For example:

#### **Output**

Returns a string representing the key of the synthesized audio file.

For example:
Return a string representing the key of the synthesized audio file. The key is the provided object name appended with ".mp3". This key is sent to the frontend, which uses it to get the audio file directly from Amazon S3.

```
DOC-EXAMPLE-BUCKET/audio.mp3
```
For example, if the object name was "image.jpg", the output would be "image.jpg.mp3".
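
A minimal sketch of this function (again, not the code shipped in this commit) could synthesize the speech with Amazon Polly and upload it with the required content type. The event field names and the voice are assumptions for illustration.

```
import { PollyClient, SynthesizeSpeechCommand } from "@aws-sdk/client-polly";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

/**
 * Synthesize speech from the translated text and store it in Amazon S3.
 * Event field names are assumptions for illustration.
 *
 * @param {{ bucket: string, object: string, translated_text: string }} event
 */
export const handler = async (event) => {
  const pollyClient = new PollyClient({});
  const s3Client = new S3Client({});

  const { AudioStream } = await pollyClient.send(
    new SynthesizeSpeechCommand({
      Text: event.translated_text,
      OutputFormat: "mp3",
      VoiceId: "Joanna", // Assumed voice; any Polly voice works here.
    })
  );

  // The key is the original object name appended with ".mp3".
  const audioKey = `${event.object}.mp3`;

  await s3Client.send(
    new PutObjectCommand({
      Bucket: event.bucket,
      Key: audioKey,
      Body: await AudioStream.transformToByteArray(),
      ContentType: "audio/mp3",
    })
  );

  return audioKey;
};
```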

---

@@ -363,7 +358,7 @@ Specifically, the trigger is scoped to `ObjectCreated` events emitted by `my-s3-
"name": ["<dynamic media bucket name>"]
},
"object": {
"key": [{ "suffix": ".png" }, { "suffix": ".jpeg" }, { "suffix": ".jpg" }]
"key": [{"suffix": ".png"}, {"suffix": ".jpeg"}, {"suffix": ".jpg"}]
}
}
}
83 changes: 77 additions & 6 deletions applications/feedback_sentiment_analyzer/cdk/lib/functions.ts
@@ -1,7 +1,13 @@
import { Duration } from "aws-cdk-lib";
import { Code, Runtime } from "aws-cdk-lib/aws-lambda";
/*
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0
*/

import { AppFunctionConfig } from "./constructs/app-lambdas";
import {resolve} from "path";
import {BundlingOutput, Duration} from "aws-cdk-lib";
import {Code, Runtime} from "aws-cdk-lib/aws-lambda";

import {AppFunctionConfig} from "./constructs/app-lambdas";

const BASE_APP_FUNCTION: AppFunctionConfig = {
name: "TestLambda",
@@ -40,9 +46,9 @@ const BASE_APP_FUNCTION: AppFunctionConfig = {

const EXAMPLE_LANG_FUNCTIONS: AppFunctionConfig[] = [
// The 'name' property must match the examples below in new examples.
{ ...BASE_APP_FUNCTION, name: "ExtractText" },
{...BASE_APP_FUNCTION, name: "ExtractText"},
// Override properties by including them after expanding the function object.
{ ...BASE_APP_FUNCTION, memorySize: 256, name: "AnalyzeSentiment" },
{...BASE_APP_FUNCTION, memorySize: 256, name: "AnalyzeSentiment"},
{
...BASE_APP_FUNCTION,
codeAsset() {
@@ -55,7 +61,7 @@ const EXAMPLE_LANG_FUNCTIONS: AppFunctionConfig[] = [
},
name: "TranslateText",
},
{ ...BASE_APP_FUNCTION, name: "SynthesizeAudio" },
{...BASE_APP_FUNCTION, name: "SynthesizeAudio"},
];

const RUBY_ROOT =
@@ -94,11 +100,76 @@ const RUBY_FUNCTIONS: AppFunctionConfig[] = [
},
];

// Bundle inside the Node.js 18 container image: install dependencies,
// run the Rollup build, and copy the bundled ESM entry point to the
// asset output directory.
const JAVASCRIPT_BUNDLING_CONFIG = {
command: [
"/bin/sh",
"-c",
"npm i && \
npm run build && \
cp /asset-input/dist/index.mjs /asset-output/",
],
outputType: BundlingOutput.NOT_ARCHIVED,
user: "root",
image: Runtime.NODEJS_18_X.bundlingImage,
};

const JAVASCRIPT_FUNCTIONS = [
{
...BASE_APP_FUNCTION,
name: "ExtractText",
codeAsset() {
const source = resolve(
"../../../javascriptv3/example_code/cross-services/feedback-sentiment-analyzer/ExtractText"
);
return Code.fromAsset(source, {
bundling: JAVASCRIPT_BUNDLING_CONFIG,
});
},
},
{
...BASE_APP_FUNCTION,
name: "AnalyzeSentiment",
codeAsset() {
const source = resolve(
"../../../javascriptv3/example_code/cross-services/feedback-sentiment-analyzer/AnalyzeSentiment"
);
return Code.fromAsset(source, {
bundling: JAVASCRIPT_BUNDLING_CONFIG,
});
},
},
{
...BASE_APP_FUNCTION,
name: "TranslateText",
codeAsset() {
const source = resolve(
"../../../javascriptv3/example_code/cross-services/feedback-sentiment-analyzer/TranslateText"
);
return Code.fromAsset(source, {
bundling: JAVASCRIPT_BUNDLING_CONFIG,
});
},
},
{
...BASE_APP_FUNCTION,
name: "SynthesizeAudio",
codeAsset() {
const source = resolve(
"../../../javascriptv3/example_code/cross-services/feedback-sentiment-analyzer/SynthesizeAudio"
);
return Code.fromAsset(source, {
bundling: JAVASCRIPT_BUNDLING_CONFIG,
});
},
},
];

const FUNCTIONS: Record<string, AppFunctionConfig[]> = {
examplelang: EXAMPLE_LANG_FUNCTIONS,
// Add more languages here. For example
// javascript: JAVASCRIPT_FUNCTIONS,
ruby: RUBY_FUNCTIONS,
javascript: JAVASCRIPT_FUNCTIONS,
};

export function getFunctions(language: string = ""): AppFunctionConfig[] {
@@ -0,0 +1 @@
dist
@@ -0,0 +1,19 @@
{
"name": "example-javascriptv3-fsa-analyze-sentiment",
"version": "1.0.0",
"description": "Companion AWS Lambda functions for the Feedback Sentiment Analyzer Example.",
"main": "index.js",
"type": "module",
"scripts": {
"test": "vitest run **/*.unit.test.js",
"build": "rollup -c"
},
"author": "Corey Pyle <[email protected]>",
"license": "Apache-2.0",
"devDependencies": {
"@aws-sdk/client-comprehend": "^3.388.0",
"@rollup/plugin-commonjs": "^25.0.3",
"@rollup/plugin-node-resolve": "^15.1.0",
"rollup": "^3.28.0"
}
}
@@ -0,0 +1,31 @@
import { nodeResolve } from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";

export default {
input: "src/index.js",
output: {
/**
* The Lambda NodeJS runtime requires .mjs extensions to use ESM.
*/
file: "dist/index.mjs",
compact: true,
format: "es",
},

plugins: [
/**
* By default Rollup will not bundle node_modules. This plugin allows that.
*/
nodeResolve({ preferBuiltins: true }),
/**
* Allows CJS files to be included in bundle. This is mainly for Lodash.
*/
commonjs(),
],
external: [
/**
* Don't bundle the @aws-sdk. It's included in the Lambda NodeJS runtime.
*/
/@aws-sdk/,
],
};
@@ -0,0 +1,43 @@
/*
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0
*/

import {
ComprehendClient,
DetectDominantLanguageCommand,
DetectSentimentCommand,
} from "@aws-sdk/client-comprehend";

/**
* Determine the language and sentiment of the extracted text.
*
* @param {{ source_text: string}} extractTextOutput
*/
export const handler = async (extractTextOutput) => {
const comprehendClient = new ComprehendClient({});

const detectDominantLanguageCommand = new DetectDominantLanguageCommand({
Text: extractTextOutput.source_text,
});

// The source language is required for sentiment analysis and
// translation in the next step.
const {Languages} = await comprehendClient.send(
detectDominantLanguageCommand
);

const languageCode = Languages[0].LanguageCode;

const detectSentimentCommand = new DetectSentimentCommand({
Text: extractTextOutput.source_text,
LanguageCode: languageCode,
});

const {Sentiment} = await comprehendClient.send(detectSentimentCommand);

return {
sentiment: Sentiment,
language_code: languageCode,
};
};
@@ -0,0 +1,33 @@
import {describe, it, expect, vi} from "vitest";

const send = vi.fn(() => Promise.resolve());

vi.doMock("@aws-sdk/client-comprehend", async () => {
const actual = await vi.importActual("@aws-sdk/client-comprehend");
return {
...actual,
ComprehendClient: class {
send = send;
},
};
});

const {handler} = await import("../src/index.js");

describe("analyze-sentiment-handler", () => {
it("should return an object with the sentiment and language_code", async () => {
send
.mockResolvedValueOnce({
Languages: [{LanguageCode: "fr"}],
})
.mockResolvedValueOnce({
Sentiment: "POSITIVE",
});

const response = await handler({source_text: "J'adore."});
expect(response).toEqual({
sentiment: "POSITIVE",
language_code: "fr",
});
});
});
@@ -0,0 +1 @@
dist