id: GNLT5OQ45P
---
{% include content/source-region-unsupported.md %}
This document outlines how to upload a CSV file containing data to [Amazon S3](https://aws.amazon.com/s3/){:target="_blank"}, which uses [Lambda](https://aws.amazon.com/lambda/){:target="_blank"} to automatically parse, format, and upload the data to Segment.
You might have sources of data where you can't instrument Segment's SDKs, including other SaaS tools for which a Segment integration is not yet available. In many of these cases, you can extract data from these sources in CSV format, and then use Segment's server-side SDKs or HTTP tracking API to push the data to Segment.
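For instance, once you've extracted a row from a CSV, a single `curl` request can push it through the HTTP tracking API. This is a minimal sketch; the write key, user ID, event name, and property values below are placeholders:

```bash
# Send one track call to Segment's HTTP tracking API.
# Authenticate with your source's write key as the basic-auth
# username and an empty password. WRITE_KEY and the payload
# values are placeholders.
$ curl https://api.segment.io/v1/track \
  -u "WRITE_KEY:" \
  -H 'Content-Type: application/json' \
  -d '{
    "userId": "user-123",
    "event": "CSV Row Imported",
    "properties": { "source": "example.csv" }
  }'
```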
The goal of this walkthrough is to make this process easier by providing an automated process that ingests this data. Once you complete this walkthrough, you will have the following Segment, Amazon S3, Lambda, and [IAM](https://aws.amazon.com/iam/){:target="_blank"} resources deployed:
- a Segment S3 source
- an AWS Lambda function
- an access policy for the Lambda function that grants Amazon S3 permission to invoke it
- an AWS IAM execution role that grants the permissions your Lambda function needs through the permissions policy associated with this role
- an AWS S3 source bucket with a notification configuration that invokes the Lambda function
## Prerequisites
This tutorial assumes that you have some basic understanding of S3, Lambda, and the `aws cli` tool. If you haven't already, follow the instructions in [Getting Started with AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html){:target="_blank"} to create your first Lambda function. If you're unfamiliar with `aws cli`, follow the instructions in [Setting up the AWS Command Line Interface](https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html){:target="_blank"} before you proceed.
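To confirm that your environment is ready, you can check the CLI from your shell:

```bash
# Confirm the AWS CLI is installed and on your PATH.
$ aws --version

# Configure your credentials and default region if you haven't already.
$ aws configure
```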
This tutorial uses a command line terminal or shell to run commands. Commands appear preceded by a prompt symbol (`$`) and the name of the current directory, when appropriate.
On Linux and macOS, use your preferred shell and package manager. On macOS, you can use the Terminal application. On Windows 10, you can [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10){:target="_blank"} to get a Windows-integrated version of Ubuntu and Bash.
[Install NPM](https://www.npmjs.com/get-npm){:target="_blank"} to manage the function's dependencies.
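You can verify the installation from your shell:

```bash
# Confirm Node.js and NPM are available; NPM installs the
# function's dependencies in a later step.
$ node --version
$ npm --version
```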
## Getting started
### 1. Create an S3 source in Segment
Create an Amazon S3 source in your Segment workspace. Remember the write key for this source; you'll need it in a later step.
### 2. Create the execution role
Create the [execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html){:target="_blank"} that gives your function permission to access AWS resources.
**To create an execution role**
1. Open the [roles page](https://console.aws.amazon.com/iam/home#/roles){:target="_blank"} in the IAM console.
2. Choose **Create role**.
3. Create a role with the following properties:
    - Set the **Trusted entity** to **AWS Lambda**.
    - Set the **Permissions** to **AWSLambdaExecute**.
The **AWSLambdaExecute** policy has the permissions that the function needs to manage objects in Amazon S3 and write logs to CloudWatch Logs.
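If you prefer the command line, the console steps above roughly correspond to the following `aws cli` sketch. The role name `lambda-s3-role` is a placeholder; the trust policy lets Lambda assume the role:

```bash
# Create the execution role with a trust policy that lets
# Lambda assume it. (lambda-s3-role is a placeholder name.)
$ aws iam create-role \
  --role-name lambda-s3-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the AWSLambdaExecute managed policy to the role.
$ aws iam attach-role-policy \
  --role-name lambda-s3-role \
  --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute
```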
### 3. Create local files and an S3 bucket, and upload a sample object
Follow these steps to create your local files and S3 bucket, and to upload an object.
3. Create your bucket. **Record your bucket name** - you'll need it later!
4. In the source bucket, upload `track_1.csv`.
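If you'd rather script the bucket steps, an equivalent `aws cli` sketch looks like this. The bucket name is a placeholder and must be globally unique:

```bash
# Create the source bucket (bucket names must be globally unique).
$ aws s3 mb s3://my-csv-source-bucket

# Upload the sample CSV into the source bucket.
$ aws s3 cp track_1.csv s3://my-csv-source-bucket/
```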
### 4. Create the function
Next, create the Lambda function, install dependencies, and zip everything up so it can be deployed to AWS.
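In outline, packaging and deployment look something like the following sketch. The function name, runtime, and role ARN are placeholders, so substitute values from your own setup:

```bash
# Install the function's dependencies next to index.js.
$ npm install

# Zip the function code and its node_modules for upload.
$ zip -r function.zip .

# Create the Lambda function from the zip. The function name,
# runtime version, and role ARN below are placeholders.
$ aws lambda create-function \
  --function-name S3-Lambda-Segment \
  --runtime nodejs18.x \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-s3-role
```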
### 5. Test the Lambda function

In this step, you invoke the Lambda function manually using sample Amazon S3 event data.
**To test the Lambda function**
1. Create an empty file named `output.txt` in the `S3-Lambda-Segment` folder - the `aws cli` complains if it's not there.
```bash
S3-Lambda-Segment$ touch output.txt
```
**Note**: Calls to Segment's Object API don't show up in the Segment debugger.
### Configure Amazon S3 to publish events
In this step, you add the remaining configuration so that Amazon S3 can publish object-created events to AWS Lambda and invoke your Lambda function.
You'll do the following:

- add a permission to the Lambda function's access policy that grants Amazon S3 permission to invoke it
- add a notification configuration to your source bucket that invokes the Lambda function when object-created events occur
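A minimal `aws cli` sketch of these two steps, using the same placeholder bucket and function names as earlier and a hypothetical account ID and region in the ARN, might look like this:

```bash
# Allow the S3 bucket to invoke the Lambda function.
$ aws lambda add-permission \
  --function-name S3-Lambda-Segment \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-csv-source-bucket

# Tell the bucket to invoke the function on object-created events.
# The function ARN below is a placeholder.
$ aws s3api put-bucket-notification-configuration \
  --bucket my-csv-source-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:S3-Lambda-Segment",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```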
### Timestamps
This script automatically transforms all CSV timestamp columns named `createdAt` and `timestamp` to timestamp objects, regardless of nesting, in preparation for Segment ingestion. If your timestamps have a different name, search the example `index.js` code for the `colParser` function, and add your column names there for automatic transformation. If you make this modification, re-zip the package (using `zip -r function.zip .`) and upload the new zip to Lambda.
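For example, after modifying `colParser` you could re-package and re-deploy from the command line (the function name is a placeholder):

```bash
# Re-zip the modified function code and its dependencies.
$ zip -r function.zip .

# Upload the new package to the existing Lambda function.
$ aws lambda update-function-code \
  --function-name S3-Lambda-Segment \
  --zip-file fileb://function.zip
```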
## CSV formats
Define your CSV file structure based on the method you want to execute.
> warning "CSV support recommendation"
>
> Implementing a production-grade solution with this tutorial can be complex. Segment recommends that you submit a feature request for CSV support in Segment reverse ETL instead.
#### Identify structure
An `identify_XXXXX`.csv file uses the following field names:
In the above structure, the `userId` is required, but all other items are optional. Start all traits with `traits.` and then the trait name, for example `traits.account_type`. Similarly, start context fields with `context.` followed by the canonical structure. The same structure applies to `integrations.` too.
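As an illustration, here's one way to create a minimal `identify_1.csv` that follows these rules; the trait and context columns shown are hypothetical examples:

```bash
# Write a small example identify CSV. userId is required; the
# traits.* and context.* columns here are illustrative only.
$ cat > identify_1.csv <<'EOF'
userId,traits.account_type,traits.email,context.ip
user-123,premium,jane@example.com,203.0.113.10
user-456,free,joe@example.com,203.0.113.11
EOF
```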
#### Page/Screen structure
For example, a `screen_XXXXX` or `page_YYYY` file has the following field names:
7. `timestamp` (Unix time) - Optional
8. `integrations.<integration>` - Optional
#### Track structure
For example, a `track_XXXXX` file has the following field names:
The example `index.js` sample code above does not support ingestion of arrays. If you need this functionality, you can modify the sample code as needed.
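As a hypothetical illustration, a minimal `track_1.csv` built on the same dot-prefix conventions as the identify structure, with an `event` column naming the event, might look like this:

```bash
# Write a small example track CSV. The event column names the
# event; properties.* columns follow the same dot-prefix
# convention as traits.* above. This layout is illustrative.
$ cat > track_1.csv <<'EOF'
userId,event,properties.plan,timestamp
user-123,Subscription Started,annual,1672531200
EOF
```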
#### Object structure
There are cases when Segment's tracking API is not suitable for datasets that you might want to move to a warehouse. This could be e-commerce product data, media content metadata, campaign performance, and so on.