V1.0.0 (#8)
keithweaver authored Mar 6, 2022
1 parent d11d325 commit 46dd026
Showing 5 changed files with 142 additions and 39 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,2 +1,4 @@
.DS_Store
temp.md
test.sh
sample.txt
5 changes: 3 additions & 2 deletions Dockerfile
@@ -1,8 +1,9 @@
# https://aws.amazon.com/blogs/developer/aws-cli-v2-docker-image/
# https://hub.docker.com/r/amazon/aws-cli
FROM amazon/aws-cli:latest
FROM amazon/aws-cli:2.4.23

COPY LICENSE README.md /
WORKDIR /
ADD ./ /

COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["sh", "/entrypoint.sh"]
102 changes: 101 additions & 1 deletion README.md
@@ -3,13 +3,14 @@
Upload, download, or list files/folders through Github Actions.

```
- uses: keithweaver/aws-s3-github-action@master
- uses: keithweaver/aws-s3-github-action@v1.0.0
with:
command: cp
source: ./local_file.txt
destination: s3://yourbucket/folder/local_file.txt
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws_region: us-east-1
```

**Inputs**
@@ -22,4 +23,103 @@ Upload, download, or list files/folders through Github Actions.
| `aws_access_key_id` | Optional | N/A | The access key ID part of the credentials used to access the bucket. [More info](https://docs.aws.amazon.com/cli/latest/reference/configure/) |
| `aws_secret_access_key` | Optional | N/A | The secret access key part of the credentials used to access the bucket. [More info](https://docs.aws.amazon.com/cli/latest/reference/configure/) |
| `aws_session_token` | Optional | N/A | The session token part of the credentials used to access the bucket. [More info](https://docs.aws.amazon.com/cli/latest/reference/configure/) |
| `aws_region` | Optional | N/A | This is the region of the bucket. The S3 namespace is global, but each bucket is regional. |
| `metadata_service_timeout` | Optional | N/A | The number of seconds to wait until the metadata service request times out. [More info](https://docs.aws.amazon.com/cli/latest/reference/configure/) |
| `flags` | Optional | N/A | Additional query flags. |
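Putting several of these inputs together, a listing step could look like the sketch below (the bucket name `yourbucket` is a placeholder):

```
- uses: keithweaver/[email protected]
  with:
    command: ls
    source: s3://yourbucket/
    aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws_region: us-east-1
```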

## FAQs

**Where can I see this run in a pipeline as an example?**

[Here](https://github.com/keithweaver/aws-s3-github-action-demo) is the test/verification pipeline that is used.

**How can I use a specific version or test a feature branch?**

You specify a tag or branch with `@` after the Action name. The example below uses the `v1.0.0` tag.

```
- uses: keithweaver/[email protected]
...
```

This uses the master branch:

```
- uses: keithweaver/aws-s3-github-action@master
```

This uses a feature branch called `dev-branch`:

```
- uses: keithweaver/aws-s3-github-action@dev-branch
```

It is recommended that you point to a specific version to avoid unexpected changes affecting your workflow.


**Can I run this locally with Docker?**

```
# Docker must be installed and running on your local machine.
docker build . -t aws-s3-action
docker run \
--env INPUT_AWS_ACCESS_KEY_ID="<ACCESS_KEY>" \
--env INPUT_AWS_SECRET_ACCESS_KEY="<ACCESS_SECRET>" \
--env INPUT_SOURCE="./sample.txt" \
--env INPUT_DESTINATION="s3://yourbucket/sample.txt" \
aws-s3-action
# The environment variable names must match the INPUT_* names the image expects, or they will not be set.
```

**Can I run this locally outside of Docker?**

You can run the entrypoint script directly with bash:

```
INPUT_AWS_ACCESS_KEY_ID="<ACCESS_KEY>" \
INPUT_AWS_SECRET_ACCESS_KEY="<ACCESS_SECRET>" \
INPUT_SOURCE="./sample.txt" \
INPUT_DESTINATION="s3://yourbucket/sample.txt" \
bash entrypoint.sh
```
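Under the hood, `entrypoint.sh` reads those `INPUT_*` variables and validates the command before calling the AWS CLI. A simplified sketch of that command-selection logic (illustrative only, not the full script):

```shell
#!/bin/sh
# Simplified sketch of the command validation in entrypoint.sh.
INPUT_COMMAND="${INPUT_COMMAND:-cp}"          # default to cp when unset
VALID_COMMANDS="sync mb rb ls cp mv rm"
case " $VALID_COMMANDS " in
  *" $INPUT_COMMAND "*) echo "Using command: $INPUT_COMMAND" ;;
  *) echo "Invalid command provided :: [$INPUT_COMMAND]"; exit 1 ;;
esac
```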


## Errors

**upload failed: ./test1.txt to s3://.../test1.txt Unable to locate credentials**

Your credentials were not set correctly. A common cause is forgetting to set the Github Secrets.

**An error occurred (SignatureDoesNotMatch) when calling the PutObject operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.**

Solution is [here](https://github.com/aws/aws-cli/issues/602#issuecomment-60387771). [More info](https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la), [more](https://forums.docker.com/t/syncing-clock-with-host/10432/6).
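A common trigger for this error is clock drift inside the Docker container running the action. The timestamps and threshold below are hypothetical, but they illustrate the check: AWS rejects requests whose signature timestamp is skewed more than roughly 15 minutes (900 seconds).

```shell
# Hypothetical skew check; the epoch values are illustrative examples only.
host_epoch=1646582400        # example host timestamp
container_epoch=1646583500   # example container timestamp, drifted ahead
skew=$(( container_epoch - host_epoch ))
if [ "${skew#-}" -gt 900 ]; then
  echo "Clock skew too large: ${skew}s; resync the container clock"
else
  echo "Clock within tolerance"
fi
```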

**botocore.utils.BadIMDSRequestError**

[Here](https://stackoverflow.com/questions/68348222/aws-s3-ls-gives-error-botocore-utils-badimdsrequesterror-botocore-awsrequest-a) is the solution. As a result, we added the AWS region as an argument to this action.

**upload failed: folder1/ to s3://.../folder1/ [Errno 21] Is a directory: '/github/workspace/folder1/'**

You need to add the recursive flag for the `cp` command. It looks like:

```
- uses: keithweaver/[email protected]
name: Copy Folder
with:
command: cp
source: ./folder1/
destination: s3://bucket/folder1/
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws_region: us-east-1
flags: --recursive
```

**An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied**

[Solution](https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-listobjects-sync/).


**fatal error: An error occurred (404) when calling the HeadObject operation: Key "verify-aws-s3-action/folder1/" does not exist**

You need to add the recursive flag: `flags: --recursive`.
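Alternatively, the `sync` command copies a directory tree without needing `--recursive` (a sketch; the bucket and folder names are placeholders):

```
- uses: keithweaver/[email protected]
  name: Sync Folder
  with:
    command: sync
    source: ./folder1/
    destination: s3://bucket/folder1/
    aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws_region: us-east-1
```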
6 changes: 6 additions & 0 deletions action.yml
@@ -26,6 +26,12 @@ inputs:
aws_session_token:
description: "The AWS session token part of your credentials. More info: https://docs.aws.amazon.com/cli/latest/reference/configure/"
required: false
aws_region:
description: "This is the region of the bucket. S3 namespace is global but the bucket is regional."
required: false
metadata_service_timeout:
description: "The number of seconds to wait until the metadata service request times out. More info: https://docs.aws.amazon.com/cli/latest/reference/configure/"
required: false
flags:
description: "Additional query flags."
required: false
66 changes: 30 additions & 36 deletions entrypoint.sh
@@ -2,7 +2,7 @@

function usage_docs {
echo ""
echo "- uses: keithweaver/aws-s3-github-action@master"
echo "- uses: keithweaver/aws-s3-github-action@v1.0.0"
echo " with:"
echo " command: cp"
echo " source: ./local_file.txt"
@@ -11,70 +11,64 @@ function usage_docs {
echo " aws_secret_access_key: \${{ secret.AWS_SECRET_ACCESS_KEY }}"
echo ""
}
function get_profile {
PROFILE=""
if [ -z "$INPUT_PROFILE" ]
then
echo "Using the default profile"
else
echo "Using the profile :: [$INPUT_PROFILE]"
PROFILE=" --profile=$INPUT_PROFILE"
fi
}
function get_configuration_settings {
if [ -z "$INPUT_AWS_ACCESS_KEY_ID" ]
then
echo "AWS Access Key Id was not found. Using configuration from previous step."
else
aws configure set aws_access_key_id $INPUT_AWS_ACCESS_KEY_ID $PROFILE
aws configure set aws_access_key_id "$INPUT_AWS_ACCESS_KEY_ID"
fi

if [ -z "$INPUT_AWS_SECRET_ACCESS_KEY" ]
then
echo "AWS Secret Access Key was not found. Using configuration from previous step."
else
aws configure set aws_secret_access_key $INPUT_AWS_SECRET_ACCESS_KEY $PROFILE
aws configure set aws_secret_access_key "$INPUT_AWS_SECRET_ACCESS_KEY"
fi

if [ -z "$INPUT_AWS_SESSION_TOKEN" ]
then
echo "AWS Session Token was not found. Using configuration from previous step."
else
aws configure set aws_session_token $INPUT_AWS_SESSION_TOKEN $PROFILE
aws configure set aws_session_token "$INPUT_AWS_SESSION_TOKEN"
fi

if [ -z "$INPUT_AWS_SESSION_TOKEN" ]
if [ -z "$INPUT_METADATA_SERVICE_TIMEOUT" ]
then
echo "AWS Session Token was not found. Using configuration from previous step."
echo "Metadata service timeout was not found. Using configuration from previous step."
else
aws configure set aws_session_token $INPUT_AWS_SESSION_TOKEN $PROFILE
aws configure set metadata_service_timeout "$INPUT_METADATA_SERVICE_TIMEOUT"
fi

if [ -z "$INPUT_METADATA_SERVICE_TIMEOUT" ]
if [ -z "$INPUT_AWS_REGION" ]
then
echo "Metadata service timeout was not found. Using configuration from previous step."
echo "AWS region not found. Using configuration from previous step."
else
aws configure set metadata_service_timeout $INPUT_METADATA_SERVICE_TIMEOUT $PROFILE
aws configure set region "$INPUT_AWS_REGION"
fi
}
function get_command {
VALID_COMMANDS=("sync" "mb" "rb" "ls" "cp" "mv" "rm")
COMMAND="cp"
if [[ ! ${VALID_COMMANDS[*]} =~ "$INPUT_COMMAND" ]]
if [ -z "$INPUT_COMMAND" ]
then
echo "Command not set. Using cp."
elif [[ ! ${VALID_COMMANDS[*]} =~ "$INPUT_COMMAND" ]]
then
echo ""
echo "Invalid command provided :: [$INPUT_COMMAND]"
usage_docs
exit 1
else
echo "Using provided command"
COMMAND=$INPUT_COMMAND
fi
}
function validate_source_and_destination {
if [[ "$COMMAND" == "cp" || "$COMMAND" == "mv" || "$COMMAND" == "sync" ]]
if [ "$COMMAND" == "cp" ] || [ "$COMMAND" == "mv" ] || [ "$COMMAND" == "sync" ]
then
# Require source and destination
if [[ -z "$INPUT_SOURCE" && "$INPUT_DESTINATION" ]]
# Require source and target
if [ -z "$INPUT_SOURCE" ] && [ "$INPUT_DESTINATION" ]
then
echo ""
echo "Error: Requires source and destination."
@@ -84,46 +78,46 @@ function validate_source_and_destination {

# Verify at least one source or target have s3:// as prefix
# if [[] || []]
if [[ $INPUT_SOURCE != *"s3://"* ]] && [[ $INPUT_DESTINATION != *"s3://"* ]]
if [[ ! "$INPUT_SOURCE" =~ ^s3:// ]] && [[ ! "$INPUT_DESTINATION" =~ ^s3:// ]]
then
echo ""
echo "Error: Source destination or target destination must have s3:// as prefix."
echo "Error: Source or target must have s3:// as prefix."
usage_docs
exit 1
fi
else
# Require source
if [ -z "$INPUT_SOURCE" ]
then
echo "Error: Requires source and target destinations."
echo "Error: Requires source."
usage_docs
exit 1
fi

# Verify at least one source or target have s3:// as prefix
if [[ $INPUT_SOURCE != *"s3://"* ]]
# Verify that source has s3:// as prefix
if [[ ! $INPUT_SOURCE =~ ^s3:// ]]
then
echo "Error: Source destination must have s3:// as prefix."
echo "Error: Source must have s3:// as prefix."
usage_docs
exit 1
fi
fi
}
function main {
get_profile
echo "v1.0.0"
get_configuration_settings
get_command
validate_source_and_destination

aws --version
if [[ "$COMMAND" == "cp" || "$COMMAND" == "mv" || "$COMMAND" == "sync" ]]

if [ "$COMMAND" == "cp" ] || [ "$COMMAND" == "mv" ] || [ "$COMMAND" == "sync" ]
then
echo aws s3 $COMMAND "$INPUT_SOURCE" "$INPUT_DESTINATION" $INPUT_FLAGS
aws s3 $COMMAND "$INPUT_SOURCE" "$INPUT_DESTINATION" $INPUT_FLAGS
aws s3 "$COMMAND" "$INPUT_SOURCE" "$INPUT_DESTINATION" $INPUT_FLAGS
else
echo aws s3 $COMMAND "$INPUT_SOURCE" $INPUT_FLAGS
aws s3 $COMMAND "$INPUT_SOURCE" $INPUT_FLAGS
aws s3 "$COMMAND" "$INPUT_SOURCE" $INPUT_FLAGS
fi
}

