
Binaries upload to internal s3 bucket #8896

Closed
wants to merge 3 commits

Conversation


@alessandrolomanto alessandrolomanto commented Dec 17, 2024

What is the purpose of the change

This PR allows us to upload binaries to our internal S3 bucket.

  • Added new environment variables related to S3 authentication (we need to add them to our settings):
    • S3_ENDPOINT, S3_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY
  • Passed the environment variables above to both the docker run command in the Makefile and the release.yaml GitHub Action (see the usage sketch below)
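
A rough usage sketch of the release target once these variables are configured; the endpoint and region values below are placeholders, not the actual internal settings:

```sh
# Placeholder values; the real ones live in the repository/organization secrets.
export S3_ENDPOINT="https://s3.internal.example.com"
export S3_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"

# COSMWASM_VERSION is also forwarded by the release recipe (see the Makefile excerpt further down).
GITHUB_TOKEN="<github-token>" make release
```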

Testing and Verifying

I have tested the release workflow using the act tool and on my personal fork, and it is working; a rough local invocation is sketched below.
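
For reference, such a local run with act might look roughly like this; the triggering event, workflow path, and secret names are assumptions, not the exact command used:

```sh
# Rough local dry run with act (https://github.com/nektos/act).
# Event name, workflow path, and secret names are assumed, not taken from this PR.
act push \
  -W .github/workflows/release.yml \
  -s GITHUB_TOKEN="<github-token>" \
  -s S3_ENDPOINT="https://s3.internal.example.com" \
  -s S3_REGION="us-east-1" \
  -s AWS_ACCESS_KEY_ID="<access-key-id>" \
  -s AWS_SECRET_ACCESS_KEY="<secret-access-key>"
```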

Documentation and Release Note

  • Does this pull request introduce a new feature or user-facing behavior changes?
  • Changelog entry added to Unreleased section of CHANGELOG.md?

Where is the change documented?

  • Specification (x/{module}/README.md)
  • Osmosis documentation site
  • Code comments?
  • N/A

@alessandrolomanto alessandrolomanto added the V:state/compatible/backport (State machine compatible PR, should be backported), A:no-changelog, and A:backport/v28.x (backport patches to v28.x branch) labels on Dec 17, 2024
@alessandrolomanto alessandrolomanto requested a review from a team as a code owner December 17, 2024 09:58

coderabbitai bot commented Dec 17, 2024

Walkthrough

The pull request modifies the Makefile to enhance the release target's error handling and environment variable validation. The changes introduce a series of nested ifdef directives to ensure that critical S3 and AWS-related environment variables are defined before executing the release command. If any required variables are missing, specific error messages are printed to guide the user on which configurations need to be set.

Changes

| File | Change Summary |
|------|----------------|
| Makefile | Added nested conditional checks for S3 and AWS environment variables in the `release` target |

Sequence Diagram

flowchart TD
    A[Start Release Process] --> B{GITHUB_TOKEN defined?}
    B -->|No| C[Exit with Error]
    B -->|Yes| D{S3_ENDPOINT defined?}
    D -->|No| E[Print S3_ENDPOINT Error]
    D -->|Yes| F{S3_REGION defined?}
    F -->|No| G[Print S3_REGION Error]
    F -->|Yes| H{AWS_ACCESS_KEY_ID defined?}
    H -->|No| I[Print AWS_ACCESS_KEY_ID Error]
    H -->|Yes| J{AWS_SECRET_ACCESS_KEY defined?}
    J -->|No| K[Print AWS_SECRET_ACCESS_KEY Error]
    J -->|Yes| L[Execute Release Command]

The sequence diagram illustrates the new hierarchical validation process for environment variables before executing the release command, ensuring all critical configurations are in place.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
Makefile (1)

218-258: Consider simplifying the environment variable validation structure

While the current implementation correctly validates the presence of required environment variables, consider simplifying the deeply nested structure for better maintainability.

Here's a suggested approach using a helper target:

+ check-release-env:
+	@if [ -z "$(GITHUB_TOKEN)" ]; then \
+		echo "Error: GITHUB_TOKEN is not defined" >&2; exit 1; \
+	fi
+	@if [ -z "$(S3_ENDPOINT)" ]; then \
+		echo "Error: S3_ENDPOINT is not defined" >&2; exit 1; \
+	fi
+	@if [ -z "$(S3_REGION)" ]; then \
+		echo "Error: S3_REGION is not defined" >&2; exit 1; \
+	fi
+	@if [ -z "$(AWS_ACCESS_KEY_ID)" ]; then \
+		echo "Error: AWS_ACCESS_KEY_ID is not defined" >&2; exit 1; \
+	fi
+	@if [ -z "$(AWS_SECRET_ACCESS_KEY)" ]; then \
+		echo "Error: AWS_SECRET_ACCESS_KEY is not defined" >&2; exit 1; \
+	fi

- ifdef GITHUB_TOKEN
- ifdef S3_ENDPOINT
- ifdef S3_REGION
- ifdef AWS_ACCESS_KEY_ID
- ifdef AWS_SECRET_ACCESS_KEY
-
- release:
+ release: check-release-env

Additionally, consider adding basic format validation for the AWS credentials and S3 configuration:

+ check-release-env:
+	@if [ -z "$(GITHUB_TOKEN)" ]; then \
+		echo "Error: GITHUB_TOKEN is not defined" >&2; exit 1; \
+	fi
+	@if [ -z "$(S3_ENDPOINT)" ]; then \
+		echo "Error: S3_ENDPOINT is not defined" >&2; exit 1; \
+	elif ! echo "$(S3_ENDPOINT)" | grep -qE '^https?://'; then \
+		echo "Error: S3_ENDPOINT must start with http:// or https://" >&2; exit 1; \
+	fi
+	@if [ -z "$(S3_REGION)" ]; then \
+		echo "Error: S3_REGION is not defined" >&2; exit 1; \
+	elif ! echo "$(S3_REGION)" | grep -qE '^[a-z]{2}-[a-z]+-[0-9]$'; then \
+		echo "Error: S3_REGION must be in format: xx-xxxx-n" >&2; exit 1; \
+	fi
+	@if [ -z "$(AWS_ACCESS_KEY_ID)" ]; then \
+		echo "Error: AWS_ACCESS_KEY_ID is not defined" >&2; exit 1; \
+	elif ! echo "$(AWS_ACCESS_KEY_ID)" | grep -qE '^[A-Z0-9]{20}$'; then \
+		echo "Error: AWS_ACCESS_KEY_ID format is invalid" >&2; exit 1; \
+	fi
+	@if [ -z "$(AWS_SECRET_ACCESS_KEY)" ]; then \
+		echo "Error: AWS_SECRET_ACCESS_KEY is not defined" >&2; exit 1; \
+	elif ! echo "$(AWS_SECRET_ACCESS_KEY)" | grep -qE '^[A-Za-z0-9/+=]{40}$'; then \
+		echo "Error: AWS_SECRET_ACCESS_KEY format is invalid" >&2; exit 1; \
+	fi
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4b203c1 and a084f39.

⛔ Files ignored due to path filters (2)
  • .github/workflows/release.yml is excluded by !**/*.yml
  • .goreleaser.yaml is excluded by !**/*.yaml
📒 Files selected for processing (1)
  • Makefile (1 hunks)
🔇 Additional comments (1)
Makefile (1)

218-258: Verify the integration with GitHub Actions workflow

Please ensure that:

  1. The S3 bucket has the correct IAM permissions configured
  2. The GitHub Actions workflow in release.yaml correctly sets these environment variables
  3. The S3 bucket's CORS configuration allows uploads from the GitHub Actions runners
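
For points 1 and 3, a couple of AWS CLI spot-checks could be run against the bucket; the bucket name here is a placeholder and these commands are not part of the PR:

```sh
# Requires credentials allowed to read the bucket configuration.
aws s3api get-bucket-policy --bucket <internal-bucket> --endpoint-url "$S3_ENDPOINT"
aws s3api get-bucket-cors   --bucket <internal-bucket> --endpoint-url "$S3_ENDPOINT"
```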

Run the following script to verify the GitHub Actions workflow configuration:

✅ Verification successful

GitHub Actions workflow correctly integrates with the Makefile release target

The verification shows that:

  1. The GitHub Actions workflow .github/workflows/release.yaml exists and correctly sets all required environment variables:
    • GITHUB_TOKEN
    • S3_ENDPOINT
    • S3_REGION
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
  2. The workflow uses repository secrets to securely pass these values to the Makefile target
  3. The environment variables are properly passed to the Docker container in the release job
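
A quick way to repeat the variable check locally, assuming ripgrep is installed and the workflow lives at .github/workflows/release.yaml:

```sh
# Confirm both the workflow and the Makefile reference the new variables.
rg -n 'S3_ENDPOINT|S3_REGION|AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY' \
  .github/workflows/release.yaml Makefile
```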
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify GitHub Actions workflow configuration

# Check if release.yaml exists and contains the required environment variables
if [ -d .github/workflows ]; then
  echo "Checking release.yaml for required environment variables..."
  rg -A 5 'env:|environment:' .github/workflows/release.yaml
fi

# Check if README or documentation contains S3 configuration instructions
echo "Checking for S3 configuration documentation..."
rg -i 's3|aws' README.md docs/

Length of output: 427


Script:

#!/bin/bash
# Let's check for any workflow files and their content
echo "Checking all workflow files:"
fd -e yml -e yaml . .github/workflows --exec cat

# Check for any release-related files or documentation
echo -e "\nChecking for release-related files:"
fd -g "*release*" . --exec cat

# Check for any deployment or CI configuration files
echo -e "\nChecking for deployment configs:"
fd -g "*deploy*" -g "*ci*" -g "*.toml" -g "*.yaml" -g "*.yml" . --exec cat

Length of output: 66980

Comment on lines +222 to +238

release:
	docker run \
		--rm \
		-e GITHUB_TOKEN=$(GITHUB_TOKEN) \
		-e COSMWASM_VERSION=$(COSMWASM_VERSION) \
		-e S3_ENDPOINT=$(S3_ENDPOINT) \
		-e S3_REGION=$(S3_REGION) \
		-e AWS_ACCESS_KEY_ID=$(AWS_ACCESS_KEY_ID) \
		-e AWS_SECRET_ACCESS_KEY=$(AWS_SECRET_ACCESS_KEY) \
		-v /var/run/docker.sock:/var/run/docker.sock \
		-v `pwd`:/go/src/osmosisd \
		-w /go/src/osmosisd \
		$(GORELEASER_IMAGE) \
		release \
		--clean


⚠️ Potential issue

Security: Consider using Docker secrets for sensitive credentials

The current implementation passes sensitive AWS credentials as environment variables to the container. While functional, this approach has security implications:

  1. Environment variables are visible in docker inspect output
  2. Credentials might be logged in container logs
  3. The docker socket mount gives the container full access to the docker daemon

Consider these security improvements:

  1. Use Docker secrets for sensitive data:
 release:
+	@docker secret create aws_credentials - << EOF
+	AWS_ACCESS_KEY_ID=$(AWS_ACCESS_KEY_ID)
+	AWS_SECRET_ACCESS_KEY=$(AWS_SECRET_ACCESS_KEY)
+	EOF
 	docker run \
 		--rm \
+		--secret aws_credentials \
 		-e GITHUB_TOKEN=$(GITHUB_TOKEN) \
 		-e COSMWASM_VERSION=$(COSMWASM_VERSION) \
 		-e S3_ENDPOINT=$(S3_ENDPOINT) \
 		-e S3_REGION=$(S3_REGION) \
-		-e AWS_ACCESS_KEY_ID=$(AWS_ACCESS_KEY_ID) \
-		-e AWS_SECRET_ACCESS_KEY=$(AWS_SECRET_ACCESS_KEY) \
 		-v /var/run/docker.sock:/var/run/docker.sock \
 		-v `pwd`:/go/src/osmosisd \
 		-w /go/src/osmosisd \
 		$(GORELEASER_IMAGE) \
 		release \
-		--clean
+		--clean && \
+	docker secret rm aws_credentials
  2. Consider using buildx for better multi-arch support:
+PLATFORMS := linux/amd64,linux/arm64
+
 release:
-	docker run \
+	docker buildx build \
+		--platform=$(PLATFORMS) \
 		--rm \
  3. Limit Docker socket permissions, or consider removing the mount if it is not strictly necessary

Committable suggestion skipped: line range outside the PR's diff.


This pull request has been automatically marked as stale because it has not had any recent activity. It will be closed if no further activity occurs. Thank you!

@github-actions github-actions bot added the Stale label Dec 27, 2024
@github-actions github-actions bot closed this Dec 30, 2024
Labels
A:backport/v28.x (backport patches to v28.x branch) · A:no-changelog · Stale · T:build · T:CI · V:state/compatible/backport (State machine compatible PR, should be backported)