feat: fix tag #18118

Merged: 1 commit merged into mq-docker-pre-main on Feb 25, 2025

Conversation

@lodmfjord (Member) commented Feb 25, 2025

...

Attach a link to issue if relevant

What

Specify what you're trying to achieve

Why

Specify why you need to achieve this

Screenshots / Gifs

Attach Screenshots / Gifs to help reviewers understand the scope of the pull request

Checklist:

  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • Formatting passes locally with my changes
  • I have rebased against main before asking for a review

Summary by CodeRabbit

  • New Features

    • Introduced new automation workflows that streamline deployment processes, including Helm chart updates, manifest uploads to deployment platforms, and enhanced merge queue processing.
    • Enhanced build automation with conditional Docker image building, artifact uploads, and improved tag and identifier generation.
  • Chores

    • Updated development libraries to support the improved CI/CD automation and workflows.

@lodmfjord lodmfjord requested review from a team as code owners February 25, 2025 14:23
coderabbitai bot (Contributor) commented Feb 25, 2025

Walkthrough

This pull request introduces multiple enhancements to the repository’s GitHub Actions workflows and supporting scripts. A new composite GitHub Action for updating helm-values charts is added, alongside a new workflow for deploying to an ArgoCD repository. Updates to existing workflows include additional input parameters, outputs, and conditional merge queue steps. Several new scripts and development dependencies have also been introduced to manage Docker build data, generate identifiers, and perform JSON data processing.

Changes

File(s) / Change Summary

  • .github/actions/update-helm-values/action.yml: New composite action for copying helm chart files from a source to a helm-values repository.
  • .github/workflows/deploy-argocd.yml: New workflow triggered on pushes to main that checks out the repository, sets up Yarn, retrieves manifest data, and conditionally runs the update helm-values action.
  • .github/workflows/install.yml, .github/workflows/merge-queue.yml, .github/workflows/pullrequest.yml: New input parameters (e.g., main_branch, run_merge_queue), new outputs for merge queue steps, added jobs (docker-build, post-docker-build), removal of the workflow_dispatch trigger, and parameter adjustments in the pull request workflow.
  • package.json: Three new development dependencies added: @actions/[email protected], @actions/[email protected], @actions/[email protected].
  • scripts/ci/_docker.sh, scripts/ci/docker/create-id.mjs, scripts/ci/docker/generate-tag.mjs, scripts/ci/docker/get-data.mjs, scripts/ci/docker/write-build-data.mjs, scripts/ci/docker/write-data.mjs: New environment variable, functions, and several new scripts for Docker build data handling, unique ID generation, tag generation, manifest data processing, and writing build data.

Sequence Diagram(s)

sequenceDiagram
    participant User as "Trigger (Push/PR)"
    participant Action as "Update Helm-Values Action"
    participant Repo as "Helm-Values Repository"
    participant FileSystem as "File System"
    
    User->>Action: Provide input (files, PAT)
    Action->>Repo: Checkout repository using PAT
    loop For each file in input
        Action->>FileSystem: Read source file
        FileSystem-->>Action: File content
        Action->>FileSystem: Copy file to destination
    end
    Action->>Repo: Stage changes, commit, push update
sequenceDiagram
    participant GitHub as "GitHub (Workflow Trigger)"
    participant Repo as "Repository"
    participant Yarn as "Setup Yarn Action"
    participant NodeScript as "Get Manifest Data Script"
    participant UpdateAction as "Update Helm-Values Action"
    
    GitHub->>Repo: Checkout code on push to main
    Repo->>Yarn: Run setup-yarn action
    Yarn->>NodeScript: Execute get-data.mjs script
    NodeScript-->>GitHub: Manifest data output (MQ_HAS_OUTPUT)
    alt Manifest output true
        GitHub->>UpdateAction: Run update helm-values action with file list & PAT
    else No manifest changes
        Note right of GitHub: Skip update helm-values action
    end

Possibly related PRs

Suggested labels

automerge, deploy-feature

Suggested reviewers

  • thordurhhh
  • RunarVestmann

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41c3348 and bccf618.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (12)
  • .github/actions/update-helm-values/action.yml (1 hunks)
  • .github/workflows/deploy-argocd.yml (1 hunks)
  • .github/workflows/install.yml (5 hunks)
  • .github/workflows/merge-queue.yml (3 hunks)
  • .github/workflows/pullrequest.yml (1 hunks)
  • package.json (1 hunks)
  • scripts/ci/_docker.sh (2 hunks)
  • scripts/ci/docker/create-id.mjs (1 hunks)
  • scripts/ci/docker/generate-tag.mjs (1 hunks)
  • scripts/ci/docker/get-data.mjs (1 hunks)
  • scripts/ci/docker/write-build-data.mjs (1 hunks)
  • scripts/ci/docker/write-data.mjs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
scripts/**/*: "Confirm that the code adheres to the following:

  • Script efficiency, readability, and maintainability.
  • Usage of environmental variables and configuration files for flexibility.
  • Integration with NX workspace utilities and commands."
  • scripts/ci/docker/create-id.mjs
  • scripts/ci/_docker.sh
  • scripts/ci/docker/generate-tag.mjs
  • scripts/ci/docker/get-data.mjs
  • scripts/ci/docker/write-data.mjs
  • scripts/ci/docker/write-build-data.mjs
.github/**/*: "Confirm that the code adheres to the following:

  • GitHub Actions workflows use 'arc-runners' as the valid runner for self-hosted ARC runners.
  • CI pipelines are efficient, using appropriate caching strategies and minimal resource consumption.
  • Reusable workflows and composite actions are properly structured for reusability.
  • Dependency management workflows meet security requirements.
  • Note: 'runs-on: arc-runners' is valid for our self-hosted runner configuration, despite standard linting warnings."
  • .github/workflows/pullrequest.yml
  • .github/workflows/merge-queue.yml
  • .github/actions/update-helm-values/action.yml
  • .github/workflows/deploy-argocd.yml
  • .github/workflows/install.yml
🪛 actionlint (1.7.4)
.github/workflows/merge-queue.yml

56-56: label "arc-runners" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)


61-61: property "matrix-output" is not defined in object type {cache-check: {conclusion: string; outcome: string; outputs: {string => string}}; dockerargs: {conclusion: string; outcome: string; outputs: {string => string}}; dockerbuild: {conclusion: string; outcome: string; outputs: {string => string}}; gather: {conclusion: string; outcome: string; outputs: {string => string}}}

(expression)


188-188: label "arc-runners" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

.github/workflows/deploy-argocd.yml

10-10: label "arc-runners" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

🪛 YAMLlint (1.35.1)
.github/workflows/merge-queue.yml

[error] 218-218: trailing spaces

(trailing-spaces)

🪛 Biome (1.9.4)
scripts/ci/docker/write-data.mjs

[error] 17-17: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: prepare / install
🔇 Additional comments (26)
.github/workflows/pullrequest.yml (1)

50-51: Appropriate enhancement to the workflow's flexibility.

Adding the main_branch parameter to the prepare job enables the workflow to receive the base reference of the pull request, which supports integration with other actions requiring branch information.

package.json (1)

318-320: Appropriate dependencies added for GitHub Actions automation.

The new dependencies support the enhanced GitHub Actions functionality introduced in this PR, particularly for the new "Update helm-values charts" action.
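
For orientation, here is a minimal sketch of how these packages are typically wired together in a Node CI script. It is illustrative only, not code from this PR, and the variable and output names are assumptions:

// @ts-check
// Hypothetical usage sketch of the new devDependencies, not code from this PR.
import core from '@actions/core'
import { context } from '@actions/github'

// Read workflow context (event name, payload) supplied by the runner.
const baseRef = context.payload?.merge_group?.base_ref ?? ''
const branch = baseRef.replace('refs/heads/', '')

// Expose values to later steps in the same job and to downstream jobs.
core.exportVariable('GIT_BRANCH', branch)
core.setOutput('HAS_OUTPUT', branch ? 'true' : 'false')
// @actions/artifact is used in the same style to move build-data files between jobs.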

scripts/ci/_docker.sh (3)

19-19: Environment variable declaration added correctly.

The addition of UPLOAD_ARTIFACT_DOCKER follows the established pattern of defining environment variables with default values.


73-81: Well-structured function implementation.

The new _upload_artifact() function follows the script's existing patterns with proper case structure. It efficiently handles the conditional execution of the Node.js script based on the environment variable.


85-85: Correctly placed function call.

The _upload_artifact function is properly called after the main function execution, ensuring the upload happens after the primary Docker operations are complete.

scripts/ci/docker/create-id.mjs (1)

1-19:

❓ Verification inconclusive

Well-structured script with clear purpose.

The script effectively generates a unique ID for matrix jobs and sets environment variables. Good practices include TypeScript checking, clear comments, proper error handling, and appropriate module usage.

However, note that it unconditionally sets UPLOAD_ARTIFACT_DOCKER to 'true', which will always trigger the artifact upload functionality added in _docker.sh. Confirm that this is the intended behavior, as _docker.sh defaults this variable to 'false'.


🏁 Script executed:

#!/bin/bash
# Check for any other references to UPLOAD_ARTIFACT_DOCKER to understand workflow context

# Look for other references in scripts
rg "UPLOAD_ARTIFACT_DOCKER" --type=sh --type=js --type=ts

Length of output: 193


Action Required: Confirm Artifact Upload Behavior

The script in scripts/ci/docker/create-id.mjs is well-structured—it correctly generates a unique matrix ID, handles errors, and uses modules appropriately. However, it unconditionally sets UPLOAD_ARTIFACT_DOCKER to 'true', which overrides the default fallback (false) in the _docker.sh script:

  • Location: scripts/ci/docker/create-id.mjs (Lines 1-19)
  • Observation: The override in this script forces artifact upload, in contrast to the default set in scripts/ci/_docker.sh.
  • Request: Please confirm if this deliberate override is intended. (A rough sketch of such a script follows below.)
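
As a point of reference, an ID-generation script of this shape could look roughly like the sketch below. It is illustrative only (the actual create-id.mjs may differ), and APP_NAME is an assumed input:

// @ts-check
// Illustrative sketch, not the actual create-id.mjs from this PR.
import crypto from 'node:crypto'
import core from '@actions/core'

// Build a unique identifier for this matrix job from an app name plus a random suffix.
const appName = process.env.APP_NAME ?? 'unknown-app'
const matrixId = `${appName}-${crypto.randomBytes(4).toString('hex')}`

// Make the ID and the upload flag available to later steps.
core.exportVariable('MATRIX_ID', matrixId)
core.exportVariable('UPLOAD_ARTIFACT_DOCKER', 'true') // the unconditional override discussed above
console.info(`Generated matrix id: ${matrixId}`)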
.github/workflows/deploy-argocd.yml (2)

10-10: Runner configuration looks valid.

The arc-runners label is specifically mentioned in your coding guidelines as being valid for your self-hosted runner configuration, despite standard linting warnings.



1-27: Well-structured workflow for ArgoCD deployment.

The workflow effectively manages the deployment process by:

  1. Triggering on pushes to main
  2. Setting up the environment properly
  3. Gathering manifest data
  4. Conditionally updating the helm-values repository when necessary

The workflow follows GitHub Actions best practices including proper step organization, conditional execution, and secure handling of tokens.


.github/actions/update-helm-values/action.yml (1)

1-49: Well-structured composite action for updating helm charts.

The action effectively handles the process of copying and updating helm chart files between repositories. It properly defines inputs, checks out the target repository, copies files, and commits changes.

The script would benefit from more robust error handling and some minor syntax improvements as noted in the specific comments above, but overall the implementation is solid and follows good practices for composite actions.

scripts/ci/docker/generate-tag.mjs (1)

1-23: Well-structured script setup and output generation.

The script properly imports necessary modules and sets up variables for the GitHub Actions context. The outputs are set correctly with appropriate logging.
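
To make the output pattern concrete, here is a stripped-down sketch of tag generation and output setting. The tag format mirrors the merge_group case shown in the suggestions further down, but the structure and names here are illustrative only:

// @ts-check
// Illustrative sketch of the tag/output pattern, not the PR's exact generate-tag.mjs.
import core from '@actions/core'

const randomTag = Math.random().toString(36).slice(2, 8)
const dateString = new Date().toISOString().split('T')[0].replace(/-/g, '')
const tagName = `main_${dateString}_${randomTag}` // e.g. main_20250225_k3f9ab

console.info(`Generated Docker tag: ${tagName}`)
core.setOutput('DOCKER_TAG', tagName)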

.github/workflows/merge-queue.yml (8)

47-50: Well-structured input parameters for merge queue workflow.

The added parameters allow for proper control of the workflow execution.


52-78: Docker build job setup follows best practices.

The job properly depends on prepare and pre-checks, and sets up matrix-based execution with appropriate permissions and outputs.

Note: The static analysis warning about "arc-runners" can be ignored; as mentioned in the coding guidelines, this is a valid custom runner in your environment.



79-91: Good approach to gathering app metadata.

The step correctly parses JSON data from the matrix and exports it to environment variables for downstream steps.
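
In script form, that step amounts to parsing a JSON matrix entry and re-exporting its fields. The sketch below is a hypothetical JavaScript equivalent (the workflow most likely does this inline, and MATRIX_ENTRY is an assumed variable name):

// @ts-check
// Hypothetical equivalent of the metadata-gathering step, for illustration only.
import core from '@actions/core'

// The matrix entry is assumed to arrive as a JSON string in an environment variable.
const raw = process.env.MATRIX_ENTRY ?? '{}'
const { appName, target, imageName } = JSON.parse(raw)

// Re-export the parsed fields so later steps can consume them as plain env vars.
core.exportVariable('APP_NAME', appName ?? '')
core.exportVariable('TARGET', target ?? '')
core.exportVariable('IMAGE_NAME', imageName ?? '')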


108-137: Well-structured Docker build preparation with secrets handling.

The script properly prepares Docker build arguments and handles sensitive information like the NX_CLOUD_ACCESS_TOKEN securely using Docker secrets.


138-146: Efficient image caching strategy.

The steps to check for and use cached buildx images improve build performance.


147-170: Good error handling with retry mechanism.

The workflow includes a retry step for Docker builds that fail initially, which is a good practice for handling transient errors.


237-237: Ensures proper NX_HEAD environment in test job.

Adding the NX_HEAD environment variable to the tests job ensures consistency with other changes.


276-277: Success check for build now references the correct job.

Updated to check post-docker-build instead of the previously commented-out build job.

scripts/ci/docker/get-data.mjs (1)

1-27: Well-structured script initialization and environment setup.

The script properly imports dependencies and sets up environment variables for deployment stages.

.github/workflows/install.yml (7)

4-12: Well-structured workflow input parameters.

The added input parameters follow GitHub Actions best practices with appropriate defaults and descriptions.


14-21: Proper output configuration for merge queue values.

The added outputs are well-organized and reference the correct step outputs.


73-76: Consistent output mapping from job steps to workflow outputs.

The job outputs are correctly mapped to the step outputs for merge queue values.


81-81: Updated DOCKER_CHUNKS reference to BUILD_CHUNKS.

This change correctly updates the output reference to match the generated value from the docker_projects step.


104-110: Effective branch name extraction.

The step properly extracts the branch name from the input, removing the refs/heads/ prefix if present.
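
The prefix handling is essentially a one-line transformation; in JavaScript terms (illustrative only, the actual step presumably does this in shell, and MAIN_BRANCH is an assumed variable name):

// Strip an optional refs/heads/ prefix from a ref to get the plain branch name.
const ref = process.env.MAIN_BRANCH ?? 'refs/heads/main'
const branchName = ref.replace(/^refs\/heads\//, '')
console.log(branchName) // -> "main"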


133-138: Conditional merge queue preparation.

The step correctly uses the run_merge_queue input to conditionally execute the tag generation script.


211-212: Improved output format for BUILD_CHUNKS.

The updated echo command ensures proper JSON formatting for the BUILD_CHUNKS output.


@lodmfjord lodmfjord enabled auto-merge February 25, 2025 14:23
@lodmfjord lodmfjord changed the base branch from main to mq-docker-pre-main February 25, 2025 14:25
@lodmfjord lodmfjord removed request for a team February 25, 2025 14:25

nx-cloud bot commented Feb 25, 2025

View your CI Pipeline Execution ↗ for commit bccf618.

Command Status Duration Result
nx run-many --target=build --projects=web --par... ✅ Succeeded 16m 23s View ↗
nx run-many --target=lint --projects=services-a... ✅ Succeeded 1m 19s View ↗
nx run-many --projects web --target test --para... ✅ Succeeded 1s View ↗
nx run-many --target=build --projects=unicorn-a... ✅ Succeeded 53s View ↗
nx run-many --projects clients-administration-o... ✅ Succeeded 31s View ↗
nx run-many --projects application-api-core,app... ✅ Succeeded 26s View ↗
nx run-many --projects api,api-catalogue-consts... ✅ Succeeded 24s View ↗
nx run-many --projects infra-express-server,inf... ✅ Succeeded 2s View ↗
Additional runs (82) ✅ Succeeded ... View ↗

☁️ Nx Cloud last updated this comment at 2025-02-25 14:58:15 UTC

coderabbitai bot (Contributor) left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently, please check "Code review limits" under "Moderation" settings.

Actionable comments posted: 4

🧹 Nitpick comments (15)
scripts/ci/docker/write-data.mjs (2)

15-20: Optimize the reducer implementation to improve performance.

The use of spread syntax on accumulators in reduce functions causes O(n²) time complexity as noted by the static analysis, which can become inefficient with larger datasets.

-  const values = Object.keys(data).reduce((a, b) => {
-    return {
-      ...a,
-      [b]: data[b][key],
-    }
-  }, {})
+  const values = Object.keys(data).reduce((a, b) => {
+    a[b] = data[b][key];
+    return a;
+  }, {})


1-30: Add documentation about the expected data structure.

The script would benefit from JSDoc comments explaining the expected structure of the JSON_DATA environment variable to enhance maintainability.

 // @ts-check
+/**
+ * Processes JSON data from the JSON_DATA environment variable.
+ * Expected format of JSON_DATA:
+ * {
+ *   "key1": { "value": "data1", ... },
+ *   "key2": { "value": "data2", ... },
+ *   ...
+ * }
+ * 
+ * Output:
+ * [
+ *   { "id": "key1", "value": "data1", ... },
+ *   { "id": "key2", "value": "data2", ... },
+ *   ...
+ * ]
+ */

 import fs from 'fs'
 import path from 'path'

scripts/ci/docker/write-build-data.mjs (2)

58-58: Fix typo in console output string.

There's an extra closing brace at the end of the JSON.stringify output.

-console.info(`Build data ${JSON.stringify(value, null, 2)}}`)
+console.info(`Build data ${JSON.stringify(value, null, 2)}`)

1-59: Enhance documentation and error handling.

The script effectively validates environment variables and exports them to GitHub Actions, but would benefit from clearer documentation.

 // @ts-check
 /*
  * This script is used to write the build data to the output.
  */
+/**
+ * This script processes build data from environment variables and exports them
+ * for GitHub Actions. It validates required environment variables and exits with
+ * an error if any are missing.
+ * 
+ * Required environment variables:
+ * - APP_NAME: The name of the application/project
+ * - TARGET: The build target
+ * - IMAGE_NAME: The Docker image name
+ * - DOCKER_TAG: The Docker image tag
+ * - MATRIX_ID: Unique identifier for the matrix job
+ */

 import core from '@actions/core'
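
The validation pattern described above boils down to checking a list of required variables and failing fast. Below is a hedged sketch under those assumptions; it is not the file's actual contents, and the BUILD_DATA output name is invented for illustration:

// @ts-check
// Illustrative sketch of required-env validation, not the actual write-build-data.mjs.
import core from '@actions/core'

const required = ['APP_NAME', 'TARGET', 'IMAGE_NAME', 'DOCKER_TAG', 'MATRIX_ID']
const missing = required.filter((name) => !process.env[name])
if (missing.length > 0) {
  core.setFailed(`Missing required environment variables: ${missing.join(', ')}`)
  process.exit(1)
}

const value = Object.fromEntries(required.map((name) => [name, process.env[name]]))
console.info(`Build data ${JSON.stringify(value, null, 2)}`)
core.setOutput('BUILD_DATA', JSON.stringify(value))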
.github/actions/update-helm-values/action.yml (4)

27-30: Improve string handling for more robust script execution.

The IFS setting and string splitting logic could be improved for better reliability.

-        IFS=$IFS,
-        paths=()
-        read -a paths <<< $files
-        echo "$files\n$paths"
+        IFS=',' read -ra paths <<< "$files"
+        echo -e "Files to process: $files\nPaths array: ${paths[*]}"

47-47: Fix echo command to properly display newlines.

In Bash, you need to use the -e flag with echo to interpret escape sequences properly.

-        echo "Showing changeset\n $(git show)"
+        echo -e "Showing changeset\n$(git show)"

31-37: Add error handling for file operations.

The script currently doesn't check if the file operations succeed or fail. Adding error checking would make it more robust.

         for path in ${paths[@]}; do
           export DEST="helm-values/charts/${path#charts/}"
           export DEST_PATH=$(dirname $DEST)
-          mkdir -p $DEST_PATH
+          mkdir -p $DEST_PATH || { echo "Failed to create directory $DEST_PATH"; exit 1; }
           echo "Copying filepath: ${path} to $DEST"
-          cp "$path" "$DEST"
+          cp "$path" "$DEST" || { echo "Failed to copy $path to $DEST"; exit 1; }
         done

39-48: Add error handling for Git operations.

The Git operations lack error checking, which could result in silent failures. Adding error handling improves reliability.

         cd helm-values
         git config --global user.email "[email protected]"
         git config --global user.name "CI Bot"

-        git add .
-        git commit -m "Updated helm charts"
+        git add . || { echo "Failed to stage changes"; exit 1; }
+        # Only commit if there are changes
+        git diff --staged --quiet || git commit -m "Updated helm charts" || { echo "Failed to commit changes"; exit 1; }
         echo -e "Showing changeset\n $(git show)"
-        git push
+        git push || { echo "Failed to push changes"; exit 1; }
scripts/ci/docker/generate-tag.mjs (2)

64-80: Commented debugging code should be removed.

There's a commented console.error statement that appears to be leftover debugging code.

Remove the commented debugging line to improve code cleanliness.

  // UNKNOWN BRANCH
- // console.error(`Unknown branch: ${targetBranch} - not sure how to tag this deployment`);
  throw new Error(`Unsupported branch: ${targetBranch}`)

95-102: Consider using a more cryptographically secure random string generator.

The current random string implementation uses Math.random() which isn't cryptographically secure.

For tags that might be used in security-sensitive contexts, consider using crypto.randomBytes():

  function createRandomString(length) {
-   const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'
-   let result = ''
-   for (let i = 0; i < length; i++) {
-     result += chars.charAt(Math.floor(Math.random() * chars.length))
-   }
-   return result
+   // assumes `import crypto from 'node:crypto'` is added at the top of this ESM module
+   return crypto.randomBytes(Math.ceil(length/2))
+     .toString('hex')
+     .slice(0, length)
  }
.github/workflows/merge-queue.yml (2)

181-226: Post-build processing follows CI pipeline best practices.

The job properly handles artifacts, manifest data, and updates helm-values repository when needed.

Consider adding a timeout to this job to prevent long-running stuck processes, similar to what you have in the tests job.

  post-docker-build:
    needs:
      - docker-build
      - prepare
    permissions:
      id-token: write
      contents: write
    runs-on: arc-runners
+   timeout-minutes: 30
    steps:


218-218: Minor formatting issue: trailing whitespace.

There are trailing spaces at line 218.

Remove trailing whitespace to keep the file clean.

-      
+

scripts/ci/docker/get-data.mjs (3)

28-59: Complex nested conditionals in YAML parsing.

The code to check for the presence of an image object in YAML files uses deeply nested conditionals.

Consider simplifying the nested conditions with optional chaining:

  if (
-   yamlContent &&
-   typeof yamlContent === 'object' &&
-   'image' in yamlContent &&
-   yamlContent.image &&
-   typeof yamlContent.image === 'object' &&
-   'repository' in yamlContent.image
+   yamlContent?.image?.repository && 
+   typeof yamlContent === 'object' && 
+   typeof yamlContent.image === 'object'
  ) {

92-97: Improve error messaging for unsupported event types.

The error message for unsupported event types could be more informative.

Enhance the error message to provide more context:

function getBranch() {
  if (eventName === 'merge_group') {
    return context.payload.merge_group.base_ref.replace('refs/heads/', '')
  }
- throw new Error(`Unsupported event: ${context.eventName}`)
+ throw new Error(`Unsupported event type: ${context.eventName}. This script only supports 'merge_group' events.`)
}

121-130: Generic error message in getArtifactname function.

The error message when no valid deployment type is found is too generic.

Improve error messaging for clarity:

function getArtifactname() {
  if (typeOfDeployment.dev) {
    return `main-${sha}`
  }
  if (typeOfDeployment.prod) {
    return `release-${sha}`
  }

- throw new Error(`Unsupported`)
+ throw new Error(`Unsupported deployment type. Expected dev or prod but got: ${JSON.stringify(typeOfDeployment)}`)
}
🛑 Comments failed to post (4)
scripts/ci/docker/generate-tag.mjs (2)

24-42: ⚠️ Potential issue

Incorrect error handling in pull request events.

The function throws an error for pull request events instead of generating a tag, with commented-out code that would provide the expected functionality.

This appears to be intentional but should be documented. If pull requests should not trigger this functionality, consider adding a comment explaining why it's disabled rather than leaving commented code that suggests it could work.

  if (eventName === 'pull_request' && context.payload.pull_request?.number) {
    throw new Error(`Unsupported event: ${eventName}`)
-   // return `pr-${context.payload.pull_request.number}-${randomTag}`;
+   // Pull request tagging intentionally disabled - only merge queue events are supported
  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

function getTagname() {
  if (eventName === 'pull_request' && context.payload.pull_request?.number) {
    throw new Error(`Unsupported event: ${eventName}`)
    // Pull request tagging intentionally disabled - only merge queue events are supported
  }
  if (eventName === 'merge_group') {
    const dateString = new Date().toISOString().split('T')[0].replace(/-/g, '')
    if (typeOfDeployment.dev) {
      return `main_${dateString}_${randomTag}`
    }
    if (typeOfDeployment.prod) {
      return `release_${dateString}_${randomTag}`
    }
    throw new Error(`Unable to determine artifact name for merge_group event`)
  }
  throw new Error(
    `Unable to determine artifact name for event type: ${eventName}`,
  )
}

44-62: ⚠️ Potential issue

Similar error handling issue in getArtifactname function.

The function follows the same pattern as getTagname, throwing errors for pull request events with commented-out alternative implementation.

Consider using the same approach as suggested for the getTagname function to maintain consistency.

  if (eventName === 'pull_request' && context.payload.pull_request?.number) {
    throw new Error(`Unsupported event: ${eventName}`)
-   // return `pr-${context.payload.pull_request.number}`;
+   // Pull request artifact naming intentionally disabled - only merge queue events are supported
  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

function getArtifactname() {
  if (eventName === 'pull_request' && context.payload.pull_request?.number) {
    throw new Error(`Unsupported event: ${eventName}`)
    // Pull request artifact naming intentionally disabled - only merge queue events are supported
  }
  if (eventName === 'merge_group') {
    if (typeOfDeployment.dev) {
      return `main-${context.payload.merge_group.head_sha}`
    }
    if (typeOfDeployment.prod) {
      return `release-${context.payload.merge_group.head_sha}`
    }
    throw new Error(`Unable to determine artifact name for merge_group event`)
  }

  throw new Error(
    `Unable to determine artifact name for event type: ${eventName}`,
  )
}
scripts/ci/docker/get-data.mjs (2)

99-119: 🛠️ Refactor suggestion

Default case returns empty deployment type values.

The getTypeOfDeployment function returns an object with all false values for unknown branches, which could lead to subtle bugs.

Consider throwing an error for unknown deployment types instead of returning all-false values:

  if (branch.startsWith('release')) {
    return {
      dev: false,
      staging: false,
      prod: true,
    }
  }
- return {
-   dev: false,
-   staging: false,
-   prod: false,
- }
+ throw new Error(`Unknown branch type: ${branch}. Cannot determine deployment type.`)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

function getTypeOfDeployment() {
  if (branch === 'main' || branch === 'mq-docker-pre-main') {
    return {
      dev: true,
      staging: false,
      prod: false,
    }
  }
  if (branch.startsWith('release')) {
    return {
      dev: false,
      staging: false,
      prod: true,
    }
  }
  throw new Error(`Unknown branch type: ${branch}. Cannot determine deployment type.`)
}

63-90: 🛠️ Refactor suggestion

parseData function could use more robust error handling.

The function parses JSON data and updates YAML files but has minimal error handling.

Add try/catch blocks to handle potential JSON parsing errors and file writing failures:

async function parseData() {
  const fileName = `/tmp/data.json`
  if (!fs.existsSync(fileName)) {
    process.exit(0)
  }

-  const fileData = fs.readFileSync(fileName, 'utf-8')
-  const parsedData = JSON.parse(fileData)
+  let parsedData
+  try {
+    const fileData = fs.readFileSync(fileName, 'utf-8')
+    parsedData = JSON.parse(fileData)
+  } catch (error) {
+    console.error(`Error reading or parsing data file: ${error.message}`)
+    process.exit(1)
+  }
  
  for (const value of parsedData) {
    const { project, imageName, imageTag } = value
    console.log(`Changing value for imageName ${imageName}`)
    if (imageName in IMAGE_OBJECT) {
      IMAGE_OBJECT[imageName].forEach(({ filePath, content }) => {
        content.image.tag = imageTag
        const newFile = jsyaml.dump(content)
        console.log(newFile)
-       fs.writeFileSync(filePath, newFile, { encoding: 'utf-8' })
-       changedFiles.push(filePath)
-       console.info(`Updated ${filePath}`)
+       try {
+         fs.writeFileSync(filePath, newFile, { encoding: 'utf-8' })
+         changedFiles.push(filePath)
+         console.info(`Updated ${filePath}`)
+       } catch (error) {
+         console.error(`Error writing to ${filePath}: ${error.message}`)
+       }
      })
    } else {
      console.info(`Skipping ${imageName}…`)
    }
  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

async function parseData() {
  const fileName = `/tmp/data.json`
  if (!fs.existsSync(fileName)) {
    process.exit(0)
  }

  let parsedData
  try {
    const fileData = fs.readFileSync(fileName, 'utf-8')
    parsedData = JSON.parse(fileData)
  } catch (error) {
    console.error(`Error reading or parsing data file: ${error.message}`)
    process.exit(1)
  }
  
  for (const value of parsedData) {
    const { project, imageName, imageTag } = value
    console.log(`Changing value for imageName ${imageName}`)
    if (imageName in IMAGE_OBJECT) {
      IMAGE_OBJECT[imageName].forEach(({ filePath, content }) => {
        content.image.tag = imageTag
        const newFile = jsyaml.dump(content)
        console.log(newFile)
        try {
          fs.writeFileSync(filePath, newFile, { encoding: 'utf-8' })
          changedFiles.push(filePath)
          console.info(`Updated ${filePath}`)
        } catch (error) {
          console.error(`Error writing to ${filePath}: ${error.message}`)
        }
      })
    } else {
      console.info(`Skipping ${imageName}…`)
    }
  }
  console.log(`Changed files is ${changedFiles.join(',')}`)
  core.setOutput(_KEY_HAS_OUTPUT, 'true')
  core.setOutput(_KEY_CHANGED_FILES, changedFiles.join(','))
}

@lodmfjord lodmfjord added this pull request to the merge queue Feb 25, 2025
Merged via the queue into mq-docker-pre-main with commit 9157fc5 Feb 25, 2025
119 checks passed
@lodmfjord lodmfjord deleted the debug-mq branch February 25, 2025 15:36
github-merge-queue bot pushed a commit that referenced this pull request Feb 28, 2025
* ci: docker deployment test (#18084)

* chore: test

* fix: test

* fix: test

* chore: test 2

* fix: better generate tag script

* chore: test build

* fix: test 2

* fix: add prepare

* fix: oops

* fix: wtf

* fix: faster

* fix: forgot

* fix: ok

* fix: ok

* fix: wtf

* fix: ok

* fix: ok

* fix: i hope

* fix: debug

* fix: ok

* fix: debug

* fix: wtf

* fix: ok

* fix: oops

* fix: texst 2

* fix: debug

* fix: oops

* fix: export variable

* fix: docker chunks

* fix: chunks

* fix: oops

* fix: oh god

* fix: oops

* fix: add git branch

* fix: add sha

* feat: write build data

* fix: upload artifact

* fix: typo

* fix: ok

* fix: test

* fix: ok

* fix: use cloudposse for matrix output

* feat: upload artifact

* fix: write data

* fix: permission

* fix: test

* fix: upload

* fix: ok

* fix: ok

* fix: deploy

* fix: oops

* fix: okok

* fix: ok

* fix: ok

* fix: ok

* fix: ok

* fix: ok

* fix: fix

* fix: 2

* fix: ok hehe

* fix: ok  changes

* fix: ok

* fix: ok

* fix: test

* fix: ok

* fix: ok

* fix: ok

* fix: hoho

* Feature/update helm values (#18100)

* feat(ci): Commit helm charts to helm-values repository

* feat(ci): Changing the steps order of the implementation

* feat(ci): Implementing logic

* feat(ci): Change implementation to use composite action

---------

Co-authored-by: Ingvar Sigurðsson <[email protected]>

* fix: fix fix

* fix: ok

* fix: ok

* fix: sorry

* fix: ok

* fix: update manifest with bs

* fix: changed files

* fix: copy

* fix: create directory if it does not exist

* fix: test

* fix: sha

* fix: target

* fix: ok

* fix: ok

* fix: ok

* fix: ok

* fix: test

* fix: sha

---------

Co-authored-by: Ingvar Sigurðsson <[email protected]>

* fix: cleanup

* fix: apply on mq (#18113)

fiX: ok

* feat: fix tag (#18118)

fix: debug msg

* fix: small fixes (#18127)

* fix: remove test branch

* fix: move yarn install

* feat: get ready for release  (#18136)

* fix: ok

* fix: ignore

* fix: ok

* fix: get sha

* fix: ok

* fix: ok

* fix: ok

* fix: error

* fix: skip run

* fix: ok

* fix: merge

* fix: nothing should happen (#18149)

* chore: addding ssh key (#18145)

* addding ssh key

* fix: passing the key

* fix: missed a spot

* fix: test

* fix: remove

* fix: ok

* fix: set secret

* fix: test

* fix: ok

* fix: ok

* fix: install 2

* fix: remove

---------

Co-authored-by: lommi <[email protected]>

* fix: update path using helm values

* fix: mq-docker

* chore: test prod and staging (#18160)

* fix: mq-docker-main

* fix: ugly hack remove this before going to main

* fix: mq docker pre main

* fix: mq workflow (#18162)

* fix: format

* fix: ok

---------

Co-authored-by: Ingvar Sigurðsson <[email protected]>
Co-authored-by: Róberta Andersen <[email protected]>
Labels: none · Projects: none · 1 participant