diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md index 200575d7c..99109c020 100644 --- a/docs/SUMMARY.md +++ b/docs/SUMMARY.md @@ -103,11 +103,18 @@ * [Application Metrics](user-guide/creating-application/app-metrics.md) * [Overview](user-guide/creating-application/overview.md) * [Jobs](user-guide/jobs/README.md) + * [What is job?](user-guide/jobs/what-is-job.md) * [Create a new job](user-guide/jobs/create-job.md) - * [Configurations](user-guide/jobs/configuration-job.md) - * [Workflow Editor](user-guide/jobs/workflow-editor-job.md) + * [Configurations](user-guide/jobs/configurations/README.md) + * [Source Code](user-guide/jobs/configurations/source-code-job.md) + * [Workflow Editor](user-guide/jobs/configurations/workflow-editor-job.md) + * [ConfigMaps & Secrets](user-guide/jobs/configurations/configmap-secret/README.md) + * [ConfigMaps](user-guide/jobs/configurations/configmap-secret/configmap-job.md) + * [Secrets](user-guide/jobs/configurations/configmap-secret/secret-job.md) + * [Environment Overrides](user-guide/jobs/configurations/environment-override-job.md) * [Trigger Job](user-guide/jobs/triggering-job.md) - * [Overview](user-guide/jobs/overview-job.md) + * [Run History](user-guide/jobs/run-history-job.md) + * [Job Overview](user-guide/jobs/overview-job.md) * [Application Groups](user-guide/application-groups.md) * [Software Distribution Hub](user-guide/software-distribution-hub/README.md) * [Tenants](user-guide/software-distribution-hub/tenants.md) diff --git a/docs/user-guide/jobs/README.md b/docs/user-guide/jobs/README.md index 7f8ba1f33..c721e97e1 100644 --- a/docs/user-guide/jobs/README.md +++ b/docs/user-guide/jobs/README.md @@ -1,18 +1,30 @@ # Jobs -Job allows manual and automated execution of your source code. Job pipeline will not have CI/CD pipeline as the job is limited to your source code only. You can also configure [preset plugins](../creating-application/workflow/ci-build-pre-post-plugins.md#preset-plugins) in your job pipeline. +Devtron Jobs provide a streamlined way to execute specific tasks or set of tasks defined by the user within the user's application environment. -With job, you can execute your source code quickly and easily without going through CI/CD pipelines, which also optimize time. +To learn more about how Jobs work, see the below sections -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/jobs.jpg) -There are two main steps in executing Job: +* [What is Jobs](./what-is-job.md) -* [Configurations](configuration-job.md) +* [Creating a Job](./create-job.md) -* [Trigger Job](triggering-job.md) +* [Configurations](./configurations/README.md) -In the next section, we will learn on how to create, configure, trigger a job. You can also view the details on the [Overview](overview-job.md) tab and `Run History`. 
+ * [Source Code](./configurations/source-code-job.md) + * [Workflow editor](./configurations/workflow-editor-job.md) + * [ConfigMaps & Secrets](./configurations/configmap-secret/README.md) + * [ConfigMaps](./configurations/configmap-secret/configmap-job.md) + + * [Secrets](./configurations/configmap-secret/secret-job.md) + + * [Environments Override](./configurations/environment-override-job.md) + +* [Trigger Job ](./triggering-job.md) + +* [Run History](./run-history-job.md) + +* [Job Overview](./overview-job.md) \ No newline at end of file diff --git a/docs/user-guide/jobs/configuration-job.md b/docs/user-guide/jobs/configuration-job.md deleted file mode 100644 index afd0d7daf..000000000 --- a/docs/user-guide/jobs/configuration-job.md +++ /dev/null @@ -1,9 +0,0 @@ -# Configuration - -For the Jobs, you must configure the following sections before you run and trigger the job: - -[Source Code](../creating-application/git-material.md) - -[Workflow Editor](workflow-editor-job.md) - - diff --git a/docs/user-guide/jobs/configurations/README.md b/docs/user-guide/jobs/configurations/README.md new file mode 100644 index 000000000..6e12a07f1 --- /dev/null +++ b/docs/user-guide/jobs/configurations/README.md @@ -0,0 +1,11 @@ +# Configuration + +After you have created a Job, the next step is, to configure the job. This means specifying the source code and using the Workflow Editor to create and configure the job pipeline, which include defining tasks such as code scanning, vulnerability checks, or data migrations, and configuring the sequence in which these tasks should be executed. + +In the following sections we will explore how you can configure your Job which includes, + +1. Configuring the [Source Code](./source-code-job.md) + +2. Creating and configuring the job pipeline through [Workflow Editor](./workflow-editor-job.md). + +3. Defining [ConfigMaps](./configmap-secret/configmap-job.md) & [Secrets](./configmap-secret/secret-job.md), and [Environment Overrides](./environment-override-job.md) for the job. \ No newline at end of file diff --git a/docs/user-guide/jobs/configurations/configmap-secret/README.md b/docs/user-guide/jobs/configurations/configmap-secret/README.md new file mode 100644 index 000000000..a2b404fb8 --- /dev/null +++ b/docs/user-guide/jobs/configurations/configmap-secret/README.md @@ -0,0 +1,10 @@ +# ConfigMaps & Secrets +## ConfigMaps +A ConfigMap stores key-value pairs that your jobs can use as environment variables or mounted files. ConfigMaps are meant for non-sensitive data. Moreover, you can update configurations without modifying or rebuilding your container images, thus making the deployments more efficient. + +To configure a ConfigMap for your job-pipeline, refer the [ConfigMaps](./configmap-job) section. + +## Secrets +Secrets and ConfigMaps are both used to store configurations but there is one major difference between them: ConfigMap stores key-values in normal text format; whereas secrets store them in base64 encrypted form. Devtron hides the data of secrets for the normal users and it is only visible to the users having edit permission. + +To configure a Secret for your job-pipeline, refer the [Secrets](./secret-job) section. 
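For illustration, the sketch below shows the same key stored in a ConfigMap and in a Secret; the names (`job-config`, `job-secret`) and the value are placeholders. The ConfigMap keeps the value as plain text, while the Secret stores it in base64-encoded form.

```yaml
# ConfigMap: the value is stored as plain text
apiVersion: v1
kind: ConfigMap
metadata:
  name: job-config
data:
  DB_HOST: "postgres"
---
# Secret: the same value is stored base64-encoded ("cG9zdGdyZXM=" decodes to "postgres")
apiVersion: v1
kind: Secret
metadata:
  name: job-secret
type: Opaque
data:
  DB_HOST: cG9zdGdyZXM=
```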
\ No newline at end of file diff --git a/docs/user-guide/jobs/configurations/configmap-secret/configmap-job.md b/docs/user-guide/jobs/configurations/configmap-secret/configmap-job.md new file mode 100644 index 000000000..c0f973cd1 --- /dev/null +++ b/docs/user-guide/jobs/configurations/configmap-secret/configmap-job.md @@ -0,0 +1,197 @@ +# ConfigMaps + +A ConfigMap stores key-value pairs that your jobs can use as environment variables or mounted files. Unlike secrets, ConfigMaps are meant for non-sensitive data. Moreover, you can update configurations without modifying or rebuilding your container images, thus making the deployments more efficient. + +--- + +## Add ConfigMap + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +1. Go to the **Configurations** → **ConfigMaps & Secrets**. + +![Figure 1a: ConfigMaps & Secrets](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap.jpg) + +2. Click the **+** button next to **ConfigMaps**. + +![Figure 1b: Create ConfigMap](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-add.jpg) + +3. Enter a name for the ConfigMap (Once defined, name cannot be changed later). + + In case, you are using a External Kubernetes ConfigMap, name should be exactly same as the as the name given using `kubectl create configmap ` command. + +![Figure 1c: Enter ConfigMap name](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-name.jpg) + +4. **Data Type** - Choose between the following data types: + + * **Kubernetes ConfigMap**: Select the Data Type as Kubernetes ConfigMap, if you wish to create and use the ConfigMap using Devtron. + + * **Kubernetes External ConfigMap**: Select the Data Type as Kubernetes External ConfigMap if you have already created a ConfigMap using the kubectl command and wants to use that in Devtron. + + ![Figure 1d: ConfigMap data type](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-type.jpg) + +5. After selecting the data type, you can choose how to mount the data of your ConfigMap. Devtron allows you to mount ConfigMap Data in following ways

**Mount data as** - Select how you want to mount the ConfigMap: + + * [**Environment Variable**](#mount-data-as-environment-variables) – Select this option if you want to inject Environment Variables in pods using ConfigMap. + + * [**Data Volume**](#mount-data-as-data-volume) – Select this option, if you want to configure a Data Volume that is accessible to Containers running in a pod and provide a Volume mount path. Go to [Data Volume](#mount-data-as-data-valume) to know more. + + ![Figure 1e: Mount data as](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-mount-data.jpg) + + ### Mount data as environment variables + + This will pass your ConfigMap data into your Job pod as environment variables, thus making the configurations values directly accessible by your job. + + #### For Kubernetes ConfigMap + + If you have selected Data type as `Kubernetes ConfigMap` and mount data as `Environment Variable` then, you also need to enter the required data (key-value pairs) in the **Data** field

Enter data in: + + * **GUI mode** – User-friendly interface. Click **+Add** button and enter the **Key** and **Value** fields without quotes. + + ![Figure 2a: Enter data in 'GUI' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-env-var-gui.jpg) + + * **YAML mode** – Raw YAML for entering key-value pairs in the format **`key: value`**. Boolean and numeric values must be wrapped in double quotes. + + ![Figure 2b: Enter data in 'YAML' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-env-var-yaml.jpg) + + #### For Kubernetes External ConfigMap + + If you have selected Data type as `Kubernetes External ConfigMap` then, no data is required as devtron will fetch the external ConfigMap data and use it to create a ConfigMap. + + ![Figure 3: Kubernetes External ConfigMap for 'Environment Variable'](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-external-env.jpg) + + ### Mount data as Data Volume + + This option allows you to create a ConfigMap by passing the content of a file. The content could be a plain text, json, yaml, bash script, etc. + + ![Figure 4a: Mount Data as Data Volume](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-vol.jpg) + + ### Volume Mount Path + + Enter the folder path where the data volume should be mounted for it to be accessible to the containers running in a pod. Your keys will be mounted as files to that volume. + + ![Figure 4b: Volume Mount Path](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-vol-mount-path.jpg) + + ### Set Sub Path + + When mounting multiple files to the same location, you can use the **Set Sub Path** option to control how the files are handled. This setting allows you to control whether existing files are overwritten or preserved when mounting new files. + + * If **Set Sub Path** is enabled, the system will preserve existing files in the [specified path](#volume-mount-path) and append the new file using the file name as a sub-path. + + * If **Set Sub Path** is disabled (unchecked), the system will delete any files already present in the [specified path](#volume-mount-path) and then mount the new files. + + ![Figure 4b: Set Sub Path](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-vol-set-subpath.jpg) + + {% hint style="info" %} + ### Note + In case of Kubernetes ConfigMap, all keys will be mounted as files on the specified path. + In case of Kubernetes External ConfigMap, manually specify the keys which should be mounted as files. + {% endhint %} + + ### Set File Permission + + The **Set File Permission** option applies permissions at the ConfigMap level, not to individual keys within the ConfigMap. Enabling this option will let you enter a 3-digit standard permission value to control access to the file. + + ![Figure 4c: Set File Permission](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-vol-set-file-per.jpg) + + The 3-digit numeric value represents the permission settings for the file: + + * **First digit**: Owner permissions (user). + * **Second digit**: Group permissions. + * **Third digit**: Other users' permissions. + + | **Permission** | **Description** | + |----------------|------------------------------------------------| + | **r** (read) | Grants the ability to read the file. | + | **w** (write) | Grants the ability to modify the file. 
| + | **x** (execute)| Grants the ability to execute the file as a program. | + + For example, **755** means: + * Owner can read, write, and execute (7), + * Group can read and execute (5), + * Others can read and execute (5). + + ### Data + + #### For Kubernetes ConfigMap + + If you have selected Data type as `Kubernetes ConfigMap` and mount data as `Data Volume` then, you also need to enter the required data (key-value pairs) in the **Data** field. + + The key of the ConfigMap should be your filename and the value of the ConfigMap should be your file content. In the below example, you `file.json` is the key, and the json content is the value of that ConfigMap (below the pipe (**|**) symbol). This file will be created on your specified [volume mount path](#volume-mount-path). + + Enter data in: + + * **GUI mode** – User-friendly interface. Click **+Add** button and enter the **Key** and **Value** fields without quotes. + + ![Figure 5a: Enter data in 'GUI' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-vol-gui.jpg) + + * **YAML mode** – Raw YAML for entering key-value pairs in the format **`key: value`**. Boolean and numeric values must be wrapped in double quotes. + + ![Figure 5b: Enter data in 'YAML' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-vol-yaml.jpg) + + #### For Kubernetes External ConfigMap + + If you have selected Data type as `Kubernetes External ConfigMap` then, no data is required as devtron will fetch the external ConfigMap along with any volumes attach with it and use it to create a ConfigMap. + + ![Figure 6a: Kubernetes External ConfigMap for 'Data Volume'](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-data-external-data-vol.jpg) + +6. Select **Save** to create a ConfigMap. + +--- + +## Update ConfigMap + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +1. Click your ConfigMap available inside the list of **ConfigMaps** inside **ConfigMaps & Secrets**. + +2. Modify its values. + +3. Click **Save Changes**. + +![Figure 7a: Update ConfigMap](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-update.jpg) + +{% hint style="warning" %} +### Note +You cannot change the name of a ConfigMap. Create a new ConfigMap instead. +{% endhint %} + +--- + +## Delete ConfigMap + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +You may delete a ConfigMap if not in use anymore. Once a ConfigMap is deleted, it will not be used in future deployments. + +1. Click your ConfigMap available inside the list of **ConfigMaps** inside **Base Configurations**. + +2. On the right side, click the kebab menu (3 vertical dots). + +3. Click **Delete**. + +4. Confirm the deletion in the dialog box. 
+ +![Figure 8a: Delete ConfigMap](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/configmap-delete.jpg) + +--- + +After configuring ConfigMaps + + * Refer the [Secrets](./secret-job.md) section to configure secrets + + * Refer the [Environment Override](../environment-override-job.md) section to configure Environment Overrides. + + * Refer the [Trigger Job](../../triggering-job.md) section to trigger the job-pipeline. \ No newline at end of file diff --git a/docs/user-guide/jobs/configurations/configmap-secret/secret-job.md b/docs/user-guide/jobs/configurations/configmap-secret/secret-job.md new file mode 100644 index 000000000..2ed03031f --- /dev/null +++ b/docs/user-guide/jobs/configurations/configmap-secret/secret-job.md @@ -0,0 +1,247 @@ +# Secrets + +Secrets and ConfigMaps are both used to store configurations but there is one major difference between them: ConfigMap stores key-values in normal text format; whereas secrets store them in base64 encrypted form. Devtron hides the data of secrets for the normal users and it is only visible to the users having edit permission. + +Secret objects let you store and manage sensitive information, such as passwords, authentication tokens, and ssh keys. Embedding this information in secrets is safer and more flexible than putting it verbatim in a Pod definition or in a container image. + +--- + +## Add Secret + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +1. Go to the **Configurations** → **Base Configurations**. + +![Figure 1a: ConfigMaps & Secrets](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret.jpg) + +2. Click the **+** button next to **Secrets**. + +![Figure 1b: Create Secret](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-add.jpg) + +3. Enter a name for the Secret (Once defined, name cannot be changed later). + + In case, you are mounting Existing Kubernetes Secret, name should be exactly same as the as the name given using `kubectl create secret ` command. + +![Figure 1c: Enter secret name](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-name.jpg) + +4. **Data Type** - Choose between the following data types: + + * **Kubernetes Secret**: Select the Data Type as Kubernetes Secret, if you wish to create and use the Secret using Devtron. + + * **Mount Existing Kubernetes Secret**: Select the Data Type as Existing Kubernetes Secret if you have already created a Secret using the kubectl command and wants to use that in Devtron. + + * **External Secret Operator (ESO)**: External Secrets Operator (ESO) is a Kubernetes component that integrates with external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, and more. It retrieves secrets from these external sources and injects them into Kubernetes Secrets automatically. + + > `external-secrets` helm chart should be installed before setting up ESO, otherwise the External Secret Operator (ESO) will not appear. 
Refer the [External Secret Operator (ESO)](#external-secret-operator-eso) section to setup ESO + +![Figure 1d: Secret data type](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-type.jpg) + +**Note**: Devtron automatically converts secrets from various data types to Kubernetes Secrets. Regardless of the original data type, once the conversion is complete, the Pods can access the secrets in the same way as native Kubernetes Secrets. + +5. After selecting the data type, you can choose how to mount the data of your Secret. Devtron allows you to mount Secret data in following ways

**Mount data as** - Select how you want to mount the Secret: + + * [**Environment Variable**](#mount-data-as-environment-variables) – Select this option if you want to inject Secret data(key-value pairs) as Environment Variables in pods using Secret. + + * [**Data Volume**](#mount-data-as-data-volume) – Select this option, if you want to configure a Data Volume that is accessible to Containers running in a pod and provide a Volume mount path. Go to [Data Volume](#mount-data-as-data-valume) to know more. + + ![Figure 1e: Mount data as](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-mount-data.jpg) + + ### Mount data as environment variables + + This will pass your secret data into your Job pod as environment variables, thus making the configurations values directly accessible by your job. + + #### For Kubernetes Secret + + If you have selected Data type as `Kubernetes Secret` and mount data as `Environment Variable` then, you also need to enter the required data (key-value pairs) in the **Data** field

Enter data in: + + * **GUI mode** – User-friendly interface. Click **+Add** button and enter the **Key** and **Value** fields without quotes. + + ![Figure 2a: Enter data in 'GUI' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-env-var-gui.jpg) + + * **YAML mode** – Raw YAML for entering key-value pairs in the format **`key: value`**. Boolean and numeric values must be wrapped in double quotes. + + ![Figure 2b: Enter data in 'YAML' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-env-var-yaml.jpg) + + ### Mount Existing Kubernetes Secrets + + This option allows you to mount an existing Kubernetes Secret in your job pods. A Secret will not be created by system so please ensure that the secret with the same name already exists within the namespace. Otherwise, the deployment will fail. + + If you have selected Data type as `Mount Existing Kubernetes Secrets` then, no data is required as devtron will fetch the existing Secret data and use it to create a Secret. + + ![Figure 3a: Mount Existing Kubernetes Secrets for 'Environment Variable'](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-mount-existing-env-var.jpg) + +--- + + ### Mount Data as Data Volume + + +This option allows you to create a Secret by passing the content of a file. The content could be a plain text, json, yaml, bash script, etc. + +![Figure 4a: Mount Data as Data Volume](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-vol.jpg) + + ### Volume Mount Path + +Enter the folder path where the data volume should be mounted for it to be accessible to the containers running in a pod. Your keys will be mounted as files to that volume. + +![Figure 4b: Volume Mount Path](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-vol-mount-path.jpg) + + ### Set Sub Path + +When mounting multiple files to the same location, you can use the **Set Sub Path** option to control how the files are handled. This setting allows you to control whether existing files are overwritten or preserved when mounting new files. + +![Figure 4b: Set Sub Path](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-vol-sub-path.jpg) + +* If **Set Sub Path** is enabled, the system will preserve existing files in the [specified path](#volume-mount-path) and append the new file using the file name as a sub-path. + +* If **Set Sub Path** is disabled (unchecked), the system will delete any files already present in the [specified path](#volume-mount-path) and then mount the new files. + +{% hint style="info" %} + ### Note +In case of Kubernetes Secrets, all keys will be mounted as files on the specified path. +In case of External Secrets, manually specify the keys which should be mounted as files. +{% endhint %} + + ### Set File Permission + +The **Set File Permission** option applies permissions at the Secret level, not to its individual secret keys. Enabling this option will let you enter a 3-digit standard permission value to control access to the file. + +![Figure 4c: Set File Permission](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-vol-file-per.jpg) + +The 3-digit numeric value represents the permission settings for the file: + +* **First digit**: Owner permissions (user). +* **Second digit**: Group permissions. +* **Third digit**: Other users' permissions. 
+ +| **Permission** | **Description** | +|----------------|------------------------------------------------| +| **r** (read) | Grants the ability to read the file. | +| **w** (write) | Grants the ability to modify the file. | +| **x** (execute)| Grants the ability to execute the file as a program. | + +For example, **755** means: +* Owner can read, write, and execute (7), +* Group can read and execute (5), +* Others can read and execute (5). + +### Data +#### For Kubernetes Secret + +If you have selected Data type as `Kubernetes Secret` and mount data as `Data Volume` then, you also need to enter the required data (key-value pairs) in the **Data** field. + +The key of the Secret should be your filename and the value of the Secret should be your file content. In the below example, you `file.json` is the key, and the json content is the value of that Secret (below the pipe (**|**) symbol). This file will be created on your specified [volume mount path](#volume-mount-path). + +Enter data in: + + * **GUI mode** – User-friendly interface. Click **+Add** button and enter the **Key** and **Value** fields without quotes. + + ![Figure 5a: Enter data in 'GUI' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-vol-gui.jpg) + + * **YAML mode** – Raw YAML for entering key-value pairs in the format **`key: value`**. Boolean and numeric values must be wrapped in double quotes. + + ![Figure 5b: Enter data in 'YAML' mode](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-data-vol-yaml.jpg) + +#### For Mount Existing Kubernetes Secrets + +This option allows you to mount an existing Kubernetes Secret in your job pods as data volumes. A Secret will not be created by system so please ensure that the secret with the same name already exists within the namespace. Otherwise, the deployment will fail.

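For example, a Secret created beforehand from a manifest such as the one below would qualify; the name `my-existing-secret`, the namespace, and the key are placeholders, and the name must exactly match the Secret name you enter in Devtron.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-existing-secret    # must match the Secret name entered in Devtron
  namespace: my-job-namespace # namespace where the job runs
type: Opaque
stringData:                   # plain-text values; Kubernetes stores them base64-encoded
  API_TOKEN: changeme
```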
If you have selected Data type as `Mount Existing Kubernetes Secrets` then, no data is required as devtron will fetch the existing Secret data and use it to create a Secret. + +![Figure 6a: Mount Existing Kubernetes Secrets for 'Data Volume'](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-mount-existing-data-vol.jpg) + +--- + +## Update Secret + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +1. Click your Secret available inside the list of **Secrets** inside **ConfigMaps & Secrets**. + +2. Modify its values. + +3. Click **Save Changes**. + +![Figure 7a: Update secret](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-update.jpg) + +{% hint style="warning" %} +### Note +You cannot change the name of a Secret. Create a new Secret instead. +{% endhint %} + +--- + +## Delete Secret + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +You may delete a Secret if not in use anymore. Once a Secret is deleted, it will not be used in future deployments. + +1. Click your Secret available inside the list of **Secrets** inside **Base Configurations**. + +2. On the right side, click the kebab menu (3 vertical dots). + +3. Click **Delete**. + +4. Confirm the deletion in the dialogbox. + +![Figure 8a: Delete secret](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/secret-delete.jpg) + +--- + +## External Secret Operator (ESO) + +{% hint style="info" %} +### Prerequisite +Chart version should be > 4.14.0 +{% endhint %} + +### Purpose + +This section is for users who wish to use the following data type while adding secrets in Devtron: +* [Google Secrets Manager](./eso/gcp-eso.md) +* [AWS Secrets Manager](./eso/aws-eso.md) +* [Hashi Corp Vault](./eso/hashicorp-eso.md) +* [Azure Secrets Manager](./eso/azure-eso.md) + +External Secrets Operator (ESO) is a Kubernetes component that integrates with external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, and more. It retrieves secrets from these external sources and injects them into Kubernetes Secrets automatically. Before you can create external secrets in Devtron, you need to install the External Secrets Operator on the target cluster. + +### Installation Steps + +1. Go to the **Chart Store**. + +2. Search for the `external-secrets` chart. + + ![Figure 9a: Searching External Secrets Chart](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/secrets/external-secret.jpg) + +{% hint style="info" %} +### What if external-secrets chart is not found? +Manually add the following chart repository URL in Devtron: `https://charts.external-secrets.io`. Follow this [guide](../../global-configurations/chart-repo.md#add-chart-repository) to know the steps. +{% endhint %} + +3. Give a name to the helm app that will be created from the chart. Also enter the project and environment where you wish to install the chart. + + ![Figure 9b: Adding Details](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/secrets/ext-secret-fields.jpg) + +4. 
Click **Deploy Chart**. + +After Deploying the Chart, refer the [ESO Documentation](/docs/user-guide/creating-application/eso/README.md) to setup ESO for different providers. + +--- + +After configuring Secrets + + * Refer the [ConfigMaps](./configmap-job.md) section to configure ConfigMaps + + * Refer the [Environment Override](../environment-override-job.md) section to configure Environment Overrides. + + * Refer the [Trigger Job](../../triggering-job.md) section to trigger the job-pipeline. \ No newline at end of file diff --git a/docs/user-guide/jobs/configurations/environment-override-job.md b/docs/user-guide/jobs/configurations/environment-override-job.md new file mode 100644 index 000000000..17933fac0 --- /dev/null +++ b/docs/user-guide/jobs/configurations/environment-override-job.md @@ -0,0 +1,131 @@ +# Environment Overrides + +The Environment Overrides section allows you to customize the **ConfigMaps**, and **Secrets** for different environments such as development, testing, staging, and production and it even allows to create additional **ConfigMaps**, and **Secrets** (if required) for different environments + +## How it works + +* When you add a job pipeline to an job's workflow, each environment configuration initially inherits the ConfigMap and Secret from the **Base Configurations** of the job. + +* The **Environment Overrides** section lets you customize those ConfigMap and Secret per environment without affecting those of other environments. For example, in a non-production environment, you might allocate `100m` CPU, while in production, you may increase it to `500m` to handle higher traffic. + +* The **Environment Overrides** section also lets you create additional ConfigMaps and Secrets per environment without affecting those of other environments. For example, a testing environment may require additional ConfigMaps and Secrets for temporary or test-specific configurations, while a production environment uses only the base ConfigMaps and Secrets needed for running the application. + +--- + +## Environment Configurations Page + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +1. In your job, go to **Configurations** → **Environment Overrides**. + + ![Figure 1a: Environment Override](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over.jpg) + +2. Click **Add Environment** and select an environment from the dropdown for which you want your configurations to be modified. + + ![Figure 1b: Add Environment](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-select-env.jpg) + +3. The environment will be added under **Environment Override**, if you wish you can add more environments by clicking **Add Environment**. + + ![Figure 1c: Select Environment](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-env-added.jpg) + +4. Click on the environment you have added under **Environment Override**, you will get the following options (similar to the **ConfigMaps & Secrets** page): + + * **ConfigMaps** + + * **Secrets** + + ![Figure 1d: ConfigMaps & Secrets](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-view.jpg) + +5. 
You can now do one of the following: + + * Override the existing **ConfigMap & Secrets** which are being inherited from the base configurations specific to the selected environment. + + * Create additional **ConfigMap & Secrets** specific to the selected environment. + + Let's see how to override the values of ConfigMaps & Secrets for the selected environment. + +--- + +## Override ConfigMaps & Secrets + +If you want to have environment-specific **ConfigMaps & Secrets**, use **Environment Override** to override them for specific environments or create new environment specific **ConfigMaps & Secrets**. At the time of execution, devtron will pick environment specific **ConfigMaps & Secrets** according to the environment in which the job is executed and pass them to your job pods. + +1. Under the selected environment, select the **ConfigMap** or **Secret** you wish to override, by default the ConfigMap or Secret is inherited from the base configuration. + + ![Figure 2a: Select ConfigMap or Secrets](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-select-config-secret.jpg) + +2. To create Override, select the **No Override** tab and click the **Create Override** button. + + ![Figure 2b: Create Override](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-create-override.jpg) + +3. In the same tab (now labelled as **Override**), you can now change the configuration of your ConfigMap or Secret that will be specific to the selected environment. + + **Note** Except `Name` cannot be changed for ConfigMaps & Secrets that are inherited from the base configuration. + + ![Figure 2c: Override ConfigMap or Secret](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-config-override.jpg) + +4. Override the data values using [Replace](#replace-strategy) merge strategy. + +### Replace Strategy + +* The entire configuration is replaced with your new environment-specific settings. +* The replaced template will no longer depend or inherit from base configuration anymore. +* Best for a complete override. + +| Field | Inherited Configuration | Input (with Replace) | Final Configuration | +|-----------|--------------------|------------------------------|---------------------| +| cpu | 100m | 500m | 500m | +| memory | 256Mi | 512Mi | 512Mi | +| replicas | 2 | *(Not specified)* | *(Removed)* | +| logLevel | "info" | *(Not specified)* | *(Removed)* | +| timeout | (Not specified) | 30s | 30s (Added) | + +> To know how to configure ConfigMaps & Secrets refer to the following sections:
  • [ConfigMaps](./configmap-secret/configmap-job.md)
  • [Secrets](./configmap-secret/secret-job.md)
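As a minimal sketch of the Replace strategy described above, assume the base ConfigMap data contains the keys from the example table (the keys and values are hypothetical and entered in YAML mode):

```yaml
# Base (inherited) ConfigMap data
cpu: "100m"
memory: "256Mi"
replicas: "2"
logLevel: "info"
---
# Override entered for this environment with the Replace strategy
cpu: "500m"
memory: "512Mi"
timeout: "30s"
---
# Effective data for this environment: only the keys from the override remain
# (replicas and logLevel are removed, timeout is added)
cpu: "500m"
memory: "512Mi"
timeout: "30s"
```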
+ +--- + +## Create Additional ConfigMaps & Secrets + +To create additional ConfigMaps & Secrets, follow the given steps + +1. Under the selected environment, click the `+` button next to ConfigMap or Secret. + + ![Figure 3a: Add ConfigMap or Secret](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-add-config-secret.jpg) + +2. A configuration tab will open (which was previously named override) to add a new **ConfigMap** or **Secret**. + + ![Figure 3b: Configure ConfigMap or Secret](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-add-view.jpg) + + Follow the below guide to create a ConfigMap or Secret: + + * [Add ConfigMaps](./config-maps-and-secrets/configmaps.md#add-configmap) + + * [Add Secrets](./config-maps-and-secrets/secrets.md#add-secret) + +3. Once created, a new ConfigMap or Secret will be added with a label `Created at environment` under it's name, in the left-section under ConfigMap or Secret respectively. + + ![Figure 3c: ConfigMap or Secret added](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-new-added.jpg) + +--- + +## Delete Override + +This action will discard the current overrides and the base configuration file (in this example, deployment template) will be restored back for the environment. + +1. On the right side, click the kebab menu (3 vertical dots). + +2. Click **Delete Override**. + + ![Figure 4a: Delete Override](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-delete.jpg) + +3. Confirm the deletion in the dialog box. + + ![Figure 4b: Confirm Delete Override](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/env-over-delete-dialog-box.jpg) + +--- + +After setting up **Environment Overrides**, you can refer the [Trigger Job](../triggering-job.md) section to trigger your job-pipeline in different environments. \ No newline at end of file diff --git a/docs/user-guide/jobs/configurations/source-code-job.md b/docs/user-guide/jobs/configurations/source-code-job.md new file mode 100644 index 000000000..00f298b62 --- /dev/null +++ b/docs/user-guide/jobs/configurations/source-code-job.md @@ -0,0 +1,104 @@ +# Source Code + +In Devtron, the Source Code configuration is used to specify the repository that contains your scripts, terraform files, YAML configurations, or other resources. The repository acts as a central location for these files, allowing you to reference and execute them in your job without needing to rewrite the scripts in the Workflow Editor each time. + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +To configure the Source Code, follow these steps: + +1. If the job has been just created, you will be automatically directed to the Configurations page. If not, navigate to **Configurations** tab of your job.. + +2. Select the **Source Code** tab from the left sidebar. + + ![Figure 1a: Select source code](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code.jpg) + +3. Under **Add Git Repository**, select the **Git Account** from the dropdown menu. You can also select `GitHub Public` from the same dropdown to configure a public repository that does not require authentication. 
+ + ![Figure 1b: Add git account](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-git-account.jpg) + +4. Enter the **Repository URL** in the Git Repo `URL` field, corresponding to the selected Git account.
+If GitHub Public is selected, you can enter the URL of any public repository, as no authentication is required. + + ![Figure 1c: Add git repository](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-repo-url.jpg) + +5. Configure the [Additional Options](#configure-additional-options-optional) for the job as per your requirements. + +6. Click on the **Save** button to save the changes. + + ![Figure 1d: Save source code](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-save.jpg) + +--- + +## Configure Additional Options (optional) + +### Exclude specific file/folder in this repo + +Devtron allows you to exclude specific files or folders from the repository from being included in the job execution. This is particularly useful for avoiding unnecessary files or folders that do not contribute to the job’s tasks that are not relevant to the current job execution. + +You can define either exclusion rules or inclusion rules to filter the files and folders ensuring that only the necessary parts of the repository are used in the job. + +Commits that contain only changes to excluded files or folders will be marked as excluded when selecting commits to trigger the job, preventing them from being included in the build. + +To define the exclusion or inclusion rules, follow these steps: + +1. Check the **Exclude specific file/folder in this repo** checkbox. + + ![Figure 2a: Exclude specific file/folder](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-exclude-files.jpg) + +2. Enter the exclusion or inclusion rules in the **Enter file or folder paths to be included or excluded** field. + + | Sample Rules | Description | Impact on Commits | + |:---|:---|:---| + | `!README.md` | **Exclusion of a single file in root folder** | Commits containing changes made only in README.md file will not be shown | + | `!README.md`
`!index.js` | **Exclusion of multiple files in root folder** | Commits containing changes made only in README.md or/and index.js files will not be shown | + | `README.md` | **Inclusion of a single file in root folder** | Commits containing changes made only in README.md file will be shown. Rest all will be excluded. | + | `!src/extensions/printer/code2.py` | **Exclusion of a single file in a folder tree** |Commits containing changes made specifically to code2.py file will not be shown | + | `!src/*` | **Exclusion of a single folder and all its files:** |Commits containing changes made specifically to files within src folder will not be shown | + | `!README.md`
`index.js` | **Exclusion and inclusion of files** | Commits containing changes made only in README.md will not be shown, but commits made in index.js file will be shown. All other commits apart from the aforementioned files will be excluded. | + | `!README.md`
`README.md` | **Exclusion and inclusion of conflicting files** | If conflicting paths are defined in the rule, the one defined later will be considered. In this case, commits containing changes made only in README.md will be shown. | + + + You may use the **Learn how** link (as shown below) to understand the syntax of defining an exclusion or inclusion rule. + + ![Figure 2b: 'Learn how' Button](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-learn-how.jpg) + + Since file paths can be long, Devtron supports regex too for writing the paths. To understand it better, you may click the **How to use** link as shown below. + + ![Figure 2c: Regex Support](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-regex.jpg) + + +### Set checkout path + +Devtron allows you to define a custom directory path where the repository will be checked out during job execution. By default, the repository is checked out to the root directory (./). However, you can set a custom path to specify a particular folder within the repository to be accessed and utilized during job execution. + +To set the checkout path, follow these steps: + +1. Check the **Set checkout path** checkbox. + +2. Enter the path to the folder you want to check out from the repository in the **Set checkout path** field. + + |Sample paths|Description| + |:---|:---| + |`./`|Checkout the repository to the root directory i.e., the entire repository itself| + |`./src`|Checkout the repository to the src folder| + |`./src/app`|Checkout the repository to the app folder inside the src folder| + + ![Figure 3a: Checkout path](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-checkout.jpg) + +### Pull submodules recursively + +This checkbox is used for pulling [git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) present in a repo. The submodules will be pulled recursively, and the auth method used for the parent repo will be used for submodules too. + +To pull the submodules recursively, check the **Pull submodules recursively** checkbox. + +![Figure 4a: Pull submodules recursively](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/source-code-pull.jpg) + +--- + +After configuring **Source Code**, the next step to create and configure job pipelines. + +Refer the [Workflow editor](./workflow-editor-job.md) section to create and configure job pipelines. \ No newline at end of file diff --git a/docs/user-guide/jobs/configurations/workflow-editor-job.md b/docs/user-guide/jobs/configurations/workflow-editor-job.md new file mode 100644 index 000000000..919375873 --- /dev/null +++ b/docs/user-guide/jobs/configurations/workflow-editor-job.md @@ -0,0 +1,398 @@ +# Workflow Editor + +The **Workflow Editor** in Devtron allows you to create and manage job pipelines. +It provides visual interface to create and configure job pipelines, define basic configurations such as trigger types, branch name and allows you to add **Tasks to be executed** in the desired sequence. + +To create and configure the Job Pipeline + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +1. Navigate to the **Workflow Editor** in the left sidebar of the Job **Configuration** page. 
+ + ![Figure 1a: Select workflow editor](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor.jpg) + +2. Click **+ Job Pipeline** to create a new Job workflow, a pop-up **Create job workflow** will appear asking you to enter a name for your Job workflow. + + ![Figure 1b: Add job workflow](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-add-pipeline.jpg) + +3. Enter the name for your Job workflow and click on **Create Workflow** button, a new Job workflow will be created, in which you can add a job pipeline. + + ![Figure 1c: Enter job workflow name](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-name.jpg) + +4. To add a job pipeline to your workflow, click anywhere in the **Add job pipeline to this workflow** area under the job workflow name. This opens the **Create job pipeline** Window in which you can create and configure your job. + + ![Figure 1d: Job workflow created](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-area.jpg) + + ![Figure 1e: Create job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-create-pipeline.jpg) + +--- + +## Create Job Pipeline + +In **Create job pipeline** window, you can create and configure job pipelines. + +It includes 2 stages + +* [**Basic Configurations**](#basic-configurations) + +* [**Tasks to be executed**](#tasks-to-be-executed) + +### Basic Configurations + +This stage allows you to define primary configurations such as Pipeline name, Source Type, Branch Name, and how job should be triggered. Refer the following table to configure each field. + +![Figure 2a: Configure job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-basic-config.jpg) + +| Field Name|Description| +| :--- |:--- | +| `Trigger Job Pipeline` |

The job execution may be set to:

  • Automatically: Job execution is triggered automatically as the Git source code changes.
  • Manually: The job is triggered manually by the user.
| +| `Pipeline Name` | Assign a name to your job pipeline| +| `Source type` | Source type to trigger the job pipeline. Available options: Branch Fixed, Branch Regex, Pull Request, Tag Creation| +| `Branch Name`| Branch that triggers the CI build| +| `Use remote cache`|

Enable this option to use the Docker cache from previous builds. Docker's layer caching mechanism allows unchanged Docker image layers to be reused across pipeline runs, thus drastically reducing execution times.

The globe toggle next to Docker Layer Caching indicates whether the configuration is inherited from the global settings:

  • Enabled: Inherits the caching settings defined globally.
  • Disabled: Allows you to define a pipeline-level configuration specific to this job.
| + +### Tasks to be executed + +The Stage allows you define tasks for your job. + +You can create one or more tasks. Tasks can be dependent on each other for execution, In other words, the output variable of one task can be used as an input for the next task to execute your job. Tasks will execute in the order they are arranged and can be rearranged by drag-and-drop; however, the order of passing the variables must be followed. + +To create a task: + +1. Navigate to **Tasks to be executed** in the **Create job pipeline** window. + +2. Click **Add Task** to add a task in your job pipeline. + + ![Figure 3a: Add task](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-add-task.jpg) + +3. A new task will be added (in the left side of the Create job pipeline window),you can configure the task either by selecting one of the available [preset plugins](#create-task-using-preset-plugins) or by [Executing a custom script](#create-task-using-custom-script) + + ![Figure 3b: Type of tasks](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-tasks.jpg) + +#### Create Task using preset plugins + +In Devtron, preset plugins are pre-defined tasks templates, that helps you automate and execute common operations such as provisioning infrastructure, taking backups, exporting container images etc., without writing custom scripts. + +Devtron provides a set of built-in preset plugins, and you can also create your own plugins in devtron according to your specific needs. + +To create a task using preset plugins, let's take an scenario, where you want to provision a GKE Cluster in your Google Cloud Console, instead of defining a whole new custom script, you can use the **GKE Provisioner** plugin to provision the GKE cluster. + +To create a task using the **GKE Provisioner** plugin follow the below steps: + +1. After Configuring the basic configurations, select **Tasks to be executed** Tab + +2. Click **+Add Task** from the left side panel. + +3. Search for `GKE Provisioner` in the **Search Plugin** Search bar and select **GKE Provisioner** from the list of plugins. + + ![Figure 4a: Search 'Gke Provisioner' plugin](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-gke-search.jpg) + + * The right-side panel will display the fields specific to the **GKE Provisioner** plugin which are required to be configured. + + * The left-side panel will now shows a task under **Tasks (IN ORDER OF EXECUTION)**, named after the selected plugin(by-default), along with it's logo.
You can change the task's name using the **Task name** field but plugin's logo will remain indicating that it is a preset plugin. + + ![Figure 4b: Gke provisioner plugin](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-gke.jpg) + +4. Refer the [GKE Provisioner](/docs/user-guide/plugins/gke-provisioner.md) documentation to configure the **GKE Provisioner** fields with appropriate values. + +> Refer to the [Plugins documentation](/docs/user-guide/plugins/README.md) to explore and configure any of the available plugins. + +5. After configuring the fields successfully, your task will be created, if you wish, you can add more tasks by clicking on **+ Add task** in the left-side panel. + +#### Create Task using Custom Script + +In devtron you can also define a task using custom script to meet specific requirements. To create a task a task using a custom script follow the below steps: + +![Figure 5a: Execute custom task](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-custom-task.jpg) + +1. After Configuring the basic configurations, select **Tasks to be executed** Tab. + +2. Click **+Add Task** from the left side panel, and then select **Execute custom task**. + + * The right-side panel will display the fields which are required to be configured in order to create the task. + + * The left-side panel will now displays a task under **Tasks (IN ORDER OF EXECUTION)**. + +3. Enter the Task name(required) and Task Description (optional). + +4. Select the **Task type**, it can be `Shell` or `Container Image`. + + * **Shell Tasks**: These execute shell scripts directly within the job runtime environment. In this type of tasks you can define inline scripts or use script files from your configured source code. + + * **Container Image Tasks**: These allows you to execute commands and scripts inside a custom docker container, instead of using the default environment provided by devtron, you can specify you own container image with all dependencies and tools required for the tasks. + +These Tasks run using container in container approach, that means, the specified image is pulled and run inside the job pod, thus providing a complete isolated environment. + +5. After selecting the **Task type**, you need to configure task-specific fields based on that **Task type**. Let's look at some examples below to configure both **Shell type** and **Container image** tasks. + +#### Example - Shell Task + +Let's take an example of a **Shell task** for a job that allows you to back up a specific PostgreSQL database and stores it as a file. + +#### Tasks Configurations + +|Field| Values for This Example| Required/Optional | Description| +| :--- | :--- | :--- | :--- | +| `Task Name`| `pg-backup-task`| Required| Enter a name for the task| +| `Task Description`| `This task performs a backup of a specific PostgreSQL database and save it as a file and stores the file path as a output variable.` | Optional | Short description for the task| +| `Task Type` | `Shell`| Optional| Select the preferred task type | +| `Input variables`| Refer the [Input Variable table](#input-variable-table) below | Optional|

These variables provide dynamic values to the script at the time of execution and are defined directly in the UI.

  • Variable name: Alphanumeric chars and (_) only
  • Source or input value: The variable's value can be global, output from the previous task, or a custom value.
    Accepted data types include: STRING
| +| `Trigger/Skip condition` | `Trigger If: DB_NAME == prod-db`| Optional| A conditional statement to execute or skip the task| +| `Script`| Refer the [Script](#script) below| Required| Custom script for executing Job tasks| +| `Output directory path` | `/backups`| Optional| Directory path where output files such as logs, errors, etc. will be available after the execution.| +| `Output variables`| Refer the [output variable](#output-variables) table| Optional|

Output variables store the task's output as variables, which can then be used as input variables for the next task.

  • [Pass/Failure Condition](#pass-fail-condition) (Optional): Conditional statements that determine the success or failure of the task. A failed condition stops the execution of the next task and/or the job.
| + +#### Input Variable Table + +| Variable|Type| Value| Description| +| :--- | :--- | :--- | :--- | +| `DB_NAME`| String | `prod-db`| Name of the database to be backed up| +| `DB_USER`| String | `postgres`| Username for the PostgreSQL instance| +| `DB_HOST`| String | `localhost`| PostgreSQL server hostname| +| `BACKUP_PATH`| String | `/backup`| Directory path where the backup file is saved| + +* To add a input variable, click **+ Add Variable** next to the **Input Variable**, a new table appear asking you to enter the variable and its required information. + +* You can click `+` icon next to **Variable** header field to add more rows to the input variable table.
+ + ![Figure 6a: Variable configuration](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-var-config.jpg) + +* You can click the slider icon next to each variable name to make it's value required and add a description for it. + + ![Figure 6b: Value configuration](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-value-config.jpg) + +* You can click the slider icon next to each variable value to add choices, allow custom input, and ask value at runtime. + + ![Figure 6b: Add choice](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-choice.jpg) + +#### Script + +{% code title="Custom Script" overflow="wrap" lineNumbers="true" %} +```bash +#!/bin/sh +set -eo pipefail +#set -v ## uncomment this to debug the script + +echo "Taking database backup" +bash ./scripts/backup.sh --db-host "$DB_HOST" --db-user "$DB_USER" --db-name "$DB_NAME" --backup-path "$BACKUP_PATH" +``` +{% endcode %} + +In the above script, instead of writing the entire script for the backup task, we have referenced the `backup.sh` script from the Github Repository configured as Source code. This approach avoids the need to rewrite the same script again and again for each task, thus making it reusable and efficient across multiple jobs. + +**backup.sh Script (Stored in Github repository)** + +{% code title="backup.sh" overflow="wrap" lineNumbers="true" %} +```bash +#!/bin/bash + +# Input variables for database connection +DB_HOST="$DB_HOST" +DB_USER="$DB_USER" +DB_NAME="$DB_NAME" +DB_PASSWORD="$DB_PASSWORD" +BACKUP_PATH="$BACKUP_PATH" + +# Define the backup file path +BACKUP_FILE_PATH="/backups/$DB_NAME-$(date +%F).backup" + +# Backup PostgreSQL database +pg_dump -h "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -F c -b -v -f "$BACKUP_FILE_PATH" + +# Check if backup was successful +if [ $? -eq 0 ]; then + BACKUP_STATUS="success" + echo "Backup completed successfully. File path: $BACKUP_FILE_PATH" + echo "BACKUP_STATUS=$BACKUP_STATUS" # Set the output variable + echo "BACKUP_FILE_PATH=$BACKUP_FILE_PATH" # Set the backup file path as output variable +else + BACKUP_STATUS="failure" + echo "Backup failed." + exit 1 +fi +``` +{% endcode %} + +#### Output Variables + +| Variable | Type | Description | +| ------------------ | ------ | --------------------------------------------------- | +| `BACKUP_FILE_PATH` | String | Stores the file path of the backup file generated. | +| `BACKUP_STATUS` | String | Indicates whether the backup was successful or not. | + +#### Pass/Fail Condition + +PASS If: `BACKUP_STATUS == success` +PASS If: `BACKUP_FILE_PATH != ""` + +After adding this backup task, you can add more tasks as well, for example, you can add a task to upload the backup files to cloud storage (e.g., AWS S3) or sending a notification about the backup status.These additional tasks can use the output variable `BACKUP_FILE_PATH` to access the backup file generated in the first task. + +#### Example - Container Image Task + +Let's take an example of a **Container Image Task** for a job that provision an AWS s3 bucket using terraform. Here instead of installing dependencies (such as terraform), this task pulls the official terraform image (hashicorp/terraform:1.5.0) in which our task will execute. This means a container will be created inside the job pod and runs terraform commands inside the container thus avoiding the need to install dependencies manually each time. 

#### Tasks Configurations

| Field| Values for This Example| Required/Optional | Description|
| :---|:---|:---|:---|
| `Task name`| `provision-s3-bucket`| Required|Enter a name for the task|
| `Description`| Provision an S3 bucket with Terraform| Optional| A descriptive message for the task|
| `Task type`| `Container Image`| Optional| Allows you to execute commands and scripts inside a custom Docker container|
| `Input variables`| Refer to the [Input Variable table](#input-variable-table-1) below | Optional|

These variables provide dynamic values to the script and are defined directly in the UI.

  • Variable name: Alphanumeric characters and underscores (_) only
  • Source or input value: The variable's value can be a global variable, the output of a previous task, or a custom value.
    Accepted data types include: STRING
| 
| `Trigger/Skip condition`| `TF_ENV == "prod"`| Optional| Execute or skip the task based on the condition provided.|
| `Container image`| `hashicorp/terraform:1.5.0`| Required| Select an image from the drop-down list or enter a custom value in the format `<image>:<tag>`|
| `Mount custom code`| Refer to the [Mount custom code](#mount-custom-code) section below| Optional|

Enable this option to mount custom code in the container, and enter the script in the box below.

  • **Mount above code at** (required): Path where the code should be mounted
| 
| `Command`| `sh`| Optional| The command to execute inside the container|
| `Args`| `/run.sh`| Optional| The arguments to be passed to the command mentioned in the command field|
| `Port mapping`| `No`| Optional| The port number on which the container listens. The port number exposes the container to outside services.|
| `Mount code to container`| `yes`| Optional| Mounts the source code (configured Git repository) inside the container. Default is "No". If set to "Yes", enter the path where the source code should be mounted inside the container.|
| `Mount directory from host` |`No`| Optional| Mount any directory from the host into the container. This can be used to mount code or even output directories.|
| `Output directory path`|`No`| Optional| Directory path where output files such as logs, errors, etc. will be available after the execution.|

#### Input Variable Table

| Variable| Type| Value| Description|
| :--- | :---| :--- | :--- |
| `AWS_REGION`| String | `us-east-1`| AWS region where the bucket will be created. |
| `BUCKET_NAME` | String | `my-app-logs-bucket` | Name of the S3 bucket to create.|

* To add an input variable, click **+ Add Variable** next to **Input Variables**. A new table appears, asking you to enter the variable and its required information.

* You can click the `+` icon next to the **Variable** header field to add more rows to the input variable table.

  ![Figure 7a: Variable configuration](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-var-config.jpg)

* You can click the slider icon next to each variable name to make its value required and add a description for it.

  ![Figure 7b: Value configuration](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-value-config.jpg)

* You can click the slider icon next to each variable value to add choices, allow custom input, and ask for the value at runtime.

  ![Figure 7c: Add choice](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-choice.jpg)

#### Mount Custom Code

{% code title="Custom Script" overflow="wrap" lineNumbers="true" %}
```bash
#!/bin/sh
set -eo pipefail

# Navigate to your Terraform config (mounted from the source code repository)
cd /sourcecode/terraform/s3

# Initialize, plan, and apply without prompts
terraform init -input=false
terraform plan -input=false \
  -var="region=$AWS_REGION" \
  -var="bucket_name=$BUCKET_NAME"
terraform apply -auto-approve -input=false \
  -var="region=$AWS_REGION" \
  -var="bucket_name=$BUCKET_NAME"

# Capture the bucket name output
echo "S3_BUCKET_NAME=$(terraform output -raw bucket_name)"
```
{% endcode %}

In the above script, instead of writing the entire Terraform configuration inline, we have stored `main.tf` and `variables.tf` in the GitHub repository configured as the source code. Because **Mount code to container** is enabled, the source code (configured Git repository) is also mounted inside the container and available at `/sourcecode`. This avoids rewriting the same configuration for every task and keeps it reusable across multiple jobs.

**main.tf Script (Stored in GitHub repository)**

{% code title="main.tf" overflow="wrap" lineNumbers="true" %}
```hcl
provider "aws" {
  region = var.region
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  tags = var.tags
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.this.id

  versioning_configuration {
    status = var.versioning_enabled ? "Enabled" : "Suspended"
  }
}

# Expose the bucket name so the task script can read it with `terraform output`
output "bucket_name" {
  value = aws_s3_bucket.this.id
}
```
{% endcode %}

**variables.tf Script (Stored in GitHub repository)**

{% code title="variables.tf" overflow="wrap" lineNumbers="true" %}
```hcl
variable "bucket_name" {
  description = "The name of the S3 bucket"
  type        = string
}

variable "region" {
  description = "AWS region to create the S3 bucket in"
  type        = string
}

variable "tags" {
  description = "Tags to apply to the S3 bucket"
  type        = map(string)
  default     = {}
}

variable "versioning_enabled" {
  description = "Whether to enable versioning on the bucket"
  type        = bool
  default     = false
}
```
{% endcode %}

After adding this S3 provisioner task, you can add more tasks as well. For example, you can add a task that attaches a bucket policy or sends a Slack or email notification that the S3 bucket has been provisioned successfully.

6. After configuring the tasks, choose the environment in which you want the job to be executed.

7. Select **Create Pipeline** and a job pipeline will be created.

---

## Update Job Pipeline

{% hint style="warning" %}
### Who Can Perform This Action?
Users need to have **Admin role** or **Super Admin role**.
Refer to the [User permissions](../../global-configurations/authorization/user-access.md#roles-available-for-jobs).
{% endhint %}

You can update the configurations of an existing job pipeline except for the pipeline's name.
To update your job pipeline:

1. Navigate to **Configurations** → **Workflow Editor** of the specific job you want to update.

2. Select the **Job pipeline** you wish to update; an **Edit job pipeline** modal window will appear.
+ + ![Figure 7a: Select job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-update-select.jpg) + +3. Change the required configurations as per your requirements and select **Update Pipeline** to update the pipeline + + ![Figure 7b: Update job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-update-pipeline.jpg) + +--- + +## Delete Job Pipeline + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +To delete a job pipeline + +1. Navigate to **Configurations** → **Workflow Editor** for the job you want to delete. + +2. Select the **Job pipeline** you wish to delete, a **Edit job pipeline** modal window will appear. + + ![Figure 8a: Select job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-delete-select.jpg) + +3. Select **Delete Pipeline** at the bottom left corner of the **Edit job pipeline** modal window to delete the job pipeline. + + ![Figure 8b: Delete job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-delete-pipeline.jpg) + +4. A pop-up window will appear asking you to confirm the **Delete Pipeline** action. + + ![Figure 8c: Confirm Delete job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-delete-dialog-box.jpg) + +--- + +After creating the pipeline, you can configure [ConfigMaps](./configmap-secret/configmap-job.md) (optional) and [Secrets](./configmap-secret/secret-job.md) (optional) and [Environment overrides](./environment-override-job.md) (optional) before triggering it. \ No newline at end of file diff --git a/docs/user-guide/jobs/create-job.md b/docs/user-guide/jobs/create-job.md index a33f33a1b..c1d08246f 100644 --- a/docs/user-guide/jobs/create-job.md +++ b/docs/user-guide/jobs/create-job.md @@ -1,42 +1,114 @@ -# Create a New Job +# Create Job -* On the Devtron dashboard, select **Jobs**. -* On the upper-right corner of the screen, click **Create**. -* Select **Job** from the drop-down list. -* **Create job** page opens. +In devtron, jobs can be created by two ways: -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/select-create-job-latest.jpg) +* **Blank Job**: This allows you to create a new job from scratch by manually defining all configurations. +* **Clone Job**: This allows you to create a new job by reusing the configuration of an existing job. -## Create Job +--- -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-page.jpg) +## Create a Blank Job +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} -Provide below information on the `Create job` page: +To create a new **Blank Job** in Devtron, follow these steps: -| Fields | Description | -| --- | --- | -| **Job Name** | User-defined name for the job in Devtron. | -| **Description** | Enter the description of a job. | -| **Registry URL** | This is the URL of your private registry in Quay. E.g. `quay.io` | -| **Select one of them** |
  • **Create from scratch** :Select the project from the drop-down list.
    `Note`: You have to add [project under Global Configurations](../global-configurations/projects.md). Only then, it will appear in the drop-down list here.
  • **Clone existing application**: Select an app you want to clone from and the project from the drop-down list.
    `Note`: You have to add [project under Global Configurations](../global-configurations/projects.md). Only then, it will appear in the drop-down list here.
| +1. Navigate to **Devtron Dashboard** → **Jobs**. -**Note**: Do not forget to modify git repositories and corresponding branches to be used for each Job Pipeline if required. + ![Figure 1a: Job's page](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/jobs.jpg) +2. Click **Create** button in the top-right corner and select **Job** from the drop-down list. -### Tags + ![Figure 1b: Select job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/select-create-job-latest.jpg) -`Tags` are key-value pairs. You can add one or multiple tags in your application. +3. The **Create Job** page opens. From the left panel, select **Blank job**, then enter the required details as listed in the table below. -**Propagate Tags** -When tags are propagated, they are considered as labels to Kubernetes resources. Kubernetes offers integrated support for using these labels to query objects and perform bulk operations e.g., consolidated billing using labels. You can use these tags to filter/identify resources via CLI or in other Kubernetes tools. + | Fields| Description| + |:---|:---| + | `Project`| User-defined name for the job in Devtron.| + | `Job Name`| User-defined name for the job in Devtron.| + | `Description` | Enter the description of a job.| + | `Tags`|

Key-value pairs used for identifying and organizing the job.
Users can propagate tags as Kubernetes labels to enable filtering, bulk operations, and integrations with Kubernetes tools.

  1. Click the **Add tags to job** dropdown on the **Create job** page.
  2. Under the **Tags** section, click **+** to add a new tag.
  3. Click the **X** icon to delete an existing tag.
  4. Click the propagation icon to propagate a tag as a Kubernetes label (it turns dark grey when propagated); click it again to remove propagation.
    [Snapshot](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/overview/manage-tags-latest-1.jpg)
| -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/propagate-tags.jpg) + ![Figure 1b: Blank job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-page.jpg) -* Click `+ Add tag` to add a new tag. -* Click the symbol on the left side of your tag to propagate a tag.
`Note`: Dark grey colour in symbol specifies that the tags are propagated. -* To remove the tags from propagation, click the symbol again. -* Click `Save`. +4. Click **Create Job**. The job will be created, and you will be automatically redirected to the [Configurations page](/docs/user-guide/jobs/configurations/README.md)to continue setting up the job pipeline. +--- +## Create a Clone Job + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have [Admin role](../global-configurations/authorization/user-access.md#role-based-access-levels) or above (along with access to the environment and applications) to perform environment override. +{% endhint %} + +To create a new **Clone Job** in Devtron, follow these steps: + +1. From the **Devtron Dashboard** → **Jobs**. +2. Click the **Create** button in the top-right corner and select **Job** from the drop-down list. + + ![Figure 2a: Select job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/select-create-job-latest.jpg) + +3. The **Create Job** page opens. From the left panel, select **Clone Job**, then enter the required details as listed in the table below. + + | Fields| Description| + |:---|:--- | + | `Project`| User-defined name for the job in Devtron.| + | `Job Name`| User-defined name for the job in Devtron.| + | `Description`| Enter the description of a job.| + | `Tags`|

Key-value pairs used for identifying and organizing the job.
Users can propagate tags as Kubernetes labels to enable filtering, bulk operations, and integrations with Kubernetes tools.

  1. Click the **Add tags to job** dropdown on the **Create job** page.
  2. Under the **Tags** section, click **+** to add a new tag.
  3. Click the **X** icon to delete an existing tag.
  4. Click the propagation icon to propagate a tag as a Kubernetes label (it turns dark grey when propagated); click it again to remove propagation.
    [Snapshot]
| + | `Select an job to clone` | Select the existing job from the dropdown that you want to clone.Enter the description of a job.| + + ![Figure 2b: Clone job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-clone-job.jpg) + +4. Click **Create Job**. The **Clone job** will be created, and you will be automatically redirected to the [Configurations page](/docs/user-guide/jobs/configurations/README.md), where the configuration will be pre-populated based on the selected source job. You may review and modify these settings as required. + +--- + +## Delete Job + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +To delete a job: + +1. Navigate to **Devtron Dashboard** → **Jobs** → **Select the job** → **Configurations**. + +2. To delete the job, you have to first delete any configured pipelines in the jobs. + + ### Delete job pipelines + + * Navigate to **Devtron Dashboard** → **Jobs** → **Select the job** → **Configurations** → **Workflow Editor**. + + * Select the job pipeline you wish to delete, a edit job pipeline modal window will appear. + + ![Figure 3a: Select job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-delete-select.jpg) + + * Select **Delete Pipeline** at the bottom left corner of the edit job pipeline modal window to delete the job pipeline. + + ![Figure 3b: Delete job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-delete-pipeline.jpg) + + * A pop-up window will appear asking you to confirm the **Delete Pipeline** action. + + ![Figure 3c: Confirm delete job pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/workflow-editor-delete-dialog-box.jpg) + +3. Select **Delete Job** to delete the job. + + ![Figure 4a: Delete job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-delete-job.jpg) + +4. A pop-up window will appear asking you to confirm the **Delete Job** action. + + ![Figure 4b: Confirm delete job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-delete-job-dialog.jpg) + +--- + +After creating a job, the next step is to configure the job, refer the [Configurations](./configurations/README.md) section to configure the job. \ No newline at end of file diff --git a/docs/user-guide/jobs/overview-job.md b/docs/user-guide/jobs/overview-job.md index cbd25200a..7df4d1685 100644 --- a/docs/user-guide/jobs/overview-job.md +++ b/docs/user-guide/jobs/overview-job.md @@ -1,49 +1,95 @@ # Overview -The `Overview` section contains the brief information of the job, any added tags, and deployment details of the particular job. -In this section, you can also [change project of your job](#change-project-of-your-job) and [manage tags](#manage-tags) if you added them while creating job. +The Overview page provides a centralized view of a job’s details within Devtron. It allows users to quickly access information about the job, manage tags, view job pipelines — all in a single, organized interface. 
-![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job.jpg) +![Figure 1a: Job's overview](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job.jpg) +The **Overview** page contains three main sections: +* [**About**](#about): Contains job's metadata such as name, description, project, project, creator, and it also includes options to manage tags. -The following details are provided on the **Overview** page: +* [**Job Pipelines**](#job-pipelines): Displays all job pipelines along with their last status and quick access to associated workflows. -| Fields | Description | -| :--- | :--- | -| **Job Name** | Displays the name of the job. | -| **Created on** | Displays the day, date and time the job was created. | -| **Created by** | Displays the email address of a user who created the job. | -| **Project** | Displays the current project of the job. You can change the project by selecting a different project from the drop-down list. | +--- +## About -## Change Project of your Job +The **About** section allows you to: -You can change the project of your job by clicking **Project** on the `Overview` section. +* View key job details +* Change the project your application is assigned to +* Manage tags that you may have added during the job’s creation -1. Click `Project`. -2. On the `Change project` dialog box, select the different project you want to change from the drop-down list. +The left side of the **About** section displays essential information about the job. -Click **Save**. The job will be moved to the selected project. +![Figure 1b: Job's basic info](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-about-left.jpg) -## Manage Tags +The table below captures all the key elements presented in this section, along with their descriptions and whether they can be edited by the user. -`Tags` are key-value pairs. You can add one or multiple tags in your application. When tags are propagated, they are considered as labels to Kubernetes resources. Kubernetes offers integrated support for using these labels to query objects and perform bulk operations e.g., consolidated billing using labels. You can use these tags to filter/identify resources via CLI or in other Kubernetes tools. +| Field Name | User Editable |Description| +| :--------- | :--------------- |:--------- | +| `Job Name` | No |Displays the name of the application (e.g., backend-healthcare-app).| +| `Short Description`|Yes|A short, optional description to summarize the application's purpose.| +| `Project` |Yes|Indicates the current project under which the application is organized.
You can change the project directly from this section.
  1. Click the `Edit` icon next to the current project.
  2. In the `Change Project` window, select the new project from the dropdown.
  3. Click `Save`.
Changing the project will revoke access for existing users and grant access only to those who have permissions in the newly selected project.| +| `Created on` |No|Shows the exact date and time when the application was created.| +| `Created by`|No|Displays the email address of the user who created the application.| +| `Tags` |Yes|Key-value pairs used for identifying and organizing the application.
Users can propagate tags as Kubernetes labels to enable filtering, bulk operations, and integrations with Kubernetes tools.
  1. Click the `Edit` icon next to `Tags`.
  2. On the `Manage Tags` page, click `+ Add tag` to add a new tag.
  3. Click the `X` icon to delete an existing tag.
  4. Click the `propagation icon` to propagate a tag as a Kubernetes label (it turns dark grey when propagated); click it again to remove propagation.
    [Snapshot](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/overview/manage-tags-latest-1.jpg)
  5. Click `Save`. The configured tags will appear immediately under `Tags` in the `About` section.
| -`Manage tags` is the central place where you can create, edit, and delete tags. You can also propagate tags as labels to Kubernetes resources for the application. +### Readme -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/creating-application/manage-tags-latest.jpg) +The right side of the **About** section contains a **README** area where you can maintain job-specific notes or documentation. The `Readme` supports Markdown formatting, making it easy to include formatted text, instructions, or important context related to the application. -* Click `Edit tags`. -* On the `Manage tags` page, click `+ Add tag` to add a new tag. -* Click `X` to delete a tag. -* Click the symbol on the left side of your tag to propagate a tag.
`Note`: Dark grey colour in symbol specifies that the tags are propagated. -* To remove the tags from propagation, click the symbol again. -* Click `Save`. +![Figure 2a: Readme](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-readme.jpg) -The changes in the tags will be reflected in the `Tags` on the `Overview` section. +To add or update the **Readme**: +1. Click the **Edit** button in the Readme section. +2. A Markdown editor will appear where you can write or modify content under the `Write` tab. +3. Use standard Markdown syntax to format text, create lists, insert links, and more. +4. Preview the content using the **Preview** tab. +5. Click **Save** to update the README. +![Figure 2b: Edit Readme](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-readme-edit.jpg) +{% hint style="info" %} + After saving, the system displays the email address of the user who last updated the README, along with the date and time. This information appears in the header of the Readme section, beside the title. +{% endhint %} +### Catalog [![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/elements/EnterpriseTag.svg)](https://devtron.ai/pricing) +The **Catalog** in the **About** section displays information about your job—such as Container port, Environment Variables, Arguments, Resources(CPU and RAM) This data is managed using [Devtron’s Catalog Framework](../global-configurations/catalog-framework.md). - +![Figure 3a: Catalog](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-catalog.jpg) + +You can use the **Catalog framework** to maintain information about your job such as Environment Variables, Resources(CPU and RAM), service documentations etc. This makes it easier for others to understand, manage and use your job. + +`Super-Admins` define a custom JSON schema that determines what fields are shown in the catalog form. This schema is specific to each resource type, such as Devtron applications. + +When you click the **Edit** icon, a form appears based on the defined schema. As an job owner, you can fill out fields like: +* Container port (e.g., API contract, service documentation) +* Environment Variables +* Arguments +* Resources(CPU and RAM) + +![Figure 3b: Edit Catalog](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-catalog-expand.jpg) + +{% hint style= "info" %} +The structure and labels in the catalog form are entirely configurable by your platform team via JSON schema in **Catalog Framework**. Field names and sections may vary depending on how the schema was defined by your organization. +{% endhint %} + +Once saved, this information is displayed in a readable format within the Catalog subsection and is accessible to all users who have permission to view the job. + +![Figure 3c: Catalog overview](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-catalog-saved.jpg) + +--- + +## Job Pipelines + +The Job Pipelines section provides a detailed view of all job pipelines. For each job pipeline, it displays + +| Field Name |Description| +| :--------- |:--------- | +| Pipeline name |Name of the job pipeline.| +| Last Run Status|Displays the status of the most recent job execution| +| Run in environment | Displays the name of the environment in which job is executed. 
| +| Last Run AT| Displays how long ago the job was last triggered.| + +![Figure 4: Job Pipelines](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/overview-job-job-pipelines.jpg) diff --git a/docs/user-guide/jobs/run-history-job.md b/docs/user-guide/jobs/run-history-job.md new file mode 100644 index 000000000..c9ee1f139 --- /dev/null +++ b/docs/user-guide/jobs/run-history-job.md @@ -0,0 +1,69 @@ +# Run History + +The run history allows you to review each and every execution of job-pipelines. Here you can review who triggered each pipeline, when it started executing and finished, and whether it succeeded or failed. It also allows you to inspect logs for each execution. + +## Accessing run history for specific pipeline + +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} + +Either you have just executed the job and want to inspect live execution or you just want to check previous executions of a job pipeline + +1. Navigate to **Run History** tab of your job, all the executions will appear in a reverse chronological order under the pipeline name. In case you have configured multiple job pipelines within a job, you need to select the pipeline from **Select Pipeline** dropdown in the top-left corner. + + ![Figure 1a: Select Pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-history.jpg) + +2. Select the specific execution you want to inspect. By default the latest execution is selected. + + ![Figure 1b: Select specific execution](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-history-select-execution.jpg) + +3. After selecting the execution, the right section of the page will display the details about that particular execution. + + |Field|Description| + |:---|:---| + |`Triggered`|Shows the date, time, user, and commit ID that initiated this run.| + |`Environment`|Indicates which execution environment (e.g., devtron-ci) was used for this job.| + |`Execution started`|Timestamp marking when the job actually began running.| + |`Execution succeeded`|Timestamp marking when the job finished successfully.| + |`Worker status`|Final outcome of the worker performing the job (e.g., Succeeded or Failed). On failure, the error message is also shown| + + ![Figure 1c: Execution's details](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-history-details.jpg) + +Apart from these details, you can also inspect logs, source code, security, and download artifacts (if any). + +### Logs + * In logs tab, you can inspect logs for each pipeline stage and task along with their runtime. + * Use the `Search log` search bar to search specific keywords or errors. + * You can expand/collapse each pipeline stage to view specific logs related to that stage. Use the `Expand/collapse all stages` button near the search bar to expand or collapse all stages at once. + * Select the fullscreen button in the bottom-right corner to view logs in fullscreen. + + ![Figure 2a: Logs](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-history-logs.jpg) + +### Source + The source tab shows which commit is from the source code (configured Git repository) is used to execute the job pipeline. 
+ + It shows following commit details + |Field|Description| + |:---|:---| + |`Repository name & icon` |The Git repo used with its provider logo| + |`Commit hash`|A short, clickable commit ID (e.g. 443vecd) that opens the full commit details when clicked.| + |`Commit message`|Commit message used while pushing that commit| + |`Author`|Name & email of the committer.| + |`Date & time`|When that commit was authored| + + ![Figure 2b: Source](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-history-source.jpg) + +### Artifacts + + The Artifacts tab shows all archives or files your job has produced such as backup binaries, reports, log bundles and let you download them for inspection or further use your job has produced. + +### Security + + The Security tab provides a view of vulnerability scanning results for code, the container images used or built during the job execution. It appears when a security scan plugin (e.g., Trivy via the Code Scan plugin) is integrated into your job pipeline. + +--- + +After inspecting run history you can also setup the [Job Overview](./overview-job.md) so that others can easily use the job in future. \ No newline at end of file diff --git a/docs/user-guide/jobs/triggering-job.md b/docs/user-guide/jobs/triggering-job.md index bd2071278..a57f034f8 100644 --- a/docs/user-guide/jobs/triggering-job.md +++ b/docs/user-guide/jobs/triggering-job.md @@ -1,34 +1,41 @@ -# Triggering Job +# Triggering Job Pipeline -## Triggering Job Pipeline +After creating the job pipeline, the next step is to trigger the job pipeline. This is the step where the job will executed in the selected environment. -The Job Pipeline can be triggered by selecting `Select Material` +To trigger the job-pipeline: -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job.jpg) +{% hint style="warning" %} +### Who Can Perform This Action? +Users need to have **Admin role** or **Super Admin role**. +Refer the [User permissions](../global-configurations/authorization/user-access.md#roles-available-for-jobs). +{% endhint %} -Job Pipelines that are set as automatic are always triggered as soon as a new commit is made to the git branch they're sensing. However, Job pipelines can always be manually triggered as and if required. +1. Navigate to the **Trigger Job** tab of your job, which list all configured pipelines. -Various commits done in the repository can be seen here along with details like Author, Date etc. Select the commit that you want to trigger and then click on `Run Job` to trigger the job pipeline. + ![Figure 1a: Trigger job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job.jpg) -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-job.jpg) +2. Select **Select Material** for the job-pipeline you wish to execute, a modal window will open, under **Code-source** tab, this window lists all recent commits along with their hash, author, date, and message. from your configured source repository. + ![Figure 1b: Select material for specific pipeline](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job-select-material.jpg) -**Refresh** icon, refreshes Git Commits in the job Pipeline and fetches the latest commits from the `Git Repository`. +3. Select the commit you want to use in the job execution. You can use the search bar to filter the commits hash, and you can also click the kebab menu to reveal excluded commits. 
If a recent commit isn’t displayed, click the Refresh icon to reload the commit list from your Git repository. -**Ignore Cache** : This option will ignore the previous build cache and create a fresh build. If selected, will take a longer build time than usual. [Click here](../creating-application/workflow/ci-pipeline.md#docker-layer-caching) to read more about controlling cache behavior in Devtron. + ![Figure 1c: Select commit](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job-commit.jpg) -It can be seen that the job pipeline is triggered here and is the _Running_ state. +4. Select the **Parameters** tab to configure pipeline runtime inputs(if any). The Key and Type columns are read‑only; enter values for each required parameter (denoted by *). Optional parameters can be configured as needed or left blank. -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/click-job-details.jpg) + ![Figure 1d: Configure runtime parameters](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job-parameter.jpg) -Click your `job pipeline` or click `Run History` to get the details about the job pipeline such as logs, reports etc. +5. After selecting the commit and configuring runtime parameters, pick the target environment from the **Execute job in** dropdown at the bottom. -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/run-history-job.jpg) + ![Figure 1e: Select environment for job's execution](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job-execute-env.jpg) -Click `Source code` to view the details such as commit id, Author and commit message of the Git Material that you have selected for the job. +6. Select **Run Job** to execute your pipeline. -Click `Artifacts` to download the _reports_ of the job, if any. + ![Figure 1f: Run job](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job-run-job.jpg) -If you have multiple job pipelines, you can select a pipeline from the drop-down list to view th details of logs, source code, or artifacts. +--- +After executing your pipeline, the pipeline will now be in running state and you can monitor the pipeline execution(such as logs, source,artifacts)in [run-history](./run-history.md) either by navigating to **Run History** tab or by clicking `details` above the **Select Material** of the specific pipeline. +![Figure 2: Job status](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/trigger-job-details.jpg) \ No newline at end of file diff --git a/docs/user-guide/jobs/what-is-job.md b/docs/user-guide/jobs/what-is-job.md new file mode 100644 index 000000000..eda634487 --- /dev/null +++ b/docs/user-guide/jobs/what-is-job.md @@ -0,0 +1,19 @@ +# What is Jobs? + +Devtron Jobs provide a streamlined way to execute specific tasks or set of tasks defined by the user within the user's application environment. + +Unlike Devtron CI/CD pipelines that primarily focus on building, testing, and deploying applications, Devtron Jobs are designed to handle independent, ephemeral tasks and allows you to execute tasks without impacting Ci/CD workflows or pipelines, making them ideal for specific tasks. + +Each Devtron Job corresponds to a [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) that creates one or more Pods to carry out the specific task. 
Once the task is completed, the Pods are terminated, making Devtron Jobs an ideal solution for one-time, recurring, or event-driven workloads. + +Jobs can be configured to run as: + +* **One-time tasks**: Useful for maintenance operations, data migrations, backups, or environment cleanups. + +* **Recurring tasks**: Scheduled operations like daily scans, periodic backups, or routine security checks. + +* **Event-driven tasks**: Triggered by specific events such as commits, pull requests, or tag creation. + +Devtron Jobs support executing custom tasks or predefined operations using integrated pipeline plugins. These plugins extend job functionality by enabling tasks such as code scanning, image signing, vulnerability patching, container image copying, and external automation through tools like Ansible and Bitbucket Runners. To explore the full list of supported plugins and their configuration options, refer to the [Devtron Plugin Documentation](/docs/user-guide/plugins/README.md). + +To learn how to create a new Job in Devtron, continue to the [Create a new job](./create-job.md) section. \ No newline at end of file diff --git a/docs/user-guide/jobs/workflow-editor-job.md b/docs/user-guide/jobs/workflow-editor-job.md deleted file mode 100644 index c29cb9e62..000000000 --- a/docs/user-guide/jobs/workflow-editor-job.md +++ /dev/null @@ -1,101 +0,0 @@ -# Workflow Editor - -In the `Workflow Editor` section, you can configure a job pipeline to be executed. Pipelines can be configured to be triggered automatically or manually based on code change or time. - -* After adding Git repo in the `Source Code` section, go to the `Workflow Editor`. -* Click `Job Pipeline`. -* Provide the information in the following fields on the **Create job pipeline** page: - -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-pipeline-basic.jpg) - -| Field Name | Required/Optional | Description | -| :--- | :--- | :--- | -| Pipeline Name | Required | A name for the pipeline | -| Source type | Required | Source type to trigger the job pipeline. Available options: [Branch Fixed](#source-type-branch-fixed) \| [Branch Regex](#source-type-branch-regex) \|[Pull Request](#source-type-pull-request) \| [Tag Creation](#source-type-tag-creation) | -| Branch Name | Required | Branch that triggers the job pipeline. | - -* Click **Create Pipeline**. - -* The job pipeline is created. - -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/job-pipeline-created.jpg) - -* To trigger job pipeline, go to the [Trigger Job](triggering-job.md) section. - -`Note`: You can create more than one job pipeline by clicking **+ Job Pipeline**. - -### Docker Layer Caching [![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/elements/EnterpriseTag.svg)](https://devtron.ai/pricing) - -[Click here](../creating-application/workflow/ci-pipeline.md#docker-layer-caching) to read more about controlling cache behavior in Devtron. - -### Source type: Branch Fixed - -The **Source type** - "Branch Fixed" allows you to trigger a CI build whenever there is a code change on the specified branch. - -Select the **Source type** as "Branch Fixed" and enter the **Branch Name**. - -### Source type: Branch Regex - -`Branch Regex` allows users to easily switch between branches matching the configured Regex before triggering the build pipeline. -In case of `Branch Fixed`, users cannot change the branch name in ci-pipeline unless they have admin access for the app. 
So, if users with -`Build and Deploy` access should be allowed to switch branch name before triggering ci-pipeline, `Branch Regex` should be selected as source type by a user with Admin access. - -For example if the user sets the Branch Regex as `feature-*`, then users can trigger from branches such as `feature-1450`, `feature-hot-fix` etc. - -### Source type: Pull Request - -The **Source type** - "Pull Request" allows you to configure the CI Pipeline using the PR raised in your repository. - -> Before you begin, [configure the webhook](../creating-application/workflow/ci-pipeline.md#configuring-webhook) for either GitHub or Bitbucket. - -> The "Pull Request" source type feature only works for the host GitHub or Bitbucket cloud for now. To request support for a different Git host, please create a GitHub issue [here](https://github.com/devtron-labs/devtron/issues). - - -To trigger the build from specific PRs, you can filter the PRs based on the following keys: - -| Filter key | Description | -| :--- | :--- | -| `Author` | Author of the PR | -| `Source branch name` | Branch from which the Pull Request is generated | -| `Target branch name` | Branch to which the Pull request will be merged | -| `Title` | Title of the Pull Request | -| `State` | State of the PR. Default is "open" and cannot be changed | - -Select the appropriate filter and pass the matching condition as a regular expression (`regex`). - -> Devtron uses regexp library, view [regexp cheatsheet](https://yourbasic.org/golang/regexp-cheat-sheet/). You can test your custom regex from [here](https://regex101.com/r/lHHuaE/1). - -### Source type: Tag Creation - -The **Source type** - "Tag Creation" allows you to build the CI pipeline from a tag. - -> Before you begin, [configure the webhook](../creating-application/workflow/ci-pipeline.md#configuring-webhook) for either GitHub or Bitbucket. - -To trigger the build from specific tags, you can filter the tags based on the `author` and/or the `tag name`. - -| Filter key | Description | -| :--- | :--- | -| `Author` | The one who created the tag | -| `Tag name` | Name of the tag for which the webhook will be triggered | - -Select the appropriate filter and pass the matching condition as a regular expression (`regex`). - - -### Add Preset Plugins - -![](https://devtron-public-asset.s3.us-east-2.amazonaws.com/images/create-job/create-job-pipeline-add-tasks.jpg) - -You can also add preset plugins in your job pipeline to execute some standard tasks, such as Code analysis, Load testing, Security scanning etc. Click `Add Task` to add [preset plugins](../creating-application/workflow/ci-build-pre-post-plugins.md#configuring-pre-post-build-tasks). - - -## Update Job Pipeline - -You can update the configurations of an existing Job Pipeline except for the pipeline's name. -To update a pipeline, select your job pipeline. -In the **Edit job pipeline** window, edit the required fields and select **Update Pipeline**. - -## Delete Job Pipeline - -You can only delete a job pipeline in your workflow. - -To delete a job pipeline, go to **Configurations > Workflow Editor** and select **Delete Pipeline**.