Commit 9c33833

Merge branch 'takeoff_develop' of https://github.com/devonfw/hangar into takeoff_develop
serhiibets committed Dec 21, 2022
2 parents 9e82083 + 3e90309 commit 9c33833
Showing 125 changed files with 1,117 additions and 578 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -6,7 +6,7 @@ takeoff/takeoff_gui/.vscode/launch.json
takeoff/takeoff_gui/pubspec.lock
takeoff/takeoff_cli/bin/takeoff_cli.exe
# VSCode Config
.vscode
.vscode/*

# Terraform files
**/.terraform/**
111 changes: 45 additions & 66 deletions documentation/aws/setup-sonarqube-instance.asciidoc
@@ -5,14 +5,12 @@
:terraform_tutorials: https://developer.hashicorp.com/terraform/tutorials/aws
:terraform_vars_example_short: --region eu-west-1 --keypair_name sonarqube
:terraform_vars_example_full: --region eu-west-1 --vpc_cidr_block 10.0.0.0/16 --subnet_cidr_block 10.0.1.0/24 --nic_private_ip 10.0.1.50 --instance_type t3a.small --keypair_name sonarqube
:terraform_vars: + \
--region The AWS region where the resources will be created. By default: eu-west-1 + \
--vpc_cidr_block Virtual private network IP range (CIDR). By default: 10.0.0.0/16 + \
--subnet_cidr_block The range of internal addresses that are owned by this subnetwork. Ranges must be unique and non-overlapping within a network. + \
By default: 10.0.1.0/24 + \
--nic_private_ip Instance private IP within subnet range. By default: 10.0.1.50 + \
--instance_type Machine Instance type. By default: t3a.small + \
--keypair_name Keypair name to connect with ssh as defined in AWS. By default: sonarqube
:terraform_vars: --region Region where the resources will be created. Default: eu-west-1 + \
--vpc_cidr_block Virtual private network IP range (CIDR). Default: 10.0.0.0/16 + \
--subnet_cidr_block Range of internal addresses that are owned by this subnetwork. Ranges must be unique and non-overlapping within a network. Default: 10.0.1.0/24 + \
--nic_private_ip Instance private IP within subnet range. Default: 10.0.1.50 + \
--instance_type Machine Instance type. Default: t3a.small + \
--keypair_name Keypair name to connect with SSH as defined in AWS. Default: sonarqube

= Setting up a SonarQube instance in {provider_name}
:toc:
@@ -36,63 +34,65 @@ IMPORTANT: This will create a public key, directly stored in AWS (current region

=== Relevant files

* `./sonarqube.sh` script to automatically do all the steps in only one execution command.
* `main.tf` contains declarative definition written in HCL of AWS infrastructure.
* `./sonarqube.sh` script to automatically do all the steps in one command execution.
* `main.tf` contains the declarative definition, written in HCL, of the Cloud infrastructure.
* `../common/setup_sonarqube.sh` script to be run on {container_instance_type} that installs and deploys a container running SonarQube.
* `variables.tf` contains variable definition for `main.tf`.
* `terraform.tfvars` contains values (user-changeable) for the variables defined in `variables.tf`.
* `terraform.tfstate` contains the current state of the created infrastructure. It is generated when the configuration is applied and should be stored securely.
* `set-terraform-variables.sh` assists the user in setting the values of `terraform.tfvars`.

== Usage
== SonarQube instance setup

=== Easy usage
=== Quick setup

To make it easier to use for users who do not know terraform, or for those who need only one command to be executed, we have prepared a script that executes all the steps automatically.
To make it easier for inexperienced users, or for those who need only one command to be executed, we provide the `sonarqube.sh` script, which executes all the steps automatically.

==== Usage
```
./sonarqube.sh [command] [flags] [terraform variables]
./sonarqube.sh <command> [flags...] [terraform variables...]
```

*Commands*
==== Commands
```
COMMAND DESCRIPTION
apply Create or update infrastructure
destroy Destroy previously-created infrastructure
output Show output values from your terraform. With output command only one option is readed `--output-key`, all other flags and options are ignored.
If you want to show the output as json, add an option '--output-key --json'.
If you want to recover only one output value add an option '--output-key key' where key is the name of the output var.
apply Creates or updates infrastructure.
destroy Destroys previously created infrastructure.
output    Shows output values from the Terraform state. Ignores all flags other than '--output-key' or '-k'.
          To print a single output value, use '--output-key <key>', where <key> is the name of the output variable.
```

*Flags*
==== Flags
```
-s, --state-folder The folder where you are going to save/import your terraform configuration."
-k, --output-key [ONLY FOR output] The key of the terraform output variable that you want to recover."
-q, --quiet To not print any command of the script, only the execution of 'terraform command'."
-h, --help Get help for commands."
-s, --state-folder    Folder where the Terraform configuration and state will be saved or imported from.
-k, --output-key      [ONLY FOR output] Key of the single Terraform output variable to print.
-q, --quiet           Suppresses all output except that generated by the Terraform command itself.
-h, --help            Displays the help message.
```

*terraform variables*
==== Terraform variables

These options are variables to be replace by the set-terraform-variables.sh script. You can replace as many as you want of the next:
These variables will be used to update `terraform.tfvars` (via the `set-terraform-variables.sh` script). They are ignored by the `output` command. Syntax: '--key value' or '--key=value'.

===== Configurable variables

[subs=attributes+]
```
{terraform_vars}
```
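
The '--key value' / '--key=value' syntax above can be sketched in a few lines of shell. This is a hypothetical illustration of how a helper like `set-terraform-variables.sh` might rewrite `terraform.tfvars`, not the repository's actual implementation; the `set_tfvars` function and `demo.tfvars` file are made up for this example:

```shell
#!/bin/sh
# Hypothetical sketch of how a helper like set-terraform-variables.sh
# could turn '--key value' / '--key=value' flags into terraform.tfvars
# assignments. 'set_tfvars' and 'demo.tfvars' are made up for this example.
set_tfvars() {
  tfvars_file="$1"; shift
  while [ "$#" -gt 0 ]; do
    case "$1" in
      --*=*) key="${1%%=*}"; key="${key#--}"; value="${1#*=}"; shift ;;
      --*)   key="${1#--}"; value="$2"; shift 2 ;;
      *)     shift; continue ;;
    esac
    if grep -q "^${key}[[:space:]]*=" "$tfvars_file" 2>/dev/null; then
      # Update an existing assignment in place
      sed -i "s|^${key}[[:space:]]*=.*|${key} = \"${value}\"|" "$tfvars_file"
    else
      # Append a new assignment
      printf '%s = "%s"\n' "$key" "$value" >> "$tfvars_file"
    fi
  done
}

set_tfvars demo.tfvars --region eu-west-1 --instance_type=t3a.small
cat demo.tfvars
```

Running this sketch leaves a `demo.tfvars` containing a `region = "eu-west-1"` line and an `instance_type = "t3a.small"` line.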

*Examples*
==== Examples

[subs=attributes+]
```
./sonarqube.sh apply --state-folder "path_to_save_state" {terraform_vars_example_short}
./sonarqube.sh apply --state-folder /secure/location {terraform_vars_example_short}

./sonarqube.sh apply --state-folder "path_to_save_state" {terraform_vars_example_full}
./sonarqube.sh apply --state-folder /secure/location {terraform_vars_example_full}
```

WARNING: *Remember to securely store all the content inside the state-folder*, otherwise you will not be able to perform any changes, including detroying them, from Terraform.
CAUTION: *Remember to securely store all the content inside the state folder*, otherwise you will not be able to perform any changes in infrastructure, including destroying it, from Terraform.
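
As a precaution, you may want to pack the state folder into an archive with restrictive permissions before moving it to secure storage. The following is an illustrative sketch only, not part of the repository scripts; a temporary directory stands in for your real state folder:

```shell
#!/bin/sh
set -eu
# Illustrative sketch only: pack the Terraform state folder into an
# archive readable by its owner alone. A temporary directory stands in
# for the real state folder; replace it with your own path.
STATE_FOLDER="$(mktemp -d)"
printf '{}' > "$STATE_FOLDER/terraform.tfstate"   # placeholder state file

ARCHIVE="tfstate-backup.tar.gz"
tar -czf "$ARCHIVE" -C "$STATE_FOLDER" .          # archive the whole folder
chmod 600 "$ARCHIVE"                              # owner-only read/write
ls -l "$ARCHIVE"
```

From there the archive can be moved to whatever secure storage (a vault, an encrypted bucket) your team uses.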

=== Expert usage
=== Step-by-step setup

First, you need to initialize the working directory containing Terraform configuration files (located at `/scripts/sonarqube/{provider_path}`) and install any required plugins:

@@ -102,14 +102,14 @@ terraform init

Then, you may need to customize some input variables about the environment. To do so, you can either edit `terraform.tfvars` file or take advantage of the `set-terraform-variables` script, which allows you to create or update values for the required variables, passing them as flags.

You can replace as many as you want of the next:
*Configurable variables:*

[subs=attributes+]
```
{terraform_vars}
```

Examples of usage:
*Examples of usage:*

[subs=attributes+]
```
./set-terraform-variables.sh {terraform_vars_example_short}
./set-terraform-variables.sh {terraform_vars_example_full}
```

WARNING: Unless changed, some of the variables used to deploy by default probably do not exist in your environment of {provider_name}.
WARNING: Unless changed, some of the default variable values probably do not exist in your {provider_name} environment.

Finally, deploy SonarQube instance:

```
terraform apply --auto-approve
```

WARNING: *Remember to securely store `terraform.tfstate` file*, otherwise you will not be able to perform any changes, including detroying them, from Terraform. More insights https://www.terraform.io/cli/run[here].
CAUTION: *Remember to securely store the `terraform.tfstate` file*, otherwise you will not be able to perform any changes to the infrastructure, including destroying it, from Terraform. More insights https://www.terraform.io/cli/run[here].

NOTE: `terraform apply` command performs a plan and actually carries out the planned changes to each resource using the relevant infrastructure provider's API. You can use it to perform changes on the created resources later on.

In particular, this will create an Ubuntu-based in {container_instance_type} and deploy a Docker container running SonarQube.

You will get the public url of {container_instance_type} and an admin token to connect with sonar as output. Take note of it, you will need it later on.

==== Manage terraform output
In particular, this will create an {container_instance_type} based on Ubuntu and deploy a Docker container running SonarQube.

You can recover all the outputs from terraform after having used apply command using the next command:

```
terraform output
```

Or you can get an specific output value using his key in the command:

```
terraform output $outputKeyName
```

NOTE: Remember that command needs `terraform.tfstate` file to work.
You will get the public URL of the SonarQube instance and an admin token as output. Take note of them; you will need them later on.
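
If you later need these values in a shell session, they can be captured from the Terraform outputs. The sketch below stubs out the `terraform` binary with a function so it is self-contained; the output names `sonarqube_url` and `sonarqube_token` are assumptions, so check `terraform output` in your state folder for the real names:

```shell
#!/bin/sh
# Sketch: capture Terraform outputs into shell variables. A stub function
# stands in for the real 'terraform' binary so the example is self-contained;
# the output names below are assumptions, not the actual ones.
terraform() {            # stub; remove this to run against your real state
  case "$3" in
    sonarqube_url)   echo "http://203.0.113.10:9000" ;;
    sonarqube_token) echo "squ_example_token" ;;
  esac
}

SONAR_URL="$(terraform output -raw sonarqube_url)"
SONAR_TOKEN="$(terraform output -raw sonarqube_token)"
echo "SonarQube at ${SONAR_URL}"
```

With the real binary, `terraform output -raw <key>` prints a single output value without quotes, which is convenient for scripting.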

==== Destroy SonarQube instance

@@ -160,28 +144,23 @@ terraform destroy

==== Modify SonarQube instance infrastructure

As long as you keep the `terraform.tfstate` file generated when creating the SonarQube instance, you can apply changes to the infrastructure deployed.

If you are going to apply a change in the infrastructure, you will have to modify the terraform files and reapply the changes with the command `terraform apply`.

IMPORTANT: In windows, keep in mind that after applying any changes, you will lose the value of the token so be sure to copy or write it down before applying any changes. To avoid this we have implemented a method but to work you must store the standard terraform output in a file called terraform.tfoutput. This can be done with the following command:
As long as you keep the `terraform.tfstate` file generated when creating the SonarQube instance, you can apply changes to the infrastructure deployed by modifying `main.tf` and executing:

```
terraform output > terraform.tfoutput
terraform apply
```

IMPORTANT: On Windows, when applying any changes, the value of the token is lost if `terraform.tfoutput` does not exist. Be sure you do not skip the first command.

== Change SonarQube default admin password

After having deployed sonarqube by following this guide, you will be able to access SonarQube web interface on the url provided by terraform output and the following credentials:
After a few minutes, you will be able to access SonarQube web interface on the public URL provided by Terraform output with the following credentials:

* Username: `admin`
* Password: `admin`

IMPORTANT: Change the default password promptly. After that, update the password in terraform vars, you can do it manually or with the next command:

```
./set-terraform-variables.sh --sonarqube_password ${YOUR_NEW_PASSWORD}
```
IMPORTANT: Change the default password promptly. After that, update the password in Terraform configuration: `./set-terraform-variables.sh --sonarqube_password <new password>`.

== Appendix: More information about terraform for {provider_name}
== Appendix: More information about Terraform for {provider_name}
* {terraform_tutorials}[Official Terraform tutorials]
@@ -2,7 +2,7 @@

In this section we will create a pipeline which will provision an Azure AKS cluster. This pipeline will be configured to be manually triggered by the user. As part of AKS cluster provisioning, an NGINX Ingress controller is deployed and a variable group with the name `aks-variables` is created, which contains, among others, the DNS name of the Ingress controller, which you will need to add as a CNAME record on the domains used in your application Ingress manifest files. Refer to the appendix for more details.

The creation of the pipeline will follow the project workflow, so a new branch named `feature/aks-provisioning` will be created, the YAML file for the pipeline and the terraform files for creating the cluster will be pushed to it.
The creation of the pipeline will follow the project workflow, so a new branch named `feature/aks-provisioning` will be created, the YAML file for the pipeline and the Terraform files for creating the cluster will be pushed to it.

Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using `-w` flag.

@@ -104,4 +104,4 @@ kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{

=== Appendix: Destroying the cluster

To destroy the provisioned resources, set `operation` pipeline variable value to `destroy` and run the pipeline.
To destroy the provisioned resources, set `operation` pipeline variable value to `destroy` and run the pipeline.
37 changes: 20 additions & 17 deletions documentation/azure-devops/setup-deploy-pipeline.asciidoc
@@ -1,18 +1,20 @@
:provider: Azure Devops
:pipeline_type: Pipeline
:trigger_sentence: This pipeline will be configured to be triggered after the package pipeline
:provider: Azure DevOps

:pipeline_type: pipeline
:trigger_sentence: This pipeline will be configured in order to be triggered every time package pipeline is executed successfully on a commit for `release/*` and `develop` branches, requiring manual launch for other branches but still enforcing that package pipeline has passed
:pipeline_type2: pipeline
:path_provider: azure-devops
:openBrowserFlag: -w
= Setting up a Deploy Pipeline on {provider}
= Setting up a Deploy {pipeline_type} on {provider}

In this section we will create a deploy {pipeline_type} on {provider} to deploy the project application on an already provisioned Kubernetes cluster. This pipeline will be configured in order to be triggered every time package {pipeline_type} is executed successfully on a commit for `release/*` and `develop` branches, requiring manual launch for other branches but still enforcing that package {pipeline_type} has passed. By default, it depends on the environment provisioning {pipeline_type} being successfully run on beforehand and, depending on the Kubernetes provider, it consumes the artifact produced by that. It also consumes variable groups created by package and environment provisioning {pipeline_type}.
In this section we will create a deploy {pipeline_type} on {provider} to deploy the project application on an already provisioned Kubernetes cluster. {trigger_sentence}. By default, it depends on the environment provisioning {pipeline_type} being successfully run beforehand and, depending on the Kubernetes provider, it consumes the artifact produced by it. It also consumes variable groups created by package and environment provisioning {pipeline_type}.

The creation of the pipeline will follow the project workflow, so a new branch named `feature/deploy-pipeline` will be created and the YAML file for the pipeline will be pushed to it.
The creation of the {pipeline_type2} will follow the project workflow, so a new branch named `feature/deploy-pipeline` will be created and the YAML file for the {pipeline_type} will be pushed to it.

Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided in `-b` flag). The PR will be automatically merged if the repository policies are met. If the merge is not possible, either the PR URL will be shown as output, or it will be opened in your web browser if using `-w` flag.

The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create the new branch, create a deploy pipeline based on a YAML template appropriate for the project manifests files, create the Pull Request, and if it is possible, merge this new branch into the specified branch.
The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh` will automatically create the new branch, create a deploy {pipeline_type} based on a YAML template appropriate for the project manifests files, create the Pull Request, and if it is possible, merge this new branch into the specified branch.


== Prerequisites

@@ -27,31 +29,32 @@ The script located at `/scripts/pipelines/{path_provider}/pipeline_generator.sh`
```
pipeline_generator.sh \
-c <config file path> \
-n <pipeline name> \
-n <{pipeline_type} name> \
-d <project local path> \
--package-pipeline-name <pipeline name> \
--env-provision-pipeline-name <pipeline name> \
--package-pipeline-name <{pipeline_type} name> \
--env-provision-pipeline-name <{pipeline_type} name> \
--k8s-provider <provider name> \
--k8s-namespace <namespace> \
--k8s-deploy-files-path <manifests path> \
[--k8s-image-pull-secret-name <secret name>] \
[-b <branch>] \
[-w]
```
NOTE: The config file for the deploy pipeline is located at `/scripts/pipelines/{path_provider}/templates/deploy/deploy-pipeline.cfg`.
NOTE: The config file for the deploy {pipeline_type} is located at `/scripts/pipelines/{path_provider}/templates/deploy/deploy-pipeline.cfg`.

=== Flags
[subs=attributes+]
```
-c --config-file [Required] Configuration file containing pipeline definition.
-n --pipeline-name [Required] Name that will be set to the pipeline.
-c --config-file [Required] Configuration file containing {pipeline_type} definition.
-n --pipeline-name [Required] Name that will be set to the {pipeline_type}.
-d --local-directory [Required] Local directory of your project.
--package-pipeline-name [Required] Package pipeline name.
--env-provision-pipeline-name [Required] Environment provisioning pipeline name.
--package-pipeline-name [Required] Package {pipeline_type} name.
--env-provision-pipeline-name [Required] Environment provisioning {pipeline_type} name.
--k8s-provider [Required] Kubernetes cluster provider name. Accepted values: EKS, AKS.
--k8s-namespace [Required] Kubernetes namespace where the application will be deployed.
--k8s-deploy-files-path [Required] Path from the root of the project to the YAML manifests directory.
--k8s-image-pull-secret-name Name for the generated secret containing registry credentials. Required when using a private registry to host images.
--k8s-image-pull-secret-name Name for the generated secret containing registry credentials. Required when using a private registry to host images.

-b --target-branch Name of the branch to which the Pull Request will target. PR is not created if the flag is not provided.
-w Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.
```
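
The rule that `-w` requires the `-b` flag can be enforced with a small argument check. The following is a hypothetical sketch of such validation, not the actual code of `pipeline_generator.sh`:

```shell
#!/bin/sh
# Hypothetical sketch of the "-w requires -b" rule from the flag table;
# not the actual implementation of pipeline_generator.sh.
check_flags() {
  open_browser=false
  target_branch=""
  while [ "$#" -gt 0 ]; do
    case "$1" in
      -w) open_browser=true; shift ;;
      -b|--target-branch) target_branch="$2"; shift 2 ;;
      *) shift ;;
    esac
  done
  if [ "$open_browser" = true ] && [ -z "$target_branch" ]; then
    echo "error: -w requires -b" >&2
    return 1
  fi
  echo "ok: branch='${target_branch}' open_browser=${open_browser}"
}

check_flags -b develop -w           # accepted combination
check_flags -w || true              # rejected: -w without -b
```

Validating flag combinations up front, before any branch or PR is created, keeps a half-configured pipeline from being pushed to the repository.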
@@ -65,7 +68,7 @@ NOTE: The config file for the deploy pipeline is located at `/scripts/pipelines

=== Appendix: accessing the application

Once the {pipeline_type} is executed and your application is deployed, you can list the hostname to access it with:
Once the {pipeline_type} is executed and your application is deployed, you can list the hostname to access it by running locally:

```
kubectl get ingress -n <namespace>
```
