Starter template for Skatteetaten/vagrant-hashistack
- Quick start
- Description - What & Why
- Install Prerequisites
- Configuration
- Usage
- Test Configuration and Execution
⚠️ If you are new to the template, we strongly recommend that you read Description - What & Why, then perform the steps in Install Prerequisites. After that you can move on to the guides in the getting started section.
You can quickly get started by following these steps:
1. Create your own repository by pressing the Use this template button.
2. Clone the repository you just created to your machine.
3. Run `make template_init` to remove unnecessary files and folders from your repository.

You are now all set to start developing your module!
Here is an overview of the folders and files you need to know to get started quickly:
├── example <------------------------ Your example(s) of module usage go here
├── nomad <------------------------ All Nomad job files (.hcl) go here
├── main.tf <------------------------ All your resources go here (e.g. importing and rendering your Nomad job file)
├── outputs.tf <------------------------ Your output variables (if any)
├── variables.tf <------------------------ Your input variables (if any)
└── README.md <------------------------ Documentation of your module
ℹ️ After step 3, this readme will be available in .github/template_specific as old_README.md
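As a sketch of how these pieces fit together, a minimal `main.tf` could read a job file from `nomad/` and register it via the Terraform Nomad provider. This is only an illustration: the file name `example.hcl` and the `datacenter` variable are hypothetical, not part of the template.

```hcl
# main.tf -- minimal sketch (hypothetical file and variable names)
variable "datacenter" {
  type    = string
  default = "dc1"
}

resource "nomad_job" "example" {
  # Render nomad/example.hcl with the given variables and register it
  jobspec = templatefile("${path.module}/nomad/example.hcl", {
    datacenter = var.datacenter
  })
}
```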
If you are new to the template, we strongly recommend that you read Description - What & Why, then perform the steps in Install Prerequisites.
After that you can move on to the Usage section.
This template is a starting point, and an example, of how to take advantage of the Hashistack vagrant-box to create, develop, and test Terraform modules within the Hashistack ecosystem.
Hashistack, in the context of this repository, refers to a set of software products by HashiCorp.
💡 If you found this in Skatteetaten/vagrant-hashistack, you may be interested in the separate repository vagrant-hashistack-template.
⚠️ If you are reading this in your own repository, go to If This Is in Your Own Repository.
⚠️ There are getting started guides in getting_started_vagrantbox.md and getting_started_modules.md
This template aims to standardize the workflow for building and testing terraform-nomad-modules, using the Skatteetaten/hashistack vagrant-box.
The default box will start Nomad, Vault, Consul and MinIO bound to loopback and advertising on the IP 10.0.3.10, which should be available on your local machine. Port-forwarding for nomad on port 4646 should bind to 127.0.0.1 and should allow you to use the nomad binary to post jobs directly. Consul and Vault have also been port-forwarded and are available on 127.0.0.1 on ports 8500 and 8200 respectively. Minio is started on port 9000 and shares /vagrant (your repo) from within the vagrant box.
Service | URL | Token(s) |
---|---|---|
Nomad | http://10.0.3.10:4646 | |
Consul | http://10.0.3.10:8500 | master |
Vault | http://10.0.3.10:8200 | master |
Minio | http://10.0.3.10:9000 | minioadmin : minioadmin |
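With these forwards in place, the standard HashiCorp environment variables can point your local CLIs at the box. A sketch, using the addresses and the default `master`/`minioadmin` tokens from the table above:

```shell
# Point local CLIs at the services forwarded from the box
export NOMAD_ADDR=http://127.0.0.1:4646
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
export CONSUL_HTTP_TOKEN=master
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN=master
```

After this, commands like `nomad status` or `vault status` run from your local shell against the box.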
If you get the error message
Vagrant cannot forward the specified ports on this VM, since they
would collide with some other application that is already listening
on these ports. The forwarded port to 8500 is already in use
on the host machine.
you most likely have another version of the vagrant-box already running and using the ports. You can solve this in one of two ways:
1. Run `vagrant status` to see all running boxes, then run `vagrant destroy <box-name>` to take it down. See the documentation on what `vagrant destroy` does.
2. Use Vagrant's auto_correct configuration option, which will use another port if the specified port is already taken. To enable it, add the lines below to the bottom of your Vagrantfile.
Vagrant.configure("2") do |config|
# Hashicorp consul ui
config.vm.network "forwarded_port", guest: 8500, host: 8500, host_ip: "127.0.0.1", auto_correct: true
# Hashicorp nomad ui
config.vm.network "forwarded_port", guest: 4646, host: 4646, host_ip: "127.0.0.1", auto_correct: true
# Hashicorp vault ui
config.vm.network "forwarded_port", guest: 8200, host: 8200, host_ip: "127.0.0.1", auto_correct: true
end
This will enable the autocorrect-feature on the ports used by consul, nomad, and vault.
💡 You can find out more about Vagrantfiles here
⚠️ Note that using auto_correct WILL change your ports. Since all ports are hardcoded in this template, nothing will work as expected until you update them.
make install
This command will install:
- VirtualBox
- Packer
- Vagrant with additional plugins
- Additional software dependent on the OS:
  - MacOS:
    - Virtualization must be enabled. This is enabled by default on MacOS.
    - Homebrew must be installed.
  - Linux:
    - Virtualization must be enabled. The installation will fail with an error if it is not.
    - The packages gpg and apt must be installed.
NB: After installation you might need to reboot your system in order to start the virtualization provider (VirtualBox).
From a thousand-foot view, the startup scheme will:
- Start the hashistack and MinIO
- Run playbook.yml, which in turn runs all ansible-playbooks inside dev/ansible/.
💡 Vagrantfile lines 8-11 run the first playbook on startup, and can be changed.
💡 Below is a detailed description of the whole startup procedure, both user changeable and not.
box - Comes bundled with the box, not possible to change
system - Provided by the system in automated processes, not possible to change
user - Provided by the user to alter the box or template in some way
Seq number | What | Provided by | Description |
---|---|---|---|
1 | /home/vagrant/.env_default | [ box ] | default variables |
2 | /vagrant/.env | [ user ] | variables override, see Pre-packaged Configuration Switches for details |
3 | /vagrant/.env_override | [ system ] | variables are overridden for test purposes |
4 | /vagrant/dev/vagrant/conf/pre_ansible.sh | [ user ] | script running before the ansible bootstrap procedure, details |
5 | /vagrant/dev/vagrant/conf/pre_bootstrap/*.yml | [ user ] | pre-bootstrap tasks, running before the hashistack software starts, details |
6 | /etc/ansible/bootstrap.yml | [ box ] | verifies ansible variables and software configuration, runs hashistack software and MinIO, and verifies that it started correctly, link |
7 | /vagrant/conf/post_bootstrap/*.yml | [ user ] | post-start scripts, running after the hashistack software has started, details |
8 | /vagrant/dev/conf/post_ansible.sh | [ user ] | script running after the ansible bootstrap procedure, details |
9 | /vagrant/ansible/*.yml | [ user ] | ansible tasks included in the playbook, see Pre-packaged Configuration Switches for details |
You may change the hashistack configuration or add additional pre and post steps to the ansible startup procedure to match your needs. Detailed documentation in dev/vagrant/conf/README.md
In addition to ansible playbooks, you can also add bash-scripts that will be run before and/or after the ansible provisioning step. This is useful for doing deeper changes to the box pertaining to your needs. Detailed documentation in dev/vagrant/conf/README.md
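As a hypothetical sketch of such a user-provided step, a task file dropped into `/vagrant/conf/post_bootstrap/` could seed a test secret into Vault once the stack is up. The file name, secret path and values below are invented for illustration; see dev/vagrant/conf/README.md for the exact semantics.

```yaml
# conf/post_bootstrap/00_seed_secret.yml (hypothetical example)
- name: Seed a test secret into Vault after the stack is up
  shell: vault kv put secret/myapp/config username=demo
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
    VAULT_TOKEN: master
```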
The box comes with a set of configuration switches controlled by env variables to simplify testing of different scenarios and enable staged development efforts. To change any of these values from their defaults, you may add the environment variable to .env.
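For example, a `.env` enabling Nomad ACLs and a deny-by-default Consul ACL policy could look like this (the variable names are the switches listed in the tables below):

```shell
# .env -- example overrides of the box defaults
nomad_acl=true
consul_acl=true
consul_acl_default_policy=deny
```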
NB: All lowercase variables will automatically get a corresponding TF_VAR_-prefixed variant for use directly in terraform.
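In other words, a switch such as `consul_acl_default_policy` also becomes `TF_VAR_consul_acl_default_policy`, so a Terraform variable with the matching name is populated automatically. A sketch:

```hcl
# variables.tf -- picked up from TF_VAR_consul_acl_default_policy
variable "consul_acl_default_policy" {
  type        = string
  description = "Mirrors the box switch of the same name"
  default     = "allow"
}
```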
To use enterprise versions of the hashistack components, set the software's corresponding enterprise variable to true (see below).
default | environment variable | value |
---|---|---|
 | nomad_enterprise | true |
x | nomad_enterprise | false |
 | nomad_acl | true |
x | nomad_acl | false |
When ACLs are enabled in Nomad, the bootstrap token will be available in vault under secret/nomad/management-token with the two key-value pairs accessor-id and secret-id. secret-id is the token itself. These can be accessed in several ways:
- From inside the vagrant box with `vault kv get secret/nomad-bootstrap-token`
- From the local machine with `vagrant ssh -c "vault kv get secret/nomad-bootstrap-token"`
- By going to vault's UI on localhost:8200 and signing in with the root token.
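For scripting, the token can also be extracted with the `-field` flag of `vault kv get` and handed to the nomad CLI. A sketch that assumes the box is up and `VAULT_ADDR`/`VAULT_TOKEN` point at it:

```shell
# Pull just the secret-id field and authenticate the nomad CLI with it
export NOMAD_TOKEN="$(vault kv get -field=secret-id secret/nomad-bootstrap-token)"
nomad job status
```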
default | environment variable | value |
---|---|---|
 | consul_enterprise | true |
x | consul_enterprise | false |
x | consul_acl | true |
 | consul_acl | false |
x | consul_acl_default_policy | allow |
 | consul_acl_default_policy | deny |
The Consul namespaces feature is available in the enterprise version only. The switches below will enable consul_namespaces_test.yml:
consul_enterprise=true
consul_acl=true
consul_acl_default_policy=deny
Consul will come up with two additional namespaces ["team1", "team2"] and an admin token for each of these namespaces.
⚠️ Admin tokens use the builtin policy Namespace Management with scope=global.
References:
- Consul namespaces documentation
- Consul http api documentation
- Consul cli documentation
- Consul 1.7 - Namespaces: Simplifying Self-Service, Governance and Operations Across Teams
default | environment variable | value |
---|---|---|
 | vault_enterprise | true |
x | vault_enterprise | false |
If consul_acl_default_policy has the value deny, it will also enable the Consul secrets engine in vault.
Ansible will provision additional custom roles (admin-team, dev-team), policies and tokens for test purposes, with different access levels.
How to generate a token:
# generate token for dev team member
vagrant ssh -c 'vault read consul/creds/dev-team'
# generate token for admin team member
vagrant ssh -c 'vault read consul/creds/admin-team'
💡 Tokens can be used to access the UI (the access level depends on the policy attached to the token).
default | environment variable | value |
---|---|---|
x | vault_pki | true |
 | vault_pki | false |
Vault PKI will be enabled at /pki. A role called default is available to issue certificates.
Issue certificates from terminal:
vault write pki/issue/default common_name="your_common_name"
or with the terraform resource `vault_pki_secret_backend_cert`:
resource "vault_pki_secret_backend_cert" "app" {
backend = "pki"
name = "default"
common_name = "app.my.domain"
}
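The issued material can then be passed on to other resources or modules via the attributes of `vault_pki_secret_backend_cert`. A sketch, continuing the example above:

```hcl
# Expose the issued certificate and (sensitive) private key
output "app_certificate" {
  value = vault_pki_secret_backend_cert.app.certificate
}

output "app_private_key" {
  value     = vault_pki_secret_backend_cert.app.private_key
  sensitive = true
}
```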
If you get the error message Dimension memory exhausted on 1 node or Dimension CPU exhausted on 1 node, you might want to increase the resources dedicated to your vagrant-box.
To overwrite the default resource-configuration you can add the lines
Vagrant.configure("2") do |config|
config.vm.provider "virtualbox" do |vb|
vb.memory = 2048
vb.cpus = 2
end
end
to the bottom of your Vagrantfile, changing vb.memory and vb.cpus to suit your needs. Any configuration in the Vagrantfile will overwrite the defaults, if there are any. More configuration options.
💡 The defaults can be found in Vagrantfile.default.
There are two "Getting started" guides:
- getting_started_modules.md will guide you through how to create a terraform module with this template.
- getting_started_vagrantbox.md will guide you through how to use the vagrantbox, geared towards those using the box to develop terraform modules.
There are several commands that help to run the vagrant-box:
- `make install` installs all prerequisites. Run once.
- `make up` provisions a vagrant-hashistack box on your machine. After the machine and hashistack are set up, it runs the Startup Scheme.
- `make clean` takes down the provisioned box, if there is any.
- `make dev` is the same as `make up`, except that it skips all tasks within the ansible playbook that have the tags `test` and `custom_ca`. Read more about ansible tags here.
- `make test` takes down the provisioned box if there is any, removes tmp files, and then runs `make up`.
- `make update` downloads the newest version of the vagrant-hashistack box from vagrantcloud.
- `make template_example` runs the example in template_example/.
- `make template_init` cleans out the template so you can get started with module development.
- `make fmt` formats/prettifies all `.tf` files in the directory.
- `make lint` runs the github linter locally.
- `make pre-commit` is shorthand for all you need before committing to a PR.
- `make destroy-all-running-boxes` CAUTION! DESTRUCTIVE. If you are running out of space or having trouble with virtualbox or vagrant, you may run this "get out of jail" command. It will stop ALL virtualbox VMs and delete them.
💡 For full info, check template/Makefile.
⚠️ Makefile commands are not idempotent in the context of the vagrant-box, so you may face port-collision errors. In most cases this happens because a vagrant box is already running. Run `vagrant destroy -f` to destroy the box.
Once vagrant-box is running, you can use other options like the Nomad- and Terraform-CLIs to iterate over the deployment in the development stage.
Minio S3 can be used as a general artifact repository while building and testing within the scope of the vagrantbox to push, pull and store resources for further deployments.
⚠️ The directory /vagrant is mounted to minio. Only the first level of sub-directories become bucket names.
Resource examples:
- docker images
- compiled binaries
- jar files
- etc...
Push (archive) of a docker image:
# NB! Folder /vagrant is mounted to Minio
# Folder `dev` is going to be a bucket name
- name: Create tmp if it does not exist
file:
path: /vagrant/dev/tmp
state: directory
mode: '0755'
owner: vagrant
group: vagrant
- name: Archive docker image
docker_image:
name: docker_image
tag: local
archive_path: /vagrant/dev/tmp/container-image.tar
source: local
💡 The artifact stanza instructs Nomad to fetch and unpack a remote resource, such as a file, tarball, or binary.
Example:
task "web" {
driver = "docker"
artifact {
source = "s3::http://127.0.0.1:9000/dev/tmp/container-image.tar"
options {
aws_access_key_id = "minioadmin"
aws_access_key_secret = "minioadmin"
}
}
config {
load = "container-image.tar"
image = "docker_image:local"
}
}
Once you start the box with one of the commands make dev, make up or make example,
you need a simple way to continuously deploy development changes.
There are several options:
- From the local machine. You can install Hashicorp binaries on the local machine, such as terraform and nomad. Then you can deploy changes to the vagrant-box using these binaries.
Example terraform:
terraform init
terraform apply
Example nomad:
nomad job run countdash.hcl
⚠️ Your local binaries and the binaries in the box might not be the same versions, and may behave differently. Box versions.
- Using vagrant. The box instance has all binaries installed and available in the PATH.
You can use `vagrant ssh` to place yourself inside the vagrantbox and run commands.
# remote command execution
vagrant ssh default -c 'cd /vagrant; terraform init; terraform apply'
# ssh inside the box, local command execution
vagrant ssh default
cd /vagrant
terraform init
terraform apply
💡 default is the name of the running VM. You could also use the VM id; to get it, check `vagrant global-status`.
The CHANGELOG.md should follow this syntax.
All PRs will run super-linter. You can use this to run it locally before creating a PR.
💡 Information about rules can be found under .github/linters/
You can run `terraform fmt --recursive` to rewrite your terraform config-files to a canonical format.
⚠️ Terraform binary must be available to do this.
1. Add the template repository as a remote upstream:
git remote add template https://github.com/Skatteetaten/vagrant-hashistack-template.git
2. Fetch all:
git fetch --all
3. Check out the template master branch and pull it. It is important to have it locally:
git checkout -b template-master template/master
git pull
4. Check out a new branch from the origin master of the current module:
git checkout master # checkout master
git pull # pull the latest master
git checkout -b sync # checkout new branch `sync` from master
5. Run git merge with the flag --allow-unrelated-histories:
git merge template-master --allow-unrelated-histories
6. Fix conflicts (carefully).
7. Commit the changes.
8. Push the sync branch.
9. Make a pull request from sync against the master branch.
The tests are run using GitHub Actions, which makes it possible to automate, customize, and execute software development workflows right in the repository. We utilize a matrix testing strategy to cover all possible and logical combinations of the different properties and values that the components support. The .env_override file is used by the tests to override the values that are available in the .env_default file, as well as in the user-configurable .env file.
As of today, the following tests are executed:
Test name | Consul Acl | Consul Acl Policy | Nomad Acl | Hashicorp binary |
---|---|---|---|---|
test (consul_acl_enabled, consul_acl_deny, nomad_acl_enabled, hashicorp_oss) | true | deny | true | Open source |
test (consul_acl_enabled, consul_acl_deny, nomad_acl_enabled, hashicorp_enterprise) | true | deny | true | enterprise |
test (consul_acl_enabled, consul_acl_deny, nomad_acl_disabled, hashicorp_oss) | true | deny | false | Open source |
test (consul_acl_enabled, consul_acl_deny, nomad_acl_disabled, hashicorp_enterprise) | true | deny | false | enterprise |
test (consul_acl_disabled, consul_acl_deny, nomad_acl_enabled, hashicorp_oss) | false | deny | true | Open source |
test (consul_acl_disabled, consul_acl_deny, nomad_acl_enabled, hashicorp_enterprise) | false | deny | true | enterprise |
test (consul_acl_disabled, consul_acl_deny, nomad_acl_disabled, hashicorp_oss) | false | deny | false | Open source |
test (consul_acl_disabled, consul_acl_deny, nomad_acl_disabled, hashicorp_enterprise) | false | deny | false | enterprise |
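A matrix of this shape can be expressed in a workflow roughly as follows. This is a hypothetical sketch, not the repository's actual workflow file; the job name, file paths and matrix values are illustrative:

```yaml
# .github/workflows/test.yml (hypothetical sketch)
jobs:
  test:
    runs-on: macos-latest
    strategy:
      matrix:
        consul_acl: [consul_acl_enabled, consul_acl_disabled]
        nomad_acl: [nomad_acl_enabled, nomad_acl_disabled]
        hashicorp_binary: [hashicorp_oss, hashicorp_enterprise]
    steps:
      - uses: actions/checkout@v2
      - name: Write .env_override for this combination
        run: cp tests/${{ matrix.consul_acl }}_${{ matrix.nomad_acl }}.env .env_override
      - name: Run the test suite
        run: make test
```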
The latest test results can be found under the Actions tab.