Local HashiCorp Stack

Introduction

This project lets you run a 3-server + 3-client Nomad/Consul cluster in 6 VirtualBox VMs on OS X, using Packer & Terraform.

Contents

  • Motivation
  • Prerequisites
  • Build
  • Deploy
  • Jobs
  • UI
  • HDFS
  • Spark
  • Vault
  • Attributions

Motivation

HashiCorp tools enable you to build/maintain multi-datacenter systems with ease. However, you usually don't have datacenters to play with. This project builds VirtualBox VMs that you can run Terraform against to play with Nomad, Consul, etc.

The workflow is:

  • Build ISOs (Packer)
  • Deploy VMs to your local machine (Terraform + 3rd Party Provider)
  • Play with Nomad, Consul, etc.

(Packer is used directly, instead of Vagrant, so the pipeline stays the same when you build & deploy against real hypervisors and clouds.)

Prerequisites

  • OS X
  • Homebrew
  • brew install packer terraform nomad
  • brew cask install virtualbox
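
As a quick sanity check, all four tools should now be on your PATH:

packer version
terraform version
nomad version
VBoxManage --version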

Build

cd packer
packer build -on-error=abort -force packer.json
cd output-virtualbox-iso
tar -zcvf ubuntu-16.04-docker.box *.ovf *.vmdk
cd ../..
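
If the build succeeded, the box should contain one .ovf descriptor plus the .vmdk disk(s), which you can verify without unpacking:

tar -tzf packer/output-virtualbox-iso/ubuntu-16.04-docker.box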

Deploy

cd terraform
# Remove any cached golden images before redeploying
rm -rf ~/.terraform/virtualbox/gold/ubuntu-16.04-docker
terraform init
terraform apply
cd ..
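
Once terraform apply finishes, all 6 VMs should be visible to VirtualBox:

VBoxManage list runningvms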

You can SSH onto a host (substituting that VM's IP address) by running:

ssh -o 'IdentitiesOnly yes' packer@192.168.0.118
# password: packer
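
Once on a host, you can check cluster membership against the local agents (this assumes the consul and nomad binaries were baked into the image by Packer):

consul members
nomad node status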

Jobs

Take the IP address of the server deployment and run the Nomad jobs:

cd jobs
nomad run -address http://192.168.0.118:4646 redis-job.nomad
nomad run -address http://192.168.0.118:4646 echo-job.nomad
nomad run -address http://192.168.0.118:4646 golang-redis-pg.nomad
nomad run -address http://192.168.0.118:4646 raw.nomad
cd ..
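
You can confirm the jobs are running:

nomad status -address http://192.168.0.118:4646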

You can view the logs of an allocation:

nomad logs -address http://192.168.0.118:4646 bf90d9cb
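
The allocation ID (bf90d9cb above) is only an example; real allocation IDs appear in a job's status output:

nomad status -address http://192.168.0.118:4646 Redis-Job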

At a later time, you can stop the Nomad jobs (but take a look at the UI first):

cd jobs
nomad stop -address http://192.168.0.118:4646 Echo-Job
nomad stop -address http://192.168.0.118:4646 Redis-Job
nomad stop -address http://192.168.0.118:4646 Golang-Redis-PG
nomad stop -address http://192.168.0.118:4646 view_files
cd ..

UI

Using the IP address of the server deployment, you can browse the web UIs: Nomad at http://192.168.0.118:4646/ui and Consul at http://192.168.0.118:8500/ui (the stock UI ports).
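
On OS X you can open both straight from the terminal:

open http://192.168.0.118:4646/ui
open http://192.168.0.118:8500/ui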

HDFS

You can deploy HDFS by running:

cd jobs
nomad run -address http://192.168.0.118:4646 hdfs.nomad
cd ..

(Give it a minute to download the Docker image.)
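
While it pulls, you can watch the allocations come up (this assumes hdfs.nomad names the job hdfs):

nomad status -address http://192.168.0.118:4646 hdfs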

Then you can view the UI at: http://192.168.0.118:50070/

Spark

SSH into a server node, then start PySpark:

pyspark \
--master nomad \
--conf spark.executor.instances=2 \
--conf spark.nomad.datacenters=dc-1 \
--conf spark.nomad.sparkDistribution=local:///usr/local/bin/spark

Then run some PySpark commands:

df = spark.read.json("/usr/local/bin/spark/examples/src/main/resources/people.json")
df.show()
df.printSchema()
df.createOrReplaceTempView("people")
sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()

Vault

Initialize Vault and walk through the setup on one of the Vault servers. vault init prints a set of unseal keys and an initial root token; run vault unseal once per key until the unseal threshold is met, then authenticate (the UUID below stands in for the root token):

vault init   -address=http://192.168.0.118:8200
vault unseal -address=http://192.168.0.118:8200
vault auth   -address=http://192.168.0.118:8200 66344296-222d-5be6-e052-15679209e0e7
vault write  -address=http://192.168.0.118:8200 secret/names name=ryan
vault read   -address=http://192.168.0.118:8200 secret/names
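
You can confirm the server is unsealed at any point:

vault status -address=http://192.168.0.118:8200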

Then unseal the other Vault servers for HA:

vault unseal -address=http://192.168.0.125:8200
vault unseal -address=http://192.168.0.161:8200

Then check the health checks in Consul to confirm that all of the Vault servers are unsealed.
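
A quick way to check from the command line (Vault's Consul backend registers itself under the service name vault):

curl http://192.168.0.118:8500/v1/health/service/vault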

Attributions
