usage: ndt account-id [-h]
Get the current account id, either from instance metadata or the current cli
configuration.
optional arguments:
-h, --help show this help message and exit
ndt add-deployer-server
usage: ndt add-deployer-server [-h] [--id ID] file username
Add a server into a maven configuration file. The password is taken from the
environment variable 'DEPLOYER_PASSWORD'
positional arguments:
file The file to modify
username The username to access the server.
optional arguments:
-h, --help show this help message and exit
--id ID Optional id for the server. Default is deploy. One server with
this id is added and another with '-release' appended
ndt assume-role
usage: ndt assume-role [-h] [-t TOKEN_NAME] [-d DURATION] [-p PROFILE]
role_arn
Assume a defined role. Prints out environment variables to be eval'd into the
current context for use: eval $(ndt assume-role 'arn:aws:iam::43243246645:role/DeployRole')
positional arguments:
role_arn The ARN of the role to assume
optional arguments:
-h, --help show this help message and exit
-t TOKEN_NAME, --mfa-token TOKEN_NAME
Name of MFA token to use
-d DURATION, --duration DURATION
Duration for the session in minutes
-p PROFILE, --profile PROFILE
Profile to edit in ~/.aws/credentials to make the role
persist in that file for the duration of the session.
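The eval pattern in the description works because assume-role prints export statements to stdout. A minimal sketch of the mechanism with stubbed output (the credential values below are placeholders, not real ndt output):

```shell
# ndt assume-role prints lines like these; eval-ing them injects the
# temporary credentials into the current shell session. With ndt installed
# the real invocation would be e.g.:
#   eval "$(ndt assume-role -t my-token 'arn:aws:iam::43243246645:role/DeployRole')"
creds_output='export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=secretEXAMPLE
export AWS_SESSION_TOKEN=tokenEXAMPLE'

# Evaluate the export statements in the current shell
eval "$creds_output"

echo "$AWS_ACCESS_KEY_ID"
```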
ndt assumed-role-name
usage: ndt assumed-role-name [-h]
Read the name of the assumed role if currently defined
optional arguments:
-h, --help show this help message and exit
ndt bake-docker
usage: ndt bake-docker [-h] [-i] component docker-name
Runs a docker build, ensures that an ecr repository with the docker name
(by default <component>/<branch>-<docker-name>) exists and pushes the built
image to that repository with the tags "latest" and "$BUILD_NUMBER"
positional arguments:
component the component directory where the docker directory is
docker-name the name of the docker directory that has the Dockerfile
For example for ecs-cluster/docker-cluster/Dockerfile
you would give cluster
optional arguments:
-h, --help show this help message and exit
-i, --imagedefinitions create imagedefinitions.json for AWS CodePipeline
ndt bake-image
usage: ndt bake-image [-h] component [image-name]
Runs an ansible playbook that builds an Amazon Machine Image (AMI) and
tags the image with the job name and build number.
positional arguments:
component the component directory where the ami bake configurations are
[image-name] Optional name for a named image in component/image-[image-name]
optional arguments:
-h, --help show this help message and exit
ndt cf-delete-stack
usage: ndt cf-delete-stack [-h] stack_name region
Delete an existing CloudFormation stack
positional arguments:
stack_name Name of the stack to delete
region The region to delete the stack from
optional arguments:
-h, --help show this help message and exit
ndt cf-follow-logs
usage: ndt cf-follow-logs [-h] [-s START] stack_name
Tail logs from the log group of a cloudformation stack
positional arguments:
stack_name Name of the stack to watch logs for
optional arguments:
-h, --help show this help message and exit
-s START, --start START
Start time in seconds since epoch
ndt cf-get-parameter
usage: ndt cf-get-parameter [-h] parameter
Get a parameter value from the stack
positional arguments:
parameter The name of the parameter to print
optional arguments:
-h, --help show this help message and exit
ndt cf-logical-id
usage: ndt cf-logical-id [-h]
Get the logical id that is expecting a signal from this instance
optional arguments:
-h, --help show this help message and exit
ndt cf-region
usage: ndt cf-region [-h]
Get region of the stack that created this instance
optional arguments:
-h, --help show this help message and exit
ndt cf-signal-status
usage: ndt cf-signal-status [-h] [-r RESOURCE] status
Signal CloudFormation status to a logical resource in CloudFormation that is
either given on the command line or resolved from CloudFormation tags
positional arguments:
status Status to indicate: SUCCESS | FAILURE
optional arguments:
-h, --help show this help message and exit
-r RESOURCE, --resource RESOURCE
Logical resource name to signal. Looked up from
cloudformation tags by default
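In instance userdata, the status argument is typically derived from the exit code of the provisioning steps. A minimal sketch (the provision function and the final ndt call are illustrative, not taken from the help text):

```shell
# Placeholder for the real provisioning steps run in userdata
provision() { true; }

# CloudFormation expects the literal string SUCCESS or FAILURE
if provision; then
  status=SUCCESS
else
  status=FAILURE
fi
echo "$status"

# With ndt installed on the instance, the signal would then be sent with:
#   ndt cf-signal-status "$status"
```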
ndt cf-stack-id
usage: ndt cf-stack-id [-h]
Get the id of the stack that created this instance
optional arguments:
-h, --help show this help message and exit
ndt cf-stack-name
usage: ndt cf-stack-name [-h]
Get name of the stack that created this instance
optional arguments:
-h, --help show this help message and exit
ndt create-account
usage: ndt create-account [-h] [-d] [-o ORGANIZATION_ROLE_NAME]
[-r TRUST_ROLE_NAME]
[-a [TRUSTED_ACCOUNTS [TRUSTED_ACCOUNTS ...]]]
[-t TOKEN_NAME]
email account_name
Creates a subaccount.
positional arguments:
email Email for account root
account_name Organization unique account name
optional arguments:
-h, --help show this help message and exit
-d, --deny-billing-access
-o ORGANIZATION_ROLE_NAME, --organization-role-name ORGANIZATION_ROLE_NAME
Role name for admin access from parent account
-r TRUST_ROLE_NAME, --trust-role-name TRUST_ROLE_NAME
Role name for admin access from parent account
-a [TRUSTED_ACCOUNTS [TRUSTED_ACCOUNTS ...]], --trusted-accounts [TRUSTED_ACCOUNTS [TRUSTED_ACCOUNTS ...]]
Account to trust with user management
-t TOKEN_NAME, --mfa-token TOKEN_NAME
Name of MFA token to use
ndt create-stack
usage: ndt create-stack [-h] [-y] [template]
Create a stack from a template
positional arguments:
template
optional arguments:
-h, --help show this help message and exit
-y, --yes Answer yes or use default to all questions
ndt deploy-cdk
usage: ndt deploy-cdk [-d] [-h] component cdk-name
Exports ndt parameters into component/cdk-name/variables.json, runs pre_deploy.sh in the
cdk project and runs cdk diff; cdk deploy for the same
positional arguments:
component the component directory where the cdk directory is
cdk-name the name of the cdk directory that has the template
For example for lambda/cdk-sender/bin/MyProject.ts
you would give sender
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and pre_deploy.sh and cdk diff
-h, --help show this help message and exit
ndt deploy-serverless
usage: ndt deploy-serverless [-d] [-h] component serverless-name
Exports ndt parameters into component/serverless-name/variables.yml, runs npm i in the
serverless project and runs sls deploy -s $paramEnvId for the same
positional arguments:
component the component directory where the serverless directory is
serverless-name the name of the serverless directory that has the template
For example for lambda/serverless-sender/template.yaml
you would give sender
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and template pre-processing and npm i
-h, --help show this help message and exit
ndt deploy-stack
usage: ndt deploy-stack [-d] [-h] component stack-name ami-id bake-job
Resolves potential ECR urls and AMI ids and then deploys the given stack, either updating or creating it.
positional arguments:
component the component directory where the stack template is
stack-name the name of the stack directory inside the component directory
For example for ecs-cluster/stack-cluster/template.yaml
you would give cluster
ami-id If you want to specify a value for the paramAmi variable in the stack,
you can do so. Otherwise give an empty string with two quotation marks
bake-job If an ami-id is not given, the ami id is resolved by getting the latest
ami that is tagged with the bake-job name
optional arguments:
-d, --dryrun dry-run - show only the change set without actually deploying it
-h, --help show this help message and exit
ndt deploy-terraform
usage: ndt deploy-terraform [-d] [-h] component terraform-name
Exports ndt parameters into component/terraform-name/terraform.tfvars as json, runs pre_deploy.sh in the
terraform project and runs terraform plan; terraform apply for the same
positional arguments:
component the component directory where the terraform directory is
terraform-name the name of the terraform directory that has the template
For example for lambda/terraform-sender/template.yaml
you would give sender
optional arguments:
-d, --dryrun dry-run - do only parameter expansion, template pre-processing and terraform plan
-h, --help show this help message and exit
ndt detach-volume
usage: ndt detach-volume [-h] mount_path
Detach a volume identified by its mount path
positional arguments:
mount_path Where to mount the volume
optional arguments:
-h, --help show this help message and exit
ndt ec2-clean-snapshots
usage: ndt ec2-clean-snapshots [-h] [-r REGION] [-d DAYS] tags [tags ...]
Clean up snapshots that are older than a given number of days (30 by default) and have
one of the specified tag values
positional arguments:
tags The tag values used to select snapshots for deletion
optional arguments:
-h, --help show this help message and exit
-r REGION, --region REGION
The region to delete snapshots from. Can also be set
with the env variable AWS_DEFAULT_REGION, or is read
from instance metadata as a last resort
-d DAYS, --days DAYS The number of days that is the minimum age for
snapshots to be deleted
ndt ec2-get-tag
usage: ndt ec2-get-tag [-h] name
Get the value of a tag for an ec2 instance
positional arguments:
name The name of the tag to get
optional arguments:
-h, --help show this help message and exit
ndt ec2-get-userdata
usage: ndt ec2-get-userdata [-h] file
Get userdata defined for an instance into a file
positional arguments:
file File to write userdata into
optional arguments:
-h, --help show this help message and exit
ndt ec2-instance-id
usage: ndt ec2-instance-id [-h]
Get id for instance
optional arguments:
-h, --help show this help message and exit
ndt ec2-region
usage: ndt ec2-region [-h]
Get default region - the region of the instance if run in an EC2 instance
optional arguments:
-h, --help show this help message and exit
ndt ec2-wait-for-metadata
usage: ndt ec2-wait-for-metadata [-h] [--timeout TIMEOUT]
Waits for the metadata service to be available. All errors are ignored until the
time expires or a socket can be established to the metadata service
optional arguments:
-h, --help show this help message and exit
--timeout TIMEOUT, -t TIMEOUT
Maximum time to wait in seconds for the metadata
service to be available
ndt ecr-ensure-repo
usage: ndt ecr-ensure-repo [-h] name
Ensure that an ECR repository exists and get the uri and login token for it
positional arguments:
name The name of the ecr repository to verify
optional arguments:
-h, --help show this help message and exit
ndt ecr-repo-uri
usage: ndt ecr-repo-uri [-h] name
Get the repo uri for a named docker
positional arguments:
name The name of the ecr repository
optional arguments:
-h, --help show this help message and exit
ndt enable-profile
usage: ndt enable-profile [-h] [-i | -a | -n] profile
Enable a configured profile. Simple IAM user, AzureAD and ndt assume-role
profiles are supported
positional arguments:
profile The profile to enable
optional arguments:
-h, --help show this help message and exit
-i, --iam IAM user type profile
-a, --azure Azure login type profile
-n, --ndt NDT assume role type profile
ndt get-images
usage: ndt get-images [-h] job_name
Gets a list of images given a bake job name
positional arguments:
job_name The job name to look for
optional arguments:
-h, --help show this help message and exit
ndt interpolate-file
usage: ndt interpolate-file [-h] [-s STACK] [-v] [-o OUTPUT] [-e ENCODING]
file
Replace placeholders in file with parameter values from stack and optionally
from vault
positional arguments:
file File to interpolate
optional arguments:
-h, --help show this help message and exit
-s STACK, --stack STACK
Stack name for values. Automatically resolved on ec2
instances
-v, --vault Use vault values as well. Vault is resolved from env
variables or the default is used
-o OUTPUT, --output OUTPUT
Output file
-e ENCODING, --encoding ENCODING
Encoding to use for the file. Defaults to utf-8
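As an illustration of the interpolation idea only — the exact placeholder syntax ndt uses is not shown in this help text, so the ${paramHost} placeholder and file paths below are illustrative:

```shell
# Write a template file containing a placeholder
cat > /tmp/template.txt <<'EOF'
server_host=${paramHost}
EOF

# In ndt the value would come from stack parameters (or vault); here it is
# just a shell variable standing in for a resolved parameter
paramHost=db.example.com

# Substitute the placeholder with the resolved value
sed "s/\${paramHost}/${paramHost}/" /tmp/template.txt
```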
ndt json-to-yaml
usage: ndt json-to-yaml [-h] [--colorize] file
Convert CloudFormation json to an approximation of a Nitor CloudFormation yaml
with, for example, scripts externalized
positional arguments:
file File to parse
optional arguments:
-h, --help show this help message and exit
--colorize, -c Colorize output
ndt latest-snapshot
usage: ndt latest-snapshot [-h] tag
Get the latest snapshot with a given tag
positional arguments:
tag The tag to find snapshots with
optional arguments:
-h, --help show this help message and exit
ndt list-components
usage: ndt list-components [-h] [-j] [-b BRANCH]
Prints the components in a branch, by default the current branch
optional arguments:
-h, --help show this help message and exit
-j, --json Print in json format.
-b BRANCH, --branch BRANCH
The branch to get components from. Default is to
process current branch
ndt list-file-to-json
usage: ndt list-file-to-json [-h] arrayname file
Convert a file with an entry on each line to a json document with a single
element (name given as an argument) containing the file rows as a list.
positional arguments:
arrayname The name given to the array in the json object
file The file to parse
optional arguments:
-h, --help show this help message and exit
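The transformation can be sketched in plain shell to show the expected output shape (the exact formatting of ndt's output may differ; the file path and array name here are illustrative):

```shell
# Input: one entry per line
printf 'alpha\nbeta\n' > /tmp/rows.txt

# Build a json document {"<arrayname>": [rows...]}
arrayname=hosts
json='{"'$arrayname'": ['
first=true
while IFS= read -r line; do
  $first || json="$json, "
  json="$json\"$line\""
  first=false
done < /tmp/rows.txt
json="$json]}"

echo "$json"
```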
ndt list-jobs
usage: ndt list-jobs [-h] [-e] [-j] [-b BRANCH] [-c COMPONENT]
Prints a line for every runnable job in this git repository, in all branches,
and optionally exports the properties for each under '$root/job-properties/'
optional arguments:
-h, --help show this help message and exit
-e, --export-job-properties
Set if you want the properties of all jobs into files
under job-properties/
-j, --json Print in json format. Optionally exported parameters
will be in the json document
-b BRANCH, --branch BRANCH
The branch to process. Default is to process all
branches
-c COMPONENT, --component COMPONENT
Component to process. Default is to process all
components
ndt load-parameters
usage: ndt load-parameters [-h] [--branch BRANCH] [--resolve-images]
[--stack STACK | --serverless SERVERLESS | --docker DOCKER | --image [IMAGE]
| --cdk CDK | --terraform TERRAFORM]
[--json | --yaml | --properties | --export-statements]
[component]
Load parameters from infra*.properties files in the order:
infra.properties,
infra-[branch].properties,
[component]/infra.properties,
[component]/infra-[branch].properties,
[component]/[subcomponent-type]-[subcomponent]/infra.properties,
[component]/[subcomponent-type]-[subcomponent]/infra-[branch].properties
The last parameter defined overwrites ones defined before it in the files. Supports parameter expansion
and bash-like transformations, namely:
${PARAM##prefix}         # strip prefix, greedy
${PARAM%%suffix}         # strip suffix, greedy
${PARAM#prefix}          # strip prefix, not greedy
${PARAM%suffix}          # strip suffix, not greedy
${PARAM:-default}        # default if empty
${PARAM:4:2}             # substring, start:len
${PARAM/substr/replace}  # replace substr with replace
${PARAM^}                # upper-case initial
${PARAM,}                # lower-case initial
${PARAM^^}               # upper-case all
${PARAM,,}               # lower-case all
Comment lines start with '#'
Lines can be continued by adding '\' at the end
See https://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_10_03.html
(arrays not supported)
positional arguments:
component Component to descend into
optional arguments:
-h, --help show this help message and exit
--branch BRANCH, -b BRANCH
Branch to get active parameters for
--resolve-images, -r Also resolve subcomponent AMI IDs and docker repo urls
--stack STACK, -s STACK
CloudFormation subcomponent to descend into
--serverless SERVERLESS, -l SERVERLESS
Serverless subcomponent to descend into
--docker DOCKER, -d DOCKER
Docker image subcomponent to descend into
--image [IMAGE], -i [IMAGE]
AMI image subcomponent to descend into
--cdk CDK, -c CDK CDK subcomponent to descend into
--terraform TERRAFORM, -t TERRAFORM
Terraform subcomponent to descend into
--json, -j JSON format output (default)
--yaml, -y YAML format output
--properties, -p properties file format output
--export-statements, -e
Output as eval-able export statements
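The parameter transformations that load-parameters supports follow bash semantics, so they can be checked directly in a bash shell:

```shell
# Sample parameter value
PARAM="prefix-value-suffix"

echo "${PARAM#prefix-}"      # strip prefix -> value-suffix
echo "${PARAM%-suffix}"      # strip suffix -> prefix-value
echo "${PARAM:7:5}"          # substring start:len -> value
echo "${PARAM/value/other}"  # replace -> prefix-other-suffix
echo "${UNSET_VAR:-default}" # default if empty -> default

word="text"
echo "${word^^}"             # upper-case all -> TEXT
```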
ndt logs
usage: ndt logs log_group_pattern [-h] [-f FILTER] [-s START [START ...]] [-e END [END ...]] [-o]
Get logs from multiple CloudWatch log groups and possibly filter them.
positional arguments:
log_group_pattern Regular expression to filter log groups with
optional arguments:
-h, --help show this help message and exit
-f FILTER, --filter FILTER
CloudWatch filter pattern
-s START [START ...], --start START [START ...]
Start time (x m|h|d|w ago | now | <seconds since
epoch>)
-e END [END ...], --end END [END ...]
End time (x m|h|d|w ago | now | <seconds since epoch>)
-o, --order Best effort ordering of log entries
ndt mfa-add-token
usage: ndt mfa-add-token [-h] [-i] [-a TOKEN_ARN] [-s TOKEN_SECRET] [-f]
token_name
Adds an MFA token to be used with role assumption. Tokens will be saved in a
.ndt subdirectory in the user's home directory. If a token with the same name
already exists, it will not be overwritten.
positional arguments:
token_name Name for the token. Use this to refer to the token
later with the assume-role command.
optional arguments:
-h, --help show this help message and exit
-i, --interactive Ask for token details interactively.
-a TOKEN_ARN, --token_arn TOKEN_ARN
ARN identifier for the token.
-s TOKEN_SECRET, --token_secret TOKEN_SECRET
Token secret.
-f, --force Force an overwrite if the token already exists.
ndt mfa-backup
usage: ndt mfa-backup [-h] [-d FILE] backup_secret
Encrypt or decrypt a backup JSON structure of tokens. To output an encrypted
backup, provide an encryption secret. To decrypt an existing backup, use
--decrypt <file>.
positional arguments:
backup_secret Secret to use for encrypting or decrypting the backup.
optional arguments:
-h, --help show this help message and exit
-d FILE, --decrypt FILE
Outputs a decrypted token backup read from given file.
ndt mfa-code
usage: ndt mfa-code [-h] token_name
Generates a TOTP code using an MFA token.
positional arguments:
token_name Name of the token to use.
optional arguments:
-h, --help show this help message and exit
ndt mfa-delete-token
usage: ndt mfa-delete-token [-h] token_name
Deletes an MFA token file from the .ndt subdirectory in the user's home
directory
positional arguments:
token_name Name of the token to delete.
optional arguments:
-h, --help show this help message and exit
ndt mfa-qrcode
usage: ndt mfa-qrcode [-h] token_name
Generates a QR code to import a token to other devices.
positional arguments:
token_name Name of the token to use.
optional arguments:
-h, --help show this help message and exit
ndt print-create-instructions
usage: ndt print-create-instructions [-h] component stack-name
Prints out the instructions to create and deploy the resources in a stack
positional arguments:
component the component directory where the stack template is
stack-name the name of the stack directory inside the component directory
For example for ecs-cluster/stack-cluster/template.yaml
you would give cluster
optional arguments:
-h, --help show this help message and exit
ndt profile-expiry-to-env
usage: ndt profile-expiry-to-env [-h] profile
Prints profile expiry from credentials file (~/.aws/credentials) as eval-able
environment variables
positional arguments:
profile The profile to read expiry info from
optional arguments:
-h, --help show this help message and exit
ndt profile-to-env
usage: ndt profile-to-env [-h] [-t] [-r ROLE_ARN] profile
Prints profile parameters from credentials file (~/.aws/credentials) as eval-
able environment variables
positional arguments:
profile The profile to read profile info from
optional arguments:
-h, --help show this help message and exit
-t, --target-role Output also azure_default_role_arn
-r ROLE_ARN, --role-arn ROLE_ARN
Output also the role given here as the target role for
the profile
ndt promote-image
usage: ndt promote-image [-h] image_id target_job
Promotes an image for use in another branch
positional arguments:
image_id The image to promote
target_job The job name to promote the image to
optional arguments:
-h, --help show this help message and exit
ndt pytail
usage: ndt pytail [-h] file
Read and print a file and keep following the end for new data
positional arguments:
file File to follow
optional arguments:
-h, --help show this help message and exit
ndt read-profile-expiry
usage: ndt read-profile-expiry [-h] profile
Read expiry field from credentials file, which is there if the login happened
with aws-azure-login or another tool that implements the same logic (none
currently known).
positional arguments:
profile The profile to read expiry info from
optional arguments:
-h, --help show this help message and exit
ndt region
usage: ndt region [-h]
Get default region - the region of the instance if run in an EC2 instance
optional arguments:
-h, --help show this help message and exit
ndt register-private-dns
usage: ndt register-private-dns [-h] dns_name hosted_zone
Register local private IP in route53 hosted zone usually for internal use.
positional arguments:
dns_name The name to update in route 53
hosted_zone The name of the hosted zone to update
optional arguments:
-h, --help show this help message and exit
ndt setup-cli
usage: ndt setup-cli [-h] [-n NAME] [-k KEY_ID] [-s SECRET] [-r REGION]
Setup the command line environment to define an aws cli profile with the given
name and credentials. If an identically named profile exists, it will not be
overwritten.
optional arguments:
-h, --help show this help message and exit
-n NAME, --name NAME Name for the profile to create
-k KEY_ID, --key-id KEY_ID
Key id for the profile
-s SECRET, --secret SECRET
Secret to set for the profile
-r REGION, --region REGION
Default region for the profile
ndt share-to-another-region
usage: ndt share-to-another-region [-h]
ami_id to_region ami_name account_id
[account_id ...]
Shares an image to another region, potentially for another account
positional arguments:
ami_id The ami to share
to_region The region to share to
ami_name The name for the ami
account_id The account ids to share ami to
optional arguments:
-h, --help show this help message and exit
ndt show-stack-params-and-outputs
usage: ndt show-stack-params-and-outputs [-h] [-r REGION] [-p PARAMETER]
stack_name
Show stack parameters and outputs as a single json document
positional arguments:
stack_name The stack name to show
optional arguments:
-h, --help show this help message and exit
-r REGION, --region REGION
Region for the stack to show
-p PARAMETER, --parameter PARAMETER
Name of parameter if only one parameter is required
ndt snapshot-from-volume
usage: ndt snapshot-from-volume [-h] [-w] [-c [COPYTAGS [COPYTAGS ...]]]
[-t [TAGS [TAGS ...]]]
tag_key tag_value mount_path
Create a snapshot of a volume identified by its mount path
positional arguments:
tag_key Key of the tag to find volume with
tag_value Value of the tag to find volume with
mount_path Where to mount the volume
optional arguments:
-h, --help show this help message and exit
-w, --wait Wait for the snapshot to finish before returning
-c [COPYTAGS [COPYTAGS ...]], --copytags [COPYTAGS [COPYTAGS ...]]
Tag to copy to the snapshot from instance. Multiple
values allowed.
-t [TAGS [TAGS ...]], --tags [TAGS [TAGS ...]]
Tag to add to the snapshot in the format name=value.
Multiple values allowed.
ndt undeploy-serverless
usage: ndt undeploy-serverless [-h] component serverless-name
Exports ndt parameters into component/serverless-name/variables.yml
and runs sls remove -s $paramEnvId for the same
positional arguments:
component the component directory where the serverless directory is
serverless-name the name of the serverless directory that has the template
For example for lambda/serverless-sender/template.yaml
you would give sender
optional arguments:
-h, --help show this help message and exit
ndt undeploy-stack
usage: ndt undeploy-stack [-h] [-f] <component> <stack-name>
Undeploys (deletes) the given stack.
Found s3 buckets are emptied and deleted only if the -f argument is given.
positional arguments:
component the component directory where the stack template is
stack-name the name of the stack directory inside the component directory
For example for ecs-cluster/stack-cluster/template.yaml
you would give cluster
optional arguments:
-h, --help show this help message and exit
ndt undeploy-terraform
usage: ndt undeploy-terraform [-h] component terraform-name
Exports ndt parameters into component/terraform-name/terraform.tfvars as json
and runs terraform destroy for the same
positional arguments:
component the component directory where the terraform directory is
terraform-name the name of the terraform directory that has the template
For example for lambda/terraform-sender/template.yaml
you would give sender
optional arguments:
-h, --help show this help message and exit
ndt upsert-cloudfront-records
usage: ndt upsert-cloudfront-records [-h]
(-i DISTRIBUTION_ID | -c DISTRIBUTION_COMMENT)
[-w]
Upsert Route53 records for all aliases of a CloudFront distribution
optional arguments:
-h, --help show this help message and exit
-i DISTRIBUTION_ID, --distribution_id DISTRIBUTION_ID
Id for the distribution to upsert
-c DISTRIBUTION_COMMENT, --distribution_comment DISTRIBUTION_COMMENT
Comment for the distribution to upsert
-w, --wait Wait for request to sync
ndt volume-from-snapshot
usage: ndt volume-from-snapshot [-h] [-n] [-c [COPYTAGS [COPYTAGS ...]]]
[-t [TAGS [TAGS ...]]]
tag_key tag_value mount_path [size_gb]
Create a volume from an existing snapshot and mount it on the given path. The
snapshot is identified by a tag key and value. If no tag is found, an empty
volume is created, attached, formatted and mounted.
positional arguments:
tag_key Key of the tag to find volume with
tag_value Value of the tag to find volume with
mount_path Where to mount the volume
size_gb Size in GB for the volume. If different from snapshot
size, volume and filesystem are resized
optional arguments:
-h, --help show this help message and exit
-n, --no_delete_on_termination
Whether to skip deleting the volume on termination,
defaults to false
-c [COPYTAGS [COPYTAGS ...]], --copytags [COPYTAGS [COPYTAGS ...]]
Tag to copy to the volume from instance. Multiple
values allowed.
-t [TAGS [TAGS ...]], --tags [TAGS [TAGS ...]]
Tag to add to the volume in the format name=value.
Multiple values allowed.
ndt yaml-to-json
usage: ndt yaml-to-json [-h] [--colorize] [--merge [MERGE [MERGE ...]]]
[--small]
file
Convert Nitor CloudFormation yaml to CloudFormation json with some
preprocessing
positional arguments:
file File to parse
optional arguments:
-h, --help show this help message and exit
--colorize, -c Colorize output
--merge [MERGE [MERGE ...]], -m [MERGE [MERGE ...]]
Merge other yaml files to the main file
--small, -s Compact representation of json
ndt yaml-to-yaml
usage: ndt yaml-to-yaml [-h] [--colorize] file
Do ndt preprocessing for a yaml file
positional arguments:
file File to parse
optional arguments:
-h, --help show this help message and exit
--colorize, -c Colorize output
[ndt ]associate-eip
usage: associate-eip [-h] [-i IP] [-a ALLOCATIONID] [-e EIPPARAM]
[-p ALLOCATIONIDPARAM]
Associate an Elastic IP for the instance that this script runs on
optional arguments:
-h, --help show this help message and exit
-i IP, --ip IP Elastic IP to allocate - default is to get paramEip
from the stack that created this instance
-a ALLOCATIONID, --allocationid ALLOCATIONID
Elastic IP allocation id to allocate - default is to
get paramEipAllocationId from the stack that created
this instance
-e EIPPARAM, --eipparam EIPPARAM
Parameter to look up for Elastic IP in the stack -
default is paramEip
-p ALLOCATIONIDPARAM, --allocationidparam ALLOCATIONIDPARAM
Parameter to look up for Elastic IP Allocation ID in
the stack - default is paramEipAllocationId
[ndt ]cf-logs-to-cloudwatch
usage: cf-logs-to-cloudwatch [-h] file
Read a file and send rows to cloudwatch and keep following the end for new
data. The log group will be the stack name that created the instance and the
logstream will be the instance id and filename.
positional arguments:
file File to follow
optional arguments:
-h, --help show this help message and exit
[ndt ]ec2-associate-eip
usage: ec2-associate-eip [-h] [-i IP] [-a ALLOCATIONID] [-e EIPPARAM]
[-p ALLOCATIONIDPARAM]
Associate an Elastic IP for the instance that this script runs on
optional arguments:
-h, --help show this help message and exit
-i IP, --ip IP Elastic IP to allocate - default is to get paramEip
from the stack that created this instance
-a ALLOCATIONID, --allocationid ALLOCATIONID
Elastic IP allocation id to allocate - default is to
get paramEipAllocationId from the stack that created
this instance
-e EIPPARAM, --eipparam EIPPARAM
Parameter to look up for Elastic IP in the stack -
default is paramEip
-p ALLOCATIONIDPARAM, --allocationidparam ALLOCATIONIDPARAM
Parameter to look up for Elastic IP Allocation ID in
the stack - default is paramEipAllocationId
[ndt ]logs-to-cloudwatch
usage: logs-to-cloudwatch [-h] file
Read a file and send rows to cloudwatch and keep following the end for new
data. The log group will be the stack name that created the instance and the
logstream will be the instance id and filename.
positional arguments:
file File to follow
optional arguments:
-h, --help show this help message and exit
[ndt ]n-include
usage: n-include [-h] file
Find a file from the first of the defined include paths
positional arguments:
file The file to find
optional arguments:
-h, --help show this help message and exit
[ndt ]n-include-all
usage: n-include-all [-h] pattern
Find all files matching a pattern from the defined include paths
positional arguments:
pattern The file pattern to find
optional arguments:
-h, --help show this help message and exit
[ndt ]signal-cf-status
usage: signal-cf-status [-h] [-r RESOURCE] status
Signal CloudFormation status to a logical resource in CloudFormation that is
either given on the command line or resolved from CloudFormation tags
positional arguments:
status Status to indicate: SUCCESS | FAILURE
optional arguments:
-h, --help show this help message and exit
-r RESOURCE, --resource RESOURCE
Logical resource name to signal. Looked up from
cloudformation tags by default
create-shell-archive.sh
usage: create-shell-archive.sh [-h] [<file> ...]
Creates a self-extracting bash archive, suitable for storing in e.g. Lastpass SecureNotes
positional arguments:
file one or more files to package into the archive
optional arguments:
-h, --help show this help message and exit
encrypt-and-mount.sh
usage: encrypt-and-mount.sh [-h] blk-device mount-path
Mounts a local block device as an encrypted volume. Handy for things like local database installs.
positional arguments:
blk-device the block device you want to encrypt and mount
mount-path the mount point for the encrypted volume
optional arguments:
-h, --help show this help message and exit
ensure-letsencrypt-certs.sh
usage: ensure-letsencrypt-certs.sh [-h] domain-name [domain-name ...]
Fetches a certificate with fetch-secrets.sh, and exits cleanly if certificate is found and valid.
Otherwise gets a new certificate from letsencrypt via DNS verification using Route53.
Requires that fetch-secrets.sh and Route53 are set up correctly.
positional arguments:
domain-name The domain(s) you want to check certificates for
optional arguments:
-h, --help show this help message and exit
lastpass-fetch-notes.sh
usage: lastpass-fetch-notes.sh [-h] mode file [file ...] [--optional file ...]
Fetches secure notes from lastpass that match the basename of each listed file.
Files specified after --optional won't fail and exit the script if they do not exist.
positional arguments:
mode the file mode for the downloaded files
file the file(s) to download. The source will be the note that matches the basename of the file
optional arguments:
-h, --help show this help message and exit
lpssh
usage: lpssh [-h] [-k key-name] [email protected]
Fetches key mappings from lastpass, downloads mapped keys into a local ssh-agent and starts
an ssh session using those credentials.
positional arguments:
[email protected] The user and host to match in the "my-ssh-mappings" secure note
and to log into once keys are set up.
optional arguments:
-k key-name key name in lastpass to use if you don't want to use a mapping
-h, --help show this help message and exit
setup-fetch-secrets.sh
usage: setup-fetch-secrets.sh [-h] <lpass|s3|vault>
Sets up a global fetch-secrets.sh that fetches secrets from either LastPass, S3 or nitor-vault.
Must be run as root.
positional arguments:
lpass|s3|vault the selected secrets backend
optional arguments:
-h, --help show this help message and exit
ssh-hostkeys-collect.sh
usage: ssh-hostkeys-collect.sh [-h] hostname
Creates a <hostname>-ssh-hostkeys.sh archive in the current directory containing
ssh host keys to preserve the identity of a server over image upgrades.
positional arguments:
hostname the name of the host used to store the keys. Typically the hostname is what
instance userdata scripts will use to look for the keys
optional arguments:
-h, --help show this help message and exit