diff --git a/DESIGN.md b/DESIGN.md
index b0bdd98..8472d84 100644
--- a/DESIGN.md
+++ b/DESIGN.md
@@ -4,22 +4,26 @@
The default settings for Puppet Enterprise services are tuned, but not necessarily optimized for PE Infrastructure type and the combination of PE services competing for system resources on each PE Infrastructure host.
+The `tune` command outputs optimized settings for Puppet Enterprise services based upon available system resources.
+
+The command expects that you have provisioned the PE Infrastructure hosts with the system resources required to handle the workload, given your agent count and the complexity of your code and environments.
+
## Methodology
-1. Query PuppetDB for PE Infrastructure Hosts (query for declared PE classes)
-1. Query PuppetDB for CPU and RAM facts for PE Infrastructure Hosts (query for processors, memory)
+1. Query PuppetDB for PE Infrastructure hosts (query for declared PE classes)
1. Identify PE Infrastructure type: Standard, Large, Extra Large (legacy: Split)
-1. For each PE Infrastructure Host, output settings for PE services (as parameters for the declared PE classes)
+1. Query PuppetDB for CPU and RAM facts for each PE Infrastructure host (query for processors, memory)
+1. Output settings for PE services for each PE Infrastructure host (as parameters for the declared PE classes)
## Resource Allocation
-### Ratios, Minimums, and Maximumsw
+### Ratios, Minimums, and Maximums
-With some exceptions, the `tune` command calculates settings for each service based upon a ratio of system resources (processors and/or memory) limited by a minimum and maximum.
+The `tune` command calculates settings for each service based upon a ratio of system resources (processors and/or memory) limited by a minimum and maximum.
-The ratio, minimum, and maximum vary based upon the PE Infrastructure type and the PE services competing for system resources on each PE Infrastructure host.
+The ratio, minimum, and maximum vary based upon the PE Infrastructure type and the combination of PE services competing for system resources on each PE Infrastructure host.
-The minimum system resources for the `tune` command are 4 CPU / 8 GB RAM.
+The minimum system resources for the `tune` command to function are 4 CPU / 8 GB RAM.
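+
+For illustration, the general pattern can be sketched in Ruby as follows; the method and variable names are hypothetical, not the module's API.
+
+```
+# Hypothetical sketch of the ratio/minimum/maximum pattern:
+# allocate a share of a system resource, clamped to a minimum and maximum.
+def allocate_resource(total, percent, minimum, maximum)
+  (total * percent).to_i.clamp(minimum, maximum)
+end
+
+# Example: database RAM on a host with 32768 MB of RAM, using the
+# database ratios documented below (0.25 of RAM, 2048 MB to 16384 MB).
+allocate_resource(32768, 0.25, 2048, 16384) # => 8192
+```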
Notes:
@@ -27,153 +31,199 @@ Notes:
> Any Replica should/would/will receive the same settings as the Primary Master, as a Replica is required to have the same system resources as the Primary Master.
+
#### Standard Reference Architecture
A Standard Reference Architecture is a Master-only install.
##### Master
+Allocations are calculated in the following order.
+
###### Database Service (pe-postgresql)
```
-percent_ram_database = 0.25
-minimum_ram_database = 2048
-maximum_ram_database = 16384
+CPU No Allocation
```
+```
+RAM Percent = 0.25
+RAM Minimum = 2048
+RAM Maximum = 16384
+```
+
+If the total number of potential database connections from all PuppetDB services exceeds the default, we increase `max_connections` to that total multiplied by `1.10`.
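+
+A minimal sketch of that adjustment follows; the method name is hypothetical, and `default_max_connections` stands in for the service default, which is not specified here.
+
+```
+# Hypothetical sketch: raise max_connections when the total number of
+# potential PuppetDB connections exceeds the service default
+# (reading the rule above as: set it to the connection total plus a 10% buffer).
+def database_max_connections(total_puppetdb_connections, default_max_connections)
+  return default_max_connections if total_puppetdb_connections <= default_max_connections
+  (total_puppetdb_connections * 1.10).to_i
+end
+```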
+
###### PuppetDB Service (pe-puppetdb)
```
-percent_cpu_puppetdb = 0.25
-minimum_cpu_puppetdb = 1
-maximum_cpu_puppetdb = (CPU * 0.50)
+CPU Percent = 0.25
+CPU Minimum = 1
```
```
-percent_ram_puppetdb = 0.10
-minimum_ram_puppetdb = 512
-maximum_ram_puppetdb = 8192
+RAM Percent = 0.10
+RAM Minimum = 512
+RAM Maximum = 8192
```
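+
+For example, on a hypothetical Standard Master with 8 CPU and 16384 MB RAM (an illustrative host size, not a recommendation), the ratios above work out as follows.
+
+```
+# Hypothetical example: PuppetDB allocation on a Standard Master
+# with 8 CPU and 16384 MB RAM, applying the ratios and minimums above.
+puppetdb_cpu = [(8 * 0.25).to_i, 1].max             # => 2 (no explicit CPU maximum)
+puppetdb_ram = (16384 * 0.10).to_i.clamp(512, 8192) # => 1638 MB
+```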
###### Console Service (pe-console-services)
```
-percent_ram_console = 0.08
-minimum_ram_console = 512
-maximum_ram_console = 1024
+CPU No Allocation
+```
+
+```
+RAM Percent = 0.08
+RAM Minimum = 512
+RAM Maximum = 1024
```
###### Orchestrator Service (pe-orchestration-services)
```
-percent_ram_orchestrator = 0.08
-minimum_ram_orchestrator = 512
-maximum_ram_orchestrator = 1024
+CPU No Allocation
```
-With PE 2019.2.x, the processor and memory associated with one jruby is reallocated from PuppetServer to Orchestrator, as Orchestrator has jrubies and requires (estimated) one processor and additional memory.
+```
+RAM Percent = 0.08
+RAM Minimum = 512
+RAM Maximum = 1024
+```
+
+In PE 2019.2.x, Orchestrator has JRubies and is allocated additional memory as follows.
+
+```
+RAM Percent = 0.10
+RAM Maximum = N/A
+```
-###### ActiveMQ Service (pe-activemq)
+Orchestrator JRubies do not require a CPU allocation as they are bound by I/O.
+But we limit the number of Orchestrator JRubies based upon how many fit into the memory allocated to Orchestrator.
```
-percent_ram_activemq = 0.08
-minimum_ram_activemq = 512
-maximum_ram_activemq = 1024
+minimum jrubies orchestrator = 1
+maximum jrubies orchestrator = 4
+maximum jrubies orchestrator limited by memory = (allocated memory / memory per jruby)
+orchestrator_jruby_max_active_instances = (maximum jrubies orchestrator limited by memory).clamp(minimum jrubies orchestrator, maximum jrubies orchestrator)
```
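+
+A minimal Ruby sketch of that calculation follows; the input values in the usage example are illustrative.
+
+```
+# Hypothetical sketch of the Orchestrator JRuby calculation above.
+def orchestrator_jruby_max_active_instances(allocated_memory, memory_per_jruby)
+  minimum_jrubies = 1
+  maximum_jrubies = 4
+  limited_by_memory = allocated_memory / memory_per_jruby
+  limited_by_memory.clamp(minimum_jrubies, maximum_jrubies)
+end
+
+# Example: 1638 MB allocated to Orchestrator JRubies, 768 MB per JRuby.
+orchestrator_jruby_max_active_instances(1638, 768) # => 2
+```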
-ActiveMQ (used by MCollective) is deprecated in PE 2018.x and removed in PE 2019.x.
-###### PuppetServer Service (pe-puppetserver)
+###### ActiveMQ Service (pe-activemq) *
-Since PuppetServer is allocated up to the remainder of system resources, it does not have explicit ratios.
+```
+CPU No Allocation
+```
```
-minimum_cpu_puppetserver = 2
-maximum_cpu_puppetserver = 24
+RAM Percent = 0.08
+RAM Minimum = 512
+RAM Maximum = 1024
```
-Since ReservedCodeCache is limited to a maximum of 2 GB, and each jruby requires an estimated 85 MB of ReservedCodeCache, the maximum number of jrubies is effectively limited to a maximum of 24. But note we allocate 96 MB of reserved code cache per jruby when possible.
+\* ActiveMQ (used by MCollective) is deprecated in PE 2018.x and removed in PE 2019.x.
+
+###### Puppet Server Service (pe-puppetserver)
+
+Since Puppet Server is allocated the remainder of system resources, it does not have explicit CPU or RAM ratios, or a RAM maximum.
+
+```
+CPU Percent = N/A
+CPU Minimum = 2
+CPU Maximum = 24
+```
+
+Since ReservedCodeCache is limited to a maximum of 2 GB, and each JRuby requires an estimated 85 MB of ReservedCodeCache (96 MB is allocated per JRuby when possible), the maximum number of JRubies is effectively limited to 24 (2048 MB / 85 MB ≈ 24).
```
-minimum_ram_puppetserver = 512
+RAM Percent Heap = N/A
+RAM Minimum Heap = 512
```
```
-minimum_ram_code_cache = 128
-maximum_ram_code_cache = 2048
+RAM Percent Reserved Code Cache = N/A
+RAM Minimum Reserved Code Cache = 128
+RAM Maximum Reserved Code Cache = 2048
```
```
-ram_per_jruby = (512, 768, 1024) if total memory (4-7 GB, 8-16 GB, 16 GB+)
-ram_per_jruby_code_cache = 128
+RAM Heap Per JRuby = (512, 768, 1024) when total RAM is (4-7 GB, 8-16 GB, 16 GB+)
+RAM Reserved Code Cache Per JRuby = 96
```
-PuppetServer jrubies are constrained based on both how many jrubies fit into unallocated memory and unallocated processors (one jruby per processor).
-PuppetServer memory is then set to the amount of memory required for the total calculated number of jrubies.
+Puppet Server JRubies are constrained based on both how many JRubies fit into unallocated memory and unallocated processors (one JRuby per processor).
+Puppet Server memory is then set to the amount of memory required for the total calculated number of JRubies.
```
-possible_jrubies_by_ram = (unreserved ram) / (ram_per_jruby + ram_per_jruby_code_cache)
-# rjubies capped by (unreserved cpu) or maximum_cpu_puppetserver, whichever is less.
-puppetserver_ram = jrubies * ram_per_jruby
-code_cache_ram = jrubies * ram_per_jruby_code_cache
+minimum jrubies puppetserver = 2
+maximum jrubies puppetserver = 24
+maximum jrubies puppetserver limited by processors = (available processors).clamp(minimum jrubies puppetserver, maximum jrubies puppetserver)
+maximum jrubies puppetserver limited by memory = (available memory / (memory per jruby + memory per jruby reserved code cache))
+puppetserver_jruby_max_active_instances = (maximum jrubies puppetserver limited by memory).clamp(minimum jrubies puppetserver, maximum jrubies puppetserver limited by processors)
```
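+
+As a worked sketch of the calculation (the unallocated processor and memory figures in the example are illustrative assumptions):
+
+```
+# Hypothetical sketch of the Puppet Server JRuby calculation above.
+def puppetserver_jruby_max_active_instances(available_cpu, available_ram, ram_per_jruby, ram_per_jruby_code_cache)
+  minimum_jrubies = 2
+  maximum_jrubies = 24
+  limited_by_processors = available_cpu.clamp(minimum_jrubies, maximum_jrubies)
+  limited_by_memory = available_ram / (ram_per_jruby + ram_per_jruby_code_cache)
+  limited_by_memory.clamp(minimum_jrubies, limited_by_processors)
+end
+
+# Example: 5 unallocated processors and 9000 MB of unallocated memory,
+# with 768 MB of heap and 96 MB of reserved code cache per JRuby.
+jrubies    = puppetserver_jruby_max_active_instances(5, 9000, 768, 96) # => 5
+heap       = jrubies * 768 # => 3840 MB of heap (-Xmx)
+code_cache = jrubies * 96  # => 480 MB of ReservedCodeCacheSize
+```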
###### Operating System and Other Services
```
-cpu_reserved = 1
+CPU Reserved = 1
```
```
-ram_reserved = (256, 512, 1024) if total memory (4-7 GB, 8-16 GB, 16 GB+)
+RAM Reserved Percentage = 0.20
```
+
#### Large Reference Architecture
-A Large Reference Architecture is a Master plus compilers install.
+A Large Reference Architecture is a Master plus Compilers install.
##### Master
-Calculations for the Master in a Large Reference Architecture use the same algorithm as for the [Standard Reference Architecture Master](#Standard-Master) with the following exceptions:
+Calculations for the Master in a Large Reference Architecture use the same algorithms used for the [Standard Reference Architecture Master](#Standard-Master) with the following exceptions:
-PuppetServer on the Master will process catalog requests only for other PE Infrastructure hosts.
-PuppetDB on the Master will be expected to handle requests from PuppetServer services on multiple Compilers that together are servicing more agents than the Standard Reference Architecture.
-So resources on the master are transferred from PuppetServer to PuppetDB as follows.
```
-percent_cpu_puppetdb = 0.50 # up from 0.25
-
-percent_ram_puppetdb = 0.20 # up from 0.10
+PuppetDB CPU Percent = 0.50 # up from 0.25
```
-##### Compilers
+```
+PuppetDB RAM Percent = 0.15 # up from 0.10
+```
-Compilers are configured by the same algorithm used for the [Standard Reference Architecture Master](#Standard-Master).
+Rationale:
-With PuppetDB on Compilers, each PuppetDB connects to the same PostgresSQL service as the PuppetDB on the Master does.
+Puppet Server on the Master will process catalog requests only for PE Infrastructure hosts.
+PuppetDB on the Master is expected to handle requests from the Puppet Server services on multiple Compilers that by definition serve more agents than the Standard Reference Architecture.
-We lower each PuppetDB's allocation of CPU to create a limited number of connections to PostgresSQL, preventing an overallocation of connections to PostgresSQL.
+##### Compilers
-In addition, PuppetDB garbage collection is disabled on Compilers, as garbage collection is/should only be performed by one PuppetDB (on the Master).
+Calculations for Compilers in a Large Reference Architecture use the same algorithms used for the [Standard Reference Architecture Master](#Standard-Master) with the following exceptions.
```
-maximum_cpu_puppetdb = 3 # was (CPU * 0.50)
+PuppetDB CPU Maximum = 3
```
+Compilers in a Large Reference Architecture include a local PuppetDB service.
+The local PuppetDB service connects to the same PostgreSQL service as the PuppetDB service on the Master.
+We lower the local PuppetDB allocation of CPU to enforce a limited number of connections to PostgreSQL, preventing an overallocation of connections to PostgreSQL.
+In addition, we disable the local PuppetDB service garbage collection, as garbage collection is already performed by the PuppetDB service on the Master.
+
#### Extra Large Reference Architecture
-An Extra Large Reference Architecture is a Master plus compilers with a standalone PuppetDB and PostgresSQL service on the PuppetDB host.
+An Extra Large Reference Architecture is a Master plus Compilers with PuppetDB and PostgreSQL services on a separate PuppetDB host.
##### Master
-Calculations for the Master in an Extra Large Reference Architecture use the same algorithm used for the [Large Reference Architecture Master](#Large-Master)
+Calculations for the Master in an Extra Large Reference Architecture use the same algorithms used for the [Large Reference Architecture Master](#Large-Master).
##### Compilers
-Calculations for the Compilers in an Extra Large Reference Architecture use the same algorithm used for the [Large Reference Architecture Compilers](#Large-Compilers)
+Calculations for Compilers in an Extra Large Reference Architecture use the same algorithms used for the [Large Reference Architecture Compilers](#Large-Compilers).
##### PuppetDB Host
-Calculations for the PuppetDB Host use the same algorithm as for the Standard Reference Architecture.
+Calculations for the PuppetDB Host use the same algorithms used for the [Standard Reference Architecture Master](#Standard-Master).
The below are the same settings for these two services as would be seen on a Standard Reference Architecture Master.
@@ -185,20 +235,25 @@ Same as [Standard Reference Architecture Database Service (pe-postgresql)](#Stan
Same as [Standard Reference Architecture PuppetDB Service (pe-puppetdb)](#Standard-PuppetDB)
+
#### Legacy Split Architecture
##### Master
-Same as [Standard Reference Architecture Master](#Standard-Master) minus allocations for the services not present.
+Calculations for a Split Master use the same algorithms used for the [Standard Reference Architecture Master](#Standard-Master) minus allocations for the services moved to the other hosts.
##### Console Host
###### Console Service (pe-console-services)
```
-percent_ram_console = 0.75
-minimum_ram_console = 512
-maximum_ram_console = 4096
+CPU No Allocation
+```
+
+```
+RAM Percent = 0.75
+RAM Minimum = 512
+RAM Maximum = 4096
```
##### Database Host
@@ -206,29 +261,32 @@ maximum_ram_console = 4096
###### Database Service (pe-postgresql)
```
-percent_ram_database = 0.25
-minimum_ram_database = 2048
-maximum_ram_database = 16384
+CPU No Allocation
+```
+
+```
+RAM Percent = 0.25
+RAM Minimum = 2048
+RAM Maximum = 16384
```
###### PuppetDB Service (pe-puppetdb)
```
-percent_cpu_puppetdb = 0.50
-minimum_cpu_puppetdb = 1
-maximum_cpu_puppetdb = (CPU * 0.50)
+CPU Percent = 0.50
+CPU Minimum = 1
```
```
-percent_ram_puppetdb = 0.25
-minimum_ram_puppetdb = 512
-maximum_ram_puppetdb = 8192
+RAM Percent = 0.25
+RAM Minimum = 512
+RAM Maximum = 8192
```
-If PostgreSQL is not present (External PostgreSQL) the following change:
+If PostgreSQL is moved to an External PostgreSQL Host, the following setting changes:
```
-percent_ram_puppetdb = 0.50
+PuppetDB RAM Percent = 0.50
```
##### External PostgreSQL Host
@@ -236,7 +294,7 @@ percent_ram_puppetdb = 0.50
###### Database Service (pe-postgresql)
```
-percent_ram_database = 0.25
-minimum_ram_database = 2048
-maximum_ram_database = 16384
+RAM Percent = 0.25
+RAM Minimum = 2048
+RAM Maximum = 16384
```
diff --git a/README.md b/README.md
index b44c1e3..74ec662 100644
--- a/README.md
+++ b/README.md
@@ -12,10 +12,13 @@
> The fault, dear Brutus, is not in our stars, but in our defaults, that we are under-allocating system resources.
-This module provides a Puppet subcommand `puppet pe tune` that outputs optimized settings for Puppet Enterprise services based upon available system resources.
+The default settings for Puppet Enterprise services are tuned, but not necessarily optimized for PE Infrastructure type and the combination of PE services competing for system resources on each PE Infrastructure host.
-Puppet Enterprise 2018.1.3 and newer includes the functionality of this module via the `puppet infrastructure tune` subcommand.
-To use this module with Puppet Enterprise 2018.1.3 and newer, refer to [Limitations](#limitations).
+This module provides a Puppet command `puppet pe tune` that outputs optimized settings for Puppet Enterprise services based upon available system resources.
+This command expects that you have provisioned the PE Infrastructure hosts with the system resources required to handle the workload, given your agent count and the complexity of your code and environments.
+
+Puppet Enterprise 2018.1.3 and newer includes this functionality via the `puppet infrastructure tune` command.
+To use this command instead of `puppet infrastructure tune` with Puppet Enterprise 2018.1.3 and newer, refer to [Limitations](#limitations).
## Setup
@@ -45,21 +48,21 @@ wget -q -O - https://api.github.com/repos/tkishel/pe_tune/releases/latest | grep
##### `--common`
-Extract common settings from node-specific settings.
+Extract common settings from node-specific settings when outputting optimized settings.
-A common setting is one with a value that is identical on multiple nodes.
-This option extracts and outputs common settings separately from node-specific settings, potentially reducing the number of node-specific settings.
+A common setting is one with a value that is identical on multiple hosts.
+This option extracts and outputs common settings separately from node-specific settings, for use in `common.yaml`.
##### `--compare`
-Output a comparison of currently-defined and optimized settings, and exit.
+Output a comparison of currently-defined and optimized settings.
##### `--current`
-Output currently-defined settings, in JSON format, and exit.
+Output currently-defined settings, in JSON format.
Settings may be defined either in the Classifier (the Console) or in Hiera, with Classifier settings taking precedence over Hiera settings.
-This option also identifies duplicate settings found in both the Classifier and Hiera.
+This option also identifies duplicate settings defined in both the Classifier and Hiera.
Best practice is to define settings in Hiera (preferred) or the Classifier, but not both.
##### `--debug`
@@ -68,9 +71,9 @@ Enable logging of debug information.
##### `--hiera DIRECTORY`
-Output optimized settings to the specified directory, as YAML files, for use in Hiera.
+Output optimized settings to the specified directory, as YAML files for use in Hiera.
-> Do not specify a directory in your Hiera hierarchy, which should be managed by Code Manager. Instead: specify a temporary directory, verify the settings in resulting files, and merge them into the control repository that contains your Hiera hierarchy.
+> Do not specify a directory in your Hiera hierarchy if that directory is managed by Code Manager. Instead, specify a temporary directory, verify the settings in the resulting files, and merge them into the control repository that contains your Hiera data.
##### `--force`
@@ -78,37 +81,41 @@ Do not enforce minimum system requirements (4 CPU / 8 GB RAM) for PE Infrastruct
##### `--inventory FILE`
-Use the specified YAML file to define infrastructure nodes.
-
-This eliminates a dependency upon PuppetDB to query node facts and classes.
+Use the specified YAML file to define PE Infrastructure hosts.
+This eliminates a dependency upon PuppetDB to query facts and classes for PE Infrastructure hosts.
Refer to the [examples](examples) directory of this module for details.
##### `--local`
Use the local system to define a monolithic master host.
-This eliminates a dependency upon PuppetDB to query node facts and classes.
+This eliminates a dependency upon PuppetDB to query facts and classes for PE Infrastructure hosts, and is only useful after a clean install of a monolithic master host.
##### `--memory_per_jruby MB`
-Amount of RAM to allocate for each JRuby.
+Amount of RAM to allocate for each Puppet Server JRuby.
##### `--memory_reserved_for_os MB`
-Amount of RAM to reserve for the OS.
+Amount of RAM to reserve for the operating system and other services.
##### `--node CERTNAME`
-Limit output to a single node.
+Limit output to a single PE Infrastructure host.
##### `--use_current_memory_per_jruby`
-Use currently-defined settings to determine memory_per_jruby.
+Use currently-defined settings to determine `memory_per_jruby`.
## Reference
-This subcommand queries PuppetDB for node group membership to identify PE Infrastructure hosts, queries PuppetDB for facts for each of those hosts to identify system resources, and outputs optimized settings for PE services (in YAML format) use in Hiera.
+This command outputs optimized settings for PE services as follows.
+
+1. Query PuppetDB for PE Infrastructure hosts (query for declared PE classes)
+1. Identify PE Infrastructure type: Standard, Large, Extra Large (legacy: Split)
+1. Query PuppetDB for CPU and RAM facts for each PE Infrastructure host (query for processors, memory)
+1. Output settings for PE services for each PE Infrastructure host (as parameters for the declared PE classes)
### Output
@@ -146,7 +153,7 @@ puppet_enterprise::profile::orchestrator::java_args:
# JVM Summary: Using 768 MB per Puppet Server JRuby for pe-master.puppetdebug.vlan
```
-By default, this subcommand outputs node-specific settings for use in node-specific YAML files in a node-specific hierarchy.
+By default, this command outputs node-specific settings for use in node-specific YAML files in a node-specific Hiera hierarchy.
For example:
@@ -192,15 +199,15 @@ For more information, review:
Support is limited to the following infrastructures:
* Monolithic Master
-* Monolithic Master with Compile Masters
-* Monolithic Master with External PostgreSQL
-* Monolithic Master with Compile Masters with External PostgreSQL
* Monolithic Master with HA
+* Monolithic Master with Compile Masters
* Monolithic Master with Compile Masters with HA
+* Monolithic Master with Compile Masters with External PostgreSQL
+* Monolithic Master with External PostgreSQL
* Split Infrastructure
* Split Infrastructure with Compile Masters
-* Split Infrastructure with External PostgreSQL
* Split Infrastructure with Compile Masters with External PostgreSQL
+* Split Infrastructure with External PostgreSQL
### Version Support
@@ -211,13 +218,13 @@ Support is limited to the following versions:
* PE 2018.x.x
* PE 2019.x.x
-\* In these versions, this module is unable to identify PE Database hosts or tune PE PostgreSQL services.
+\* In these versions, this command is unable to identify PE Database hosts or tune PE PostgreSQL services.
#### Puppet Enterprise 2018.1.3 and Newer
-This module is the upstream version of the `puppet infrastructure tune` subcommand built into Puppet Enterprise 2018.1.3 and newer. Installing this module in Puppet Enterprise 2018.1.3 and newer will result in a conflict with the built-in `puppet infrastructure tune` subcommand.
-
-To avoid that conflict, install this module and run this subcommand outside the `modulepath`.
+This command is the upstream version of the `puppet infrastructure tune` command built into Puppet Enterprise 2018.1.3 and newer.
+Installing this module in Puppet Enterprise 2018.1.3 and newer will result in a conflict with the `puppet infrastructure tune` subcommand.
+To avoid that conflict, install this module and run this command outside the `modulepath`.
For example:
@@ -234,7 +241,6 @@ puppet pe tune --modulepath /tmp/puppet_modules
#### Puppet Enterprise 2018.1.2 and Older
This module may not be able to query PuppetDB in older versions of Puppet Enterprise.
-
To avoid that error, install this module and run the command outside the `modulepath`.
```shell