+
+
+## Table of Contents
+
+* [About the Project](#dominovagrant)
+  * [Built With](#built-with)
+* [Getting Started](#getting-started)
+  * [Prerequisites](#prerequisites)
+  * [Installation](#installation)
+  * [Mac OS X](https://github.com/DominoVagrant/demo-v2-task-based/blob/master/MacMojaveReadme.md) -- Quick Start
+  * [Windows](https://github.com/DominoVagrant/demo-v2-task-based/blob/master/Win10ReadMe.md) -- Quick Start
+* [Deployment](#deployment)
+  * [Cloning](#cloning-the-repo-locally)
+  * [Overview](#configuring-the-environment)
+  * [Variables](#commonly-changed-parameters)
+  * [Source Files](#source-files)
+* [Initialization](#starting-the-vm)
+  * [Access Methods](#accessing-the-domino-server)
+  * [Web](#web-interface)
+  * [Notes Client](#access-from-notes-client)
+  * [Console](#domino-console)
+* [Common Issues](#common-problems)
+* [Roadmap](#roadmap)
+* [Contributing](#contributing)
+* [License](#license)
+* [Contact](#authors)
+* [Acknowledgements](#acknowledgments)
+
+
+## DominoVagrant
+The primary goal of this project is to use Vagrant to deploy the latest Domino server in a Linux VM. Vagrant and role-specific variables are passed along, automating installation via the REST API interface and Moonshine or other tools that support CRUD API calls. The build uses a specialized Packer template that cuts down deployment time:
+
+* **Template:** [Packer](https://app.vagrantup.com/STARTcloud/boxes/debian11-server)
+* **Build Source:** Repo (not yet available for public consumption)
+
+Each release is cut from the branch that is stable at that time. We recommend using the latest release.
+
+## Getting Started
+
+These instructions will get a copy of the project up and running on your local machine for development and testing purposes; the same process powers the build of the VMs at Prominic.NET.
+
+### Prerequisites
+
+You will need some software on your PC or Mac:
+
+```
+git
+Vagrant
+Virtualbox
+```
+
+## Installation
+
+To ease deployment, we provide a few handy commands that use a package manager for each OS to install the prerequisite software on your host. This is NOT required; it simply helps you ensure that all the applications necessary to run this VM are installed.
+
+#### Windows
+Windows has a package manager named Chocolatey, which is very similar to Snap, YUM, and other package managers. We will use it to quickly install VirtualBox, Vagrant, and Git.
+
+Powershell
+```powershell
+Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
+choco install vagrant
+choco install virtualbox
+choco install git.install
+```
+
+For those who need to run this from a Command Prompt, you can use this:
+
+CMD
+```bat
+@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
+choco install vagrant
+choco install virtualbox
+choco install git.install
+```
+
+#### Mac
+Just like Windows and Linux, Mac OS X has a package manager: Homebrew. We will use it to install the prerequisites. You will likely need to allow unauthenticated applications in the Mac OS X security settings, and there are reports that Mac OS X Mojave requires some additional work to get running correctly. You do NOT have to use these commands to install the prerequisites on your Mac (though it is recommended); you simply need to make sure the three applications are installed on your Mac.
+
+```shell
+/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+brew cask install virtualbox
+brew cask install vagrant
+brew cask install vagrant-manager
+brew install git
+```
+
+#### CentOS 7
+We will use YUM and a few other shell commands to install VirtualBox, Git, and Vagrant.
+
+YUM
+```shell
+yum -y install gcc dkms make qt libgomp patch kernel-headers kernel-devel binutils glibc-headers glibc-devel font-forge
+cd /etc/yum.repos.d/
+wget http://download.virtualbox.org/virtualbox/rpm/rhel/virtualbox.repo
+yum install -y VirtualBox-5.1
+/sbin/rcvboxdrv setup
+yum -y install https://releases.hashicorp.com/vagrant/1.9.6/vagrant_1.9.6_x86_64.rpm
+sudo yum install git
+```
+
+#### Ubuntu
+We will use APT to install VirtualBox, Git, and Vagrant.
+
+APT
+```shell
+sudo apt-get install virtualbox vagrant git-core -y
+```
+
+## Deployment
+### Cloning the repo locally
+
+Open a terminal and run the following git command to clone the project into a local folder:
+
+```shell
+git clone https://github.com/DominoVagrant/demo-v2-task-based
+```
+
+### Configuring the Environment
+Once you have navigated into the project's directory, you will need to modify Hosts.yml for your specific environment.
+
+Set the configuration file with network, memory, and CPU settings that your host machine can actually provide; these vary from system to system, so we cannot predict your machine's CPU and network requirements. Make sure you do not over-allocate CPU or RAM. An example is shown after the parameter list below.
+
+##### Networking
+Networking is set up to create one NAT adapter for Vagrant communications and one bridged adapter.
+The bridged adapter needs to be specified, or Vagrant will prompt for it upon deployment.
+Setting dhcp4 to true (IPv6 is not yet fully supported; try at your own risk) will pull an IP from your local network's DHCP server.
+
+##### Secrets
+
+If you have any sensitive credentials, you will also need to create ```.secrets.yml``` in the root of the project. This is where you can store credential variables that contain sensitive data, which prevents you from uploading them to the repo should you contribute back. Please note that if you remove this file from the .gitignore, you risk uploading sensitive data.
+
+```shell
+cd demo-v2-task-based
+touch .secrets.yml
+nano Hosts.yml
+```
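+
+If you are unsure what belongs in ```.secrets.yml```, the sketch below is a purely illustrative example. The one credential we can point to in this project is the installer download password, which the provisioning roles read as `secrets.domino_installer_url_pass`; the exact top-level key names your configuration expects may differ, so treat this as a hypothetical layout.
+
+```yaml
+# Hypothetical .secrets.yml -- store only sensitive values here.
+# The provisioning roles in this repo reference secrets.domino_installer_url_pass
+# when downloading installers from a password-protected URL.
+domino_installer_url_pass: "your-download-password"
+```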
+
+## Commonly Changed Parameters:
+
+* ip: Use any IP on your internal network that is NOT in use by any other machine.
+* gateway: This is the IP of your router.
+* dhcp4: true/false
+* hostname: This is the hostname of the VM.
+* domain: This is the domain that completes the FQDN.
+* mac: This is your machine's unique network identifier; if you run more than one instance on a network, randomize this. [Mac Generator](https://www.miniwebtool.com/mac-address-generator/)
+* netmask: Set this to the subnet your network is configured for. This is normally 255.255.255.0.
+* name: The Vagrant unique identifier.
+* cpu: The number of cores you are allocating to this machine. Be careful not to over-allocate; over-allocation may cause instability.
+* memory: The amount of memory you are allocating to the machine. Be careful not to over-allocate; over-allocation may cause instability.
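+
+As a rough illustration, the commonly changed values might look like the sketch below. The key names follow the list above, but the exact structure and nesting of Hosts.yml in this repository may differ, so treat this as a hypothetical fragment and adjust it to match the file you cloned.
+
+```yaml
+# Hypothetical Hosts.yml fragment -- key names taken from the parameter list above.
+name: demo-tasks
+hostname: demo
+domain: example.com
+dhcp4: false
+ip: 192.168.22.10
+gateway: 192.168.22.1
+netmask: 255.255.255.0
+mac: "08:00:27:4B:12:34"
+cpu: 2
+memory: 4096
+```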
+
+
+
+### Modifying Roles
+The default provisioning engine is ansible-local. This lets us template our variables into files before deploying and executing the installers, so we can set dynamic usernames, paths, passwords, etc.
+
+#### Domino One-Touch References
+To make changes to the One-Touch installer, modify the template file setup.json.j2 in the /templates folder of the "domino_config" role.
+
+You can find more information on the fields and how they correspond to field values in Domino Designer here:
+
+[Domino-OneTouch](https://help.hcltechsw.com/domino/12.0.0/admin/inst_usingthedominoserversetupprogram_c.html)
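+
+For reference, here is an abridged excerpt of the setup.json.j2 template shipped with this project, showing how role variables are substituted into the One-Touch configuration (many fields are omitted here):
+
+```json
+{
+  "serverSetup": {
+    "server": {
+      "type": "first",
+      "name": "{{ settings.hostname }}.{{ settings.domain }}",
+      "domainName": "{{ domino_organization }}",
+      "serverTasks": "HTTP"
+    },
+    "admin": {
+      "firstName": "{{ domino_admin_user_first_name }}",
+      "lastName": "{{ domino_admin_user_last_name }}",
+      "password": "{{ domino_admin_notes_id_password }}",
+      "IDFilePath": "{{ domino_home_dir }}/ids/{{ domino_admin_user_id }}"
+    }
+  }
+}
+```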
+
+## Source Files
+
+If you have the Domino installation files in a remote repository,
+you can define their locations in Hosts.yml under their respective variables.
+
+If you do not have a repository to pull your installation files from,
+you can place the archived installers in the ./installers/{{APPLICATION}}/archived directory.
+These will be expanded into their respective folders under /vagrant/installers/{{APPLICATION}}/archived.
+
+You will need to supply the Domino installer and optional fix pack files
+yourself (e.g., Domino_12.0_Linux_English.tar, Domino_1101FP2_Linux.tar).
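+
+As a hedged example, assuming the Domino installer folder under ./installers is named `domino` (check the actual folder names expected by the roles you enable -- some roles in this repository use an `archives` subdirectory rather than `archived`):
+
+```shell
+# Copy the installer archive you downloaded from HCL into the local installers tree;
+# Vagrant syncs this folder into the VM under /vagrant/installers.
+mkdir -p installers/domino/archived
+cp ~/Downloads/Domino_12.0_Linux_English.tar installers/domino/archived/
+```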
+
+## Cross Certifying
+
+If you want to access the server from a Notes ID, create a safe ID using the instructions [here](#access-from-notes-client)
+
+**Place your file into the ./safe-id-to-cross-certify folder.**
+
+## Starting the VM
+The installation process is estimated to take about 15-30 minutes.
+
+```shell
+vagrant up
+```
+
+At this point, you can execute `vagrant up` in the git checkout directory
+to spin up a VM instance, or use the utility scripts
+./scripts/vagrant_up.sh or ./scripts/vagrant_up.ps1 to create a log file of the initialization
+output in addition to showing it on screen.
+
+Once the system has been provisioned, you can use `vagrant ssh` to access
+it, or again use the utility scripts vagrant_ssh.sh/vagrant_ssh.ps1 to create
+a log file of the SSH session.
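+
+For example, on macOS or Linux (assuming vagrant_ssh.sh lives in ./scripts alongside vagrant_up.sh), the logged equivalents of the two commands above are:
+
+```shell
+# Provision the VM and capture the initialization output to a log file
+./scripts/vagrant_up.sh
+
+# Once provisioned, open an SSH session that is also captured to a log file
+./scripts/vagrant_ssh.sh
+```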
+
+View the contents of CommandHelp.text for more details.
+This file is also displayed following each `vagrant up` operation for
+your continued reference.
+
+## Accessing the Domino Server
+
+The Domino server will be started automatically when `vagrant up` completes.
+
+### Domino Console
+
+To access the console, run:
+
+```vagrant ssh -c "screen -r"```
+or
+```vagrant ssh -c "sudo domino console"```
+
+### Web Interface
+
+The web interface of the server is here (substitute your VM's static or DHCP-assigned IP): https://yourstaticordhcpip:443/downloads/welcome.html
+
+### Access from Notes Client
+
+If you want to access the server from a Notes Client, you will need to cross-certify your ID. To do this, first create a safe ID:
+1. Open User Security:
+ - MacOS: HCL Notes > Security > User Security
+ - Windows: File > Security > User Security
+2. Select the Your Identity > Your Certificates tab
+3. Run Other Actions > Export NotesID Safe ID. Do not set a password
+
+Copy this ID to `./safe-id-to-cross-certify`, update the `safe_notes_id` variable to match its file name, and run `vagrant up`.
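+
+A minimal sketch of that override is shown below; the role default is "SAFE.IDS", and exactly where you set the variable depends on how your Hosts.yml passes variables through to the provisioning roles, so treat this as a hypothetical example.
+
+```yaml
+# Hypothetical override -- the value must match the file name you copied
+# into ./safe-id-to-cross-certify (the role default is "SAFE.IDS").
+safe_notes_id: "my-user-safe.ids"
+```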
+
+Then you will need to create a connection document in your local Notes client.
+1. File > Open > HCL Notes Application
+2. Open names.nsf on your local machine
+3. Click `Advanced` in the bottom of the left sidebar
+4. Open the Connections view
+5. Click New > Server Connection
+ 1. In the Basic tab, set `Server name` as "demo/Demo" and check the `TCP/IP` checkbox
+ 2. In the Advanced tab, set the `Destination server address` to "127.0.0.1:1352"
+ 3. Click `Save & Close`
+
+Then you can open a database on the server like this:
+1. File > Open > HCL Notes Application
+2. Enter "demo/DEMO" as the server name
+3. Select a database (like names.nsf) and click Open
+
+### Domino Default Credentials
+
+* username: Demo Admin
+* password: password
+
+## Common Problems
+
+### Error for Headless VirtualBox
+
+If you get an error indicating that VirtualBox could not start in headless mode, open Vagrantfile and uncomment this line
+
+```
+ #vb.gui = true
+```
+
+## Roadmap
+
+See the [open issues](https://github.com/DominoVagrant/demo-v2-task-based/issues) for a list of proposed features (and known issues).
+
+## Built With
+* [Vagrant](https://www.vagrantup.com/) - Portable Development Environment Suite.
+* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) - Hypervisor.
+* [Ansible](https://www.ansible.com/) - Virtual Machine Automation Management.
+
+## Contributing
+
+Please read [CONTRIBUTING.md](https://www.prominic.net) for details on our code of conduct, and the process for submitting pull requests to us.
+
+## Authors
+* **Joel Anderson** - *Initial work* - [JoelProminic](https://github.com/JoelProminic)
+* **Justin Hill** - *Initial work* - [JustinProminic](https://github.com/JustinProminic)
+* **Mark Gilbert** - *Refactor* - [MarkProminic](https://github.com/MarkProminic)
+
+See also the list of [contributors](https://github.com/DominoVagrant/demo-v2-task-based/graphs/contributors) who participated in this project.
+
+## License
+
+This project is licensed under the SSLP v3 License - see the [LICENSE.md](LICENSE.md) file for details
+
+## Acknowledgments
+
+* Hat tip to anyone whose code was used
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/Vagrantfile b/Assets/provisioners/demo-tasks/0.1.20/scripts/Vagrantfile
new file mode 100755
index 00000000..f13c766a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/Vagrantfile
@@ -0,0 +1,9 @@
+## Vagrantfile tooling compatible with Bhyve and VirtualBox, potentially ESXi/VMware, KVM
+require 'yaml'
+require File.expand_path("#{File.dirname(__FILE__)}/Hosts.rb")
+
+settings = YAML::load(File.read("#{File.dirname(__FILE__)}/Hosts.yml"))
+
+Vagrant.configure("2") do |config|
+ Hosts.configure(config, settings)
+end
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/ansible.cfg b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/ansible.cfg
new file mode 100644
index 00000000..5b1302be
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/ansible.cfg
@@ -0,0 +1,478 @@
+# config file for ansible -- https://ansible.com/
+# ===============================================
+
+# nearly all parameters can be overridden in ansible-playbook
+# or with command line flags. ansible will read ANSIBLE_CONFIG,
+# ansible.cfg in the current working directory, .ansible.cfg in
+# the home directory or /etc/ansible/ansible.cfg, whichever it
+# finds first
+
+[defaults]
+
+# some basic default values...
+
+#inventory = /etc/ansible/hosts
+#library = /usr/share/my_modules/
+#module_utils = /usr/share/my_module_utils/
+#remote_tmp = ~/.ansible/tmp
+#local_tmp = ~/.ansible/tmp
+#plugin_filters_cfg = /etc/ansible/plugin_filters.yml
+#forks = 5
+#poll_interval = 15
+#sudo_user = root
+#ask_sudo_pass = True
+#ask_pass = True
+#transport = smart
+#remote_port = 22
+#module_lang = C
+#module_set_locale = False
+
+# plays will gather facts by default, which contain information about
+# the remote system.
+#
+# smart - gather by default, but don't regather if already gathered
+# implicit - gather by default, turn off with gather_facts: False
+# explicit - do not gather by default, must say gather_facts: True
+#gathering = implicit
+
+# This only affects the gathering done by a play's gather_facts directive,
+# by default gathering retrieves all facts subsets
+# all - gather all subsets
+# network - gather min and network facts
+# hardware - gather hardware facts (longest facts to retrieve)
+# virtual - gather min and virtual facts
+# facter - import facts from facter
+# ohai - import facts from ohai
+# You can combine them using comma (ex: network,virtual)
+# You can negate them using ! (ex: !hardware,!facter,!ohai)
+# A minimal set of facts is always gathered.
+#gather_subset = all
+
+# some hardware related facts are collected
+# with a maximum timeout of 10 seconds. This
+# option lets you increase or decrease that
+# timeout to something more suitable for the
+# environment.
+# gather_timeout = 10
+
+# additional paths to search for roles in, colon separated
+#roles_path = /etc/ansible/roles
+
+# uncomment this to disable SSH key host checking
+host_key_checking = False
+
+# change the default callback, you can only have one 'stdout' type enabled at a time.
+#stdout_callback = log_plays
+#check_mode_markers = true
+#
+#[callback_log_plays]
+#log_folder = /home/startcloud/output
+
+## Ansible ships with some plugins that require whitelisting,
+## this is done to avoid running all of a type by default.
+## These setting lists those that you want enabled for your system.
+## Custom plugins should not need this unless plugin author specifies it.
+
+# enable callback plugins, they can output to stdout but cannot be 'stdout' type.
+#callback_whitelist = timer, mail, profile_tasks
+
+# Determine whether includes in tasks and handlers are "static" by
+# default. As of 2.0, includes are dynamic by default. Setting these
+# values to True will make includes behave more like they did in the
+# 1.x versions.
+#task_includes_static = False
+#handler_includes_static = False
+
+# Controls if a missing handler for a notification event is an error or a warning
+#error_on_missing_handler = True
+
+# change this for alternative sudo implementations
+#sudo_exe = sudo
+
+# What flags to pass to sudo
+# WARNING: leaving out the defaults might create unexpected behaviours
+#sudo_flags = -H -S -n
+
+# SSH timeout
+#timeout = 10
+
+# default user to use for playbooks if user is not specified
+# (/usr/bin/ansible will use current user as default)
+#remote_user = root
+
+# logging is off by default unless this path is defined
+# if so defined, consider logrotate
+#log_path = /var/log/ansible.log
+
+# default module name for /usr/bin/ansible
+#module_name = command
+
+# use this shell for commands executed under sudo
+# you may need to change this to bin/bash in rare instances
+# if sudo is constrained
+#executable = /bin/sh
+
+# if inventory variables overlap, does the higher precedence one win
+# or are hash values merged together? The default is 'replace' but
+# this can also be set to 'merge'.
+#hash_behaviour = replace
+
+# by default, variables from roles will be visible in the global variable
+# scope. To prevent this, the following option can be enabled, and only
+# tasks and handlers within the role will see the variables there
+#private_role_vars = yes
+
+# list any Jinja2 extensions to enable here:
+#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
+
+# if set, always use this private key file for authentication, same as
+# if passing --private-key to ansible or ansible-playbook
+#private_key_file = /path/to/file
+
+# If set, configures the path to the Vault password file as an alternative to
+# specifying --vault-password-file on the command line.
+#vault_password_file = /path/to/vault_password_file
+
+# format of string {{ ansible_managed }} available within Jinja2
+# templates indicates to users editing templates files will be replaced.
+# replacing {file}, {host} and {uid} and strftime codes with proper values.
+#ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
+# {file}, {host}, {uid}, and the timestamp can all interfere with idempotence
+# in some situations so the default is a static string:
+ansible_managed = Ansible managed
+
+# by default, ansible-playbook will display "Skipping [host]" if it determines a task
+# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
+# messages. NOTE: the task header will still be shown regardless of whether or not the
+# task is skipped.
+#display_skipped_hosts = True
+
+# by default, if a task in a playbook does not include a name: field then
+# ansible-playbook will construct a header that includes the task's action but
+# not the task's args. This is a security feature because ansible cannot know
+# if the *module* considers an argument to be no_log at the time that the
+# header is printed. If your environment doesn't have a problem securing
+# stdout from ansible-playbook (or you have manually specified no_log in your
+# playbook on all of the tasks where you have secret information) then you can
+# safely set this to True to get more informative messages.
+#display_args_to_stdout = False
+
+# by default (as of 1.3), Ansible will raise errors when attempting to dereference
+# Jinja2 variables that are not set in templates or action lines. Uncomment this line
+# to revert the behavior to pre-1.3.
+#error_on_undefined_vars = False
+
+# by default (as of 1.6), Ansible may display warnings based on the configuration of the
+# system running ansible itself. This may include warnings about 3rd party packages or
+# other conditions that should be resolved if possible.
+# to disable these warnings, set the following value to False:
+#system_warnings = True
+
+# by default (as of 1.4), Ansible may display deprecation warnings for language
+# features that should no longer be used and will be removed in future versions.
+# to disable these warnings, set the following value to False:
+#deprecation_warnings = True
+
+# (as of 1.8), Ansible can optionally warn when usage of the shell and
+# command module appear to be simplified by using a default Ansible module
+# instead. These warnings can be silenced by adjusting the following
+# setting or adding warn=yes or warn=no to the end of the command line
+# parameter string. This will for example suggest using the git module
+# instead of shelling out to the git command.
+# command_warnings = False
+
+
+# set plugin path directories here, separate with colons
+#action_plugins = /usr/share/ansible/plugins/action
+#cache_plugins = /usr/share/ansible/plugins/cache
+#callback_plugins = /usr/share/ansible/plugins/callback
+#connection_plugins = /usr/share/ansible/plugins/connection
+#lookup_plugins = /usr/share/ansible/plugins/lookup
+#inventory_plugins = /usr/share/ansible/plugins/inventory
+#vars_plugins = /usr/share/ansible/plugins/vars
+#filter_plugins = /usr/share/ansible/plugins/filter
+#test_plugins = /usr/share/ansible/plugins/test
+#terminal_plugins = /usr/share/ansible/plugins/terminal
+#strategy_plugins = /usr/share/ansible/plugins/strategy
+
+
+# by default, ansible will use the 'linear' strategy but you may want to try
+# another one
+#strategy = free
+
+# by default callbacks are not loaded for /bin/ansible, enable this if you
+# want, for example, a notification or logging callback to also apply to
+# /bin/ansible runs
+bin_ansible_callbacks = False
+
+
+# don't like cows? that's unfortunate.
+# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
+#nocows = 1
+
+# set which cowsay stencil you'd like to use by default. When set to 'random',
+# a random stencil will be selected for each task. The selection will be filtered
+# against the `cow_whitelist` option below.
+#cow_selection = default
+#cow_selection = random
+
+# when using the 'random' option for cowsay, stencils will be restricted to this list.
+# it should be formatted as a comma-separated list with no spaces between names.
+# NOTE: line continuations here are for formatting purposes only, as the INI parser
+# in python does not support them.
+#cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\
+# hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\
+# stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www
+
+# don't like colors either?
+# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
+#nocolor = 1
+
+# if set to a persistent type (not 'memory', for example 'redis') fact values
+# from previous runs in Ansible will be stored. This may be useful when
+# wanting to use, for example, IP information from one group of servers
+# without having to talk to them in the same playbook run to get their
+# current IP information.
+#fact_caching = memory
+
+
+# retry files
+# When a playbook fails by default a .retry file will be created in ~/
+# You can disable this feature by setting retry_files_enabled to False
+# and you can change the location of the files by setting retry_files_save_path
+
+#retry_files_enabled = False
+#retry_files_save_path = ~/.ansible-retry
+
+# squash actions
+# Ansible can optimise actions that call modules with list parameters
+# when looping. Instead of calling the module once per with_ item, the
+# module is called once with all items at once. Currently this only works
+# under limited circumstances, and only with parameters named 'name'.
+#squash_actions = apk,apt,dnf,homebrew,pacman,pkgng,yum,zypper
+
+# prevents logging of task data, off by default
+#no_log = False
+
+# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
+#no_target_syslog = False
+
+# controls whether Ansible will raise an error or warning if a task has no
+# choice but to create world readable temporary files to execute a module on
+# the remote machine. This option is False by default for security. Users may
+# turn this on to have behaviour more like Ansible prior to 2.1.x. See
+# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
+# for more secure ways to fix this than enabling this option.
+#allow_world_readable_tmpfiles = False
+
+# controls the compression level of variables sent to
+# worker processes. At the default of 0, no compression
+# is used. This value must be an integer from 0 to 9.
+#var_compression_level = 9
+
+# controls what compression method is used for new-style ansible modules when
+# they are sent to the remote system. The compression types depend on having
+# support compiled into both the controller's python and the client's python.
+# The names should match with the python Zipfile compression types:
+# * ZIP_STORED (no compression. available everywhere)
+# * ZIP_DEFLATED (uses zlib, the default)
+# These values may be set per host via the ansible_module_compression inventory
+# variable
+#module_compression = 'ZIP_DEFLATED'
+
+# This controls the cutoff point (in bytes) on --diff for files
+# set to 0 for unlimited (RAM may suffer!).
+#max_diff_size = 1048576
+
+# This controls how ansible handles multiple --tags and --skip-tags arguments
+# on the CLI. If this is True then multiple arguments are merged together. If
+# it is False, then the last specified argument is used and the others are ignored.
+# This option will be removed in 2.8.
+#merge_multiple_cli_flags = True
+
+# Controls showing custom stats at the end, off by default
+#show_custom_stats = True
+
+# Controls which files to ignore when using a directory as inventory with
+# possibly multiple sources (both static and dynamic)
+#inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo
+
+# This family of modules use an alternative execution path optimized for network appliances
+# only update this setting if you know how this works, otherwise it can break module execution
+#network_group_modules=eos, nxos, ios, iosxr, junos, vyos
+
+# When enabled, this option allows lookups (via variables like {{lookup('foo')}} or when used as
+# a loop with `with_foo`) to return data that is not marked "unsafe". This means the data may contain
+# jinja2 templating language which will be run through the templating engine.
+# ENABLING THIS COULD BE A SECURITY RISK
+#allow_unsafe_lookups = False
+
+# set default errors for all plays
+#any_errors_fatal = False
+
+[inventory]
+# enable inventory plugins, default: 'host_list', 'script', 'yaml', 'ini'
+#enable_plugins = host_list, virtualbox, yaml, constructed
+
+# ignore these extensions when parsing a directory as inventory source
+#ignore_extensions = .pyc, .pyo, .swp, .bak, ~, .rpm, .md, .txt, ~, .orig, .ini, .cfg, .retry
+
+# ignore files matching these patterns when parsing a directory as inventory source
+#ignore_patterns=
+
+# If 'true' unparsed inventory sources become fatal errors, they are warnings otherwise.
+#unparsed_is_failed=False
+
+[privilege_escalation]
+#become=True
+#become_method=sudo
+#become_user=root
+#become_ask_pass=False
+
+[paramiko_connection]
+
+# uncomment this line to cause the paramiko connection plugin to not record new host
+# keys encountered. Increases performance on new host additions. Setting works independently of the
+# host key checking setting above.
+#record_host_keys=False
+
+# by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this
+# line to disable this behaviour.
+#pty=False
+
+# paramiko will default to looking for SSH keys initially when trying to
+# authenticate to remote devices. This is a problem for some network devices
+# that close the connection after a key failure. Uncomment this line to
+# disable the Paramiko look for keys function
+#look_for_keys = False
+
+# When using persistent connections with Paramiko, the connection runs in a
+# background process. If the host doesn't already have a valid SSH key, by
+# default Ansible will prompt to add the host key. This will cause connections
+# running in background processes to fail. Uncomment this line to have
+# Paramiko automatically add host keys.
+#host_key_auto_add = True
+
+[ssh_connection]
+
+# ssh arguments to use
+# Leaving off ControlPersist will result in poor performance, so use
+# paramiko on older platforms rather than removing it, -C controls compression use
+#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
+
+# The base directory for the ControlPath sockets.
+# This is the "%(directory)s" in the control_path option
+#
+# Example:
+# control_path_dir = /tmp/.ansible/cp
+#control_path_dir = ~/.ansible/cp
+
+# The path to use for the ControlPath sockets. This defaults to a hashed string of the hostname,
+# port and username (empty string in the config). The hash mitigates a common problem users
+# found with long hostames and the conventional %(directory)s/ansible-ssh-%%h-%%p-%%r format.
+# In those cases, a "too long for Unix domain socket" ssh error would occur.
+#
+# Example:
+# control_path = %(directory)s/%%h-%%r
+#control_path =
+
+# Enabling pipelining reduces the number of SSH operations required to
+# execute a module on the remote server. This can result in a significant
+# performance improvement when enabled, however when using "sudo:" you must
+# first disable 'requiretty' in /etc/sudoers
+#
+# By default, this option is disabled to preserve compatibility with
+# sudoers configurations that have requiretty (the default on many distros).
+#
+pipelining = True
+
+# Control the mechanism for transferring files (old)
+# * smart = try sftp and then try scp [default]
+# * True = use scp only
+# * False = use sftp only
+#scp_if_ssh = smart
+
+# Control the mechanism for transferring files (new)
+# If set, this will override the scp_if_ssh option
+# * sftp = use sftp to transfer files
+# * scp = use scp to transfer files
+# * piped = use 'dd' over SSH to transfer files
+# * smart = try sftp, scp, and piped, in that order [default]
+#transfer_method = smart
+
+# if False, sftp will not use batch mode to transfer files. This may cause some
+# types of file transfer failures impossible to catch however, and should
+# only be disabled if your sftp version has problems with batch mode
+#sftp_batch_mode = False
+
+# The -tt argument is passed to ssh when pipelining is not enabled because sudo
+# requires a tty by default.
+#use_tty = True
+
+[persistent_connection]
+
+# Configures the persistent connection timeout value in seconds. This value is
+# how long the persistent connection will remain idle before it is destroyed.
+# If the connection doesn't receive a request before the timeout value
+# expires, the connection is shutdown. The default value is 30 seconds.
+#connect_timeout = 30
+
+# Configures the persistent connection retry timeout. This value configures the
+# the retry timeout that ansible-connection will wait to connect
+# to the local domain socket. This value must be larger than the
+# ssh timeout (timeout) and less than persistent connection idle timeout (connect_timeout).
+# The default value is 15 seconds.
+#connect_retry_timeout = 15
+
+# The command timeout value defines the amount of time to wait for a command
+# or RPC call before timing out. The value for the command timeout must
+# be less than the value of the persistent connection idle timeout (connect_timeout)
+# The default value is 10 second.
+#command_timeout = 10
+
+[accelerate]
+#accelerate_port = 5099
+#accelerate_timeout = 30
+#accelerate_connect_timeout = 5.0
+
+# The daemon timeout is measured in minutes. This time is measured
+# from the last activity to the accelerate daemon.
+#accelerate_daemon_timeout = 30
+
+# If set to yes, accelerate_multi_key will allow multiple
+# private keys to be uploaded to it, though each user must
+# have access to the system via SSH to add a new key. The default
+# is "no".
+#accelerate_multi_key = yes
+
+[selinux]
+# file systems that require special treatment when dealing with security context
+# the default behaviour that copies the existing context or uses the user default
+# needs to be changed to use the file system dependent context.
+#special_context_filesystems=nfs,vboxsf,fuse,ramfs,9p
+
+# Set this to yes to allow libvirt_lxc connections to work without SELinux.
+#libvirt_lxc_noseclabel = yes
+
+[colors]
+#highlight = white
+#verbose = blue
+#warn = bright purple
+#error = red
+#debug = dark gray
+#deprecate = purple
+#skip = cyan
+#unreachable = red
+#ok = green
+#changed = yellow
+#diff_add = green
+#diff_remove = red
+#diff_lines = cyan
+
+
+[diff]
+# Always print diff when running ( same as always running with -D/--diff )
+# always = no
+
+# Set how many context lines to show in diff
+# context = 3
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/generate-playbook.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/generate-playbook.yml
new file mode 100755
index 00000000..b2896d4c
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/generate-playbook.yml
@@ -0,0 +1,19 @@
+---
+-
+ name: "Generating demo-tasks Playbook Locally"
+ become: true
+ gather_facts: true
+ hosts: all
+ tasks:
+ -
+ name: "Dynamically generating template playbook for SHI"
+ ansible.builtin.template:
+ dest: "Hosts.template.yml"
+ mode: a+x
+ src: "Hosts.template.yml.j2"
+ -
+ name: "Dynamically generating playbook"
+ ansible.builtin.template:
+ dest: "/vagrant/ansible/playbook.yml"
+ mode: a+x
+ src: "playbook.yml.j2"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/defaults/main.yml
new file mode 100644
index 00000000..4898b003
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/defaults/main.yml
@@ -0,0 +1,9 @@
+---
+appdevpack_archive: domino-appdev-pack-1.0.15.tgz
+appdevpack_version: 1.0.15
+appdevpack_debug: true
+domino_appdevpack_port_forwards:
+ -
+ guest: 8080
+ url: "appdevwebpack"
+domino_appdevpack_proxy_url: "{{ domino_appdevpack_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/meta/main.yml
new file mode 100644
index 00000000..f1aac7b5
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_appdevpack
+ author: MarkProminic
+ description: Add Appdevpack tools to Domino for NodeJS development
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/tasks/main.yml
new file mode 100755
index 00000000..f60df8d9
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_appdevpack/tasks/main.yml
@@ -0,0 +1,114 @@
+---
+-
+ name: "Creating installation directories for domino-appdev-pack"
+ ansible.builtin.file:
+ path: "{{ item }}"
+ state: directory
+ mode: '0644'
+ with_items:
+ - "{{ installer_dir }}/appdevpack/archives"
+ - "{{ installer_dir }}/appdevpack/domino-appdev-pack"
+
+-
+ name: "Checking if domino-appdev-pack installer is at appdevpack/archives/{{ appdevpack_archive }}"
+ register: domino_server_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/appdevpack/archives/{{ appdevpack_archive }}"
+ get_md5: false
+
+-
+ name: "Checking if domino-appdev-pack is installed: {{ appdevpack_version }}"
+ register: appdevpack_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/appdevpack_install"
+ get_md5: false
+
+-
+ name: "Downloading domino-appdev-pack from {{ domino_installer_base_url }}"
+ register: appdevpackresult
+ until: "appdevpackresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/domino-appdev-pack/{{ appdevpack_archive }}"
+ dest: "{{ installer_dir }}/appdevpack/archives/{{ appdevpack_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not domino_server_installer_check.stat.exists and not appdevpack_installed_check.stat.exists
+
+-
+ name: "Extracting domino-appdev-pack from {{ appdevpack_archive }}"
+ when: not appdevpack_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/appdevpack/archives/{{ appdevpack_archive }}"
+ dest: "{{ installer_dir }}/appdevpack/domino-appdev-pack"
+ creates: "{{ installer_dir }}/appdevpack/domino-appdev-pack/adpconfig.ntf"
+ remote_src: true
+
+-
+ name: "Stopping Domino for domino-appdev-pack Installation"
+ when: not appdevpack_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+
+-
+ name: "Copying App Dev Pack Templates and Files"
+ ansible.builtin.copy:
+ mode: '0644'
+ src: "{{ item }}"
+ dest: "{{ service_home_dir }}"
+ remote_src: true
+ owner: "{{ service_user }}"
+ with_items:
+ - "{{ installer_dir }}/appdevpack/domino-appdev-pack/adpconfig.ntf"
+ - "{{ installer_dir }}/appdevpack/domino-appdev-pack/iam-store.ntf"
+ - "{{ installer_dir }}/appdevpack/domino-appdev-pack/1202-proton-addin-0.15.5+ND12000200.tgz"
+ when: not appdevpack_installed_check.stat.exists
+
+-
+ name: "Extracting domino-appdev-pack from {{ appdevpack_archive }}"
+ when: not appdevpack_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/appdevpack/domino-appdev-pack/1202-proton-addin-0.15.5+ND12000200.tgz"
+ dest: "/opt/hcl/domino/notes/latest/linux"
+ creates: "/opt/hcl/domino/notes/latest/linux/libiamclient.so"
+ remote_src: true
+
+-
+ name: "Installing domino-appdev-pack"
+ when: not appdevpack_installed_check.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ executable: "/bin/bash"
+ chdir: "/opt/hcl/domino/notes/latest/linux"
+ with_items:
+ - sh -v ./setup_proton.sh
+
+-
+ name: "Setting domino-appdev-pack as installed"
+ when: not appdevpack_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/appdevpack_install"
+
+-
+ name: "Starting Domino"
+ when: not appdevpack_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/defaults/main.yml
new file mode 100644
index 00000000..666840cd
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/defaults/main.yml
@@ -0,0 +1,7 @@
+---
+show_help: true
+debug_autoconfigure: false
+existing_server_id: "server.id"
+use_existing_server_id: false
+existing_server: ""
+existing_server_ip: ""
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/meta/main.yml
new file mode 100644
index 00000000..37a3a430
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_config
+ author: MarkProminic
+ description: Configure Domino using one-touch setup
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/tasks/main.yml
new file mode 100755
index 00000000..c992a161
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/tasks/main.yml
@@ -0,0 +1,222 @@
+---
+-
+ name: "Creating KYR for Domino"
+ ansible.builtin.shell: "{{ item }}"
+ become_user: "{{ domino_user }}"
+ when: domino_https_enabled
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ domino_home_dir }}"
+ creates: "{{ cert_dir }}/kyr/{{ kyr_cert }}"
+ with_items:
+ - "{{ domino_install_basedir }}/bin/tools/startup {{ domino_install_dir }}/kyrtool create -k {{ cert_dir }}/kyr/{{ kyr_cert }} -p {{ cert_pass }}"
+
+-
+ name: "Adding SSL Keys to KYR"
+ ansible.builtin.shell: >
+ {{ domino_install_basedir }}/bin/tools/startup
+ {{ domino_install_dir }}/kyrtool import keys
+ -k {{ cert_dir }}/kyr/{{ kyr_cert }}
+ -i {{ cert_dir }}/key/{{ settings.hostname }}.{{ settings.domain }}-self-signed.key &&
+ touch {{ completed_dir }}/kyr-key-imported
+ become_user: "{{ domino_user }}"
+ when: domino_https_enabled and selfsigned_enabled
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ domino_home_dir }}"
+ creates: "{{ completed_dir }}/kyr-key-imported"
+
+-
+ name: "Adding SSL Certificate to KYR"
+ ansible.builtin.shell: >
+ {{ domino_install_basedir }}/bin/tools/startup
+ {{ domino_install_dir }}/kyrtool import certs
+ -k {{ cert_dir }}/kyr/{{ kyr_cert }}
+ -i {{ cert_dir }}/crt/{{ settings.hostname }}.{{ settings.domain }}-self-signed.crt &&
+ touch {{ completed_dir }}/kyr-cert-imported
+ become_user: "{{ domino_user }}"
+ when: domino_https_enabled and selfsigned_enabled
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ domino_home_dir }}"
+ creates: "{{ completed_dir }}/kyr-cert-imported"
+
+-
+ name: "Adding SSL Keys to KYR"
+ ansible.builtin.shell: >
+ {{ domino_install_basedir }}/bin/tools/startup
+ {{ domino_install_dir }}/kyrtool import keys
+ -k {{ cert_dir }}/kyr/{{ kyr_cert }}
+ -i {{ cert_dir }}/key/default-signed.key &&
+ touch {{ completed_dir }}/kyr-key-imported
+ become_user: "{{ domino_user }}"
+ when: domino_https_enabled and not selfsigned_enabled
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ domino_home_dir }}"
+ creates: "{{ completed_dir }}/kyr-key-imported"
+
+-
+ name: "Adding SSL Certificate to KYR"
+ ansible.builtin.shell: >
+ {{ domino_install_basedir }}/bin/tools/startup
+ {{ domino_install_dir }}/kyrtool import certs
+ -k {{ cert_dir }}/kyr/{{ kyr_cert }}
+ -i {{ cert_dir }}/crt/default-signed.crt &&
+ touch {{ completed_dir }}/kyr-cert-imported
+ become_user: "{{ domino_user }}"
+ when: domino_https_enabled and not selfsigned_enabled
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ domino_home_dir }}"
+ creates: "{{ completed_dir }}/kyr-cert-imported"
+
+-
+ name: "Creating Installation Directories"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ with_items:
+ - "/safe-id-to-cross-certify"
+ - "{{ domino_home_dir }}/idvault"
+ - "{{ domino_home_dir }}/ids"
+
+-
+ name: "Checking if Domino has been touched"
+ register: domino_server_touched
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/domsetup"
+ get_md5: false
+
+-
+ name: "Running CreateNamesDatabase application, Cleanup names.nsf, and Generate a new names.nsf"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ chdir: ~
+ executable: /bin/bash
+ environment:
+ PASSWORD: "{{ domino_admin_notes_id_password }}"
+ when: java_helpers_install
+ with_items:
+ - "java -jar CreateNamesDatabase.jar"
+ - "yes {{ domino_admin_notes_id_password }} | java -jar CreateNamesDatabase.jar"
+ - "expect CreateNamesDatabase.exp {{ domino_admin_notes_id_password }}"
+ - "rm -f {{ domino_home_dir }}/names.nsf"
+ - "java -jar CreateNamesDatabase.jar"
+
+-
+ name: "Adding the Domino One-Touch Setup.json"
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.template:
+ dest: "{{ domino_home_dir }}/setup.json"
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: setup.json.j2
+
+-
+ name: "Configuring Domino server via setup.json"
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ async: 25920
+ poll: 0
+ args:
+ chdir: "{{ domino_home_dir }}"
+ executable: /bin/sh
+ creates: "{{ completed_dir }}/domsetup"
+ with_items:
+ - "(su - {{ domino_user }} -c '{{ domino_install_basedir }}/bin/server -autoconf setup.json && touch {{ completed_dir }}/domsetup > /dev/null 2>&1 &')"
+
+-
+ name: Pause for 60 seconds to let Domino fully configure
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.pause:
+ seconds: 60
+
+-
+ name: "Waiting until the file autoconfigure.log is present before continuing"
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.wait_for:
+ path: "{{ domino_home_dir }}/IBM_TECHNICAL_SUPPORT/autoconfigure.log"
+
+-
+ name: "Capturing autoconfigure log output"
+ ansible.builtin.command: "cat {{ domino_home_dir }}/IBM_TECHNICAL_SUPPORT/autoconfigure.log"
+ register: autoconf_result
+ when: ( debug_autoconfigure or debug_all ) and not domino_server_touched.stat.exists
+
+-
+ name: "Outputting Autoconfigure Log"
+ ansible.builtin.debug:
+ var: autoconf_result.stdout_lines
+ when: ( debug_autoconfigure or debug_all ) and not domino_server_touched.stat.exists
+
+-
+ name: Waiting until the file certstore.nsf is present before continuing
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.wait_for:
+ path: "{{ domino_home_dir }}/certstore.nsf"
+
+-
+ name: Pause for 60 seconds to let Domino fully configure
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.pause:
+ seconds: 60
+
+-
+ name: "Cleanly stopping Domino"
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ executable: "/bin/bash"
+ register: domino_stop_status
+ with_items:
+ - 'domino cmd "quit" 20'
+
+-
+ name: "Outputting Domino Stop Status"
+ when: ( debug_autoconfigure or debug_all ) and not domino_server_touched.stat.exists
+ ansible.builtin.debug:
+ var: domino_stop_status
+
+-
+ name: Waiting until the completed installation file is present before continuing
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.wait_for:
+ path: "{{ completed_dir }}/domsetup"
+
+-
+ name: 'Copying userID per "convention" for cross-certification'
+ become: true
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.file:
+ path: "{{ domino_home_dir }}/ids/{{ domino_admin_user_id }}"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ mode: '0755'
+
+-
+ name: 'Copying userID per "convention" for cross-certification'
+ become: true
+ when: not domino_server_touched.stat.exists
+ ansible.builtin.copy:
+ mode: '0755'
+ src: "{{ domino_home_dir }}/ids/{{ domino_admin_user_id }}"
+ dest: "{{ service_home_dir }}/user.id"
+ owner: "{{ service_user }}"
+ group: "{{ service_group }}"
+
+-
+ name: "Stopping Domino and Enabling at boot"
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/templates/setup.json.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/templates/setup.json.j2
new file mode 100755
index 00000000..7bd92925
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_config/templates/setup.json.j2
@@ -0,0 +1,213 @@
+{
+ "serverSetup":{
+ "server":{
+ "type":"first",
+ "name":"{{ settings.hostname }}.{{ settings.domain }}",
+ "domainName":"{{ domino_organization }}",
+ "title":"{{ settings.hostname }}",
+ "password":null,
+ "minPasswordLength":5,
+ {% if existing_server_id is defined and use_existing_server_id %}
+ "IDFilePath":"{{ domino_home_dir }}/{{ existing_server_id }}",
+ "useExistingServerID": {{ use_existing_server_id }},
+ {% endif %}
+ "serverTasks":"HTTP"
+ },
+ {% if existing_server_id is defined and use_existing_server_id %}
+ "existingServer": {
+ "CN": "{{ existing_server }}",
+ "hostNameOrIP": "{{ existing_server_ip }}"
+ },
+ {% endif %}
+ "network":{
+ "hostName":"{{ settings.hostname }}.{{ settings.domain }}",
+ "enablePortEncryption":true,
+ "enablePortCompression":true
+ },
+ "org":{
+ "countryCode":null,
+ "orgName":"{{ domino_organization }}",
+ "certifierPassword":"{{ domino_admin_notes_id_password }}",
+ "orgUnitName":null,
+ "orgUnitPassword":null
+ },
+ "admin":{
+ "firstName":"{{ domino_admin_user_first_name }}",
+ "middleName":null,
+ "lastName":"{{ domino_admin_user_last_name }}",
+ "password":"{{ domino_admin_notes_id_password }}",
+ "IDFilePath":"{{ domino_home_dir }}/ids/{{ domino_admin_user_id }}"
+ },
+ "notesINI":{
+ "ServerTasks":"{{ domino_notesini_servertasks }}",
+ "LOG_REPLICATION":"1",
+ "LOG_SESSIONS":"1",
+ "iNotes_WA_EnableActionsInArchive":"1",
+ "JavaUserClassesExt":"GJA_Genesis",
+ "GJA_Genesis":"JavaAddin/Genesis/{{ genesis_jar }}",
+ "HTTPEnableMethods":"GET,POST,PUT,DELETE,HEAD,OPTIONS",
+ "HTTPJVMMaxHeapSize":"2048M",
+ "HTTPJVMMaxHeapSizeSet":"1"
+ },
+ "security":{
+ "ACL":{
+ "prohibitAnonymousAccess":true,
+ "addLocalDomainAdmins":true
+ },
+ "TLSSetup": {
+ "method": "import",
+ "retainImportFile": true,
+ "importFilePath": "{{ cert_dir }}/kyr/{{ kyr_cert }}",
+ "importFilePassword": "{{ cert_pass }}",
+ "exportPassword": "{{ cert_pass }}"
+ }
+ },
+ "autoRegister":{
+ "count": {{ domino_server_clustermates }},
+ "IDPath":"{{ domino_home_dir }}/ids",
+ "pattern":"server#"
+ },
+ "registerUsers":{
+ "users":[
+ {
+ "firstName":"{{ domino_dev_user_first_name }}",
+ "lastName":"{{ domino_dev_user_last_name }}",
+ "shortName":"{{ domino_dev_user_short_name }}",
+ "password":"{{ domino_dev_user_id_password }}",
+ "IDFilePath":"{{ domino_home_dir }}/ids/{{ domino_dev_user_id }}"
+ }
+ ]
+ }
+ },
+ "IDVault":{
+ "name":"O={{ id_vault_name }}",
+ "description":"{{ id_vault_name }}",
+ "IDFile":"/idvault/{{ id_vault_name }}.id",
+ "IDPassword":"{{ id_vault_password }}",
+ "path":"IBM_ID_VAULT/{{ id_vault_name }}.nsf",
+ "passwordReset":{
+ "helpText":"{{ id_vault_help_text }}"
+ },
+ "securitySettingsPolicy":{
+ "name":"{{ id_vault_name }} Security Settings Policy",
+ "description":"{{ id_vault_name }} Security Settings"
+ },
+ "masterPolicy":{
+ "description":"{{ id_vault_name }} Master Policy Description"
+ }
+ },
+ "appConfiguration":{
+ "databases":[
+ {
+ "filePath":"names.nsf",
+ "action":"update",
+ "ACL":{
+ "ACLEntries":[
+ {
+ "name":"AutomaticallyCrossCertifiedUsers",
+ "level":"manager",
+ "type":"personGroup",
+ "isPublicReader":true,
+ "isPublicWriter":true,
+ "canDeleteDocuments":true
+ }
+ ]
+ },
+ "documents":[
+ {
+ "action":"update",
+ "findDocument":{
+ "Type":"Server",
+ "ServerName":"CN={{ settings.hostname }}.{{ settings.domain }}/O={{ domino_organization }}"
+ },
+ "computeWithForm":true,
+ "items":{
+ "FullAdmin":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "CreateAccess":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "ReplicaAccess":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "UnrestrictedList":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "OnBehalfOfInvokerLst":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "LibsLst":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "RestrictedList":[
+ "LocalDomainAdmins",
+ "AutomaticallyCrossCertifiedUsers",
+ "CN={{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/O={{ domino_organization }}"
+ ],
+ "HTTP_EnableSessionAuth":"1",
+ "HTTP_Port":{{ domino_install_port_forwards[1].guest }},
+ "HTTP_SSLPort":{{ domino_install_port_forwards[0].guest }},
+ "HTTP_SSLMode":"{{ domino_https_enabled }}",
+ "HTTP_SSLKeyFile":"{{ cert_dir }}/kyr/{{ kyr_cert }}",
+ "LdISite":"1"
+ }
+ },
+ {
+ "action":"create",
+ "computeWithForm":true,
+ "items":{
+ "Form":"Program",
+ "CmdLine":"Genesis",
+ "Enabled":"2",
+ "Program":"runjava",
+ "Source":"CN={{ settings.hostname }}.{{ settings.domain }}/O={{ domino_organization }}"
+ }
+ },
+ {
+ "action":"create",
+ "computeWithForm":true,
+ "items":{
+ "Form":"WebSite",
+ "ISiteOrg":"{{ domino_organization }}",
+ "ISiteName":"Domino Web Site",
+ "WSIsDflt":"1",
+ "HTTP_EnableSessionAuth":"1",
+ "WSHTTPMthds_ed":[ "1", "2", "3", "4", "6", "7" ]
+ }
+ },
+ {
+ "action":"create",
+ "computeWithForm":true,
+ "items":{
+ "Form":"Group",
+ "Type":"Group",
+ "GroupType":"0",
+ "ListName":"AutomaticallyCrossCertifiedUsers",
+ "ListDescription":"Created automatically during installation"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ "autoConfigPreferences":{
+ "startServerAfterConfiguration": true,
+ "consoleLogOutput": {
+ "show": "all",
+ "pauseOnErrorSeconds": 10
+ }
+ }
+}
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/defaults/main.yml
new file mode 100644
index 00000000..e3762e74
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/defaults/main.yml
@@ -0,0 +1,3 @@
+---
+show_help: true
+safe_notes_id: "SAFE.IDS"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/meta/main.yml
new file mode 100644
index 00000000..f934d06a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_cross_certify
+ author: MarkProminic
+ description: Cross Certifies a provided Notes Safe Id
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/tasks/main.yml
new file mode 100755
index 00000000..a3dea925
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_cross_certify/tasks/main.yml
@@ -0,0 +1,88 @@
+---
+-
+ name: "Ensuring Domino is up"
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+
+-
+ name: "Checking if ID to Cross Certify is available at /safe-id-to-cross-certify/{{ safe_notes_id }}"
+ register: cross_certify_check
+ ansible.builtin.stat:
+ path: "/safe-id-to-cross-certify/{{ safe_notes_id }}"
+ get_md5: false
+
+-
+ name: Pause for 45 seconds to let Domino fully start
+ when: cross_certify_check.stat.exists
+ ansible.builtin.pause:
+ seconds: 45
+
+-
+ name: "Cross Certifying /safe-id-to-cross-certify/{{ safe_notes_id }}"
+ become_user: "{{ service_user }}"
+ become: true
+ register: cross_certify_commands
+ when: cross_certify_check.stat.exists
+ ansible.builtin.shell: >
+ source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh &&
+ source {{ service_home_dir }}/.bashrc &&
+ source {{ service_home_dir }}/.bash_profile &&
+ $JAVA_HOME/bin/java
+ -Dapp.properties.file={{ service_home_dir }}/CrossCertifyNotesID.properties
+ -jar CrossCertifyNotesID.jar /safe-id-to-cross-certify/{{ safe_notes_id }}
+ args:
+ executable: /bin/bash
+ chdir: "{{ service_home_dir }}"
+
+-
+ name: "Waiting until the Cross Certification JAR writes to CrossCertifyNotesID.out"
+ when: cross_certify_check.stat.exists
+ ansible.builtin.wait_for:
+ path: "{{ completed_dir }}/CrossCertifyNotesID.out"
+
+-
+ name: "Outputting available Help Text"
+ when: cross_certify_check.stat.exists
+ ansible.builtin.debug:
+ var: cross_certify_commands.stdout_lines
+
+-
+ name: "Running Check database to populate IDs into ID Vault"
+ become: true
+ become_user: "{{ service_user }}"
+ ansible.builtin.expect:
+ chdir: "{{ service_home_dir }}"
+ command: >
+ /bin/bash -c "
+ . {{ service_home_dir }}/.bash_profile &&
+ . {{ service_home_dir }}/.bashrc &&
+ . {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh &&
+ java -jar CheckDatabase.jar {{ settings.hostname }}.{{ settings.domain }}/{{ domino_organization }} names.nsf"
+ responses:
+ 'Enter password \(press the Esc key to abort\): ': "{{ domino_admin_notes_id_password }}"
+
+-
+ name: "Stopping Domino for Changes to take effect"
+ become: true
+ when: cross_certify_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+ register: domino_cc_service_details_stop
+ until: domino_cc_service_details_stop.state == "stopped"
+ retries: 3
+ delay: 5
+
+-
+ name: "Starting Domino for Changes to take effect"
+ when: cross_certify_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+ register: domino_cc_service_details_start
+ retries: 3
+ delay: 5
+ until: domino_cc_service_details_start.state == "started"
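Should you need to repeat the cross-certification outside of Ansible, the tasks above reduce to roughly the following, run as the service user from its home directory. This is a hedged sketch: file names follow the role defaults (`safe_notes_id: SAFE.IDS`, the templated `CrossCertifyNotesID.properties`), so adjust them if you have overridden the variables.

```bash
# Manual equivalent of the cross-certification task above (not part of the role itself).
source ~/.sdkman/bin/sdkman-init.sh
java -Dapp.properties.file=$HOME/CrossCertifyNotesID.properties \
     -jar CrossCertifyNotesID.jar /safe-id-to-cross-certify/SAFE.IDS
# On success the tool touches its configured output.file (CrossCertifyNotesID.out),
# which is the marker the wait_for task above polls for.
```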
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/defaults/main.yml
new file mode 100644
index 00000000..c3b17e7c
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/defaults/main.yml
@@ -0,0 +1,4 @@
+---
+genesis_debug: false
+genesis_version: 0.6.18
+genesis_jar: Genesis-0.6.18.jar
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/files/Genesis-0.6.18.jar b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/files/Genesis-0.6.18.jar
new file mode 100644
index 00000000..9049babf
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/files/Genesis-0.6.18.jar
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1cc8635e861fed95566eeff3836ff37ab3e3f62ec8460db5def9e5a73c7795d
+size 73664
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/meta/main.yml
new file mode 100644
index 00000000..7c086aee
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_genesis
+ author: MarkProminic
+ description: Install and Enable Genesis
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/tasks/main.yml
new file mode 100755
index 00000000..56bae690
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/tasks/main.yml
@@ -0,0 +1,79 @@
+---
+-
+ name: "Checking if Genesis is installed: {{ genesis_version }}"
+ register: genesis_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/genesis_installed_check"
+ get_md5: false
+
+-
+ name: "Creating installation directories for Genesis"
+ when: not genesis_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ with_items:
+ - "{{ domino_home_dir }}/JavaAddin/Genesis"
+ - "{{ domino_home_dir }}/JavaAddin/Genesis/json"
+
+-
+ name: "Stopping Domino for Changes to take effect"
+ become: true
+ when: not genesis_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+ register: domino_service_stopped
+ until: domino_service_stopped.state == "stopped"
+ retries: 3
+ delay: 5
+
+-
+ name: Pause for 15 seconds to let Domino fully shut down
+ when: not genesis_installed_check.stat.exists
+ ansible.builtin.pause:
+ seconds: 15
+
+-
+ name: "Placing Genesis template Configuration file"
+ when: not genesis_installed_check.stat.exists
+ become: true
+ ansible.builtin.copy:
+ mode: '0755'
+ dest: "{{ domino_home_dir }}/JavaAddin/Genesis/{{ genesis_jar }}"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ genesis_jar }}"
+
+-
+ name: "Starting Domino for Changes to take effect"
+ become: true
+ when: not genesis_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+ register: domino_genesis_service_start_details
+ until: domino_genesis_service_start_details.state == "started"
+ retries: 10
+ delay: 10
+
+-
+ name: "Debug"
+ when: ( genesis_debug or debug_all ) and not genesis_installed_check.stat.exists
+ ansible.builtin.debug:
+ var: domino_genesis_service_start_details
+
+-
+ name: "Marking Genesis as installed"
+ when: not genesis_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/genesis_installed_check"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/templates/genesis-test.json b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/templates/genesis-test.json
new file mode 100755
index 00000000..7bb072ca
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis/templates/genesis-test.json
@@ -0,0 +1,86 @@
+{
+ "title": "TEST JSON",
+ "versionjson": "1.0.0",
+ "steps": [
+ {
+ "title": "--- Step 1. Databases examples ---",
+ "databases": [
+ {
+ "action": "create",
+ "title": "BACKUP DOCS",
+ "filePath": "backupdocs4.nsf",
+ "templatePath": "c:\\private\\BackupDocs2.ntf",
+ "sign": true,
+ "replace": true,
+ "ACL": {
+ "roles": [
+ "GrandPoobah",
+ "WorkerBee",
+ "Peon"
+ ],
+ "ACLEntries": [
+ {
+ "name": "Nancy Noaccess",
+ "level": "noAccess",
+ "type": "unspecified",
+ "isPublicReader": true,
+ "isPublicWriter": true
+ },
+ {
+ "name": "Ronnie Reader",
+ "level": "reader",
+ "type": "serverGroup",
+ "canCreatePersonalAgent": true,
+ "canCreatePersonalFolder": true,
+ "canCreateLSOrJavaAgent": true,
+ "canReplicateOrCopyDocuments": true
+ },
+ {
+ "name": "Annie Author",
+ "level": "author",
+ "canCreateDocuments": true,
+ "canDeleteDocuments": true,
+ "type": "mixedGroup"
+ },
+ {
+ "name": "Ed Itor",
+ "level": "editor",
+ "type": "server",
+ "canCreateSharedFolder": true
+ },
+ {
+ "name": "Wolfpack",
+ "level": "designer",
+ "type": "personGroup"
+ },
+ {
+ "name": "Sherlock Holmes/GBR/sherlock",
+ "level": "manager",
+ "type": "person",
+ "canCreateDocuments": true,
+ "canCreateLSOrJavaAgent": true,
+ "canCreatePersonalAgent": true,
+ "canCreatePersonalFolder": true,
+ "canCreateSharedFolder": true,
+ "canDeleteDocuments": true,
+ "canReplicateOrCopyDocuments": true,
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "roles": [
+ "GrandPoobah",
+ "WorkerBee"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ },
+ {
+ "title": "--- Step 2 (final). Completed ---",
+ "messages": [
+ "You have run test.json file successfully"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/defaults/main.yml
new file mode 100644
index 00000000..6d0290b5
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/defaults/main.yml
@@ -0,0 +1,5 @@
+---
+genesis_packages_debug: false
+genesis_packages:
+ - netmonitor
+ - SuperHumanPortal
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/meta/main.yml
new file mode 100644
index 00000000..2342635b
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_genesis_applications
+ author: MarkProminic
+ description: Install Applications and Databases on Domino via Genesis
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/tasks/main.yml
new file mode 100644
index 00000000..fac026db
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_genesis_applications/tasks/main.yml
@@ -0,0 +1,40 @@
+---
+-
+ name: "Checking if Genesis Apps are installed: {{ genesis_packages }}"
+ register: genesis_packages_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/genesis_packages_installed_check"
+ get_md5: false
+
+-
+ name: "Waiting for 15 seconds to let Domino fully startup from previous tasks"
+ when: not genesis_packages_installed_check.stat.exists
+ ansible.builtin.pause:
+ seconds: 15
+
+-
+ name: "Installing Genesis Applications"
+ ansible.builtin.shell: domino cmd "tell genesis install {{ item }}" 20
+ when: not genesis_packages_installed_check.stat.exists
+ become: true
+ args:
+ executable: "/bin/bash"
+ register: domino_genesis_applications
+ with_items:
+ - "{{ genesis_packages }}"
+
+-
+ name: "Debugging Genesis Application Installation"
+ when: ( genesis_packages_debug or debug_all ) and not genesis_packages_installed_check.stat.exists
+ ansible.builtin.debug:
+ msg: "{{ domino_genesis_applications }}"
+
+-
+ name: "Marking all Genesis packages as installed"
+ when: not genesis_packages_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/genesis_packages_installed_check"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/defaults/main.yml
new file mode 100644
index 00000000..ff01c8da
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/defaults/main.yml
@@ -0,0 +1,49 @@
+---
+domino_user: domino
+domino_group: domino
+domino_home_dir: /local/notesdata
+domino_user_soft_limit: "60000"
+domino_user_hard_limit: "80000"
+kyr_cert: keyfile.kyr
+id_vault_name: DemoVault
+id_vault_password: IDVaultPassword
+id_vault_help_text: Please create an issue in the DominoVagrant Issues Page for help!
+domino_admin_user_first_name: Demo
+domino_admin_user_last_name: Admin
+domino_admin_user_id: demo-user.id
+domino_admin_notes_id_password: "password"
+domino_dev_user_first_name: Dev
+domino_dev_user_last_name: User
+domino_dev_user_short_name: DevUser
+domino_dev_user_id: dev-user.id
+domino_dev_user_id_password: "password"
+domino_major_version: "12"
+domino_minor_version: "0"
+domino_patch_version: "2"
+domino_server_installer_tar: "Domino_12.0.2_Linux_English.tar"
+domino_notesini_servertasks: "HTTP,nomad"
+domino_installer_fixpack_install: false
+domino_fixpack_version: FP1
+domino_server_fixpack_tar: "Domino_1201FP1_Linux.tar"
+domino_installer_hotfix_install: false
+domino_hotfix_version: HF50
+domino_server_hotfix_tar: "1201HF50-linux64.tar"
+domino_installer_base_url: "https://mydomain.com"
+domino_installer_url_user: "SomeHTTPUser@mydomain.com"
+domino_install_dir: /opt/hcl/domino/notes/latest/linux
+domino_install_basedir: /opt/hcl/domino
+domino_https_enabled: 1
+domino_server_clustermates: 0
+domino_organization: STARTcloud
+domino_countrycode: null
+domino_install_port_forwards:
+ -
+ guest: 442
+ url: "domino"
+ -
+ guest: 82
+ url: "domino"
+ -
+ guest: 1352
+ url: "domino"
+domino_install_proxy_url: "{{ domino_install_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/meta/main.yml
new file mode 100644
index 00000000..e23164c9
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_install
+ author: MarkProminic
+ description: Install Domino and Fixpacks from local or remote source
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/tasks/main.yml
new file mode 100755
index 00000000..561edd85
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/tasks/main.yml
@@ -0,0 +1,227 @@
+---
+-
+ name: "Creating Domino installation directories"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/domino"
+ - "{{ installer_dir }}/domino/core"
+ - "{{ installer_dir }}/domino/fixpack"
+ - "{{ installer_dir }}/domino/hotfix"
+
+-
+ name: "Checking if the Domino Installer archive is at /domino/archives/{{ domino_server_installer_tar }}"
+ register: domino_server_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/domino/archives/{{ domino_server_installer_tar }}"
+ get_md5: false
+
+-
+ name: "Checking if Domino has been installed"
+ register: domino_server_installed
+ ansible.builtin.stat:
+ path: "{{ domino_home_dir }}/notes.ini"
+ get_md5: false
+
+-
+ name: "Downloading Domino from {{ domino_installer_base_url }}"
+ register: domlsresult
+ until: "domlsresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/ND{{ domino_major_version }}/{{ domino_server_installer_tar }}"
+ dest: "{{ installer_dir }}/domino/archives/{{ domino_server_installer_tar }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not domino_server_installer_check.stat.exists and not domino_server_installed.stat.exists
+
+-
+ name: "Extracting Domino from {{ domino_server_installer_tar }}"
+ when: not domino_server_installed.stat.exists
+ ansible.builtin.unarchive:
+ src: "{{ installer_dir }}/domino/archives/{{ domino_server_installer_tar }}"
+ dest: "{{ installer_dir }}/domino/core"
+ creates: "{{ installer_dir }}/domino/core/linux64"
+ remote_src: true
+
+-
+ name: "Checking if the Domino fixpack installer archive is at /domino/archives/{{ domino_server_fixpack_tar }}"
+ register: fixpack_archive
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/domino/archives/{{ domino_server_fixpack_tar }}"
+ get_md5: false
+
+-
+ name: "Checking if the Domino fixpack extracted installer is at /domino/fixpack/linux64"
+ register: fixpack_extracted
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/domino/fixpack/linux64"
+ get_md5: false
+
+-
+ name: "Checking if the Domino fixpack has been installed"
+ register: fixpack_installed
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/fpinstalled"
+ get_md5: false
+
+-
+ name: "Downloading Domino Fixpack from {{ domino_installer_base_url }}"
+ register: fplsresult
+ until: "fplsresult is not failed"
+ retries: 3
+ when: not fixpack_archive.stat.exists and not fixpack_extracted.stat.exists and domino_installer_fixpack_install and not fixpack_installed.stat.exists
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/ND{{ domino_major_version }}/{{ domino_server_fixpack_tar }}"
+ dest: "{{ installer_dir }}/domino/archives/{{ domino_server_fixpack_tar }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+
+-
+ name: "Extracting Domino fixpack from {{ domino_server_fixpack_tar }}"
+ when: domino_installer_fixpack_install and not fixpack_extracted.stat.exists and not fixpack_installed.stat.exists
+ ansible.builtin.unarchive:
+ src: "{{ installer_dir }}/domino/archives/{{ domino_server_fixpack_tar }}"
+ dest: "{{ installer_dir }}/domino/fixpack"
+ creates: "{{ installer_dir }}/domino/fixpack/linux64"
+ remote_src: true
+
+-
+ name: "Checking if the Domino hotfix installer archive is at /domino/archives/{{ domino_server_hotfix_tar }}"
+ register: hotfix_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/domino/archives/{{ domino_server_hotfix_tar }}"
+ get_md5: false
+
+-
+ name: "Checking if the Domino hotfix extracted installer is at /domino/hotfix/domino"
+ register: hotfix_extracted
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/domino/hotfix/domino"
+ get_md5: false
+
+-
+ name: "Checking if the Domino hotfix has been installed"
+ register: hotfix_installed
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/hfinstalled"
+ get_md5: false
+
+-
+ name: "Downloading Domino hotfix from {{ domino_installer_base_url }}"
+ register: hflsresult
+ until: "hflsresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/ND{{ domino_major_version }}/{{ domino_server_hotfix_tar }}"
+ dest: "{{ installer_dir }}/domino/archives/{{ domino_server_hotfix_tar }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not hotfix_installer_check.stat.exists and not hotfix_extracted.stat.exists and domino_installer_hotfix_install and not hotfix_installed.stat.exists
+
+-
+ name: "Extracting Domino Hotfix from {{ domino_server_hotfix_tar }}"
+ when: domino_installer_hotfix_install and not hotfix_extracted.stat.exists and not hotfix_installed.stat.exists
+ ansible.builtin.unarchive:
+ src: "{{ installer_dir }}/domino/archives/{{ domino_server_hotfix_tar }}"
+ dest: "{{ installer_dir }}/domino/hotfix"
+ creates: "{{ installer_dir }}/domino/hotfix/domino"
+ remote_src: true
+
+-
+ name: "Creating the group {{ domino_group }}"
+ ansible.builtin.group:
+ name: "{{ domino_group }}"
+ state: present
+
+-
+ name: "Adding user group: {{ domino_group }}"
+ ansible.builtin.user:
+ name: "{{ domino_user }}"
+ shell: /bin/sh
+ groups: "{{ domino_group }}"
+ home: "{{ domino_home_dir }}"
+
+-
+ name: "Adding soft nofile limits in limits.conf for: {{ domino_user }}"
+ community.general.pam_limits:
+ domain: "{{ domino_user }}"
+ limit_type: soft
+ limit_item: nofile
+ value: "{{ domino_user_soft_limit }}"
+
+-
+ name: "Adding hard nofile limits to limits.conf for: {{ domino_user }}"
+ community.general.pam_limits:
+ domain: "{{ domino_user }}"
+ limit_type: hard
+ limit_item: nofile
+ value: "{{ domino_user_hard_limit }}"
+
+-
+ name: "Adding Domino silent install response file"
+ when: not domino_server_installed.stat.exists
+ ansible.builtin.template:
+ dest: "{{ installer_dir }}/domino/installer.properties"
+ mode: a+x
+ src: "installer.properties.j2"
+
+-
+ name: "Installing Domino"
+ ansible.builtin.command: "{{ item }}"
+ become: true
+ become_user: root
+ when: not domino_server_installed.stat.exists
+ args:
+ chdir: "{{ installer_dir }}/domino/core/linux64"
+ creates: "{{ domino_home_dir }}/notes.ini"
+ with_items:
+ - "bash ./install -f {{ installer_dir }}/domino/installer.properties -i silent"
+
+-
+ name: "Configuring fixpack installer {{ domino_fixpack_version }}"
+ when: domino_installer_fixpack_install and not fixpack_installed.stat.exists
+ ansible.builtin.lineinfile:
+ path: "{{ installer_dir }}/domino/fixpack/linux64/domino/script.dat"
+ line: "installation_type = 2"
+
+-
+ name: "Installing Domino fixpack {{ domino_fixpack_version }}"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: root
+ environment:
+ NUI_NOTESDIR: "{{ domino_install_basedir }}/"
+ args:
+ chdir: "{{ installer_dir }}/domino/fixpack/linux64/domino"
+ creates: "{{ completed_dir }}/fpinstalled"
+ when: domino_installer_fixpack_install and not fixpack_installed.stat.exists
+ with_items:
+ - "bash ./install -script ./script.dat && touch {{ completed_dir }}/fpinstalled"
+
+-
+ name: "Configuring hotfix installer {{ domino_hotfix_version }}"
+ when: domino_installer_hotfix_install and not hotfix_installed.stat.exists
+ ansible.builtin.lineinfile:
+ path: "{{ installer_dir }}/domino/hotfix/linux64/script.dat"
+ line: "installation_type = 2"
+
+-
+ name: "Installing Domino Hotfix {{ domino_hotfix_version }}"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: root
+ environment:
+ NUI_NOTESDIR: "{{ domino_install_basedir }}/"
+ args:
+ chdir: "{{ installer_dir }}/domino/hotfix/linux64"
+ creates: "{{ completed_dir }}/hfinstalled"
+ executable: "/bin/bash"
+ when: domino_installer_hotfix_install and not hotfix_installed.stat.exists
+ with_items:
+ - "bash ./install -script ./script.dat && touch {{ completed_dir }}/hfinstalled"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/templates/installer.properties.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/templates/installer.properties.j2
new file mode 100755
index 00000000..11f8462a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_install/templates/installer.properties.j2
@@ -0,0 +1,149 @@
+# Fri Feb 12 00:07:24 UTC 2021
+# Replay feature output
+# ---------------------
+# This file was built by the Replay feature of InstallAnywhere.
+# It contains variables that were set by Panels, Consoles or Custom Code.
+
+
+
+#Choose Install Program Files Directory
+#--------------------------------------
+USER_INSTALL_DIR={{ domino_install_basedir }}
+
+#Select Partition
+#----------------
+IA_IS_PARTITION=0
+
+#Choose Install Data Files Directory
+#-----------------------------------
+USER_INSTALL_DATA_DIR={{ domino_home_dir }}
+
+#Enter User Name
+#---------------
+IA_USERNAME={{ domino_user }}
+
+#Enter Group Name
+#----------------
+IA_GROUPNAME={{ domino_group }}
+
+#Number of Partition Servers
+#---------------------------
+
+#Choose Install Data Files Directory Partition Server 1
+#------------------------------------------------------
+USER_MAGIC_FOLDER_1={{ domino_home_dir }}
+
+#Partitioned Number 1
+#--------------------
+
+#UserName 1
+#----------
+
+#GroupName 1
+#-----------
+
+#Choose Install Data Files Directory Partition Server 2
+#------------------------------------------------------
+USER_MAGIC_FOLDER_2=
+
+#Partitioned Number 2
+#--------------------
+
+#UserName 2
+#----------
+
+#GroupName 2
+#-----------
+
+#Choose Install Data Files Directory Partition Server 3
+#------------------------------------------------------
+USER_MAGIC_FOLDER_3=
+
+#Partitioned Number 3
+#--------------------
+
+#UserName 3
+#----------
+
+#GroupName 3
+#-----------
+
+#Choose Install Data Files Directory Partition Server 4
+#------------------------------------------------------
+USER_MAGIC_FOLDER_4=
+
+#Partitioned Number 4
+#--------------------
+
+#UserName 4
+#----------
+
+#GroupName 4
+#-----------
+
+#Choose Install Data Files Directory Partition Server 5
+#------------------------------------------------------
+USER_MAGIC_FOLDER_5=
+
+#Partitioned Number 5
+#--------------------
+
+#UserName 5
+#----------
+
+#GroupName 5
+#-----------
+
+#Choose Install Data Files Directory Partition Server 6
+#------------------------------------------------------
+USER_MAGIC_FOLDER_6=
+
+#Partitioned Number 6
+#--------------------
+
+#UserName 6
+#----------
+
+#GroupName 6
+#-----------
+
+#Choose Install Data Files Directory Partition Server 7
+#------------------------------------------------------
+USER_MAGIC_FOLDER_7=
+
+#Partitioned Number 7
+#--------------------
+
+#UserName 7
+#----------
+
+#GroupName 7
+#-----------
+
+#Choose Install Data Files Directory Partition Server 8
+#------------------------------------------------------
+USER_MAGIC_FOLDER_8=
+
+#Partitioned Number 8
+#--------------------
+
+#UserName 8
+#----------
+
+#GroupName 8
+#-----------
+
+#
+#
+USER_INPUT_CONSOLE_RESULTS=\"Manual\",\"\",\"\"
+USER_INPUT_RESULT_1=\"NO\"
+
+#Choose Install Set
+#------------------
+CHOSEN_FEATURE_LIST=Program Files,DataFiles,Domino Enterprise Connection Services,License,iNotes,Domino Directory Sync Services,Performance Monitoring,OS Integration,Resource Modeling Engine,Billing Support,Clustering Support,Optional Network Drivers,Symbols Files,Java Support,Required Templates,Administration Templates,Optional Templates,Certificate Management,Readme Files,Dojo,Xpages,Web Services Data Files,Enterprise Server Files,Help
+CHOSEN_INSTALL_FEATURE_LIST=Program Files,DataFiles,Domino Enterprise Connection Services,License,iNotes,Domino Directory Sync Services,Performance Monitoring,OS Integration,Resource Modeling Engine,Billing Support,Clustering Support,Optional Network Drivers,Symbols Files,Java Support,Required Templates,Administration Templates,Optional Templates,Certificate Management,Readme Files,Dojo,Xpages,Web Services Data Files,Enterprise Server Files,Help
+CHOSEN_INSTALL_SET=Enterprise
+
+#Install
+#-------
+-fileOverwrite_{{ domino_install_basedir }}/notes/11000100/linux/_HCL Domino_installation/Change HCL Domino Installation.lax=Yes
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/defaults/main.yml
new file mode 100644
index 00000000..074a2aa0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+show_help: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/files/ExampleServlet.class b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/files/ExampleServlet.class
new file mode 100755
index 0000000000000000000000000000000000000000..1bce6a8c8bdb18afacd64ac9f3eed0dbf5311481
GIT binary patch (literal 1316; binary contents of ExampleServlet.class omitted)
[truncated: the diff header and opening lines of the Java tools build script are missing here; the script resumes below, writing ./manifest.txt]
+echo "Class-Path: ./Notes.jar $NOTES_JAR" >> ./manifest.txt
+
+
+#############
+
+
+echo "Building and creating a runnable JAR for: CheckNotesUser.java "
+
+javac -verbose -source 1.8 -target 1.8 -d /home/vagrant/dist-built -classpath $NOTES_JAR /home/vagrant/dist-src/net/prominic/domino/vagrant/CheckNotesUser.java
+
+cp ./manifest.txt ./manifest-temp.txt
+echo "Main-Class: net.prominic.domino.vagrant.CheckNotesUser" >> ./manifest-temp.txt
+
+jar -cvfm CheckNotesUser2.jar ./manifest-temp.txt -C /home/vagrant/dist-built/ .
+
+
+#############
+
+
+echo "Building and creating a runnable JAR for: CreateNamesDatabase.java "
+
+javac -verbose -source 1.8 -target 1.8 -d /home/vagrant/dist-built -classpath $NOTES_JAR /home/vagrant/dist-src/net/prominic/domino/vagrant/CreateNamesDatabase.java
+
+cp ./manifest.txt ./manifest-temp.txt
+echo "Main-Class: net.prominic.domino.vagrant.CreateNamesDatabase" >> ./manifest-temp.txt
+
+jar -cvfm CreateNamesDatabase2.jar ./manifest-temp.txt -C /home/vagrant/dist-built/ .
+
+
+#############
+
+echo "Building and creating a runnable JAR for: CheckDatabase.java "
+
+javac -verbose -source 1.8 -target 1.8 -d /home/vagrant/dist-built -classpath $NOTES_JAR /home/vagrant/dist-src/net/prominic/domino/vagrant/CheckDatabase.java
+
+cp ./manifest.txt ./manifest-temp.txt
+echo "Main-Class: net.prominic.domino.vagrant.CheckDatabase" >> ./manifest-temp.txt
+
+jar -cvfm CheckDatabase2.jar ./manifest-temp.txt -C /home/vagrant/dist-built/ .
+
+
+
+#############
+
+#https://www.codejava.net/java-core/tools/using-jar-command-examples
+echo "View one of the JARs..."
+
+jar tf CheckNotesUser2.jar
+
+echo "Attempting to run one of the new builds..."
+
+cp CheckNotesUser2.jar {{ service_home_dir }}
+
+cd {{ service_home_dir }}
+
+java -jar ./CheckNotesUser2.jar
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/templates/build-servlet-example.bsh b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/templates/build-servlet-example.bsh
new file mode 100755
index 00000000..72f566b0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_app_example/templates/build-servlet-example.bsh
@@ -0,0 +1,17 @@
+#!/bin/bash
+
+mkdir -p /home/vagrant/dist-built
+
+
+#############
+
+
+echo "Building servlet: ExampleServlet.java "
+
+javac -verbose -source 1.8 -target 1.8 -d /home/vagrant/dist-built -classpath {{ domino_install_basedir }}/notes/latest/linux/jvm/lib/ext/:{{ domino_install_dir }}/ndext/jsdk.jar /home/vagrant/dist-src/ExampleServlet.java
+
+sudo mkdir -p {{ domino_home_dir }}/domino/servlet
+
+sudo cp /home/vagrant/dist-built/ExampleServlet.class {{ domino_home_dir }}/domino/servlet
+
+sudo chown -R domino:domino {{ domino_home_dir }}/domino/servlet
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/defaults/main.yml
new file mode 100644
index 00000000..ddee2148
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/defaults/main.yml
@@ -0,0 +1,3 @@
+---
+show_help: true
+java_helpers_install: false
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/meta/main.yml
new file mode 100644
index 00000000..a5b6d16a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_java_config
+ author: MarkProminic
+ description: Configure java for use with Domino
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/tasks/main.yml
new file mode 100755
index 00000000..cef55db1
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/tasks/main.yml
@@ -0,0 +1,30 @@
+---
+-
+ name: "Placing templated notes.ini into {{ service_home_dir }}"
+ become: true
+ ansible.builtin.template:
+ dest: "{{ service_home_dir }}/notes.ini"
+ owner: "{{ service_user }}"
+ group: "{{ service_group }}"
+ mode: "0744"
+ src: notes.ini.j2
+
+-
+ name: "Exporting Domino LD_LIBRARY_PATH"
+ ansible.builtin.lineinfile:
+ mode: '0644'
+ path: "{{ service_home_dir }}/.bash_profile"
+ create: true
+ line: "export LD_LIBRARY_PATH={{ domino_install_dir }}/"
+ insertbefore: EOF
+
+-
+ name: "Copying templates necessary for standalone Notes Java app"
+ ansible.builtin.copy:
+ mode: '0644'
+ src: "{{ item }}"
+ dest: "{{ service_home_dir }}"
+ remote_src: true
+ owner: "{{ service_user }}"
+ with_items:
+ - "{{ domino_home_dir }}/pernames.ntf"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/templates/notes.ini.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/templates/notes.ini.j2
new file mode 100755
index 00000000..842b1303
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_config/templates/notes.ini.j2
@@ -0,0 +1,46 @@
+[Notes]
+
+: must be the path to this notes.ini file
+Directory={{ service_home_dir }}
+
+: key file to use
+: This needs to be replaced with the ID file you upload in Vagrantfile
+KeyFilename={{ domino_home_dir }}/ids/{{ domino_admin_user_id }}
+
+: timezone to use
+Timezone=6
+
+: 1 means daylight savings time enabled, 0 means disabled
+DST=1
+
+
+:===== should NOT be necessary to change anything below here =====
+
+: 1 means client install, 2 means server install
+KitType=1
+
+: this is with port encryption I hope:
+TCPIP=TCP,0,15,0,,45056,
+
+: this is without port encryption I hope:
+: TCPIP=TCP, 0, 15, 0
+
+: these two settings are apparently required to enable IP as the default
+$$HasLANPort=1
+Ports=TCPIP
+
+: this may have something to do with the time delay in ms allowed to remote IP host
+DDETimeout=10
+
+: I guess this means transaction logging is off
+TRANSLOG_AutoFixup=1
+TRANSLOG_UseAll=0
+TRANSLOG_Style=0
+TRANSLOG_Performance=2
+TRANSLOG_Status=0
+
+: this gets added back if I remove it, so it must be the default
+DEBUG_BTREE_ERRORS=1
+PhoneLog=2
+Log=log.nsf, 1, 0, 7, 40000
+MailType=0
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/defaults/main.yml
new file mode 100644
index 00000000..83650a77
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/defaults/main.yml
@@ -0,0 +1,3 @@
+---
+show_help: true
+build_utility_jars: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/meta/main.yml
new file mode 100644
index 00000000..6248daca
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_java_tools
+ author: MarkProminic
+ description: Prepare the system to build Java Applications
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/tasks/main.yml
new file mode 100755
index 00000000..a9bd440b
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/tasks/main.yml
@@ -0,0 +1,104 @@
+---
+-
+ name: "Checking if java build tools has been deployed"
+ register: build_tools_deployed
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/javabuildtools"
+ get_md5: false
+
+-
+ name: "Copying Java helper applications"
+ ansible.builtin.copy:
+ mode: '0644'
+ src: "{{ item }}"
+ dest: "{{ service_home_dir }}"
+ remote_src: true
+ owner: "{{ service_user }}"
+ with_items:
+ - "{{ installer_dir }}/domino-java-helpers/JavaTest/CreateNamesDatabase.jar"
+ - "{{ installer_dir }}/domino-java-helpers/JavaTest/Notes.jar"
+ - "{{ installer_dir }}/domino-java-helpers/CreateNamesDatabase.exp"
+ when: java_helpers_install and build_utility_jars and not build_tools_deployed.stat.exists
+
+-
+ name: "Changing file ownership, group and permissions of Java Build Files"
+ when: build_utility_jars and not build_tools_deployed.stat.exists
+ ansible.builtin.file:
+ path: "/vagrant/installers/domino-java-helpers"
+ owner: "{{ service_user }}"
+ group: "{{ service_group }}"
+ mode: '0777'
+ recurse: true
+
+-
+ name: "Creating Templated CrossCertify Java file"
+ when: build_utility_jars and not build_tools_deployed.stat.exists
+ ansible.builtin.template:
+ dest: "{{ installer_dir }}/domino-java-helpers/src/main/java/net/prominic/domino/vagrant/CrossCertifyNotesID.java"
+ mode: "a+x"
+ owner: "{{ service_user }}"
+ group: "{{ service_group }}"
+ src: "CrossCertifyNotesID.java.j2"
+
+-
+ name: "Creating Templated JSON for Names.nsf ACLs"
+ when: build_utility_jars and not build_tools_deployed.stat.exists
+ ansible.builtin.template:
+ dest: "{{ service_home_dir }}/default_cross_certify_acl.json"
+ mode: "a+x"
+ owner: "{{ service_user }}"
+ group: "{{ service_group }}"
+ src: "default_cross_certify_acl.json.j2"
+
+-
+ name: "Creating Templated Cross Certify Properties file"
+ when: build_utility_jars and not build_tools_deployed.stat.exists
+ ansible.builtin.template:
+ dest: "{{ service_home_dir }}/CrossCertifyNotesID.properties"
+ mode: "a+x"
+ owner: "{{ service_user }}"
+ group: "{{ service_group }}"
+ src: "CrossCertifyNotesID.properties.j2"
+
+-
+ name: "Building utility jars and deploying"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: "{{ service_user }}"
+ when: build_utility_jars and not build_tools_deployed.stat.exists
+ args:
+ chdir: "/vagrant/installers/domino-java-helpers"
+ creates: "{{ completed_dir }}/javabuildtools"
+ executable: /bin/bash
+ with_items:
+ - "source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh && gradle clean jarIndividual --stacktrace"
+
+-
+ name: "Marking jars as installed"
+ when: build_utility_jars and not build_tools_deployed.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/javabuildtools"
+
+-
+ name: "Finding build Libs in domino-java-helpers/build/libs/"
+ when: not build_tools_deployed.stat.exists
+ ansible.builtin.find:
+ paths: "{{ installer_dir }}/domino-java-helpers/build/libs/"
+ file_type: file
+ patterns: '*.jar'
+ register: build_jar_list
+
+-
+ name: "Copying Build Libs for {{ service_user }}"
+ when: not build_tools_deployed.stat.exists
+ loop: "{{ build_jar_list.files }}"
+ ansible.builtin.copy:
+ mode: '0755'
+ remote_src: true
+ owner: "{{ service_user }}"
+ src: "{{ item.path }}"
+ dest: "{{ service_home_dir }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/CrossCertifyNotesID.java.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/CrossCertifyNotesID.java.j2
new file mode 100755
index 00000000..cc79631a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/CrossCertifyNotesID.java.j2
@@ -0,0 +1,868 @@
+package net.prominic.domino.vagrant;
+
+import lotus.domino.*;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.util.Date;
+import java.util.Properties;
+import java.util.Vector;
+
+import org.json.JSONArray;
+import org.json.JSONObject;
+import org.json.JSONTokener;
+
+/*
+
+Domino 12 Java classes
+https://help.hcltechsw.com/dom_designer/12.0.0/basic/H_10_NOTES_CLASSES_ATOZ_JAVA.html
+
+Registration.crossCertify
+https://help.hcltechsw.com/dom_designer/12.0.0/basic/H_CROSSCERTIFY_METHOD_JAVA.html
+
+*/
+
+
+public class CrossCertifyNotesID
+{
+ public static final String AUTHORIZED_GROUP = "AutomaticallyCrossCertifiedUsers";
+
+
+ protected static final String DEFAULT_SUCCESS_FILE = "/tmp/CrossCertifyNotesID.out";
+ protected static String successFileName = DEFAULT_SUCCESS_FILE;
+
+ protected static final String DEFAULT_PROPERTIES_FILE = "CrossCertifyNotesID.properties";
+ protected static String dataDirectory = null;
+ protected static String certID = null;
+ protected static String settingsFile = null;
+ protected static String aclTemplate = null;
+ protected static boolean debugMode = true;
+
+ public static void main(String args[])
+ {
+ log("Starting cross-certification tool.");
+
+ FileInputStream fis = null;
+ boolean threadInitialized = false;
+ Session session = null;
+ try {
+
+ // load properties
+ loadProperties();
+
+ // clear the file that indicates success
+ File successFile = new File(successFileName);
+ if (successFile.exists()) {
+ successFile.delete();
+ }
+
+ if (args.length < 1) {
+ throw new Exception("No ID file specified.");
+ }
+ String targetID = args[0]; // TODO: support more files
+
+ // The JSON file used for Domino server setup can also be used for this configuration
+ fis = new FileInputStream(settingsFile);
+ JSONObject json = (JSONObject)new JSONTokener(fis).nextValue();
+
+ // extract the values
+ // TODO: add more validation if it becomes a problem. This code could easily trigger NullPointerExceptions if the format is invalid
+ JSONObject serverSetup = json.getJSONObject("serverSetup");
+ JSONObject serverConfig = serverSetup.getJSONObject("server");
+ String name = serverConfig.getString("name");
+ String org = serverConfig.getString("domainName");
+ String server = name + "/" + org;
+
+ String certPassword = serverSetup.getJSONObject("org").getString("certifierPassword");
+
+ // currently we are using the admin user for actions like this
+ String userPassword = serverSetup.getJSONObject("admin").getString("password");
+
+
+
+ // initialize the session
+ debug("NotesThread.sinitThread()");
+ NotesThread.sinitThread();
+ threadInitialized = true;
+
+ // build the session arguments
+ String[] sessionArgs = null;
+ log("Using default notesID path.");
+ sessionArgs = new String[0];
+
+ //Session session = NotesFactory.createSession("localhost", args, "", "");
+ //Session session = NotesFactory.createSession(null, args, null, null);
+ String sessionServer = null; // local server
+ String sessionUser = null; // default user
+ debug("NotesFactory.createSession");
+ session = NotesFactory.createSession(sessionServer, args, sessionUser, userPassword);
+ log("Running on Notes Version: '" + session.getNotesVersion() + "'.");
+
+ String userName = crossCertify(session, targetID, server, certID, certPassword);
+
+ log( "crossCertifyNotesID() completed.");
+
+ // add the user to an authorized group
+ if (null != userName) {
+ addUserToAuthorizedGroup(session, userName, server, userPassword);
+ // This is required to fix the "Error validating execution rights" error. The above group does not work as expected
+ addUserAsServerAdmin(session, userName, server);
+ }
+ else {
+ log("Could not detect user from safe ID.");
+ }
+
+
+ log("");
+ log("## All operations completed successfully. ##");
+ // Create an output file to indicate that the action was successful.
+ // This is needed because if there is a SIGSEGV or NSD, the Java application does not return exit code 0
+ successFile.createNewFile();
+ }
+ catch (Throwable t) {
+ log(t);
+ // Exit with a non-zero status code so that Vagrant will detect the problem.
+ System.exit(1);
+ }
+ finally {
+ try {
+ if (null != session) {
+ debug("session.recycle()");
+ session.recycle();
+ }
+ if (threadInitialized) {
+ debug("NotesThread.stermThread()");
+ NotesThread.stermThread();
+ }
+ if (null != fis) { fis.close(); }
+ }
+ catch (Exception ex) {
+ log(ex);
+ }
+ }
+ }
+
+ /**
+ * Load the application properties, from the first available source here:
+ *
The file configured by the app.properties.file property (set with -Dapp.properties.file=%file%)
+ *
The default file: ./CrossCertifyNotesID.properties
+ *
Default values defined in this class.
+ *
+ */
+ public static void loadProperties() {
+
+ String propertiesFileName = System.getProperty("app.properties.file");
+ debug ("propertiesFileName='" + propertiesFileName + "'.");
+ if (null == propertiesFileName || propertiesFileName.isEmpty()) {
+ propertiesFileName = DEFAULT_PROPERTIES_FILE;
+ }
+
+ Properties properties = new Properties();
+ File propertiesFile = new File(propertiesFileName);
+ if (propertiesFile.exists()) {
+ log("Loading properties file '" + propertiesFile.getAbsolutePath() + "'.");
+ FileInputStream fis = null;
+ try {
+ fis = new FileInputStream(propertiesFile);
+ properties.load(fis);
+ }
+ catch (Exception ex) {
+ log("Could not load properties file '" + propertiesFile.getAbsolutePath() + "'. Using defaults..." );
+ }
+ finally {
+ if (null != fis) {
+ try {
+ fis.close();
+ }
+ catch (Exception ex) {
+ // ignore
+ }
+ }
+ }
+ }
+ else {
+ log("Properties file '" + propertiesFile.getAbsolutePath() + "' does not exist. Using defaults...");
+ }
+
+ // read the properties
+ dataDirectory = properties.getProperty("data.directory", "/local/notesdata");
+ settingsFile = properties.getProperty("domino.setup.file", dataDirectory + "/setup.json");
+ certID = properties.getProperty("cert.id.file", dataDirectory + "/cert.id");
+ aclTemplate = properties.getProperty("acl.template.file", "default_cross_certify_acl.json");
+ successFileName = properties.getProperty("output.file", DEFAULT_SUCCESS_FILE);
+ String debugStr = properties.getProperty("debug", "false");
+ if (null != debugStr && debugStr.equalsIgnoreCase("true")) {
+ debugMode = true;
+ }
+ else {
+ debugMode = false;
+ }
+
+ }
+
+
+
+ /**
+ * Cross-certify the given targetID for the given server.
+ * @param session the Domino Session
+ * @param targetID the ID to sign
+ * @param server the server to sign against
+ * @param certID the cert ID for server
+ * @param certPassword the password for certID
+ * @return the name of the user that was cross-certified, or null if the user could not be identified
+ * @throws NotesException if an error occurred in the Notes API
+ * @throws Exception if the cross-certification failed
+ */
+ public static String crossCertify(Session session, String targetID, String server, String certID, String certPassword) throws Exception {
+ log("Signing ID: '" + targetID + "'.");
+
+ Registration reg = null;
+ DateTime dt = null;
+ try {
+ debug("session.createRegistration()");
+ reg = session.createRegistration();
+ debug("Registration.setRegistrationServer('" + server + "')");
+ reg.setRegistrationServer( server);
+ debug("Registration.setCertifierIDFile('" + server + "')");
+ reg.setCertifierIDFile( certID);
+
+
+
+ debug("session.createDateTime()");
+ dt = session.createDateTime("Today");
+ dt.setNow();
+ dt.adjustYear(1);
+ reg.setExpiration(dt);
+
+ // NOTE: crossCertify triggers a password check even with an authenticated session, if the ID file has a password
+ // I see this behavior is specifically noted for the recertify method, but not crossCertify: https://help.hcltechsw.com/dom_designer/12.0.0/basic/H_RECERTIFY_METHOD_JAVA.html
+ // Enter the password from the command prompt, or automate it using the "yes" command
+ debug("Registration.crossCertify(...)");
+ if (reg.crossCertify(targetID,
+ certPassword, // certifier password
+ "programmatically cross certified")) // comment field
+ {
+ log("Cross-certification succeeded");
+
+ // Lookup the cross-certification document to check the user name
+ // I haven't found a better way to do this.
+ return getLastCrossCertifiedUser(session, server);
+ }
+ else {
+ throw new Exception("Registration.crossCertify reported failure");
+ }
+ }
+// catch(NotesException e) {
+// log( e.id + " " + e.text);
+// log(e);
+// }
+ finally {
+ try {
+ if (null != dt) { dt.recycle(); }
+ if (null != reg) { reg.recycle(); }
+ }
+ catch (NotesException ex) {
+ log("NotesException on recycle: ");
+ log(ex);
+ }
+ }
+ }
+
+ /**
+ * Get the last cross-certified user in names.nsf on the given server.
+ * I don't see a better way to extract the username from a Notes ID for now.
+ * Note that this method won't throw an Exception.
+ *
+ * @param session the existing session
+ * @param server the target server
+ * @return the username, or null if no valid user was found.
+ */
+ public static String getLastCrossCertifiedUser(Session session, String server) {
+ Database namesDatabase = null;
+ View certView = null;
+ ViewEntryCollection entries = null;
+ try {
+ debug("Session.getDatabase()");
+ namesDatabase = session.getDatabase(server, "names.nsf", false);
+ if (null == namesDatabase || !namesDatabase.isOpen()) {
+ throw new Exception("Could not open names.nsf");
+ }
+
+ debug("namesDatabase.getView()");
+ certView = namesDatabase.getView("($CrossCertByName)");
+ if (null == certView) {
+ throw new Exception("Could not open cross-certificate view.");
+ }
+ debug("certView.refresh()");
+ certView.refresh(); // avoid race conditions on view population
+ debug("certView.setAutoUpdate(false)");
+ certView.setAutoUpdate(false); // avoid updates in the middle of iteration
+
+ debug("certView.getAllEntries()");
+ entries = certView.getAllEntries();
+ debug("ViewEntryCollection.getFirstEntry()");
+ ViewEntry curEntry = entries.getFirstEntry();
+ String userName = null;
+ Date latestDate = null;
+ while (null != curEntry) {
+ Document curDoc = null;
+ DateTime dateTime = null;
+ try {
+ if (curEntry.isDocument()) {
+ debug("ViewEntry.getDocument()");
+ curDoc = curEntry.getDocument();
+ debug("Document.getItemValueString(IssuedTo)");
+ String issuedTo = curDoc.getItemValueString("IssuedTo");
+ dateTime = curDoc.getLastModified();
+
+ if (null == issuedTo || issuedTo.trim().isEmpty()) {
+ debug("Document.getUniversalID()");
+ log("Found cross-certificate document " + curDoc.getUniversalID() + " with no value for IssuedTo.");
+ // Skip
+ }
+ else if (null == latestDate || dateTime.toJavaDate().after(latestDate)) {
+ // this is the new latest document
+ userName = issuedTo;
+ latestDate = dateTime.toJavaDate();
+ }
+
+
+
+ }
+ // not a document
+ }
+ finally {
+ ViewEntry prevEntry = curEntry;
+ debug("ViewEntry.getNextEntry()");
+ curEntry = entries.getNextEntry();
+
+ // cleanup
+ if (null != dateTime) { dateTime.recycle(); }
+ if (null != curDoc) { curDoc.recycle(); }
+ prevEntry.recycle();
+ }
+ }
+
+ if (null == userName || userName.trim().isEmpty()) {
+ return null; // normalize the output
+ }
+ return userName;
+ }
+ catch (Exception ex) {
+ log("Failed to read last cross-certified user: ");
+ log(ex);
+ return null;
+ }
+ finally {
+ try {
+ if (null != entries) {
+ entries.recycle();
+ }
+ if (null != certView) {
+ certView.recycle();
+ }
+ if (null != namesDatabase) {
+ namesDatabase.recycle();
+ }
+ }
+ catch (NotesException ex) {
+ log("Failed to recycle objects: ");
+ log(ex);
+ }
+ }
+ }
+
+ /**
+ * Add the given username to the {@link #AUTHORIZED_GROUP} group on the target server.
+ * @param session the Domino session to use.
+ * @param username the username to add
+ * @param server the target server
+ * @param userPassword the password for the running user (not the above username).
+ */
+ public static void addUserToAuthorizedGroup(Session session, String username, String server, String userPassword) throws NotesException, Exception {
+ log ("Adding user '" + username + "' to authorized user group (" + AUTHORIZED_GROUP + ").");
+ Database namesDatabase = null;
+ View groupView = null;
+ Document groupDoc = null;
+ Vector members = null;
+
+ try {
+
+ debug("session.getDatabase(names.nsf)");
+ namesDatabase = session.getDatabase(server, "names.nsf", false);
+ if (null == namesDatabase || !namesDatabase.isOpen()) {
+ throw new Exception("Could not open names.nsf");
+ }
+
+ debug("namesDatabase.getView(Groups)");
+ groupView = namesDatabase.getView("Groups");
+ if (null == groupView) {
+ throw new Exception("Could not open group view.");
+ }
+
+ debug("groupView.getDocumentByKey('" + AUTHORIZED_GROUP + "'");
+ groupDoc = groupView.getDocumentByKey(AUTHORIZED_GROUP, true);
+ if (null == groupDoc) {
+ throw new Exception("Could not find expected group document: '" + AUTHORIZED_GROUP + "'.");
+ }
+
+ debug("groupDoc.getItemValue(Members)");
+ members = groupDoc.getItemValue("Members");
+ if (null == members || members.size() == 0 ||
+ (members.size() == 1 && members.get(0).toString().trim().isEmpty())) { // default blank entry
+ members = new Vector(); // normalize
+ }
+ members.add(username);
+ debug("groupDoc.replaceItemValue(Members)");
+ groupDoc.replaceItemValue("Members", members);
+
+ // computeWithForm - is this required?
+ debug("groupDoc.computeWithForm()");
+ groupDoc.computeWithForm(false, true);
+
+ // save
+ debug("groupDoc.save()");
+ if (!groupDoc.save(true)) { // force the save
+ throw new Exception("Could not update group document.");
+ }
+ else {
+ log("Authorized group has been updated.");
+ }
+
+ // force refresh of ($ServerAccess)
+ View refreshView = null;
+ String viewName = "($ServerAccess)";
+ try {
+ debug("namesDatabase.getView('" + viewName + "'");
+ refreshView = namesDatabase.getView(viewName);
+ if (null != refreshView) {
+ debug("refreshView.refresh()");
+ refreshView.refresh();
+ }
+ else {
+ log("Could not open view '" + viewName + "'.");
+ }
+ }
+ catch (Exception ex) {
+ log("Could not refresh view '" + viewName + "'.");
+ }
+ finally {
+ if (null != refreshView) { refreshView.recycle(); }
+ }
+
+ session.recycle(members);
+ }
+ finally {
+ if (null != members) { session.recycle(members);}
+ if (null != groupDoc) { groupDoc.recycle();}
+ if (null != groupView) { groupView.recycle();}
+ if (null != namesDatabase) { namesDatabase.recycle();}
+ }
+
+ }
+
+ /**
+ * Add the given username to the server document, with enough access for DXL Importer and agents to work.
+ * TODO: This is massive overkill - it should be controlled by the group instead.
+ * However, the group was not working in our recent tests, so I'm using this as a workaround for now.
+ * @param session the Domino session to use
+ * @param username the username to add
+ * @param server the target server
+ */
+ public static void addUserAsServerAdmin(Session session, String username, String server) throws NotesException, Exception {
+ log ("Adding user '" + username + "' to server document as authorized user.");
+ Database namesDatabase = null;
+ View serverView = null;
+ Document serverDoc = null;
+ Name nameObj = null;
+
+ try {
+
+ debug("session.getDatabase(names.nsf)");
+ namesDatabase = session.getDatabase(server, "names.nsf", false);
+ if (null == namesDatabase || !namesDatabase.isOpen()) {
+ throw new Exception("Could not open names.nsf");
+ }
+
+ debug("namesDatabase.getView($Servers)");
+ serverView = namesDatabase.getView("($Servers)");
+ if (null == serverView) {
+ throw new Exception("Could not open server view.");
+ }
+
+ debug("session.createName()");
+ nameObj = session.createName(server);
+ debug("nameObj.getCanonical()");
+ String key = nameObj.getCanonical();
+
+ debug("serverView.getDocumentByKey('" + key + "'");
+ serverDoc = serverView.getDocumentByKey(key, true);
+ if (null == serverDoc) {
+ throw new Exception("Could not find expected server document: '" + server + "'.");
+ }
+
+ // Track whether any of the fields are updated.
+ // This will support rerunning the agent for the same ID.
+ boolean updated = false;
+
+ // update the security fields
+ updated = updateServerSecurityField(serverDoc, "FullAdmin", username) ? true : updated;
+ updated = updateServerSecurityField(serverDoc, "CreateAccess", username) ? true : updated;
+ updated = updateServerSecurityField(serverDoc, "ReplicaAccess", username) ? true : updated;
+ updated = updateServerSecurityField(serverDoc, "UnrestrictedList", username) ? true : updated;
+ updated = updateServerSecurityField(serverDoc, "OnBehalfOfInvokerLst", username) ? true : updated;
+ updated = updateServerSecurityField(serverDoc, "LibsLst", username) ? true : updated;
+ updated = updateServerSecurityField(serverDoc, "RestrictedList", username) ? true : updated;
+ // Updating AllowAccess breaks all access to the server, including the admin user. I suspect I need to set additional related fields.
+ //updateServerSecurityField(serverDoc, "AllowAccess", server);
+
+ // computeWithForm - is this required?
+ // This fails with an error like:
+ // [018455:000002-00007FCD65B93700] ECL Alert Result: Code signed by Domino Template Development/Domino was prevented from executing with the right: Access to current database.NotesException: Operation aborted at your request
+ //serverDoc.computeWithForm(false, true);
+
+ // save
+ if (!updated) {
+ // If the document is not updated, saving will trigger an error
+ log("No server document updates required.");
+ }
+ else if (!serverDoc.save(true)) { // force the save
+ throw new Exception("Could not update server document.");
+ }
+ else {
+ log("Server doc has been updated.");
+ }
+
+ // also update the ACL to give the user access to configure the server.
+ updateNamesACL(namesDatabase, username);
+ }
+ finally {
+ if (null != nameObj) { nameObj.recycle();}
+ if (null != serverDoc) { serverDoc.recycle();}
+ if (null != serverView) { serverView.recycle();}
+ if (null != namesDatabase) { namesDatabase.recycle();}
+ }
+
+ }
+
+ /**
+ * Add the given userName to the indicated item in the given document.
+ * Handles duplicates and empty existing values.
+ * @param serverDoc the server document
+ * @param itemName the name of the item/field
+ * @param userName the name of the user
+ * @return true if the item was updated, false if the user was already listed
+ */
+ protected static boolean updateServerSecurityField(Document serverDoc, String itemName, String userName) throws NotesException {
+ Vector members = null;
+ try {
+
+ members = serverDoc.getItemValue(itemName);
+ if (null == members || members.size() == 0 ||
+ (members.size() == 1 && members.get(0).toString().trim().isEmpty())) { // default blank entry
+ members = new Vector(); // normalize
+ }
+
+ if (!members.contains(userName)) {
+ members.add(userName);
+ debug("serverDoc.replaceItemValue('" + itemName + "', " + members + ")");
+ serverDoc.replaceItemValue(itemName, members);
+ return true;
+ }
+ return false;
+
+ }
+ finally {
+ if (null != members) {
+ // recycle the vector in case it contains Domino objects
+ serverDoc.recycle(members);
+ }
+ }
+
+ }
+
+ /**
+ * Add an ACL entry for the user in the given database.
+ * The user will have Manager access with all roles.
+ * This was needed because the user was not properly recognized in AutomaticallyCrossCertifiedUsers, so this could be disabled once that bug is fixed.
+ * @param database the database to update. Expected to be names.nsf
+ * @param userName the username to add, in canonical format
+ * @return true if the ACL was updated
+ */
+ protected static boolean updateNamesACL(Database database, String userName) throws NotesException, Exception {
+ ACL acl = null;
+ FileInputStream fis = null;
+ boolean updated = false;
+ try {
+ debug("namesDatabase.getACL()");
+ acl = database.getACL();
+
+ fis = new FileInputStream(aclTemplate);
+ JSONObject json = (JSONObject)new JSONTokener(fis).nextValue();
+/* Example JSON:
+{
+ "level": "manager",
+ "type": "person",
+ "canDeleteDocuments": true,
+ "canReplicateOrCopyDocuments": true,
+ "roles": [
+ "GroupCreator",
+ "GroupModifier",
+ "NetCreator",
+ "PolicyCreator",
+ "PolicyModifier",
+ "PolicyReader",
+ "NetModifier ",
+ "ServerCreator",
+ "ServerModifier",
+ "UserCreator",
+ "UserModifier"
+ ]
+}
+*/
+ updated = updateACLFromConfig(acl, json, userName);
+
+ if (!updated) {
+ log("No ACL updates required.");
+ }
+ else {
+ log("ACL updated and saved.");
+ debug("acl.save()");
+ acl.save();
+ }
+
+ }
+ finally {
+ if (null != acl) { acl.recycle(); }
+ if (null != fis) { fis.close(); }
+ }
+
+ return updated;
+
+
+ }
+
+ /**
+ * Update the ACL to match the provided configuration.
+ * Adapted from https://github.com/DominoGenesis/Genesis/blob/bde62b70bcd0fef35c41117a87daedba4b80a6f6/src/main/java/net/prominic/genesis/JSONRules.java#L534-L704
+ * @param acl The names.nsf ACL
+ * @param config The JSON configuration object
+ * @param userName the username whose ACL entry should be created or updated
+ * @return true if any ACL changes were made
+ */
+ private static boolean updateACLFromConfig(ACL acl, JSONObject config, String userName) {
+ boolean toSave = false;
+
+ try {
+ debug("acl.getEntry('" + userName + "')");
+ ACLEntry entry = acl.getEntry(userName);
+
+ // 1. get/create entry (default no access)
+ if (entry == null) {
+ debug("acl.createACLEntry('" + userName + "', LEVEL_NOACCESS)");
+ entry = acl.createACLEntry(userName, ACL.LEVEL_NOACCESS);
+ log(String.format("> ACL: new entry (%s)", userName));
+ toSave = true;
+ }
+
+ // 2. level
+ if (config.has("level")) {
+ String sLevel = (String) config.get("level");
+ int level = ACL.LEVEL_NOACCESS;
+ if ("noAccess".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_NOACCESS;
+ }
+ else if("depositor".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_DEPOSITOR;
+ }
+ else if("reader".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_READER;
+ }
+ else if("author".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_AUTHOR;
+ }
+ else if("editor".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_EDITOR;
+ }
+ else if("designer".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_DESIGNER;
+ }
+ else if("manager".equalsIgnoreCase(sLevel)) {
+ level = ACL.LEVEL_MANAGER;
+ }
+
+ if (entry.getLevel() != level) {
+ debug("aclEntry.setLevel('" + sLevel + "')");
+ entry.setLevel(level);
+ toSave = true;
+ log(String.format(">> ACLEntry: level (%s)", sLevel));
+ }
+ }
+
+ // 3. type
+ if (config.has("type")) {
+ String sType = (String) config.get("type");
+ int type = ACLEntry.TYPE_UNSPECIFIED;
+ if ("unspecified".equalsIgnoreCase(sType)) {
+ type = ACLEntry.TYPE_UNSPECIFIED;
+ }
+ else if("person".equalsIgnoreCase(sType)) {
+ type = ACLEntry.TYPE_PERSON;
+ }
+ else if("server".equalsIgnoreCase(sType)) {
+ type = ACLEntry.TYPE_SERVER;
+ }
+ else if("personGroup".equalsIgnoreCase(sType)) {
+ type = ACLEntry.TYPE_PERSON_GROUP;
+ }
+ else if("serverGroup".equalsIgnoreCase(sType)) {
+ type = ACLEntry.TYPE_SERVER_GROUP;
+ }
+ else if("mixedGroup".equalsIgnoreCase(sType)) {
+ type = ACLEntry.TYPE_MIXED_GROUP;
+ }
+
+ if (entry.getUserType() != type) {
+ debug("aclEntry.setUserType('" + type + "')");
+ entry.setUserType(type);
+ log(String.format(">> ACLEntry: type (%s)", sType));
+ toSave = true;
+ }
+ }
+
+ // 4. canCreateDocuments
+ boolean canCreateDocuments = config.has("canCreateDocuments") && (Boolean) config.get("canCreateDocuments");
+ debug("aclEntry.isCanCreateDocuments()");
+ if (entry.isCanCreateDocuments() != canCreateDocuments) {
+ debug("aclEntry.setCanCreateDocuments('" + canCreateDocuments + "')");
+ entry.setCanCreateDocuments(canCreateDocuments);
+ log(String.format(">> ACLEntry: setCanCreateDocuments (%b)", canCreateDocuments));
+ toSave = true;
+
+ }
+
+ // 5. canDeleteDocuments
+ boolean canDeleteDocuments = config.has("canDeleteDocuments") && (Boolean) config.get("canDeleteDocuments");
+ debug("aclEntry.isCanDeleteDocuments()");
+ if (entry.isCanDeleteDocuments() != canDeleteDocuments) {
+ debug("aclEntry.setCanDeleteDocuments('" + canCreateDocuments + "')");
+ entry.setCanDeleteDocuments(canDeleteDocuments);
+ log(String.format(">> ACLEntry: canDeleteDocuments (%b)", canDeleteDocuments));
+ toSave = true;
+ }
+
+ // 6. canCreatePersonalAgent
+ boolean canCreatePersonalAgent = config.has("canCreatePersonalAgent") && (Boolean) config.get("canCreatePersonalAgent");
+ debug("aclEntry.isCanCreatePersonalAgent()");
+ if (entry.isCanCreatePersonalAgent() != canCreatePersonalAgent) {
+ debug("aclEntry.setCanCreatePersonalAgent('" + canCreatePersonalAgent + "')");
+ entry.setCanCreatePersonalAgent(canCreatePersonalAgent);
+ log(String.format(">> ACLEntry: canCreatePersonalAgent (%b)", canCreatePersonalAgent));
+ toSave = true;
+ }
+
+ // 7. canCreatePersonalFolder
+ boolean canCreatePersonalFolder = config.has("canCreatePersonalFolder") && (Boolean) config.get("canCreatePersonalFolder");
+ debug("aclEntry.isCanCreatePersonalFolder()");
+ if (entry.isCanCreatePersonalFolder() != canCreatePersonalFolder) {
+ debug("aclEntry.setCanCreatePersonalFolder('" + canCreatePersonalFolder + "')");
+ entry.setCanCreatePersonalFolder(canCreatePersonalFolder);
+ log(String.format(">> ACLEntry: canCreatePersonalFolder (%b)", canCreatePersonalFolder));
+ toSave = true;
+ }
+
+ // 8. canCreateSharedFolder
+ boolean canCreateSharedFolder = config.has("canCreateSharedFolder") && (Boolean) config.get("canCreateSharedFolder");
+ debug("aclEntry.isCanCreateSharedFolder()");
+ if (entry.isCanCreateSharedFolder() != canCreateSharedFolder) {
+ debug("aclEntry.setCanCreateSharedFolder('" + canCreateSharedFolder + "')");
+ entry.setCanCreateSharedFolder(canCreateSharedFolder);
+ log(String.format("> ACL: entry canCreateSharedFolder (%b)", canCreateSharedFolder));
+ toSave = true;
+ }
+
+ // 9. canCreateLSOrJavaAgent
+ boolean canCreateLSOrJavaAgent = config.has("canCreateLSOrJavaAgent") && (Boolean) config.get("canCreateLSOrJavaAgent");
+ debug("aclEntry.isCanCreateLSOrJavaAgent()");
+ if (entry.isCanCreateLSOrJavaAgent() != canCreateLSOrJavaAgent) {
+ debug("aclEntry.setCanCreateLSOrJavaAgent('" + canCreateLSOrJavaAgent + "')");
+ entry.setCanCreateLSOrJavaAgent(canCreateLSOrJavaAgent);
+ log(String.format(">> ACLEntry: canCreateLSOrJavaAgent (%b)", canCreateLSOrJavaAgent));
+ toSave = true;
+ }
+
+ // 10. isPublicReader
+ boolean isPublicReader = config.has("isPublicReader") && (Boolean) config.get("isPublicReader");
+ debug("aclEntry.isPublicReader()");
+ if (entry.isPublicReader() != isPublicReader) {
+ debug("aclEntry.setPublicReader('" + isPublicReader + "')");
+ entry.setPublicReader(isPublicReader);
+ log(String.format(">> ACLEntry: isPublicReader (%b)", isPublicReader));
+ toSave = true;
+ }
+
+ // 11. isPublicWriter
+ boolean isPublicWriter = config.has("isPublicWriter") && (Boolean) config.get("isPublicWriter");
+ debug("aclEntry.isPublicWriter()");
+ if (entry.isPublicWriter() != isPublicWriter) {
+ debug("aclEntry.setPublicWriter('" + isPublicWriter + "')");
+ entry.setPublicWriter(isPublicWriter);
+ log(String.format(">> ACLEntry: isPublicWriter (%b)", isPublicWriter));
+ toSave = true;
+ }
+
+ // 12. canReplicateOrCopyDocuments
+ boolean canReplicateOrCopyDocuments = config.has("canReplicateOrCopyDocuments") && (Boolean) config.get("canReplicateOrCopyDocuments");
+ debug("aclEntry.isCanReplicateOrCopyDocuments()");
+ if (entry.isCanReplicateOrCopyDocuments() != canReplicateOrCopyDocuments) {
+ debug("aclEntry.setCanReplicateOrCopyDocuments('" + canReplicateOrCopyDocuments + "')");
+ entry.setCanReplicateOrCopyDocuments(canReplicateOrCopyDocuments);
+ log(String.format(">> ACLEntry: canReplicateOrCopyDocuments (%b)", canReplicateOrCopyDocuments));
+ toSave = true;
+ }
+
+ // 13. roles
+ if (config.has("roles")) {
+ debug("acl.getRoles()");
+ Vector aclRoles = acl.getRoles();
+ log("Valid ACL Roles: ");
+ for (Object aclRole : aclRoles) {
+ log(" " + aclRole.toString());
+ }
+ JSONArray roles = (JSONArray) config.get("roles");
+ for (Object roleObj : roles) {
+ String role = roleObj.toString();
+
+ if (entry.isRoleEnabled(role)) {
+ log(String.format(">> ACLEntry: role already added (%s)", role));
+ }
+ else if (aclRoles.contains(role)) {
+ debug("aclRole.enableRole('" + role + "'");
+ entry.enableRole(role);
+ log(String.format(">> ACLEntry: enableRole (%s)", role));
+ toSave = true;
+ }
+ else {
+ log(String.format(">> ACLEntry: ignoring unsupported role (%s)", role));
+ }
+ }
+ }
+ } catch (Exception e) {
+ log(e);
+ }
+
+ log("ACL Updates complete.");
+ return toSave;
+ }
+
+
+
+ protected static void log(String message) {
+ System.out.println(message);
+ }
+ protected static void debug(String message) {
+ final String debugPrefix = " (debug)";
+ if (debugMode) {
+ log(debugPrefix + message);
+ }
+ }
+ protected static void log(Throwable t) {
+ t.printStackTrace(System.out);
+ }
+}
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/CrossCertifyNotesID.properties.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/CrossCertifyNotesID.properties.j2
new file mode 100644
index 00000000..a7778748
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/CrossCertifyNotesID.properties.j2
@@ -0,0 +1,6 @@
+data.directory={{ domino_home_dir }}
+domino.setup.file={{ domino_home_dir }}/setup.json
+cert.id.file={{ domino_home_dir }}/cert.id
+acl.template.file={{ service_home_dir }}/default_cross_certify_acl.json
+output.file={{ completed_dir }}/CrossCertifyNotesID.out
+debug=true
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/default_cross_certify_acl.json.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/default_cross_certify_acl.json.j2
new file mode 100644
index 00000000..bd4b8981
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_java_tools/templates/default_cross_certify_acl.json.j2
@@ -0,0 +1,19 @@
+{
+ "level": "manager",
+ "type": "person",
+ "canDeleteDocuments": true,
+ "canReplicateOrCopyDocuments": true,
+ "roles": [
+ "[GroupCreator]",
+ "[GroupModifier]",
+ "[NetCreator]",
+ "[PolicyCreator]",
+ "[PolicyModifier]",
+ "[PolicyReader]",
+ "[NetModifier]",
+ "[ServerCreator]",
+ "[ServerModifier]",
+ "[UserCreator]",
+ "[UserModifier]"
+ ]
+}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/defaults/main.yml
new file mode 100644
index 00000000..567c0ac1
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/defaults/main.yml
@@ -0,0 +1,12 @@
+---
+leap_archive: Leap-1.0.5.zip
+leap_version: 1.0.5
+leap_debug: false
+domino_leap_port_forwards:
+ -
+ guest: "{{ domino_install_port_forwards[0].guest }}"
+ url: "leap"
+ -
+ guest: "{{ domino_install_port_forwards[1].guest }}"
+ url: "leap"
+domino_leap_proxy_url: "{{ domino_leap_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
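+# Example of how the proxy URL resolves (assumed values, for illustration only):
+#   settings.hostname: demo, settings.domain: example.com
+#   domino_leap_proxy_url -> "leap.demo.example.com"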
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/meta/main.yml
new file mode 100644
index 00000000..60c036b2
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: leap
+ author: MarkProminic
+ description: Install Leap for Domino
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/tasks/main.yml
new file mode 100755
index 00000000..44883b0a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/tasks/main.yml
@@ -0,0 +1,125 @@
+---
+-
+ name: "Creating installation directories for Leap"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/leap/archives"
+ - "{{ installer_dir }}/leap/Leap"
+
+-
+ name: "Checking if Leap installer is at leap/archives/{{ leap_archive }}"
+ register: domino_server_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/leap/archives/{{ leap_archive }}"
+ get_md5: false
+
+-
+ name: "Checking if Leap is installed: {{ leap_version }} "
+ register: leap_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/leap_install"
+ get_md5: false
+
+-
+ name: "Downloading Leap from {{ domino_installer_base_url }}"
+ register: leapresult
+ until: "leapresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/Leap/{{ leap_archive }}"
+ dest: "{{ installer_dir }}/leap/archives/{{ leap_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not domino_server_installer_check.stat.exists and not leap_installed_check.stat.exists
+
+-
+ name: "Extracting Leap from {{ leap_archive }}"
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/leap/archives/{{ leap_archive }}"
+ dest: "{{ installer_dir }}/leap/Leap"
+ creates: "{{ installer_dir }}/leap/Leap/Leap-{{ leap_version }}-for-domino-1201-linux"
+ remote_src: true
+
+-
+ name: "Stopping Domino for Leap Installation"
+ when: not leap_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+
+-
+ name: "Configuring Leap and Starting Service"
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ installer_dir }}/leap/Leap/Leap-{{ leap_version }}/linux"
+ with_items:
+ - ./install silent
+
+-
+ name: "Registering leap installation Output"
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.shell: cat /var/log/volt_install_*.log
+ register: leap_install
+
+-
+ name: "Outputting Leap installation logs"
+ when: not leap_installed_check.stat.exists and ( leap_debug or debug_all )
+ ansible.builtin.debug:
+ var: leap_install.stdout_lines
+
+-
+ name: "Starting Domino"
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+
+-
+ name: "Handing Leap/Volt ACL templated JSON to Genesis"
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.template:
+ dest: "{{ domino_home_dir }}/JavaAddin/Genesis/json/voltacl-org.json"
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "voltacl-org.json.j2"
+
+-
+ name: Waiting until Genesis returns OK
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.wait_for:
+ path: "{{ domino_home_dir }}/JavaAddin/Genesis/jsonresponse/voltacl-org.json"
+
+-
+ name: "Checking if ACL applied succesfully"
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.lineinfile:
+ path: "{{ domino_home_dir }}/JavaAddin/Genesis/jsonresponse/voltacl-org.json"
+ line: "OK"
+ state: present
+ check_mode: true
+ register: presence
+ failed_when: presence is changed
+
+-
+ name: "Marking leap as installed"
+ when: not leap_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/leap_install"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/templates/voltacl-org.json.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/templates/voltacl-org.json.j2
new file mode 100644
index 00000000..f6f2a630
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_leap/templates/voltacl-org.json.j2
@@ -0,0 +1,127 @@
+{
+ "title": "Modify ACL volt/VoltBuilder.nsf",
+ "versionjson": "1.0.0",
+ "steps": [
+ {
+ "title": "--- Step 1. SET ACL for Voltbuilder.nsf ---",
+ "databases": [
+ {
+ "action": "update",
+ "filePath": "volt/VoltBuilder.nsf",
+ "ACL": {
+ "roles": [
+ "VoltAppsManager"
+ ],
+ "ACLEntries": [
+ {
+ "name": "Nancy Noaccess",
+ "level": "noAccess",
+ "type": "unspecified",
+ "isPublicReader": true,
+ "isPublicWriter": false
+ },
+ {
+ "name": "Volt Authors",
+ "level": "Editor",
+ "canCreateDocuments": true,
+ "canDeleteDocuments": true,
+ "type": "mixedGroup"
+ },
+ {
+ "name": "LocalDomainAdmins",
+ "level": "manager",
+ "type": "personGroup"
+ },
+ {
+ "name": "AutomaticallyCrossCertifiedUsers",
+ "level": "manager",
+ "type": "personGroup"
+ },
+ {
+ "name": "{{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/{{ domino_organization }}",
+ "level": "manager",
+ "type": "person",
+ "canCreateDocuments": true,
+ "canCreateLSOrJavaAgent": true,
+ "canCreatePersonalAgent": true,
+ "canCreatePersonalFolder": true,
+ "canCreateSharedFolder": true,
+ "canDeleteDocuments": true,
+ "canReplicateOrCopyDocuments": true,
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "roles": [
+ "VoltAppsManager"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ },
+ {
+ "title": "--- Step 2 (final). Completed ---",
+ "messages": [
+ "You have adjusted ACL for volt/VoltBuilder.nsf successfully"
+ ]
+ },
+ {
+ "title": "--- Step 3. SET ACL for VoltConfig.nsf ---",
+ "databases": [
+ {
+ "action": "update",
+ "filePath": "volt/VoltConfig.nsf",
+ "ACL": {
+ "roles": [
+ "VoltAppsManager"
+ ],
+ "ACLEntries": [
+ {
+ "name": "Nancy Noaccess",
+ "level": "noAccess",
+ "type": "unspecified",
+ "isPublicReader": true,
+ "isPublicWriter": false
+ },
+ {
+ "name": "Volt Authors",
+ "level": "Editor",
+ "canCreateDocuments": true,
+ "canDeleteDocuments": true,
+ "type": "mixedGroup"
+ },
+ {
+ "name": "AutomaticallyCrossCertifiedUsers",
+ "level": "manager",
+ "type": "personGroup"
+ },
+ {
+ "name": "{{ domino_admin_user_first_name }} {{ domino_admin_user_last_name }}/{{ domino_organization }}",
+ "level": "manager",
+ "type": "person",
+ "canCreateDocuments": true,
+ "canCreateLSOrJavaAgent": true,
+ "canCreatePersonalAgent": true,
+ "canCreatePersonalFolder": true,
+ "canCreateSharedFolder": true,
+ "canDeleteDocuments": true,
+ "canReplicateOrCopyDocuments": true,
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "roles": [
+ "VoltAppsManager"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ },
+ {
+ "title": "--- Step 4 (final). Completed ---",
+ "messages": [
+ "You have adjusted ACL for volt/VoltConfig.nsf successfully"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/defaults/main.yml
new file mode 100644
index 00000000..02321a52
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/defaults/main.yml
@@ -0,0 +1,9 @@
+---
+nomadweb_archive: nomad-server-1.0.6-for-domino-1202-linux.tgz
+nomadweb_version: 1.0.6
+nomadweb_debug: false
+domino_nomadweb_port_forwards:
+ -
+ guest: 9443
+ url: "nomadweb"
+domino_nomadweb_proxy_url: "{{ domino_nomadweb_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
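+# Example of how the proxy URL resolves (assumed values, for illustration only):
+#   settings.hostname: demo, settings.domain: example.com
+#   domino_nomadweb_proxy_url -> "nomadweb.demo.example.com"
+# The tasks file passes this value to Domino as NOMAD_WEB_HOST.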
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/meta/main.yml
new file mode 100644
index 00000000..f72e0900
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: nomadweb
+ author: MarkProminic
+ description: Nomad Web Domino Installer
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/tasks/main.yml
new file mode 100755
index 00000000..4ebea3ae
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_nomadweb/tasks/main.yml
@@ -0,0 +1,118 @@
+---
+-
+ name: "Creating installation directories for NomadWeb"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/nomadweb/archives"
+ - "{{ installer_dir }}/nomadweb/NomadWeb"
+
+-
+ name: "Checking if NomadWeb is installed: {{ nomadweb_version }}"
+ register: nomadweb_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/nomadweb_install"
+ get_md5: false
+
+-
+ name: "Checking if the NomadWeb installer is at nomadweb/archives/{{ nomadweb_archive }}"
+ register: nomad_archive_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/nomadweb/archives/{{ nomadweb_archive }}"
+ get_md5: false
+
+-
+ name: "Downloading NomadWeb from {{ domino_installer_base_url }}"
+ register: nomadwebresult
+ until: "nomadwebresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/ND{{ domino_major_version }}/NomadWeb/{{ nomadweb_archive }}"
+ dest: "{{ installer_dir }}/nomadweb/archives/{{ nomadweb_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not nomad_archive_check.stat.exists and not nomadweb_installed_check.stat.exists
+
+-
+ name: "Extracting NomadWeb from {{ nomadweb_archive }}"
+ when: not nomadweb_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/nomadweb/archives/{{ nomadweb_archive }}"
+ dest: "{{ installer_dir }}/nomadweb/NomadWeb"
+ creates: "{{ installer_dir }}/nomadweb/NomadWeb/nwsp-linux"
+ remote_src: true
+
+-
+ name: "Copying Nomadweb installer files to {{ domino_install_dir }}"
+ when: not nomadweb_installed_check.stat.exists
+ ansible.builtin.copy:
+ mode: "a+x"
+ src: "{{ installer_dir }}/nomadweb/NomadWeb/"
+ dest: "{{ domino_install_dir }}"
+
+-
+ name: "Configuring NomadWeb"
+ when: "not nomadweb_installed_check.stat.exists"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ executable: "/bin/bash"
+ register: nomadweb_config_status
+ with_items:
+ - 'domino cmd "set config NOMAD_WEB_HOST={{ domino_nomadweb_proxy_url }}" 20'
+
+-
+ name: "Outputting NomadWeb Configuration Status Logs"
+ when: ( nomadweb_debug or debug_all ) and not nomadweb_installed_check.stat.exists
+ ansible.builtin.debug:
+ var: nomadweb_config_status
+
+-
+ name: "Stopping Domino for Changes to take effect"
+ when: not nomadweb_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+ register: domino_service_stop_details
+ until: domino_service_stop_details.state == "stopped"
+ retries: 3
+ delay: 5
+
+-
+ name: "Starting Domino for Changes to take effect"
+ when: not nomadweb_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+ register: domino_service_start_details
+ retries: 3
+ delay: 5
+ until: domino_service_start_details.state == "started"
+
+-
+ name: "Checking NomadWeb is listening on port {{ domino_nomadweb_port_forwards[0].guest }}"
+ ansible.builtin.wait_for:
+ port: "{{ domino_nomadweb_port_forwards[0].guest }}"
+ delay: 5
+ timeout: 60
+ msg: "Timeout waiting for {{ domino_nomadweb_port_forwards[0].guest }} to respond"
+ register: port_check
+
+-
+ name: "Configuring NomadWeb and Starting Service"
+ when: not nomadweb_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/nomadweb_install"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/defaults/main.yml
new file mode 100644
index 00000000..b1303b76
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+antivirus: clamav
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/meta/main.yml
new file mode 100644
index 00000000..689016a9
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_reset
+ author: MarkProminic
+ description: Remove any unnecessary files or services
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/tasks/main.yml
new file mode 100755
index 00000000..eeb07d2a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_reset/tasks/main.yml
@@ -0,0 +1,56 @@
+---
+-
+ name: "Checking if Domino has been installed"
+ register: domino_server_installed
+ ansible.builtin.stat:
+ path: "{{ domino_home_dir }}/notes.ini"
+ get_md5: false
+
+-
+ name: "Stopping Domino and disabling Domino"
+ when: domino_server_installed.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: false
+
+-
+ name: "Removing Domino Data Directories"
+ become: true
+ when: domino_server_installed.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item.path }}"
+ state: "absent"
+ with_items:
+ - { path: "{{ domino_home_dir }}/ids/" }
+ - { path: "{{ domino_home_dir }}/" }
+ - { path: "{{ domino_install_basedir }}/" }
+ - { path: "{{ completed_dir }}/fpinstalled" }
+ - { path: "{{ completed_dir }}/hfinstalled" }
+ - { path: "{{ completed_dir }}/verse_install" }
+ - { path: "{{ completed_dir }}/traveler_install" }
+ - { path: "{{ completed_dir }}/nomadweb_install" }
+ - { path: "{{ completed_dir }}/leap_install" }
+ - { path: "{{ completed_dir }}/htmo_installed" }
+ - { path: "{{ completed_dir }}/service_configured" }
+ - { path: "{{ completed_dir }}/domsetup" }
+ - { path: "{{ completed_dir }}/kyr-cert-imported" }
+ - { path: "{{ completed_dir }}/kyr-key-imported" }
+ - { path: "{{ completed_dir }}/appdevpack_install" }
+ - { path: "{{ completed_dir }}/genesis_installed_check" }
+ - { path: "{{ completed_dir }}/genesis_packages_installed_check" }
+ - { path: "{{ completed_dir }}/javabuildtools" }
+ - { path: "{{ completed_dir }}/sametime_installed" }
+
+-
+ name: "Setting domino_reset as completed"
+ become: true
+ when: domino_server_installed.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item.path }}"
+ state: "touch"
+ with_items:
+ - { path: "{{ completed_dir }}/domino_reset" }
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/defaults/main.yml
new file mode 100644
index 00000000..f1f5432d
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/defaults/main.yml
@@ -0,0 +1,12 @@
+---
+domino_rest_api_archive: Domino_REST_API_V1_Installer.tar.gz
+domino_rest_api_version: 1
+domino_rest_api_debug: true
+domino_rest_api_port_forwards:
+ -
+ guest: "{{ domino_install_port_forwards[0].guest }}"
+ url: "restapi"
+ -
+ guest: "{{ domino_install_port_forwards[1].guest }}"
+ url: "restapi"
+domino_rest_api_proxy_url: "{{ domino_rest_api_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/meta/main.yml
new file mode 100644
index 00000000..611d3fa6
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_rest_api
+ author: MarkProminic
+ description: Install the Domino REST API
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/tasks/main.yml
new file mode 100755
index 00000000..ebbff922
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_rest_api/tasks/main.yml
@@ -0,0 +1,88 @@
+---
+-
+ name: "Creating installation directories for domino-rest-api"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/domino-rest-api/archives"
+
+-
+ name: "Checking if domino-rest-api installer is at domino-rest-api/archives/{{ domino_rest_api_archive }}"
+ register: domino_server_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/domino-rest-api/archives/{{ domino_rest_api_archive }}"
+ get_md5: false
+
+-
+ name: "Checking if domino-rest-api is installed: {{ domino_rest_api_version }}"
+ register: domino_rest_api_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/domino_rest_api_install"
+ get_md5: false
+
+-
+ name: "Downloading domino-rest-api from {{ domino_installer_base_url }}"
+ register: domino_rest_apiresult
+ until: "domino_rest_apiresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/domino-rest-api/{{ domino_rest_api_archive }}"
+ dest: "{{ installer_dir }}/domino-rest-api/archives/{{ domino_rest_api_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not domino_server_installer_check.stat.exists and not domino_rest_api_installed_check.stat.exists
+
+-
+ name: "Extracting domino-rest-api from {{ domino_rest_api_archive }}"
+ when: not domino_rest_api_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/domino-rest-api/archives/{{ domino_rest_api_archive }}"
+ dest: "{{ installer_dir }}/domino-rest-api"
+ creates: "{{ installer_dir }}/domino-rest-api/restapiInstall.jar"
+ remote_src: true
+
+-
+ name: "Stopping Domino for domino-rest-api Installation"
+ when: not domino_rest_api_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+
+-
+ name: "Installing Domino Rest API"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ chdir: "{{ installer_dir }}/domino-rest-api"
+ executable: /bin/bash
+ when: not domino_rest_api_installed_check.stat.exists
+ with_items:
+ - "sudo java -jar restapiInstall.jar -d={{ domino_home_dir }} -i={{ domino_home_dir }}/notes.ini -r=/opt/hcl/restapi -p={{ domino_install_dir }} -a"
+
+-
+ name: "Marking Domino Rest API as installed"
+ when: not domino_rest_api_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/domino_rest_api_install"
+
+-
+ name: "Starting Domino"
+ when: not domino_rest_api_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/defaults/main.yml
new file mode 100644
index 00000000..d8b56c64
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/defaults/main.yml
@@ -0,0 +1,16 @@
+---
+sametime_archive: Sametime_Premium_12.0.zip
+sametime_archive_fixpack: Sametime_Premium_12.0_FP1.zip
+sametime_domino_major_version: 12
+sametime_domino_minor_version: 01
+sametime_version: 12.0
+sametime_fixpack_version: FP1
+sametime_debug: true
+domino_sametime_port_forwards:
+ -
+ guest: "{{ domino_install_port_forwards[0].guest }}"
+ url: "sametime"
+ -
+ guest: "{{ domino_install_port_forwards[1].guest }}"
+ url: "sametime"
+domino_sametime_proxy_url: "{{ domino_sametime_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/meta/main.yml
new file mode 100644
index 00000000..ddbbfa3c
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: sametime
+ author: MarkProminic
+ description: Install and configure sametime
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/tasks/main.yml
new file mode 100755
index 00000000..0fe984bc
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_sametime/tasks/main.yml
@@ -0,0 +1,217 @@
+---
+## Install Guide here: https://help.hcltechsw.com/sametime/12/admin/installing.html
+-
+ name: "Creating installation directories for Sametime"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/sametime/archives"
+ - "{{ installer_dir }}/sametime/Sametime"
+
+-
+ name: "Checking if Sametime installer is at sametime/archives/{{ sametime_archive }}"
+ register: sametime_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/sametime/archives/{{ sametime_archive }}"
+ get_md5: false
+
+-
+ name: "Checking if Sametime is installed: {{ sametime_version }}"
+ register: sametime_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/sametime_installed"
+ get_md5: false
+
+-
+ name: "Downloading Sametime from {{ domino_installer_base_url }}"
+ register: sametimeresult
+ until: "sametimeresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/Sametime/{{ sametime_version }}/{{ sametime_archive }}"
+ dest: "{{ installer_dir }}/sametime/archives/{{ sametime_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not sametime_installer_check.stat.exists and not sametime_installed_check.stat.exists
+
+-
+ name: "Extracting Sametime from {{ sametime_archive }}"
+ when: not sametime_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/sametime/archives/{{ sametime_archive }}"
+ dest: "{{ installer_dir }}/sametime/Sametime"
+ creates: "{{ installer_dir }}/sametime/Sametime/Sametime-{{ sametime_version }}-for-domino-1201-linux"
+ remote_src: true
+
+-
+ name: "Stopping Domino for Sametime Installation"
+ when: not sametime_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+
+-
+ name: "Configuring Sametime and Starting Service"
+ when: not sametime_installed_check.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ installer_dir }}/sametime/Sametime/Sametime-{{ sametime_version }}/linux"
+ with_items:
+ - ./install silent
+ - touch {{ completed_dir }}/sametime_install
+
+-
+ name: "Registering sametime installation Output"
+ ansible.builtin.shell: cat /var/log/volt_install_*.log
+ register: sametime_install
+ changed_when: false
+
+-
+ name: "Outputting Sametime installation logs"
+ when: not sametime_installed_check.stat.exists
+ ansible.builtin.debug:
+ var: sametime_install.stdout_lines
+
+-
+ name: "Creating Database chatlogging for Sametime"
+ when: not sametime_installed_check.stat.exists
+ community.mongodb.mongodb_user:
+ login_user: "{{ mongodb_admin_user }}"
+ login_password: "{{ mongodb_admin_password }}"
+ login_port: "27017"
+ database: admin
+ user: sametimeUser
+ password: sametime
+ state: present
+ roles:
+ - db: chatlogging
+ role: readWrite
+ - db: mobileOffline
+ role: readWrite
+ - db: meeting
+ role: readWrite
+ - db: meeting
+ role: dbAdmin
+ - db: admin
+ role: userAdminAnyDatabase
+
+-
+ name: "Initializing Databases"
+ when: not sametime_installed_check.stat.exists
+ community.mongodb.mongodb_index:
+ login_user: "{{ mongodb_admin_user }}"
+ login_password: "{{ mongodb_admin_password }}"
+ login_port: "27017"
+ indexes:
+ - database: chatlogging
+ collection: "EVENTS"
+ keys:
+ _id: 1
+ options:
+ name: "dummy"
+ state: present
+ - database: chatlogging
+ collection: "SESSIONS"
+ options:
+ name: "dummy"
+ keys:
+ _id: 1
+ state: present
+
+# -
+# name: "Initializing Databases"
+# community.mongodb.mongodb_shell:
+# mongo_cmd: mongosh
+# db: chatlogging
+# login_user: "{{ mongodb_admin_user }}"
+# login_password: "{{ mongodb_admin_password }}"
+# login_port: "27017"
+# eval: "{{ item }}"
+# with_items:
+# - 'db.EVENTS.insertOne({"_id" : "dummy"})'
+# - 'db.SESSIONS.insertOne({"_id" : "dummy"})'
+
+-
+ name: "Stopping MongoDB"
+ when: not sametime_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: mongod
+ state: stopped
+ enabled: true
+
+-
+ name: "Binding to all interfaces and Setting Replication set in MongoDB"
+ ansible.builtin.lineinfile:
+ mode: '0644'
+ path: "{{ item.dir }}"
+ regexp: "{{ item.regexp }}"
+ insertafter: "{{ item.insertafter }}"
+ create: true
+ line: "{{ item.line }}"
+ with_items:
+ - { regexp: "^#replication:", insertafter: "", line: 'replication:', dir: "/etc/mongod.conf" }
+ - { regexp: " replSetName: rs0", insertafter: "replication:", line: ' replSetName: rs0', dir: "/etc/mongod.conf" }
+ - { regexp: " bindIpAll: true", insertafter: "bindIp: 127.0.0.1", line: ' bindIpAll: true', dir: "/etc/mongod.conf" }
+
+-
+ name: "Starting MongoDB"
+ when: not sametime_installed_check.stat.exists
+ ansible.builtin.service:
+ name: mongod
+ state: started
+ enabled: true
+
+-
+ name: "Initializing Databases"
+ when: not sametime_installed_check.stat.exists
+ community.mongodb.mongodb_shell:
+ mongo_cmd: mongosh
+ db: admin
+ login_user: "{{ mongodb_admin_user }}"
+ login_password: "{{ mongodb_admin_password }}"
+ login_port: "27017"
+ eval: "{{ item }}"
+ with_items:
+ - 'rs.initiate()'
+
+-
+ name: "Ensuring replicaset rs0 exists"
+ when: not sametime_installed_check.stat.exists
+ community.mongodb.mongodb_replicaset:
+ login_user: "{{ mongodb_admin_user }}"
+ login_password: "{{ mongodb_admin_password }}"
+ login_port: "27017"
+ replica_set: rs0
+ members: localhost:27017
+ validate: false
+
+-
+ name: "Starting Domino"
+ when: not sametime_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+
+-
+ name: "Setting Sametime to installed"
+ when: not sametime_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/sametime_installed"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/defaults/main.yml
new file mode 100644
index 00000000..b4750fd7
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/defaults/main.yml
@@ -0,0 +1,2 @@
+nash_domino_service_script_git_repo: https://github.com/nashcom/domino-startscript/releases/download
+nash_domino_service_script_version: 3.7.0
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/meta/main.yml
new file mode 100644
index 00000000..8dee28eb
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_service_nash
+ author: MarkProminic
+ description: Nash's Domino Start Scripts via systemd
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/tasks/main.yml
new file mode 100755
index 00000000..038b4557
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/tasks/main.yml
@@ -0,0 +1,92 @@
+---
+-
+ name: "Creating Installation Directories"
+ ansible.builtin.file:
+ mode: '0755'
+ path: "{{ item }}"
+ state: directory
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ with_items:
+ - "{{ installer_dir }}/domino/service-file"
+
+-
+ name: "Checking if Nash's Domino Service Scripts has been installed"
+ register: domino_server_nash_installed
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/service_configured"
+ get_md5: false
+
+-
+ name: "Downloading Nash's Service Script for Linux version {{ nash_domino_service_script_version }}"
+ when: not domino_server_nash_installed.stat.exists
+ register: nashresult
+ until: "nashresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0644'
+ url: "{{ nash_domino_service_script_git_repo }}/v{{ nash_domino_service_script_version }}/domino-startscript_v{{ nash_domino_service_script_version }}.taz"
+ dest: "{{ installer_dir }}/domino/archives/domino-startscript_v{{ nash_domino_service_script_version }}.tar"
+
+-
+ name: "Extracting Domino Service file Installer"
+ when: not domino_server_nash_installed.stat.exists
+ ansible.builtin.unarchive:
+ src: "{{ installer_dir }}/domino/archives/domino-startscript_v{{ nash_domino_service_script_version }}.tar"
+ dest: "{{ installer_dir }}/domino/service-file"
+ remote_src: true
+ creates: "{{ completed_dir }}/service_configured and not domino_server_nash_installed.stat.exists"
+
+-
+ name: "Changing Nash's service installer script's default Datadir"
+ when: not domino_server_nash_installed.stat.exists
+ ansible.builtin.replace:
+ path: "{{ installer_dir }}/domino/service-file/domino-startscript/sysconfig/rc_domino_config"
+ regexp: "(^DOMINO_DATA_PATH=)(.*)$"
+ replace: "DOMINO_DATA_PATH={{ domino_home_dir }}"
+
+-
+ name: "Changing Nash's service installer script's default User"
+ when: not domino_server_nash_installed.stat.exists
+ ansible.builtin.replace:
+ path: "{{ installer_dir }}/domino/service-file/domino-startscript/install_script"
+ regexp: "(^ DOMINO_USER=notes)(.*)$"
+ replace: " DOMINO_USER={{ domino_user }}"
+
+-
+ name: "Changing Nash's service installer script's default Domino Group"
+ when: not domino_server_nash_installed.stat.exists
+ ansible.builtin.replace:
+ path: "{{ installer_dir }}/domino/service-file/domino-startscript/install_script"
+ regexp: "(^ DOMINO_GROUP=notes)(.*)$"
+ replace: " DOMINO_GROUP={{ domino_group }}"
+
+-
+ name: "Configuring Domino systemd service file"
+ when: not domino_server_nash_installed.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ chdir: "{{ installer_dir }}/domino/service-file/domino-startscript"
+ executable: "/bin/bash"
+ creates: "{{ completed_dir }}/service_configured"
+ with_items:
+ - "./install_script"
+
+-
+ name: "Marking Nash Systemd Service as installed"
+ when: not domino_server_nash_installed.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/service_configured"
+
+-
+ name: "Ensuring Domino is stopped but enabled at boot"
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/vars/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/vars/main.yml
new file mode 100644
index 00000000..074a2aa0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_service_nash/vars/main.yml
@@ -0,0 +1,2 @@
+---
+show_help: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/defaults/main.yml
new file mode 100644
index 00000000..652d9db0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/defaults/main.yml
@@ -0,0 +1,14 @@
+---
+traveler_archive: Traveler_12.0.2_Linux_ML.tar.gz
+traveler_base_version: base
+traveler_fixpack_archive: Traveler_12.0.2_Linux_ML.tar.gz
+traveler_fixpack_version: FP2
+traveler_debug: true
+domino_traveler_port_forwards:
+ -
+ guest: "{{ domino_install_port_forwards[0].guest }}"
+ url: "traveler"
+ -
+ guest: "{{ domino_install_port_forwards[1].guest }}"
+ url: "traveler"
+domino_traveler_proxy_url: "{{ domino_traveler_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/meta/main.yml
new file mode 100644
index 00000000..7180e9d7
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: traveler
+ author: MarkProminic
+ description: Install and configure Traveler
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/tasks/main.yml
new file mode 100755
index 00000000..171c55cd
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/tasks/main.yml
@@ -0,0 +1,86 @@
+---
+## Install Guide here: https://help.hcltechsw.com/traveler/11.0.0/Silent_install_Linux.html
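+# Flow: download the Traveler archive, extract it, stop Domino, run TravelerSetup silently with the templated installer.properties, then start Domino again.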
+-
+ name: "Creating installation directories for Traveler"
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/traveler/archives"
+ - "{{ installer_dir }}/traveler/Traveler"
+
+-
+ name: "Checking if Traveler is installed: {{ traveler_base_version }} "
+ register: traveler_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/traveler_install"
+ get_md5: false
+
+-
+ name: "Checking if Traveler installer is at traveler/archives/{{ traveler_archive }}"
+ register: traveler_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/traveler/archives/{{ traveler_archive }}"
+ get_md5: false
+
+-
+ name: "Downloading Traveler from {{ domino_installer_base_url }}"
+ register: travelerresult
+ until: "travelerresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/ND{{ domino_major_version }}/{{ traveler_archive }}"
+ dest: "{{ installer_dir }}/traveler/archives/{{ traveler_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not traveler_installer_check.stat.exists and not traveler_installed_check.stat.exists
+
+-
+ name: "Extracting Traveler from {{ traveler_archive }}"
+ when: not traveler_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/traveler/archives/{{ traveler_archive }}"
+ dest: "{{ installer_dir }}/traveler/Traveler"
+ creates: "{{ installer_dir }}/traveler/Traveler/Traveler"
+ remote_src: true
+
+-
+ name: "Stopping Domino for Traveler Installation"
+ when: not traveler_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+
+-
+ name: "Adding Traveler silent install response file"
+ when: not traveler_installed_check.stat.exists
+ ansible.builtin.template:
+ dest: "{{ installer_dir }}/traveler/Traveler/installer.properties"
+ mode: a+x
+ src: "installer.properties.j2"
+
+-
+ name: "Installing Traveler"
+ when: not traveler_installed_check.stat.exists
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ args:
+ executable: "/bin/bash"
+ chdir: "{{ installer_dir }}/traveler/Traveler"
+ with_items:
+ - "./TravelerSetup -f ./installer.properties -i SILENT -l en && touch {{ completed_dir }}/traveler_install"
+
+-
+ name: "Starting Domino"
+ when: not traveler_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/templates/installer.properties.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/templates/installer.properties.j2
new file mode 100644
index 00000000..530766ca
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler/templates/installer.properties.j2
@@ -0,0 +1,8 @@
+ACCEPT_LICENSE=true
+DOMINO_PROGRAM_DIRECTORY={{ domino_install_dir }}
+DOMINO_DATA_DIRECTORY_1={{ domino_home_dir }}
+DOMINO_NOTESINI_DIRECTORY_1={{ domino_home_dir }}
+LINUX_USER_NAME={{ domino_user }}
+LINUX_GROUP_NAME={{ domino_group }}
+NTS_WEBSITE_HOME=0
+OVERRIDE_BACKREV=false
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/defaults/main.yml
new file mode 100644
index 00000000..074a2aa0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+show_help: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/meta/main.yml
new file mode 100644
index 00000000..7aaaccf7
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: htmo
+ author: MarkProminic
+ description: Enable HTMO in Traveler
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/tasks/main.yml
new file mode 100755
index 00000000..470e1b3d
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/tasks/main.yml
@@ -0,0 +1,69 @@
+---
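+# These tasks drop a templated JSON job into {{ domino_home_dir }}/JavaAddin/Genesis/json, wait for Genesis to write the matching file under jsonresponse, verify it contains "OK", and then restart Domino so the change takes effect.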
+-
+ name: "Checking if HTMO is enabled"
+ register: htmo_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/htmo_installed"
+ get_md5: false
+
+-
+ name: "Handing HTMO templated JSON to Genesis"
+ when: not htmo_installed_check.stat.exists
+ ansible.builtin.template:
+ dest: "{{ domino_home_dir }}/JavaAddin/Genesis/json/htmo-traveler-access.json"
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "htmo-traveler-access.json.j2"
+
+-
+ name: Waiting until Genesis returns OK
+ when: not htmo_installed_check.stat.exists
+ ansible.builtin.wait_for:
+ path: "{{ domino_home_dir }}/JavaAddin/Genesis/jsonresponse/htmo-traveler-access.json"
+
+-
+ name: "Checking if ACL applied succesfully"
+ when: not htmo_installed_check.stat.exists
+ ansible.builtin.lineinfile:
+ path: "{{ domino_home_dir }}/JavaAddin/Genesis/jsonresponse/htmo-traveler-access.json"
+ line: "OK"
+ state: present
+ check_mode: true
+ register: presence
+ failed_when: presence is changed
+
+-
+ name: "Stopping Domino for Changes to take effect"
+ when: not htmo_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+ register: domino_html_service_details_stop
+ until: domino_html_service_details_stop.state == "stopped"
+ retries: 3
+ delay: 5
+
+-
+ name: "Starting Domino for Changes to take effect"
+ when: not htmo_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+ register: domino_html_service_details_start
+ retries: 3
+ delay: 5
+ until: domino_html_service_details_start.state == "started"
+
+-
+ name: "Marking HTMO as installed"
+ when: not htmo_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/htmo_installed"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/templates/htmo-traveler-access.json.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/templates/htmo-traveler-access.json.j2
new file mode 100644
index 00000000..8e32a4f6
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_traveler_htmo/templates/htmo-traveler-access.json.j2
@@ -0,0 +1,34 @@
+{
+ "title":"Modifying Internet Site Document to allow HTMO access",
+ "versionjson":"1.0.0",
+ "steps":[
+ {
+ "title":"--- Step 1. SET value to include Freebusy and Mail ---",
+ "databases":[
+ {
+ "filePath":"names.nsf",
+ "action":"update",
+ "documents":[
+ {
+ "action":"update",
+ "search":{
+ "formula":"Type=\"WebSite\" & ISiteName=\"Domino Web Site\"",
+ "number":1
+ },
+ "computeWithForm":true,
+ "items":{
+ "WSEnabledServices":["TravelerAdmin","Freebusy","Mail"]
+ }
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "title":"--- Step 2 (final). Completed ---",
+ "messages":[
+ "You have adjusted Configurations for Traveler to support HTMO successfully"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/defaults/main.yml
new file mode 100644
index 00000000..472d3e0c
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/defaults/main.yml
@@ -0,0 +1,3 @@
+---
+updatesite_repo: "https://github.com/OpenNTF/generate-domino-update-site"
+nsfodp_updatesite_path: "/opt/nsfodp/updatesite"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/meta/main.yml
new file mode 100644
index 00000000..16ed1c1e
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_updatesite
+ author: MarkProminic
+ description: Install and update updatesite to work with Moonshine
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/tasks/main.yml
new file mode 100755
index 00000000..bbef95c3
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/tasks/main.yml
@@ -0,0 +1,84 @@
+---
+-
+ name: "Creating Installation Directories"
+ ansible.builtin.file:
+ mode: 0755
+ path: "{{ item }}"
+ state: directory
+ owner: "{{ service_user }}"
+ with_items:
+ - "/vagrant/installers/update-site"
+ - "{{ nsfodp_updatesite_path }}"
+ - "{{ service_home_dir }}/.m2"
+
+-
+ name: "Checking if the updatesite has been installed"
+ register: updatesite_deployed
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/updatesite_deployed"
+ get_md5: false
+
+-
+ name: "Cloning updatesite to /vagrant/installers/update-site"
+ become: true
+ become_user: "{{ service_user }}"
+ when: not updatesite_deployed.stat.exists
+ ansible.builtin.git: # noqa: latest
+ repo: "{{ updatesite_repo }}"
+ dest: "/vagrant/installers/update-site/"
+ force: false
+ update: false
+
+-
+ name: "Placing Configuration and scripts"
+ when: not updatesite_deployed.stat.exists
+ become_user: "{{ service_user }}"
+ become: true
+ ansible.builtin.template:
+ owner: "{{ item.owner }}"
+ src: "{{ item.src }}"
+ dest: "{{ item.dest }}"
+ mode: "a+x"
+ loop:
+ - { src: 'run_nsfodp.sh.j2', dest: '/opt/nsfodp/run_nsfodp.sh', owner: '{{ service_user }}' }
+ - { src: 'maven_settings.xml.j2', dest: '{{ service_home_dir }}/.m2/settings.xml', owner: '{{ service_user }}' }
+
+-
+ name: "Installing updatesite with Maven {{ maven_version }}"
+ when: not updatesite_deployed.stat.exists
+ ansible.builtin.shell: |
+ source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh && mvn install
+ touch {{ completed_dir }}/updatesite_installed
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ executable: "/bin/bash"
+ chdir: "/vagrant/installers/update-site/generate-domino-update-site"
+ creates: "{{ completed_dir }}/updatesite_installed"
+ environment:
+ MAVEN_HOME: "{{ service_home_dir }}/.sdkman/candidates/maven/current"
+
+-
+ name: "Deploying updatesite with Maven {{ maven_version }}"
+ when: not updatesite_deployed.stat.exists
+ ansible.builtin.shell: |
+ source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh
+ mvn org.openntf.p2:generate-domino-update-site:generateUpdateSite -Ddest={{ nsfodp_updatesite_path }} -Dsrc={{ domino_install_dir }}
+ touch {{ completed_dir }}/updatesite_deployed
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ executable: "/bin/bash"
+ chdir: "/vagrant/installers/update-site/generate-domino-update-site"
+ creates: "{{ completed_dir }}/updatesite_deployed"
+ environment:
+ MAVEN_HOME: "{{ service_home_dir }}/.sdkman/candidates/maven/current"
+
+-
+ name: "Archiving updatesite into a zip for later use: {{ nsfodp_updatesite_path }}"
+ when: not updatesite_deployed.stat.exists
+ community.general.archive:
+ mode: '0644'
+ path: "{{ nsfodp_updatesite_path }}"
+ dest: "{{ nsfodp_updatesite_path }}.zip"
+ format: zip
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/templates/maven_settings.xml.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/templates/maven_settings.xml.j2
new file mode 100755
index 00000000..e19b084f
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/templates/maven_settings.xml.j2
@@ -0,0 +1,25 @@
+
+
+
+
+
+
+
+ nsfodp
+
+ {{ domino_install_dir }}
+ file://{{ nsfodp_updatesite_path }}
+ {{ service_home_dir }}/notes.ini
+ {{ service_home_dir }}
+
+
+
+
+
+
+ nsfodp
+
+
+
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/templates/run_nsfodp.sh.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/templates/run_nsfodp.sh.j2
new file mode 100755
index 00000000..8011a8ea
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_updatesite/templates/run_nsfodp.sh.j2
@@ -0,0 +1,43 @@
+#!/bin/bash
+# Open the zipped project and run the Maven script as NSFDP
+# This script assumes that the project was configured following the conventions
+# in the Domino Visual Editor and Domino On Disk Project templates in Moonshine-IDE
+#
+# USAGE: ./run_nsfodp.sh <zip-file>
+# Parameters:
+# - <zip-file>. The path to the zip file containing the application to deploy.
+# The zip is expected to contain pom.xml at the top level (no parent directory).
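+# EXAMPLE: ./run_nsfodp.sh /tmp/myproject.zip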
+
+set -e
+
+ZIP_FILE=$1
+# TODO: move this to parameter?
+DATABASE=nsfs/nsf-moonshine/target/nsf-moonshine-domino-1.0.0.nsf
+
+echo "Starting NSFODP for file $ZIP_FILE"
+
+# change permissions to ensure command runs cleanly
+TIMESTAMP=`date +%Y%m%d%H%M%S`
+TMP_DIR=/tmp/nsfodp/$TIMESTAMP
+mkdir -p $TMP_DIR
+
+# unzip and setup project
+cd $TMP_DIR
+unzip $ZIP_FILE
+
+# Read default user password
+PASSWORD=$(jq -r '.serverSetup | .admin | .password' {{ domino_home_dir }}/setup.json)
+
+# Run Maven
+yes "$PASSWORD" | mvn clean install
+
+# copy output file
+OUTPUT_DIR=/tmp/restinterface/generated/
+OUTPUT_FILE=$OUTPUT_DIR/$TIMESTAMP.nsf
+mkdir -p $OUTPUT_DIR
+cp "$DATABASE" "$OUTPUT_FILE"
+echo "Generated Database: '$OUTPUT_FILE'"
+
+
+# Cleanup
+sudo rm -rf "$TMP_DIR"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/defaults/main.yml
new file mode 100644
index 00000000..074a2aa0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/defaults/main.yml
@@ -0,0 +1,2 @@
+---
+show_help: true
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/meta/main.yml
new file mode 100644
index 00000000..25164f6d
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: convenience
+ author: MarkProminic
+ description: Displays common commands users may need to run on the VM post provisioning
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/tasks/main.yml
new file mode 100755
index 00000000..9fdf7be2
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/tasks/main.yml
@@ -0,0 +1,19 @@
+---
+-
+ name: "Generating Help File"
+ become: true
+ ansible.builtin.template:
+ mode: '0644'
+ dest: /tmp/CommandHelp.txt
+ src: CommandHelp.txt.j2
+
+-
+ name: "Registering Output of available Commands"
+ ansible.builtin.command: cat /tmp/CommandHelp.txt
+ register: help_commands
+ changed_when: false
+
+-
+ name: "Outputting available Help Text"
+ ansible.builtin.debug:
+ var: help_commands.stdout_lines
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/templates/CommandHelp.txt.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/templates/CommandHelp.txt.j2
new file mode 100755
index 00000000..1d4c00be
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_readme/templates/CommandHelp.txt.j2
@@ -0,0 +1,82 @@
+------------------------------------------------------------------------------------
+### Domino commands ###
+------------------------------------------------------------------------------------
+1) View the status of the automatic setup:
+ cat {{ domino_home_dir }}/IBM_TECHNICAL_SUPPORT/autoconfigure.log
+
+2) Start Domino via screen or "sudo service domino start":
+ Screen)
+ ./vagrant_ssh.sh
+ screen
+ sudo su
+ su - domino
+ {{ domino_install_basedir }}/bin/server
+ "-d" to detach from screen
+ "screen -ls" to list screen sessions
+ "screen -r" to resume screen session
+ Service)
+ "sudo service domino restart"
+
+
+3) Issue Domino commands
+ help
+ sh server
+ sh tasks
+ load http
+ tell http show thread state
+ tell http show security
+ load runjava
+ tell runjava quit
+ tell runjava show tasks
+ exit
+
+4) Retrieve the user.id from {{ service_home_dir }}/user.id via:
+ 1.HTTP: https://{{ ansible_all_ipv4_addresses[0] }}/downloads
+ 2.SFTP/SCPing
+
+ The user.id password is the one defined in the setup.json which is currently 'password'
+ You can now use this user.id in a Notes, Administrator or Designer Client.
+
+-----------------------------------------------------------------------------
+### Begin Stand-alone Java testing commands ###
+-----------------------------------------------------------------------------
+
+Run these steps ONCE after creating a fresh instance to generate {{ service_home_dir }}/names.nsf:
+-Set the java_helper_application_install variable to true in your Hosts.yaml
+-Run "vagrant provision". This will:
+ Create a fresh names database: `cd {{ service_home_dir }}; java -jar ./CreateNamesDatabase.jar`
+ Password is the one defined in the setup.json which is currently "{{ domino_admin_notes_id_password }}"
+
+Access the server via "vagrant ssh"
+Test stand-alone Java API operations:
+-Verify user:
+ `cd {{ service_home_dir }}; java -jar ./CheckNotesUser.jar`
+
+-Verify connection to database:
+ `cd {{ service_home_dir }}; java -jar ./CheckDatabase.jar `
+
+Source code and build scripts are located in the VM here:
+ /vagrant/domino/domino-java-helpers
+
+-----------------------------------------------------------------------------
+### Java servlet references ###
+-----------------------------------------------------------------------------
+
+Managing Java™ servlets on a Web server
+ https://help.hcltechsw.com/domino/12.0.0/admin/conf_managingjavaservletsonawebserver_t.html
+
+Including Java™ servlets in Web applications
+ https://help.hcltechsw.com/dom_designer/12.0.0/basic/H_OVERVIEW_OF_JAVA_SERVLETS.html
+
+Java Servlet Specification (prior to EE4J initiative transition):
+ https://javaee.github.io/servlet-spec/
+
+Eclipse EE4J Project:
+ https://github.com/eclipse-ee4j
+
+Java Servlet 4.0 API Specification:
+ https://javadoc.io/doc/javax.servlet/javax.servlet-api/latest/index.html
+
+Introduction to Servlets with examples:
+ https://www3.ntu.edu.sg/home/ehchua/programming/java/JavaServlets.html
+-----------------------------------------------------------------------------
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/defaults/main.yml
new file mode 100644
index 00000000..4c6ee006
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/defaults/main.yml
@@ -0,0 +1,11 @@
+---
+rest_app_home_dir: "/opt/rest-interface"
+rest_config_file: "/config/application.yml"
+rest_vagrant_crud_version: 0.1.7
+rest_vagrant_crud_archive: VagrantCRUD_centos7.zip
+rest_interface_jar: rest-interface-0.1.5.jar
+domino_vagrant_rest_api_port_forwards:
+ -
+ guest: 8080
+ url: "restapi"
+domino_vagrant_rest_api_proxy_url: "{{ domino_vagrant_rest_api_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/meta/main.yml
new file mode 100644
index 00000000..50d30054
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: domino_vagrant_rest_api
+ author: MarkProminic
+ description: Rest Interface
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/tasks/main.yml
new file mode 100755
index 00000000..f2c8bb56
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/tasks/main.yml
@@ -0,0 +1,127 @@
+---
+-
+ name: "Creating Installation Directories"
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: directory
+ owner: "{{ service_user }}"
+ with_items:
+ - "{{ installer_dir }}/restapi/rest"
+ - "{{ installer_dir }}/restapi/archives"
+
+-
+ name: "Checking if the Rest Interface deployed"
+ register: rest_interface_deployed
+ ansible.builtin.stat:
+ path: "{{ rest_app_home_dir }}/environment"
+ get_md5: false
+
+-
+ name: "Downloading Domino VagrantCRUD API {{ rest_vagrant_crud_version }}"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.get_url:
+ mode: '0644'
+ url: "https://github.com/Moonshine-IDE/Vagrant-REST-Interface/releases/download/{{ rest_vagrant_crud_version }}/{{ rest_vagrant_crud_archive }}"
+ dest: "{{ installer_dir }}/restapi/archives/{{ rest_vagrant_crud_archive }}"
+
+-
+ name: "Extracting Domino VagrantCRUD API from {{ rest_vagrant_crud_archive }}"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.unarchive:
+ src: "{{ installer_dir }}/restapi/archives/{{ rest_vagrant_crud_archive }}"
+ dest: "{{ installer_dir }}/restapi/rest"
+ remote_src: true
+ creates: "{{ installer_dir }}/restapi/rest/*.jar"
+
+-
+ name: "Making Rest Libraries and Binaries executable"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.file:
+ path: "{{ item }}"
+ owner: "{{ service_user }}"
+ mode: "a+x"
+ with_items:
+ - "{{ installer_dir }}/restapi/rest/provision.sh"
+ - "{{ installer_dir }}/restapi/rest/always.sh"
+
+-
+ name: "Cleaning up first before installing Vagrant Rest interface"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.file:
+ path: "{{ rest_app_home_dir }}"
+ state: absent
+ owner: "{{ service_user }}"
+ mode: "0744"
+
+-
+ name: "Installing Vagrant Rest interface"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.file:
+ path: "{{ item.path }}"
+ state: directory
+ owner: "{{ service_user }}"
+ mode: "0744"
+ with_items:
+ - { path: "/opt/domino/scripts" }
+ - { path: "{{ rest_app_home_dir }}" }
+ - { path: "{{ rest_app_home_dir }}/bin" }
+ - { path: "{{ rest_app_home_dir }}/log" }
+ - { path: "{{ rest_app_home_dir }}/config" }
+
+-
+ name: "Copying Rest interface Jar to working path"
+ become: true
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.copy:
+ src: "{{ installer_dir }}/restapi/rest/{{ rest_interface_jar }}"
+ dest: "{{ rest_app_home_dir }}/bin/{{ rest_interface_jar }}"
+ mode: "a+x"
+ owner: "{{ service_user }}"
+ remote_src: true
+
+-
+ name: "Placing Domino Specific Configuration and scripts"
+ become: true
+ when: domino_home_dir is defined and domino_install_dir is defined and not rest_interface_deployed.stat.exists
+ ansible.builtin.template:
+ owner: "{{ item.owner }}"
+ src: "{{ item.src }}"
+ dest: "{{ item.dest }}"
+ mode: "a+x"
+ loop:
+ - { src: 'deploy_database.sh.j2', dest: '/opt/domino/scripts/deploy_database.sh', owner: '{{ service_user }}' }
+ - { src: 'deploy_html.sh.j2', dest: '/opt/domino/scripts/deploy_html.sh', owner: '{{ service_user }}' }
+ - { src: 'run_dxl_importer.sh.j2', dest: '/opt/domino/scripts/run_dxl_importer.sh', owner: '{{ service_user }}'}
+
+-
+ name: "Placing Configuration and scripts"
+ become: true
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.template:
+ owner: "{{ item.owner }}"
+ src: "{{ item.src }}"
+ dest: "{{ item.dest }}"
+ mode: "a+x"
+ loop:
+ - { src: 'rest_config.yml.j2', dest: '{{ rest_app_home_dir }}{{ rest_config_file }}', owner: '{{ service_user }}' }
+ - { src: 'restapi.service.j2', dest: '/etc/systemd/system/restapi.service', owner: '{{ service_user }}' }
+ - { src: 'environment.j2', dest: '{{ rest_app_home_dir }}/environment', owner: '{{ service_user }}' }
+
+-
+ name: "Starting Vagrant CRUD Rest API"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.service:
+ name: restapi
+ state: started
+ enabled: true
+
+-
+ name: "Checking Vagrant CRUD Rest API is listening on port {{ domino_vagrant_rest_api_port_forwards[0].guest }}"
+ when: not rest_interface_deployed.stat.exists
+ ansible.builtin.wait_for:
+ port: "{{ domino_vagrant_rest_api_port_forwards[0].guest }}"
+ delay: 5
+ timeout: 60
+ msg: "Timeout waiting for {{ domino_vagrant_rest_api_port_forwards[0].guest }} to respond"
+ register: port_check
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/deploy_database.sh.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/deploy_database.sh.j2
new file mode 100755
index 00000000..84f9608d
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/deploy_database.sh.j2
@@ -0,0 +1,150 @@
+#!/bin/bash
+# USAGE: ./deploy_database.sh <database-file> <target-path>
+# Database will be uploaded to the server at the given path (relative to the data directory).
+# EXAMPLE: ./deploy_database.sh /tmp/upload.nsf test/mydatabase.nsf
+
+set -e
+
+# Parameters
+ORIGINAL_DATABASE=$1
+DBNAME=$2
+# TODO: make title a parameter
+TITLE=`printf "$DBNAME" | sed 's/\.nsf$//' | sed 's:^.*\/\([^/]*\)$:\1:'`
+echo "Uploading database $ORIGINAL_DATABASE to $DBNAME ($TITLE)"
+
+# add a timestamp to the JSON file to avoid conflicts
+TIMESTAMP=`date +%Y%m%d%H%M%S`
+JSON_NAME=create_${TIMESTAMP}.json
+JSON_TMP=/tmp/${JSON_NAME}
+JSON_TRIGGER_DIR={{ domino_home_dir }}/JavaAddin/Genesis/json
+JSON_TRIGGER_FILE=$JSON_TRIGGER_DIR/$JSON_NAME
+
+# Also copy the database to a temporary path with a timestamp
+# If I use the same template path for multiple databases, I run into caching issues on the Domino side.
+TEMP_PATH=/tmp/${TIMESTAMP}_template.nsf
+cp "$ORIGINAL_DATABASE" "$TEMP_PATH"
+
+# Determine if the database should be overwritten
+FULL_TARGET_PATH={{ domino_home_dir }}/$DBNAME
+REPLACE=false
+if [ -e "$FULL_TARGET_PATH" ]; then
+ REPLACE=true;
+fi
+
+# create the trigger file
+# For now, I am giving "-Default-" designer access so that the user is able to deploy agents.
+# TODO: add an entry for the safe ID user(s) instead.
+cat > /tmp/$JSON_NAME << EndOfMessage
+{
+ "title": "Upload Database",
+ "versionjson": "1.0.0",
+
+ "steps": [
+ {
+ "title": "Uploading database",
+ "databases": [
+ {
+ "action": "create",
+ "title": "$TITLE",
+ "filePath": "$DBNAME",
+ "templatePath": "$TEMP_PATH",
+ "sign": true,
+ "replace": $REPLACE,
+ "ACL": {
+ "ACLEntries": [
+ {
+ "name": "-Default-",
+ "level": "designer",
+ "type": "unspecified",
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "canDeleteDocuments": true,
+ "canCreateLSOrJavaAgent": true,
+ "canReplicateOrCopyDocuments": true
+ },
+ {
+ "name": "Anonymous",
+ "level": "depositor",
+ "type": "person",
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ },
+ {
+ "name": "AutomaticallyCrossCertifiedUsers",
+ "level": "manager",
+ "type": "mixedGroup",
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "canDeleteDocuments": true
+ },
+ {
+ "name": "LocalDomainAdmins",
+ "level": "manager",
+ "type": "personGroup",
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "canDeleteDocuments": true
+ },
+ {
+ "name": "LocalDomainServers",
+ "level": "manager",
+ "type": "serverGroup",
+ "isPublicReader": true,
+ "isPublicWriter": true
+ },
+ {
+ "name": "OtherDomainServers",
+ "level": "noAccess",
+ "type": "serverGroup",
+ "isPublicReader": true,
+ "isPublicWriter": true
+ },
+ {
+ "name": "CN=Demo Admin/O=DEMO",
+ "level": "manager",
+ "type": "person",
+ "isPublicReader": true,
+ "isPublicWriter": true,
+ "canDeleteDocuments": true
+ }
+ ]
+ }
+ }
+ ]
+ }
+
+ ]
+}
+EndOfMessage
+
+# update the permissions
+sudo chown domino.domino "$TEMP_PATH"
+sudo chown domino.domino $JSON_TMP
+
+# Make sure the Genesis directory exists
+sudo mkdir -p "$JSON_TRIGGER_DIR"
+sudo chown -R domino.domino "$JSON_TRIGGER_DIR"
+
+# copy the file to trigger the action
+sudo mv $JSON_TMP $JSON_TRIGGER_FILE
+echo "Starting import: $JSON_TRIGGER_FILE"
+
+# wait for the file to be removed
+while [ -e $JSON_TRIGGER_FILE ]
+do
+ sleep 1
+done
+
+# cleanup temporary files
+sudo rm -f "$TEMP_PATH"
+# JSON file was moved and cleaned by Genesis
+
+# check response file for error messages
+RESPONSE_FILE=$(printf "$JSON_TRIGGER_FILE" | sed 's:/json/:/jsonresponse/:')
+RESPONSE=$(cat "$RESPONSE_FILE" | tr -d '\n')
+if [ "$RESPONSE" = "OK" ]; then
+ echo "Successfully deployed database."
+else
+ echo "Deployment failed: '$RESPONSE'"
+ exit 1
+fi
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/deploy_html.sh.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/deploy_html.sh.j2
new file mode 100755
index 00000000..decc87f0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/deploy_html.sh.j2
@@ -0,0 +1,36 @@
+#!/bin/bash
+# Deploy an HTML application to the Domino server (domino/html)
+# USAGE: ./deploy_html.sh <zip-file> <target-dir>
+# Parameters:
+# - <zip-file>. The path to the zip file containing the application to deploy.
+# - <target-dir>. The directory relative to domino/html. The zip will be extracted into this directory. If the directory exists, it will be recreated.
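+# EXAMPLE: ./deploy_html.sh /tmp/myapp.zip myapp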
+
+set -e
+
+ZIP_FILE=$1
+TARGET_DIR=$2
+
+DOMINO_HTML_DIR={{ domino_home_dir }}/domino/html
+TARGET_FULL=$DOMINO_HTML_DIR/$TARGET_DIR
+
+echo "Deploying $ZIP_FILE to $TARGET_FULL"
+
+# change permissions to ensure command runs cleanly
+TIMESTAMP=`date +%Y%m%d%H%M%S`
+ZIP_FILE_CHOWN=${ZIP_FILE}.${TIMESTAMP}.zip
+cp "$ZIP_FILE" "$ZIP_FILE_CHOWN"
+sudo chown domino.domino "$ZIP_FILE_CHOWN"
+
+# cleanup existing application
+if [ -e "$TARGET_FULL" ]; then
+ sudo rm -r "$TARGET_FULL";
+fi
+
+# ensure directory exists
+sudo su -c "mkdir -p '$TARGET_FULL'" - domino
+
+# unzip the application
+sudo su -c "unzip -q -d '$TARGET_FULL' '$ZIP_FILE_CHOWN'" - domino
+
+# Cleanup
+sudo rm -f "$ZIP_FILE_CHOWN"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/environment.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/environment.j2
new file mode 100755
index 00000000..d2d7dd08
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/environment.j2
@@ -0,0 +1,22 @@
+JAVA_HOME={{ service_home_dir }}/.sdkman/candidates/java/current
+
+GRADLE_HOME={{ service_home_dir }}/.sdkman/candidates/gradle/current
+MAVEN_HOME={{ service_home_dir }}/.sdkman/candidates/maven/current
+
+SDKMAN_CANDIDATES_DIR={{ service_home_dir }}/.sdkman/candidates
+SDKMAN_VERSION=5.16.0
+SDKMAN_DIR={{ service_home_dir }}/.sdkman
+SDKMAN_CANDIDATES_API=https://api.sdkman.io/2
+SDKMAN_PLATFORM=linuxx64
+
+{% if domino_install_dir is defined %}
+ LD_LIBRARY_PATH={{ domino_install_dir }}/
+{% endif %}
+
+
+
+HOME={{ service_home_dir }}
+
+PATH={{ service_home_dir }}/.sdkman/candidates/maven/current/bin:{{ service_home_dir }}/.sdkman/candidates/java/current/bin:{{ service_home_dir }}/.sdkman/candidates/gradle/current/bin:/usr/local/bin:/usr/bin:/bin
+
+
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/rest_config.yml.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/rest_config.yml.j2
new file mode 100755
index 00000000..a068ff26
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/rest_config.yml.j2
@@ -0,0 +1,13 @@
+grails:
+ controllers:
+ upload:
+ maxFileSize: 104857600
+ maxRequestSize: 104857600
+restinterface:
+ capabilities:
+ - upload-database
+ - upload-html
+ - java-domino-gradle
+ - nsfodp
+ serverName: demo/DEMO
+ baseURL: https://127.0.0.1:{{ domino_vagrant_rest_api_port_forwards[0].guest }}
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/restapi.service.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/restapi.service.j2
new file mode 100755
index 00000000..fb0e2827
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/restapi.service.j2
@@ -0,0 +1,18 @@
+[Unit]
+Description=Vagrant Crud Rest API
+After=syslog.target network.target
+
+[Service]
+Type=simple
+
+User={{ service_user }}
+
+Restart=always
+
+EnvironmentFile={{ rest_app_home_dir }}/environment
+WorkingDirectory={{ rest_app_home_dir }}
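+# Run the REST interface jar with the SDKMAN-managed Java; the external config at {{ rest_app_home_dir }}{{ rest_config_file }} overrides the application.yml bundled in the jar.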
+ExecStart={{ service_home_dir }}/.sdkman/candidates/java/current/bin/java -Xmx1024m -Dgrails.env=prod -Dlogging.level.root=ERROR -Dserver.port={{ domino_vagrant_rest_api_port_forwards[0].guest }} -Dspring.config.location=classpath:application.yml,optional:file:{{ rest_app_home_dir }}{{ rest_config_file }} -jar {{ rest_app_home_dir }}/bin/{{ rest_interface_jar }}
+ExecStop=/bin/kill -15 $MAINPID
+
+[Install]
+WantedBy=multi-user.target
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/run_dxl_importer.sh.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/run_dxl_importer.sh.j2
new file mode 100755
index 00000000..89b15386
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_vagrant_rest_api/templates/run_dxl_importer.sh.j2
@@ -0,0 +1,40 @@
+#!/bin/bash
+# Open the zipped project and run the Gradle script as DXL Importer.
+# This script assumes that the project was configured following the conventions
+# in https://github.com/Moonshine-IDE/DXLImporter-Gradle-Demo.
+#
+# USAGE: ./run_dxl_importer.sh <zip-file>
+# Parameters:
+# - <zip-file>. The path to the zip file containing the application to deploy.
+# The zip is expected to contain build.gradle at the top level (no parent directory).
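+# EXAMPLE: ./run_dxl_importer.sh /tmp/dxl-project.zip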
+
+set -e
+
+ZIP_FILE=$1
+
+echo "Starting DXL Import for file $ZIP_FILE"
+
+# change permissions to ensure command runs cleanly
+TIMESTAMP=`date +%Y%m%d%H%M%S`
+TMP_DIR=/tmp/dxlimporter/$TIMESTAMP
+mkdir -p $TMP_DIR
+
+# unzip and setup project
+cd $TMP_DIR
+unzip $ZIP_FILE
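+# Make the server's notes.ini available in the project directory for the Gradle build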
+ln -s {{ service_home_dir }}/notes.ini
+
+# Update Domino path
+DOMINO_INSTALL_PATH={{ domino_install_dir }}
+REPLACE_VAR=notesInstallation
+REPLACE_FILE=gradle.properties
+sed -i "s:^$REPLACE_VAR=.*$:$REPLACE_VAR=$DOMINO_INSTALL_PATH:" $REPLACE_FILE
+
+# Read default user password
+PASSWORD=$(jq -r '.serverSetup | .admin | .password' {{ domino_home_dir }}/setup.json)
+
+# Run Gradle
+gradle -PnotesIDPassword="$PASSWORD" clean importAll
+
+# Cleanup
+sudo rm -rf "$TMP_DIR"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/defaults/main.yml
new file mode 100644
index 00000000..ca6023ef
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/defaults/main.yml
@@ -0,0 +1,12 @@
+---
+verse_archive: HCL_Verse_3.0.0.zip
+verse_base_version: 3.0.0
+verse_debug: true
+domino_verse_port_forwards:
+ -
+ guest: "{{ domino_install_port_forwards[0].guest }}"
+ url: "verse"
+ -
+ guest: "{{ domino_install_port_forwards[1].guest }}"
+ url: "verse"
+domino_verse_proxy_url: "{{ domino_verse_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/meta/main.yml
new file mode 100644
index 00000000..8ed912ca
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: verse
+ author: MarkProminic
+ description: Install and setup verse
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/tasks/main.yml
new file mode 100755
index 00000000..2262c507
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/tasks/main.yml
@@ -0,0 +1,88 @@
+---
+-
+ name: "Creating installation directories for Verse"
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: directory
+ with_items:
+ - "{{ installer_dir }}/verse/archives"
+ - "{{ installer_dir }}/verse/Verse"
+
+-
+ name: "Checking if Verse is installed: {{ verse_base_version }}"
+ register: verse_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/verse_install"
+ get_md5: false
+
+-
+ name: "Checking if Verse installer is at verse/archives/{{ verse_archive }}"
+ register: verse_installer_check
+ ansible.builtin.stat:
+ path: "{{ installer_dir }}/verse/archives/{{ verse_archive }}"
+ get_md5: false
+
+-
+ name: "Downloading Verse from {{ domino_installer_base_url }}"
+ register: verseresult
+ until: "verseresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: "{{ domino_installer_base_url }}/Verse/{{ verse_archive }}"
+ dest: "{{ installer_dir }}/verse/archives/{{ verse_archive }}"
+ username: "{{ domino_installer_url_user }}"
+ password: "{{ secrets.domino_installer_url_pass }}"
+ when: not verse_installer_check.stat.exists and not verse_installed_check.stat.exists
+
+-
+ name: "Extracting Verse from {{ verse_archive }}"
+ when: not verse_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/verse/archives/{{ verse_archive }}"
+ dest: "{{ installer_dir }}/verse/Verse"
+ creates: "{{ installer_dir }}/verse/Verse/HCL_Verse_{{ verse_base_version }}"
+ remote_src: true
+
+-
+ name: "Stopping Domino for Verse Installation"
+ when: not verse_installed_check.stat.exists
+ become: true
+ ansible.builtin.service:
+ name: domino
+ state: stopped
+ enabled: true
+
+-
+ name: "Extracting Verse Files from HCL_Verse.zip: {{ verse_base_version }}"
+ when: not verse_installed_check.stat.exists
+ ansible.builtin.unarchive:
+ mode: "a+x"
+ owner: "{{ domino_user }}"
+ group: "{{ domino_group }}"
+ src: "{{ installer_dir }}/verse/Verse/HCL_Verse_{{ verse_base_version }}/HCL_Verse.zip"
+ dest: "{{ domino_home_dir }}/domino/workspace/applications"
+ creates: "{{ domino_home_dir }}/domino/workspace/applications/ats-3.0.0-0.0-773.jar"
+ remote_src: true
+
+-
+ name: "Starting Domino"
+ when: not verse_installed_check.stat.exists
+ ansible.builtin.service:
+ name: domino
+ state: started
+ enabled: true
+
+-
+ name: "Setting Verse as installed"
+ when: not verse_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/verse_install"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/templates/create-database-v1.json b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/templates/create-database-v1.json
new file mode 100755
index 00000000..b01bd0bc
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/templates/create-database-v1.json
@@ -0,0 +1,36 @@
+{
+ "title": "Create iwaredir database from Template",
+ "versionjson": "1.0.0",
+ "steps": [
+ {
+ "title": "--- Step 1. Create database from template ---",
+ "databases": [
+ {
+ "action": "create",
+ "title": "IWAREDIR",
+ "filePath": "webmail.nsf",
+ "templatePath": "iwaredir.ntf",
+ "sign": true,
+ "replace": true,
+
+ "documents":[
+ {
+ "action":"create",
+ "items":{
+ "Item1":["TravelerAdmin","Freebusy","Mail"],
+ "Item2":[0, 5, 6],
+ "Item3":55,
+ "Item4": "Hello"
+ }
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "title": "--- Step 2 (final). Completed ---",
+ "messages": [
+ "Verse Webmail Redirector Database Configured"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/templates/update-internet-sites-docs-v4.json b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/templates/update-internet-sites-docs-v4.json
new file mode 100755
index 00000000..a24cfd22
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/domino_verse/templates/update-internet-sites-docs-v4.json
@@ -0,0 +1,70 @@
+{
+ "title": "TEST JSON",
+ "versionjson": "1.0.0",
+ "steps": [
+ {
+ "title": "--- Step 1. Export original data ---",
+ "databases": [
+ {
+ "action": "export",
+ "filePath": "names.nsf",
+ "documents": [
+ {
+ "action": "export",
+ "filePath": "JavaAddin\\Genesis\\jsonresponse\\myexport_before.txt",
+ "search": {
+ "formula": "Type=\"WebSite\""
+ },
+ "evaluate": "\"WSEnabledServices=\"+@Implode(WSEnabledServices)"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "title": "--- Step 2. Create database from template ---",
+ "databases": [
+ {
+ "action": "update",
+ "filePath": "names.nsf",
+ "documents": [
+ {
+ "action": "update",
+ "search": {
+ "formula": "Type=\"WebSite\""
+ },
+ "computeWithForm": true,
+ "evaluate": "@SetField(\"WSEnabledServices\"; @Trim(@Unique(WSEnabledServices : \"Freebusy\" : \"Mail\")));"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "title": "--- Step 3. Export updated data ---",
+ "databases": [
+ {
+ "action": "export",
+ "filePath": "names.nsf",
+ "documents": [
+ {
+ "action": "export",
+ "filePath": "JavaAddin\\Genesis\\jsonresponse\\myexport_after.txt",
+ "search": {
+ "formula": "Type=\"WebSite\""
+ },
+ "evaluate": "\"WSEnabledServices=\"+@Implode(WSEnabledServices)"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "title": "--- Step 4 (final). Completed ---",
+ "messages": [
+ "You have run test.json file successfully"
+ ]
+ }
+
+ ]
+}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/defaults/main.yml
new file mode 100644
index 00000000..cac380da
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/defaults/main.yml
@@ -0,0 +1,6 @@
+---
+gradle_version: 5.4.1
+sdkman_update_alternatives:
+ - candidate: gradle
+ name: gradle
+ link: /usr/bin/gradle
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/meta/main.yml
new file mode 100644
index 00000000..67773cd5
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/meta/main.yml
@@ -0,0 +1,34 @@
+galaxy_info:
+ role_name: sdkman_gradle
+ author: MarkProminic
+ description: Install and Enable Gradle
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/tasks/main.yml
new file mode 100755
index 00000000..a998bd08
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_gradle/tasks/main.yml
@@ -0,0 +1,35 @@
+---
+-
+ name: "Installing gradle {{ gradle_version }}"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ executable: "/bin/bash"
+ creates: "{{ service_home_dir }}/.sdkman/candidates/gradle/current/bin/gradle"
+ with_items:
+ - "source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh && sdk install gradle {{ gradle_version }} && sdk default gradle {{ gradle_version }}"
+
+-
+ name: "Adding Gradle to /etc/profile.d/gradle.sh for {{ service_user }}"
+ ansible.builtin.lineinfile:
+ mode: '0755'
+ path: "{{ item.dir }}"
+ regexp: "^PATH=\"$PATH:{{ service_home_dir }}/.sdkman/candidates/gradle/current/bin\""
+ insertbefore: EOF
+ create: true
+ line: 'PATH=$PATH:{{ service_home_dir }}/.sdkman/candidates/gradle/current/bin'
+ with_items:
+ - { user: "{{ service_user }}", dir: "/etc/profile.d/gradle.sh" }
+
+-
+ name: "Updating alternatives for Gradle"
+ community.general.alternatives:
+ name: "{{ item.name }}"
+ path: "{{ sdkman_dir }}/candidates/{{ item.candidate }}/current/bin/{{ item.name }}"
+ link: "{{ item.link }}"
+ loop: "{{ sdkman_update_alternatives }}"
+ become: true
+ when: ansible_os_family != 'Darwin'
+ tags:
+ - sdkman_privilege
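
A minimal usage sketch for the SDKMAN roles above (variable names come from the defaults in this diff; the host, user, and version values are illustrative only). `sdkman_install` must run before `sdkman_gradle` so that `~/.sdkman` and `sdkman-init.sh` exist:

```yaml
# Hypothetical playbook snippet -- not part of this changeset.
- hosts: all
  vars:
    service_user: startcloud           # assumed user; set to whatever the provisioner uses
    service_home_dir: /home/startcloud
    gradle_version: 7.6                # overrides the 5.4.1 default above
  roles:
    - sdkman_install                   # provides ~/.sdkman and sdkman-init.sh
    - sdkman_gradle                    # installs Gradle and links /usr/bin/gradle
```
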
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/defaults/main.yml
new file mode 100644
index 00000000..0f415875
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/defaults/main.yml
@@ -0,0 +1,56 @@
+---
+# defaults file for ansible-sdkman
+
+# Installation directory defaults to the HOME directory of the `service_user`
+# Set sdkman_dir to override
+# sdkman_dir: /usr/local/sdkman
+
+# The directory in which to download the SDKMAN initialization script.
+sdkman_tmp_dir: /tmp
+
+# Validate SSL when downloading init script?
+# This is different from `sdkman_insecure_ssl' (see below).
+sdkman_validate_ssl: true
+
+# Update SDKMAN?
+sdkman_update: true
+
+# Configuration file options
+# Reference: http://sdkman.io/usage (Configuration section)
+sdkman_auto_answer: true
+sdkman_selfupdate_enable: false
+sdkman_insecure_ssl: false
+sdkman_disable_gvm_alias: false
+sdkman_curl_connect_timeout: 7
+sdkman_curl_max_time: 10
+sdkman_beta_channel: false
+sdkman_debug_mode: false
+sdkman_colour_enable: true
+
+# (un)install no packages by default, see format below
+sdkman_install_packages: []
+# sdkman_install_packages:
+# - { candidate: gradle, version: '3.5' }
+# - { candidate: maven, version: 3.5.0 }
+sdkman_uninstall_packages: []
+# sdkman_uninstall_packages:
+# - { candidate: java, version: 6u45 }
+
+# Configure default candidate versions
+sdkman_defaults: {}
+# sdkman_defaults:
+# gradle: '3.5'
+# maven: '3.3.9'
+
+# Flush caches before/after installing SDK packages
+# Reference: http://sdkman.io/usage (Flush section)
+sdkman_flush_caches_before: []
+sdkman_flush_caches_after: []
+
+# Set SDKMAN to offline mode
+# Reference: https://sdkman.io/usage#offline
+sdkman_offline_mode: false
+
+# Link SDKMAN installed packages
+# Reference: https://linux.die.net/man/8/update-alternatives
+sdkman_update_alternatives: []
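
The commented examples in the defaults above translate directly into `group_vars` overrides. A hedged sketch (candidate names and versions are illustrative and must exist in SDKMAN's catalog):

```yaml
# Example group_vars/all.yml override -- versions are illustrative.
sdkman_install_packages:
  - { candidate: maven, version: '3.6.0' }
  - { candidate: gradle, version: '5.4.1' }
sdkman_defaults:
  maven: '3.6.0'
  gradle: '5.4.1'
sdkman_update_alternatives:
  - { candidate: maven, name: mvn, link: /usr/bin/mvn }
```
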
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/handlers/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/handlers/main.yml
new file mode 100644
index 00000000..0a546524
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/handlers/main.yml
@@ -0,0 +1,6 @@
+---
+-
+ name: Cleanup SDKMAN script
+ ansible.builtin.file:
+ path: '{{ sdkman_tmp_dir }}/sdkman_script'
+ state: absent
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/meta/main.yml
new file mode 100644
index 00000000..92d07706
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_sdkman_install
+ author: MarkProminic
+ description: SDKMAN installer
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/install.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/install.yml
new file mode 100644
index 00000000..b44ef436
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/install.yml
@@ -0,0 +1,89 @@
+---
+-
+ name: "Set SDKMAN user/group vars"
+ ansible.builtin.set_fact:
+ service_user: '{{ service_user | default(ansible_user_id) }}'
+ service_group: '{{ service_group | default(ansible_user_gid) }}'
+
+-
+ name: "Including system vars"
+ ansible.builtin.include_vars: "{{ lookup('first_found', params) }}"
+ vars:
+ params:
+ files:
+ - '{{ ansible_distribution }}.yml'
+ - '{{ ansible_os_family }}.yml'
+ - default.yml
+ paths:
+ - vars
+
+-
+ name: "Installing system packages"
+ ansible.builtin.package:
+ name: '{{ system_packages }}'
+ use: '{{ ansible_pkg_mgr }}'
+ state: present
+ become: true
+ tags:
+ - sdkman_privilege
+
+-
+ name: "Creating Ansible Temp Directory"
+ become: true
+ ansible.builtin.file:
+ mode: '0777'
+ path: "{{ item }}"
+ state: directory
+ owner: '{{ service_user }}'
+ group: '{{ service_group }}'
+ with_items:
+ - "{{ service_home_dir }}/.ansible/tmp"
+
+-
+ name: "Setting SDKMAN_DIR environment variable"
+ ansible.builtin.set_fact:
+ sdkman_dir: '{{ sdkman_dir | default(service_home_dir + "/.sdkman") }}'
+
+-
+ name: "Checking for SDKMAN installation"
+ ansible.builtin.stat:
+ path: '{{ sdkman_dir }}/bin/sdkman-init.sh'
+ register: sdkman_init
+
+-
+ name: "Downloading SDKMAN"
+ when: not sdkman_init.stat.exists
+ become: '{{ service_user != ansible_user_id }}'
+ become_user: '{{ service_user }}'
+ block:
+ -
+ name: "Downloading SDKMAN"
+ become: true
+ ansible.builtin.get_url:
+ mode: '0755'
+ url: https://get.sdkman.io
+ dest: '{{ sdkman_tmp_dir }}/sdkman_script'
+ owner: '{{ service_user }}'
+ group: '{{ service_group }}'
+ validate_certs: '{{ sdkman_validate_ssl }}'
+
+ -
+ name: "Running SDKMAN script"
+ environment:
+ SDKMAN_DIR: '{{ sdkman_dir }}'
+ ansible.builtin.command: /bin/bash {{ sdkman_tmp_dir }}/sdkman_script
+ args:
+ creates: '{{ sdkman_dir }}/bin/sdkman-init.sh'
+ notify: Cleanup SDKMAN script
+
+-
+ name: "Fixing permissions on SDKMAN_DIR"
+ ansible.builtin.file:
+ path: '{{ sdkman_dir }}'
+ state: directory
+ owner: '{{ service_user }}'
+ group: '{{ service_group }}'
+ recurse: true
+ become: true
+ tags:
+ - sdkman_privilege
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/main.yml
new file mode 100644
index 00000000..58257586
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/main.yml
@@ -0,0 +1,39 @@
+---
+-
+ name: "Installing SDKMAN"
+ ansible.builtin.include_tasks: install.yml
+
+-
+ name: "Installing SDKMAN"
+ environment:
+ SDKMAN_DIR: '{{ sdkman_dir }}'
+ SDKMAN_OFFLINE_MODE: 'false'
+ become: '{{ service_user != ansible_user_id }}'
+ become_user: '{{ service_user }}'
+ block:
+ -
+ name: "Running SDKMAN tasks"
+ ansible.builtin.include_tasks: sdkman.yml
+
+ -
+ name: "Persisting additional SDKMAN environment variables"
+ ansible.builtin.include_tasks: persist_env.yml
+ loop:
+ - .bash_profile
+ - .profile
+ - .bashrc
+ - .zshrc
+ loop_control:
+ loop_var: sdkman_profile
+
+-
+ name: "Updating alternatives"
+ community.general.alternatives:
+ name: "{{ item.name }}"
+ path: "{{ sdkman_dir }}/candidates/{{ item.candidate }}/current/bin/{{ item.name }}"
+ link: "{{ item.link }}"
+ loop: "{{ sdkman_update_alternatives }}"
+ become: true
+ when: ansible_os_family != 'Darwin'
+ tags:
+ - sdkman_privilege
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/persist_env.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/persist_env.yml
new file mode 100644
index 00000000..fb40154e
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/persist_env.yml
@@ -0,0 +1,20 @@
+---
+-
+ name: "Set path {{ sdkman_profile }}"
+ ansible.builtin.set_fact:
+ sdkman_profile_path: '{{ service_home_dir }}/{{ sdkman_profile }}'
+
+-
+ name: "Detect settings in {{ sdkman_profile_path }}"
+ ansible.builtin.command: "grep 'sdkman-init.sh' '{{ sdkman_profile_path }}'"
+ changed_when: false
+ failed_when: false
+ register: sdkman_profile_result
+
+-
+ name: "Add SDKMAN_OFFLINE_MODE to {{ sdkman_profile_path }}"
+ ansible.builtin.lineinfile:
+ path: "{{ sdkman_profile_path }}"
+ regexp: '^export SDKMAN_OFFLINE_MODE='
+ line: "export SDKMAN_OFFLINE_MODE={{ 'true' if sdkman_offline_mode else 'false' }}"
+ when: sdkman_profile_result.rc == 0
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/sdkman.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/sdkman.yml
new file mode 100644
index 00000000..059c9425
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/tasks/sdkman.yml
@@ -0,0 +1,87 @@
+---
+-
+ name: Configure SDKMAN
+ ansible.builtin.template:
+ src: templates/sdkman_config.j2
+ dest: '{{ sdkman_dir }}/etc/config'
+ owner: '{{ service_user }}'
+ group: '{{ service_group }}'
+ mode: '0755'
+
+-
+ name: Flush SDK caches (before)
+ ansible.builtin.shell: >-
+ . {{ sdkman_dir }}/bin/sdkman-init.sh && sdk flush {{ item }}
+ args:
+ executable: /bin/bash
+ loop: '{{ sdkman_flush_caches_before }}'
+ changed_when: false
+
+-
+ name: Update SDKMAN
+ ansible.builtin.shell: . {{ sdkman_dir }}/bin/sdkman-init.sh && sdk selfupdate
+ args:
+ executable: /bin/bash
+ register: sdk_selfupdate
+ changed_when: sdk_selfupdate.stdout != 'No update available at this time.'
+ when: sdkman_update | bool
+
+-
+ name: Install SDK candidates/versions
+ ansible.builtin.shell: >-
+ . {{ sdkman_dir }}/bin/sdkman-init.sh &&
+ sdk install {{ item.candidate }} {{ item.version | default('') }} {{ item.localpath | default('') }}
+ args:
+ executable: /bin/bash
+ loop: '{{ sdkman_install_packages }}'
+ register: sdk_install
+ changed_when: >-
+ 'is already installed.' not in sdk_install.stdout
+ failed_when: >-
+ sdk_install.rc != 0 and
+ 'is already installed.' not in sdk_install.stdout
+
+-
+ name: Uninstall SDK candidates/versions
+ ansible.builtin.shell: >-
+ . {{ sdkman_dir }}/bin/sdkman-init.sh &&
+ sdk uninstall {{ item.candidate }} {{ item.version }}
+ args:
+ executable: /bin/bash
+ loop: '{{ sdkman_uninstall_packages }}'
+ register: sdk_uninstall
+ changed_when: >-
+ not item.candidate + ' ' + item.version + ' is not installed.'
+ in sdk_uninstall.stdout
+
+-
+ name: Get SDK defaults
+ ansible.builtin.shell: . {{ sdkman_dir }}/bin/sdkman-init.sh && sdk current {{ item }}
+ args:
+ executable: /bin/bash
+ register: get_sdk_defaults
+ changed_when: false
+ loop: >-
+ {{ sdkman_install_packages | map(attribute="candidate") | unique | list }}
+
+-
+ name: Set SDK defaults
+ ansible.builtin.shell: >-
+ . {{ sdkman_dir }}/bin/sdkman-init.sh &&
+ sdk default {{ item.key }} {{ item.value }}
+ args:
+ executable: /bin/bash
+ loop: '{{ sdkman_defaults | dict2items }}'
+ changed_when: >-
+ not item.value in
+ (get_sdk_defaults.results |
+ selectattr('item', 'equalto', item.key) |
+ first).stdout
+
+-
+ name: Flush SDK caches (after)
+ ansible.builtin.shell: . {{ sdkman_dir }}/bin/sdkman-init.sh && sdk flush {{ item }}
+ args:
+ executable: /bin/bash
+ loop: '{{ sdkman_flush_caches_after }}'
+ changed_when: false
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/templates/sdkman_config.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/templates/sdkman_config.j2
new file mode 100644
index 00000000..61a763a4
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/templates/sdkman_config.j2
@@ -0,0 +1,9 @@
+sdkman_auto_answer={{ sdkman_auto_answer | lower }}
+sdkman_selfupdate_enable={{ sdkman_selfupdate_enable | lower}}
+sdkman_insecure_ssl={{ sdkman_insecure_ssl | lower }}
+sdkman_disable_gvm_alias={{ sdkman_disable_gvm_alias | lower }}
+sdkman_curl_connect_timeout={{ sdkman_curl_connect_timeout }}
+sdkman_curl_max_time={{ sdkman_curl_max_time }}
+sdkman_beta_channel={{ sdkman_beta_channel | lower }}
+sdkman_debug_mode={{ sdkman_debug_mode | lower }}
+sdkman_colour_enable={{ sdkman_colour_enable | lower }}
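
For reference, this is roughly how the role defaults map onto lines in the rendered `etc/config` file; a sketch shown as variable overrides with the rendered output in comments, not captured output:

```yaml
# Each override renders one line in {{ sdkman_dir }}/etc/config (illustrative).
sdkman_auto_answer: true           # -> sdkman_auto_answer=true
sdkman_selfupdate_enable: false    # -> sdkman_selfupdate_enable=false
sdkman_insecure_ssl: false         # -> sdkman_insecure_ssl=false
sdkman_curl_connect_timeout: 7     # -> sdkman_curl_connect_timeout=7
sdkman_curl_max_time: 10           # -> sdkman_curl_max_time=10
```
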
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/Alpine.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/Alpine.yml
new file mode 100644
index 00000000..5414bc6a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/Alpine.yml
@@ -0,0 +1,20 @@
+---
+# vars for Alpine systems
+system_packages:
+ - ca-certificates
+ - curl
+ - dpkg # for update-alternatives
+ - findutils
+ - libstdc++
+ - openssl
+ - unzip
+ - zip
+
+alpine_glibc_version: 2.32-r0
+alpine_glibc_download_url: https://github.com/sgerrand/alpine-pkg-glibc/releases/download/{{ alpine_glibc_version }}
+alpine_glibc_pubkey_url: https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
+alpine_glibc_pkg_names:
+ - glibc
+ - glibc-bin
+ - glibc-dev
+ - glibc-i18n
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/Debian.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/Debian.yml
new file mode 100644
index 00000000..0de15cef
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/Debian.yml
@@ -0,0 +1,11 @@
+---
+# vars for Debian systems
+service_user: "{{ service_user }}"
+service_group: "{{ service_user }}"
+system_packages:
+ - curl
+ - debianutils
+ - findutils
+ - tar
+ - unzip
+ - zip
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/RedHat.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/RedHat.yml
new file mode 100644
index 00000000..58b18835
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/RedHat.yml
@@ -0,0 +1,11 @@
+---
+# vars for Redhat systems
+system_packages:
+ - curl
+ - findutils
+ - libselinux-python3
+ - libstdc++
+ - tar
+ - unzip
+ - which
+ - zip
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/default.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/default.yml
new file mode 100644
index 00000000..83374314
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/default.yml
@@ -0,0 +1,3 @@
+---
+# vars file for ansible-sdkman
+system_packages: []
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/main.yml
new file mode 100644
index 00000000..83374314
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_install/vars/main.yml
@@ -0,0 +1,3 @@
+---
+# vars file for ansible-sdkman
+system_packages: []
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/defaults/main.yml
new file mode 100644
index 00000000..5cb32a1a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/defaults/main.yml
@@ -0,0 +1,13 @@
+---
+java_version: "LATEST"
+java_jvm: jre-8u251-linux-x64.tar.gz
+sdkman_update_alternatives:
+ - candidate: java
+ name: java
+ link: /usr/bin/java
+ - candidate: java
+ name: javac
+ link: /usr/bin/javac
+ - candidate: java
+ name: javadoc
+ link: /usr/bin/javadoc
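
The `LATEST` default makes the tasks below resolve the newest 8.0.x-zulu build via `sdk list java`. To pin a specific build instead, override `java_version` with an identifier SDKMAN actually lists; the value below is only an example:

```yaml
# Example host_vars override -- the identifier must appear in `sdk list java`.
java_version: 8.0.372-zulu
```
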
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/meta/main.yml
new file mode 100644
index 00000000..102d61ad
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_sdkman_java
+ author: MarkProminic
+ description: Install and Enable Java
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/tasks/main.yml
new file mode 100755
index 00000000..51c33a46
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_java/tasks/main.yml
@@ -0,0 +1,64 @@
+---
+ ## Do not quote the shell command below without reworking the command
+-
+ name: "Extracting Java SDK version from SDKMAN: {{ java_version }}"
+ ansible.builtin.shell: |
+ set -o pipefail
+ source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh
+ sdk list java | grep " 8.0.*-zulu" | grep -v "fx-" | sed 's/^.*\(8.0.[0-9]\+-zulu\)[ ]*$/\1/' | head -n 1
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ executable: "/bin/bash"
+ register: computed_java_version
+ when: "java_version == 'LATEST'"
+
+-
+ name: "Setting Java SDK version from SDKMAN: {{ java_version }}"
+ ansible.builtin.shell: "echo {{ java_version }}"
+ become: true
+ become_user: "{{ service_user }}"
+ args:
+ executable: "/bin/bash"
+ register: selected_java_version
+ when: "java_version != 'LATEST'"
+
+-
+ name: "Installing Java via SDKMAN: {{ java_version }}"
+ become: true
+ become_user: "{{ service_user }}"
+ ansible.builtin.shell: >-
+ . {{ sdkman_dir }}/bin/sdkman-init.sh &&
+ sdk install java {{ (computed_java_version.stdout | trim) if java_version == 'LATEST' else java_version }} {{ item.localpath | default('') }}
+ args:
+ executable: /bin/bash
+ register: sdk_install
+ changed_when: >-
+ 'is already installed.' not in sdk_install.stdout
+ failed_when: >-
+ sdk_install.rc != 0 and
+ 'is already installed.' not in sdk_install.stdout
+
+-
+ name: "Adding Java to /etc/profile.d/java.sh for {{ service_user }}"
+ ansible.builtin.lineinfile:
+ mode: '0755'
+ path: "{{ item.dir }}"
+ regexp: "^PATH=\"$PATH:{{ service_home_dir }}/.sdkman/candidates/java/current/bin\""
+ insertbefore: EOF
+ create: true
+ line: 'PATH=$PATH:{{ service_home_dir }}/.sdkman/candidates/java/current/bin'
+ with_items:
+ - { user: "{{ service_user }}", dir: "/etc/profile.d/java.sh" }
+
+-
+ name: "Updating Java Alternatives"
+ community.general.alternatives:
+ name: "{{ item.name }}"
+ path: "{{ sdkman_dir }}/candidates/{{ item.candidate }}/current/bin/{{ item.name }}"
+ link: "{{ item.link }}"
+ loop: "{{ sdkman_update_alternatives }}"
+ become: true
+ when: ansible_os_family != 'Darwin'
+ tags:
+ - sdkman_privilege
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/defaults/main.yml
new file mode 100644
index 00000000..d97bf5ae
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/defaults/main.yml
@@ -0,0 +1,7 @@
+---
+maven_version: 3.6.0
+
+sdkman_update_alternatives:
+ - candidate: maven
+ name: mvn
+ link: /usr/bin/mvn
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/meta/main.yml
new file mode 100644
index 00000000..ee131fc2
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_sdkman_maven
+ author: MarkProminic
+ description: Install and Enable Maven
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/tasks/main.yml
new file mode 100755
index 00000000..9b6e77b6
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/sdkman_maven/tasks/main.yml
@@ -0,0 +1,39 @@
+---
+ ## Do not quote the shell command below without reworking the command
+-
+ name: "Installing Maven to ~/.sdkman/candidates/maven/current/bin/mvn for {{ service_user }}"
+ ansible.builtin.shell: "{{ item }}"
+ become: true
+ become_user: "{{ service_user }}"
+ register: mvnmanresult
+ until: "mvnmanresult is not failed"
+ retries: 3
+ args:
+ executable: "/bin/bash"
+ creates: "{{ service_home_dir }}/.sdkman/candidates/maven/current/bin/mvn"
+ with_items:
+ - "source {{ service_home_dir }}/.sdkman/bin/sdkman-init.sh && sdk install maven {{ maven_version }} "
+
+-
+ name: "Adding Maven to /etc/profile.d/maven.sh for {{ service_user }}"
+ ansible.builtin.lineinfile:
+ mode: '0755'
+ path: "{{ item.dir }}"
+ regexp: "^PATH=\"$PATH:{{ service_home_dir }}/.sdkman/candidates/maven/current/bin\""
+ insertbefore: EOF
+ create: true
+ line: 'PATH=$PATH:{{ service_home_dir }}/.sdkman/candidates/maven/current/bin'
+ with_items:
+ - { user: "{{ service_user }}", dir: "/etc/profile.d/maven.sh" }
+
+-
+ name: "Updating alternatives for Maven"
+ community.general.alternatives:
+ name: "{{ item.name }}"
+ path: "{{ sdkman_dir }}/candidates/{{ item.candidate }}/current/bin/{{ item.name }}"
+ link: "{{ item.link }}"
+ loop: "{{ sdkman_update_alternatives }}"
+ become: true
+ when: ansible_os_family != 'Darwin'
+ tags:
+ - sdkman_privilege
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/defaults/main.yml
new file mode 100644
index 00000000..c2170f2a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/defaults/main.yml
@@ -0,0 +1,5 @@
+---
+chrome_signing_key_url: https://dl-ssl.google.com/linux/linux_signing_key.pub
+chrome_driver_version: 2.41
+chrome_driver_server_installer_tar: chromedriver_linux64.zip
+chrome_driver_url: https://chromedriver.storage.googleapis.com/
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/meta/main.yml
new file mode 100644
index 00000000..b08eebca
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_chrome
+ author: MarkProminic
+ description: Install Chrome for Web Dev
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/tasks/main.yml
new file mode 100644
index 00000000..63a6f89b
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_chrome/tasks/main.yml
@@ -0,0 +1,66 @@
+---
+-
+ name: "Creating Domino installation directories"
+ ansible.builtin.file:
+ path: "{{ item }}"
+ state: directory
+ recurse: true
+ with_items:
+ - "{{ installer_dir }}/chrome/archives"
+
+-
+ name: "Adding Chrome source Apt signing key"
+ ansible.builtin.apt_key:
+ state: present
+ url: "{{ chrome_signing_key_url }}"
+
+-
+ name: "Adding Chrome repository for {{ ansible_distribution_release }}"
+ ansible.builtin.lineinfile:
+ mode: "0755"
+ create: true
+ dest: "/etc/apt/sources.list.d/google-chrome.list"
+ line: "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main"
+ state: present
+
+-
+ name: "Ensuring apt cache is updated"
+ ansible.builtin.apt:
+ cache_valid_time: 3600
+ update_cache: true
+ when: "ansible_os_family == 'Debian'"
+
+-
+ name: "Upgrading all apt packages"
+ ansible.builtin.apt:
+ upgrade: dist
+ update_cache: true
+ when: "ansible_os_family == 'Debian'"
+
+-
+ name: "Installing Chrome"
+ ansible.builtin.apt:
+ name:
+ - google-chrome-stable
+ update_cache: true
+
+-
+ name: "Downloading Chromedriver from {{ chrome_driver_url }}"
+ register: domlsresult
+ until: "domlsresult is not failed"
+ retries: 3
+ ansible.builtin.get_url:
+ mode: "0755"
+ url: "{{ chrome_driver_url }}{{ chrome_driver_version }}/{{ chrome_driver_server_installer_tar }}"
+ dest: "{{ installer_dir }}/chrome/archives/{{ chrome_driver_server_installer_tar }}"
+
+-
+ name: "Extracting Chrome Driver version: {{ chrome_driver_version }} "
+ ansible.builtin.unarchive:
+ src: "{{ installer_dir }}/chrome/archives/{{ chrome_driver_server_installer_tar }}"
+ dest: "/tmp/"
+ creates: "/usr/bin/chromedriver"
+ remote_src: true
+ mode: "a+x"
+ owner: root
+ group: root
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/defaults/main.yml
new file mode 100644
index 00000000..745e2757
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/defaults/main.yml
@@ -0,0 +1,12 @@
+---
+packages:
+ - htop
+ - lynx
+ - lshw
+ - libxml2-utils
+ - tree
+ - python3-pexpect
+ - python3-pymongo
+ - python3-pymysql
+pip_packages: []
+
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/meta/main.yml
new file mode 100644
index 00000000..b0cc953f
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_dependencies
+ author: MarkProminic
+ description: List of additional dependencies needed for multiple roles
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/tasks/main.yml
new file mode 100755
index 00000000..e9b5037c
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_dependencies/tasks/main.yml
@@ -0,0 +1,29 @@
+---
+-
+ name: "Ensuring apt cache is updated"
+ ansible.builtin.apt:
+ cache_valid_time: 3600
+ update_cache: true
+ when: "ansible_os_family == 'Debian'"
+
+-
+ name: "Upgrading all apt packages"
+ ansible.builtin.apt:
+ upgrade: dist
+ update_cache: true
+ when: "ansible_os_family == 'Debian'"
+
+-
+ name: "Adding Core Dependencies"
+ ansible.builtin.apt:
+ name:
+ - htop
+ state: present
+ when: "ansible_os_family == 'Debian'"
+
+-
+ name: "Adding Additional Dependencies {{ packages }}"
+ ansible.builtin.apt:
+ name: "{{ packages }}"
+ state: present
+ when: "ansible_os_family == 'Debian'"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/defaults/main.yml
new file mode 100644
index 00000000..7b00d571
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/defaults/main.yml
@@ -0,0 +1,22 @@
+---
+guacamole_version: 1.5.0
+httpcomponents_archive: httpcomponents-client-4.5.6-bin.zip
+httpcomponents_version: 4.5.6
+log4j_version: 1.2.17
+auth_domino_version: 1.0.0
+guacamole_admin: guacadmin
+guacd_port: 4822
+guacamole_admin_pass: guacadmin
+guacamole_mariadb_db_admin_user: guacdbadmin
+guacamole_mariadb_db: guacdb
+guacamole_mariadb_db_admin_pass: guacSecurePassword
+mariadb_mysql_java_connector_jar: mariadb-java-client-3.0.9.jar
+mariadb_mysql_java_connector_version: 8.0.31
+startcloud_guacamole_port_forwards:
+ -
+ guest: "{{ startcloud_tomcat_port_forwards[1].guest }}"
+ url: "guacamole"
+ -
+ guest: 4822
+ url: "guacd"
+startcloud_guacamole_proxy_url: "{{ startcloud_guacamole_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
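
The proxy URL is assembled from the first port-forward entry plus `settings.hostname` and `settings.domain`, which are supplied by the provisioner configuration rather than this role. With illustrative values:

```yaml
# Illustrative settings (normally supplied by the provisioner, not this role).
settings:
  hostname: demo
  domain: example.com
# startcloud_guacamole_proxy_url then resolves to: guacamole.demo.example.com
# guacd itself stays internal on port 4822 (the second port-forward entry).
```
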
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/branding.jar b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/branding.jar
new file mode 100644
index 00000000..e61c8a69
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/branding.jar
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88e12c1e3087a487f6bbf3aa312050090f528167630f66cce30ad5e2def8dc59
+size 41243
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/guacamole-auth-domino-1.0.0.jar b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/guacamole-auth-domino-1.0.0.jar
new file mode 100644
index 00000000..3f52df2a
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/guacamole-auth-domino-1.0.0.jar
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e564b2d901f7efb524a93098f805a95a090db68197db5024941379d3b560524d
+size 11990
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/mysql-connector-j-8.0.31.jar b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/mysql-connector-j-8.0.31.jar
new file mode 100644
index 00000000..f610fbe0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/files/mysql-connector-j-8.0.31.jar
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b051a2bc20ec33c6a463da82a64260d47e4c2b66e54bdb1376ff09014d03724
+size 2515519
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/meta/main.yml
new file mode 100644
index 00000000..02670848
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_guacamole
+ author: MarkProminic
+ description: Setup a guacamole server to access RDP and SSH over HTML5
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/tasks/main.yml
new file mode 100644
index 00000000..89a142c9
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_guacamole/tasks/main.yml
@@ -0,0 +1,283 @@
+---
+-
+ name: "Checking if Guacamole is installed: {{ guacamole_version }}"
+ register: guacamole_installed_check
+ ansible.builtin.stat:
+ path: "{{ completed_dir }}/guacamole_install"
+ get_md5: false
+
+-
+ name: "Installing extra dependencies for Guacamole"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.apt:
+ name:
+ - openjdk-11-jdk
+ - build-essential
+ - libcairo2-dev
+ - libjpeg62-turbo-dev
+ - libjpeg-dev
+ - libpng-dev
+ - libtool-bin
+ - libossp-uuid-dev
+ - libavutil-dev
+ - libswscale-dev
+ - freerdp2-dev
+ - libpango1.0-dev
+ - libpango1.0-0
+ - libssh2-1-dev
+ - libvncserver-dev
+ - libtelnet-dev
+ - libwebsockets-dev
+ - libwebsockets16
+ - libwebsocketpp-dev
+ - libssl-dev
+ - libvorbis-dev
+ - libwebp-dev
+ - libpulse-dev
+ - libavcodec-dev
+ - libavformat-dev
+ - openssl
+ - gcc
+ - make
+ - tzdata
+
+-
+ name: "Checking Build Environment"
+ register: folder_stats
+ ansible.builtin.stat:
+ path: "{{ item }}"
+ with_items:
+ -
+ - "/etc/guacamole"
+ - "/usr/local/src/guacamole/{{ guacamole_version }}/client"
+ - "/etc/guacamole/extensions"
+ - "/etc/guacamole/lib"
+
+-
+ name: "Creating Directories"
+ ansible.builtin.file:
+ group: root
+ mode: "0755"
+ owner: root
+ path: "{{ item.item }}"
+ state: directory
+ when: not item.stat.exists
+ with_items:
+ - "{{ folder_stats.results }}"
+
+-
+ name: "Downloading MySQL Connector {{ mariadb_mysql_java_connector_version }}"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.get_url:
+ mode: "0755"
+ dest: "/tmp/mysql-connector-j-{{ mariadb_mysql_java_connector_version }}.tar.gz"
+ url: "https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-j-{{ mariadb_mysql_java_connector_version }}.tar.gz"
+
+-
+ name: "Unpacking MySQL Connector {{ mariadb_mysql_java_connector_version }}"
+ when: not guacamole_installed_check.stat.exists
+ register: asterisk_archive_contents
+ ansible.builtin.unarchive:
+ copy: false
+ dest: "/vagrant/ansible/roles/{{ role_name }}/files/"
+ list_files: true
+ src: "/tmp/mysql-connector-j-{{ mariadb_mysql_java_connector_version }}.tar.gz"
+
+-
+ name: "Installing MySQL Connector {{ mariadb_mysql_java_connector_version }}"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.copy:
+ mode: "0755"
+ dest: "/etc/guacamole/lib/mysql-connector-j-{{ mariadb_mysql_java_connector_version }}.jar"
+ src: "mysql-connector-j-{{ mariadb_mysql_java_connector_version }}.jar"
+
+-
+ name: "Downloading Guacamole Server"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.git: # noqa: latest
+ dest: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-server"
+ repo: "https://github.com/apache/guacamole-server"
+
+-
+ name: "Downloading Guacamole Client"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.git: # noqa: latest
+ dest: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-client"
+ repo: "https://github.com/apache/guacamole-client"
+
+-
+ name: "Preparing Guacamole Source for Compilation"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.shell: "autoreconf -fi && ./configure --with-systemd-dir=/etc/systemd/system"
+ args:
+ chdir: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-server"
+
+-
+ name: "Compiling Guacamole Server from Source"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.shell: "make && make install && ldconfig"
+ args:
+ chdir: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-server"
+
+-
+ name: "Building Guacamole Client"
+ when: not guacamole_installed_check.stat.exists
+ become: true
+ ansible.builtin.shell: "{{ item.shell }}"
+ args:
+ chdir: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-client"
+ executable: "/bin/bash"
+ environment:
+ JAVA_HOME: "/usr/lib/jvm/java-11-openjdk-amd64"
+ with_items:
+ - {shell: "mvn clean package -U" }
+
+-
+ name: "Moving Guacamole-client to Guacamole Working Directory"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.copy:
+ mode: "0755"
+ dest: "{{ item.dest }}"
+ src: "{{ item.path }}"
+ remote_src: true
+ with_items:
+ -
+ path: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-client/guacamole/target/guacamole-{{ guacamole_version }}.war"
+ dest: "/var/lib/tomcat9/webapps/ROOT.war"
+
+-
+ name: "Creating Guacamole Configuration File"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.blockinfile:
+ block: |
+ # Hostname and port of guacamole proxy
+ guacd-hostname: localhost
+ guacd-port: {{ startcloud_guacamole_port_forwards[1].guest }}
+ # MySQL properties
+ mysql-hostname: {{ netoutput.stdout }}
+ mysql-port: {{ startcloud_mariadb_port_forwards[0].guest }}
+ mysql-database: {{ guacamole_mariadb_db }}
+ mysql-username: {{ guacamole_mariadb_db_admin_user }}
+ mysql-password: {{ guacamole_mariadb_db_admin_pass }}
+ mysql-default-max-connections-per-user: 0
+ mysql-default-max-group-connections-per-user: 0
+ create: true
+ mode: "0755"
+ path: "/etc/guacamole/guacamole.properties"
+
+-
+ name: "Marking Guacamole as installed"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: "0755"
+ path: "{{ item }}"
+ state: touch
+ with_items:
+ - "{{ completed_dir }}/guacamole_install"
+
+-
+ name: "Removing Test Site"
+ when: not guacamole_installed_check.stat.exists
+ ansible.builtin.file:
+ mode: "0755"
+ path: "{{ item }}"
+ state: absent
+ with_items:
+ - "/var/lib/tomcat9/webapps/ROOT"
+
+-
+ name: "Copying JDBC Driver to Guacamole Working Directory"
+ ansible.builtin.copy:
+ mode: "0755"
+ dest: "{{ item.dest }}"
+ src: "{{ item.path }}"
+ with_items:
+ -
+ path: "/usr/local/src/guacamole/{{ guacamole_version }}/guacamole-client/extensions/guacamole-auth-jdbc/modules/guacamole-auth-jdbc-mysql/target/guacamole-auth-jdbc-mysql-{{ guacamole_version }}.jar"
+ dest: "/etc/guacamole/extensions/guacamole-auth-jdbc-mysql-{{ guacamole_version }}.jar"
+
+-
+ name: Adding Branding
+ ansible.builtin.copy:
+ mode: "0755"
+ src: branding.jar
+ dest: /etc/guacamole/extensions/branding.jar
+
+-
+ name: Adding extension
+ ansible.builtin.copy:
+ mode: "0755"
+ src: "guacamole-auth-domino-{{ auth_domino_version }}.jar"
+ dest: "/etc/guacamole/extensions/guacamole-auth-domino-{{ auth_domino_version }}.jar"
+
+-
+ name: "Downloading additional Jar Dependencies - log4j version {{ log4j_version }}"
+ ansible.builtin.get_url:
+ mode: "0755"
+ dest: "/etc/guacamole/lib/log4j-{{ log4j_version }}.jar"
+ url: "https://archive.apache.org/dist/logging/log4j/{{ log4j_version }}/log4j-{{ log4j_version }}.jar"
+
+-
+ name: "Downloading additional Jar Dependencies - httpcomponents version {{ httpcomponents_version }}"
+ ansible.builtin.get_url:
+ mode: "0755"
+ dest: "/tmp/httpcomponents-client-{{ httpcomponents_version }}-bin.tar.gz"
+ url: "https://archive.apache.org/dist/httpcomponents/httpclient/binary/httpcomponents-client-{{ httpcomponents_version }}-bin.tar.gz"
+
+-
+ name: "Unpacking Dependencies"
+ ansible.builtin.unarchive:
+ copy: false
+ dest: /tmp
+ src: "/tmp/httpcomponents-client-{{ httpcomponents_version }}-bin.tar.gz"
+
+-
+ name: "Adding SSO Dependency extension"
+ ansible.posix.synchronize:
+ src: "/tmp/httpcomponents-client-{{ httpcomponents_version }}/lib/"
+ dest: "/etc/guacamole/lib/"
+
+-
+ name: "Creating Guacamole Database {{ guacamole_mariadb_db }}"
+ community.mysql.mysql_db:
+ login_password: "{{ mariadb_admin_pass }}"
+ login_user: "{{ mariadb_admin_user }}"
+ name: "{{ guacamole_mariadb_db }}"
+ state: present
+
+-
+ name: "Create database user with all database privileges for {{ guacamole_mariadb_db }}"
+ community.mysql.mysql_user:
+ login_password: "{{ mariadb_admin_pass }}"
+ login_user: "{{ mariadb_admin_user }}"
+ name: "{{ guacamole_mariadb_db_admin_user }}"
+ password: "{{ guacamole_mariadb_db_admin_pass }}"
+ priv: "{{ guacamole_mariadb_db }}.*:ALL,GRANT"
+ host: "%"
+ state: present
+
+-
+ name: "Fixing bug with JDBC Driver over Version 8"
+ ansible.builtin.shell: "mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u{{ mariadb_admin_user }} mysql -p{{ mariadb_admin_pass }}"
+
+-
+ name: "Import Schema into database {{ guacamole_mariadb_db }}"
+ community.mysql.mysql_db:
+ login_password: "{{ mariadb_admin_pass }}"
+ login_user: "{{ mariadb_admin_user }}"
+ name: "{{ guacamole_mariadb_db }}"
+ state: import
+ target: "{{ item.srcdir }}{{ guacamole_version }}/guacamole-client/extensions/guacamole-auth-jdbc/modules/guacamole-auth-jdbc-mysql/schema/{{ item.file }}"
+ with_items:
+ - { file: "001-create-schema.sql", srcdir: "/usr/local/src/guacamole/"}
+ - { file: "002-create-admin-user.sql", srcdir: "/usr/local/src/guacamole/"}
+
+-
+ name: "Restarting Services"
+ ansible.builtin.systemd:
+ enabled: true
+ name: "{{ item }}"
+ state: restarted
+ with_items:
+ - guacd
+ - "tomcat{{ tomcat_version }}"
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/defaults/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/defaults/main.yml
new file mode 100644
index 00000000..4ec4a5f0
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/defaults/main.yml
@@ -0,0 +1,14 @@
+---
+haproxy_http_port: 80
+haproxy_https_port: 443
+startcloud_haproxy_port_forwards:
+ -
+ guest: 443
+ url: "demo"
+ -
+ guest: 80
+ url: "demo"
+ -
+ guest: 444
+ url: "stats"
+startcloud_haproxy_proxy_url: "{{ startcloud_haproxy_port_forwards[0].url }}.{{ settings.hostname }}.{{ settings.domain }}"
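
These three forwards map onto the frontends in haproxy.cfg.j2: index 0 is the HTTPS ingress (443), index 1 the HTTP listener that redirects to HTTPS except for ACME challenges (80), and index 2 the stats page (444). A sketch of how the URL resolves, with illustrative `settings` values supplied by the provisioner:

```yaml
# Illustrative settings (supplied by the provisioner, not this role).
settings:
  hostname: demo
  domain: example.com
# startcloud_haproxy_proxy_url then resolves to: demo.demo.example.com
# stats UI would be reachable at https://<vm-address>:444/
```
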
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/handlers/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/handlers/main.yml
new file mode 100644
index 00000000..3f8c2fb8
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/handlers/main.yml
@@ -0,0 +1,6 @@
+---
+-
+ name: Restart haproxy
+ ansible.builtin.service:
+ name: haproxy
+ state: restarted
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/meta/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/meta/main.yml
new file mode 100644
index 00000000..4483b246
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/meta/main.yml
@@ -0,0 +1,33 @@
+galaxy_info:
+ role_name: startcloud_haproxy
+ author: MarkProminic
+ description: Setup a reverse proxy to aggregate roles and provide centralized SSL management
+ company: STARTCloud
+ # issue_tracker_url: http://example.com/issue/tracker
+ license: license (Apache)
+ min_ansible_version: '1.2'
+
+ # Optionally specify the branch Galaxy will use when accessing the GitHub
+ # repo for this role. During role install, if no tags are available,
+ # Galaxy will use this branch. During import Galaxy will access files on
+ # this branch. If Travis integration is configured, only notifications for this
+ # branch will be accepted. Otherwise, in all cases, the repo's default branch
+ # (usually master) will be used.
+ # github_branch:
+
+ platforms:
+ - name: Debian
+ versions:
+ - 'bullseye'
+
+ galaxy_tags: []
+ # List tags for your role here, one per line. A tag is a keyword that describes
+ # and categorizes the role. Users find roles by searching for tags. Be sure to
+ # remove the '[]' above, if you add tags to this list.
+ #
+ # NOTE: A tag is limited to a single word comprised of alphanumeric characters.
+ # Maximum 20 tags per role.
+
+dependencies: []
+ # List your role dependencies here, one per line. Be sure to remove the '[]' above,
+ # if you add dependencies to this list.
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/tasks/main.yml b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/tasks/main.yml
new file mode 100755
index 00000000..461212a4
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/tasks/main.yml
@@ -0,0 +1,55 @@
+---
+-
+ name: "Installing HAProxy and KeepAlived"
+ ansible.builtin.apt:
+ pkg:
+ - haproxy
+ - keepalived
+
+-
+ name: "Creating template and certifcate directories"
+ ansible.builtin.file:
+ mode: '0644'
+ path: "{{ item }}"
+ state: directory
+ owner: haproxy
+ with_items:
+ - "{{ cert_dir }}"
+ - "/etc/haproxy/errors/tpl"
+ - "/etc/haproxy/errors/html"
+
+-
+ name: "Configuring haproxy"
+ become: true
+ tags: haproxy
+ ansible.builtin.template:
+ owner: "{{ item.owner }}"
+ src: "{{ item.src }}"
+ dest: "{{ item.dest }}"
+ mode: "a+x"
+ loop:
+ - { src: 'haproxy.cfg.j2', dest: '/etc/haproxy/haproxy.cfg', owner: 'haproxy' }
+ - { src: 'tpl/400.http.j2', dest: '/etc/haproxy/errors/tpl/400.http', owner: 'haproxy' }
+ - { src: 'tpl/403.http.j2', dest: '/etc/haproxy/errors/tpl/403.http', owner: 'haproxy' }
+ - { src: 'tpl/404.http.j2', dest: '/etc/haproxy/errors/tpl/404.http', owner: 'haproxy' }
+ - { src: 'tpl/408.http.j2', dest: '/etc/haproxy/errors/tpl/408.http', owner: 'haproxy' }
+ - { src: 'tpl/500.http.j2', dest: '/etc/haproxy/errors/tpl/500.http', owner: 'haproxy' }
+ - { src: 'tpl/502.http.j2', dest: '/etc/haproxy/errors/tpl/502.http', owner: 'haproxy' }
+ - { src: 'tpl/503.http.j2', dest: '/etc/haproxy/errors/tpl/503.http', owner: 'haproxy' }
+ - { src: 'tpl/504.http.j2', dest: '/etc/haproxy/errors/tpl/504.http', owner: 'haproxy' }
+ - { src: 'html/400.html.j2', dest: '/etc/haproxy/errors/html/400.html', owner: 'haproxy' }
+ - { src: 'html/403.html.j2', dest: '/etc/haproxy/errors/html/403.html', owner: 'haproxy' }
+ - { src: 'html/404.html.j2', dest: '/etc/haproxy/errors/html/404.html', owner: 'haproxy' }
+ - { src: 'html/408.html.j2', dest: '/etc/haproxy/errors/html/408.html', owner: 'haproxy' }
+ - { src: 'html/500.html.j2', dest: '/etc/haproxy/errors/html/500.html', owner: 'haproxy' }
+ - { src: 'html/502.html.j2', dest: '/etc/haproxy/errors/html/502.html', owner: 'haproxy' }
+ - { src: 'html/503.html.j2', dest: '/etc/haproxy/errors/html/503.html', owner: 'haproxy' }
+ - { src: 'html/504.html.j2', dest: '/etc/haproxy/errors/html/504.html', owner: 'haproxy' }
+
+-
+ name: "Starting service haproxy"
+ ansible.builtin.service:
+ enabled: true
+ name: haproxy
+ state: restarted
+ tags: haproxy
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/templates/haproxy.cfg.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/templates/haproxy.cfg.j2
new file mode 100644
index 00000000..2de49d4f
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/templates/haproxy.cfg.j2
@@ -0,0 +1,320 @@
+global
+ pidfile /var/run/haproxy.pid
+ #chroot /var/lib/haproxy
+ user haproxy
+ group haproxy
+
+ # Makes the process fork into background
+ daemon
+
+ ## Log to stdout/stderr
+ log stdout format raw local0 debug
+
+ ## Tuning
+ tune.bufsize 64768
+ maxconn 500000
+
+ ## Enable Authelia Redirects
+ ##lua-prepend-path /etc/haproxy/?/http.lua
+ ##lua-load /etc/haproxy/haproxy-lua-http/auth-request.lua
+
+ ## Stats
+ stats socket /var/lib/haproxy/stats level admin mode 660 group haproxy expose-fd listeners
+ stats timeout 30s
+
+ ## Default SSL material locations
+ ca-base /etc/ssl/certs
+ crt-base /etc/ssl/private
+
+ ## SSL/TLS Cipher Suites
+ tune.ssl.default-dh-param 4096
+ ## See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
+ ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
+ ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
+ ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
+
+#### DEFAULTS ####
+defaults
+ default-server init-addr none
+ log global
+ mode http
+ retries 3
+ timeout http-request 60s
+ timeout queue 1m
+ timeout connect 30s
+ timeout client 1m
+ timeout server 1m
+ timeout http-keep-alive 30s
+ timeout check 10s
+ timeout client-fin 30s
+ maxconn 500000
+ option http-keep-alive
+ option forwardfor
+ option http-server-close
+ option dontlognull
+ option httplog
+ option redispatch
+ option tcpka
+ http-error status 503 content-type "text/html; charset=utf-8" lf-file /etc/haproxy/errors/html/503.html
+
+http-errors allerrors
+ errorfile 404 /etc/haproxy/errors/tpl/404.http
+ errorfile 400 /etc/haproxy/errors/tpl/400.http
+ errorfile 403 /etc/haproxy/errors/tpl/403.http
+ errorfile 408 /etc/haproxy/errors/tpl/408.http
+ errorfile 502 /etc/haproxy/errors/tpl/502.http
+ errorfile 504 /etc/haproxy/errors/tpl/504.http
+
+# errorfile 500 /etc/haproxy/errors/tpl/500.http
+# errorfile 503 /etc/haproxy/errors/tpl/503.http
+
+#---------------------------------------------------------------------
+##### FRONTENDs: WEB/HTTP/HTTPS
+#---------------------------------------------------------------------
+
+## HAProxy stats web gui.
+frontend STATS
+ mode http
+ {% if selfsigned_enabled %}
+ bind :::{{ startcloud_haproxy_port_forwards[2].guest }} ssl crt {{ cert_dir }}/combined/{{ settings.hostname }}.{{ settings.domain }}-self-signed.pem
+ {% else %}
+ bind :::{{ startcloud_haproxy_port_forwards[2].guest }} ssl crt {{ cert_dir }}/combined/default-signed.pem
+ {% endif %}
+ stats enable
+ stats uri /
+ stats admin if TRUE
+ http-request use-service prometheus-exporter if { path /metrics }
+
+frontend EXT-WEB:{{ startcloud_haproxy_port_forwards[1].guest}}
+ bind :::{{ startcloud_haproxy_port_forwards[1].guest}}
+ mode http
+ log global
+
+ # Redirections to Let's Encrypt local agent
+ acl ispath_letsencrypt path_beg /.well-known/acme-challenge/
+
+ # Redirect HTTP -> HTTPS (except let's encrypt)
+ redirect code 301 scheme https if !{ ssl_fc } !ispath_letsencrypt
+ use_backend letsencrypt_80 if ispath_letsencrypt
+
+frontend HTTPS-IN
+ ## Primary Ingress point
+ {% if selfsigned_enabled %}
+ bind :::{{ startcloud_haproxy_port_forwards[0].guest}} v4v6 ssl crt {{ cert_dir }}/combined/{{ settings.hostname }}.{{ settings.domain }}-self-signed.pem
+ {% else %}
+ bind :::{{ startcloud_haproxy_port_forwards[0].guest}} v4v6 ssl crt {{ cert_dir }}/combined/default-signed.pem
+ {% endif %}
+
+ mode http
+
+ http-request redirect scheme https unless { ssl_fc }
+
+ ## Dynamic Logging to Error Page
+
+ errorfiles allerrors
+
+ unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
+ unique-id-header X-Unique-ID
+
+ log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %[unique-id]"
+
+
+ #BEGIN CORS
+ http-response set-header Access-Control-Allow-Origin "*"
+ http-response set-header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept, Authorization, JSNLog-RequestId, activityId, applicationId, applicationUserId, channelId, senderId, sessionId"
+ http-response set-header Access-Control-Max-Age 3628800
+ http-response set-header Access-Control-Allow-Methods "*"
+ # END CORS
+
+
+ #### ACL's ####
+ ## Source ACL Definitions
+ #acl network_allowed src 10.0.2.0/16 192.168.0.0/16 172.16.0.0/16
+
+
+ ## Host ACL Definitions
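+ # Each ACL matches the HTTP Host header (case-insensitive), so a single HTTPS frontend can route every service by hostname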
+ acl host_demo hdr(host) -i {{ settings.hostname }}.{{ settings.domain }}
+
+ acl host_console hdr(host) -i console.{{ settings.hostname }}.{{ settings.domain }}
+
+ {% if domino_vagrant_rest_api_proxy_url is defined %}
+ acl host_restapi hdr(host) -i {{ domino_vagrant_rest_api_proxy_url }}
+ {% endif %}
+
+ {% if domino_nomadweb_proxy_url is defined %}
+ acl host_nomadweb hdr(host) -i {{ domino_nomadweb_proxy_url }}
+ {% endif %}
+
+ {% if domino_install_proxy_url is defined %}
+ acl host_domino hdr(host) -i {{ domino_install_proxy_url }}
+ {% endif %}
+
+ {% if startcloud_quick_start_proxy_url is defined %}
+ acl host_downloads hdr(host) -i {{ startcloud_quick_start_proxy_url }}
+ {% endif %}
+
+ {% if domino_leap_proxy_url is defined %}
+ acl host_leap hdr(host) -i {{ domino_leap_proxy_url }}
+ {% endif %}
+
+ {% if domino_traveler_proxy_url is defined %}
+ acl host_traveler hdr(host) -i {{ domino_traveler_proxy_url }}
+ {% endif %}
+
+ {% if domino_sametime_proxy_url is defined %}
+ acl host_sametime hdr(host) -i {{ domino_sametime_proxy_url }}
+ {% endif %}
+
+ {% if startcloud_guacamole_proxy_url is defined %}
+ acl host_guacamole hdr(host) -i {{ startcloud_guacamole_proxy_url }}
+ {% endif %}
+
+ {% if domino_verse_proxy_url is defined %}
+ acl host_verse hdr(host) -i {{ domino_verse_proxy_url }}
+ {% endif %}
+
+ ## Application backends
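+ # A routing rule is emitted only when the corresponding proxy URL variable is defined in the role settings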
+ {% if domino_vagrant_rest_api_proxy_url is defined %}
+ use_backend restapi if host_restapi
+ {% endif %}
+
+ {% if domino_nomadweb_proxy_url is defined %}
+ use_backend nomadweb if host_nomadweb
+ {% endif %}
+
+ {% if domino_install_proxy_url is defined %}
+ use_backend domino if host_domino
+ {% endif %}
+
+ {% if startcloud_quick_start_proxy_url is defined %}
+ use_backend downloads if host_downloads || host_demo
+ {% endif %}
+
+ {% if domino_leap_proxy_url is defined %}
+ use_backend leap if host_leap
+ {% endif %}
+
+ {% if domino_traveler_proxy_url is defined %}
+ use_backend traveler if host_traveler
+ {% endif %}
+
+ {% if domino_sametime_proxy_url is defined %}
+ use_backend sametime if host_sametime
+ {% endif %}
+
+ {% if startcloud_guacamole_proxy_url is defined %}
+ use_backend guacamole if host_guacamole
+ {% endif %}
+
+ {% if domino_verse_proxy_url is defined %}
+ use_backend verse if host_verse
+ {% endif %}
+
+ use_backend console if host_console
+
+ default_backend downloads
+
+#---------------------------------------------------------------------
+##### BACKENDS ####
+#---------------------------------------------------------------------
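+# Pattern for the backends below: each optional service is wrapped in a Jinja
+# conditional on its port-forward variable, balances with leastconn, and (for most
+# services) speaks TLS to the local listener with verification disabled (verify none).
+# "init-addr last,libc,none" lets HAProxy start even if a service is not yet resolvable or running.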
+
+## Let's Encrypt certbot tool
+backend letsencrypt_80
+ mode http
+ log global
+ http-response set-header Server haproxy
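+ # Presumably the local certbot/ACME agent listening on 8080 to answer HTTP-01 challenges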
+ server letsencrypt 127.0.0.1:8080
+
+# Console
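+# Proxies the console hostname to the local Cockpit web console on its default port 9090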
+backend console
+ mode http
+ balance leastconn
+ server cockpit 127.0.0.1:9090 ssl verify none check cookie app1 init-addr last,libc,none
+
+{% if domino_vagrant_rest_api_port_forwards[0].guest is defined %}
+# Rest API
+backend restapi
+ mode http
+ balance leastconn
+ server restapi 127.0.0.1:{{ domino_vagrant_rest_api_port_forwards[0].guest }} check cookie app1 init-addr last,libc,none
+{% endif %}
+
+{% if startcloud_quick_start_port_forwards[0].guest is defined %}
+# Downloads
+backend downloads
+ mode http
+ balance leastconn
+ server downloads 127.0.0.1:{{ startcloud_quick_start_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+{% endif %}
+
+{% if domino_install_port_forwards[0].guest is defined %}
+# Domino
+backend domino
+ mode http
+ balance leastconn
+ server domino-https 127.0.0.1:{{ domino_install_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+ {% if domino_install_port_forwards[1].guest is defined %}
+ server domino-http 127.0.0.1:{{ domino_install_port_forwards[1].guest }} backup check cookie app1 init-addr last,libc,none
+ {% endif %}
+{% endif %}
+
+{% if domino_nomadweb_port_forwards[0].guest is defined %}
+# Nomad Web
+backend nomadweb
+ mode http
+ balance leastconn
+ server nomadweb 127.0.0.1:{{ domino_nomadweb_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+{% endif %}
+
+
+{% if domino_traveler_port_forwards[0].guest is defined %}
+# Traveler
+backend traveler
+ mode http
+ balance leastconn
+ server domino-https 127.0.0.1:{{ domino_traveler_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+ {% if domino_traveler_port_forwards[1].guest is defined %}
+ server domino-http 127.0.0.1:{{ domino_traveler_port_forwards[1].guest }} backup check cookie app1 init-addr last,libc,none
+ {% endif %}
+{% endif %}
+
+{% if domino_leap_port_forwards[0].guest is defined %}
+# Leap
+backend leap
+ mode http
+ balance leastconn
+ server domino-https 127.0.0.1:{{ domino_leap_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+ {% if domino_leap_port_forwards[1].guest is defined %}
+ server domino-http 127.0.0.1:{{ domino_leap_port_forwards[1].guest }} backup check cookie app1 init-addr last,libc,none
+ {% endif %}
+{% endif %}
+
+{% if startcloud_guacamole_port_forwards[0].guest is defined %}
+# Guacamole
+backend guacamole
+ mode http
+ balance leastconn
+ server guacamole 127.0.0.1:{{ startcloud_guacamole_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+{% endif %}
+
+{% if domino_verse_port_forwards[0].guest is defined %}
+# Verse
+backend verse
+ mode http
+ balance leastconn
+ server domino-https 127.0.0.1:{{ domino_verse_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+ {% if domino_verse_port_forwards[1].guest is defined %}
+ server domino-http 127.0.0.1:{{ domino_verse_port_forwards[1].guest }} backup check cookie app1 init-addr last,libc,none
+ {% endif %}
+{% endif %}
+
+{% if domino_sametime_port_forwards[0].guest is defined %}
+# Sametime
+backend sametime
+ mode http
+ balance leastconn
+ server domino-https 127.0.0.1:{{ domino_sametime_port_forwards[0].guest }} ssl verify none check cookie app1 init-addr last,libc,none
+ {% if domino_sametime_port_forwards[1].guest is defined %}
+ server domino-http 127.0.0.1:{{ domino_sametime_port_forwards[1].guest }} backup check cookie app1 init-addr last,libc,none
+ {% endif %}
+{% endif %}
\ No newline at end of file
diff --git a/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/templates/html/400.html.j2 b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/templates/html/400.html.j2
new file mode 100755
index 00000000..7b23908f
--- /dev/null
+++ b/Assets/provisioners/demo-tasks/0.1.20/scripts/ansible/roles/startcloud_haproxy/templates/html/400.html.j2
@@ -0,0 +1,21 @@
+ 400 Bad request
Please ensure you have updated your HOSTS file for the following instructions to work. You can learn how to update your HOSTS file on the following Platforms: