sap_ha_pacemaker_cluster: Add support for SAP Web Dispatcher #974
base: dev
Conversation
@rob0d Can you share the installation details that you used for testing? Installing WD on the same hosts as ASCS/ERS would require using the same SID, i.e. installing them together.
Hi @marcelmamula, WD must not have the same SID as ASCS/ERS, hence a different parameter, as per the example in PR #929. The installation of WD on both cluster nodes is slightly more painful because there is only one WD instance, unlike ASCS + ERS where installing one instance on each node takes care of all the Linux changes. With WD, the installation has to be done on both nodes (as would also be the case if you had a 3-node ASCS/ERS cluster). After many tests I've decided that the most stable and least complicated option is:
Code looks like this (not fully functional as it's an extract):

```yaml
# TODO: add code to handle when these are predefined in inventory in sap_vips
- name: Get wdp_host IP from DNS
  ansible.builtin.shell: "dig {{ wdp_host }} +short"
  changed_when: false
  register: _wdp_host_ip
  failed_when: _wdp_host_ip.rc != 0 or _wdp_host_ip.stdout | length == 0
  when: wdp_is_virtual | d(false)

- name: Adding WebDisp VIP
  ansible.builtin.shell: ip address add "{{ _wdp_host_ip.stdout }}"/26 dev ens192
  ignore_errors: true
  when: wdp_is_virtual | d(false)

- name: Import variables for sap_swpm Ansible Role (Default Mode)
  ansible.builtin.include_vars:
    file: ./vars/variables-sap-swpm-default-mode-webdisp-standalone-install.yml

# Prepare temporary NFS mounted storage only on the primary node.
# We will ignore and scrap /usr/sap/WDP/Wxx on the secondary node.
- name: Prepare temporary NFS mounted storage
  when:
    - wdp_is_virtual | d(false)
    - "'webdisp_primary' in group_names"
  ansible.builtin.include_tasks:
    file: swpm-distributed-cs-prep.yml  # This deals with the /usr/sap/<WDSID> folders and NFS
  vars:
    __sap_sid: "{{ wdp_system_sid }}"
    __instance_folder: "W{{ sap_swpm_wd_instance_nr }}"

- name: Execute Ansible Role sap_swpm
  ansible.builtin.include_role:
    name: ../roles/sap_swpm

- name: Block to handle HA WebDisp setup
  when: wdp_is_virtual | d(false)
  block:
    # For WebDisp these tasks are handled at the end of the install rather than at the
    # beginning of the HA setup, because WebDisp is symmetrical (unlike ASCS/ERS).
    - name: Execute Cluster preparation tasks - WebDisp
      ansible.builtin.include_tasks:
        file: cs-cluster-prep.yml  # This stops WD and unmounts the temp NFS
      vars:
        __sap_sid: "{{ wdp_system_sid }}"
        __instance_no: "{{ sap_swpm_wd_instance_nr }}"
        __instance_folder: "W{{ sap_swpm_wd_instance_nr }}"

    - name: Removing WebDisp VIP after installation
      ansible.builtin.shell: ip address del "{{ _wdp_host_ip.stdout }}"/26 dev ens192
      ignore_errors: true

    - name: Rename instance directory on secondary host
      ansible.builtin.command:
        cmd: mv "/usr/sap/{{ wdp_system_sid }}/W{{ sap_swpm_wd_instance_nr }}" "/usr/sap/{{ wdp_system_sid }}/W{{ sap_swpm_wd_instance_nr }}.old"
      when: "'webdisp_secondary' in group_names"

    - name: Create instance directory mount point on secondary host
      ansible.builtin.file:
        path: "/usr/sap/{{ wdp_system_sid }}/W{{ sap_swpm_wd_instance_nr }}"
        state: directory
        mode: '0755'
        owner: "{{ wdp_system_sid | lower }}adm"
        group: sapsys
      when: "'webdisp_secondary' in group_names"
```
@rob0d No wonder this is not a supported scenario, since it has so many peculiarities. I suppose you did not use
I'm not sure if I understood you correctly, but no. However, generally speaking, standalone/combined is in the context of coexistence on the same cluster as ASCS/ERS/HANA. Both options work, because from the cluster's point of view it is just another resource group with resources, in the same way as if you installed 5 x ASCS/ERS instances for five different SAP systems on the same cluster; they are all independent. I am guessing the confusion comes from standalone vs. embedded (note 3115889) WebDisp? An embedded WebDisp would be pretty much invisible to the cluster (with the exception of the restart parameter in the instance profile), but it's not something that's widely used.
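To illustrate the point (a minimal, hedged sketch only; the group/resource names, the SID WD1, instance number 20, VIP and profile path are made-up examples, not taken from this PR), such an extra resource group on a pcs-based cluster could be created roughly like this:

```yaml
# Hedged sketch: a separate WebDisp resource group next to the existing ASCS/ERS groups.
# All names, the SID (WD1), instance number (20), VIP and profile path are placeholders.
- name: Create WebDisp VIP resource in its own group
  ansible.builtin.command:
    cmd: >-
      pcs resource create rsc_vip_WD1_wdp ocf:heartbeat:IPaddr2
      ip=192.168.1.50 cidr_netmask=26
      --group grp_WD1_wdp

- name: Create WebDisp SAPInstance resource in the same group
  ansible.builtin.command:
    cmd: >-
      pcs resource create rsc_sap_WD1_W20 ocf:heartbeat:SAPInstance
      InstanceName=WD1_W20_sapwd1
      START_PROFILE=/sapmnt/WD1/profile/WD1_W20_sapwd1
      AUTOMATIC_RECOVER=true
      --group grp_WD1_wdp
```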
I had a look. I can see possibly two changes that I need to make, but one would require changing @ja9fuchs' code, which is very specific to ASCS/ERS (pre_steps_nwas_cs_ers.yml in roles/sap_ha_pacemaker_cluster/tasks/main.yml), and I'm not sure whether to change it. There have been quite a few commits since I created this second PR, so it's looking quite difficult for me to rebase/remerge and add extra changes without messing it up.
I mentioned that PR because we had to rethink why our clusters were unstable, which was caused by an incorrect order of steps.
I was following your train of thought. If standalone WD has a different SID, then its NFS mounts (shared across the cluster) are under a different mount than the ASCS/ERS SID mounts. This is the usual simple setup when ASCS/ERS is mounted, which would mean:

```text
fs-x.amazonaws.com:/AE1/usr/sap/AE1/SYS     8.0E  3.0G  8.0E  1%  /usr/sap/AE1/SYS
fs-x.amazonaws.com:/AE1/usr/sap/AE1/ASCS00  8.0E  3.0G  8.0E  1%  /usr/sap/AE1/ASCS00
fs-x.amazonaws.com:/AE1/usr/sap/AE1/ERS01   8.0E  3.0G  8.0E  1%  /usr/sap/AE1/ERS01
fs-x.amazonaws.com:/AE1/usr/sap/trans       8.0E  3.0G  8.0E  1%  /usr/sap/trans
fs-x.amazonaws.com:/AE1/sapmnt              8.0E  3.0G  8.0E  1%  /sapmnt
```
Yes, I'd like to apply similar changes to this branch (at least in relation to the monitoring interval), but I now have three different codebases and can't test any complicated changes in this branch. If we can merge this to dev, I can then take dev, merge everything back to the original private repo where this is developed, add the changes based on PR #972, and create a new PR here. I hope that makes sense :).
I am still confused :), but maybe I'm starting to understand.
/usr/sap/ is local; /install and /usr/sap/trans are always mounted. I don't have any cluster filesystems in the storage_definition as I didn't want them mounted permanently in /etc/fstab by the role. Is this not how it's supposed to work?
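(For context, and purely as a hedged sketch with placeholder names and paths: a mount that should not live permanently in /etc/fstab is usually modelled as an ocf:heartbeat:Filesystem cluster resource instead, e.g.:)

```yaml
# Hedged sketch: let pacemaker mount the WD instance directory instead of /etc/fstab.
# Resource/group names and the NFS path are placeholders only.
- name: Create a cluster-managed filesystem resource for the WD instance directory
  ansible.builtin.command:
    cmd: >-
      pcs resource create rsc_fs_WD1_W20 ocf:heartbeat:Filesystem
      device=fs-x.amazonaws.com:/WD1/usr/sap/WD1/W20
      directory=/usr/sap/WD1/W20
      fstype=nfs4
      --group grp_WD1_wdp
```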
@rob0d This is an example of how the filesystems would be created:

```text
fs-x.amazonaws.com:/AE1/usr/sap/AE1/SYS     8.0E  3.0G  8.0E  1%  /usr/sap/AE1/SYS
fs-x.amazonaws.com:/AE1/usr/sap/AE1/ASCS00  8.0E  3.0G  8.0E  1%  /usr/sap/AE1/ASCS00
fs-x.amazonaws.com:/AE1/usr/sap/AE1/ERS01   8.0E  3.0G  8.0E  1%  /usr/sap/AE1/ERS01
fs-x.amazonaws.com:/AE1/usr/sap/trans       8.0E  3.0G  8.0E  1%  /usr/sap/trans
fs-x.amazonaws.com:/AE1/sapmnt              8.0E  3.0G  8.0E  1%  /sapmnt
fs-x.amazonaws.com:/WD1/usr/sap/WD1/SYS     8.0E  3.0G  8.0E  1%  /usr/sap/WD1/SYS
fs-x.amazonaws.com:/WD1/usr/sap/WD1/W20     8.0E  3.0G  8.0E  1%  /usr/sap/WD1/W20
```
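Purely as a rough, hedged illustration (using the placeholder server and paths from the listing above, and assuming the ansible.posix collection; this is not how the storage role itself does it), mounting the two extra WD1 shares could look like:

```yaml
# Hedged sketch: mount the additional WD1 NFS shares shown in the example above.
# Server name, export paths and mount options are placeholders.
- name: Mount WD1 SYS and instance directories from NFS
  ansible.posix.mount:
    src: "fs-x.amazonaws.com:{{ item.share }}"
    path: "{{ item.path }}"
    fstype: nfs4
    opts: rw,hard,timeo=600
    state: mounted
  loop:
    - { share: /WD1/usr/sap/WD1/SYS, path: /usr/sap/WD1/SYS }
    - { share: /WD1/usr/sap/WD1/W20, path: /usr/sap/WD1/W20 }
```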
Ah ok. I get it. I see no reason why we shouldn't enhance the storage role to support this WD deployment pattern. |
@ja9fuchs @marcelmamula Hopefully this one is correct.
Replacement for PR #929