[DPE-4066] HA interface #317
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff            @@
##             main     #317     +/-  ##
==========================================
+ Coverage   71.47%   72.96%   +1.48%
==========================================
  Files           7        8       +1
  Lines        1206     1276      +70
  Branches      211      223      +12
==========================================
+ Hits          862      931      +69
  Misses        266      266
- Partials       78       79       +1

View full report in Codecov by Sentry.
Hey @dragomirp (cc @taurus-forever), I tried the bundle you sent me:

description: PGB VIP bundle
name: vip-bundle
series: jammy
applications:
  data-integrator:
    charm: data-integrator
    channel: latest/stable
    num_units: 3
    options:
      database-name: testdb
  postgresql:
    charm: postgresql
    channel: 14/stable
    num_units: 3
    # options:
    #   profile: testing
  pgbouncer:
    channel: 1/edge/hacluster
    revision: 347
    charm: pgbouncer
    series: jammy
    num_units: 0
    options:
      max_db_connections: 30
      vip: 10.58.159.250
relations:
  - [pgbouncer:backend-database, postgresql:database]
  - [pgbouncer:database, data-integrator:postgresql]

blue@blue:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
pg-vip overlord localhost/localhost 3.4.5 unsupported 23:39:19Z
App Version Status Scale Charm Channel Rev Exposed Message
data-integrator active 3 data-integrator latest/stable 27 no
pgbouncer 1.21.0 active 3 pgbouncer 1/edge/hacluster 347 no
postgresql 14.11 active 3 postgresql 14/stable 429 no
Unit Workload Agent Machine Public address Ports Message
data-integrator/0* active idle 0 10.58.159.146
pgbouncer/0* active idle 10.58.159.146 6432/tcp
data-integrator/1 active idle 1 10.58.159.150
pgbouncer/1 active idle 10.58.159.150 6432/tcp
data-integrator/2 active idle 2 10.58.159.212
pgbouncer/2 active idle 10.58.159.212 6432/tcp
postgresql/0 active idle 3 10.58.159.13 5432/tcp
postgresql/1* active idle 4 10.58.159.203 5432/tcp
postgresql/2 active idle 5 10.58.159.20 5432/tcp Primary
Machine State Address Inst id Base AZ Message
0 started 10.58.159.146 juju-fc0f0d-0 [email protected] Running
1 started 10.58.159.150 juju-fc0f0d-1 [email protected] Running
2 started 10.58.159.212 juju-fc0f0d-2 [email protected] Running
3 started 10.58.159.13 juju-fc0f0d-3 [email protected] Running
4 started 10.58.159.203 juju-fc0f0d-4 [email protected] Running
5 started 10.58.159.20 juju-fc0f0d-5 [email protected] Running

But it does not seem to be assigning the VIP to any machine.
Adding some more:

juju deploy hacluster --channel 2.4/stable
juju relate hacluster data-integrator
juju relate hacluster:juju-info data-integrator:juju-info

OK, now it looks like it is there:

blue@blue:~$ ping 10.58.159.250
PING 10.58.159.250 (10.58.159.250) 56(84) bytes of data.
64 bytes from 10.58.159.250: icmp_seq=1 ttl=64 time=575 ms
64 bytes from 10.58.159.250: icmp_seq=2 ttl=64 time=0.099 ms
64 bytes from 10.58.159.250: icmp_seq=3 ttl=64 time=0.083 ms
^C
--- 10.58.159.250 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.083/191.571/574.533/270.794 ms
blue@blue:~$ juju exec -a data-integrator -- 'ip -o -c -4 a'
data-integrator/0:
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
9: eth0 inet 10.58.159.146/24 metric 100 brd 10.58.159.255 scope global dynamic eth0\ valid_lft 2195sec preferred_lft 2195sec
9: eth0 inet 10.58.159.250/24 scope global secondary eth0\ valid_lft forever preferred_lft forever
data-integrator/1:
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
11: eth0 inet 10.58.159.150/24 metric 100 brd 10.58.159.255 scope global dynamic eth0\ valid_lft 2327sec preferred_lft 2327sec
data-integrator/2:
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
13: eth0 inet 10.58.159.212/24 metric 100 brd 10.58.159.255 scope global dynamic eth0\ valid_lft 2183sec preferred_lft 2183sec
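The `juju exec` output above shows the VIP (`10.58.159.250`) held as a secondary address on exactly one unit. As a quick sanity check, a small script can parse that output to report which unit currently holds the VIP. This is a hedged sketch: `units_holding_vip` is a hypothetical helper, and the parsing assumes the `ip -o -4 a` layout shown above (a `unit/N:` header line followed by indented address lines):

```python
import re

def units_holding_vip(juju_exec_output: str, vip: str) -> list:
    """Return the units whose `ip -o -4 a` output lists the VIP."""
    holders, current_unit = [], None
    for line in juju_exec_output.splitlines():
        # Unit header lines look like "data-integrator/0:"
        m = re.match(r"^(\S+/\d+):$", line.strip())
        if m:
            current_unit = m.group(1)
        elif f"inet {vip}/" in line and current_unit:
            holders.append(current_unit)
    return holders

# Abbreviated sample of the output shown above
output = """\
data-integrator/0:
  9: eth0    inet 10.58.159.146/24 metric 100 scope global dynamic eth0
  9: eth0    inet 10.58.159.250/24 scope global secondary eth0
data-integrator/1:
  11: eth0    inet 10.58.159.150/24 metric 100 scope global dynamic eth0
"""
print(units_holding_vip(output, "10.58.159.250"))  # ['data-integrator/0']
```

With the full three-unit output above, this confirms that hacluster floated the VIP onto `data-integrator/0` only.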
Looks like it's working:

juju run data-integrator/leader get-credentials
Running operation 19 with 1 task
- task 20 on unit-data-integrator-0
Waiting for task 20...
ok: "True"
postgresql:
data: '{"database": "testdb", "external-node-connectivity": "true", "requested-secrets":
"[\"username\", \"password\", \"tls\", \"tls-ca\", \"uris\"]"}'
database: testdb
endpoints: 10.58.159.250:6432
password: REDACTED
subordinated: "true"
username: relation_id_7
version: "14.11"
# psql --host 10.58.159.250 --port 6432 --username relation_id_7 --password --database testdb
psql (14.12 (Ubuntu 14.12-0ubuntu0.22.04.1), server 14.11 (Ubuntu 14.11-0ubuntu0.22.04.1))
Type "help" for help.
testdb=>
Hi @gustavosr98, yes, on LXD you need hacluster to get the VIP. Adding an LXD bundle here for completeness:
Hi @gustavosr98, thank you for the instant testing here! Great work @dragomirp!
- def update_client_connection_info(self, port: Optional[str] = None):
+ def update_client_connection_info(self):
The `port` argument is not really passed anywhere, so it can be dropped.
src/relations/hacluster.py
Outdated
Mostly following the implementation in https://github.com/openstack/charm-interface-hacluster
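For context, charm-interface-hacluster hands VIPs to the hacluster subordinate as Pacemaker resources serialized into relation data. The sketch below illustrates that pattern under stated assumptions, and is not code from this PR: the `json_resources`/`json_resource_params` key names and the `ocf:heartbeat:IPaddr2` resource agent follow the upstream interface, while `build_vip_relation_data` and the resource name are hypothetical.

```python
import json
from typing import Optional

def build_vip_relation_data(vip: str, nic: Optional[str] = None,
                            cidr_netmask: Optional[str] = None) -> dict:
    """Sketch of an `ha` relation payload asking hacluster to manage a VIP."""
    # Hypothetical resource name derived from the VIP
    res_name = f"res_pgbouncer_{vip.replace('.', '_')}_vip"
    params = f'params ip="{vip}"'
    if nic:
        params += f' nic="{nic}"'
    if cidr_netmask:
        params += f' cidr_netmask="{cidr_netmask}"'
    return {
        # hacluster reads json_* keys and creates the Pacemaker resources;
        # IPaddr2 is the resource agent that floats an IPv4 address
        "json_resources": json.dumps({res_name: "ocf:heartbeat:IPaddr2"}),
        "json_resource_params": json.dumps({res_name: params}),
    }

data = build_vip_relation_data("10.58.159.250")
print(data["json_resources"])
```

In a real charm this dict would be written to the hacluster relation data bag, and the subordinate would then configure Corosync/Pacemaker to keep the address on exactly one unit.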
if self.check_pgb_running():
    self.unit.status = ActiveStatus()
    if self.unit.is_leader() and vip:
        self.unit.status = ActiveStatus(f"VIP: {vip}")
Setting just the app status message here didn't show up when testing locally, while setting a proper app status didn't clear when the charm blocked.
Tested with HAcluster from stable. Works well.
Also tested by Gustavo to ensure the OpenStack case works the same as legacy pgbouncer.
Adding Mohamed to approve the config name.
Got approval from @7annaba3l for the config name.
Related to #212