kubectl get cluster
NAME CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS AGE
gaussdb-sspxtl gaussdb Halt Updating 47m
kbcli cluster list-instances gaussdb-sspxtl
NAME NAMESPACE CLUSTER COMPONENT STATUS ROLE ACCESSMODE AZ CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE NODE CREATED-TIME
gaussdb-sspxtl-gaussdb-0 default gaussdb-sspxtl gaussdb Running secondary <none> cn-shanghai-b 500m / 500m 2Gi / 2Gi data:20Gi 172.16.0.86/172.16.0.86 Jan 10,2025 09:41 UTC+0800
gaussdb-sspxtl-gaussdb-1 default gaussdb-sspxtl gaussdb Running secondary <none> cn-shanghai-b 500m / 500m 2Gi / 2Gi data:20Gi 172.16.0.4/172.16.0.4 Jan 10,2025 09:34 UTC+0800
gaussdb-sspxtl-gaussdb-2 default gaussdb-sspxtl gaussdb Running secondary <none> cn-shanghai-b 500m / 500m 2Gi / 2Gi data:20Gi 172.16.0.88/172.16.0.88 Jan 10,2025 09:45 UTC+0800
Pod logs:
kubectl logs gaussdb-sspxtl-gaussdb-0
Defaulted container "gaussdb" out of: gaussdb, exporter, lorry, config-manager, init-lorry (init)
Changing password for user omm.
passwd: all authentication tokens updated successfully.
GaussDB Database directory appears to contain a database; Skipping init
GaussDB cluster has no primary now, use minimum ordinal pod: 0 as primary
python3 /scripts/generate_xml.py --primary-host=172.16.0.86 --standby-host=172.16.0.4,172.16.0.88 --primary-hostname=gaussdb-sspxtl-gaussdb-0 --standby-hostname=gaussdb-sspxtl-gaussdb-1,gaussdb-sspxtl-gaussdb-2
generate cluster xml file success.
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opt/om/script/static_config_files.
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.86/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.86/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo1='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.4/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.4/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo2='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.88 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.88 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.88/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.88/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c pgxc_node_name='gaussdb_sspxtl_gaussdb_0' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: pgxc_node_name='gaussdb_sspxtl_gaussdb_0': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c port=1140 set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: port=1140: [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
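The "no primary now, use minimum ordinal pod: 0 as primary" line above shows the bootstrap fallback: when no leader is recorded, the lowest-ordinal pod is chosen. A minimal sketch of that selection (the function name is hypothetical; the StatefulSet-style `-<ordinal>` suffix is taken from the pod names in the logs):

```python
def pick_bootstrap_primary(pod_names):
    """Fall back to the lowest-ordinal pod when no primary exists.

    Assumes StatefulSet-style names ending in "-<ordinal>", as seen in
    the logs (gaussdb-sspxtl-gaussdb-0, -1, -2).
    """
    return min(pod_names, key=lambda name: int(name.rsplit("-", 1)[1]))

pods = [
    "gaussdb-sspxtl-gaussdb-2",
    "gaussdb-sspxtl-gaussdb-0",
    "gaussdb-sspxtl-gaussdb-1",
]
print(pick_bootstrap_primary(pods))  # gaussdb-sspxtl-gaussdb-0
```

Note that all three pods ran this same fallback and each generated an XML naming pod 0 as primary, yet lorry later reported pod 1 as primary, which is the inconsistency this report is about.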
kubectl logs gaussdb-sspxtl-gaussdb-1
Defaulted container "gaussdb" out of: gaussdb, exporter, lorry, config-manager, init-lorry (init)
create dir: /gaussdb/volume_data
create dir: /gaussdb/volume_data/log
create dir: /gaussdb/volume_data/tmp
create dir: /gaussdb/volume_data/cm
create dir: /gaussdb/volume_data/log/cm
create dir: /gaussdb/volume_data/cm/cm_agent
create dir: /gaussdb/volume_data/cm/cm_server
create dir: /gaussdb/volume_data/log/cm/cm_agent
create dir: /gaussdb/volume_data/log/cm/cm_server
create dir: /gaussdb/volume_data/log/cm/om_monitor
create dir: /gaussdb/volume_data/etcd
Changing password for user omm.
passwd: all authentication tokens updated successfully.
The files belonging to this database system will be owned by user "omm".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
creating directory /gaussdb/volume_data/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 32MB
creating configuration files ... ok
Begin init undo subsystem meta.
[INIT UNDO] Init undo subsystem meta successfully.
creating template1 database in /gaussdb/volume_data/data/base/1 ... 2025-01-10 09:35:14.966 [unknown] [unknown] localhost 140598692997056 0[0:0#0] [BACKEND] WARNING: mac_addr is 22/1045413240, sysidentifier is 1457743/3178759091, random_num is 4132832179
ok
initializing pg_authid ... ok
setting password ... ok
initializing dependencies ... ok
loading PL/pgSQL server-side language ... ok
creating system views ... ok
creating private system views ... ok
creating performance views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
initialize global configure for bucketmap length ... ok
initialize storage type for undo subsystem ... ok
creating information schema ... ok
loading foreign-data wrapper for distfs access ... ok
loading packages extension ... ok
loading foreign-data wrapper for log access ... ok
loading hstore extension ... ok
loading security plugin ... ok
loading dblink_fdw extension ... ok
update system tables ... ok
creating snapshots catalog ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run gs_initdb.
Success. You can now start the database server of single node using:
gaussdb -D /gaussdb/volume_data/data --single_node
or
gs_ctl start -D /gaussdb/volume_data/data -Z single_node -l logfile
GaussDB cluster has no primary now, use minimum ordinal pod: 0 as primary
python3 /scripts/generate_xml.py --primary-host=172.16.0.76 --standby-host=172.16.0.4,172.16.0.87 --primary-hostname=gaussdb-sspxtl-gaussdb-0 --standby-hostname=gaussdb-sspxtl-gaussdb-1,gaussdb-sspxtl-gaussdb-2
generate cluster xml file success.
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opt/om/script/static_config_files.
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo1='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.76 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.76 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.76/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
realpath(/gaussdb/volume_data/data/pg_hba.conf.lock) failed : No such file or directory!
gs_guc sethba: host all omm 172.16.0.76/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.4/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.4/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo2='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.87 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.87 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.87/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.87/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c pgxc_node_name='gaussdb_sspxtl_gaussdb_1' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: pgxc_node_name='gaussdb_sspxtl_gaussdb_1': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c port=1140 set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: port=1140: [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
node_id is 2
cm_ctl: set cm_server.conf success.
cm_ctl: set cm_agent.conf success.
cm_ctl: set cm_agent.conf success.
cm_ctl: set cm_agent.conf success.
cm_ctl: set cm_server.conf success.
kubectl logs gaussdb-sspxtl-gaussdb-2
Defaulted container "gaussdb" out of: gaussdb, exporter, lorry, config-manager, init-lorry (init)
Changing password for user omm.
passwd: all authentication tokens updated successfully.
GaussDB Database directory appears to contain a database; Skipping init
GaussDB cluster has no primary now, use minimum ordinal pod: 0 as primary
python3 /scripts/generate_xml.py --primary-host=172.16.0.86 --standby-host=172.16.0.4,172.16.0.88 --primary-hostname=gaussdb-sspxtl-gaussdb-0 --standby-hostname=gaussdb-sspxtl-gaussdb-1,gaussdb-sspxtl-gaussdb-2
generate cluster xml file success.
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opt/om/script/static_config_files.
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo1='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.86 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.86 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.86/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.86/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo2='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.4/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.4/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.88/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.88/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c pgxc_node_name='gaussdb_sspxtl_gaussdb_2' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: pgxc_node_name='gaussdb_sspxtl_gaussdb_2': [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c port=1140 set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: port=1140: [/gaussdb/volume_data/data/postgresql.conf]
Total instances: 1. Failed instances: 0.
Success to perform gs_guc!
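Across all three pod logs, the `gs_guc -c replconninfoN` calls follow one pattern: each node writes one entry per peer, with identical data and heartbeat ports on both sides. A sketch of that generation logic (the helper name is an assumption; IPs and ports are taken from the logs):

```python
def replconninfo_entries(local_ip, peer_ips, port=1141, heartbeat_port=1142):
    """Build the replconninfoN values gs_guc sets on each node.

    Mirrors the pattern in the logs: one entry per peer, with the same
    data port (1141) and heartbeat port (1142) on both ends.
    """
    entries = {}
    for i, peer in enumerate(peer_ips, start=1):
        entries[f"replconninfo{i}"] = (
            f"localhost={local_ip} localport={port} "
            f"localheartbeatport={heartbeat_port} "
            f"remotehost={peer} remoteport={port} "
            f"remoteheartbeatport={heartbeat_port}"
        )
    return entries

# Pod 0's view (172.16.0.86) of its two standbys:
for key, value in replconninfo_entries("172.16.0.86",
                                       ["172.16.0.4", "172.16.0.88"]).items():
    print(f"{key}='{value}'")
```

Note the pod-1 log used 172.16.0.76/172.16.0.87 as peer addresses while pods 0 and 2 used 172.16.0.86/172.16.0.88, so the nodes ended up with disagreeing replication topologies after pod IPs changed.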
Lorry container logs:
kubectl logs gaussdb-sspxtl-gaussdb-0 lorry
2025-01-10T01:45:06Z INFO Initialize DB manager
2025-01-10T01:45:06Z INFO KB_WORKLOAD_TYPE ENV not set
2025-01-10T01:45:06Z INFO Volume-Protection succeed to init volume protection {"pod": "gaussdb-sspxtl-gaussdb-0", "spec": {"highWatermark":"0","volumes":[]}}
2025-01-10T01:45:06Z INFO HTTPServer Starting HTTP Server
2025-01-10T01:45:06Z INFO HTTPServer API route path {"method": "GET", "path": ["/v1.0/query", "/v1.0/checkrole", "/v1.0/describeuser", "/v1.0/getrole", "/v1.0/listusers", "/v1.0/listsystemaccounts", "/v1.0/healthycheck"]}
2025-01-10T01:45:06Z INFO HTTPServer API route path {"method": "POST", "path": ["/v1.0/joinmember", "/v1.0/createuser", "/v1.0/preterminate", "/v1.0/unlockinstance", "/v1.0/deleteuser", "/v1.0/leavemember", "/v1.0/datadump", "/v1.0/dataload", "/v1.0/exec", "/v1.0/grantuserrole", "/v1.0/lockinstance", "/v1.0/revokeuserrole", "/v1.0/postprovision", "/v1.0/rebuild", "/v1.0/switchover", "/v1.0/volumeprotection", "/v1.0/checkrunning", "/v1.0/getlag"]}
2025-01-10T01:45:06Z INFO cronjobs env is not set {"env": "KB_CRON_JOBS"}
2025-01-10T01:45:17Z INFO DCS-K8S pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:45:17Z INFO DCS-K8S podlist: 3
2025-01-10T01:45:17Z INFO DCS-K8S Leader configmap is not found {"configmap": "gaussdb-sspxtl-gaussdb-leader"}
2025-01-10T01:45:17Z INFO event send event: map[event:Success operation:checkRole originalRole:waitForStart role:secondary]
2025-01-10T01:45:17Z INFO event send event success {"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"secondary\"}"}
kubectl logs gaussdb-sspxtl-gaussdb-1 lorry
2025-01-10T01:35:13Z INFO Initialize DB manager
2025-01-10T01:35:13Z INFO KB_WORKLOAD_TYPE ENV not set
2025-01-10T01:35:13Z INFO Volume-Protection succeed to init volume protection {"pod": "gaussdb-sspxtl-gaussdb-1", "spec": {"highWatermark":"0","volumes":[]}}
2025-01-10T01:35:13Z INFO HTTPServer Starting HTTP Server
2025-01-10T01:35:13Z INFO HTTPServer API route path {"method": "POST", "path": ["/v1.0/volumeprotection", "/v1.0/exec", "/v1.0/deleteuser", "/v1.0/postprovision", "/v1.0/joinmember", "/v1.0/grantuserrole", "/v1.0/getlag", "/v1.0/checkrunning", "/v1.0/rebuild", "/v1.0/createuser", "/v1.0/dataload", "/v1.0/revokeuserrole", "/v1.0/unlockinstance", "/v1.0/datadump", "/v1.0/leavemember", "/v1.0/switchover", "/v1.0/lockinstance", "/v1.0/preterminate"]}
2025-01-10T01:35:13Z INFO HTTPServer API route path {"method": "GET", "path": ["/v1.0/listusers", "/v1.0/getrole", "/v1.0/listsystemaccounts", "/v1.0/checkrole", "/v1.0/healthycheck", "/v1.0/describeuser", "/v1.0/query"]}
2025-01-10T01:35:13Z INFO cronjobs env is not set {"env": "KB_CRON_JOBS"}
2025-01-10T01:35:19Z INFO DCS-K8S pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:35:19Z INFO DCS-K8S podlist: 3
2025-01-10T01:35:19Z INFO DCS-K8S Leader configmap is not found {"configmap": "gaussdb-sspxtl-gaussdb-leader"}
2025-01-10T01:35:19Z INFO event send event: map[event:Success operation:checkRole originalRole:waitForStart role:secondary]
2025-01-10T01:35:19Z INFO event send event success {"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"secondary\"}"}
2025-01-10T01:37:19Z INFO DCS-K8S pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:37:19Z INFO DCS-K8S podlist: 3
2025-01-10T01:37:19Z DEBUG checkrole check member {"member": "gaussdb-sspxtl-gaussdb-0", "role": ""}
2025-01-10T01:37:19Z DEBUG checkrole check member {"member": "gaussdb-sspxtl-gaussdb-1", "role": "secondary"}
2025-01-10T01:37:19Z DEBUG checkrole check member {"member": "gaussdb-sspxtl-gaussdb-2", "role": "secondary"}
2025-01-10T01:37:19Z INFO event send event: map[event:Success operation:checkRole originalRole:secondary role:{"term":"1736473039323843","PodRoleNamePairs":[{"podName":"gaussdb-sspxtl-gaussdb-1","roleName":"primary","podUid":"dfbe0ed3-1614-470b-8355-184e6ac3de16"}]}]
2025-01-10T01:37:19Z INFO event send event success {"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"{\\\"term\\\":\\\"1736473039323843\\\",\\\"PodRoleNamePairs\\\":[{\\\"podName\\\":\\\"gaussdb-sspxtl-gaussdb-1\\\",\\\"roleName\\\":\\\"primary\\\",\\\"podUid\\\":\\\"dfbe0ed3-1614-470b-8355-184e6ac3de16\\\"}]}\"}"}
2025-01-10T01:40:07Z INFO HTTPServer HTTP API Called {"useragent": "Go-http-client/1.1", "method": "GET", "path": "/v1.0/describeuser"}
2025-01-10T01:40:07Z INFO describeUser executing describeUser error {"error": "not implemented"}
2025-01-10T01:40:07Z INFO HTTPServer HTTP API Called {"useragent": "Go-http-client/1.1", "status code": 501, "cost": 0}
2025-01-10T01:40:07Z INFO HTTPServer HTTP API Called {"useragent": "Go-http-client/1.1", "method": "POST", "path": "/v1.0/createuser"}
2025-01-10T01:40:07Z INFO custom account provision {"output": "CREATE ROLE\n"}
2025-01-10T01:40:07Z INFO HTTPServer HTTP API Called {"useragent": "Go-http-client/1.1", "status code": 200, "cost": 33}
2025-01-10T01:40:59Z INFO event send event: map[event:Success operation:checkRole originalRole:primary role:secondary]
2025-01-10T01:40:59Z INFO event send event success {"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"primary\",\"role\":\"secondary\"}"}
kubectl logs gaussdb-sspxtl-gaussdb-2 lorry
2025-01-10T01:45:36Z INFO Initialize DB manager
2025-01-10T01:45:36Z INFO KB_WORKLOAD_TYPE ENV not set
2025-01-10T01:45:36Z INFO Volume-Protection succeed to init volume protection {"pod": "gaussdb-sspxtl-gaussdb-2", "spec": {"highWatermark":"0","volumes":[]}}
2025-01-10T01:45:36Z INFO HTTPServer Starting HTTP Server
2025-01-10T01:45:36Z INFO HTTPServer API route path {"method": "POST", "path": ["/v1.0/preterminate", "/v1.0/revokeuserrole", "/v1.0/unlockinstance", "/v1.0/deleteuser", "/v1.0/dataload", "/v1.0/getlag", "/v1.0/leavemember", "/v1.0/switchover", "/v1.0/grantuserrole", "/v1.0/createuser", "/v1.0/volumeprotection", "/v1.0/postprovision", "/v1.0/rebuild", "/v1.0/joinmember", "/v1.0/datadump", "/v1.0/checkrunning", "/v1.0/lockinstance", "/v1.0/exec"]}
2025-01-10T01:45:36Z INFO HTTPServer API route path {"method": "GET", "path": ["/v1.0/getrole", "/v1.0/listusers", "/v1.0/healthycheck", "/v1.0/query", "/v1.0/describeuser", "/v1.0/checkrole", "/v1.0/listsystemaccounts"]}
2025-01-10T01:45:36Z INFO cronjobs env is not set {"env": "KB_CRON_JOBS"}
2025-01-10T01:45:48Z INFO DCS-K8S pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:45:48Z INFO DCS-K8S podlist: 3
2025-01-10T01:45:48Z INFO DCS-K8S Leader configmap is not found {"configmap": "gaussdb-sspxtl-gaussdb-leader"}
2025-01-10T01:45:49Z INFO event send event: map[event:Success operation:checkRole originalRole:waitForStart role:secondary]
2025-01-10T01:45:49Z INFO event send event success {"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"secondary\"}"}
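In the lorry logs the `checkRole` event's `role` field takes two shapes: a plain string (`"secondary"`) or a JSON payload with a `term` and `PodRoleNamePairs` (as in pod 1's log, where it briefly reported itself primary). A small helper for normalizing both shapes when reading these events (the function name is hypothetical; the payload format is copied from the logs above):

```python
import json

def parse_role(role_field):
    """Normalize the "role" field of a lorry checkRole event.

    In the logs it is either a plain string ("secondary") or a JSON
    object carrying a term and PodRoleNamePairs.
    """
    try:
        payload = json.loads(role_field)
    except (json.JSONDecodeError, TypeError):
        # Plain-string form: role of the reporting pod itself.
        return {None: role_field}
    return {p["podName"]: p["roleName"]
            for p in payload.get("PodRoleNamePairs", [])}

event_role = ('{"term":"1736473039323843","PodRoleNamePairs":'
              '[{"podName":"gaussdb-sspxtl-gaussdb-1","roleName":"primary",'
              '"podUid":"dfbe0ed3-1614-470b-8355-184e6ac3de16"}]}')
print(parse_role(event_role))   # {'gaussdb-sspxtl-gaussdb-1': 'primary'}
print(parse_role("secondary"))  # {None: 'secondary'}
```

Parsed this way, the timeline shows pod 1 was reported primary at 01:37:19 and demoted back to secondary at 01:40:59, after which all three pods report secondary and the cluster stays in Updating.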