
[BUG] gaussdb cluster: all roles are secondary after switchover and then deleting the primary pod #8782

Open · JashBook opened this issue on Jan 10, 2025 · 0 comments
Labels: kind/bug Something isn't working

JashBook (Collaborator) commented on Jan 10, 2025

Describe the bug

After a switchover followed by eviction of the new primary pod, every replica of the GaussDB cluster reports the secondary role and the cluster stays in the Updating status.

kbcli version
Kubernetes: v1.28.15-vke.18
KubeBlocks: 0.9.3-beta.20
kbcli: 0.9.3-beta.2

To Reproduce
Steps to reproduce the behavior:

  1. Create the cluster:
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: gaussdb-sspxtl
  namespace: default
  annotations:
    kubeblocks.io/host-network: "gaussdb"
spec:
  clusterDefinitionRef: gaussdb
  topology: replication
  terminationPolicy: Halt
  componentSpecs:
    - name: gaussdb
      serviceVersion: 2.23.1
      replicas: 3
      resources:
        requests:
          cpu: 500m
          memory: 2Gi
        limits:
          cpu: 500m
          memory: 2Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
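
For reference, the cluster is presumably created by applying the manifest above (the file name cluster.yaml is hypothetical; the label selector matches the one logged by lorry below):

kubectl apply -f cluster.yaml --namespace default
# Watch the three replicas come up
kubectl get pods -l app.kubernetes.io/instance=gaussdb-sspxtl --namespace default -w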

kbcli cluster list-instances gaussdb-sspxtl --namespace default
    
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ              CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                      CREATED-TIME                 
gaussdb-sspxtl-gaussdb-0   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.76/172.16.0.76   Jan 10,2025 09:34 UTC+0800   
gaussdb-sspxtl-gaussdb-1   default     gaussdb-sspxtl   gaussdb     Running   primary     <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.4/172.16.0.4     Jan 10,2025 09:34 UTC+0800   
gaussdb-sspxtl-gaussdb-2   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.87/172.16.0.87   Jan 10,2025 09:34 UTC+0800   

  2. Switch over:
kbcli cluster promote gaussdb-sspxtl --auto-approve --force=true --component gaussdb --namespace default
kbcli cluster list-instances gaussdb-sspxtl --namespace default
    
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ              CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                      CREATED-TIME                 
gaussdb-sspxtl-gaussdb-0   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.76/172.16.0.76   Jan 10,2025 09:34 UTC+0800   
gaussdb-sspxtl-gaussdb-1   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.4/172.16.0.4     Jan 10,2025 09:34 UTC+0800   
gaussdb-sspxtl-gaussdb-2   default     gaussdb-sspxtl   gaussdb     Running   primary     <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.87/172.16.0.87   Jan 10,2025 09:34 UTC+0800   
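
For context, kbcli cluster promote submits a Switchover OpsRequest. A minimal sketch of that request against the apps.kubeblocks.io/v1alpha1 API (the generateName and the instanceName value "*", meaning "auto-pick a candidate", are assumptions, not taken from this report):

apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: gaussdb-sspxtl-switchover-
  namespace: default
spec:
  clusterRef: gaussdb-sspxtl
  type: Switchover
  switchover:
    - componentName: gaussdb
      instanceName: "*"  # or a concrete candidate pod such as gaussdb-sspxtl-gaussdb-2
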
  3. Evict the primary pod: evicting pod default/gaussdb-sspxtl-gaussdb-2 (the eviction command is sketched after the output below).
  4. See the error:
kubectl get cluster 
NAME             CLUSTER-DEFINITION   VERSION   TERMINATION-POLICY   STATUS     AGE
gaussdb-sspxtl   gaussdb                        Halt                 Updating   47m

kbcli cluster list-instances gaussdb-sspxtl  
NAME                       NAMESPACE   CLUSTER          COMPONENT   STATUS    ROLE        ACCESSMODE   AZ              CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE     NODE                      CREATED-TIME                 
gaussdb-sspxtl-gaussdb-0   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.86/172.16.0.86   Jan 10,2025 09:41 UTC+0800   
gaussdb-sspxtl-gaussdb-1   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.4/172.16.0.4     Jan 10,2025 09:34 UTC+0800   
gaussdb-sspxtl-gaussdb-2   default     gaussdb-sspxtl   gaussdb     Running   secondary   <none>       cn-shanghai-b   500m / 500m          2Gi / 2Gi               data:20Gi   172.16.0.88/172.16.0.88   Jan 10,2025 09:45 UTC+0800   
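
The exact eviction command is not shown in the report; a minimal sketch of how the pod is typically evicted (the node name placeholder is hypothetical):

# Drain the node hosting the primary (goes through the eviction API):
kubectl drain <node-of-gaussdb-sspxtl-gaussdb-2> --ignore-daemonsets --delete-emptydir-data
# Or delete the pod directly and let the controller recreate it:
kubectl delete pod gaussdb-sspxtl-gaussdb-2 --namespace default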

Pod logs (gaussdb container):

kubectl logs gaussdb-sspxtl-gaussdb-0      
Defaulted container "gaussdb" out of: gaussdb, exporter, lorry, config-manager, init-lorry (init)
Changing password for user omm.
passwd: all authentication tokens updated successfully.
GaussDB Database directory appears to contain a database; Skipping init
GaussDB cluster has no primary now, use minimum ordinal pod: 0 as primary
python3 /scripts/generate_xml.py --primary-host=172.16.0.86 --standby-host=172.16.0.4,172.16.0.88 --primary-hostname=gaussdb-sspxtl-gaussdb-0 --standby-hostname=gaussdb-sspxtl-gaussdb-1,gaussdb-sspxtl-gaussdb-2
generate cluster xml file success.
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opt/om/script/static_config_files.
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.86/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.86/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo1='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.4/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.4/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo2='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.88 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.86 localport=1141 localheartbeatport=1142 remotehost=172.16.0.88 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.88/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.88/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c pgxc_node_name='gaussdb_sspxtl_gaussdb_0' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: pgxc_node_name='gaussdb_sspxtl_gaussdb_0': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c port=1140 set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: port=1140: [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

kubectl logs gaussdb-sspxtl-gaussdb-1
Defaulted container "gaussdb" out of: gaussdb, exporter, lorry, config-manager, init-lorry (init)
create dir: /gaussdb/volume_data
create dir: /gaussdb/volume_data/log
create dir: /gaussdb/volume_data/tmp
create dir: /gaussdb/volume_data/cm
create dir: /gaussdb/volume_data/log/cm
create dir: /gaussdb/volume_data/cm/cm_agent
create dir: /gaussdb/volume_data/cm/cm_server
create dir: /gaussdb/volume_data/log/cm/cm_agent
create dir: /gaussdb/volume_data/log/cm/cm_server
create dir: /gaussdb/volume_data/log/cm/om_monitor
create dir: /gaussdb/volume_data/etcd
Changing password for user omm.
passwd: all authentication tokens updated successfully.
The files belonging to this database system will be owned by user "omm".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

creating directory /gaussdb/volume_data/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 32MB
creating configuration files ... ok
Begin init undo subsystem meta.
[INIT UNDO] Init undo subsystem meta successfully.
creating template1 database in /gaussdb/volume_data/data/base/1 ... 2025-01-10 09:35:14.966 [unknown] [unknown] localhost 140598692997056 0[0:0#0]  [BACKEND] WARNING:  mac_addr is 22/1045413240, sysidentifier is 1457743/3178759091, random_num is 4132832179
ok
initializing pg_authid ... ok
setting password ... ok
initializing dependencies ... ok
loading PL/pgSQL server-side language ... ok
creating system views ... ok
creating private system views ... ok
creating performance views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
initialize global configure for bucketmap length ... ok
initialize storage type for undo subsystem ... ok
creating information schema ... ok
loading foreign-data wrapper for distfs access ... ok
loading packages extension ... ok
loading foreign-data wrapper for log access ... ok
loading hstore extension ... ok
loading security plugin ... ok
loading dblink_fdw extension ... ok
update system tables ... ok
creating snapshots catalog ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
freezing database template0 ... ok
freezing database template1 ... ok
freezing database postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run gs_initdb.

Success. You can now start the database server of single node using:

    gaussdb -D /gaussdb/volume_data/data --single_node
or
    gs_ctl start -D /gaussdb/volume_data/data -Z single_node -l logfile

GaussDB cluster has no primary now, use minimum ordinal pod: 0 as primary
python3 /scripts/generate_xml.py --primary-host=172.16.0.76 --standby-host=172.16.0.4,172.16.0.87 --primary-hostname=gaussdb-sspxtl-gaussdb-0 --standby-hostname=gaussdb-sspxtl-gaussdb-1,gaussdb-sspxtl-gaussdb-2
generate cluster xml file success.
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opt/om/script/static_config_files.
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo1='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.76 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.76 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.76/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
realpath(/gaussdb/volume_data/data/pg_hba.conf.lock) failed : No such file or directory!
gs_guc sethba: host all omm 172.16.0.76/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.4/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.4/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo2='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.87 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.4 localport=1141 localheartbeatport=1142 remotehost=172.16.0.87 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.87/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.87/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c pgxc_node_name='gaussdb_sspxtl_gaussdb_1' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: pgxc_node_name='gaussdb_sspxtl_gaussdb_1': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c port=1140 set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: port=1140: [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

node_id is 2
cm_ctl: set cm_server.conf success.
cm_ctl: set cm_agent.conf success.
cm_ctl: set cm_agent.conf success.
cm_ctl: set cm_agent.conf success.
cm_ctl: set cm_server.conf success.
kubectl logs gaussdb-sspxtl-gaussdb-2
Defaulted container "gaussdb" out of: gaussdb, exporter, lorry, config-manager, init-lorry (init)
Changing password for user omm.
passwd: all authentication tokens updated successfully.
GaussDB Database directory appears to contain a database; Skipping init
GaussDB cluster has no primary now, use minimum ordinal pod: 0 as primary
python3 /scripts/generate_xml.py --primary-host=172.16.0.86 --standby-host=172.16.0.4,172.16.0.88 --primary-hostname=gaussdb-sspxtl-gaussdb-0 --standby-hostname=gaussdb-sspxtl-gaussdb-1,gaussdb-sspxtl-gaussdb-2
generate cluster xml file success.
Generating static configuration files for all nodes.
Creating temp directory to store static configuration files.
Successfully created the temp directory.
Generating static configuration files.
Successfully generated static configuration files.
Static configuration files for all nodes are saved in /opt/om/script/static_config_files.
The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo1='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.86 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo1='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.86 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.86/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.86/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c replconninfo2='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: replconninfo2='localhost=172.16.0.88 localport=1141 localheartbeatport=1142 remotehost=172.16.0.4 remoteport=1141 remoteheartbeatport=1142': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.4/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.4/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -h host all omm 172.16.0.88/32 trust set ].
expected instance path: [/gaussdb/volume_data/data/pg_hba.conf]
gs_guc sethba: host all omm 172.16.0.88/32 trust: [/gaussdb/volume_data/data/pg_hba.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c pgxc_node_name='gaussdb_sspxtl_gaussdb_2' set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: pgxc_node_name='gaussdb_sspxtl_gaussdb_2': [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!

The gs_guc run with the following arguments: [gs_guc -D /gaussdb/volume_data/data -c port=1140 set ].
expected instance path: [/gaussdb/volume_data/data/postgresql.conf]
gs_guc set: port=1140: [/gaussdb/volume_data/data/postgresql.conf]

Total instances: 1. Failed instances: 0.
Success to perform gs_guc!


Lorry container logs:

kubectl logs gaussdb-sspxtl-gaussdb-0 lorry
2025-01-10T01:45:06Z	INFO	Initialize DB manager
2025-01-10T01:45:06Z	INFO	KB_WORKLOAD_TYPE ENV not set
2025-01-10T01:45:06Z	INFO	Volume-Protection	succeed to init volume protection	{"pod": "gaussdb-sspxtl-gaussdb-0", "spec": {"highWatermark":"0","volumes":[]}}
2025-01-10T01:45:06Z	INFO	HTTPServer	Starting HTTP Server
2025-01-10T01:45:06Z	INFO	HTTPServer	API route path	{"method": "GET", "path": ["/v1.0/query", "/v1.0/checkrole", "/v1.0/describeuser", "/v1.0/getrole", "/v1.0/listusers", "/v1.0/listsystemaccounts", "/v1.0/healthycheck"]}
2025-01-10T01:45:06Z	INFO	HTTPServer	API route path	{"method": "POST", "path": ["/v1.0/joinmember", "/v1.0/createuser", "/v1.0/preterminate", "/v1.0/unlockinstance", "/v1.0/deleteuser", "/v1.0/leavemember", "/v1.0/datadump", "/v1.0/dataload", "/v1.0/exec", "/v1.0/grantuserrole", "/v1.0/lockinstance", "/v1.0/revokeuserrole", "/v1.0/postprovision", "/v1.0/rebuild", "/v1.0/switchover", "/v1.0/volumeprotection", "/v1.0/checkrunning", "/v1.0/getlag"]}
2025-01-10T01:45:06Z	INFO	cronjobs	env is not set	{"env": "KB_CRON_JOBS"}
2025-01-10T01:45:17Z	INFO	DCS-K8S	pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:45:17Z	INFO	DCS-K8S	podlist: 3
2025-01-10T01:45:17Z	INFO	DCS-K8S	Leader configmap is not found	{"configmap": "gaussdb-sspxtl-gaussdb-leader"}
2025-01-10T01:45:17Z	INFO	event	send event: map[event:Success operation:checkRole originalRole:waitForStart role:secondary]
2025-01-10T01:45:17Z	INFO	event	send event success	{"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"secondary\"}"}
kubectl logs gaussdb-sspxtl-gaussdb-1 lorry
2025-01-10T01:35:13Z	INFO	Initialize DB manager
2025-01-10T01:35:13Z	INFO	KB_WORKLOAD_TYPE ENV not set
2025-01-10T01:35:13Z	INFO	Volume-Protection	succeed to init volume protection	{"pod": "gaussdb-sspxtl-gaussdb-1", "spec": {"highWatermark":"0","volumes":[]}}
2025-01-10T01:35:13Z	INFO	HTTPServer	Starting HTTP Server
2025-01-10T01:35:13Z	INFO	HTTPServer	API route path	{"method": "POST", "path": ["/v1.0/volumeprotection", "/v1.0/exec", "/v1.0/deleteuser", "/v1.0/postprovision", "/v1.0/joinmember", "/v1.0/grantuserrole", "/v1.0/getlag", "/v1.0/checkrunning", "/v1.0/rebuild", "/v1.0/createuser", "/v1.0/dataload", "/v1.0/revokeuserrole", "/v1.0/unlockinstance", "/v1.0/datadump", "/v1.0/leavemember", "/v1.0/switchover", "/v1.0/lockinstance", "/v1.0/preterminate"]}
2025-01-10T01:35:13Z	INFO	HTTPServer	API route path	{"method": "GET", "path": ["/v1.0/listusers", "/v1.0/getrole", "/v1.0/listsystemaccounts", "/v1.0/checkrole", "/v1.0/healthycheck", "/v1.0/describeuser", "/v1.0/query"]}
2025-01-10T01:35:13Z	INFO	cronjobs	env is not set	{"env": "KB_CRON_JOBS"}
2025-01-10T01:35:19Z	INFO	DCS-K8S	pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:35:19Z	INFO	DCS-K8S	podlist: 3
2025-01-10T01:35:19Z	INFO	DCS-K8S	Leader configmap is not found	{"configmap": "gaussdb-sspxtl-gaussdb-leader"}
2025-01-10T01:35:19Z	INFO	event	send event: map[event:Success operation:checkRole originalRole:waitForStart role:secondary]
2025-01-10T01:35:19Z	INFO	event	send event success	{"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"secondary\"}"}
2025-01-10T01:37:19Z	INFO	DCS-K8S	pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:37:19Z	INFO	DCS-K8S	podlist: 3
2025-01-10T01:37:19Z	DEBUG	checkrole	check member	{"member": "gaussdb-sspxtl-gaussdb-0", "role": ""}
2025-01-10T01:37:19Z	DEBUG	checkrole	check member	{"member": "gaussdb-sspxtl-gaussdb-1", "role": "secondary"}
2025-01-10T01:37:19Z	DEBUG	checkrole	check member	{"member": "gaussdb-sspxtl-gaussdb-2", "role": "secondary"}
2025-01-10T01:37:19Z	INFO	event	send event: map[event:Success operation:checkRole originalRole:secondary role:{"term":"1736473039323843","PodRoleNamePairs":[{"podName":"gaussdb-sspxtl-gaussdb-1","roleName":"primary","podUid":"dfbe0ed3-1614-470b-8355-184e6ac3de16"}]}]
2025-01-10T01:37:19Z	INFO	event	send event success	{"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"{\\\"term\\\":\\\"1736473039323843\\\",\\\"PodRoleNamePairs\\\":[{\\\"podName\\\":\\\"gaussdb-sspxtl-gaussdb-1\\\",\\\"roleName\\\":\\\"primary\\\",\\\"podUid\\\":\\\"dfbe0ed3-1614-470b-8355-184e6ac3de16\\\"}]}\"}"}
2025-01-10T01:40:07Z	INFO	HTTPServer	HTTP API Called	{"useragent": "Go-http-client/1.1", "method": "GET", "path": "/v1.0/describeuser"}
2025-01-10T01:40:07Z	INFO	describeUser	executing describeUser error	{"error": "not implemented"}
2025-01-10T01:40:07Z	INFO	HTTPServer	HTTP API Called	{"useragent": "Go-http-client/1.1", "status code": 501, "cost": 0}
2025-01-10T01:40:07Z	INFO	HTTPServer	HTTP API Called	{"useragent": "Go-http-client/1.1", "method": "POST", "path": "/v1.0/createuser"}
2025-01-10T01:40:07Z	INFO	custom	account provision	{"output": "CREATE ROLE\n"}
2025-01-10T01:40:07Z	INFO	HTTPServer	HTTP API Called	{"useragent": "Go-http-client/1.1", "status code": 200, "cost": 33}
2025-01-10T01:40:59Z	INFO	event	send event: map[event:Success operation:checkRole originalRole:primary role:secondary]
2025-01-10T01:40:59Z	INFO	event	send event success	{"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"primary\",\"role\":\"secondary\"}"}
kubectl logs gaussdb-sspxtl-gaussdb-2 lorry
2025-01-10T01:45:36Z	INFO	Initialize DB manager
2025-01-10T01:45:36Z	INFO	KB_WORKLOAD_TYPE ENV not set
2025-01-10T01:45:36Z	INFO	Volume-Protection	succeed to init volume protection	{"pod": "gaussdb-sspxtl-gaussdb-2", "spec": {"highWatermark":"0","volumes":[]}}
2025-01-10T01:45:36Z	INFO	HTTPServer	Starting HTTP Server
2025-01-10T01:45:36Z	INFO	HTTPServer	API route path	{"method": "POST", "path": ["/v1.0/preterminate", "/v1.0/revokeuserrole", "/v1.0/unlockinstance", "/v1.0/deleteuser", "/v1.0/dataload", "/v1.0/getlag", "/v1.0/leavemember", "/v1.0/switchover", "/v1.0/grantuserrole", "/v1.0/createuser", "/v1.0/volumeprotection", "/v1.0/postprovision", "/v1.0/rebuild", "/v1.0/joinmember", "/v1.0/datadump", "/v1.0/checkrunning", "/v1.0/lockinstance", "/v1.0/exec"]}
2025-01-10T01:45:36Z	INFO	HTTPServer	API route path	{"method": "GET", "path": ["/v1.0/getrole", "/v1.0/listusers", "/v1.0/healthycheck", "/v1.0/query", "/v1.0/describeuser", "/v1.0/checkrole", "/v1.0/listsystemaccounts"]}
2025-01-10T01:45:36Z	INFO	cronjobs	env is not set	{"env": "KB_CRON_JOBS"}
2025-01-10T01:45:48Z	INFO	DCS-K8S	pod selector: app.kubernetes.io/instance=gaussdb-sspxtl,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=gaussdb
2025-01-10T01:45:48Z	INFO	DCS-K8S	podlist: 3
2025-01-10T01:45:48Z	INFO	DCS-K8S	Leader configmap is not found	{"configmap": "gaussdb-sspxtl-gaussdb-leader"}
2025-01-10T01:45:49Z	INFO	event	send event: map[event:Success operation:checkRole originalRole:waitForStart role:secondary]
2025-01-10T01:45:49Z	INFO	event	send event success	{"message": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"waitForStart\",\"role\":\"secondary\"}"}
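The lorry logs above show "Leader configmap is not found" on every replica after the eviction. Two hedged checks to confirm the stuck state (port 3501 is lorry's usual default but an assumption here, as is curl being available in the lorry image; the /v1.0/getrole route is taken from the API list logged above):

# Does the leader configmap exist after the eviction?
kubectl get configmap gaussdb-sspxtl-gaussdb-leader --namespace default -o yaml

# Ask each replica's lorry agent which role it detects
for i in 0 1 2; do
  kubectl exec gaussdb-sspxtl-gaussdb-$i --namespace default -c lorry -- \
    curl -s http://localhost:3501/v1.0/getrole
done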

Expected behavior

After the evicted primary pod is recreated, a new primary should be elected (or the previously promoted pod should regain the primary role) and the cluster should return to Running. Instead, all three replicas stay secondary and the cluster is stuck in Updating.


JashBook added the kind/bug label on Jan 10, 2025
JashBook added this to the Release 0.9.2 milestone on Jan 10, 2025