Jitsi requires delete/create namespace after a restart #123

Open
f3k-freek opened this issue Aug 8, 2024 · 2 comments

f3k-freek commented Aug 8, 2024

For some reason Jitsi stops working after a restart of the pods when websockets are enabled.
Everything seems to be fine (all pods are running and the web UI is online), but clients can't connect.
After deleting and re-creating the namespace it works again; deleting or restarting the deployment does not suffice.
Disabling websockets fixes the issue, but that is not an option for us.

So right now we are making sure that the Jitsi pods don't restart, which works but is not ideal.
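
For reference, "disabling websockets" above means flipping the two flags under websockets in the chart values (they are shown enabled in the full values below):

websockets:
  colibri:
    enabled: false   # disables the Colibri (bridge) websocket
  xmpp:
    enabled: false   # disables the XMPP websocket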

These are our values. I have anonymised some data:

global:
  clusterDomain: cluster.local
  podLabels: {}
  podAnnotations: {}
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

enableAuth: false
enableGuests: true
publicURL: ""

tz: Europe/Amsterdam

image:
  pullPolicy: IfNotPresent

websockets:
  colibri:
    enabled: true
  xmpp:
    enabled: true

web:
  replicaCount: 1
  image:
    repository: jitsi/web

  extraEnvs: {}
  service:
    type: ClusterIP
    port: 80

  ingress:
    enabled: true
    ingressClassName: "traefik"
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-production
    hosts:
    - host: anonymized-host
      paths: ['/']
    tls:
     - secretName: jitsi-web-tls
       hosts:
         - anonymized-host

  httpRedirect: false
  httpsEnabled: false

  livenessProbe:
    httpGet:
      path: /
      port: 80
  readinessProbe:
    httpGet:
      path: /
      port: 80

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}

  securityContext: {}

  resources: {}

  nodeSelector: {}

  tolerations: []

  affinity: {}

jicofo:
  replicaCount: 1
  image:
    repository: jitsi/jicofo

  xmpp:
    password:
    componentSecret:

  livenessProbe:
    tcpSocket:
      port: 8888
  readinessProbe:
    tcpSocket:
      port: 8888

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: {}

jvb:
  replicaCount: 1
  image:
    repository: jitsi/jvb

  xmpp:
    user: jvb
    password:

  publicIPs: []
  stunServers: 'meet-jit-si-turnrelay.jitsi.net:443'
  useHostPort: false
  useHostNetwork: false
  UDPPort: 10000

  service:
    enabled:
    type: NodePort
    externalTrafficPolicy: Cluster
    externalIPs: []

    annotations: {}

  breweryMuc: jvbbrewery

  livenessProbe:
    httpGet:
      path: /about/health
      port: 8080
  readinessProbe:
    httpGet:
      path: /about/health
      port: 8080

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: {}

  metrics:
    enabled: false
    prometheusAnnotations: false
    image:
      repository: docker.io/systemli/prometheus-jitsi-meet-exporter
      pullPolicy: IfNotPresent
    serviceMonitor:
      enabled: true
      selector:
        release: prometheus-operator
      interval: 10s

    resources:
      requests:
        cpu: 10m
        memory: 16Mi
      limits:
        cpu: 20m
        memory: 32Mi

octo:
  enabled: false

jibri:
  enabled: false
  useExternalJibri: false
  singleUseMode: false
  recording: true
  livestreaming: false
  replicaCount: 1
  persistence:
    enabled: false
    size: 4Gi

  shm:
    enabled: false

  image:
    repository: jitsi/jibri

  podLabels: {}
  podAnnotations: {}

  breweryMuc: jibribrewery
  timeout: 90

  xmpp:
    user: jibri
    password:

  recorder:
    user: recorder
    password:

  livenessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 2
    exec:
      command:
        - /bin/bash
        - "-c"
        - >-
          curl -sq localhost:2222/jibri/api/v1.0/health
          | jq '"\(.status.health.healthStatus) \(.status.busyStatus)"'
          | grep -qP 'HEALTHY (IDLE|BUSY)'

  readinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 2
    exec:
      command:
        - /bin/bash
        - "-c"
        - >-
          curl -sq localhost:2222/jibri/api/v1.0/health
          | jq '"\(.status.health.healthStatus) \(.status.busyStatus)"'
          | grep -qP 'HEALTHY (IDLE|BUSY)'

  extraEnvs: {}

serviceAccount:
  create: true
  annotations: {}
  name:

xmpp:
  domain: meet.jitsi
  authDomain:
  mucDomain:
  internalMucDomain:
  guestDomain:

extraCommonEnvs: {}

prosody:
  enabled: true
  server:
  extraEnvFrom:
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jicofo'
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jvb'
  - configMapRef:
      name: '{{ include "prosody.fullname" . }}-common'

  image:
    repository: jitsi/prosody

@sakiphandursun

Is the error "WebSocket connection to 'wss://...' failed"? Also, do you do any nginx routing for the WebSocket? Have you also tried httpsEnabled: true?

I couldn't think of any solutions other than these. I am just beginning to understand the architecture.
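
To make that last suggestion concrete: in the values posted above the toggle sits under web, so the experiment would look roughly like the sketch below. Note that TLS is currently terminated at the Traefik ingress in this setup, so the ingress and probes may need matching adjustments.

web:
  httpsEnabled: true   # suggested experiment; currently false in the values above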

spijet (Collaborator) commented Aug 29, 2024

Hello @f3k-freek!

If I'm reading the values correctly, the JVB is unable to determine the IP address it should announce to clients so that they can connect to its UDP port. Can you try again, but this time set .Values.jvb.publicIPs=["$IP"] or .Values.jvb.useNodeIP=true?
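
In values form, either option would look roughly like the sketch below (the IP address is a placeholder, not a recommendation; use whatever address clients can actually reach):

jvb:
  # Option 1: announce a fixed, reachable public IP to clients
  publicIPs:
    - 203.0.113.10   # placeholder address
  # Option 2: or announce the Kubernetes node IP instead
  # useNodeIP: true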
