
jitsi meet not working using helm after joining two or more people #119

Open
sushantpatil12 opened this issue Jul 8, 2024 · 4 comments
sushantpatil12 commented Jul 8, 2024

Title: Audio and Video Not Working for Third Participant in Jitsi Meet Deployment via Helm

Description:
We have deployed Jitsi Meet using Helm. All the required services, such as jitsi-web and jvb, have been created along with their pods. We exposed the web service via an Ingress and can create meetings for two participants without any issues. However, when a third participant joins the meeting, audio and video stop working.

Below is the values.yml file we used for the deployment. Please advise if any changes need to be made to the values.yml to resolve this issue.

```yaml
global:
  clusterDomain: cluster.local
  podLabels: {}
  podAnnotations: {}

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

enableAuth: false
enableGuests: true
publicURL: "jitsiks.example.com"

tz: Asia/Kolkata

image:
  pullPolicy: IfNotPresent

websockets:
  colibri:
    enabled: true
  xmpp:
    enabled: true

web:
  replicaCount: 1
  image:
    repository: jitsi/web
    tag: 'stable-9584-1'

  extraEnvs:
    - name: ENABLE_P2P
      value: "0"

  service:
    type: ClusterIP
    port: 80
    externalIPs: []

  ingress:
    enabled: false
    annotations: {}
    hosts:
      - host: jitsi.local
        paths: ['/']
    tls: []

  httpRedirect: false
  httpsEnabled: false

  livenessProbe:
    httpGet:
      path: /
      port: 80
  readinessProbe:
    httpGet:
      path: /
      port: 80

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources:
    limits:
      cpu: 1
      memory: 2096Mi
    requests:
      cpu: 1
      memory: 1096Mi

  nodeSelector: {}
  tolerations: []
  affinity: {}

jicofo:
  replicaCount: 1
  image:
    repository: jitsi/jicofo
    tag: 'stable-9584-1'

  xmpp:
    password:
    componentSecret:

  livenessProbe:
    tcpSocket:
      port: 8888
  readinessProbe:
    tcpSocket:
      port: 8888

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: {}

jvb:
  replicaCount: 1
  image:
    repository: jitsi/jvb
    tag: 'stable-9584-1'

  xmpp:
    user: jvb
    password:

  stunServers: 'meet-jit-si-turnrelay.jitsi.net:443'
  useHostPort: true
  useHostNetwork: true

  UDPPort: 10000

  service:
    type: NodePort
    annotations: {}

  breweryMuc: jvbbrewery

  livenessProbe:
    httpGet:
      path: /about/health
      port: 8080
  readinessProbe:
    httpGet:
      path: /about/health
      port: 8080

prosody:
  enabled: true
  persistence:
    enabled: false
  server:
    extraEnvFrom:
      - secretRef:
          name: '{{ include "prosody.fullname" . }}-jicofo'
      - secretRef:
          name: '{{ include "prosody.fullname" . }}-jvb'
      - configMapRef:
          name: '{{ include "prosody.fullname" . }}-common'
      - configMapRef:
          name: '{{ include "prosody.fullname" . }}-jibri'
  image:
    repository: jitsi/prosody
    tag: 'stable-9584-1'
```

spijet commented Aug 7, 2024

Hello @sushantpatil12!

Can you please edit your original message and use triple backticks (```) around the values.yaml contents, so it doesn't get messed up by GitHub's markdown?


Tlmonko commented Aug 12, 2024

I have the same issue. Here is my values.yaml file:

```yaml
jitsi-meet:
  web:
    ingress:
      enabled: true
      ingressClassName: "traefik"
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
        kubernetes.io/ingress.provider: traefik
      hosts:
      - paths: ['/']

  jvb:
    publicIPs:
      - <MY_PUBLIC_IP>
    service:
      externalTrafficPolicy: ""
    stunServers: "meet-jit-si-turnrelay.jitsi.net:443,stun1.l.google.com:19302,stun2.l.google.com:19302,stun3.l.google.com:19302,stun4.l.google.com:19302"
```

I also tried adding these options:

```yaml
  websockets:
    colibri:
      enabled: true

    xmpp:
      enabled: true
```

But the issue was still present.

I also use a Traefik IngressRouteUDP:

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteUDP
metadata:
  name: jvb
  namespace: jitsi
spec:
  entryPoints:
    - jitsi-jvb
  routes:
    - services:
        - name: jitsi-jitsi-meet-jvb
          port: 10000
```
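One thing worth double-checking in a setup like this (a sketch, assuming Traefik's static-configuration file format): an IngressRouteUDP only receives traffic if the entryPoint it references is declared as a UDP listener in Traefik's static configuration, e.g.:

```yaml
# Traefik static configuration sketch; the entryPoint name must match
# the one referenced by the IngressRouteUDP ("jitsi-jvb" here).
entryPoints:
  jitsi-jvb:
    address: ":10000/udp"
```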

Here is my JVB pod logs:
log.txt


spijet commented Aug 29, 2024

Hello @Tlmonko!

Judging by the log you've attached, it looks like the JVB is unable to reach the client with UDP replies. Can you verify that Traefik really routes the UDP datagrams to JVB properly and sends the replies back to the user as well?

I had a similar problem when I used the ingress-nginx Ingress Controller with an extra config so it would proxy incoming UDP to the JVB service. As it turned out, nginx assumes that all UDP connections behave like DNS: by default it accepts one UDP packet from the user, forwards it to the JVB, then expects exactly one response packet from the JVB and forwards that back to the user. At the time I managed to work around the problem by changing an nginx configuration option so it would proxy at most 9999999 UDP response packets back to the user. It somewhat worked, but in the end I decided to cut out the middleman and exposed the JVB via useHostPort: true instead.
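For anyone hitting the same wall, the behaviour described above corresponds to the `proxy_responses` directive of nginx's stream module; a sketch (the upstream name and ports are illustrative, not taken from this thread):

```nginx
# Sketch of an nginx stream proxy in front of JVB's media port.
stream {
    server {
        listen 10000 udp;
        proxy_pass jvb.jitsi.svc.cluster.local:10000;  # upstream name assumed
        # proxy_responses limits how many response datagrams nginx relays
        # per client packet. A low limit suits request/reply protocols like
        # DNS, but cuts off a continuous RTP media stream; a very high value
        # (or no limit) lets the media keep flowing back to the client.
        proxy_responses 9999999;
    }
}
```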


spijet commented Aug 29, 2024

@sushantpatil12, can you please try deploying Jitsi Meet with these values for JVB?

```yaml
jvb:
  UDPPort: 32768
  stunServers: >-
    meet-jit-si-turnrelay.jitsi.net:443,stun1.l.google.com:19302,stun2.l.google.com:19302,stun3.l.google.com:19302,stun4.l.google.com:19302
  useHostPort: true
  useNodeIP: true
```

This way JVB's data port will be exposed via the hostPort feature, and JVB will auto-detect the node's IP address and announce it to the users.
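For reference, the resulting pod spec should then carry a hostPort mapping along these lines (a sketch using standard Kubernetes container-port fields; the port name is illustrative):

```yaml
ports:
  - name: jvb-udp        # name assumed for illustration
    containerPort: 32768
    hostPort: 32768      # bound directly on the node's interface
    protocol: UDP
```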
