Potential bug in entrypoint script #4251

Open
applike-ss opened this issue Dec 4, 2024 · 3 comments
Labels: bug (Something isn't working)

Comments
applike-ss (Contributor) commented Dec 4, 2024

Describe the bug
When I start my Dragonfly test container via the following command, the io threads are set correctly:

# set proactor threads to 1
docker run -it --rm ghcr.io/dragonflydb/dragonfly:v1.25.4-ubuntu --proactor_threads=1
# > I20241204 09:13:18.126736     1 proactor_pool.cc:147] Running 1 io threads

# set proactor threads to 2
docker run -it --rm ghcr.io/dragonflydb/dragonfly:v1.25.4-ubuntu --proactor_threads=2
# > I20241204 09:07:52.068485     1 proactor_pool.cc:147] Running 2 io threads

When I now try to achieve the same in my k3d test Kubernetes cluster, it always sets the proactor threads to the number of threads available to the pod. This would be fine if I didn't currently have another issue where limiting the pod resources isn't working correctly.

Snippet from the pod spec:

  containers:
    - name: dragonfly
      image: ghcr.io/dragonflydb/dragonfly:v1.25.4
      args:
        - '--proactor_threads=1'

I assume the entrypoint script could be handling things differently in Kubernetes, but I didn't dig down that road. I can't imagine any other reason for the difference so far.
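
To illustrate the hypothesis, here is a minimal sketch of how an entrypoint could clobber a user-supplied flag by deriving a thread count from the cgroup CPU limit. This is a hypothetical reconstruction for discussion, not the actual Dragonfly entrypoint; the cgroup v2 path and the flag ordering are assumptions:

#!/bin/sh
# Hypothetical entrypoint sketch: read the cgroup v2 CPU quota, which has
# the form "quota period" (e.g. "200000 100000") or "max 100000" when unlimited.
limit=$(cat /sys/fs/cgroup/cpu.max 2>/dev/null)
quota=${limit%% *}
period=${limit##* }
if [ "$quota" != "max" ] && [ -n "$quota" ]; then
    cpus=$(( (quota + period - 1) / period ))  # round quota/period up
    # appending after "$@" would override a user-supplied --proactor_threads
    exec dragonfly "$@" --proactor_threads="$cpus"
fi
exec dragonfly "$@"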

To Reproduce
Steps to reproduce the behavior:

  1. k3d cluster create
  2. Install the Dragonfly operator as per the documentation: kubectl apply -f https://raw.githubusercontent.com/dragonflydb/dragonfly-operator/main/manifests/dragonfly-operator.yaml
  3. Create a Dragonfly cluster via the Dragonfly resource with args passed
  4. See the log message showing a different number of proactor threads

Demo Resource:

apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: dragonfly-sample
  namespace: default
spec:
  image: ghcr.io/dragonflydb/dragonfly:v1.25.4
  args:
    - '--maxmemory=512M'
    - '--proactor_threads=1'
  replicas: 2
  resources:
    limits:
      cpu: 2
      memory: 768Mi
    requests:
      cpu: 2
      ephemeral-storage: 1Gi
      memory: 768Mi
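
To see the mismatch after applying the resource, checking the startup log is enough. A sketch of the verification steps (the pod name dragonfly-sample-0 is an assumption based on the usual StatefulSet naming):

# apply the resource and check the reported io thread count
kubectl apply -f dragonfly-sample.yaml
kubectl logs dragonfly-sample-0 | grep "io threads"
# expected with --proactor_threads=1: "Running 1 io threads"
# observed: a thread count matching the CPUs visible to the pod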

Expected behavior
Setting proactor threads (other flags may be affected as well?) should work the same regardless of whether Docker or k8s/containerd is used.

Environment (please complete the following information):

  • OS: Darwin + Linux
  • Kernel: 6.10.14-linuxkit
  • Containerized?: Docker and Kubernetes
  • Dragonfly Version: 1.25.4

Reproducible Code Snippet
See above under "To Reproduce".

Abhra303 (Contributor) commented Dec 6, 2024

I can reproduce it. I checked the args passed to the container and verified the command the container runs: dragonfly --logtostderr --alsologtostderr --primary_port_http_enabled=false --admin_port=9999 --admin_nopass --proactor_threads=1 --maxmemory=512M. Everything looks OK on the k8s side. Since --proactor_threads is passed to dragonfly, I am not sure why dragonfly can't configure the threads, given that the standalone Docker container works. I also tried to run another dragonfly process in the k8s pod container with --proactor_threads (/usr/local/bin/dragonfly --port=6399 --alsologtostderr --proactor_threads=1). Same issue.

So, I don't think it's an entrypoint script issue. Maybe the k8s environment is somehow preventing dragonfly from configuring the threads. I need to dig deeper.
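
One way to narrow this down is to compare the CPU view inside the pod with what Dragonfly reports. A sketch (the pod name is an assumption, nproc must be present in the image, and the cpu.max path assumes cgroup v2):

# CPUs the container believes it has
kubectl exec -it dragonfly-sample-0 -- nproc
# CPU quota/period imposed by the resource limit (cgroup v2)
kubectl exec -it dragonfly-sample-0 -- cat /sys/fs/cgroup/cpu.max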

applike-ss (Contributor, Author) commented

Great, then I know it was not just a stupid mistake on my end.

Here's what I'm using to test:
k3d version v5.6.0
k3s version v1.27.4-k3s1 (default) - docker.io/rancher/k3s:v1.27.4-k3s1

applike-ss (Contributor, Author) commented

Out of curiosity, I also tried minikube v1.34.0 via the docker driver with Kubernetes v1.31.0 and got the same result as you. Same for k3d.
