
How to run Open edX on a local Kubernetes cluster? #4

Open
3 tasks
regisb opened this issue Feb 27, 2023 · 12 comments
Labels
discovery (Pre-work to determine if an idea is feasible) · feature 🌟 (new addition of functionality) · help wanted (Ready to be picked up by anyone in the community)

Comments

@regisb
Contributor

regisb commented Feb 27, 2023

Tutor makes it possible to run Open edX on a remote Kubernetes cluster: https://docs.tutor.overhang.io/k8s.html
But in such a setup, the Kubernetes cluster needs to expose public endpoints that the IDAs use to communicate with each other. As a consequence, tutor k8s launch will fail when deploying to a local Kubernetes cluster running on a laptop.

We should:

  • Provide instructions (or at least clear pointers) to run a local Kubernetes cluster using Minikube, k3s, MicroK8s, or any other lightweight Kubernetes distribution.
  • Identify the steps that fail during a typical deployment of Open edX with Tutor to that Kubernetes cluster.
  • Address those issues, either by making changes to Tutor or by adding instructions to the Tutor docs.
regisb added the discovery, feature 🌟, and help wanted labels on Feb 27, 2023
@Henrrypg

Henrrypg commented May 29, 2023

Hello @regisb, I was testing with Minikube for Olive and it appears to work fine: tutor k8s launch didn't fail. After running minikube tunnel and adding the CMS and LMS hostnames to /etc/hosts, pointing to the external IP of the caddy service, I can see the sites.
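
For reference, a rough sketch of that workflow (the caddy service name and openedx namespace are Tutor defaults; the hostnames and EXTERNAL-IP are placeholders):

minikube tunnel
kubectl get svc caddy -n openedx    # note the EXTERNAL-IP column
echo "EXTERNAL-IP local.overhang.io studio.local.overhang.io" | sudo tee -a /etc/hosts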


I can take this task and add docs

@regisb
Contributor Author

regisb commented May 30, 2023

That would be absolutely amazing! Looking forward to the PR.

I'm wondering if there is a way to avoid the manual changes to /etc/hosts, though?

@jmbowman

2U SRE also uses primarily minikube, although that's more due to a lack of alternatives when the choice was originally made than the result of a detailed comparison. I heard one downvote for k3s, for reasons not yet explained. Still waiting on input from a few people; I'll comment again if any useful insights come out of it.

@Henrrypg

@jmbowman Sure, thank you. I'll explore other alternatives before doing anything. I was trying to avoid the currently required change to /etc/hosts with Minikube, but without a good result so far (@regisb). I'll comment here when I have more info.

@jmbowman

Another vote came in for kind because it's "more robust with less memory footprint". A couple of comparisons that were fairly informative:

Generally what I'm seeing is "alternatives to minikube came up to address its limitations, and they've been stealing features from each other since". I think the current state is that minikube is still the most popular and is competitive in features, while some of the alternatives are a little more efficient in resource usage.

@feoh

feoh commented May 31, 2023

Another point in favor of minikube, IMO, is its very rich driver support.

Knowing that I can spin tutor up using my ProxMox servers as VM hosts and get all that centrally managed goodness is a big win in my book :)

MicroK8s also runs in a number of environments but the support seems a bit more hit and miss.

@regisb
Contributor Author

regisb commented Jun 1, 2023

Thanks for your input, y'all! It's very much appreciated. Please keep posting your opinions here :)

@jmbowman

jmbowman commented Jun 7, 2023

A visual of relative popularity over time of the options I've seen mentioned so far, courtesy of https://star-history.com/:

[star-history chart: Local_k8s_runtimes]

@Shubhamsingh1998

Hi, I have set things up this far and my services are running. I have Caddy serving as the load balancer. The issue is that my HTTP requests reach Caddy, but Caddy behaves as if it has no route for that traffic: when I use the IP I get a white page, and when I use the hostname I get a 502 Gateway error.

Can someone help with this issue?


@Henrrypg

Hello @Shubhamsingh1998.

I'll post the steps to run a local installation with Minikube here. For now, I have not been able to get things working with kind (I'm having problems exposing the Caddy service). A consolidated command sketch follows the list.

How to run Open edX on a local Kubernetes cluster with Minikube

  1. Install Minikube.
  2. Start a Minikube cluster (e.g. minikube start -p openedxcluster).
  3. Run tutor k8s launch and set local LMS and CMS domains (such as local.overhang.io).
  4. Run minikube tunnel -p openedxcluster.
  5. Copy the EXTERNAL-IP of the caddy service (by running kubectl get svc -n openedx).
  6. Edit /etc/hosts with:
EXTERNAL-IP LMS_HOST
EXTERNAL-IP CMS_HOST

For my example this was:

10.105.250.33   local.overhang.io
10.105.250.33   studio.local.overhang.io

  7. Enjoy.
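
Here is the same thing as a quick command sketch. The openedx namespace and caddy service name are Tutor defaults; the profile name, hostnames, and IP are just the examples above and must be replaced with the EXTERNAL-IP reported by kubectl:

minikube start -p openedxcluster
tutor k8s launch                       # choose the local.overhang.io domains when prompted
minikube tunnel -p openedxcluster      # run in a separate terminal; keeps running
kubectl get svc caddy -n openedx       # read the EXTERNAL-IP column
sudo tee -a /etc/hosts <<'EOF'
10.105.250.33   local.overhang.io
10.105.250.33   studio.local.overhang.io
EOF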


@Shubhamsingh1998

Shubhamsingh1998 commented Jun 21, 2023

Thank you @Henrrypg for this.

@regisb
Contributor Author

regisb commented Jun 22, 2023

I've done further research, with interesting results.

k3s: not usable out of the box

I was not able to get k3s to work with Tutor out of the box, because k3s does not apply fsGroup properties. I believe this is related to this upstream issue: k3s-io/k3s#6401

As a consequence of this issue, mysql volumes do not have the right permissions, and mysql fails to boot.
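
For anyone who wants to reproduce this, a hypothetical way to confirm the symptom, assuming Tutor's default openedx namespace and mysql deployment name:

kubectl -n openedx exec deploy/mysql -- ls -ldn /var/lib/mysql    # data dir ownership
kubectl -n openedx logs deploy/mysql --tail=50                    # permission errors on boot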

minikube: works great

Just run:

minikube start
tutor k8s launch

While Tutor is launching, open the k8s dashboard with:

minikube dashboard

To access the Open edX platform, see port forwarding below.

EDIT: as a side note, Docker images can be loaded in Minikube easily: https://minikube.sigs.k8s.io/docs/handbook/pushing/

eval $(minikube docker-env)
tutor images build all

kind: works great

Run:

kind create cluster
tutor k8s launch

And you're done. The only downside, compared to minikube, is that the kubernetes dashboard must be installed manually (not a big deal). See port forwarding below.

Also, Open edX seems super slow. I did not take the time to tweak the settings; maybe there is a way to allocate more CPU/memory.
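
For reference, installing the dashboard manually is roughly the following (the pinned version is just an example; check the dashboard releases for the current one):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/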

port forwarding

Once Open edX is running inside the k8s cluster, all we need to do is to forward localhost:80 to the caddy:80 service. Run:

kubectl --namespace=openedx port-forward svc/caddy 80

But because 80 is a privileged port, this command must be run as root. You must then point kubectl to the right kubeconfig:

sudo kubectl --kubeconfig=/home/YOURUSERNAME/.kube/config --namespace=openedx port-forward svc/caddy 80

And tada! You can now open http://local.overhang.io, http://studio.local.overhang.io, http://apps.local.overhang.io, etc. This method has the advantage that you don't need to add a long list of host names to /etc/hosts.

What's also great is that this port forwarding method should work for all k8s providers, not just kind or minikube. What's not so great is that, in practice, I've observed that the port forwarding does not work so well with kind. In some cases I was able to access the LMS but not the Studio, and vice versa. We might have to set up extra port mappings in kind.
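
If we go down that route, kind's extra port mappings are declared at cluster creation, roughly like this (sketch only; whether Tutor's caddy service is actually reachable through the mapped port would still need testing):

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
EOF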

remaining issues

I did not manage to get the MinIO plugin to work yet. This is because, during migrations, the LMS makes a call to files.local.overhang.io, which does not resolve correctly.

TODO

  • Fix the MinIO issue above
  • Add instructions to the docs to describe how to run Kubernetes locally. We don't need to recommend a specific k8s distribution, maybe just mention that k3s will not work out of the box.
  • Change tutor k8s launch to ask whether we should run in "local" mode. In that case, set LMS_HOST=local.overhang.io and ENABLE_HTTPS=false.
