my findings #38

Open
bdw617 opened this issue Jun 24, 2023 · 1 comment

Comments

bdw617 commented Jun 24, 2023

Hi!

I'm happy to contribute to this chart to keep it up to date, but I have some feedback from my experience with it! I am helping a friend get this deployed to AWS, and the AWS instructions aren't really designed for a small lab with single-digit users. (Also, I know Kubernetes way better than AWS :)) I'm going to see if I can run this on a fairly small single-node EKS cluster later; I'll update you on how that goes.

docs:

  1. update the docs to say that OMERO.web is not needed if you're only using the heavy client (OMERO.insight)
  2. of course, I didn't realize the username for the root password was root, so I couldn't get OMERO to work at first (oh wait, I'm an idiot)

To get this working on my PC (WSL2, with Docker running in WSL2, not Docker Desktop), I used k3d (if you're not using k3d yet, it's awesome).

To expose a single load-balancer port so it can be reached as localhost, you can do this:
k3d cluster create -p '4064:4064@loadbalancer' --k3s-arg='--disable=traefik@server:0'
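
For reference, a quick sketch of sanity-checking the cluster after that command; these are standard k3d/kubectl/docker commands, nothing specific to this chart:

  # Confirm the cluster exists and the single node is Ready
  k3d cluster list
  kubectl get nodes

  # The k3d load balancer is a Docker container; its published ports should
  # include 4064, which is what lets a client connect to localhost:4064
  docker ps --filter name=k3d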

I found that client affinity broke the service in k3d, so I'm not going to bother testing it further. We just don't need it, since we're not going to run multiple servers for now; we'll focus on backup and restore for DR (primarily the PVCs). This is what I set:

service:
  type: LoadBalancer
  affinity:
    enabled: false
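
Roughly how I installed it with those overrides, as a sketch; the chart path and release name below are placeholders for wherever you have the chart checked out:

  # Install from a local checkout of the chart, with the overrides above saved as values.yaml
  # (adjust the ./omero-server path and the release name to match your setup)
  helm install omero ./omero-server -f values.yaml

  # Watch the pods come up; the LoadBalancer service should then be reachable
  # from OMERO.insight at localhost:4064 via the k3d port mapping
  kubectl get pods -w
  kubectl get svc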

I'm not really sure there's a lot of demand for this, but I'm allowed to contribute back to open source as part of my day job and could adopt this chart. Please drop me a note if you'd like me to contribute some PRs.

manics (Owner) commented Feb 1, 2024

Sorry for the very late reply! Thanks for your interest and feedback.

I'm happy to receive contributions. However, I'd like to keep this chart as minimal and focused as possible, since OMERO and Kubernetes already have a lot of documentation, and from past experience trying to keep detailed documentation of external systems rapidly ends up with things going out of date.

I think it's fine to link to relevant external documentation though. For example

> update the docs to say that OMERO.web is not needed if you're only using the heavy client (OMERO.insight)

If there's relevant official OMERO documentation we could link out to it; if there isn't, then maybe you could open an issue or PR on the OMERO docs repo?

> k3d cluster create -p '4064:4064@loadbalancer' --k3s-arg='--disable=traefik@server:0'

The danger of going down the route of configuring individual on-prem K8s distributions is that there are so many! If there were just one it would be reasonable, but the number keeps growing. Minikube used to be pretty standard, then there was Kind, followed by K3s and MicroK8s, and more recently k0s.

What might be nice, though, is if you did your own writeup that you maintain, and we could then link to that?

Once again sorry for being so slow in replying!
