
Commit

merge
simsine committed Jan 28, 2025
2 parents 6411312 + 3c0f2b8 commit 55c7892
Showing 7 changed files with 478 additions and 44 deletions.
8 changes: 8 additions & 0 deletions content/docs/educational_resources/_index.md
@@ -0,0 +1,8 @@
+++
title = "Educational resources"
description = "Beginner tutorials and educational material"
template = "docs/section.html"
sort_by = "weight"
weight = 1
draft = false
+++
354 changes: 354 additions & 0 deletions content/docs/educational_resources/terminal-intro.md

Large diffs are not rendered by default.

6 changes: 4 additions & 2 deletions content/docs/instrukser/ha_setup.md
@@ -11,7 +11,7 @@ weight = 3
First, in order to set up HA, you need to have the VM hosted on one of the
following servers:

- Bolivar
- Fergus
- Skaftetrynet
- Pluto

@@ -26,14 +26,16 @@ Other than that, the setup for creating a vm is the same as can be found
of the VM is stored on the pool "basseng"**
2. Create a replication job; it can be found under the `Replication` tab in the
   Proxmox UI after selecting the VM.
- Set the target to be the current server
- You need to create two of these, each pointing to one of the other servers
- Set the target to one of the other servers
- Schedule: \*/15 (replication every 15 minutes; it only transfers changes, so
  it won't bog down the network)
- Enabled: checked
- Everything else default
3. Navigate to the `Datacenter` tab at the top of the tree structure on the
   left, then click `HA`.
4. Then fill it in as follows:
- Group: Main
- VM: Select the VM you want to enable HA for
- The rest default
5. After all these steps are complete, replication is set up! (A CLI sketch of
   the same setup follows below.)
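
The same setup can also be sketched from the Proxmox shell. This is only a
rough outline with assumed values (VM ID 123, currently hosted on Fergus,
lowercase cluster node names); the UI steps above remain the documented route.

```bash
# Assumed example: VM 123 currently runs on Fergus; replicate it to the two
# other HA-capable hosts. Run these on the node that hosts the VM.

# One replication job per target node, every 15 minutes (only changes are sent)
pvesr create-local-job 123-0 skaftetrynet --schedule '*/15'
pvesr create-local-job 123-1 pluto --schedule '*/15'
pvesr status                        # verify both jobs complete without errors

# Register the VM in the HA group "Main"
ha-manager add vm:123 --group Main
ha-manager status                   # the VM should be listed as managed
```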
38 changes: 24 additions & 14 deletions content/docs/instrukser/kubernetes.md
@@ -16,7 +16,8 @@ The goal is to make the services even more robust and easier to maintain. We
relevant to working life.

- https://docs.k3s.io/datastore/ha-embedded
- [ ] One/two HAProxy nodes that load-balance between the server nodes
- [x] One/two HAProxy nodes that load-balance between the server nodes (lb-1 &
  lb-2 on the letti & netti proxmox cluster)
- [x] Three server nodes (cluster controllers)
- Petter.fribyte.no
- Raptus.fribyte.no
@@ -25,47 +26,56 @@ relevant to working life.
- lille-hjelper-1.fribyte.no
- lille-hjelper-2.fribyte.no
- lille-hjelper-3.fribyte.no
- [ ] Let's Encrypt inside Kubernetes to issue certificates for the services
- [ ] Simple static Docker container example

## Cluster setup

[Install k3sup](https://github.com/alexellis/k3sup?tab=readme-ov-file#download-k3sup-tldr)
on Pluto, which has access to the NAT network.

Install the master and agent nodes with k3sup on Pluto:

[Based on this command](https://github.com/alexellis/k3sup?tab=readme-ov-file#create-a-multi-master-ha-setup-with-embedded-etcd)


```bash
export USER=fribyte
export K3S_VERSION="v1.31.2+k3s1"

# Tailscale IP of the HAProxy load balancer, needed to access kubectl over Tailscale
export HA_PROXY_TAILSCALE_IP=100.64.0.43

# Server nodes
export RAPTUS_IP=10.0.0.70
export PETTER_IP=10.0.0.71
export HUTRE_IP=10.0.0.72

# Worker/agent nodes
export LILLE_HJELPER_1_IP=10.0.0.80
export LILLE_HJELPER_2_IP=10.0.0.81
export LILLE_HJELPER_3_IP=10.0.0.82

# Master nodes

# Raptus
k3sup install --ip $RAPTUS_IP --user $USER --cluster --k3s-version $K3S_VERSION
# Petter
k3sup join --ip $PETTER_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --server --k3s-version $K3S_VERSION
# Hutre
k3sup join --ip $HUTRE_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --server --k3s-version $K3S_VERSION
k3sup install --ip $RAPTUS_IP --user $USER --cluster --k3s-version $K3S_VERSION --k3s-extra-args "--tls-san $HA_PROXY_TAILSCALE_IP"
k3sup join --ip $PETTER_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --server --k3s-version $K3S_VERSION --k3s-extra-args "--tls-san $HA_PROXY_TAILSCALE_IP"
k3sup join --ip $HUTRE_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --server --k3s-version $K3S_VERSION --k3s-extra-args "--tls-san $HA_PROXY_TAILSCALE_IP"

# Agent nodes

# Lille-Hjelper-1
k3sup join --ip $LILLE_HJELPER_1_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --k3s-version $K3S_VERSION
# Lille-Hjelper-2
k3sup join --ip $LILLE_HJELPER_2_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --k3s-version $K3S_VERSION
# Lille-Hjelper-3
k3sup join --ip $LILLE_HJELPER_3_IP --user $USER --server-user $USER --server-ip $RAPTUS_IP --k3s-version $K3S_VERSION


mv ./.kube/config ./.kube/config-$(date +%s) # Backup existing kubeconfig
mv kubeconfig ./.kube/config # WARNING - This will overwrite your existing kubeconfig
kubectl label node lille-hjelper-1 kubernetes.io/role=agent
kubectl label node lille-hjelper-2 kubernetes.io/role=agent
kubectl label node lille-hjelper-3 kubernetes.io/role=agent
```

You can now access the Kubernetes cluster from your local machine by copying
the `~/.kube/config` file from Pluto into your home directory on your local
machine.

- Replace `server: https://10.0.0.70:6443` with
  `server: https://100.64.0.43:6443`
- Confirm it is working using `kubectl get nodes` (see the sketch below)
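
A minimal sketch of those two steps, assuming Pluto is reachable as `pluto`
over SSH and that GNU sed is available locally:

```bash
# Copy the kubeconfig from Pluto (back up any existing one first)
mkdir -p ~/.kube
[ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config-$(date +%s)
scp fribyte@pluto:~/.kube/config ~/.kube/config

# Point kubectl at the load balancer's Tailscale IP instead of Raptus' NAT address
sed -i 's|https://10.0.0.70:6443|https://100.64.0.43:6443|' ~/.kube/config

kubectl get nodes   # should list the three server and three agent nodes
```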
23 changes: 14 additions & 9 deletions content/docs/instrukser/ny-vm.md
@@ -12,14 +12,15 @@ draft = false
If this VM is meant to have High availability, please read the setup for that
before starting. The setup can be found [here](../ha-setup)

1. Connect to WireGuard
1. Go to Proxmox [https://10.100.10.1:8006](https://10.100.10.1:8006)
1. Connect to Tailscale
1. Go to Proxmox
   [https://pluto.vpn.fribyte.no:8006](https://pluto.vpn.fribyte.no:8006)
   or alternatively:
   [https://proxmox.fribyte.no:8006](https://proxmox.fribyte.no:8006)
1. Clone the template `6000 (Clone me ubuntu)` on Skaftetrynet or
   `1000 (VM 1000)` on Fergus (right-click -> clone; see the CLI sketch after
   this list)
1. Choose a name, preferably the customer's name
1. Pick a free VM ID
1. Set mode to full clone
1. Set target storage to `basseng`
1. Click "Clone"
1. Username: fribyte
Expand All @@ -41,13 +42,17 @@ before starting. The setup can be found [here](../ha-setup)
1. Start the VM and wait for it to be ready. Check progress by clicking
   `Console` and watching it boot.

- There are often problems at this step because the template you copy from
  does not work as it should. Feel free to ask for help here.

1. Preferably add a DNS record so that you can use `ssh [email protected]`.
1. You should now be able to `ssh` into it via Skaftetrynet with
   `ssh {ip-addresse}` or `ssh {definert-navn}`
1. Register the server in our Tailscale network. See:
   [/docs/instrukser/tailscale](/docs/instrukser/tailscale)
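
The clone step can also be done from the Proxmox host's shell. A hypothetical
sketch (VM ID 123 and the name `kunde-vm` are example values; use the template
ID that exists on the host you clone on):

```bash
# Full clone of the Ubuntu template onto the "basseng" pool, then start it
qm clone 6000 123 --name kunde-vm --full --storage basseng
qm start 123
qm status 123   # should report "running" once the VM has booted
```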

## Register the VM in HA

If this VM is critical for a customer, it should be registered for High
Availability in our cluster. See
[/docs/instrukser/ha-setup](/docs/instrukser/ha-setup)

## Allocate more storage

55 changes: 55 additions & 0 deletions content/docs/instrukser/tailscale-setup.md
@@ -0,0 +1,55 @@
+++
title = "New VM Tailscale"
description = "Connect a new VM to the tailscale network"
template = "docs/page.html"
sort_by = "weight"
weight = 5
draft = false
+++

## How to connect a new VM to the tailscale network so we can ssh to it

1. Install tailscale following their official guide
- [https://tailscale.com/download](https://tailscale.com/download)
1. Then connect to the tailscale mesh VPN by running this command

```
sudo tailscale up --login-server https://headscale.fribyte.no --ssh --accept-dns=false
```

1. Copy the key from the output; it should look something like this:

```
mkey:27d04d3c1c5e9........
```

1. Then, while keeping that terminal open, ssh into the headscale VM:

```
ssh fribyte@headscale
```

1. Then run this command, where `<NAME>` is the name of the VM and
   `<Machine Key>` is the key you copied (`mkey:cccfdsdsfds.....`):

```
sudo ~/join.sh <NAME> <Machine Key>
```

1. Check that the VM is accessible over tailscale by running

```
ssh fribyte@<NAME>
```

## Empty output

If for some reason you get empty output when running `tailscale up`, you can
get the machine key from the journal log of the headscale VM instead. It can
be found with the command

```
sudo journalctl -xeu headscale.service
```

It should be in one of the most recent log entries.
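
A rough way to fish the key out of the journal, assuming the relevant log line
contains the `mkey:` prefix:

```bash
# Show recent headscale log lines and extract the last machine key seen
sudo journalctl -u headscale.service -n 200 --no-pager \
  | grep -o 'mkey:[A-Za-z0-9]*' | tail -n 1
```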
38 changes: 19 additions & 19 deletions content/docs/nettverk/nettverk-oversikt.md
@@ -103,22 +103,22 @@ This uses the subnet `10.0.0.0/24` for local addresses
- Gateway: 10.0.0.11
- DNS servers: 158.37.6.21, 158.37.6.22, 1.1.1.1, 1.0.0.1

| IPV4 | IPV6 | Name | Interface | Comment |
| --------- | ---- | --------------- | ---------------------- | -------------------- |
| 10.0.0.1 | | fw-1 (netti) | bge0 | LAN interface fw-1 |
| 10.0.0.2 | | fw-2 | bge0 | Lan interface fw-2 |
| 10.0.0.11 | | Gateway NAT | | Opnsense LAN gateway |
| 10.0.0.20 | | Letti | enp130s0f0 | Letti Proxmox Host |
| 10.0.0.21 | | Netti | enp132s0f0 | Netti Proxmox Host |
| 10.0.0.25 | | Pluto | enp7s0f1 | Netti Proxmox Host |
| 10.0.0.26 | | Fergus | enp7s0f1 | Netti Proxmox Host |
| 10.0.0.27 | | Skaftetrynet | eno1 | Netti Proxmox Host |
| 10.0.0.70 | | Raptus | vmbrNAT (pluto) | k3s Master |
| 10.0.0.71 | | Petter | vmbrNAT (Fergus) | k3s Master |
| 10.0.0.72 | | Hutre | vmbrNAT (Skaftetrynet) | k3s Master |
| 10.0.0.80 | | Lille-Hjelper-1 | vmbrNAT (Pluto) | k3s Slave |
| 10.0.0.81 | | Lille-Hjelper-2 | vmbrNAT (Fergus) | k3s Slave |
| 10.0.0.82 | | Lille-Hjelper-3 | vmbrNAT (Skaftetrynet) | k3s Slave |
| | | | | |
| | | | | |
| | | | | |
| IPV4 (1-254) | IPV6 | Name            | Interface              | Comment              |
| ------------ | ---- | --------------- | ---------------------- | -------------------- |
| 10.0.0.1 | | fw-1 (netti) | bge0 | LAN interface fw-1 |
| 10.0.0.2 | | fw-2 (letti) | bge0 | Lan interface fw-2 |
| 10.0.0.11 | | Gateway NAT | | Opnsense LAN gateway |
| 10.0.0.20 | | Letti | enp130s0f0 | Letti Proxmox Host |
| 10.0.0.21 | | Netti | enp132s0f0 | Netti Proxmox Host |
| 10.0.0.25 | | Pluto | enp7s0f1 | Netti Proxmox Host |
| 10.0.0.26 | | Fergus | enp7s0f1 | Netti Proxmox Host |
| 10.0.0.27 | | Skaftetrynet | eno1 | Netti Proxmox Host |
| 10.0.0.70 | | Raptus | vmbrNAT (pluto) | k3s Master |
| 10.0.0.71 | | Petter | vmbrNAT (Fergus) | k3s Master |
| 10.0.0.72 | | Hutre | vmbrNAT (Skaftetrynet) | k3s Master |
| 10.0.0.80 | | Lille-Hjelper-1 | vmbrNAT (Pluto) | k3s Slave |
| 10.0.0.81 | | Lille-Hjelper-2 | vmbrNAT (Fergus) | k3s Slave |
| 10.0.0.82 | | Lille-Hjelper-3 | vmbrNAT (Skaftetrynet) | k3s Slave |
| 10.0.0.100 | | lb-1 (netti) | vmbrNAT (netti) | NAT tcp proxy |
| 10.0.0.101 | | lb-2 (letti) | vmbrNAT (letti) | NAT tcp proxy |
| | | | | |
