From 5fd5ae3c74dd5534ffb9a52efd23da406277bc7b Mon Sep 17 00:00:00 2001 From: Kevin Hu Date: Tue, 3 Sep 2024 15:04:23 -0700 Subject: [PATCH] Deployed 19c2b3b with MkDocs version: 1.6.0

Custom Caddy Lego - Documentation

Custom Caddy Lego

https://github.com/vttc08/caddy-lego
A customized Caddy Docker container with Dynu DNS support for wildcard certificates.

Install

Create a Docker network dedicated to publicly accessible containers.

docker network create public --subnet 172.80.0.0/16
 

  • the Caddy container will have the IP address 172.80.44.3

    services:
       caddy:
         image: vttc08/caddy
    ...
                     reverse_proxy mynginx:80
             }
     }
    • start with *.website to indicate a wildcard
    • the tls block uses dynu
    • declare a @web host matcher with the subdomain name
      • this is later used in handle @web
      • use the reverse_proxy directive to define the port to be reverse proxied
        With this method, only Docker containers that are in the same public Docker network can be reverse proxied, via the internal port and the container name. Tailscale IP entries should also work.
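Putting the bullets above together, a minimal Caddyfile sketch might look like the following. This is an assumption-laden illustration, not the exact file from the repo: the domain, the `DYNU_API_TOKEN` variable name, and the `mynginx` upstream are placeholders, and the `dns dynu <token>` form follows the usual caddy-dns module convention.

```Caddyfile
# Sketch only: domain, token variable and upstream are placeholders
*.website.com {
    tls {
        dns dynu {$DYNU_API_TOKEN}
    }
    @web host web.website.com
    handle @web {
        reverse_proxy mynginx:80
    }
}
```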

    Environment Variables

    The previous code block already utilizes environment variables. The syntax is {$NAME}.
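For illustration, the variables consumed with {$NAME} can be set on the Caddy container in compose. The variable names below are examples, not ones mandated by the image:

```yaml
# docker-compose.yml fragment -- variable names are illustrative
services:
  caddy:
    image: vttc08/caddy
    environment:
      - MY_DOMAIN=example.duckdns.org   # used in the Caddyfile as {$MY_DOMAIN}
      - WHITELIST=192.168.1.0/24        # used as {$WHITELIST} in the whitelisting block
```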

    Whitelisting

                    @blocked not remote_ip {$WHITELIST}
                    respond @blocked "Unauthorized" 403

    This responds with 403 Unauthorized to any IP address not in the whitelist.

    Comments

Home

Recent Updates
    • Custom Caddy Lego
    • Basic Server Setup, Caddy, Docker, Tailscale
    • Debian-Based Server Setup
    • RuTorrent
    • Tunneling Basic Services (Jellyfin, Web) with Caddy and Tailscale
    • JDownloader
    • Fireshare
    • Audiobookshelf
    • Jellystat
    • Bluemap
Mkdocs

Mkdocs Gotchas
    • yaml highlighting is broken with mdx-breakless-lists
    • when using heading #, if there are no line breaks between headings, any list that comes after the second heading's content will not be rendered properly, even with mdx-breakless-lists
    • furthermore, a list placed right after a yaml code block will also not be rendered correctly
    • when referencing a subheading in another file, mkdocs uses [](file.md#heading-with-space) while obsidian uses [](file.md#heading%20with%20space)
    • before switching from lists back to normal content, a line break is needed, otherwise the following text is rendered with an indent
    • mkdocs subheading anchors [](#subheadings) must be in lower case
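As a concrete sketch of the blank-line gotchas above, a list only renders reliably when separated from the surrounding content:

```markdown
## Second Heading

Content under the second heading.

- this list renders properly because of the blank lines above

A blank line is also needed here, before returning to normal text.
```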
Admonition/Callouts

Mkdocs native callout

    callout content mkdocs

    Nested

    Nesting

    • ??? is also valid syntax for mkdocs
    • ???+ makes the callout collapsible and opens by default, while ???- makes it closed by default
```
!!! notes "Title"
    content
```
      Obsidian callouts require the plugin mkdocs-callouts
    Obsidian Native Callout

    Callout content mkdocs

    Nested callout

    callout

```
> [!notes]+/- Callout title
> Callout content
```
    • obsidian callout syntax follows the same +/- convention for collapsing; it is inserted after the brackets

    Available callouts include notes, info, warning, danger, success, failure, example, abstract, tip, question, bug.

Keys, Caret, Mark, Tilde

    Keys ++ctrl+alt+plus++ renders as Ctrl+Alt+; mark produces highlighting; tilde produces strikethrough

Tabbed Content

    Tab 1 content mkdocs Second line here.

    Tab 2 content

```
=== "Tab Name"
    Tab content
```

    • not supported in obsidian
attr_list

    Fancy Buttons mkdocs [button text](link.md){ .md-button } Tooltip I'm a tooltip that you can hover or click. [tooltip](https://link "hover text") Annotation I'm an annotation, but you need to click the plus icon (1) to show. (2)

    1. annotation 1
    2. annotation 2
```
Annotation location 1 (1), location (2)
{ .annotate }
1. annotation text to be shown
```

    Footnote Insert a footnote like [^1] 1

    • for inserting a footnote, use [^1]
    • [^1]: at the end of the page explains the footnote; not supported in obsidian
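A small sketch of the footnote syntax described above:

```markdown
A claim that needs a source.[^1]

[^1]: The footnote text, shown at the bottom of the rendered page.
```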
Code Highlighting

```python
from python import python
python.run(arg1=123, arg2="mystr")[2]
```

```bash
#!/bin/bash
var="myvar"
echo $var+3
```

```yaml
# yaml highlighting has to be `yaml` not `yml` and it's broken
---
version: "2.1"
services:
  clarkson:
    image: lscr.io/linuxserver/clarkson
    container_name: clarkson
    environment:
      - PUID=1000
      - PGID=1000
    ports:
      - 3000:3000
    restart: unless-stopped
```
    1. explaining the footnote. ↩

Basic Server Setup, Caddy, Docker, Tailscale

Basics

Creating the VM in Oracle Cloud
    1. Go to instances, new instance.
    2. Select the Always Free image, ARM or x86, recommended 4GB RAM.
    3. Choose Ubuntu image.
    4. Download the SSH key and name it accordingly.
SSH Keys

    Using PuTTYgen:

    • Place the key in ./ssh/openssh_keys
    • Open PuTTYgen, Conversions -> Import key
    • Save the key as a .ppk file in the root folder of ./ssh

    PuTTY

    • Grab the IP address from the cloud console
    • Give it a name in Saved Sessions
    • Go to Behaviour and choose the desired options
    • Under Data, make sure Terminal-type string is xterm-256color
    • Under Terminal -> Features, check "disable application keypad mode" to fix issues with nano
    • The private key needs to be loaded in Connection -> SSH -> Auth -> Credentials

    To get the IP address of the VPS at any time

```bash
curl ifconfig.me
```

    Useful packages to install

```bash
htop iotop iftop fio curl gnupg wget neofetch ca-certificates lsb-release fzf screen firewalld net-tools bash-completion
```

Docker

    https://docs.docker.com/engine/install/ubuntu/

```bash
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose
```

```bash
sudo groupadd docker
sudo usermod -aG docker ubuntu
newgrp docker # activate the docker group immediately
```

    The machine needs to be rebooted from the Oracle Cloud console to finish the installation.

Caddy

Docker Version Install

    Detailed information on installing Caddy has moved to caddy. If Nginx is installed alongside Caddy, Nginx needs to be changed to listen on port 81 instead.

```bash
sudo nano /etc/nginx/sites-enabled/default
```

    • change the server block's listen directive from 80 to 81

```bash
sudo service nginx restart
```
Port Forwarding

    On the Oracle Cloud side, log in and go to Virtual Cloud Networks, click the one that's available, then the default subnet; this brings up the Security Lists.

    • this is an example for the SSH port; configure via Add Ingress Rules and add the ports accordingly; it's also possible to allow everything and install a firewall in the OS itself

    On the Linux machine, either use iptables or firewall-cmd.

    Firewall-cmd (recommended)

```bash
sudo firewall-cmd --zone=public --add-port 19132/tcp --permanent
sudo firewall-cmd --zone=public --add-port 19132/udp --permanent
sudo firewall-cmd --zone=public --add-port 25565/tcp --permanent
sudo firewall-cmd --zone=public --add-port 25565/udp --permanent
sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
sudo firewall-cmd --zone=public --add-port 443/tcp --permanent
sudo firewall-cmd --zone=public --add-port 5800/tcp --permanent
sudo firewall-cmd --reload
```

    iptables

```bash
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 25565 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 19132 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p udp --dport 25565 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p udp --dport 19132 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p udp --dport 51820 -j ACCEPT
sudo netfilter-persistent save
```
Troubleshooting

    For firewall-cmd, use this command to check all open ports.

```bash
sudo firewall-cmd --list-all
```

    Using netstat, optionally piping to grep:

```bash
netstat -tln
# | grep 8080 etc...
```
Tailscale

    Installation and setup of basic services is covered in tunneling basic services. This section covers usage such as exit nodes and subnet routes.

Exit Node/Subnet Routes

    First, IP forwarding needs to be enabled.

```bash
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
```

    When used with firewalld, additional configuration such as masquerading is needed.

```bash
sudo firewall-cmd --add-masquerade --zone=public --permanent
sudo firewall-cmd --add-interface=tailscale0 --zone=trusted --permanent
sudo firewall-cmd --reload
```

    Basic command to advertise as an exit node and advertise subnet routes:

```bash
sudo tailscale up --advertise-exit-node --advertise-routes=10.10.120.0/24
```

    When connecting Tailscale from the CLI, an additional argument is needed to accept routes (the command below also activates the exit node):

```bash
sudo tailscale up --advertise-exit-node --accept-routes
```

    To enable these features, go to the admin console, open each machine's settings, Edit Route Settings, and enable exit node or subnet routes.

Advanced

    Tunneling Jellyfin and other web services with tailscale and caddy

    Minecraft Tunneling

Archived

    Basic Setup + Docker

    1. Install the Caddy web server (a simple-to-use reverse proxy): lightweight, easy, and no need for Docker. (Nginx is also a good candidate for a reverse proxy, as its install commands are easy to memorize and do not require consulting documentation sites; however, the Nginx syntax is considerably more complex than Caddy's and is not as easily memorized.)

    https://caddyserver.com/docs/install#debian-ubuntu-raspbian

```bash
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy net-tools
# net-tools is a good utility; optionally install firewalld or nginx
# sudo apt install firewalld nginx
```

    Basic Caddy Syntax (if applicable). If the server being set up or restored needs a functional service like bookstack or uptime-kuma, a reverse proxy is needed.

```bash
sudo nano /etc/caddy/Caddyfile
```

```Caddyfile
{
    email weebly2x10@gmail.com
}

your-uptime-kuma.yoursubdomain.duckdns.org {
        reverse_proxy http://127.0.0.1:3001
}

wiki.yoursubdomain.duckdns.org {
        reverse_proxy http://127.0.0.1:6975
}
```
JDownloader

Basic Setup

Configuring JDownloader
    • Go to the JDownloader WebUI
    • Go to Settings
    • Under General, change the max number of downloads (2) and downloads per hoster (1) to minimize issues
    • Go to MyJDownloader and configure the MyJDownloader account
    • Go to extension modules, install and enable "folderwatch"

    The configuration of JDownloader is complete, and it should appear and be functional in the WebUI. Advanced JDownloader documentation will be covered in detail in another section. It is recommended to close port 5800 after configuring to prevent others from accessing it.

    This section is obsolete now that UHDMV has shut down, and there is little point in setting up multiple automated JDownloader servers on a VPS.

Settings for JDownloader

    Debloat settings: https://rentry.org/jdownloader2

    Advanced Settings:

    • GraphicalUserInterfaceSettings: Banner -> disable
    • GraphicalUserInterfaceSettings: Premium Alert Task Column -> disable
    • GraphicalUserInterfaceSettings: Premium Alert Speed Column -> disable
    • GraphicalUserInterfaceSettings: Premium Alert ETA Column -> disable
    • GraphicalUserInterfaceSettings: Special Deal Oboom Dialog Visible On Startup -> disable
    • GraphicalUserInterfaceSettings: Special Deals -> disable
    • GraphicalUserInterfaceSettings: Donate Button State -> Hidden (automode)

Theming

    GraphicalUserInterfaceSettings: Look And Feel Theme -> BLACK_EYE. For colors, use LAFSettings: Color For ...

    • Panel Background, Header Background and Alternate Row Background - #ff222222
    • Selected Rows Background - #ff666666
    • Package Row Background - #ff333333
    • Mouse Over Row Background - #ff666666
    • Panel Header Foreground, Tooltip Foreground, Selected Rows Foreground, Package Row Foreground, Mouse Over Row Foreground, Alternate Row Foreground, Account Temp Error Row Foreground, Account Error Row Foreground - #ffffffff
      • basically, when searching for "color fore", change all the black values to white, except blue colors and error colors
    • Enabled Text Color, Speed Meter Text, Speed Meter Average Text, Config Panel Description Text, Config Header Text Color - #ffffffff
    • Disabled Text Color - #ff666666
      • basically, when searching for "color text", change all to white except disabled text
Tunneling Basic Services (Jellyfin, Web) with Caddy and Tailscale

    This procedure is not reproducible yet; rigorous testing is still required before it is fully documented. Here are the known procedures.

    The purpose is to tunnel normal web traffic or network-intensive traffic such as Jellyfin when faced with CG-NAT or similar situations (in this case, a locked-down dorm internet connection), and also to configure hardware transcoding (in this case NVENC, with Intel QSV planned) to mitigate limitations with Canadian ISP(s).

Jellyfin

Install

    https://jellyfin.org/downloads/server Download and run the server installer. Configure Jellyfin to your liking.

Tailscale

Windows

    https://tailscale.com/download/windows Download, install and login.

Linux

```bash
curl -fsSL https://tailscale.com/install.sh | sh
```

```bash
sudo tailscale up
```

    All Tailscale management is done in the WebUI.

    The Windows client is given a Tailscale network IP address in the 100.x.y.z range. Check that the Windows client is pingable from the server.

```bash
ping 100.x.y.z # e.g. 100.79.28.31
```

    Check that Jellyfin is running and tunneled properly from the Oracle Cloud VPS. curl should return the web page HTML rather than an "unable to resolve host" error or similar.

```bash
curl http://100.x.y.z:8096
```
Reverse Proxy

    basic-server-setup-caddy-docker-tailscale

    Caddy installation and syntax can be found on this page. Replace 127.0.0.1 with the Tailscale IP address.

```Caddyfile
{
    email weebly2x10@gmail.com
}

movies.yoursubdomain.duckdns.org {
        reverse_proxy http://100.x.y.z:8096
}
```

    It is possible to use the root domain (yoursub.duckdns.org) or a subdomain (movies.yoursub.duckdns.org) for Jellyfin. After configuring the Caddyfile, reload Caddy.

```bash
sudo systemctl reload caddy
```

    Use netstat to check that ports 80 and 443 are being listened on. Make sure the ports are forwarded for the Oracle VPS.

    Other Services

    Follow the same syntax as the Caddyfile provided; if the root domain is used for Jellyfin, then a subdomain must be used for each other service.

    Results

    Inconclusive yet, more testing required.

Tunneling Minecraft Server (TCP only) with Nginx

    Procedure not reproducible yet, will be documented later.

Demucs Nvidia

    Demucs is a music source separation tool that has potential for a karaoke setup.

    https://github.com/facebookresearch/demucs

    https://www.youtube.com/watch?v=9QnFMKWEFcI&t=585s

    https://docs.google.com/document/d/1XMmLrz-Tct1Hdb_PatcwEeBrV9Wrt15wHB1xhkB2oiY/edit

    Installation on PC with Nvidia

    1. Firstly install Anaconda. Download Anaconda for Windows https://www.anaconda.com/products/distribution
    2. Install PyTorch. https://pytorch.org/get-started/locally/. Select the correct version of pytorch.
    3. Install ffmpeg. https://www.gyan.dev/ffmpeg/builds/

    Demucs

    After installing the prerequisites, open the "Anaconda terminal" and type:

```bash
python.exe -m pip install -U demucs
```

```bash
pip install PySoundFile
```

    Running Demucs

```bash
demucs "C:\path\to\music\file.mp3"
```

    This will run Demucs with CUDA GPU acceleration; make sure to put the path in double quotes. The extracted files will be found in a separated folder under the directory where the command was run, e.g. with the default Anaconda prompt starting in ~, the output goes to ~/separated.

01 Docker Infrastructure

Filesystem

Compose

    All docker-compose.yml files are stored in the ~/docker folder, so by default the containers end up in the docker_default network.

    • by default for newly created apps, a new folder is created and a docker-compose.yml is created there for testing
      • once app testing is complete, the compose file can be moved to the docker root folder if appropriate, or remain where it is
    • some apps can be grouped together, and these compose files live in the root docker folder, such as media.yml and network.yml; grouping allows multiple services to be managed by a single compose file. Reasons for grouping include:
      • the apps share common properties, such as the arrs apps
      • it is preferable for the apps to live in the same network, eg. teslamate
      • a large app requires multiple containers, eg. frontend, mysql etc.
      • the apps share a similar or the same category; for example, qBittorrent and nzbget can be put together in downloader.yml even though they have no common properties and do not require the same networking
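The grouping idea can be sketched as a single compose file managing two downloaders. This is an illustrative fragment, not the actual downloader.yml (the linuxserver images are real, but the paths are assumptions based on the conventions above):

```yaml
# ~/docker/downloader.yml -- two apps in the same category, one compose file
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    volumes:
      - ~/docker/qbittorrent:/config   # app config under ~/docker/[app]
    restart: unless-stopped
  nzbget:
    image: lscr.io/linuxserver/nzbget
    volumes:
      - ~/docker/nzbget:/config
    restart: unless-stopped
```

Both services are then started or stopped together with `docker compose -f downloader.yml up -d`.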
Storage

    The storage used for all containers is bind mounts.

    • application configs are stored in ~/docker/[app]
      • if an app has multiple components needing persistence (eg. an app with a database plus helpers), folders are created such as ~/docker/[app]/postgres etc.
    • apps that also store non-config data (such as music, documents etc.) without using a lot of space can bind mount /mnt/nvme/share (a directory on the local or another SSD) for fast data access without spinning up the HDD
    • exceptions are Home Assistant and its related home automation containers, which are stored at /srv/homeassistant
Backup

    The entire docker root folder is copied to an NFS share on another computer, with the exception of Minecraft and Home Assistant, for which specialized methods are used.

Network

    With docker-compose, a new network is created named after the folder the compose file is located in; while it's possible to change networks, it is not straightforward, so there is no point in manually defining networks unless required.

    Public 172.80.0.0/16 - bridge network for public-facing applications behind the reverse proxy; this way, when configuring Nginx Proxy Manager, all it needs is container_name:80 rather than an IP address.

    • Nginx Proxy Manager - 172.80.44.3
    • Other containers use Docker DHCP to get an address
    • Containers that need to be public facing can attach to this network

    Media 172.96.0.0/16 - bridge network for arrs, downloader and management applications, for easy interconnection when configuring

    Minecraft 172.255.255.0/24 - bridge network for Minecraft-related containers

    • Minecraft server (mcserver) - 172.255.255.65
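Attaching a container to a pre-created network with a static address can be sketched in compose as below. This is an illustrative fragment, assuming the `public` network was created externally with the 172.80.0.0/16 subnet as shown earlier:

```yaml
# fragment: join the pre-created "public" network with a fixed IP
services:
  npm:
    image: jc21/nginx-proxy-manager
    networks:
      public:
        ipv4_address: 172.80.44.3   # static address; other containers can use DHCP
networks:
  public:
    external: true   # created beforehand with `docker network create`
```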
Categories

    • Media Apps - apps related to media acquisition, curation and other functions/services for Jellyfin
    • Networking - reverse proxy, DNS, VPN and related services
    • Home Automation - home assistant and its associated functions
    • VNC - containers based on jlesage-vnc-apps or Linuxserver Kasm images, usually desktop apps run in a browser via noVNC
    • Management - tools for managing docker containers or the entire server
    • Games - game servers and associated tools
    • Filesharing - apps that share files to other clients
    • Documentation - notes and operation procedures for server infrastructure
    • Authentication - services that handle single sign-on (SSO) with users

Ratings

    The Docker app rating consists of a table that looks at each docker app and evaluates its configuration, deployment and usage against quality-of-life features such as easy backup/restore, migration, user mapping, timezone logs, single sign-on with multi-user support etc. These ratings will change as more testing is done.

    Docker Apps Rating

    | U/GID | TZ | SSO/Users | Existing FS | Portable | Subfolder |
    | ----- | -- | --------- | ----------- | -------- | --------- |
    | ❎    | ✅* | ❌🤵      | ✅          | ❌       | ❌        |

UGID

    The Docker container/application or stack supports user ID and group ID mapping and respects IDs matching the host system. For example, Linuxserver.io and jlesage containers are the gold standard.

    ✅ Natively Supported ❎ Supported 🟨 Usable ❌ Not Supported
    • All Linuxserver and jlesage containers, and projects built on their base images, use the environment variables PUID/PGID for mapping
    • Fully respects UID and GID mappings on the host; the app can access all bind-mounted files on the host with the respective permissions, without permission errors or app issues
    • All the files the app needs to write to the bind mount are written with the ID set in the environment variables and are accessible via anything, such as VSCode and other containers
    • Apps that don't have the environment variables above but still follow host user IDs and permissions when modifying files also get this rating, eg. Audiobookshelf, Navidrome
    • If the app requires multiple containers deployed as a stack, and the main app or the container that stores configuration/appdata fully supports it but other parts of the app do not, it gets a ✅* rating, eg. Bookstack
    • The container does not have these environment variables and by default creates files on the host with root:root permissions, but functions correctly
    • The container's permissions can be fixed simply with user: 1000:1001 in compose
    • After this fix, there should be no permission issues; the container functions without issues and creates files that are accessible via anything, eg. Authelia, Jellystat
    • The container does not support the environment variables, and with user: the functionality of the container breaks, has permission issues, or still writes files as root
    • However, the container does not write configuration data, or there is no need for shared access to its data
    • eg. a database application, or an app configured entirely via environment/labels
    • The container exhibits the symptoms of the Usable rating, but user: either breaks the container or still doesn't fix permissions
    • The container bind mounts configuration or shared data that needs to be accessible by other tools, and it needs constant chown -R to keep that access possible
    • Setting user: or chown-ing files to make them accessible to the host and other tools makes the container cease to function
    • Only named volumes can be used, not bind mounts
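The `user:` fix mentioned above can be sketched in compose like this. The image and mount path are illustrative placeholders, not the actual deployment:

```yaml
# fragment: force a container without PUID/PGID support to write as a host user
services:
  jellystat:
    image: cyfershepard/jellystat   # example of an app without PUID/PGID variables
    user: "1000:1001"               # host UID:GID that should own created files
    volumes:
      - ~/docker/jellystat:/data    # files here stay accessible to host tools
```

Whether this works depends on the app; per the ratings above, some containers break or keep writing as root when `user:` is set.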
TZ

    The container supports the standard timezone variable. All logs generated by the container follow the timezone specified by TZ or other supported environment variables. This is either ✅ or ❌.

SSO

    Users

    • 🤵: Only a single user/session is supported at a time
    • 👪: Multiple users are supported via SSO or internally

    Authelia

    Authelia is the SSO provider used for this setup; only support for and compatibility with it will be documented. Only the main app, via an exposed web interface, needs to support it; otherwise it's not applicable. If there are zero reasons to expose the app to the internet and have multiple local users, this is n/a.

    ✅ Natively Supported ❎ Supported 🟨 Usable ❌ Not Supported
    • App has OIDC support that works with a third-party provider, eg. Audiobookshelf, Portainer
    • App without advanced OIDC but with documented other ways to integrate SSO for users, eg. Filebrowser, Navidrome
    • The user signing in via SSO is mapped to an existing user with the same name, or the user is created if it does not exist
    • App is able to fully integrate with ALL third-party services or mobile/desktop apps flawlessly, even after installing SSO/2FA
    • Authelia whitelist rules can easily be created to restore full functionality of the app (eg. API, public portion) without compromising security where Authelia is needed
    • App does not provide native integration for third-party sign-in providers, but has an option to fully disable internal authentication in favor of Authelia, eg. Radarr, Nzbget
    • App does not have internal authentication, eg. Memos, jlesage VNC
    • By adding Authelia to add authentication or to replace internal authentication, the app is able to fully integrate with ALL third-party services or mobile/desktop apps flawlessly, even after installing SSO/2FA
    • Authelia whitelist rules can easily be created to restore full functionality of the app (eg. API, public portion) without compromising security where Authelia is needed
    • The above only applies to single-user apps; if a multi-user app does not natively support a third-party SSO provider, Authelia is unable to pass through the correct user
    • Apps that have removable authentication or no authentication, to which Authelia can be added
    • The only logical way to access the app is via a web browser, where Authelia is fully supported
    • Accessing the app via third-party services is restricted to LAN only or behind a VPN, where Authelia is not relevant, eg. Nginx Proxy Manager, Teslamate
    • The app has internal authentication that cannot be disabled or integrated with Authelia
    • After installing Authelia, the only way to use the app is via a web browser; third-party integrations and mobile/desktop apps no longer function even with whitelisting rules, eg. Jellyfin, Home Assistant
    • Using whitelist rules to restore functionality with third-party apps would compromise security where Authelia is needed
    • No workarounds exist to have both SSO and third-party integrations
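The whitelist rules discussed above correspond to Authelia's access control configuration. As a hedged sketch (the domain and API path are examples, not a real deployment):

```yaml
# Authelia configuration.yml fragment -- domain and resource pattern are examples
access_control:
  default_policy: deny
  rules:
    - domain: app.example.com
      resources:
        - "^/api/.*$"        # bypass SSO for the API so mobile apps keep working
      policy: bypass
    - domain: app.example.com
      policy: one_factor     # everything else goes through Authelia
```

Per the ratings above, this only restores full functionality when bypassing the API does not itself compromise the app's security.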
Existing-FS

    Existing filesystem structures: the app does not require a folder structure that only the app can use; it can use the existing structure as-is, allowing the user to keep their workflow when switching to this app. (This section is incomplete; more updates needed.)

    • config: the type of files that govern how an app behaves, eg. configuration.yaml, app.conf
    • media: files including videos, photos, documents or other files the user wants the app to manage

    ✅ Yes 🟨 Partially ❌ No
    • App works with a bind mount to a host path that other processes can also access, and the app does not conflict with those processes
    • App does not modify existing file structures or permissions
    • User is able to import/export/edit data stored in the app (both configs and media) freely with or without the app, eg. Jellyfin, Filebrowser
    • User is able to migrate relatively freely to a similar app
    • (To be updated)
    • App stores its data (both config and media) in an encrypted blob, a proprietary format, or an app-specific database only the app can read
    • App modifies the existing file structure in order to work, and the permissions it needs are incompatible with other workflows; refer to U/GID
    • The only way to import/export/edit data is via the app; it\u2019s difficult to use another workflow
    "},{"location":"Docker%20Apps/02-docker-ratings/#portable","title":"Portable","text":"

    The portability of the app refers to how easy it is to migrate, back up, and restore an app\u2019s config. If the frequency of backup/restore is irrelevant, or no persistent data is needed such that the app runs entirely via docker-compose, it\u2019d be n/a.

    \u2705 Yes\ud83d\udfe8 Partially\u274c No
    • The app will work on another machine simply by copying the bind mount to the new machine
    • If U/GID are not supported and a named volume is used, copying the volume with various tools will transfer the app to the new machine
    • If an app uses a database, it will still work after copying either the bind path or the volume to the new machine; if not, a repeatable and documented way to dump and import the database is provided so the app transfers smoothly
    • After the app is migrated, zero user intervention is needed and the app functions exactly the same
    • App does not work by simply copying over the persistent data, but only a quick user intervention is needed, eg. backup/restore file in the WebUI
    • App data migration will work, but might require complex scripts or other dependencies that make scripting harder
    • App cannot be migrated or restored by simply copying the files; the app stops working
    • The backup process is difficult and often fails
    • Even with a migration, heavy user intervention is needed for the app to function exactly the same, if it\u2019s possible at all
    "},{"location":"Docker%20Apps/02-docker-ratings/#mobile","title":"Mobile","text":"

    Mobile refers to the mobile apps section; this rating determines the quality of mobile integration (only Android tested), since a mobile app offers more functionality than a website.

    \u2705 Great\u2714 App Present/PWA\u274cNot Mobile Friendly
    • The app has a mobile app on the app store or as an APK, either from the developer itself or via viable, well-maintained third-party apps
    • The mobile app enhances the experience and offers better usability compared to a web browser
    • The mobile app offers deep integration with Android OS or other apps, with widgets, controls, and intents where necessary (eg. Audiobookshelf, Home Assistant, Jellyfin, share icon to and from the app)
    • The app\u2019s website has a mobile-friendly layout so a progressive web app can be used, and the web app offers equivalent functionality to its desktop counterpart
    • The app in question is basic and all its functions are supported via a website without deep system integration (eg. dashboard app for display only)
    • The app will be given a * rating if it does not have a mobile app or support PWA but is mobile friendly when opened in a traditional mobile browser
    • The app either does not have a mobile-friendly website/app, or its mobile counterpart is so unusable that a lot of desktop functionality is lost (eg. Grafana, webtop)
    "},{"location":"Docker%20Apps/bookstack/","title":"Bookstack","text":""},{"location":"Docker%20Apps/bookstack/#installation","title":"Installation","text":"

    Change port to 6975

    Add in docker-compose: restart: unless-stopped

    $docker directory = /home/docker .... etc

    Docker-Compose file reference

    https://github.com/solidnerd/docker-bookstack/blob/master/docker-compose.yml

    version: '2'\nservices:\n  mysql:\n    image: mysql:8.0\n    environment:\n\n    - MYSQL_ROOT_PASSWORD=secret\n    - MYSQL_DATABASE=bookstack\n    - MYSQL_USER=bookstack\n    - MYSQL_PASSWORD=secret\n    volumes:\n    - mysql-data:/var/lib/mysql\n    restart: unless-stopped\n\n  bookstack:\n    image: solidnerd/bookstack:22.10.2\n    depends_on:\n\n    - mysql\n    environment:\n    - DB_HOST=mysql:3306\n    - DB_DATABASE=bookstack\n    - DB_USERNAME=bookstack\n    - DB_PASSWORD=secret\n    #set the APP_ to the URL of bookstack without without a trailing slash APP_URL=https://example.com\n    - APP_URL=http://xxx.xxxmydomainxxx.duckdns.org\n    volumes:\n    - $docker/public-uploads:/var/www/bookstack/public/uploads\n    - $docker/storage-uploads:/var/www/bookstack/storage/uploads\n    ports:\n    - \"6975:8080\"\n    restart: unless-stopped\n

    Notice: The default login credentials for Bookstack are

    admin@admin.com

    password

    Permissions: remember to set write permission on the public-uploads folder so users can upload photos.
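A minimal sketch of granting that write access; the `$docker_dir` path is an assumption standing in for the `$docker` directory above, and ownership may additionally need `chown` to the container's web user depending on your setup:

```shell
# Hypothetical path; substitute your $docker directory.
docker_dir="${docker_dir:-$HOME/docker/bookstack}"
mkdir -p "$docker_dir/public-uploads"
# give owner and group read/write (and execute on directories) so uploads succeed
chmod -R ug+rwX "$docker_dir/public-uploads"
```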

    "},{"location":"Docker%20Apps/bookstack/#backup-and-restore","title":"Backup and Restore","text":"

    Files Backup:

    tar -czvf bookstack-files-backup.tar.gz public-uploads storage-uploads\n

    Restore:

    tar -xvzf bookstack-files-backup.tar.gz\n

    Database backup:

    sudo docker exec bookstack_mysql_1 /usr/bin/mysqldump -u root --password=secret bookstack > ./bookstack/bookstack_db.sql\n

    Restore:

    sudo docker exec -i bookstack_mysql_1 mysql -u root --password=secret bookstack < /$docker/bookstack/bookstack_db.sql\n
    • bookstack_mysql_1 is the container name
    • password is secret or the database password
    "},{"location":"Docker%20Apps/bookstack/#reverse-proxy","title":"Reverse Proxy","text":"

    Use subdomain in proxy manager.

    Backing Up and Restoring with LinuxServer.io container

    Due to limits of the Oracle Cloud free tier, the only ARM image is the LinuxServer.io container, and it is different from the solidnerd image.

    Docker-Compose file

    version: \"2\"\nservices:\n  bookstack:\n    image: lscr.io/linuxserver/bookstack\n    container_name: bookstack\n    environment:\n\n      - PUID=1001\n      - PGID=1001\n      - APP_URL=https://wiki.xxx.duckdns.org\n      - DB_HOST=bookstack_db\n      - DB_USER=bookstack\n      - DB_PASS=secret\n      - DB_DATABASE=bookstackapp\n    volumes:\n      - /home/ubuntu/bookstack:/config\n    ports:\n      - 6975:80\n    restart: unless-stopped\n    depends_on:\n      - bookstack_db\n\n  bookstack_db:\n    image: lscr.io/linuxserver/mariadb\n    container_name: bookstack_db\n    environment:\n\n      - PUID=1001\n      - PGID=1001\n      - MYSQL_ROOT_PASSWORD=secret\n      - TZ=Europe/London\n      - MYSQL_DATABASE=bookstackapp\n      - MYSQL_USER=bookstack\n      - MYSQL_PASSWORD=secret\n    volumes:\n      - /home/ubuntu/bookstack:/config\n    restart: unless-stopped\n

    Notice: In the Oracle Cloud free tier, the default ubuntu user is 1001, not 1000. The database name is bookstackapp; keep this in mind when executing the restore command. The folder structure is also different: in the solidnerd container the images are stored at /public-uploads, while in the LSIO container they are stored at /www/uploads.

    "},{"location":"Docker%20Apps/bookstack/#backing-up-from-home-pc","title":"Backing Up (from home PC)","text":"

    Images

    cd into /public-uploads and make a tar archive

    tar -czvf images.tar.gz images\n

    Backup the database

    sudo docker exec bookstack_mysql_1 /usr/bin/mysqldump -u root --password=secret bookstack > ./bookstack_db.sql\n

    Transfer to Oracle Cloud Server

    scp -i oracle-arm-2.key images.tar.gz bookstack_db.sql ubuntu@$IPADDR:/home/ubuntu/bookstack/www/uploads\n

    Take into consideration the location where the LSIO image stores the images.

    "},{"location":"Docker%20Apps/bookstack/#restore-into-oracle-cloud","title":"Restore (into Oracle Cloud)","text":"

    Images (/home/ubuntu/bookstack/www/uploads)

    tar -xvzf images.tar.gz\n

    Database

    The image URLs in the database still refer to the old server URL and need to be changed. The following command replaces the subdomain in the SQL dump.

    sed -i 's/wiki.$home.duckdns.org/wiki.$oracle.duckdns.org/g' bookstack_db.sql\n
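A quick sanity check of that substitution, using hypothetical literal domains in place of the placeholders:

```shell
# Create a tiny stand-in for the SQL dump and rewrite the subdomain in place.
dump=$(mktemp)
echo 'src="https://wiki.home.duckdns.org/uploads/img.png"' > "$dump"
sed -i 's/wiki\.home\.duckdns\.org/wiki.oracle.duckdns.org/g' "$dump"
grep -c 'wiki.oracle.duckdns.org' "$dump"   # prints 1
```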

    Restore the database.

    sudo docker exec -i bookstack_db mysql -u root --password=secret bookstackapp < /home/ubuntu/bookstack/www/uploads/bookstack_db.sql\n
    "},{"location":"Docker%20Apps/bookstack/#crontab","title":"Crontab","text":"

    On Home PC

    0 23 * * 2,5 /home/karis/bookstack.sh\n
    #!/bin/bash\n\ncd ~/docker/bookstack/public-uploads #location of bookstack public uploads\ntar -czvf images.tar.gz images\nsudo docker exec bookstack_mysql_1 /usr/bin/mysqldump -u root --password=secret bookstack > ./bookstack_db.sql\nscp -i oracle-arm-2.key images.tar.gz bookstack_db.sql ubuntu@$ORACLEIP:/home/ubuntu/bookstack/www/uploads\n

    Make sure to copy the oracle-arm-2.key to the appropriate location (~/docker/bookstack/public-uploads)

    Also make sure oracle-arm-2.key has the correct permissions (600), and change the permissions of the public-uploads folder to allow write access.
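ssh/scp will refuse a key whose permissions are too open; setting and verifying the mode in shell:

```shell
# Restrict the key to owner read/write only, then confirm the octal mode.
chmod 600 oracle-arm-2.key
stat -c %a oracle-arm-2.key   # prints 600
```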

    Do a backup sequence in crontab at 11pm every Tuesday and Friday.

    Oracle Cloud Server

    0 8 * * 3,6 /home/ubuntu/bookstack.sh\n
    #!/bin/bash\n\ncd ~/bookstack/www/uploads #directory where bookstack files scp from home are located\ntar -xvzf images.tar.gz\nsed -i 's/wiki.$homeip.duckdns.org/wiki.$oracle.duckdns.org/g' bookstack_db.sql\nsudo docker exec -i bookstack_db mysql -u root --password=secret bookstackapp < /home/ubuntu/bookstack/www/uploads/bookstack_db.sql\n

    Run the restore sequence after the backup, every Wednesday and Saturday at 8am (need to consider the time zone difference between Vancouver, Edmonton and Toronto, or whatever the time zone of the remote server is).

    "},{"location":"Docker%20Apps/ddns-update/","title":"Dynamic DNS Updater Docker","text":"

    Official Image: https://hub.docker.com/r/linuxserver/duckdns Custom Github Page: https://github.com/vttc08/docker-duckdns-dynu

    This is a docker container that automatically updates the public IPv4 address of the server every 5 minutes to the dynamic DNS services Dynu and DuckDNS. It is a fork of the LinuxServer DuckDNS container.

    "},{"location":"Docker%20Apps/ddns-update/#docker-compose","title":"Docker Compose","text":"
      services:\n      duckdns:\n        image: vttc08/docker-duckdns-dynu:latest\n        container_name: duckdns\n        env_file: ddns.env\n        environment:\n\n          - TZ=America/Vancouver\n          - PUID=1000\n          - PGID=1001\n        restart: unless-stopped\n

    These need to be filled in the ddns.env

    DYNU_HOST= # full name of dynu domains\nDYNU_PASS= # md5 hashed dynu login pass\nSUBDOMAINS= # DuckDNS domains without the duckdns.org part\nTOKEN= # DuckDNS token \n

    • token will be visible in DuckDNS dashboard
    • the Dynu pass is the same as the login password (MD5 hashed); alternatively, it is possible to create a dedicated password just for IP updates. MD5 generator:
      echo -n \"password\" | md5sum\n
    • when the IP is set to 10.0.0.0 in the Dynu update API, Dynu will automatically update it to the IP address making the request
    "},{"location":"Docker%20Apps/ddns-update/#other-usage","title":"Other Usage","text":"

    docker restart duckdns will manually run an IP update. docker exec -it duckdns /app/debug.sh runs the debug script (or other scripts); the debug script prints the IP addresses of the subdomains as resolved by Cloudflare.

    "},{"location":"Docker%20Apps/epic-games-free-games/","title":"Epic Games Free Games","text":"

    Automatically claim free games from Epic Games.

    https://hub.docker.com/r/charlocharlie/epicgames-freegames

    Config

    NEED TO CHANGE

    Email: email address

    Password: password

    Webhook URL: make a discord channel and click settings. Go to integrations, then webhook, copy webhook URL.

    Mentioned Users: right-click your profile and click Copy ID

    TOTP

    1. Go here to log in: https://www.epicgames.com/account/password and log in with your Epic Games account.
    2. Click \u201cenable authenticator app.\u201d
    3. In the section labeled \u201cmanual entry key,\u201d copy the key.
    4. Use your authenticator app to scan the QR code.
    5. Activate 2FA by completing the form and clicking activate.
    6. Once 2FA is enabled, use the key you copied as the value for the TOTP parameter.

    Docker

    docker run -d -v /home/karis/docker/epicgames:/usr/app/config:rw -p 3000:3000 -m 2g --name epicgames --restart unless-stopped charlocharlie/epicgames-freegames:latest\n

    Change the name of the container to a friendly name. Use restart unless-stopped so it restarts automatically.

    Copy and Paste

    The default json configuration is located at /home/karis/docker/epicgames or $HOME/docker/epicgames.

    Fix Login Issue Using Cookies

    https://store.epicgames.com/en-US/

    1. Visit this site and make sure it\u2019s logged in.
    2. Install this extension EditThisCookie https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg/related
    3. Open the extension and change the url to epicgames.com/id as in screenshot below
    4. Export the cookie

    1. Go to $HOME/docker/epicgames and create a new file email@gmail.com-cookies.json
    2. If the json file is already there, truncate it to size 0 (truncate --size 0)
    3. Paste the cookie value to the json file
    4. Restart container.
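Steps 1–3 above can be sketched as follows; the filename is an example placeholder for your account's email:

```shell
# Create (or empty) the cookies file before pasting the exported cookie value.
cookie_file="email@gmail.com-cookies.json"
touch "$cookie_file"
truncate --size 0 "$cookie_file"   # now 0 bytes, ready for the pasted JSON
wc -c < "$cookie_file"             # prints 0
```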

    Update

    docker pull charlocharlie/epicgames-freegames:latest\ndocker rm -f epicgames\ndocker images | grep epicgames\n# use docker rmi to remove the corresponding image \n# re-run the epicgames docker run command\n
    "},{"location":"Docker%20Apps/filebrowser/","title":"Filebrowser","text":"

    Filebrowser, a file manager in the web browser, on port 4455.

    Docker-compose deployment

    version: '3.9'\nservices:\n    filebrowser:\n        container_name: filebrowser\n        image: filebrowser/filebrowser\n        ports:\n\n            - '4455:80'\n        user: 1000:1000\n        restart: unless-stopped\n        volumes:\n            - '~/docker/filebrowser/.filebrowser.json:/.filebrowser.json'\n            - '~/docker/filebrowser/filebrowser.db:/database.db'\n            - '~/docker/filebrowser/branding:/branding'\n            - '~/docker:/srv/docker'\n            - '/mnt/data:/srv/data'\n            - '/mnt/nvme/share:/srv/nvme-share'\n

    The first 3 bind mounts are for the configuration of Filebrowser, eg. config, database and branding files. On first deployment, an empty database.db file needs to be created. The remaining bind mounts are for the folders that need to be accessed; those folders should be bound under /srv. Filebrowser by default creates a volume under /srv; in this setup, where folders are bind mounted to subfolders of /srv and nothing is bind mounted directly to it, Docker creates an anonymous volume just for /srv, which is unavoidable.

    This is the content of .filebrowser.json

    {\n    \"port\": 80,\n    \"baseURL\": \"\",\n    \"address\": \"\",\n    \"log\": \"stdout\",\n    \"database\": \"/database.db\",\n    \"root\": \"/srv\"\n  }\n
    "},{"location":"Docker%20Apps/filebrowser/#usershare","title":"User/Share","text":"

    The user and share management in Filebrowser is simple. Shares have an expiry time and can optionally have a password. The recipient can view and download files in the share but cannot upload.

    To create a new user, go to Settings -> User Management; add a user and password accordingly and give appropriate permissions. The scope is the root folder the user has access to: since the docker data folder is bound at /srv/docker and /srv is defined as the root folder in the config, the folder name to put in scope would be /docker. Only one scope is allowed.

    It is also possible to add rules to prevent user access to files within a scope. Under rules, enter the path relative to the scope; for example, /docker/minecraft/config would be /config

    "},{"location":"Docker%20Apps/filebrowser/#personalization","title":"Personalization","text":"

    Enable dark theme - Setting -> Global Settings -> Branding

    • also change the branding directory path to /branding which is bind mount in docker

    Under the branding folder, create a file custom.css which is used for CSS customization. Then create a folder img and place logo.svg in it for a custom icon. The icon is the same as egow entertainment and is stored in the OliveTin icon PSD file. Under the folder img, create a folder icons; use a favicon generator site to create an icon archive and put all the contents of that archive in the icons folder. The result should look like this.

    Reverse Proxy/Homepage

    Reverse proxying follows the normal procedure using NPM. To add a bookmark to a file location, use the browser\u2019s or Homepage\u2019s bookmark function.

    "},{"location":"Docker%20Apps/fireshare/","title":"Fireshare","text":"Docker Apps Rating U/GID TZ SSO/Users Existing FS Portable Subfolder Mobile \u274e \u2705* \u274c\ud83e\udd35 \u2705 \u2705 \u274c \u2714"},{"location":"Docker%20Apps/fireshare/#configuration","title":"Configuration","text":"
    services:\n\u00a0 fireshare:\n\u00a0 \u00a0 image: shaneisrael/fireshare:develop\n\u00a0 \u00a0 container_name: fireshare\n\u00a0 \u00a0 environment:\n\u00a0 \u00a0 \u00a0 - MINUTES_BETWEEN_VIDEO_SCAN=30\n\u00a0 \u00a0 \u00a0 - PUID=1000\n\u00a0 \u00a0 \u00a0 - PGID=1001\n\u00a0 \u00a0 env_file:\n\u00a0 \u00a0 \u00a0 - .env # admin password\n\u00a0 \u00a0 volumes:\n\u00a0 \u00a0 \u00a0 - ~/docker/fireshare/data:/data:rw\n\u00a0 \u00a0 \u00a0 - ~/docker/fireshare/processed:/processed:rw\n\u00a0 \u00a0 \u00a0 - /mnt/nvme/share/gaming:/videos:rw\n\u00a0 \u00a0 networks:\n\u00a0 \u00a0 \u00a0 public:\n\u00a0 \u00a0 ports:\n\u00a0 \u00a0 \u00a0 - 8080:80\n\u00a0 \u00a0 restart: unless-stopped\n\nnetworks:\n\u00a0 public:\n\u00a0 \u00a0 name: public\n\u00a0 \u00a0 external: true\n
    "},{"location":"Docker%20Apps/fireshare/#environments","title":"Environments","text":"

    Content of .env

    ADMIN_PASSWORD=\nDOMAIN=\n
    Set up the user and group IDs accordingly; more environment options are available at https://github.com/ShaneIsrael/fireshare/wiki/Fireshare-Configurables

    "},{"location":"Docker%20Apps/fireshare/#other","title":"Other","text":"

    The software can also be configured via config.json, located at /data/config.json; its configuration is the same as the WebUI. Default Video Privacy: false makes all videos publicly viewable without sharing manually, though public users cannot upload videos. Sharable Link Domain: the domain Fireshare appends when sharing files. Upload Folder: the folder that will be created in the /videos directory when a file is uploaded.

    "},{"location":"Docker%20Apps/fireshare/#usage","title":"Usage","text":"

    By default, anyone can view videos; the admin can share links, and the link will show a preview and be viewable in Discord. The admin can also upload directly in the web interface. All uploaded files are located in /videos/uploads.

    • when uploading files through the filesystem with a modification date changed via touch, the changed date will also be reflected in the app
    "},{"location":"Docker%20Apps/fireshare/#workflow","title":"Workflow","text":"

    https://github.com/vttc08/fireshare-import Refer to this GitHub repo for setup; the notes below are personal documentation.

    • setup the project directory into ~/Documents/Projects
    "},{"location":"Docker%20Apps/free-games-claimer/","title":"Free Games Claimer","text":"

    https://github.com/vogler/free-games-claimer

    This is the GitHub repo for the new and more advanced free games claimer, adopted after Epicgames FreeGames kept failing.

    "},{"location":"Docker%20Apps/free-games-claimer/#configuration","title":"Configuration","text":"

    Using Docker-Compose

    In the folder structure

    server: ~/docker/fgc$\ndocker-compose.yml\nfgc.env\n

    fgc.env is the environment file for all the password/keys to login to different game services, fill it in manually or use a backup.

    EG_OTPKEY=\nEG_EMAIL=\nEG_PASSWORD=\nNOTIFY=discord://123456/ABCD\nPG_EMAIL=\nPG_PASSWORD=\nGOG_EMAIL=\nGOG_PASSWORD=\nTIMEOUT=300\n

    NOTIFY=discord://123456/ABCD if the webhook looks like this https://discord.com/api/webhooks/123456/ABCD
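The mapping from webhook URL to NOTIFY value is just a prefix swap; a sketch with the example IDs from above:

```shell
# Strip the Discord webhook prefix and prepend the discord:// scheme.
url="https://discord.com/api/webhooks/123456/ABCD"
echo "NOTIFY=discord://${url#https://discord.com/api/webhooks/}"
# prints NOTIFY=discord://123456/ABCD
```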

    TIMEOUT=300 sets the timeout to 300s before the container skips and errors out due to EpicGames captcha problems. However, the impact on Prime Gaming and GOG is not tested.

    docker-compose.yml

    services:\n  free-games-claimer:\n    container_name: FGC # is printed in front of every output line\n    image: ghcr.io/vogler/free-games-claimer # otherwise image name will be free-games-claimer-free-games-claimer\n    build: .\n    ports:\n\n      - \"5990:5900\" # VNC server\n      - \"5890:6080\" # noVNC (browser-based VNC client)\n    volumes:\n      - ~/docker/fgc:/fgc/data\n      - ~/docker/fgc/epic-games.js:/fgc/epic-games.js\n      - ~/docker/fgc/prime-gaming.js:/fgc/prime-gaming.js\n      - ~/docker/fgc/gog.js:/fgc/gog.js\n    command: bash -c \"node epic-games; node prime-gaming; node gog; echo sleeping; sleep 1d\"\n    env_file:\n      - fgc.env\n    restart: unless-stopped\n

    This docker-compose file use the environment file fgc.env as indicated above and runs once every day. It also contains VNC server/web based client.

    "},{"location":"Docker%20Apps/free-games-claimer/#missing-captcha-session","title":"Missing Captcha Session","text":"

    This should no longer be needed. Edit the corresponding line in the epic-games.js code and replace it with the following message. When the captcha is missed, it will send a notification for manual claiming.

    await notify(`epic-games: got captcha challenge right before claim. Use VNC to solve it manually. Game link: \\n ${url}`)\n

    EpicGames requires a captcha to claim free games. If the 5-minute timeout window for EpicGames is missed, it is no longer possible to claim the games until the next day, and due to the nature of Discord notifications, there is a slim-to-none chance of catching the captcha the next day. To continue claiming after acknowledging the missed session, use Portainer or ConnectBot on Android to temporarily restart the container and restore the VNC session.

    To restore a default time for claiming the games (eg. waking up on Thursday or Friday at a predictable time to claim games), use the Linux at command. at needs to be installed using apt.

    at 9:20\n> docker restart FGC\n> <EOT>\n

    This will run the command at the next occurrence of 9:20 AM. Press Ctrl-D to exit the at prompt and verify the scheduled time is correct.

    "},{"location":"Docker%20Apps/jlesage-vnc-apps/","title":"jlesage VNC Apps","text":"

    VNC apps consist of desktop applications that present their GUI in a web browser, mostly from the creator jlesage.

    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#environments","title":"Environments","text":"

    Apps from jlesage support environment variables. Create an environment file called vnc.env.

    The environment file can be referenced by many docker images from jlesage using docker-compose. The current environment variables specify U/GID and time zone, and enable dark mode for every app. It is also possible to set VNC passwords; see the full list of environment variables. For supported apps such as Avidemux, there is an option WEB_AUDIO=1 which allows audio to work.

    USER_ID=1000\nGROUP_ID=1001\nTZ=America/Vancouver\nDARK_MODE=1\nKEEP_APP_RUNNING=1\n

    The jlesage apps have 2 ports: port 5800 for viewing the VNC app in a web browser on desktop, and port 5900 for the VNC protocol, which can be used in a dedicated VNC viewer or for mobile viewing.

    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#general-bind-mounts","title":"General Bind Mounts","text":"

    The appdata bind mounts are located in ~/docker/vnc; as seen in the yml example, the VNC environment file vnc.env is placed in the appdata folder. For applications requiring access to movie storage, the bind mount is on the corresponding hard drive or pool. For applications requiring access to storage but not large media, it\u2019s best to put the files on an SSD.

    This is an example VNC container, MKVToolNix. The vnc.yml file is backed up elsewhere.

        mkvtoolnix:\n        image: jlesage/mkvtoolnix\n        env_file:\n\n            - ./vnc/vnc.env\n        volumes:\n            - '/mnt/data/nzbget:/storage:rw'\n            - '~/docker/vnc/mkvtoolnix:/config:rw'\n        ports:\n            - '5820:5800'\n            - '5920:5900'\n        container_name: mkvtoolnix\n
    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#ports","title":"Ports","text":"

    The application ports start from 5800/5900 for the corresponding access methods, adding 10 for each application.
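The port convention can be sketched as a simple formula: the web (noVNC) and VNC ports for the nth app are 5800+10n and 5900+10n.

```shell
# Print the port pairs for the first few apps under this convention.
for n in 0 1 2; do
  echo "app $n: web $((5800 + 10 * n)) vnc $((5900 + 10 * n))"
done
# prints:
# app 0: web 5800 vnc 5900
# app 1: web 5810 vnc 5910
# app 2: web 5820 vnc 5920
```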

    • for apps with high idle CPU or RAM, it\u2019s best to run the app on-demand and close it when not used
    App | Port | Dialog | Idle CPU | RAM | Additional Config
    --- | --- | --- | --- | --- | ---
    JDownloader | 5800 | jdownloader | | |
    Firefox | 5810 | | | |
    MKVToolNix | 5820 | gtk | | |
    MKVCleaver | 5840 | QT | High | |
    MegaBasterd | 5860 | | | | Github
    MCASelector | 5870 | | High | High | Github
    Avidemux | 5880 | QT | Med | Med | WEB_AUDIO=1
    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#files","title":"Files","text":"

    /config is the directory in which app configuration is stored; it should have the correct permissions. There are additional bind mounts such as /storage, which is the default file-chooser location for some containers.

    • any directory from the host can be bind mounted anywhere in the container; however, if a directory is not created on the host and the container has to create it, it may end up owned by root

    QT Based: apps that use a QT-based file explorer (eg. Avidemux) have their configuration stored in ${APP_CONFIG}/xdg/config/QtProject.ini, which is used to set up file explorer shortcuts.

    [FileDialog]\nshortcuts=file:, file:///config, file:///storage, file:///mnt/data/nzbget, file:///mnt/data, file:///mnt/data2\n

    GTK Based: apps that use a GTK-based file explorer (eg. MCASelector) have their configuration stored in ${APP_CONFIG}/xdg/config/gtk-3.0/bookmarks, which is used to set up file explorer shortcuts.

    file:///world, file:///storage\n

    There is also some application-specific setup. Applications accessing hard drives, and intensive apps, are best stopped when not in use. Lazytainer and ContainerNursery, possibly together with a DNS server, can help automate this process.

    "},{"location":"Docker%20Apps/tesla-homepage/","title":"Tesla Homepage","text":"

    This is a homepage that allows Tesla browser to enter full screen mode.

    Docker-compose

    services:\n  homepage-for-tesla:\n    image: jessewebdotcom/homepage-for-tesla:latest\n    container_name: homepage-for-tesla\n    environment:\n\n      - DEFAULT_THEME=13\n    volumes:\n      - ~/docker/tesla/public/bookmarks.json:/app/public/bookmarks.json\n      - ~/docker/tesla/public/images:/app/public/images\n    ports:\n      - \"3000:3000\"\n
    "},{"location":"Docker%20Apps/webtop/","title":"Webtop (openbox-ubuntu)","text":"
    version: \"2.1\"\nservices:\n  webtop:\n    image: lscr.io/linuxserver/webtop:amd64-ubuntu-openbox\n    container_name: webtop-openbox\n    security_opt:\n\n      - seccomp:unconfined #optional\n    environment:\n      - PUID=1000\n      - PGID=1001\n      - TZ=America/Vancouver\n      - SUBFOLDER=/ # For reverse proxy\n      - TITLE=WebtopMate # The title as it shown in browser\n    volumes:\n      - ~/docker/webtop/config:/config # default home folder\n      - /mnt/data:/mnt/data\n      - /var/run/docker.sock:/var/run/docker.sock # Run docker inside docker\n    ports:\n      - 3050:3000\n    shm_size: \"1gb\" #optional\n    restart: unless-stopped\n

    The default installation, even with the config folder copied, is not usable. Packages to be installed:

    apt update\napt install wget terminator rsync ntp spacefm compton tint2 nitrogen nano lxappearance mousepad unrar unzip xarchiver mono-complete libhunspell-dev p7zip libmpv-dev tesseract-ocr vlc ffmpeg fonts-wqy-zenhei language-pack-zh-hans mediainfo mediainfo-gui p7zip\n

    Packages that have to be installed manually: lxappearance, spacefm, tint2, nitrogen

    Desktop (tint2, nitrogen)

    • nitrogen cannot keep the scaled option after restarting; it needs to be changed manually
    • the nitrogen wallpaper is found at /config/Pictures/wallpaper.jpg
    "},{"location":"Docker%20Apps/webtop/#customization","title":"Customization","text":"

    lxappearance

    • theme: Quixotic-blue; location .themes
    • icon: Desert-Dark-icons; location .icons
    • tint2: with copied config, located in .config/tint2
    "},{"location":"Docker%20Apps/webtop/#firefox-browser","title":"Firefox Browser","text":"

    policies.json

    // force install ublock, disable annoyances, add bookmarks\n{\n  \"policies\": {\n    \"ExtensionSettings\": {\n      \"uBlock0@raymondhill.net\": {\n        \"installation_mode\": \"force_installed\",\n        \"install_url\": \"https://addons.mozilla.org/firefox/downloads/latest/ublock-origin/latest.xpi\"\n      }\n    },\n    \"NoDefaultBookmarks\": true,\n    \"DisableTelemetry\": true,\n    \"Bookmarks\": [\n      {\n        \"Title\": \"zmk\",\n        \"URL\": \"https://zmk.pw\",\n        \"Placement\": \"toolbar\"\n      },\n      {\n        \"Title\": \"SubHD\",\n        \"URL\": \"https://subhd.tv\",\n        \"Placement\": \"toolbar\"\n      } // Add more bookmarks like this\n    ],\n    \"FirefoxHome\": {\n      \"Search\": true,\n      \"TopSites\": true,\n      \"SponsoredTopSites\": false,\n      \"Pocket\": false,\n      \"SponsoredPocket\": false,\n      \"Locked\": false\n    }\n  }\n}\n

    • it is not possible to back up bookmarks on the pinned menu via policies (the only way is to restore from the home folder)
    • it\u2019s not possible to remove the import bookmarks and getting started bookmarks with policies.json as documented here; they have to be removed manually. Manual Configs:
    • ublock add Chinese filter
    • pin bookmarks
    • remove default bookmarks and getting started from toolbar
    "},{"location":"Docker%20Apps/webtop/#files","title":"Files","text":"

    SpaceFM

    • upon installing, with config copied over, everything works fine
    • configuration is stored in ~/.config/spacefm

    Movie-Renamer Script

    • works after copying
    "},{"location":"Docker%20Apps/webtop/#subtitles","title":"Subtitles","text":""},{"location":"Docker%20Apps/webtop/#subtitle-edit","title":"Subtitle Edit","text":"

    Install dependencies, then download Subtitle Edit:

    curl -s https://api.github.com/repos/SubtitleEdit/subtitleedit/releases/latest | grep -E \"browser_download_url.*SE[0-9]*\\.zip\" | cut -d : -f 2,3 | tr -d \\\" | wget -qi - -O SE.zip\nunzip SE.zip -d /config/subtitle-edit\n
    The Subtitle Edit dark theme has to be enabled manually:

    • Options -> Settings -> Appearance -> Use Dark Theme
    • Options -> Settings -> Syntax Coloring -> Error color and change to 27111D
    • Options -> Settings -> Appearance -> UI Font -> General and change to WenQuanYi Zen Hei
    "},{"location":"Docker%20Apps/Downloading/rutorrent/","title":"RuTorrent","text":"

    The /watched folder allows dropping torrent files for automatic download.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/","title":"Audiobookshelf","text":"

    Audiobooks and podcasts.

    UID/GID

    With the newer versions of ABS, the environment variables AUDIOBOOKSHELF_UID and AUDIOBOOKSHELF_GID are removed; the container now runs as root with no way to change it. If using the user flag in Docker, there would be a permission error on startup.

    Docker-compose, place it in the media apps compose media.yml

    version: \"3.7\"\nservices:\n  audiobookshelf:\n    image: ghcr.io/advplyr/audiobookshelf:latest\n    ports:\n\n      - 13378:80\n    volumes:\n      - /mnt/m/Audios/audiobooks:/audiobooks # hard drive mount\n      - /mnt/m/Audios/podcasts:/podcasts # hard drive mount\n      - $HOME/audiobookshelf/config:/config\n      - $HOME/audiobookshelf/metadata:/metadata\n    restart: unless-stopped\n\n\u00a0 audiobookshelf-permfix:\n\u00a0 \u00a0 container_name: abs-permfix\n\u00a0 \u00a0 image: ubuntu\n\u00a0 \u00a0 networks:\n\u00a0 \u00a0 \u00a0 - public\n\u00a0 \u00a0 command: bash -c \"chown -R $${PUID}:$${PGID} /mnt; echo sleeping; sleep $${TIME}\"\n\u00a0 \u00a0 volumes:\n\u00a0 \u00a0 \u00a0 - /mnt/data/Audios/audiobooks:/mnt/audiobooks # hard drive mount\n\u00a0 \u00a0 \u00a0 - /mnt/data/Audios/podcasts:/mnt/podcasts # hard drive mount\n\u00a0 \u00a0 \u00a0 - ~/docker/audiobookshelf/config:/mnt/config\n\u00a0 \u00a0 \u00a0 - ~/docker/audiobookshelf/metadata:/mnt/metadata\n\u00a0 \u00a0 environment:\n\u00a0 \u00a0 \u00a0 - PUID=1000\n\u00a0 \u00a0 \u00a0 - PGID=1001\n\u00a0 \u00a0 \u00a0 - TIME=1h\n\u00a0 \u00a0 restart: unless-stopped\n
    • The changes made to the docker-compose file include a permfix container that automatically chowns everything in the audiobookshelf bind mounts
      • mount everything into /mnt
      • change the user and group ID accordingly
    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#usage","title":"Usage","text":"

    To add a library, go to settings, libraries and add the path as mounted in docker.

    Go to Users, change the root password, and create a new user. Note: a regular user cannot scan the library; only root can do that.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#adding-media","title":"Adding Media","text":"

    Make sure the contents are in a separate folder and follow naming like this. A cover image can also be added. The bitrate should be under 128 kbps for smooth playback.

    /audiobooks\n--- ./Author - Book\n---  --- ./Cover.jpg\n---  --- ./book - 001 or book - chapter 1\n---  --- ./book - 002\n---  --- ./book - 003\n
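    The layout above can be created like this (author and file names are hypothetical placeholders; a temp dir stands in for /audiobooks):

    ```shell
    base=$(mktemp -d)                # stands in for /audiobooks
    book="$base/Author - Book"
    mkdir -p "$book"
    # cover image plus numbered chapter files, matching the tree above
    touch "$book/Cover.jpg" "$book/book - 001.mp3" "$book/book - 002.mp3" "$book/book - 003.mp3"
    ls "$base"
    ```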

    In the WebUI, make sure to be logged in as root. Go to settings, library, and scan; it will pick up the newly added media. This is also useful for dealing with unplayable file errors.

    It is also possible to upload via the WebUI. When files are uploaded this way, they are also placed in the audiobooks folder. However, it is not possible to add more files via the web upload once it\u2019s scanned.

    Additional Metadata: Cover.jpg - cover image; desc.txt - description; *.opf - XML library file that contains additional metadata such as title, author, etc. Vocabulary: abridged/unabridged - shortened listening version; primary/supplementary ebooks - primary ebooks are

    If the media does not match or does not have an image, click the edit icon and go to Match; the best result is usually Audible.com.

    If the chapter does not match, chapters can be edited manually. Go to Chapter and Lookup.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#mobile-app","title":"Mobile App","text":"

    https://play.google.com/store/apps/details?id=com.audiobookshelf.app

    The mobile app also has download functionality; however, the directory cannot be changed. The default download location is /Internal Storage/Download/{Podcast or Audiobook}

    The minutes-listened statistic is the actual minutes listened, not the minutes of audiobook progress (eg. when playing at a faster speed).

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#backuprestore","title":"Backup/Restore","text":"

    In the WebUI, go to Settings > Backups and there will be options for backup/restore. Alternatively, copy the entire appdata folder to another computer.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#scripting-windows","title":"Scripting (Windows)","text":"

    ffmpeg silence detection (for splitting a large audio file into multiple chapters)

    ffmpeg -i input.mp3 -af silencedetect=n=-50dB:d=1.5 -f null -\n
    ffmpeg -loglevel debug -i input.mp3 -af silencedetect=n=-50dB:d=1.5 -f null - 2>&1 | findstr \"silence_duration\" | find /c /v \"\"\n

    This will find silence parts below -50dB and duration threshold of 1.5s.

    The second command (Windows cmd only; on Linux use grep -c) counts how many silence parts are detected, which should correlate to the number of chapters.

    Once the optimal duration is set, use split.py.
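    On Linux, the findstr/find pipeline above translates to grep -c. A sketch that simulates ffmpeg\u2019s silencedetect log output (the sample lines are made up; the real output comes from the ffmpeg command above) and counts the silence parts:

    ```shell
    # Simulated silencedetect log lines (stand-in for real ffmpeg stderr output)
    log='[silencedetect @ 0x1] silence_start: 10.5
    [silencedetect @ 0x1] silence_end: 12.2 | silence_duration: 1.7
    [silencedetect @ 0x1] silence_start: 95.0
    [silencedetect @ 0x1] silence_end: 97.1 | silence_duration: 2.1'

    # grep -c is the Linux equivalent of: findstr "silence_duration" | find /c /v ""
    count=$(printf '%s\n' "$log" | grep -c silence_duration)
    echo "$count"   # number of detected silence parts
    ```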

    ffmpeg command that removes silence from audio

    ffmpeg -i input.mp4 -af silenceremove=stop_periods=-1:stop_duration=4:stop_threshold=-50dB -b:a 96k output.mp3\n
    • stop_duration - the duration threshold for a silent section to be removed
    • stop_periods = -1 - search the entire audio track

    Use edge_reader.py to utilize the Edge AI reader to read the audiobook if only the PDF book is provided.

    After reading, put all the recorded files and pdf in the project folder and run processing.py twice.

    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/","title":"Jellystat","text":"Docker Apps Rating U/GID TZ SSO/Users Portable Subfolder \u274e \u2705* \u274c\ud83e\udd35 \u2705 \u274c

    https://github.com/CyferShepard/Jellystat

    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#install","title":"Install","text":"

    Docker Compose (minimum viable setup)

    services:\n  jellystat-db:\n    container_name: jellystat-db\n    image: postgres:15\n    user: 1000:1001\n    env_file:\n      - jellystat.env\n    environment:\n      POSTGRES_DB: 'jellystat'\n      TZ: 'America/Vancouver'\n      PGTZ: 'America/Vancouver'\n    volumes:\n      - ~/docker/jellystat/db:/var/lib/postgresql/data # Mounting the volume\n    restart: unless-stopped\n\n  jellystat:\n    image: cyfershepard/jellystat:latest\n    container_name: jellystat\n    user: 1000:1001\n    env_file:\n      - jellystat.env\n    environment:\n      POSTGRES_IP: jellystat-db\n      POSTGRES_PORT: 5432\n    ports:\n      - \"5050:3000\" #Server Port\n    volumes:\n      - ~/docker/jellystat/app:/app/backend/backup-data # Mounting the volume\n    depends_on:\n      - jellystat-db\n    restart: unless-stopped\n

    The content of jellystat.env

    POSTGRES_USER=jellystat\nPOSTGRES_PASSWORD=\nJWT_SECRET=\n

    • Use both PGTZ and TZ to set timezone logging
    • The environment POSTGRES_DB may not work; the default database is jfstat. The secret can be generated with
      openssl rand -base64 64 | tr -d '\\n'\n
    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#usage","title":"Usage","text":"

    A Jellyfin API key is needed to configure it. The app will show a login/configuration screen. No other configurations are necessary.

    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#backuprestore","title":"Backup/Restore","text":"

    If using bind mounts, simply copy the files in the bind mounts and everything will work on the new machine without issues. No database dumps or other steps are necessary.

    • ensure the username/password/secret in the environments are matching
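    A minimal sketch of the bind-mount backup with tar (paths follow the compose file above; temp dirs stand in for the old and new machines, and the archive name is arbitrary):

    ```shell
    # Demo appdata tree standing in for ~/docker/jellystat
    src=$(mktemp -d)
    mkdir -p "$src/db" "$src/app"
    echo "demo" > "$src/db/pg_data"

    # Archive both bind mounts; copy jellystat.tar.gz to the new machine
    tar -czf /tmp/jellystat.tar.gz -C "$src" db app

    # Restore at the same paths on the destination
    dst=$(mktemp -d)
    tar -xzf /tmp/jellystat.tar.gz -C "$dst"
    ```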
    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#reverse-proxysso","title":"Reverse Proxy/SSO","text":"

    The app does not have SSO support. The internal login cannot be disabled (github issue). The app does not support subfolders; only subpath is supported. No special requirements are needed when using Nginx Proxy Manager. If the frontend is in the same network as the proxy, simply jellystat:3000 is enough.

    "},{"location":"Docker%20Apps/Media%20Apps/rich-media/","title":"Rich Media","text":"

    Hello Everyone

    This is a demo consisting of media.

    Some Code

    docker-compose up -d\n
    import os\nimport time\n\nprint(\"hello world\")\na, b, c = 1, 2, 3\nif a == b:\n  print(a)\nelif b == c:\n  try:\n    print(c)\n  except Exception:\n    print(c + a)\nelse:\n  print(\"what is the meaning of life\")\n

    More sample media

    Portainer is a software for managing docker containers.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/","title":"Bluemap","text":"Docker Apps Rating U/GID TZ SSO/Users Portable Subfolder Mobile n/a n/a \u274e\ud83e\udd35 n/a \u2705 \u2714

    https://bluemap.bluecolored.de/wiki/

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#installation","title":"Installation","text":"

    Download bluemap and place it in the minecraft plugin folder; a Docker version is also available.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#configuration","title":"Configuration","text":"

    Config files are located in plugins/Bluemap. Change this line in core.conf so the app functions:

    accept-download: true\n

    • data: \"bluemap\" - the data location is not in the plugins base folder but relative to the base folder of the minecraft docker container
      • the default is located in <docker_mc_folder>/bluemap
    • Default port is 8100, change in webserver.conf
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#resource-pack","title":"Resource pack","text":"

    Add a .zip into plugin/Bluemap/packs. The .zip archive should have the files in its root folder

    • .zip -> resource_pack\\ -> [pack.mcmeta, assets ...] not OK
    • .zip -> [pack.mcmeta, assets ...] OK
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#markers","title":"Markers","text":"

    To see the changes, docker attach mcserver, then execute bluemap reload

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#marker-set","title":"Marker Set","text":"

    https://bluemap.bluecolored.de/wiki/customization/Markers.html

    debug-set: {\n    label: \"Debug Set\"\n    toggleable: true\n    default-hidden: false\n    sorting: 1\n    markers: {\n\n    }\n}\n

    • multiple sets can be added in this format
    • label - the name that will appear (the debug-set is just an identifier)
    • sorting the order which it will appear
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#html","title":"HTML","text":"

    Marker that shows an HTML element, for example a text label.

    marker-html: {\n    type: \"html\"\n    position: { x: -132, y: 72, z: -202 }\n    label: \"Karis\"\n    html: \"<html code>\"\n    anchor: { x: 0, y: 0 }\n    sorting: 0\n    listed: true\n    min-distance: 50\n    max-distance: 750\n}\n

    • type set to html

    HTML Code

    <div style='line-height: 1em; font-size: 1.2em; color: black; font-weight: bold; background-color: white; transform: translate(-50%, -50%);'>Karis</div>\n
    This HTML code has black text on a white background, bolded. To have multiline text, just copy the <div> part again.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#line","title":"Line","text":"

    Marker is a 3D line that can be clicked to show label or detail, color can be customized.

    line-marker: {\n    type: \"line\"\n    position: { x: -42, y: 70, z: -340 }\n    label: \"Text to Display\"\n    line: [\n        { x: -42, y: 70, z: -340 },\n        { x: 37, y: 90, z: -325 },\n        { x: 102, y: 115, z: -312 }\n    ]\n    line-color: {r: 255, g: 0, b: 0, a: 1}\n    line-width: 3\n    detail: \"HTML code\"\n    max-distance: 1500\n}\n

    • position - the starting position
    • line - array of xyz coordinates (can include starting position)
    • line-color - RGBA value
    • label and detail will both display the name of the line marker
      • setting anything in detail will override label. It is a good idea to set the y value above where it appears on the map; if a line is covered by a block, that part of the line will not show.
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#poi","title":"POI","text":"

    Marker that can be clicked and shows the label text, with option to add custom icons.

    poi-marker-1: {\n    type: \"poi\"\n    position: { x: 273, y: 62, z: 640 }\n    label: \"Village Marker 1\"\n    icon: \"assets/poi.svg\"\n    max-distance: 400\n}\n

    icon - can be any HTML image type

    • the default icon size is 50px as shown in preview
    • icons must be stored in /blue/web/assets to be used
    • svg vector type is preferred over png due to the small size constraint
      • svg created in Illustrator needs width=\"50px\" height=\"50px\" for it to work properly
    Weird behavior with dark mode/different browsers

    On Brave browser mobile dark mode, icons do not show. On Chrome for Windows, while markers work, text styles such as bold do not.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#shape","title":"Shape","text":"

    Flat, 2D only box that covers an area.

    terrain-park: {\n    type: \"shape\"\n    label: \"Example Shape Marker\"\n    position: { x: 186, z: -321 }\n    shape: [\n        { x: 186, z: -321 }\n        { x: 184, z: -374 }\n        { x: 168, z: -368 }\n        { x: 169, z: -316 }\n        { x: 186, z: -308 }\n    ]\n    line-width: 2\n    line-color: { r: 255, g: 0, b: 0, a: 1.0 }\n    fill-color: { r: 200, g: 0, b: 0, a: 0.3 }\n    shape-y: 86\n    max-distance: 1400\n}\n

    • shape, only the x and z values are needed, no height
    • shape-y the height which the shape appears
      • if there are blocks above the plane of shape-y: part of that shape will be covered
      • if there are no blocks below the plane of shape-y: the shape will appear floating (refer to the image above)
    • color - has a line and a fill component; a fill with a: less than 1 decreases the opacity
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#render-distance","title":"Render Distance","text":"
    • in flat view, any marker with a view distance below 400 will not show
    • as the view distance increases, the icon/html/line will gradually fade out
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#reverse-proxysso","title":"Reverse Proxy/SSO","text":"

    The reverse proxy and authentication setup for subdomain is as usual in Nginx Proxy Manager. App has no built-in authentication so Authelia SSO is supported.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#subpath-with-sso","title":"Subpath with SSO","text":"Nginx Proxy ManagerCaddy

    The custom locations tab does not work; it needs to be added manually. Go to Advanced and add this to the custom Nginx configuration.

    location /map/ {\n    include /snippets/proxy.conf;\n    include /snippets/authelia-authrequest.conf;\n    proxy_pass http://10.10.120.16:8100/;\n  }\n

    • Not tested yet
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#internal-use-only","title":"Internal Use Only","text":"

    For public viewers, this part is not relevant for setup. It documents the setup and guidelines of my specific server.

    Ski Slopes Red - default color Black - default color Green - line-color: {r: 40, g: 255, b: 40, a: 1} Blue - line-color: {r: 0, g: 100, b: 200, a: 1}

    Roads Roads- line-color: {r: 240, g: 220, b: 150, a: 1}

    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/","title":"Minecraft Prep and Install","text":""},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#client-setup-java-online","title":"Client Setup (Java + Online)","text":"
    1. Download Java
    2. Download OptiFine the latest version.
    3. On the official Minecraft client, go add a new installation and match the version with OptiFine.
    4. Download and try the official version, then install OptiFine with Java.
    5. Under Settings -> Keep the Launcher open while games are running
    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#client-setup-java-offline","title":"Client Setup (Java + Offline)","text":"
    1. Use the client PolyMC to enable offline play.
    2. Go to the right corner, manage accounts and create an offline account.
    3. Click on add an instance and follow the guide.
    4. To install OptiFine, need the official launcher first, then download OptiFine
    5. Extract OptiFine, the extracted file should be ending in _MOD.jar
    6. Open the jar file in WinRAR, then move the files from notch folder into the base folder. Save the jar archive.
    7. Go to PolyMC, right click on the instance, click Edit -> Versions -> Add to minecraft.jar and select the modified OptiFine.
    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#docker-server-setup","title":"Docker Server Setup","text":"

    Docker-compose for minecraft server

    version: \"3.9\"\nservices:\n  minecraft:\n    image: marctv/minecraft-papermc-server:latest\n    restart: unless-stopped\n    container_name: mcserver\n    environment:\n      - MEMORYSIZE=4G\n      - PAPERMC_FLAGS=\"\"\n      - PUID=1000\n      - PGID=1000\n    volumes:\n      - ~/docker/minecraft:/data:rw\n    ports:\n      - 25565:25565\n      - 19132:19132\n      - 19132:19132/udp # geyser\n      - 8100:8100 # bluemap\n    stdin_open: true\n    tty: true\n

    This downloads the latest version of Minecraft; to use another PaperMC version, the image needs to be built from scratch.

    Warning: PaperMC cannot be downgraded, only newer version of PaperMC can be installed after first run.

    git clone https://github.com/mtoensing/Docker-Minecraft-PaperMC-Server\n# go edit the \"ARG version=1.xx.x\" to the correct version\ndocker build -t marctv/mcserver:1.xx.x .\n
    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#folders-and-plugins","title":"Folders and Plugins","text":"

    Plugins are located in the folder ./plugins; some plugins have .yml files. To update or download plugins, use scp, wget on the server, or VSCode.

    The world folder consists of the save data. It is separated into world, nether, the_end.

    Before starting the server, the eula.txt must have eula=true.
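    A sketch of accepting the EULA before first start (a temp dir stands in for the real data folder ~/docker/minecraft):

    ```shell
    data=$(mktemp -d)            # stands in for ~/docker/minecraft
    # The server refuses to start until this line is present
    echo "eula=true" > "$data/eula.txt"
    cat "$data/eula.txt"
    ```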

    bukkit.yml and spigot.yml in the root folder are configuration files for PaperMC.

    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#rcon-commands","title":"Rcon Commands","text":"

    To access the rcon-cli, use docker attach mcserver; to exit, use Ctrl-P then Ctrl-Q. If using VSCode, the keyboard shortcuts may need to be edited.

    Editing VSCode Shortcut Press Ctrl-Shift-P and search for keyboard shortcut json.

    [\n    {\n        \"key\": \"ctrl+p\",\n        \"command\": \"ctrl+p\",\n        \"when\": \"terminalFocus\"\n    },\n\n    {\n        \"key\": \"ctrl+q\",\n        \"command\": \"ctrl+q\",\n        \"when\": \"terminalFocus\"\n    },\n\n    {\n        \"key\": \"ctrl+e\",\n        \"command\": \"ctrl+e\",\n        \"when\": \"terminalFocus\"\n    }\n\n]\n
    "},{"location":"Docker%20Apps/Minecraft/useful-plugins/","title":"Useful Plugins","text":"

    WorldEdit

    EssentialX

    CoreProtect

    ViaVersions - allow other similar version to join the server without conflict

    bluemap

    Geyser

    WorldGuard

    "},{"location":"Docker%20Apps/Minecraft/useful-plugins/#offline-modemobile-bedrock","title":"Offline Mode/Mobile Bedrock","text":"

    To allow offline play for the PC version, change server.properties and edit these lines

    enforce-whitelist=false\nonline-mode=false\n
    Refer to Minecraft Prep and Install to install offline client.

    For bedrock compatibility, need the geyser plugin.

    To allow offline play for the Bedrock mobile version, go to ./plugins/Geyser-Spigot/config.yml and change these lines. Do not install the floodgate plugin; if it\u2019s installed, remove it. ViaVersions is also needed for mobile play.

    auth-type: offline\nenable-proxy-connections: true\n

    Now clients can play without logging in to Xbox or Java.
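    The two Geyser config changes above can be applied with sed (a sketch; a temp file stands in for ./plugins/Geyser-Spigot/config.yml, and the starting values are assumed):

    ```shell
    cfg=$(mktemp)    # stands in for ./plugins/Geyser-Spigot/config.yml
    printf 'auth-type: online\nenable-proxy-connections: false\n' > "$cfg"

    # Switch Geyser to offline auth and allow proxy connections
    sed -i 's/^auth-type: .*/auth-type: offline/' "$cfg"
    sed -i 's/^enable-proxy-connections: .*/enable-proxy-connections: true/' "$cfg"
    cat "$cfg"
    ```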

    "},{"location":"Docker%20Apps/Web/caddy/","title":"Custom Caddy Lego","text":"

    https://github.com/vttc08/caddy-lego Customized caddy docker container that has Dynu support for wildcard certificates.

    "},{"location":"Docker%20Apps/Web/caddy/#install","title":"Install","text":"

    Create a Docker network specific to publicly accessible container.

    docker network create public --subnet 172.80.0.0/16\n

    • the Caddy container will have an IP address of 172.80.44.3

      services:\n  caddy:\n    image: vttc08/caddy\n    container_name: caddy\n    ports:\n      - 80:80\n      - 443:443\n    volumes:\n      - ~/docker/caddy/Caddyfile:/etc/caddy/Caddyfile\n      - ~/docker/caddy/www:/www\n    env_file:\n      - .env\n    environment:\n      - WHITELIST=${WHITELIST}\n    networks:\n      public:\n        ipv4_address: 172.80.44.3\n    restart: unless-stopped\n\nnetworks:\n  public:\n    external: true\n    name: public\n

    • the caddy volumes follow all other docker apps, which are at ~/docker

    • .env file for DYNU_API_KEY which will be used for SSL
    • create a network public with the IP address
    • it is not the best idea to use user: since it may break container function; however, if all the files are present when mounted, Caddy should not change their permissions
    • WHITELIST is an environment variable that contains the only IP addresses allowed on certain services
      • this can be created in ~/.bashrc and sourced
        export WHITELIST=123.456.789.0\n

    The content of .env

    DYNU_API_KEY=\nWEBSITE=\nHTTPS=\nEMAIL=\n

    • HTTPS - the list of domains, quoted so Caddy doesn\u2019t error when parsing the comma: \"*.website.dynu.com, website.dynu.com\"
    • WEBSITE - just the website name, website.dynu.com
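    A sketch of a filled-in .env (all values are hypothetical placeholders; note the quotes around the HTTPS list):

    ```shell
    # Write a placeholder .env for the caddy stack (values are made up)
    cat > /tmp/caddy.env <<'EOF'
    DYNU_API_KEY=xxxxxxxxxxxxxxxx
    WEBSITE=website.dynu.com
    HTTPS="*.website.dynu.com, website.dynu.com"
    EMAIL=admin@website.dynu.com
    EOF
    grep -c '=' /tmp/caddy.env   # all four variables are set
    ```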
    "},{"location":"Docker%20Apps/Web/caddy/#dockerfile","title":"Dockerfile","text":"

    If the provided image doesn\u2019t work, build an image on the server itself.

    FROM caddy:2.7.5-builder-alpine AS builder\n\nRUN xcaddy build \\\n    --with github.com/caddy-dns/lego-deprecated\n\nFROM caddy:2.7.5\n\nCOPY --from=builder /usr/bin/caddy /usr/bin/caddy\n
    Then modify the image part of compose.yml
        build:\n      context: .\n      dockerfile: Dockerfile\n

    "},{"location":"Docker%20Apps/Web/caddy/#caddyfile","title":"Caddyfile","text":"
    {\n    email {$EMAIL}\n}\n
    "},{"location":"Docker%20Apps/Web/caddy/#basic-website","title":"Basic Website","text":"
    :80 {\n        root * /usr/share/caddy\n        file_server\n}\n
    "},{"location":"Docker%20Apps/Web/caddy/#https","title":"HTTPS","text":"
    {$HTTPS} {\n        tls {\n                dns lego_deprecated dynu\n        }\n\n        # Standard reverse proxy\n        @web host web.{$WEBSITE}\n        handle @web {\n                reverse_proxy mynginx:80\n        }\n}\n
    • start with *.website to indicate wildcard
    • the tls block uses dynu
    • declare @web host with the subdomain name
      • this is later used in handle @web
      • use the reverse_proxy block to define the port to be reverse proxied. In this method, only Docker containers that are in the same Docker network (public) can be reverse proxied, by internal port and container name. Tailscale IP entries should also work.
    "},{"location":"Docker%20Apps/Web/caddy/#environment-variables","title":"Environment Variables","text":"

    The previous code block already utilizes environment variables. The syntax is {$NAME}.

    "},{"location":"Docker%20Apps/Web/caddy/#whitelisting","title":"Whitelisting","text":"

                    @blocked not remote_ip {$WHITELIST}\n                respond @blocked \"Unauthorized\" 403\n
    This responds with 403 Unauthorized to any IP address not in the whitelist.

    "},{"location":"Docker%20Apps/Web/ddns-update/","title":"Dynamic DNS Updater Docker","text":"

    Official Image: https://hub.docker.com/r/linuxserver/duckdns Custom Github Page: https://github.com/vttc08/docker-duckdns-dynu

    This is a docker container that automatically updates the server\u2019s public IPv4 address every 5 minutes to the dynamic DNS services Dynu and DuckDNS. It is a fork of the LinuxServer DuckDNS container.

    "},{"location":"Docker%20Apps/Web/ddns-update/#docker-compose","title":"Docker Compose","text":"
    services:\n  duckdns:\n    image: vttc08/docker-duckdns-dynu:latest\n    container_name: duckdns\n    env_file: ddns.env\n    environment:\n      - TZ=America/Vancouver\n      - PUID=1000\n      - PGID=1001\n    restart: unless-stopped\n

    These need to be filled in the ddns.env

    DYNU_HOST= # full name of dynu domains\nDYNU_PASS= # md5 hashed dynu login pass\nSUBDOMAINS= # DuckDNS domains without the duckdns.org part\nTOKEN= # DuckDNS token \n

    • token will be visible in DuckDNS dashboard
    • the Dynu pass is the MD5 hash of the login password; alternatively, it is possible to create a dedicated password just for IP updates. MD5 generator:
      echo -n \"password\" | md5sum\n
    • when the IP is set to 10.0.0.0 in the Dynu update API, Dynu will automatically use the IP address making that request
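    A sketch of building such an update request by hand (the api.dynu.com/nic/update endpoint is an assumption to verify against Dynu\u2019s API docs; hostname and password are placeholders):

    ```shell
    # Placeholder hostname and password (hash a real login/IP-update password instead)
    DYNU_HOST="example.dynu.com"
    DYNU_PASS=$(printf '%s' "password" | md5sum | cut -d' ' -f1)

    # myip=10.0.0.0 tells Dynu to use the requester's own public address
    url="https://api.dynu.com/nic/update?hostname=${DYNU_HOST}&myip=10.0.0.0&password=${DYNU_PASS}"
    echo "$url"
    # curl -fsS "$url"   # uncomment to actually send the update
    ```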
    "},{"location":"Docker%20Apps/Web/ddns-update/#other-usage","title":"Other Usage","text":"

    docker restart duckdns will manually run an IP update. docker exec -it duckdns /app/debug.sh runs the debug script (or other scripts); the debug script prints out the IP addresses of the subdomains as resolved by Cloudflare.

    "},{"location":"Linux%20Server/debian-based-server-setup/","title":"Debian-Based Server Setup","text":"

    Run update and upgrade the distro first. Install the NTP package if there are errors with it. Reboot.

    Setup powertop and powersaving features

    sudo apt install powertop\npowertop --auto-tune\n

    Set the powersave governor at reboot with a crontab entry. Remember to also run the command once manually for the current session.

    @reboot echo \"powersave\" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null 2>&1\n

    Ensure these packages are installed

    powertop htop iotop fio curl gnupg wget ntfs-3g neofetch ca-certificates lsb-release hdparm hd-idle openssh-server at autojump screen bash-completion\n
    • after installing bash-completion, need to source .bashrc for Docker autocomplete to work
    "},{"location":"Linux%20Server/debian-based-server-setup/#hdd","title":"HDD","text":"

    Use lsblk and blkid to get the NTFS hard drive\u2019s /dev name and its /dev/disk/by-uuid/\u2026 entry

    Edit the fstab to mount the drive, same entry for nvme drive

    UUID=CC34294F34293E38 /mnt/data ntfs-3g defaults 0 0\n

    If the mounted device is an HDD array, the disks need to be spun down with hdparm

    hdparm -B 120 /dev/sdb # set the APM level\nhdparm -S 241 /dev/sdb\n

    For the -S spindown value, 0-240 are multiples of 5 s and 241-255 are multiples of 30 min. The above command sets spindown to every 30 min.
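    The -S value arithmetic above can be sketched as a small helper (the encoding follows the text: values up to 240 are 5-second units, values above 240 are 30-minute units):

    ```shell
    # Decode an hdparm -S value into a timeout in seconds:
    #   0      -> spindown disabled
    #   1-240  -> value * 5 seconds
    #   241+   -> (value - 240) * 30 minutes
    spindown_seconds() {
      v=$1
      if [ "$v" -eq 0 ]; then echo 0
      elif [ "$v" -le 240 ]; then echo $((v * 5))
      else echo $(((v - 240) * 30 * 60))
      fi
    }

    spindown_seconds 241   # 1800 s = 30 min, matching the hdparm command above
    ```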

    If hdparm does not work, hd-idle can be used. Edit the file /etc/default/hd-idle

    -i 60 -a disk/by-uuid/xxx -l /var/log/hd-idle.log\n

    For sudo without a password, run visudo and add this line at the bottom, replacing $USER with the actual username.

    $USER ALL=(ALL) NOPASSWD: ALL\n

    Edit shortcuts in bashrc

    source .bashrc\n
    "},{"location":"Linux%20Server/debian-based-server-setup/#openssh-with-keys","title":"OpenSSH with Keys","text":""},{"location":"Linux%20Server/debian-based-server-setup/#generate-the-key-using-the-terminal","title":"Generate the key using the terminal","text":"
    ssh-keygen\n
    • give a location to put the key pair
    • this generates a public (.pub) and private key pair
    ssh-copy-id -i key.pub username@server\n
    • key.pub is the public key that was generated

    The key is ready to use for authorization.

    "},{"location":"Linux%20Server/debian-based-server-setup/#generate-keys-using-putty-software","title":"Generate keys using PuTTY software","text":"
    1. Copy the public key (the red part shown in PuTTYgen) and use nano to add it to the server\u2019s ~/.ssh/authorized_keys
    2. Make sure permissions are correct
      mkdir -p ~/.ssh\nchmod 700 ~/.ssh\nchmod 600 ~/.ssh/authorized_keys\nnano ~/.ssh/authorized_keys\n
    3. Save private key as ppk file on the root ssh folder.
    4. If the client holding the private key is a Linux machine, the permission of the private key needs to be changed.

      chmod 600 private.key\n
    5. Convert the private key via Conversions > Export OpenSSH key and save the file to a folder OpenSSH Keys

    "},{"location":"Linux%20Server/debian-based-server-setup/#ssh-config","title":"SSH Config","text":"

    Configuration file for easy SSH access. The permission for that file is 644.

    Host server\n  HostName 10.10.120.1\n  User ubuntu\n  IdentityFile ~/keys/server.key\n

    Use with OliveTin

    To have seamless ssh experience with OliveTin, make sure to copy the ssh config file and all the keys to /root, since in OliveTin ~ means /root not your user home directory.

    "},{"location":"Linux%20Server/debian-based-server-setup/#setting-up-smb","title":"Setting Up SMB","text":"

    Refer to Samba(SMB) Setup to setup SMB server.

    "},{"location":"Linux%20Server/debian-based-server-setup/#desktop-environment-setup","title":"Desktop Environment Setup","text":""},{"location":"Linux%20Server/debian-based-server-setup/#firefox","title":"Firefox","text":"

    The location of firefox profile is at /home/$USER/.mozilla/firefox/xxxxx.default

    Make a tarball, copy it over, and extract it at the destination.
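    The tarball round-trip can be sketched like this (temp dirs stand in for the two machines; the profile name is the hypothetical one from the profiles.ini example below, and the real source is ~/.mozilla/firefox/xxxxx.default):

    ```shell
    # Demo profile folder standing in for ~/.mozilla/firefox/<profile>
    work=$(mktemp -d)
    profile="$work/ims58kbd.default-esr-1"
    mkdir -p "$profile"
    echo "[Compatibility]" > "$profile/compatibility.ini"

    # Tarball the profile folder, then copy profile.tar.gz to the new machine
    tar -czf /tmp/profile.tar.gz -C "$work" "ims58kbd.default-esr-1"

    # Extract on the destination
    dest=$(mktemp -d)
    tar -xzf /tmp/profile.tar.gz -C "$dest"
    ```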

    In the profile folder, look for compatibility.ini; go to a random profile on the destination machine and copy its compatibility.ini settings into the profile that was copied over. This ensures compatibility so that the new profile works without warnings.

    Check profiles.ini for the name and the location of the new profile folder; Firefox should then behave the same as before.

    [Profile0]\nName=karis\nIsRelative=1\nPath=ims58kbd.default-esr-1\n

    Themes

    To backup/restore settings of cinnamon

    Icons

    The icons are located at these locations.

    /usr/share/icons\n~/.icons\n

    Scripts

    Copy the scripts and put them into ~/script for organization, and copy the old crontab for executing these scripts.

    "},{"location":"Linux%20Server/olivetin/","title":"OliveTin","text":"

    OliveTin exposes a webpage with buttons that execute shell commands (eg. docker, scripts) on the server, allowing others easy access. It should be used internally only.

    Main Interface Log Interface

    "},{"location":"Linux%20Server/olivetin/#installation","title":"Installation","text":"

    Download the correct file from this site. https://github.com/OliveTin/OliveTin/releases OliveTin_linux_amd64.deb

    Go to the directory and install the package.

    • if a previous config.yaml is already present, installer will ask what to do, the default is to keep the previous config
      sudo dpkg -i OliveTin_linux_amd64.deb\nsudo systemctl enable --now OliveTin\n

    Uninstall

    sudo dpkg -r OliveTin # the installed app name, not the deb file\n

    "},{"location":"Linux%20Server/olivetin/#configuration","title":"Configuration","text":"

    The configuration file is located at /etc/OliveTin/config.yaml

    Script Execution User

    By default, OliveTin always executes scripts as root!! This has complications. Take an example script that echoes a location, creates a file in the /opt dir owned by user 1000, and cds into ~/Downloads (user 1000\u2019s download dir).

    default

    /root/Downloads/ line 7: cd: /root/Downloads: No such file or directory. The file created by the script is owned by root and not editable in VSCode or other editors unless using sudo.

    as user 1000

    /home/test/Downloads/ The file created by the script is owned by user and can be freely edited.

    To run the command as user, use sudo -u user /path/to/script.

    • ~ path works as intended
    • all files created and modified will be owned by user not root
    • bashrc variables do not work, to use environment variables, it must be sourced elsewhere
    • by default, the script has a $PWD of /root, so relative paths do not work for files

    Example Configuration

    listenAddressSingleHTTPFrontend: 0.0.0.0:1378 # set the port to 1378\n\n# Choose from INFO (default), WARN and DEBUG\nlogLevel: \"INFO\"\n\nactions:\n\n- title: Update Music\n  shell: /home/karis/scripts/script\n  icon: '&#127925'\n  timeout: 2\n  hidden: true\n
Configuration consists of a list of actions; each action consists of title, shell, and icon

• timeout is also optional; the task will be killed if it takes longer than this (in seconds) to complete
• hidden will hide it from the dashboard
  • to unhide, a service restart is needed
• maxConcurrent optional, only allows x runs for the duration of the execution; any more will be blocked
• rateLimit more advanced limiting
  • to clear a rate limit, OliveTin has to be restarted
            maxRate:\n      - limit: 3\n        duration: 5m\n
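A sketch combining the optional keys above in one action (title and shell are placeholders):

```yaml
actions:
  - title: Example action
    shell: /path/to/script
    timeout: 10        # kill the task if it runs longer than 10 seconds
    maxConcurrent: 1   # block additional runs while one is executing
    maxRate:
      - limit: 3
        duration: 5m   # allow at most 3 runs per 5 minutes
```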
    "},{"location":"Linux%20Server/olivetin/#arguments","title":"Arguments","text":""},{"location":"Linux%20Server/olivetin/#textbox-input","title":"Textbox Input","text":"
    - title: Restart a Docker CT\n  icon: '<img src = \"icons/restart.png\" width=\"48px\" />'\n  shell: docker restart {{ container }}\n  arguments:\n    - name: container\n      type: ascii\n
    • use {{ }} and give a variable
• under arguments, assign a type for each; the ascii type only allows letters and numbers
    "},{"location":"Linux%20Server/olivetin/#dropdown-choices","title":"Dropdown Choices","text":"

    - title: Manage Docker Stack Services\n  icon: \"&#128736;\"\n  shell: docker-compose -f /home/karis/docker/bookstack/docker-compose.yml {{ action }}\n  arguments:\n    - name: action\n      choices:\n        - title: Start Stack\n          value: up -d\n        - title: Stop Stack\n          value: down\n
This example gives choices to start or stop a docker stack of a docker-compose file. If an argument is given the parameter choices, it will be in dropdown mode.

    "},{"location":"Linux%20Server/olivetin/#suggestion","title":"Suggestion","text":"

Suggestion is a hybrid between dropdown and textbox. It suggests a list of possible items in the browser but does not restrict choices.

      arguments:\n    - name: action\n      title: Action Name\n      suggestions:\n        - value: Information\n

• value is what is passed to the shell and Information is a display text for clarification. After modifying the configuration, a restart is required to clear out previous suggestions for browsers.
    "},{"location":"Linux%20Server/olivetin/#execute-on-files-created-in-a-directory","title":"Execute on files created in a directory","text":"

    - title: Update Songs\n  icon: <iconify-icon icon=\"mdi:music\"></iconify-icon>\n  shell: /home/test/scripts/file.sh {{ filepath }}\n  arguments:\n    - name: filepath\n      type: unicode_identifier\n  execOnFileCreatedInDir: \n    - /home/test/Downloads/\n    - /another/folder\n
Whenever a new file is created in a monitored directory, the action will execute.

• execOnFileCreatedInDir
  • it is possible to add multiple paths to monitor; however, adding a path requires a restart of the OliveTin service
• same principle as Arguments, except OliveTin provides predefined arguments for files; filepath is the full absolute path of the file that was created
    "},{"location":"Linux%20Server/olivetin/#execution-feedback","title":"Execution Feedback","text":"
    - title: some action\n  popupOnstart: default, execution-dialog-stdout-only, execution-dialog, execution-button\n
    default stdout-only dialog button
• the popup dialog has an option to show only stdout, or the full log output with exit code
• button will show how long the process takes
• the popup box may not be easy to close; use the keyboard ++Esc++ key to close it
    "},{"location":"Linux%20Server/olivetin/#confirmation","title":"Confirmation","text":"

    It is possible to have a confirmation before completing action.

      arguments:\n\n    - type: confirmation\n      title: Click start to begin.\n

    • user must click a checkbox and then start before the action will execute
• the API does not have such restrictions
    "},{"location":"Linux%20Server/olivetin/#ssh-to-another-server","title":"SSH to Another Server","text":"

Since OliveTin runs commands as root by default, it is necessary to copy the SSH config file and all the keys from a user's folder into /root/.ssh

• if the permissions are set up correctly for the user, the permissions will copy over

On the first try, the SSH command needs the option -o StrictHostKeyChecking=no; on subsequent logins, ssh via the SSH config will work as normal.
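For reference, a hypothetical /root/.ssh/config entry (hostname, address, user and key path are examples); accept-new avoids the interactive prompt on the first connection without disabling host-key checks entirely:

```
Host backupserver
    HostName 192.168.0.10
    User user
    IdentityFile /root/.ssh/id_ed25519
    StrictHostKeyChecking accept-new
```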

    "},{"location":"Linux%20Server/olivetin/#icons","title":"Icons","text":"

The icons need to be placed in a folder like /var/www/[icon-folder]/icon.png. To use an icon, whether an offline image or a web address, it should be in HTML format. 48px is the default size of OliveTin icons. Other CSS options such as style=\"background-color: white;\" also work.

    icon: '<img src = \"icons/minecraft.png\" size=\"48px\" />'\n
Icon with emoji: to use an emoji, use its HTML code. https://symbl.cc/en/emoji/ For example, &#9786; renders as a smiley.
    icon: \"&#9786;\"\n

    "},{"location":"Linux%20Server/olivetin/#third-party","title":"Third-Party","text":"

OliveTin only supports Iconify icons as third-party icons. To use one, search for an icon, and under components select Iconify Icon. Add the copied line to the configuration.

      - title: Title\n    icon: <iconify-icon icon=\"openmoji:jellyfin\"></iconify-icon>\n

    "},{"location":"Linux%20Server/olivetin/#icon-management","title":"Icon Management","text":"

The default icon folder is /var/www/olivetin/icons. The folder with all homelab icons is ~/icons/homelab

    "},{"location":"Linux%20Server/olivetin/#api","title":"API","text":"

    Simple action button.

    curl -X POST \"http://mediaserver:1378/api/StartAction\" -d '{\"actionId\": \"Update Music\"}'\n
    Action with Arguments.
    curl -X POST 'http://mediaserver:1378/api/StartAction' -d '{\"actionId\": \"Rename Movies\", \"arguments\": [{\"name\": \"location\", \"value\": \"value\"}]}'\n

    Arguments variable cannot be \u201cpath\u201d

If path is used as an argument name, when executing commands with arguments it will replace the system $PATH variable, rendering most commands useless, even basic ones like sleep, date etc. Use another variable name such as directory or location
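The effect can be demonstrated outside OliveTin; this hedged one-liner simulates an argument overwriting $PATH:

```shell
# Why naming an argument "path" is dangerous: once $PATH is replaced by an
# arbitrary value, the shell can no longer find even basic binaries.
env PATH=/nonexistent date >/dev/null 2>&1 || echo "date unavailable when PATH is clobbered"
```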

Newest OliveTin Version Breaks Old API Method

The actionName key is deprecated and no longer works; the newest OliveTin API only allows actionId for the StartAction API endpoint. The scripts above are adjusted accordingly. To migrate, the easiest way is to create an id in the configuration that has the same value as the action name.

- title: action name\n  id: action name\n

    "},{"location":"Linux%20Server/olivetin/#dashboard","title":"Dashboard","text":"

Dashboards are a separate page from the default OliveTin page; Fieldsets and Folders can group actions, but only in dashboards.

• when an action is in a dashboard, it does not appear in the main view.
• when refreshing the page, it will always go back to the main view even if the page is currently at a dashboard
      dashboards:\n  - title: My Dashboard\n    contents:\n      - title: Title Desc\n        type: fieldset\n        contents:\n          - title: Fix Epic Games\n          - title: Restart Minecraft\n      - title: Update Metadata\n        type: fieldset\n        contents:\n          - title: Stuff\n            icon: '<img src = \"icons/mcrestart.png\" width=\"64px\" />'\n            contents:\n               - title: Update Songs\n
    Preview

    "},{"location":"Linux%20Server/olivetin/#fieldsets","title":"Fieldsets","text":"

Fieldsets are groups of actions under a title. Any title that has type: fieldset defined is a fieldset; actions are grouped under the contents key and need to have matching titles.

    "},{"location":"Linux%20Server/olivetin/#folders","title":"Folders","text":"

Folders also group actions together in a dashboard; the user needs to click into the folder to see the actions.

• it is possible to use custom icons or titles for folders as long as type: is not set and it has contents:
    "},{"location":"Linux%20Server/olivetin/#entities","title":"Entities","text":"

To use entities, you need an action, a dashboard entry, an entities json/yaml file, and an entity update method (for when an action interacts with the entity).

    Preview of Entities Flowchart

    "},{"location":"Linux%20Server/olivetin/#entities-file","title":"entities-file","text":"

    It\u2019s possible to use json or YAML

    entities:\n  - file: /etc/OliveTin/entities/containers.json\n    name: container\n

• entity files are stored in /etc/OliveTin/entities
• the name of the entity will be referenced as container.attributes in the configuration
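For illustration, one line of a containers.json entity file as produced by docker ps -a --format "{{ json . }}" might look like this (values are examples; the real output contains more keys):

```json
{"Names":"jellyfin","Image":"jellyfin/jellyfin","State":"running","Status":"Up 2 hours"}
```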
    "},{"location":"Linux%20Server/olivetin/#entity-update","title":"entity update","text":"
- title: Update container entity file\n  shell: 'docker ps -a --format \"{{ json . }}\" > /etc/OliveTin/entities/entity.json'\n  hidden: true\n  execOnStartup: true\n  execOnCron: '*/5 * * * *'\n
• this is an action triggered by other actions that need to modify the entity; its purpose is to update the entity file
    "},{"location":"Linux%20Server/olivetin/#entity-actions","title":"entity-actions","text":"
    - title: Check {{ container.Names }} Status\n  shell: echo {{ container.Status }}\n  entity: container\n  trigger: Update container entity file\n

    The entity action is defined the same way as other actions.

• entity needs to be defined
• trigger automatically updates entity attributes (since executing this action could change some attribute of an entity, like starting a container)
• both title and shell can use entity.attributes
    "},{"location":"Linux%20Server/olivetin/#dashboard-entry","title":"dashboard-entry","text":"
     - title: CPanel\n    contents:\n      - title: 'Container {{ container.Names }} ({{ container.Image }})'\n        entity: container\n        type: fieldset\n        contents:\n          - type: display\n            title: |\n              {{ container.Status }} <br /><br /><strong>{{ container.State }}<>\n          - title: 'Check {{ container.Names }} Status'\n
    Preview
• the dashboard uses the same configuration as before, but is now able to utilize entities.
    "},{"location":"Linux%20Server/sambasmb-setup/","title":"Samba(SMB) Setup","text":""},{"location":"Linux%20Server/sambasmb-setup/#setting-up-smb-server-on-linux","title":"Setting up SMB Server on Linux","text":"

    Install the samba tool on Linux.

    sudo apt update\nsudo apt install samba -y\n

    Edit the /etc/samba/smb.conf

    [nvme_share]\n   comment = NVMe Share\n   path = /mnt/nvme/share\n   browseable = yes\n   read only = no\n

nvme_share is the name of the Samba share as it will appear in SMB clients; its path is accessed via \\\\192.168.0.1\\nvme_share

    path is the location where the files are stored

browseable and read only are flags needed to ensure read/write access on the SMB share
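Optionally, the share definition can be tightened; the extra keys below are hedged additions (the account name is an example):

```ini
[nvme_share]
   comment = NVMe Share
   path = /mnt/nvme/share
   browseable = yes
   read only = no
   valid users = user      ; restrict access to a specific samba account
   create mask = 0664      ; default permissions for new files
   directory mask = 0775   ; default permissions for new directories
```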

    Lastly, add the user and password for the SMB share

    sudo smbpasswd -a $USER # enter the password twice\n

If Windows fails to write files to the Samba share for some odd reason, go to Manage Credentials -> Windows Credentials -> Add a Windows Credential and fill in the necessary address, username and password.

"}]} {"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#recent-updates","title":"Recent Updates","text":"
    • RuTorrent
    • Custom Caddy Lego
    • Basic Server Setup, Caddy, Docker, Tailscale
    • Debian-Based Server Setup
    • Tunneling Basic Services (Jellyfin, Web) with Caddy and Tailscale
    • JDownloader
    • Fireshare
    • Audiobookshelf
    • Jellystat
    • Bluemap
    "},{"location":"mkdocs/","title":"Mkdocs","text":""},{"location":"mkdocs/#mkdocs-gotchas","title":"Mkdocs Gotchas","text":"
• yaml highlighting is broken with mdx-breakless-lists
• when using heading #, if there are no line breaks between headings, any list that comes after the content of the second heading will not be rendered properly, even with mdx-breakless-lists
• furthermore, a list placed right after a yaml code block will also not be rendered correctly
• when referencing a subheading in another file, mkdocs uses [](file.md#heading-with-space) while obsidian uses [](file.md#heading%20with%20space)
• before switching from lists to normal content, a line break is needed, otherwise the text below will be rendered with an indent
• mkdocs subheadings [](#subheadings) must be in lower case
    "},{"location":"mkdocs/#admonitioncallouts","title":"Admonition/Callouts","text":"Mkdocs native callout

    callout content mkdocs

    Nested

    Nesting

    • ??? is also valid syntax for mkdocs
• ???+ makes the callout collapsible and open by default, while plain ??? makes it collapsed by default
      !!! notes \"Title\"\n    content\n
  Obsidian callouts require the plugin mkdocs-callouts
    Obsidian Native Callout

    Callout content mkdocs

    Nested callout

    callout

    > [!notes]+/- Callout title\n> Callout content\n
• obsidian callout syntax follows the same +/- convention for collapsing; it is inserted after the brackets

    Available callouts include notes, info, warning, danger, success, failure, example, abstract, tip, question, bug.

    "},{"location":"mkdocs/#keys-caret-mark-tilde","title":"Keys, Caret, Mark, Tilde","text":"

    Keys ++ctrl+alt+plus++ Ctrl+Alt++ mark highlighting tilde strikethrough

    "},{"location":"mkdocs/#tabbed-content","title":"Tabbed Content","text":"Tab 1Tab 2

    Tab 1 content mkdocs Second line here.

    Tab 2 content

    === \"Tab Name\"\n    Tab content\n

    • not supported in obsidian
    "},{"location":"mkdocs/#attr_list","title":"attr_list","text":"

    Fancy Buttons mkdocs [button text](link.md){ .md-button } Tooltip I\u2019m a tooltip that you can hover or click. [tooltip](https://link \"hover text\") Annotation I\u2019m an annotation, but you need to click the plus icon (1) to show. (2)

    1. annotation 1
    2. annotation 2
      Annotation location 1 (1), location (2)\n{ .annotate }\n1. annotation text to be shown\n

    Footnote Insert footnote like [^1] 1

    • for inserting footnote [^1]
    • [^1]: at the end to explain the footnote; not supported in obsidian
    "},{"location":"mkdocs/#code-highlighting","title":"Code Highlighting","text":"
    from python import python\npython.run(arg1=123, arg2=\"mystr\")[2]\n
    #!/bin/bash\nvar=\"myvar\"\necho $var+3\n
    # yaml highlighting has to be `yaml` not `yml` and it's broken\n---\nversion: \"2.1\"\nservices:\n  clarkson:\n    image: lscr.io/linuxserver/clarkson\n    container_name: clarkson\n    environment:\n\n      - PUID=1000\n      - PGID=1000\n    ports:\n      - 3000:3000\n    restart: unless-stopped\n
    1. explaining the footnote.\u00a0\u21a9

    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/","title":"Basic Server Setup, Caddy, Docker, Tailscale","text":""},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#basics","title":"Basics","text":""},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#creating-the-vm-in-oracle-cloud","title":"Creating the VM in oracle cloud","text":"
    1. Go to instances, new instance.
    2. Select the Always Free image, ARM or x86, recommended 4GB RAM.
    3. Choose Ubuntu image.
    4. Download the SSH key and name it accordingly.
    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#ssh-keys","title":"SSH Keys","text":"

    Using PuttyGen.

    • Place the key in ./ssh/openssh_keys
    • Open PuttyGen, conversion -> import keys
    • Save the key files as ppk file in root folder of ./ssh

    Putty

    • Grab the IP address in the cloud console
    • Give a name in saved sessions
    • Go to behavior, choose these options
    • Under Data, make sure Terminal-type string is xterm-256color
    • Under Terminal -> Features, check \u201cdisable application keypad mode\u201d to fix issues with nano
• The private key needs to be loaded in Connection -> SSH -> Auth -> Credentials

    To get the IP address of the VPS at any time

    curl ifconfig.me\n

    Useful packages to install

    htop iotop iftop fio curl gnupg wget neofetch ca-certificates lsb-release fzf screen firewalld net-tools bash-completion\n

    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#docker","title":"Docker","text":"

    https://docs.docker.com/engine/install/ubuntu/

    sudo apt-get update\nsudo apt-get install \\\n    ca-certificates \\\n    curl \\\n    gnupg \\\n    lsb-release\n\nsudo mkdir -p /etc/apt/keyrings\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg\n\necho \\\n  \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \\\n  $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null\n\nsudo apt-get update\nsudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-compose\n

sudo groupadd docker\nsudo usermod -aG docker ubuntu\nnewgrp docker # activate docker group immediately\n

    The machine needs to be rebooted from Oracle Cloud console to finish installation.

    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#caddy","title":"Caddy","text":""},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#docker-version-install","title":"Docker Version Install","text":"

Detailed information on installing Caddy has moved to caddy. If Nginx is installed alongside Caddy, it needs to be changed to listen on port 81 instead.

    sudo nano /etc/nginx/sites-enabled/default\n

    • change the server block\u2019s listen from 80 to 81
      sudo service nginx restart\n
    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#port-forwarding","title":"Port Forwarding","text":"

On the Oracle Cloud side, log in and go to Virtual Cloud Networks, click the one that\u2019s available, then the default subnet; this will bring up the Security Lists

• this is an example of the SSH port; configure by Add Ingress Rules and add the ports accordingly; it\u2019s also possible to allow everything and install a firewall in the OS itself

    On the Linux machine, either use iptables or firewall-cmd

    Firewall-cmd (recommended)iptables
    sudo firewall-cmd --zone=public --add-port 19132/tcp --permanent\nsudo firewall-cmd --zone=public --add-port 19132/udp --permanent\nsudo firewall-cmd --zone=public --add-port 25565/tcp --permanent\nsudo firewall-cmd --zone=public --add-port 25565/udp --permanent\nsudo firewall-cmd --zone=public --add-port 80/tcp --permanent\nsudo firewall-cmd --zone=public --add-port 443/tcp --permanent\nsudo firewall-cmd --zone=public --add-port 5800/tcp --permanent\nsudo firewall-cmd --reload\n
    sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT\nsudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT\nsudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 25565 -j ACCEPT\nsudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 19132 -j ACCEPT\nsudo iptables -I INPUT 6 -m state --state NEW -p udp --dport 25565 -j ACCEPT\nsudo iptables -I INPUT 6 -m state --state NEW -p udp --dport 19132 -j ACCEPT\nsudo iptables -I INPUT 6 -m state --state NEW -p udp --dport 51820 -j ACCEPT\nsudo netfilter-persistent save\n
    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#troubleshooting","title":"Troubleshooting","text":"

    For firewall-cmd, use this command to check all open ports.

    sudo firewall-cmd --list-all\n

    Using netstat, or pipe it to grep

    netstat -tln\n# | grep 8080 etc...\n
    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#tailscale","title":"Tailscale","text":"

Installation and setup of basic services is covered in tunneling basic services. This page covers usage such as exit nodes and subnet routes.

    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#exit-nodesubnet-routes","title":"Exit Node/Subnet Routes","text":"

First, IP forwarding needs to be enabled.

    echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf\necho 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf\nsudo sysctl -p /etc/sysctl.conf\n
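To confirm the change took effect, a quick check (Linux only; prints 1 when forwarding is enabled, 0 otherwise):

```shell
# Read the live kernel setting directly from procfs
cat /proc/sys/net/ipv4/ip_forward
```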
When used with firewalld, additional configuration is needed, such as masquerade.
sudo firewall-cmd --add-masquerade --zone=public --permanent \nsudo firewall-cmd --add-interface=tailscale0 --zone=trusted --permanent\nsudo firewall-cmd --reload\n
Basic command to advertise as an exit node and advertise subnet routes
sudo tailscale up --advertise-exit-node --advertise-routes=10.10.120.0/24\n
When connecting tailscale from the CLI, an additional argument is needed to accept routes (the command below also activates the exit node)
sudo tailscale up --advertise-exit-node --accept-routes\n
To enable these features, go to the admin console, open each machine's settings, Edit Route Settings, and enable exit node or subnet routes.

    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#advanced","title":"Advanced","text":"

    Tunneling Jellyfin and other web services with tailscale and caddy

    Minecraft Tunneling

    "},{"location":"Cloud%20VPS/basic-server-setup-caddy-docker-tailscale/#archived","title":"Archived","text":"

    Basic Setup + Docker

1. Installing Caddy web server (a simple-to-use reverse proxy): lightweight, easy, and no need for docker. (Nginx is also a good candidate for reverse proxy as the command is easy to memorize and does not require consulting documentation sites; however, the syntax for nginx is extremely complex compared to caddy and might not be easily memorized.)

    https://caddyserver.com/docs/install#debian-ubuntu-raspbian

    sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https\ncurl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg\ncurl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list\nsudo apt update\nsudo apt install caddy net-tools\n# net-tools is good utility, optionally can install firewall-cmd or nginx\n# sudo apt install firewalld nginx\n

    Basic Caddy Syntax (if applicable) If the server that is being setup or restored needs functional service like bookstack or uptime-kuma, reverse proxy is needed.

    sudo nano /etc/caddy/Caddyfile\n

    {\n    email weebly2x10@gmail.com\n}\n\nyour-uptime-kuma.yoursubdomain.duckdns.org {\n        reverse_proxy http://127.0.0.1:3001\n}\n\nwiki.yoursubdomain.duckdns.org {\n        reverse_proxy http://127.0.0.1:6975\n}\n
    "},{"location":"Cloud%20VPS/jdownloader/","title":"JDownloader","text":""},{"location":"Cloud%20VPS/jdownloader/#basic-setup","title":"Basic Setup","text":""},{"location":"Cloud%20VPS/jdownloader/#configuring-jdownloader","title":"Configuring JDownloader","text":"
    • Go to the JDownloader WebUI
    • Go to Settings
    • Under general, change the max number of downloads (2) and DL per hoster (1) to minimize issues
    • Go to MyJDownloader and configure MyJDownloader account
    • Go to extension modules, install and enable \u201cfolderwatch\u201d

The configuration for JDownloader is complete; it should appear and be functional in the WebUI. Advanced JDownloader documentation will be covered in detail in another section. It is recommended to close port 5800 after configuring to prevent others from accessing it.

This section is now obsolete, as UHDMV has shut down and it\u2019s pointless to set up multiple automated JDownloader servers on a VPS.

    "},{"location":"Cloud%20VPS/jdownloader/#settings-for-jdownloader","title":"Settings for JDownloader","text":"

Debloat settings https://rentry.org/jdownloader2 Advanced Settings:

• GraphicalUserInterfaceSettings: Banner -> disable
• GraphicalUserInterfaceSettings: Premium Alert Task Column -> disable
• GraphicalUserInterfaceSettings: Premium Alert Speed Column -> disable
• GraphicalUserInterfaceSettings: Premium Alert ETA Column -> disable
• GraphicalUserInterfaceSettings: Special Deal Oboom Dialog Visible On Startup -> disable
• GraphicalUserInterfaceSettings: Special Deals -> disable
• GraphicalUserInterfaceSettings: Donate Button State -> Hidden (automode)

    "},{"location":"Cloud%20VPS/jdownloader/#theming","title":"Theming","text":"

GraphicalUserInterfaceSettings: Look And Feel Theme -> BLACK_EYE. For colors, use LAFSettings: Color For

    • Panel background and header background and alternate row background- #ff222222
    • Selected Rows Background - #ff666666
    • Package Row Background - #ff333333
    • Mouse Over Row Background - #ff666666
• Panel Header Foreground, Tooltip Foreground, Selected Rows Foreground, Package Row Foreground, Mouse Over Row Foreground, Alternate Row Foreground, Account Temp Error Row Foreground, Account Error Row Foreground - #ffffffff
  • basically, when searching for \u201ccolor fore\u201d, change all the black values to white; change everything except blue colors and the error color
    • Enabled Text Color, Speed Meter Text, Speed Meter Average Text, Config Panel Description Text, Config Header Text Color - #ffffffff
    • Disabled Text Color - #ff666666
      • basically, when searching for color text, change all to white except for disabled text
    "},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/","title":"Tunneling Basic Services (Jellyfin, Web) with Caddy and Tailscale","text":"

    This procedure is not reproducible yet. Rigorous testing is still required before being documented. Here are the known procedures.

    The purpose is to tunnel normal web or network intensive traffic such as Jellyfin when faced with CG-NAT or similar situations (in this case locked down dorm internet), also configure hardware transcoding (in this case NVENC, but Intel QSV for future) to mitigate limitations with Canadian ISP(s).

    "},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/#jellyfin","title":"Jellyfin","text":""},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/#install","title":"Install","text":"

    https://jellyfin.org/downloads/server Download and run the server installer. Configure Jellyfin to your liking.

    "},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/#tailscale","title":"Tailscale","text":""},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/#windows","title":"Windows","text":"

    https://tailscale.com/download/windows Download, install and login.

    "},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/#linux","title":"Linux","text":"
    curl -fsSL https://tailscale.com/install.sh | sh\n
    sudo tailscale up\n

    All the tailscale management is done in the WebUI.

The Windows client is given a tailscale network IP address in the 100.x range. Check on the server that the Windows client is pingable.

    ping 100.x.y.z #100.79.28.31\n

Check that Jellyfin is running and tunneled properly from Oracle cloud. The command should return webpage HTML rather than an error such as unable to resolve host.

    curl http://100.x.y.z:8096\n
    "},{"location":"Cloud%20VPS/tunneling-basic-services-jellyfin-web-with-caddy-and-tailscale/#reverse-proxy","title":"Reverse Proxy","text":"

    basic-server-setup-caddy-docker-tailscale

Caddy installation and syntax can be found on this page. Replace 127.0.0.1 with the tailscale IP address.

    {\n    email weebly2x10@gmail.com\n}\n\nmovies.yoursubdomain.duckdns.org {\n        reverse_proxy http://100.x.y.z:8096\n}\n

It is possible to use the root domain (yoursub.duckdns.org) or a subdomain (movies.yousub.duckdns.org) for Jellyfin. After configuring the Caddyfile:

    sudo systemctl reload caddy\n

Use netstat to check that ports 80 and 443 are being listened on. Make sure to port forward on the Oracle VPS side.

    Other Services

Follow the same syntax as the Caddyfile provided; if the root domain is used, then a subdomain must be used for other services.

    Results

    Inconclusive yet, more testing required.

    "},{"location":"Cloud%20VPS/tunneling-minecraft-server-tcp-only-with-nginx/","title":"Tunneling Minecraft Server (tcp only) with Nginx","text":"

    Procedure not reproducible yet, will be documented later.

    "},{"location":"Computer%20Stuff/demucs-nvidia/","title":"Demucs Nvidia","text":"

Demucs is a music separation tool that has potential for a karaoke setup.

    https://github.com/facebookresearch/demucs

    https://www.youtube.com/watch?v=9QnFMKWEFcI&t=585s

    https://docs.google.com/document/d/1XMmLrz-Tct1Hdb_PatcwEeBrV9Wrt15wHB1xhkB2oiY/edit

    Installation on PC with Nvidia

    1. Firstly install Anaconda. Download Anaconda for Windows https://www.anaconda.com/products/distribution
    2. Install PyTorch. https://pytorch.org/get-started/locally/. Select the correct version of pytorch.
3. Install ffmpeg. https://www.gyan.dev/ffmpeg/builds/

    Demucs

After installing the prerequisites.

    Open \u201cAnaconda terminal\u201d and type

    python.exe -m pip install -U demucs\n
    pip install PySoundFile \n

    Running Demucs

    demucs \"C:\\path\\to\\music\\file.mp3\"\n

This will run demucs with CUDA GPU acceleration; make sure to put the path in double quotes. The extracted files will be found in a separated folder under the directory where you run the command, eg. with the default Anaconda prompt that is ~/separated

    "},{"location":"Docker%20Apps/01-docker-infra/","title":"01 Docker Infrastructure","text":""},{"location":"Docker%20Apps/01-docker-infra/#filesystem","title":"Filesystem","text":""},{"location":"Docker%20Apps/01-docker-infra/#compose","title":"Compose","text":"

All docker-compose.yml files are stored in the ~/docker folder, which by default puts them under the network docker_default.

• by default, for newly created apps, a new folder is created with its own docker-compose.yml for testing
  • once app testing is complete, the compose file can be moved to the docker root folder if appropriate, or remain where it is
• some apps can be grouped together, and these compose files live in the root docker folder, such as media.yml, network.yml; grouping allows multiple services to be managed by a single compose file. Criteria for grouping can include
  • the apps share common properties, such as the arrs apps
  • it is preferable for the apps to live in the same network, eg. teslamate
  • a large app requiring multiple containers eg. frontend, mysql etc.
  • apps share a similar/same category; for example, qBittorrent and nzbget can be put together in downloader.yml even though they do not share common properties or require the same networking
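As a sketch of the grouping idea, a hypothetical ~/docker/downloader.yml managing two downloaders in one compose file (images and names are illustrative):

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    container_name: qbittorrent
    restart: unless-stopped
  nzbget:
    image: lscr.io/linuxserver/nzbget
    container_name: nzbget
    restart: unless-stopped
```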
    "},{"location":"Docker%20Apps/01-docker-infra/#storage","title":"Storage","text":"

All containers use bind mounts for storage.

• application configs are stored in ~/docker/[app]
  • if an app has multiple components needing persistence (eg. app with database, helpers), folders are created such as ~/docker/[app]/postgres etc.
• apps that also store non-config data (such as music, documents etc.) and do not use a lot of space can bind mount /mnt/nvme/share (a directory on a local or another SSD) for fast data access without spinning up the HDD
• exceptions are Home Assistant and its related home automation containers, which are stored at /srv/homeassistant
    "},{"location":"Docker%20Apps/01-docker-infra/#backup","title":"Backup","text":"

    The entire docker root folder is copied to an NFS share on another computer, with the exception of Minecraft and Home Assistant, for which specialized methods are used.
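A backup along these lines can be sketched with rsync; the paths below are placeholders (temp dirs standing in for ~/docker and the NFS mount), and the excluded folder names are assumptions:

```shell
# Mirror the docker root to the backup mount, skipping the apps that
# have their own specialized backup. Temp dirs stand in for real paths.
SRC="${SRC:-$(mktemp -d)}"        # stands in for ~/docker
DEST="${DEST:-$(mktemp -d)}"      # stands in for the mounted NFS share
mkdir -p "$SRC/radarr" "$SRC/minecraft"
echo config > "$SRC/radarr/config.xml"

# -a preserves permissions/times; --delete keeps the mirror exact
rsync -a --delete --exclude 'minecraft/' --exclude 'homeassistant/' "$SRC/" "$DEST/"
```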

    "},{"location":"Docker%20Apps/01-docker-infra/#network","title":"Network","text":"

    With docker-compose, a new network is created named after the folder the compose file is located in. While it is possible to change networks, it is not straightforward, so there is no point in manually defining networks unless required.

    Public 172.80.0.0/16 - bridge network for public-facing applications behind the reverse proxy; this way, when configuring Nginx Proxy Manager, all it needs is container_name:80 rather than an IP address.

    • Nginx Proxy Manager - 172.80.44.3
    • Other containers will use docker DHCP to get address
    • Containers that need to be public facing can attach to this network

    Media 172.96.0.0/16 - bridge network for arrs, downloader and management applications, for easy interconnection when configuring them

    Minecraft 172.255.255.0/24 - bridge network for Minecraft-related containers
    • Minecraft server (mcserver) - 172.255.255.65
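Pinning a container to the pre-created public network with a static address can be sketched in compose like this (service name and image are illustrative):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager
    networks:
      public:
        ipv4_address: 172.80.44.3
networks:
  public:
    external: true   # created beforehand with: docker network create public --subnet 172.80.0.0/16
```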
    "},{"location":"Docker%20Apps/01-docker-infra/#categories","title":"Categories","text":"

    • Media Apps - apps related to media acquisition, curation and other supporting services for Jellyfin
    • Networking - reverse proxy, DNS, VPN and related services
    • Home Automation - Home Assistant and its associated functions
    • VNC - containers based on jlesage-vnc-apps or Linuxserver Kasm images, usually desktop apps run in a browser via noVNC
    • Management - tools for managing docker containers or the entire server
    • Games - game servers and associated tools
    • Filesharing - apps that share files to other clients
    • Documentation - notes and operation procedures for server infrastructure
    • Authentication - services that handle single sign-on (SSO) with users

    "},{"location":"Docker%20Apps/02-docker-ratings/","title":"Ratings","text":"

    The Docker App Rating is a table that evaluates each docker app's configuration, deployment and usage against quality-of-life features such as easy backup/restore, migration, user mapping, time zone in logs, and single sign-on with multi-user support. These ratings will change as more testing is done.

    Docker Apps Rating: U/GID \u274e | TZ \u2705* | SSO/Users \u274c\ud83e\udd35 | Existing FS \u2705 | Portable \u274c | Subfolder \u274c"},{"location":"Docker%20Apps/02-docker-ratings/#ugid","title":"UGID","text":"

    The Docker container/application or stack supports user ID and group ID mapping and respects IDs matching the host system. For example, Linuxserver.io and jlesage containers are the gold standard.

    \u2705 Natively Supported | \u274e Supported | \ud83d\udfe8 Usable | \u274c Not Supported
    • All Linuxserver and jlesage containers, and projects built on their base images, use the PUID/PGID environment variables for mapping
    • Fully respects UID and GID mappings on the host; it can access all bind-mounted files on the host with the respective permissions, without permission errors or app issues
    • All files the app writes to the bind mount are created with the IDs set in the environment variables and are accessible by anything, such as VSCode and other containers
    • Apps that lack these environment variables but still follow the host user ID and permissions when modifying files also get this rating, e.g. Audiobookshelf, Navidrome
    • If the app is deployed as a multi-container stack and the main app (or the component storing configuration/appdata) fully supports it but other parts do not, it gets a \u2705* rating, e.g. Bookstack
    • The container does not have these environment variables and by default creates files on the host with root:root permissions, but functions correctly
    • The permissions can be fixed simply with user: 1000:1001 in compose
    • After this fix, there should be no permission issues; the container functions without problems and creates files that are accessible by anything, e.g. Authelia, Jellystat
    • The container does not support these environment variables, and using user: breaks its functionality, causes permission issues, or it still writes files as root
    • However, the container does not write configuration data, or there is no need for shared access to its data
    • e.g. a database application, or an app configured entirely via environment/labels
    • The container exhibits the symptoms of the Usable rating, but user: either breaks the container or still does not fix permissions
    • The container bind mounts configuration or shared data that needs to be accessible by other tools, requiring constant chown -R to keep that access possible
    • Setting user: or using chown to make files accessible to the host and other tools causes the container to cease functioning
    • Only named volumes can be used, not bind mounts
    "},{"location":"Docker%20Apps/02-docker-ratings/#tz","title":"TZ","text":"

    The container supports the standard timezone variable. All logs generated by the container follow the timezone specified by TZ or other supported environment variables. This is either \u2705 or \u274c.

    "},{"location":"Docker%20Apps/02-docker-ratings/#sso","title":"SSO","text":"

    Users

    • \ud83e\udd35: Only a single user/session is supported at a time
    • \ud83d\udc6a: Multiple users are supported via SSO or internally

    Authelia: Authelia is the SSO provider used for this setup; only support and compatibility with it will be documented. Only the main app, via an exposed web interface, needs to support it; otherwise it's not applicable. If there are zero reasons to expose the app to the internet and have multiple local users, this is n/a.

    \u2705 Natively Supported | \u274e Supported | \ud83d\udfe8 Usable | \u274c Not Supported
    • App has OIDC support that works with a third-party provider, e.g. Audiobookshelf, Portainer
    • App lacks advanced OIDC but documents other ways to integrate SSO for users, e.g. Filebrowser, Navidrome
    • A user signing in via SSO is mapped to an existing user with the same name, or the user is created if it does not exist
    • App fully integrates with ALL third-party services and mobile/desktop apps flawlessly even after enabling SSO/2FA
    • Authelia whitelist rules can easily be created to restore full functionality of the app (e.g. API, public portions) without compromising security where Authelia is needed
    • App does not provide native integration with third-party sign-in providers, but has an option to fully disable internal authentication in favor of Authelia, e.g. Radarr, Nzbget
    • App does not have internal authentication, e.g. Memos, jlesage VNC
    • With Authelia added (or replacing internal authentication), the app still integrates with ALL third-party services and mobile/desktop apps flawlessly even after enabling SSO/2FA
    • Authelia whitelist rules can easily be created to restore full functionality of the app (e.g. API, public portions) without compromising security where Authelia is needed
    • The above only applies to single-user apps; if a multi-user app does not natively support a third-party SSO provider, Authelia is unable to pass through the correct user
    • Apps that have removable authentication, or no authentication, to which Authelia can be added
    • The only logical way to access the app is via a web browser, where Authelia is fully supported
    • Accessing the app via third-party services is restricted to LAN only or behind a VPN, where Authelia is not relevant, e.g. Nginx Proxy Manager, Teslamate
    • The app has internal authentication that cannot be disabled or integrated with Authelia
    • After installing Authelia, the only way to use the app is via a web browser; third-party integrations and mobile/desktop apps no longer function even with whitelisting rules, e.g. Jellyfin, Home Assistant
    • Using whitelist rules to restore functionality with third-party apps would compromise security where Authelia is needed
    • No workaround makes both SSO and third-party integrations possible
    "},{"location":"Docker%20Apps/02-docker-ratings/#existing-fs","title":"Existing-FS","text":"

    Existing filesystem structures: the app does not require a folder structure only it can use, is able to use an existing structure as is, and lets the user keep their workflow when switching to this app. (This section is incomplete; more updates needed)

    • config: the files that govern how an app behaves, e.g. configuration.yaml, app.conf
    • media: videos, photos, documents or other files the user wants the app to manage
    \u2705 Yes | \ud83d\udfe8 Partially | \u274c No
    • App works with a bind mount to a host path that other processes can also access, without conflicting with those processes
    • App does not modify existing file structures or permissions
    • User can import/export/edit data stored in the app (both configs and media) freely, with or without the app, e.g. Jellyfin, Filebrowser
    • User can move relatively freely to a similar app
    • (To be updated)
    • App stores its data (both config and media) in encrypted blobs, a proprietary format, or a database only the app can read
    • App modifies the existing file structure in order to work, and the permissions it needs are incompatible with other workflows; refer to U/GID
    • The only way to import/export/edit data is via the app, making it difficult to use another workflow
    "},{"location":"Docker%20Apps/02-docker-ratings/#portable","title":"Portable","text":"

    The portability of an app refers to how easy it is to migrate, back up and restore its config. If the frequency of backup/restore is irrelevant, or no persistent data is needed because the app runs entirely via docker-compose, it is n/a.

    \u2705 Yes | \ud83d\udfe8 Partially | \u274c No
    • The app works on another machine simply by copying the bind mount to it
    • If U/GID is not supported and a named volume is used, copying the volume with various tools transfers the app to the new machine
    • If the app uses a database, it still works after copying the bind path or volume to the new machine; if not, a repeatable, documented way to dump and import the database is provided so the app transfers smoothly
    • After migration, zero user intervention is needed and the app functions exactly the same
    • The app does not work by simply copying over the persistent data, but only a quick user intervention is needed, e.g. a backup/restore feature in the WebUI
    • Data migration works, but may require complex scripts or other dependencies that make scripting harder
    • The app cannot be migrated or restored by simply copying the files; it stops working
    • The backup process is difficult and often fails
    • Even with a migration, heavy user intervention is needed for the app to function the same, if that is possible at all
    "},{"location":"Docker%20Apps/02-docker-ratings/#mobile","title":"Mobile","text":"

    Mobile refers to the mobile apps section; this rating measures the quality of the mobile integration (only Android tested), since a mobile app can offer more functionality than a website.

    \u2705 Great | \u2714 App Present/PWA | \u274c Not Mobile Friendly
    • The app has a mobile app on the app store or as an APK, either from the developer itself or as a viable, well-maintained third-party app
    • The mobile app enhances the experience and offers better usability than a web browser
    • The mobile app offers deep integration with Android or other apps via widgets, controls and intents where necessary (e.g. Audiobookshelf, Home Assistant, Jellyfin, share icon to and from the app)
    • The app's website has a mobile-friendly layout usable as a progressive web app, and the web app offers functionality equivalent to its desktop counterpart
    • The app in question is basic and all its functions are supported via a website, without deep system integration (e.g. a display-only dashboard app)
    • An app is given a * rating if it has no mobile app or PWA support but is mobile friendly when opened in a traditional mobile browser
    • The app either has no mobile-friendly website/app, or its mobile counterpart is so unusable that much desktop functionality is lost (e.g. Grafana, webtop)
    "},{"location":"Docker%20Apps/bookstack/","title":"Bookstack","text":""},{"location":"Docker%20Apps/bookstack/#installation","title":"Installation","text":"

    Change port to 6975

    Add in docker-compose: restart: unless-stopped

    $docker directory = /home/docker .... etc

    Docker-Compose file reference

    https://github.com/solidnerd/docker-bookstack/blob/master/docker-compose.yml

    version: '2'\nservices:\n  mysql:\n    image: mysql:8.0\n    environment:\n\n    - MYSQL_ROOT_PASSWORD=secret\n    - MYSQL_DATABASE=bookstack\n    - MYSQL_USER=bookstack\n    - MYSQL_PASSWORD=secret\n    volumes:\n    - mysql-data:/var/lib/mysql\n    restart: unless-stopped\n\n  bookstack:\n    image: solidnerd/bookstack:22.10.2\n    depends_on:\n\n    - mysql\n    environment:\n    - DB_HOST=mysql:3306\n    - DB_DATABASE=bookstack\n    - DB_USERNAME=bookstack\n    - DB_PASSWORD=secret\n    # set APP_URL to the URL of bookstack without a trailing slash, e.g. APP_URL=https://example.com\n    - APP_URL=http://xxx.xxxmydomainxxx.duckdns.org\n    volumes:\n    - $docker/public-uploads:/var/www/bookstack/public/uploads\n    - $docker/storage-uploads:/var/www/bookstack/storage/uploads\n    ports:\n    - \"6975:8080\"\n    restart: unless-stopped\n

    Notice: The default login for bookstack is

    admin@admin.com

    password

    Permissions: remember to set write permission on the public-uploads folder so users can upload photos.
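A sketch of that permission fix (a temp dir stands in for $docker/public-uploads; the www-data UID 33 is an assumption about the image, so verify with `docker exec bookstack id` before relying on it):

```shell
UPLOADS="${UPLOADS:-$(mktemp -d)/public-uploads}"   # stands in for $docker/public-uploads
mkdir -p "$UPLOADS"
chmod -R 775 "$UPLOADS"        # group-writable so uploads succeed
# chown -R 33:33 "$UPLOADS"    # uncomment on the real host (www-data; needs root)
```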

    "},{"location":"Docker%20Apps/bookstack/#backup-and-restore","title":"Backup and Restore","text":"

    Files Backup:

    tar -czvf bookstack-files-backup.tar.gz public-uploads storage-uploads\n

    Restore:

    tar -xvzf bookstack-files-backup.tar.gz\n

    Database backup:

    sudo docker exec bookstack_mysql_1 /usr/bin/mysqldump -u root --password=secret bookstack > ./bookstack/bookstack_db.sql\n

    Restore:

    sudo docker exec -i bookstack_mysql_1 mysql -u root --password=secret bookstack < /$docker/bookstack/bookstack_db.sql\n
    • bookstack_mysql_1 is the container name
    • secret is the database password; replace it accordingly
    "},{"location":"Docker%20Apps/bookstack/#reverse-proxy","title":"Reverse Proxy","text":"

    Use subdomain in proxy manager.

    Backing Up and Restoring with LinuxServer.io container

    Due to limits of the Oracle Cloud free tier, the only ARM image is the LinuxServer.io container, which differs from the solidnerd image.

    Docker-Compose file

    version: \"2\"\nservices:\n  bookstack:\n    image: lscr.io/linuxserver/bookstack\n    container_name: bookstack\n    environment:\n\n      - PUID=1001\n      - PGID=1001\n      - APP_URL=https://wiki.xxx.duckdns.org\n      - DB_HOST=bookstack_db\n      - DB_USER=bookstack\n      - DB_PASS=secret\n      - DB_DATABASE=bookstackapp\n    volumes:\n      - /home/ubuntu/bookstack:/config\n    ports:\n      - 6975:80\n    restart: unless-stopped\n    depends_on:\n      - bookstack_db\n\n  bookstack_db:\n    image: lscr.io/linuxserver/mariadb\n    container_name: bookstack_db\n    environment:\n\n      - PUID=1001\n      - PGID=1001\n      - MYSQL_ROOT_PASSWORD=secret\n      - TZ=Europe/London\n      - MYSQL_DATABASE=bookstackapp\n      - MYSQL_USER=bookstack\n      - MYSQL_PASSWORD=secret\n    volumes:\n      - /home/ubuntu/bookstack:/config\n    restart: unless-stopped\n

    Notice: In the Oracle Cloud free tier, the default ubuntu user is 1001, not 1000. The database name is bookstackapp; keep that in mind when executing the restore command. The folder structure is also different: in the solidnerd container, images are stored at /public-uploads, while in the LSIO container they are stored at /www/uploads

    "},{"location":"Docker%20Apps/bookstack/#backing-up-from-home-pc","title":"Backing Up (from home PC)","text":"

    Images

    cd into /public-uploads and make a tar archive

    tar -czvf images.tar.gz images\n

    Backup the database

    sudo docker exec bookstack_mysql_1 /usr/bin/mysqldump -u root --password=secret bookstack > ./bookstack_db.sql\n

    Transfer to Oracle Cloud Server

    scp -i oracle-arm-2.key images.tar.gz bookstack_db.sql ubuntu@$IPADDR:/home/ubuntu/bookstack/www/uploads\n

    Take into consideration the location where the LSIO image stores the images.

    "},{"location":"Docker%20Apps/bookstack/#restore-into-oracle-cloud","title":"Restore (into Oracle Cloud)","text":"

    Images (/home/ubuntu/bookstack/www/uploads)

    tar -xvzf images.tar.gz\n

    Database

    The image URLs in the database still refer to the old server URL and need to be changed. The following command replaces the subdomain in the SQL dump.

    sed -i 's/wiki.$home.duckdns.org/wiki.$oracle.duckdns.org/g' bookstack_db.sql\n

    Restore the database.

    sudo docker exec -i bookstack_db mysql -u root --password=secret bookstackapp < /home/ubuntu/bookstack/www/uploads/bookstack_db.sql\n
    "},{"location":"Docker%20Apps/bookstack/#crontab","title":"Crontab","text":"

    On Home PC

    0 23 * * 2,5 /home/karis/bookstack.sh\n
    #!/bin/bash\n\ncd ~/docker/bookstack/public-uploads #location of bookstack public uploads\ntar -czvf images.tar.gz images\nsudo docker exec bookstack_mysql_1 /usr/bin/mysqldump -u root --password=secret bookstack > ./bookstack_db.sql\nscp -i oracle-arm-2.key images.tar.gz bookstack_db.sql ubuntu@$ORACLEIP:/home/ubuntu/bookstack/www/uploads\n

    Make sure to copy oracle-arm-2.key to the appropriate location (~/docker/bookstack/public-uploads)

    Also make sure oracle-arm-2.key has the correct permissions (600), and in particular change the permissions of the public-uploads folder to allow write access.
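The key permission fix can be sketched as follows (a temp file stands in for oracle-arm-2.key):

```shell
KEY="${KEY:-$(mktemp)}"   # stands in for ~/docker/bookstack/public-uploads/oracle-arm-2.key
chmod 600 "$KEY"          # ssh/scp refuse keys readable by others
stat -c '%a %n' "$KEY"
```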

    Run the backup sequence in crontab at 11pm every Tuesday and Friday.

    Oracle Cloud Server

    0 8 * * 3,6 /home/ubuntu/bookstack.sh\n
    #!/bin/bash\n\ncd ~/bookstack/www/uploads #directory where bookstack files scp from home are located\ntar -xvzf images.tar.gz\nsed -i 's/wiki.$homeip.duckdns.org/wiki.$oracle.duckdns.org/g' bookstack_db.sql\nsudo docker exec -i bookstack_db mysql -u root --password=secret bookstackapp < /home/ubuntu/bookstack/www/uploads/bookstack_db.sql\n

    Run the restore sequence after the backup, every Wednesday and Saturday at 8am (take into account the time zone difference between Vancouver, Edmonton and Toronto, or whichever time zone the remote server uses)
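GNU date can translate the home machine's 23:00 slot into the remote server's local time before picking the restore hour; America/Toronto is used here only as an example remote zone:

```shell
# 23:00 Pacific on 2024-01-02 expressed in Eastern time (GNU date's TZ="..." idiom)
remote_hour=$(TZ="America/Toronto" date -d 'TZ="America/Vancouver" 2024-01-02 23:00' +%H)
echo "23:00 in Vancouver is ${remote_hour}:00 in Toronto"
```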

    "},{"location":"Docker%20Apps/ddns-update/","title":"Dynamic DNS Updater Docker","text":"

    Official Image: https://hub.docker.com/r/linuxserver/duckdns Custom Github Page: https://github.com/vttc08/docker-duckdns-dynu

    This docker container automatically updates the server's public IPv4 address every 5 minutes with the dynamic DNS services Dynu and DuckDNS. It is a fork of the LinuxServer DuckDNS container.

    "},{"location":"Docker%20Apps/ddns-update/#docker-compose","title":"Docker Compose","text":"
      services:\n      duckdns:\n        image: vttc08/docker-duckdns-dynu:latest\n        container_name: duckdns\n        env_file: ddns.env\n        environment:\n\n          - TZ=America/Vancouver\n          - PUID=1000\n          - PGID=1001\n        restart: unless-stopped\n

    These need to be filled in the ddns.env

    DYNU_HOST= # full name of dynu domains\nDYNU_PASS= # md5 hashed dynu login pass\nSUBDOMAINS= # DuckDNS domains without the duckdns.org part\nTOKEN= # DuckDNS token \n

    • token will be visible in DuckDNS dashboard
    • the Dynu pass is the same as the login password; alternatively, it is possible to create a dedicated password just for IP updates. MD5 generator:
      echo -n \"password\" | md5sum\n
    • when the IP is set to 10.0.0.0 via the Dynu update API, Dynu automatically updates the record to the IP address making that request
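Hashing the password and assembling the update request can be sketched as below; `api.dynu.com/nic/update` and the hostname are my assumptions for illustration, so check Dynu's current API docs before using them:

```shell
# MD5-hash the Dynu password (Dynu accepts the hash, per the note above)
PASS_MD5=$(printf '%s' 'password' | md5sum | cut -d' ' -f1)
# myip=10.0.0.0 tells Dynu to use the requester's address (endpoint is an assumption)
UPDATE_URL="https://api.dynu.com/nic/update?hostname=example.dynu.net&myip=10.0.0.0&password=${PASS_MD5}"
echo "$UPDATE_URL"
```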
    "},{"location":"Docker%20Apps/ddns-update/#other-usage","title":"Other Usage","text":"

    • docker restart duckdns manually runs an IP update
    • docker exec -it duckdns /app/debug.sh runs the debug script (or other scripts); the debug script prints the IP addresses of the subdomains as resolved by Cloudflare

    "},{"location":"Docker%20Apps/epic-games-free-games/","title":"Epic Games Free Games","text":"

    Claim free games from Epic Games automatically

    https://hub.docker.com/r/charlocharlie/epicgames-freegames

    Config

    NEED TO CHANGE

    Email: email address

    Password: password

    Webhook URL: make a discord channel and click settings. Go to Integrations, then Webhooks, and copy the webhook URL.

    Mentioned Users: right-click your profile and click Copy ID

    TOTP

    1. Log in at https://www.epicgames.com/account/password with your Epic Games account.
    2. Click \u201cenable authenticator app.\u201d
    3. In the section labeled \u201cmanual entry key,\u201d copy the key.
    4. Use your authenticator app to scan the QR code.
    5. Activate 2FA by completing the form and clicking activate.
    6. Once 2FA is enabled, use the key you copied as the value for the TOTP parameter.

    Docker

    docker run -d -v /home/karis/docker/epicgames:/usr/app/config:rw -p 3000:3000 -m 2g --name epicgames --restart unless-stopped charlocharlie/epicgames-freegames:latest\n

    Change the name of the container to a friendly name. restart: unless-stopped makes it restart automatically.

    Copy and Paste

    The default json configuration is located at /home/karis/docker/epicgames or $HOME/docker/epicgames.

    Fix Login Issue Using Cookies

    https://store.epicgames.com/en-US/

    1. Visit this site and make sure it\u2019s logged in.
    2. Install this extension EditThisCookie https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg/related
    3. Open the extension and change the URL to epicgames.com/id
    4. Export the cookie

    1. Go to $HOME/docker/epicgames and create a new file email@gmail.com-cookies.json
    2. If the json file is already there, truncate it with truncate --size 0
    3. Paste the cookie value to the json file
    4. Restart container.
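Steps 1-3 can be sketched as shell commands (a temp dir stands in for $HOME/docker/epicgames):

```shell
CONF="${CONF:-$(mktemp -d)}"                  # stands in for $HOME/docker/epicgames
COOKIE="$CONF/email@gmail.com-cookies.json"
echo '[{"stale":"cookie"}]' > "$COOKIE"       # pretend an old export exists
truncate --size 0 "$COOKIE"                   # the "--size 0" truncation from step 2
# now paste the fresh EditThisCookie export into "$COOKIE" and restart the container
```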

    Update

    docker pull charlocharlie/epicgames-freegames:latest\ndocker rm -f epicgames\ndocker images | grep epicgames\n# use docker rmi to remove the corresponding image \n# re-run the epicgames docker run command\n
    "},{"location":"Docker%20Apps/filebrowser/","title":"Filebrowser","text":"

    Filebrowser serves files in a web browser on port 4455.

    Docker-compose deployment

    version: '3.9'\nservices:\n    filebrowser:\n        container_name: filebrowser\n        image: filebrowser/filebrowser\n        ports:\n\n            - '4455:80'\n        user: 1000:1000\n        restart: unless-stopped\n        volumes:\n            - '~/docker/filebrowser/.filebrowser.json:/.filebrowser.json'\n            - '~/docker/filebrowser/filebrowser.db:/database.db'\n            - '~/docker/filebrowser/branding:/branding'\n            - '~/docker:/srv/docker'\n            - '/mnt/data:/srv/data'\n            - '/mnt/nvme/share:/srv/nvme-share'\n

    The first 3 bind mounts are for filebrowser's configuration: the config, database and branding files. On first deployment, an empty database.db file needs to be created. The remaining bind mounts are the folders to be accessed; they should be bound under /srv. Filebrowser by default creates a volume under /srv; in this setup, where folders are bind mounted to subfolders of /srv and nothing is mounted at /srv directly, Docker creates a dedicated volume just for /srv, which is unavoidable.
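The first-deployment preparation can be sketched as follows (a temp dir stands in for ~/docker/filebrowser):

```shell
FB_DIR="${FB_DIR:-$(mktemp -d)}"   # stands in for ~/docker/filebrowser
mkdir -p "$FB_DIR/branding"
touch "$FB_DIR/filebrowser.db"     # must exist (empty) before `docker compose up`,
                                   # otherwise Docker creates a directory at the mount path
```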

    This is the content of .filebrowser.json

    {\n    \"port\": 80,\n    \"baseURL\": \"\",\n    \"address\": \"\",\n    \"log\": \"stdout\",\n    \"database\": \"/database.db\",\n    \"root\": \"/srv\"\n  }\n
    "},{"location":"Docker%20Apps/filebrowser/#usershare","title":"User/Share","text":"

    User and share management in filebrowser is simple. Shares have an expiry time and can optionally have a password. The recipient can view and download files in the share but cannot upload.

    To create a new user, go to Settings -> User Management, add a user and password, and give appropriate permissions. The scope is the root folder the user has access to; since the docker data folder is bound at /srv/docker and /srv is defined as the root folder in the config, the folder name to put in scopes would be /docker. Only one scope is allowed.

    It is also possible to add rules to restrict user access to files within a scope. Under rules, enter the path relative to the scope; for example, for a user scoped to /docker/minecraft, the full path /docker/minecraft/config would be entered as /config

    "},{"location":"Docker%20Apps/filebrowser/#personalization","title":"Personalization","text":"

    Enable dark theme - Setting -> Global Settings -> Branding

    • also change the branding directory path to /branding, which is bind mounted in docker

    Under the branding folder, create a file custom.css which is used for CSS customization. Then create a folder img and place logo.svg in it for a custom icon. The icon is the same as egow entertainment and is stored in the OliveTin icon PSD file. Under the img folder, create a folder icons, use a favicon generator site to create an icon archive, and put all the contents of that archive in the icons folder.

    Reverse Proxy/Homepage

    Reverse proxying follows the normal procedure with NPM. To bookmark a file location, use the browser's or homepage's bookmark function.

    "},{"location":"Docker%20Apps/fireshare/","title":"Fireshare","text":"Docker Apps Rating: U/GID \u274e | TZ \u2705* | SSO/Users \u274c\ud83e\udd35 | Existing FS \u2705 | Portable \u2705 | Subfolder \u274c | Mobile \u2714"},{"location":"Docker%20Apps/fireshare/#configuration","title":"Configuration","text":"
    services:\n  fireshare:\n    image: shaneisrael/fireshare:develop\n    container_name: fireshare\n    environment:\n      - MINUTES_BETWEEN_VIDEO_SCAN=30\n      - PUID=1000\n      - PGID=1001\n    env_file:\n      - .env # admin password\n    volumes:\n      - ~/docker/fireshare/data:/data:rw\n      - ~/docker/fireshare/processed:/processed:rw\n      - /mnt/nvme/share/gaming:/videos:rw\n    networks:\n      public:\n    ports:\n      - 8080:80\n    restart: unless-stopped\n\nnetworks:\n  public:\n    name: public\n    external: true\n
    "},{"location":"Docker%20Apps/fireshare/#environments","title":"Environments","text":"

    Content of .env

    ADMIN_PASSWORD=\nDOMAIN=\n
    Set up the user and group IDs accordingly; more environment options are available at https://github.com/ShaneIsrael/fireshare/wiki/Fireshare-Configurables

    "},{"location":"Docker%20Apps/fireshare/#other","title":"Other","text":"

    The software can also be configured via config.json, located at /data/config.json; its configuration matches the WebUI.
    • Default Video Privacy: false - all videos are publicly viewable without sharing manually
    • Default Video Privacy: false - public users cannot upload videos
    • Sharable Link Domain - the link fireshare appends to when sharing files
    • Upload Folder - the folder created in the /videos directory when a file is uploaded

    "},{"location":"Docker%20Apps/fireshare/#usage","title":"Usage","text":"

    By default, anyone can view videos, while the admin can share links; the links show a preview and are viewable in Discord. The admin can also upload directly in the web interface. All uploaded files are located in /videos/uploads

    • when uploading files through the filesystem with a changed date via touch, the changed date is also reflected in the app
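That touch workflow can be sketched as follows (the file path and date are illustrative):

```shell
CLIP="${CLIP:-$(mktemp -d)/clip.mp4}"
: > "$CLIP"                                   # pretend this is an uploaded recording
touch -d '2023-01-01 00:00:00 UTC' "$CLIP"    # stamp the original recording date
```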
    "},{"location":"Docker%20Apps/fireshare/#workflow","title":"Workflow","text":"

    https://github.com/vttc08/fireshare-import - refer to this GitHub repo for setup. For personal documentation:

    • set up the project directory in ~/Documents/Projects
    "},{"location":"Docker%20Apps/free-games-claimer/","title":"Free Games Claimer","text":"

    https://github.com/vogler/free-games-claimer

    This is the GitHub repo for the new and more advanced free games claimer, adopted after Epicgames FreeGames kept failing.

    "},{"location":"Docker%20Apps/free-games-claimer/#configuration","title":"Configuration","text":"

    Using Docker-Compose

    In the folder structure

    server: ~/docker/fgc$\ndocker-compose.yml\nfgc.env\n

    fgc.env is the environment file holding all the passwords/keys to log in to the different game services; fill it in manually or restore from a backup.

    EG_OTPKEY=\nEG_EMAIL=\nEG_PASSWORD=\nNOTIFY=discord://123456/ABCD\nPG_EMAIL=\nPG_PASSWORD=\nGOG_EMAIL=\nGOG_PASSWORD=\nTIMEOUT=300\n

    NOTIFY=discord://123456/ABCD if the webhook looks like this https://discord.com/api/webhooks/123456/ABCD
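That transformation is just a prefix swap, sketched here with a dummy webhook URL:

```shell
WEBHOOK="https://discord.com/api/webhooks/123456/ABCD"   # dummy URL
NOTIFY="discord://${WEBHOOK#*/api/webhooks/}"            # drop the https prefix, add discord://
echo "$NOTIFY"
```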

    TIMEOUT=300 sets the timeout to 300s before the container skips and errors out due to EpicGames captcha problems. The impact on Prime Gaming and GOG is untested, however.

    docker-compose.yml

    services:\n  free-games-claimer:\n    container_name: FGC # is printed in front of every output line\n    image: ghcr.io/vogler/free-games-claimer # otherwise image name will be free-games-claimer-free-games-claimer\n    build: .\n    ports:\n\n      - \"5990:5900\" # VNC server\n      - \"5890:6080\" # noVNC (browser-based VNC client)\n    volumes:\n      - ~/docker/fgc:/fgc/data\n      - ~/docker/fgc/epic-games.js:/fgc/epic-games.js\n      - ~/docker/fgc/prime-gaming.js:/fgc/prime-gaming.js\n      - ~/docker/fgc/gog.js:/fgc/gog.js\n    command: bash -c \"node epic-games; node prime-gaming; node gog; echo sleeping; sleep 1d\"\n    env_file:\n      - fgc.env\n    restart: unless-stopped\n

    This docker-compose file uses the environment file fgc.env as indicated above and runs the claimers once every day. It also contains a VNC server and a web-based VNC client.

    "},{"location":"Docker%20Apps/free-games-claimer/#missing-captcha-session","title":"Missing Captcha Session","text":"

    This should no longer be needed. Edit the corresponding line in the epic-games.js code and replace it with the following, so that when the captcha is missed a notification is sent for manual claiming.

    await notify(`epic-games: got captcha challenge right before claim. Use VNC to solve it manually. Game link: \\n ${url}`)\n

    EpicGames requires a captcha to claim free games. If the 5-minute timeout window is missed, the games cannot be claimed until the next day, and given the nature of discord notifications there is a slim-to-none chance of catching the captcha then. To continue claiming after acknowledging the missed session, use Portainer or ConnectBot on Android to temporarily restart the container and restore the VNC session.

    To restore a predictable claiming time (e.g. waking up on Thursday or Friday at a predictable time to claim games), use the linux at command; install at with apt first.

    at 9:20\n> docker restart FGC\n> <EOT>\n

    This will run the command at 9:20 AM the next day. Press Ctrl-D to exit the at prompt, and verify the scheduled time is correct.

    "},{"location":"Docker%20Apps/jlesage-vnc-apps/","title":"jlesage VNC Apps","text":"

    VNC apps are desktop applications whose GUI is served in a web browser, mostly from the creator jlesage.

    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#environments","title":"Environments","text":"

    Apps from jlesage support a common set of environment variables. Create an environment file called vnc.env

    The environment file can be referenced in many docker images from jlesage using docker-compose. The environment variables below specify the U/GID and time zone, and make every app dark mode. It is also possible to set VNC passwords. This is the full list of environment variables. For supported apps such as avidemux, the option WEB_AUDIO=1 allows audio to work.

    USER_ID=1000\nGROUP_ID=1001\nTZ=America/Vancouver\nDARK_MODE=1\nKEEP_APP_RUNNING=1\n

    The jlesage apps expose 2 ports: port 5800 for viewing the VNC app in a web browser on desktop, and port 5900 for the VNC protocol, which can be used in a dedicated VNC viewer or for mobile viewing.

    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#general-bind-mounts","title":"General Bind Mounts","text":"

    The appdata bind mount is located in ~/docker/vnc; as seen from the yml example, the vnc environment file vnc.env is placed in the appdata folder. For applications requiring access to movie storage, the bind mount is on the corresponding hard drive or pool. For applications requiring access to storage but not large media, it\u2019s best to put the files on an SSD.

    This is an example of VNC container of MKVToolNix. The vnc.yml file is backed up elsewhere.

        mkvtoolnix:\n        image: jlesage/mkvtoolnix\n        env_file:\n\n            - ./vnc/vnc.env\n        volumes:\n            - '/mnt/data/nzbget:/storage:rw'\n            - '~/docker/vnc/mkvtoolnix:/config:rw'\n        ports:\n            - '5820:5800'\n            - '5920:5900'\n        container_name: mkvtoolnix\n
    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#ports","title":"Ports","text":"

    The application ports start at 5800/5900 for web and VNC access respectively, and are allocated in steps of 10 for each application.
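    As a quick sanity check, the scheme above can be computed in the shell (the choice of N and the example app are illustrative):

```shell
# Port scheme sketch: an app N steps from the base gets
# web port 5800+10*N and VNC port 5900+10*N
N=2  # e.g. MKVToolNix, two steps from the base
echo "web=$((5800 + 10*N)) vnc=$((5900 + 10*N))"
```

This matches the 5820:5800 / 5920:5900 mapping used for MKVToolNix above.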

    • for apps with high idle CPU or RAM, it\u2019s best to run the app on-demand and close it when not used
    | App | Port | Dialog | Idle CPU | RAM | Additional Config |
    | --- | --- | --- | --- | --- | --- |
    | JDownloader | 5800 | jdownloader | | | |
    | Firefox | 5810 | | | | |
    | MKVToolNix | 5820 | gtk | | | |
    | MKVCleaver | 5840 | QT | High | | |
    | MegaBasterd | 5860 | | | | Github |
    | MCASelector | 5870 | | High | High | Github |
    | Avidemux | 5880 | QT | Med | Med | WEB_AUDIO=1 |
    "},{"location":"Docker%20Apps/jlesage-vnc-apps/#files","title":"Files","text":"

    /config is the directory where app configuration is stored and should have the correct permissions; there are additional bind mounts such as /storage, which is the default file-chooser location for some containers.

    • any directory from the host can be bind-mounted to anything in the container; however, if a directory is not created on the host and the container has to create it, it may end up owned by root
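    A small sketch of the workaround, using illustrative /tmp paths in place of the real ~/docker appdata folders: pre-create the directory on the host before starting the container so it is owned by the current user.

```shell
# Pre-create the bind-mount directory on the host so the container
# does not create it (and own it) as root; paths are illustrative
DIR=/tmp/docker-demo/vnc/mkvtoolnix
mkdir -p "$DIR"
chown "$(id -u):$(id -g)" "$DIR"
ls -ld "$DIR"
```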

    QT Based Apps that use the QT file explorer (eg. Avidemux) have the configuration stored in ${APP_CONFIG}/xdg/config/QtProject.ini; this is used to set up file explorer shortcuts.

    [FileDialog]\nshortcuts=file:, file:///config, file:///storage, file:///mnt/data/nzbget, file:///mnt/data, file:///mnt/data2\n

    GTK Based Apps that use the GTK file explorer (eg. MCASelector) have the configuration stored in ${APP_CONFIG}/xdg/config/gtk-3.0/bookmarks; this is used to set up file explorer shortcuts.

    file:///world, file:///storage\n
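    A minimal sketch of writing that bookmarks file, assuming a hypothetical /tmp config root in place of ${APP_CONFIG}. The entries are shown comma-separated above for brevity; GTK itself stores one URI per line.

```shell
# Write GTK file-chooser bookmarks, one URI per line
# (config root is illustrative)
CONF=/tmp/demo-appdata/xdg/config/gtk-3.0
mkdir -p "$CONF"
printf 'file:///world\nfile:///storage\n' > "$CONF/bookmarks"
cat "$CONF/bookmarks"
```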

    There is also some application-specific setup. For applications accessing hard drives or otherwise intensive apps, it is best to stop them when not used. Lazytainer, ContainerNursery, and possibly a DNS server can help automate this.

    "},{"location":"Docker%20Apps/tesla-homepage/","title":"Tesla Homepage","text":"

    This is a homepage that allows Tesla browser to enter full screen mode.

    Docker-compose

    services:\n  homepage-for-tesla:\n    image: jessewebdotcom/homepage-for-tesla:latest\n    container_name: homepage-for-tesla\n    environment:\n\n      - DEFAULT_THEME=13\n    volumes:\n      - ~/docker/tesla/public/bookmarks.json:/app/public/bookmarks.json\n      - ~/docker/tesla/public/images:/app/public/images\n    ports:\n      - \"3000:3000\"\n
    "},{"location":"Docker%20Apps/webtop/","title":"Webtop (openbox-ubuntu)","text":"
    version: \"2.1\"\nservices:\n  webtop:\n    image: lscr.io/linuxserver/webtop:amd64-ubuntu-openbox\n    container_name: webtop-openbox\n    security_opt:\n\n      - seccomp:unconfined #optional\n    environment:\n      - PUID=1000\n      - PGID=1001\n      - TZ=America/Vancouver\n      - SUBFOLDER=/ # For reverse proxy\n      - TITLE=WebtopMate # The title as it shown in browser\n    volumes:\n      - ~/docker/webtop/config:/config # default home folder\n      - /mnt/data:/mnt/data\n      - /var/run/docker.sock:/var/run/docker.sock # Run docker inside docker\n    ports:\n      - 3050:3000\n    shm_size: \"1gb\" #optional\n    restart: unless-stopped\n

    The default installation, even with the config folder copied over, is not usable. Packages to be installed:

    apt update\napt install wget terminator rsync ntp spacefm compton tint2 nitrogen nano lxappearance mousepad unrar unzip xarchiver mono-complete libhunspell-dev p7zip libmpv-dev tesseract-ocr vlc ffmpeg fonts-wqy-zenhei language-pack-zh-hans mediainfo mediainfo-gui\n

    Packages that have to be installed manually: lxappearance, spacefm, tint2, nitrogen

    Desktop (tint2, nitrogen)

    • nitrogen does not keep the scaled option after restarting and it needs to be changed manually
    • nitrogen wallpapers are found in /config/Pictures/wallpaper.jpg
    "},{"location":"Docker%20Apps/webtop/#customization","title":"Customization","text":"

    lxappearance

    • theme: Quixotic-blue; location .themes
    • icon: Desert-Dark-icons; location .icons

    tint2
    • tint2 with copied config, located in .config/tint2
    "},{"location":"Docker%20Apps/webtop/#firefox-browser","title":"Firefox Browser","text":"

    policies.json

    // force install ublock, disable annoyances, add bookmarks\n{\n  \"policies\": {\n    \"ExtensionSettings\": {\n      \"uBlock0@raymondhill.net\": {\n        \"installation_mode\": \"force_installed\",\n        \"install_url\": \"https://addons.mozilla.org/firefox/downloads/latest/ublock-origin/latest.xpi\"\n      }\n    },\n    \"NoDefaultBookmarks\": true,\n    \"DisableTelemetry\": true,\n    \"Bookmarks\": [\n      {\n        \"Title\": \"zmk\",\n        \"URL\": \"https://zmk.pw\",\n        \"Placement\": \"toolbar\"\n      },\n      {\n        \"Title\": \"SubHD\",\n        \"URL\": \"https://subhd.tv\",\n        \"Placement\": \"toolbar\"\n      } // Add more bookmarks like this\n    ],\n    \"FirefoxHome\": {\n      \"Search\": true,\n      \"TopSites\": true,\n      \"SponsoredTopSites\": false,\n      \"Pocket\": false,\n      \"SponsoredPocket\": false,\n      \"Locked\": false\n    }\n  }\n}\n

    • it is not possible to back up bookmarks on the pinned menu via policies (the only way is to restore from the home folder)
    • it\u2019s not possible to remove the import bookmarks and getting started bookmarks with policies.json as documented here; they have to be removed manually

    Manual Configs
    • ublock add Chinese filter
    • pin bookmarks
    • remove default bookmarks and getting started from toolbar
    "},{"location":"Docker%20Apps/webtop/#files","title":"Files","text":"

    SpaceFM

    • upon installing, with config copied over, everything works fine
    • configuration is stored in ~/.config/spacefm

    Movie-Renamer Script

    • works after copying
    "},{"location":"Docker%20Apps/webtop/#subtitles","title":"Subtitles","text":""},{"location":"Docker%20Apps/webtop/#subtitle-edit","title":"Subtitle Edit","text":"

    Install dependencies, then download subtitle-edit

    curl -s https://api.github.com/repos/SubtitleEdit/subtitleedit/releases/latest | grep -E \"browser_download_url.*SE[0-9]*\\.zip\" | cut -d : -f 2,3 | tr -d \\\" | wget -qi - -O SE.zip\nunzip SE.zip -d /config/subtitle-edit\n
    The Subtitle-Edit dark theme has to be enabled manually

    • Options -> Settings -> Appearance -> Use Dark Theme
    • Options -> Settings -> Syntax Coloring -> Error color and change to 27111D
    • Options -> Settings -> Appearance -> UI Font -> General and change to WenQuanYi Zen Hei
    "},{"location":"Docker%20Apps/Downloading/rutorrent/","title":"RuTorrent","text":"

    The /watched folder allows dropping torrent files for automatic download.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/","title":"Audiobookshelf","text":"

    Audiobooks and podcasts.

    UID/GID

    With newer versions of ABS, the environment variables AUDIOBOOKSHELF_UID and GID are removed; the container now runs as root with no way to change it. If the user flag is used in docker, there will be a permission error on startup.

    Docker-compose, place it in the media apps compose media.yml

    version: \"3.7\"\nservices:\n  audiobookshelf:\n    image: ghcr.io/advplyr/audiobookshelf:latest\n    ports:\n      - 13378:80\n    volumes:\n      - /mnt/m/Audios/audiobooks:/audiobooks # hard drive mount\n      - /mnt/m/Audios/podcasts:/podcasts # hard drive mount\n      - $HOME/audiobookshelf/config:/config\n      - $HOME/audiobookshelf/metadata:/metadata\n    restart: unless-stopped\n\n  audiobookshelf-permfix:\n    container_name: abs-permfix\n    image: ubuntu\n    networks:\n      - public\n    command: bash -c \"chown -R $${PUID}:$${PGID} /mnt; echo sleeping; sleep $${TIME}\"\n    volumes:\n      - /mnt/data/Audios/audiobooks:/mnt/audiobooks # hard drive mount\n      - /mnt/data/Audios/podcasts:/mnt/podcasts # hard drive mount\n      - ~/docker/audiobookshelf/config:/mnt/config\n      - ~/docker/audiobookshelf/metadata:/mnt/metadata\n    environment:\n      - PUID=1000\n      - PGID=1001\n      - TIME=1h\n    restart: unless-stopped\n
    • The change made to the docker-compose includes a permfix service that automatically chowns everything in the audiobookshelf bind mounts
      • mount everything into /mnt
      • change the user and group ID accordingly
    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#usage","title":"Usage","text":"

    To add a library, go to settings, libraries and add the path as mounted in docker.

    Go to Users, change the root password and create a new user. Note: regular users cannot scan libraries; only root can do that.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#adding-media","title":"Adding Media","text":"

    Make sure the contents are in a separate folder and follow a naming scheme like the one below. A cover image can also be added. For smooth playback, the bitrate should be under 128 kbps.

    /audiobooks\n--- ./Author - Book\n---  --- ./Cover.jpg\n---  --- ./book - 001 or book - chapter 1\n---  --- ./book - 002\n---  --- ./book - 003\n
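    The layout above can be created for testing with a few shell commands (the paths under /tmp and the file names are purely illustrative):

```shell
# Build a sample audiobook folder following the naming scheme above
BOOK="/tmp/audiobooks/Author - Book"
mkdir -p "$BOOK"
touch "$BOOK/Cover.jpg" "$BOOK/book - 001.mp3" "$BOOK/book - 002.mp3" "$BOOK/book - 003.mp3"
ls "$BOOK"
```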

    In the WebUI, make sure you are logged in as root. Go to settings, library and scan; it will pick up the newly added media. This is also useful for dealing with unplayable file errors.

    It is also possible to upload via the WebUI. Files uploaded this way are also placed in the audiobooks folder. However, it is not possible to add more files via the web upload once it\u2019s scanned.

    Additional Metadata

    • Cover.jpg - cover image
    • desc.txt - descriptions
    • *.opf - XML library file that contains additional metadata such as title, author etc.

    Vocabulary

    • abridged/unabridged - shortened listening version
    • primary/supplementary ebooks - primary ebooks are

    If the media does not match or has no image, click the edit icon and go to Match; the best result is usually Audible.com.

    If the chapter does not match, chapters can be edited manually. Go to Chapter and Lookup.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#mobile-app","title":"Mobile App","text":"

    https://play.google.com/store/apps/details?id=com.audiobookshelf.app

    The mobile app also has download functionality; however, the directory cannot be changed. The default download location is /Internal Storage/Download/{Podcast or Audiobook}

    The minutes-listened statistic counts actual minutes listened, not minutes of audiobook progress (eg. when playing at faster speed).

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#backuprestore","title":"Backup/Restore","text":"

    In the WebUI, go to Settings > Backups and there will be option for backup/restore. Alternatively, copy the entire appdata folder to another computer.

    "},{"location":"Docker%20Apps/Media%20Apps/audiobookshelf/#scripting-windows","title":"Scripting (Windows)","text":"

    ffmpeg silence detection (for splitting a large audio file into multiple chapters)

    ffmpeg -i input.mp3 -af silencedetect=n=-50dB:d=1.5 -f null -\n
    ffmpeg -i input.mp3 -af silencedetect=n=-50dB:d=1.5 -f null - 2>&1 | findstr \"silence_duration\" | find /c /v \"\"\n

    This will find silence parts below -50dB and duration threshold of 1.5s.

    The second command (Windows cmd only; on Linux use grep -c) counts how many silence parts are detected, which should correlate to the number of chapters.
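    On Linux the same counting is done with grep -c; here the ffmpeg log is simulated with printf (the log lines are illustrative) so the counting pipeline can be seen in isolation:

```shell
# Simulated silencedetect log lines; grep -c counts matching lines
printf '[silencedetect] silence_duration: 1.61\n[silencedetect] silence_duration: 2.05\n' \
  | grep -c silence_duration
# prints 2
```

With a real file, replace the printf with `ffmpeg -i input.mp3 -af silencedetect=n=-50dB:d=1.5 -f null - 2>&1`.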

    Once the optimal duration is set, use split.py.

    ffmpeg command that removes silence from audio

    ffmpeg -i input.mp4 -af silenceremove=stop_periods=-1:stop_duration=4:stop_threshold=-50dB -b:a 96k output.mp3\n
    • stop_duration (threshold duration for removing silence part)
    • stop_periods = -1 (search for the entire audio track)

    Use edge_reader.py to utilize Edge AI reader to read the audiobook if only the pdf book is provided.

    After reading, put all the recorded files and pdf in the project folder and run processing.py twice.

    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/","title":"Jellystat","text":"Docker Apps Rating

    | U/GID | TZ | SSO/Users | Portable | Subfolder |
    | --- | --- | --- | --- | --- |
    | \u274e | \u2705* | \u274c\ud83e\udd35 | \u2705 | \u274c |

    https://github.com/CyferShepard/Jellystat

    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#install","title":"Install","text":"

    Docker Compose (minimum viable setup)

    services:\n  jellystat-db:\n    container_name: jellystat-db\n    image: postgres:15\n    user: 1000:1001\n    env_file:\n      - jellystat.env\n    environment:\n      POSTGRES_DB: 'jellystat'\n      TZ: 'America/Vancouver'\n      PGTZ: 'America/Vancouver'\n    volumes:\n      - ~/docker/jellystat/db:/var/lib/postgresql/data # Mounting the volume\n    restart: unless-stopped\n\n  jellystat:\n    image: cyfershepard/jellystat:latest\n    container_name: jellystat\n    user: 1000:1001\n    env_file:\n      - jellystat.env\n    environment:\n      POSTGRES_IP: jellystat-db\n      POSTGRES_PORT: 5432\n    ports:\n      - \"5050:3000\" #Server Port\n    volumes:\n      - ~/docker/jellystat/app:/app/backend/backup-data # Mounting the volume\n    depends_on:\n      - jellystat-db\n    restart: unless-stopped\n

    The content of jellystat.env

    POSTGRES_USER=jellystat\nPOSTGRES_PASSWORD=\nJWT_SECRET=\n

    • Use both PGTZ and TZ to set the timezone used in logging
    • The environment variable POSTGRES_DB may not work; the default database is jfstat

    The secret can be generated with

      openssl rand -base64 64 | tr -d '\\n'\n
    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#usage","title":"Usage","text":"

    A Jellyfin API key is needed to configure it. The app will show a login/configuration screen. No other configuration is necessary.

    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#backuprestore","title":"Backup/Restore","text":"

    If using bind mounts, simply copy the files in the bind mounts and everything will work on the new machine without issues. No database dumps or other steps are necessary.

    • ensure the username/password/secret in the environments are matching
    "},{"location":"Docker%20Apps/Media%20Apps/jellystat/#reverse-proxysso","title":"Reverse Proxy/SSO","text":"

    The app does not have SSO support and the internal login cannot be disabled, see the github issue. The app does not support subfolders; only subpaths are supported. No special requirements are needed when using Nginx Proxy Manager; if the frontend is in the same network as the proxy, simply jellystat:3000 is enough.

    "},{"location":"Docker%20Apps/Media%20Apps/rich-media/","title":"Rich Media","text":"

    Hello Everyone

    This is a demo consisting of media.

    Some Code

    docker-compose up -d\n
    import os\nimport time\n\nprint(\"hello world\")\n\n# sample values so the demo runs\na, b, c = 1, 1, 2\nif a == b:\n  print(a)\nelif b == c:\n  try:\n    print(c)\n  except Exception:\n    print(c + a)\nelse:\n  print(\"what is the meaning of life\")\n

    More sample media

    Portainer is a software for managing docker containers.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/","title":"Bluemap","text":"Docker Apps Rating

    | U/GID | TZ | SSO/Users | Portable | Subfolder | Mobile |
    | --- | --- | --- | --- | --- | --- |
    | n/a | n/a | \u274e\ud83e\udd35 | n/a | \u2705 | \u2714 |

    https://bluemap.bluecolored.de/wiki/

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#installation","title":"Installation","text":"

    Download bluemap and place it in the minecraft plugin folder; a Docker version is also available.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#configuration","title":"Configuration","text":"

    Config files are located in plugins/Bluemap. Change this line in core.conf so the app functions:

    accept-download: true\n

    • data: \"bluemap\" - the data location is not in the plugins base folder but relative to the base folder of the minecraft docker container
      • the default is located in <docker_mc_folder>/bluemap
    • Default port is 8100, change in webserver.conf
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#resource-pack","title":"Resource pack","text":"

    Add a .zip into plugin/Bluemap/packs. The .zip should have the files in its root folder:

    • .zip -> resource_pack\\ -> [pack.mcmeta, assets ...] not OK
    • .zip -> [pack.mcmeta, assets ...] OK
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#markers","title":"Markers","text":"

    To see the changes, docker attach mcserver, then execute bluemap reload

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#marker-set","title":"Marker Set","text":"

    https://bluemap.bluecolored.de/wiki/customization/Markers.html

    debug-set: {\n    label: \"Debug Set\"\n    toggleable: true\n    default-hidden: false\n    sorting: 1\n    markers: {\n\n    }\n}\n

    • multiple sets can be added in this format
    • label - the name that will appear (debug-set is just an identifier)
    • sorting - the order in which the set will appear
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#html","title":"HTML","text":"

    Marker that shows an HTML element, for example a text label.

    marker-html: {\n    type: \"html\"\n    position: { x: -132, y: 72, z: -202 }\n    label: \"Karis\"\n    html: \"<html code>\"\n    anchor: { x: 0, y: 0 }\n    sorting: 0\n    listed: true\n    min-distance: 50\n    max-distance: 750\n}\n

    • type set to html

    HTML Code

    <div style='line-height: 1em; font-size: 1.2em; color: black; font-weight: bold; background-color: white; transform: translate(-50%, -50%);'>Karis</div>\n
    This HTML code renders bold black text on a white background. For multiline text, just repeat the <div> part.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#line","title":"Line","text":"

    A line marker is a 3D line that can be clicked to show a label or details; the color can be customized.

    line-marker: {\n    type: \"line\"\n    position: { x: -42, y: 70, z: -340 }\n    label: \"Text to Display\"\n    line: [\n        { x: -42, y: 70, z: -340 },\n        { x: 37, y: 90, z: -325 },\n        { x: 102, y: 115, z: -312 }\n    ]\n    line-color: {r: 255, g: 0, b: 0, a: 1}\n    line-width: 3\n    detail: \"HTML code\"\n    max-distance: 1500\n}\n

    • position - the starting position
    • line - array of xyz coordinates (can include starting position)
    • line-color - RGBA value
    • label and detail will both display the name of the line marker
      • setting anything in detail will override label. It is a good idea to set the y value above where the line appears on the map; if a line is covered by a block, that part of the line will not show.
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#poi","title":"POI","text":"

    Marker that can be clicked and shows the label text, with option to add custom icons.

    poi-marker-1: {\n    type: \"poi\"\n    position: { x: 273, y: 62, z: 640 }\n    label: \"Village Marker 1\"\n    icon: \"assets/poi.svg\"\n    max-distance: 400\n}\n

    icon - can be any HTML image type

    • the default icon size is 50px as shown in preview
    • icons must be stored in /blue/web/assets to be used
    • the svg vector type is preferred over png due to the small size constraint
      • svg files created in illustrator need width=\"50px\" height=\"50px\" to work properly
    Weird behavior with dark mode/different browsers

    On Brave browser mobile in dark mode, icons do not show. On Chrome on Windows, markers work, but text styles such as bold do not.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#shape","title":"Shape","text":"

    Flat, 2D only box that covers an area.

    terrain-park: {\n    type: \"shape\"\n    label: \"Example Shape Marker\"\n    position: { x: 186, z: -321 }\n    shape: [\n        { x: 186, z: -321 }\n        { x: 184, z: -374 }\n        { x: 168, z: -368 }\n        { x: 169, z: -316 }\n        { x: 186, z: -308 }\n    ]\n    line-width: 2\n    line-color: { r: 255, g: 0, b: 0, a: 1.0 }\n    fill-color: { r: 200, g: 0, b: 0, a: 0.3 }\n    shape-y: 86\n    max-distance: 1400\n}\n

    • shape - only the x and z values are needed, no height
    • shape-y - the height at which the shape appears
      • if there are blocks above the plane of shape-y, part of the shape will be covered
      • if there are no blocks below the plane of shape-y, the shape will appear floating (refer to the image above)
    • color - has a line and a fill component; a fill with a: less than 1 decreases the opacity
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#render-distance","title":"Render Distance","text":"
    • in flat view, markers with a max-distance below 400 will not show
    • as the view distance increases, the icon/html/line will gradually fade out
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#reverse-proxysso","title":"Reverse Proxy/SSO","text":"

    The reverse proxy and authentication setup for a subdomain is as usual in Nginx Proxy Manager. The app has no built-in authentication, so Authelia SSO is supported.

    "},{"location":"Docker%20Apps/Minecraft/bluemap/#subpath-with-sso","title":"Subpath with SSO","text":"Nginx Proxy Manager / Caddy

    The Custom Locations tab does not work; the location needs to be added manually. Go to Advanced and edit these in the custom Nginx configuration.

    location /map/ {\n    include /snippets/proxy.conf;\n    include /snippets/authelia-authrequest.conf;\n    proxy_pass http://10.10.120.16:8100/;\n  }\n

    • Not tested yet
    "},{"location":"Docker%20Apps/Minecraft/bluemap/#internal-use-only","title":"Internal Use Only","text":"

    For public viewers, these parts are not relevant for setup. This is the setup and guidelines for my specific server.

    Ski Slopes
    • Red - default color
    • Black - default color
    • Green - line-color: {r: 40, g: 255, b: 40, a: 1}
    • Blue - line-color: {r: 0, g: 100, b: 200, a: 1}

    Roads
    • Roads - line-color: {r: 240, g: 220, b: 150, a: 1}

    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/","title":"Minecraft Prep and Install","text":""},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#client-setup-java-online","title":"Client Setup (Java + Online)","text":"
    1. Download Java
    2. Download the latest version of OptiFine.
    3. On the official Minecraft client, go add a new installation and match the version with OptiFine.
    4. Download and try the official version, then install OptiFine with Java.
    5. Under Settings -> Keep the Launcher open while games are running
    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#client-setup-java-offline","title":"Client Setup (Java + Offline)","text":"
    1. Use the client PolyMC to enable offline play.
    2. Go to the right corner, manage accounts and create an offline account.
    3. Click on add an instance and follow the guide.
    4. To install OptiFine, the official launcher is needed first; then download OptiFine
    5. Extract OptiFine; the extracted file should end in _MOD.jar
    6. Open the jar file in WinRAR, then move the files from notch folder into the base folder. Save the jar archive.
    7. Go to PolyMC, right click on the instance, click Edit -> Versions -> Add to minecraft.jar and select the modified OptiFine.
    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#docker-server-setup","title":"Docker Server Setup","text":"

    Docker-compose for minecraft server

    version: \"3.9\"\nservices:\n  minecraft:\n    image: marctv/minecraft-papermc-server:latest\n    restart: unless-stopped\n    container_name: mcserver\n    environment:\n      - MEMORYSIZE=4G\n      - PAPERMC_FLAGS=\"\"\n      - PUID=1000\n      - PGID=1000\n    volumes:\n      - ~/docker/minecraft:/data:rw\n    ports:\n      - 25565:25565\n      - 19132:19132\n      - 19132:19132/udp # geyser\n      - 8100:8100 # bluemap\n    stdin_open: true\n    tty: true\n

    This downloads the latest version of Minecraft; to use another PaperMC version, the image needs to be built from scratch.

    Warning: PaperMC cannot be downgraded, only newer version of PaperMC can be installed after first run.

    git clone https://github.com/mtoensing/Docker-Minecraft-PaperMC-Server\n# go edit the \"ARG version=1.xx.x\" to the correct version\ndocker build -t marctv/mcserver:1.xx.x .\n
    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#folders-and-plugins","title":"Folders and Plugins","text":"

    Plugins are located in folder ./plugins some plugins have .yml files. To update or download plugins, use scp, wget on the server or VSCode.

    The world folder consists of the save data. It is separated into world, nether, the_end.

    Before starting the server, the eula.txt must have eula=true.

    bukkit.yml and spigot.yml in the root folder are configuration files for PaperMC.

    "},{"location":"Docker%20Apps/Minecraft/minecraft-prep-and-install/#rcon-commands","title":"Rcon Commands","text":"

    To access the rcon-cli, use docker attach mcserver; to exit, use Ctrl-P and Q. If using VSCode, the keyboard shortcuts may need to be edited.

    Editing VSCode Shortcut Press Ctrl-Shift-P and search for keyboard shortcut json.

    [\n    {\n        \"key\": \"ctrl+p\",\n        \"command\": \"ctrl+p\",\n        \"when\": \"terminalFocus\"\n    },\n\n    {\n        \"key\": \"ctrl+q\",\n        \"command\": \"ctrl+q\",\n        \"when\": \"terminalFocus\"\n    },\n\n    {\n        \"key\": \"ctrl+e\",\n        \"command\": \"ctrl+e\",\n        \"when\": \"terminalFocus\"\n    }\n\n]\n
    "},{"location":"Docker%20Apps/Minecraft/useful-plugins/","title":"Useful Plugins","text":"

    WorldEdit

    EssentialX

    CoreProtect

    ViaVersions - allows clients on other similar versions to join the server without conflict

    bluemap

    Geyser

    WorldGuard

    "},{"location":"Docker%20Apps/Minecraft/useful-plugins/#offline-modemobile-bedrock","title":"Offline Mode/Mobile Bedrock","text":"

    To allow offline play for the PC version, change server.properties and edit these lines

    enforce-whitelist=false\nonline-mode=false\n
    Refer to Minecraft Prep and Install to install offline client.

    For bedrock compatibility, need the geyser plugin.

    To allow offline play for the bedrock mobile version, go to ./plugins/Geyser-Spigot/config.yml and change these lines. Do not install the floodgate plugin; if it\u2019s installed, remove it. ViaVersions is also needed for mobile play.

    auth-type: offline\nenable-proxy-connections: true\n

    Now clients can play without logging in to Xbox or Java.

    "},{"location":"Docker%20Apps/Web/caddy/","title":"Custom Caddy Lego","text":"

    https://github.com/vttc08/caddy-lego Customized caddy docker container that has Dynu support for wildcard certificates.

    "},{"location":"Docker%20Apps/Web/caddy/#install","title":"Install","text":"

    Create a Docker network specific to publicly accessible containers.

    docker network create public --subnet 172.80.0.0/16\n

    • the Caddy container will have IP address of 172.80.44.3

      services:\n  caddy:\n    image: vttc08/caddy\n    container_name: caddy\n    ports:\n      - 80:80\n      - 443:443\n    volumes:\n      - ~/docker/caddy/Caddyfile:/etc/caddy/Caddyfile\n      - ~/docker/caddy/www:/www\n    env_file:\n      - .env\n    environment:\n      - WHITELIST=${WHITELIST}\n    networks:\n      public:\n        ipv4_address: 172.80.44.3\n    restart: unless-stopped\n\nnetworks:\n  public:\n    external: true\n    name: public\n

    • the volume of caddy follows all other docker apps which is at ~/docker

    • .env file for DYNU_API_KEY which will be used for SSL
    • create a network public with the IP address
    • it is not the best idea to use user: since it may break container function; however, if all the files are present when mounted, Caddy should not change the permissions
    • WHITELIST is an environment variable that contains the IP addresses that are exclusively allowed on certain services
      • this can be created in ~/.bashrc and sourced
        export WHITELIST=123.456.789.0\n

    The content of .env

    DYNU_API_KEY=\nWEBSITE=\nHTTPS=\nEMAIL=\n

    • HTTPS - a quoted list of domains so Caddy doesn\u2019t error when parsing the comma: \"*.website.dynu.com, website.dynu.com\"
    • WEBSITE just the website name website.dynu.com
    "},{"location":"Docker%20Apps/Web/caddy/#dockerfile","title":"Dockerfile","text":"

    If the provided image doesn\u2019t work, the image needs to be built on the server itself.

    FROM caddy:2.7.5-builder-alpine AS builder\n\nRUN xcaddy build \\\n    --with github.com/caddy-dns/lego-deprecated\n\nFROM caddy:2.7.5\n\nCOPY --from=builder /usr/bin/caddy /usr/bin/caddy\n
    Then modify the image part of compose.yml
        build:\n      context: .\n      dockerfile: Dockerfile\n

    "},{"location":"Docker%20Apps/Web/caddy/#caddyfile","title":"Caddyfile","text":"
    {\n    email {$EMAIL}\n}\n
    "},{"location":"Docker%20Apps/Web/caddy/#basic-website","title":"Basic Website","text":"
    :80 {\n        root * /usr/share/caddy\n        file_server\n}\n
    "},{"location":"Docker%20Apps/Web/caddy/#https","title":"HTTPS","text":"
    {$HTTPS} {\n        tls {\n                dns lego_deprecated dynu\n        }\n\n        # Standard reverse proxy\n        @web host web.{$WEBSITE}\n        handle @web {\n                reverse_proxy mynginx:80\n        }\n}\n
    • start with *.website to indicate wildcard
    • the tls block uses dynu
    • declare @web host with the subdomain name
      • this is later used in handle @web
    • use the reverse_proxy block to define the upstream and port In this method, only Docker containers on the same public Docker network can be reverse proxied, addressed by container name and internal port. Tailscale IP entries should also work.
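    As a sketch, putting another container behind the same wildcard site only takes another matcher and handle block; the `app` subdomain, `myapp` container name, and port 8080 here are hypothetical:

    ```
    @app host app.{$WEBSITE}
    handle @app {
            reverse_proxy myapp:8080
    }
    ```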
    "},{"location":"Docker%20Apps/Web/caddy/#html-file-server","title":"HTML File Server","text":"

    If Caddy has a bind mount with access to the root of the HTML files, it can act as a file server. First create the bind mount at /www in the container, then edit the Caddyfile.

            @fs host fs.{$WEBSITE}\n        handle @fs {\n                root * /www\n                file_server\n                encode gzip\n        }\n

    "},{"location":"Docker%20Apps/Web/caddy/#environment-variables","title":"Environment Variables","text":"

    The previous code blocks already utilize environment variables. The syntax is {$NAME}.

    "},{"location":"Docker%20Apps/Web/caddy/#whitelisting","title":"Whitelisting","text":"

                    @blocked not remote_ip {$WHITELIST}\n                respond @blocked \"Unauthorized\" 403\n
    This responds with 403 Unauthorized to any IP address not in the whitelist.
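    For example, the whitelist matcher can be combined with a handle block so only whitelisted IPs reach a service; the `private` subdomain and `myservice:8080` upstream are hypothetical:

    ```
    @private host private.{$WEBSITE}
    handle @private {
            @blocked not remote_ip {$WHITELIST}
            respond @blocked "Unauthorized" 403
            reverse_proxy myservice:8080
    }
    ```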

    "},{"location":"Docker%20Apps/Web/caddy/#usage","title":"Usage","text":""},{"location":"Docker%20Apps/Web/caddy/#reloading","title":"Reloading","text":"
    docker exec -w /etc/caddy caddy caddy reload\n
    "},{"location":"Docker%20Apps/Web/ddns-update/","title":"Dynamic DNS Updater Docker","text":"

    Official Image: https://hub.docker.com/r/linuxserver/duckdns Custom Github Page: https://github.com/vttc08/docker-duckdns-dynu

    This is a docker container that automatically updates the public IPv4 address of the server every 5 minutes to the dynamic DNS services Dynu and DuckDNS. It is a fork of the LinuxServer DuckDNS container.

    "},{"location":"Docker%20Apps/Web/ddns-update/#docker-compose","title":"Docker Compose","text":"
      services:\n      duckdns:\n        image: vttc08/docker-duckdns-dynu:latest\n        container_name: duckdns\n        env_file: ddns.env\n        environment:\n\n          - TZ=America/Vancouver\n          - PUID=1000\n          - PGID=1001\n        restart: unless-stopped\n

    These need to be filled in the ddns.env

    DYNU_HOST= # full name of dynu domains\nDYNU_PASS= # md5 hashed dynu login pass\nSUBDOMAINS= # DuckDNS domains without the duckdns.org part\nTOKEN= # DuckDNS token \n

    • token will be visible in DuckDNS dashboard
    • the Dynu password is the same as the login password; alternatively, it is possible to create a dedicated password just for IP updates. MD5 generator:
      echo -n \"password\" | md5sum\n
    • when setting the IP to 10.0.0.0 in Dynu update API, dynu will automatically update the IP address to the IP address making that request
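    As a quick sanity check of the hashing step, the MD5 of the literal string "password" is well known:

    ```shell
    # Hash a plaintext password the same way as above; "password" is a placeholder.
    # -n matters: without it, md5sum would hash the trailing newline too.
    hash=$(echo -n "password" | md5sum | cut -d' ' -f1)
    echo "$hash"
    ```
    
    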
    "},{"location":"Docker%20Apps/Web/ddns-update/#other-usage","title":"Other Usage","text":"

    docker restart duckdns manually runs an IP update. docker exec -it duckdns /app/debug.sh runs the debug script (or other scripts); the debug script prints the IP addresses of the subdomains as resolved by Cloudflare.

    "},{"location":"Linux%20Server/debian-based-server-setup/","title":"Debian-Based Server Setup","text":"

    Run update and upgrade the distro first. Install the NTP package if there are errors with it. Reboot.

    Setup powertop and powersaving features

    sudo apt install powertop\npowertop --auto-tune\n

    Set the powersave governor, and re-apply it at reboot with a cron entry. Remember to run the command again after a reboot.

    @reboot echo \"powersave\" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null 2>&1\n

    Ensure these packages are installed

    powertop htop iotop fio curl gnupg wget ntfs-3g neofetch ca-certificates lsb-release hdparm hd-idle openssh-server at autojump screen bash-completion\n
    • after installing bash-completion, source .bashrc for Docker autocompletion to work
    "},{"location":"Linux%20Server/debian-based-server-setup/#hdd","title":"HDD","text":"

    Use lsblk and blkid to get the NTFS hard drive\u2019s /dev name and its /dev/disk/by-uuid/\u2026 entry

    Edit the fstab to mount the drive, same entry for nvme drive

    UUID=CC34294F34293E38 /mnt/data ntfs-3g defaults 0 0\n

    If the mounted device is HDD array, need to spindown disk with hdparm

    hdparm -B 120 /dev/sdb # set the APM level\nhdparm -S 241 /dev/sdb\n

    For the -S spindown value, 0 disables spindown, 1-240 are multiples of 5 seconds, and 241-251 are multiples of 30 minutes. The above command sets the spindown timeout to 30 minutes.
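    The -S encoding can be sketched as a small helper, following the hdparm(8) rules (assuming values 1-240 mean 5-second units and 241-251 mean 30-minute units):

    ```shell
    # Convert an hdparm -S value into a spindown timeout in seconds.
    spindown_seconds() {
      s=$1
      if [ "$s" -ge 1 ] && [ "$s" -le 240 ]; then
        echo $(( s * 5 ))            # 5-second units
      elif [ "$s" -ge 241 ] && [ "$s" -le 251 ]; then
        echo $(( (s - 240) * 1800 )) # 30-minute units
      else
        echo 0                       # 0 (or out of range): spindown disabled
      fi
    }
    spindown_seconds 241   # 1800 seconds = 30 minutes, matching the command above
    ```
    
    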

    If hdparm does not work, hd-idle can be used. Edit the file /etc/default/hd-idle

    -i 60 -a disk/by-uuid/xxx -l /var/log/hd-idle.log\n

    For sudo without a password, run visudo and add this line at the bottom, replacing $USER with the actual username.

    $USER ALL=(ALL) NOPASSWD: ALL\n

    Edit shortcuts in bashrc

    source .bashrc\n
    "},{"location":"Linux%20Server/debian-based-server-setup/#openssh-with-keys","title":"OpenSSH with Keys","text":""},{"location":"Linux%20Server/debian-based-server-setup/#generate-the-key-using-the-terminal","title":"Generate the key using the terminal","text":"
    ssh-keygen\n
    • give a location to put the key pair
    • this generates a public (.pub) and private key pair
    ssh-copy-id -i key.pub username@server\n
    • key.pub is the public key that was generated

    The key is ready to use for authorization.

    "},{"location":"Linux%20Server/debian-based-server-setup/#generate-keys-using-putty-software","title":"Generate keys using PuTTY software","text":"
    1. Copy the public key text (shown in red in PuTTYgen) and use nano to add it to the server\u2019s ~/.ssh/authorized_keys
    2. Make sure permissions are correct
      mkdir -p ~/.ssh\nchmod 700 ~/.ssh\nchmod 600 ~/.ssh/authorized_keys\nnano ~/.ssh/authorized_keys\n
    3. Save private key as ppk file on the root ssh folder.
    4. If the client holding the private key is a Linux machine, change the permission of the private key.

      chmod 600 private.key\n
    5. Convert the private key via Conversions > Export OpenSSH key and save the file to an OpenSSH Keys folder

    "},{"location":"Linux%20Server/debian-based-server-setup/#ssh-config","title":"SSH Config","text":"

    Configuration file for easy SSH access. The permission for that file is 644.

    Host server\n  HostName 10.10.120.1\n  User ubuntu\n  IdentityFile ~/keys/server.key\n

    Use with OliveTin

    To have seamless ssh experience with OliveTin, make sure to copy the ssh config file and all the keys to /root, since in OliveTin ~ means /root not your user home directory.

    "},{"location":"Linux%20Server/debian-based-server-setup/#setting-up-smb","title":"Setting Up SMB","text":"

    Refer to Samba(SMB) Setup to setup SMB server.

    "},{"location":"Linux%20Server/debian-based-server-setup/#desktop-environment-setup","title":"Desktop Environment Setup","text":""},{"location":"Linux%20Server/debian-based-server-setup/#firefox","title":"Firefox","text":"

    The location of firefox profile is at /home/$USER/.mozilla/firefox/xxxxx.default

    Make a tarball and copy it and extract it in destination.

    In the profile folder, look for compatibility.ini, go to a random profile on the destination machine, and copy its compatibility.ini settings into the profile that was copied over. This ensures compatibility so that the new profile works without warnings.

    Check profiles.ini for the name and location of the new profile folder; Firefox should then behave the same as before.

    [Profile0]\nName=karis\nIsRelative=1\nPath=ims58kbd.default-esr-1\n

    Themes

    To backup/restore settings of cinnamon

    Icons

    The icons are located at these locations.

    /usr/share/icons\n~/.icons\n

    Scripts

    Copy the scripts and put it into ~/script for organization and copy the old crontab for executing these scripts.
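    For instance, the restored crontab entries for these scripts might look like this (paths and schedules below are hypothetical):

    ```
    # Hypothetical crontab entries for scripts copied into ~/script
    0 3 * * * /home/karis/script/backup.sh
    @reboot /home/karis/script/startup.sh
    ```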

    "},{"location":"Linux%20Server/olivetin/","title":"OliveTin","text":"

    OliveTin exposes a web page with buttons that execute shell commands (e.g. docker, scripts) on the server, giving others easy access. It should be used internally only.

    Main Interface Log Interface

    "},{"location":"Linux%20Server/olivetin/#installation","title":"Installation","text":"

    Download the correct file from this site. https://github.com/OliveTin/OliveTin/releases OliveTin_linux_amd64.deb

    Go to the directory and install the package.

    • if a previous config.yaml is already present, the installer will ask what to do; the default is to keep the previous config
      sudo dpkg -i OliveTin_linux_amd64.deb\nsudo systemctl enable --now OliveTin\n

    Uninstall

    sudo dpkg -r OliveTin # the installed app name, not the deb file\n

    "},{"location":"Linux%20Server/olivetin/#configuration","title":"Configuration","text":"

    The configuration file is located at /etc/OliveTin/config.yaml

    Script Execution User

    By default, OliveTin always executes scripts as root!! This has complications. Consider an example script that echoes a location, creates a file in the /opt dir owned by user 1000, and cds into ~/Downloads, user 1000\u2019s download dir.

    default

    /root/Downloads/ line 7: cd: /root/Downloads: No such file or directory. The file created by the script is owned by root and is not editable in VSCode or other editors without sudo

    as user 1000

    /home/test/Downloads/ The file created by the script is owned by user and can be freely edited.

    To run a command as user, use sudo -u user /path/to/script.

    • ~ path works as intended
    • all files created and modified will be owned by user not root
    • bashrc variables do not work; to use environment variables, they must be sourced elsewhere
    • by default, the script has $PWD at /root, so relative file paths do not work

    Example Configuration

    listenAddressSingleHTTPFrontend: 0.0.0.0:1378 # set the port to 1378\n\n# Choose from INFO (default), WARN and DEBUG\nlogLevel: \"INFO\"\n\nactions:\n\n- title: Update Music\n  shell: /home/karis/scripts/script\n  icon: '&#127925'\n  timeout: 2\n  hidden: true\n
    The configuration consists of a list of actions; each action consists of a title, shell, and icon

    • timeout is also optional; the task is killed if it takes longer than this many seconds to complete
    • hidden will hide it from dashboard
      • to unhide, a service restart is needed
    • maxConcurrent is optional; it only allows x simultaneous runs, any more are blocked
    • rateLimit offers more advanced limiting
      • to clear a rate limit, OliveTin has to be restarted
            maxRate:\n      - limit: 3\n        duration: 5m\n
    "},{"location":"Linux%20Server/olivetin/#arguments","title":"Arguments","text":""},{"location":"Linux%20Server/olivetin/#textbox-input","title":"Textbox Input","text":"
    - title: Restart a Docker CT\n  icon: '<img src = \"icons/restart.png\" width=\"48px\" />'\n  shell: docker restart {{ container }}\n  arguments:\n    - name: container\n      type: ascii\n
    • use {{ }} to name a variable
    • under arguments, assign a type for it; ascii only allows letters and numbers
    "},{"location":"Linux%20Server/olivetin/#dropdown-choices","title":"Dropdown Choices","text":"

    - title: Manage Docker Stack Services\n  icon: \"&#128736;\"\n  shell: docker-compose -f /home/karis/docker/bookstack/docker-compose.yml {{ action }}\n  arguments:\n    - name: action\n      choices:\n        - title: Start Stack\n          value: up -d\n        - title: Stop Stack\n          value: down\n
    This example gives choices to start or stop a Docker stack from a docker-compose file. If an argument is given the parameter choices, it will be in dropdown mode.

    "},{"location":"Linux%20Server/olivetin/#suggestion","title":"Suggestion","text":"

    Suggestion is a hybrid between dropdown and textbox. It suggests a list of possible items in the browser but does not restrict choices.

      arguments:\n    - name: action\n      title: Action Name\n      suggestions:\n        - value: Information\n

    • value is what is passed to the shell, and Information is display text for clarification. After modifying the configuration, a restart is required to clear out previous suggestions in browsers.
    "},{"location":"Linux%20Server/olivetin/#execute-on-files-created-in-a-directory","title":"Execute on files created in a directory","text":"

    - title: Update Songs\n  icon: <iconify-icon icon=\"mdi:music\"></iconify-icon>\n  shell: /home/test/scripts/file.sh {{ filepath }}\n  arguments:\n    - name: filepath\n      type: unicode_identifier\n  execOnFileCreatedInDir: \n    - /home/test/Downloads/\n    - /another/folder\n
    Whenever a new file is created the action will execute.

    • execOnFileCreatedInDir
    • it is possible to add multiple paths to monitor; however, adding a path requires a restart of the OliveTin service
    • same principle as Arguments, except OliveTin provides predefined arguments for files; filepath is the full absolute path of the file that is created
    "},{"location":"Linux%20Server/olivetin/#execution-feedback","title":"Execution Feedback","text":"
    - title: some action\n  popupOnStart: default, execution-dialog-stdout-only, execution-dialog, execution-button\n
    default stdout-only dialog button
    • the popup dialog has an option to show only stdout or the full log output with exit code
    • the button shows how long the process takes
    • the popup box may not be easy to close; use the keyboard ++Esc++ key to close it
    "},{"location":"Linux%20Server/olivetin/#confirmation","title":"Confirmation","text":"

    It is possible to have a confirmation before completing action.

      arguments:\n\n    - type: confirmation\n      title: Click start to begin.\n

    • the user must tick a checkbox and then click start before the action will execute
    • the API does not have such restrictions
    "},{"location":"Linux%20Server/olivetin/#ssh-to-another-server","title":"SSH to Another Server","text":"

    Since OliveTin by default runs commands as root, it is necessary to copy the SSH config file and all the keys from a user\u2019s folder into /root/.ssh

    • if the permissions are set up correctly for a user, they will copy over

    On the first try, pass -o StrictHostKeyChecking=no to the SSH command; on subsequent logins, ssh via the SSH config will work as normal.

    "},{"location":"Linux%20Server/olivetin/#icons","title":"Icons","text":"

    The icons need to be placed in a folder such as /var/www/[icon-folder]/icon.png. To use an icon, whether an offline image or a web address, it should be in HTML format. 48px is the default size of OliveTin icons. Other CSS options such as style=\"background-color: white;\" also work.

    icon: '<img src = \"icons/minecraft.png\" width=\"48px\" />'\n
    To use an emoji icon, use its HTML code from https://symbl.cc/en/emoji/. For example, &#9786; renders as \ud83d\ude0a.
    icon: \"&#9786;\"\n

    "},{"location":"Linux%20Server/olivetin/#third-party","title":"Third-Party","text":"

    For third-party icons, OliveTin only supports Iconify icons. To use one, search for an icon, under components select Iconify Icon, and add the pasted line to the configuration.

      - title: Title\n    icon: <iconify-icon icon=\"openmoji:jellyfin\"></iconify-icon>\n

    "},{"location":"Linux%20Server/olivetin/#icon-management","title":"Icon Management","text":"

    The default icon folder is /var/www/olivetin/icons. The icon folder with all homelab icons is ~/icons/homelab.

    "},{"location":"Linux%20Server/olivetin/#api","title":"API","text":"

    Simple action button.

    curl -X POST \"http://mediaserver:1378/api/StartAction\" -d '{\"actionId\": \"Update Music\"}'\n
    Action with Arguments.
    curl -X POST 'http://mediaserver:1378/api/StartAction' -d '{\"actionId\": \"Rename Movies\", \"arguments\": [{\"name\": \"location\", \"value\": \"value\"}]}'\n

    Arguments variable cannot be \u201cpath\u201d

    If path is used as an argument name, it replaces the system $PATH variable when the command executes; this renders most commands useless, even basic ones like sleep and date. Use another variable name such as directory or location.
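    The pitfall can be reproduced in a plain shell; with $PATH overridden, even date stops resolving (the directory value here stands in for a hypothetical argument value):

    ```shell
    # Simulate an argument named "path" being substituted into the environment:
    # inside the subshell, $PATH no longer contains /usr/bin, so date is not found.
    out=$(
      PATH="/movies/some-folder"   # hypothetical argument value
      date 2>/dev/null || echo "date: command not found"
    )
    echo "$out"
    ```
    
    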

    Newest Olivetin Version Break Old API Method

    The actionName key is deprecated and no longer works; the newest OliveTin API only allows actionId for the StartAction API endpoint. The scripts above are adjusted accordingly. To migrate, the easiest way is to create an id in the configuration with the same value as the action name.

    - title: action name\n  id: action name\n

    "},{"location":"Linux%20Server/olivetin/#dashboard","title":"Dashboard","text":"

    Dashboards are a separate page from the default OliveTin page; Fieldsets and Folders can group actions only in a dashboard.

    • when an action is in dashboards, it does not appear in main view.
    • when refreshing the page, it will always go back to main view even if the page is currently at a dashboard
      dashboards:\n  - title: My Dashboard\n    contents:\n      - title: Title Desc\n        type: fieldset\n        contents:\n          - title: Fix Epic Games\n          - title: Restart Minecraft\n      - title: Update Metadata\n        type: fieldset\n        contents:\n          - title: Stuff\n            icon: '<img src = \"icons/mcrestart.png\" width=\"64px\" />'\n            contents:\n               - title: Update Songs\n
    Preview

    "},{"location":"Linux%20Server/olivetin/#fieldsets","title":"Fieldsets","text":"

    Fieldsets are groups of actions under a title. Any title with type: fieldset defined is a fieldset; actions are grouped under the contents key and need to have matching titles.

    "},{"location":"Linux%20Server/olivetin/#folders","title":"Folders","text":"

    Folders also group actions together in a dashboard, and the user needs to click into the folder to see the actions.

    • it is possible to use custom icons or title for folders as long as type: is not set and it has contents:
    "},{"location":"Linux%20Server/olivetin/#entities","title":"Entities","text":"

    To use entities, an action, a dashboard entry, an entities JSON/YAML file, and an entity update method are needed (when the action interacts with the entity).

    Preview of Entities Flowchart

    "},{"location":"Linux%20Server/olivetin/#entities-file","title":"entities-file","text":"

    It\u2019s possible to use json or YAML

    entities:\n  - file: /etc/OliveTin/entities/containers.json\n    name: container\n

    • entity files are stored in /etc/OliveTin/entities
    • the name of the entity will be referenced as container.attribute in the configuration
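    A single line of the resulting containers.json might look roughly like this (abbreviated; real docker ps --format "{{ json . }}" output carries more fields):

    ```
    {"Names":"caddy","Image":"vttc08/caddy","State":"running","Status":"Up 2 hours"}
    ```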
    "},{"location":"Linux%20Server/olivetin/#entity-update","title":"entity update","text":"
    - title: Update container entity file\n  shell: 'docker ps -a --format \"{{ json . }}\" > /etc/OliveTin/entities/entity.json'\n  hidden: true\n  execOnStartup: true\n  execOnCron: '*/5 * * * *'\n
    • this is an action that is triggered by other actions that modify the entity; its purpose is to update the entity file
    "},{"location":"Linux%20Server/olivetin/#entity-actions","title":"entity-actions","text":"
    - title: Check {{ container.Names }} Status\n  shell: echo {{ container.Status }}\n  entity: container\n  trigger: Update container entity file\n

    The entity action is defined the same way as other actions.

    • entity need to be defined
    • trigger automatically updates entity attributes (since executing this action could change some attribute of an entity, like starting a container)
    • both title and shell can use entity.attributes
    "},{"location":"Linux%20Server/olivetin/#dashboard-entry","title":"dashboard-entry","text":"
     - title: CPanel\n    contents:\n      - title: 'Container {{ container.Names }} ({{ container.Image }})'\n        entity: container\n        type: fieldset\n        contents:\n          - type: display\n            title: |\n              {{ container.Status }} <br /><br /><strong>{{ container.State }}</strong>\n          - title: 'Check {{ container.Names }} Status'\n
    Preview
    • the dashboard configuration is the same as before but is now able to utilize entities
    "},{"location":"Linux%20Server/sambasmb-setup/","title":"Samba(SMB) Setup","text":""},{"location":"Linux%20Server/sambasmb-setup/#setting-up-smb-server-on-linux","title":"Setting up SMB Server on Linux","text":"

    Install the samba tool on Linux.

    sudo apt update\nsudo apt install samba -y\n

    Edit the /etc/samba/smb.conf

    [nvme_share]\n   comment = NVMe Share\n   path = /mnt/nvme/share\n   browseable = yes\n   read only = no\n

    nvme_share is the name of the Samba path which will appear in SMB clients and its path is accessed by \\\\192.168.0.1\\nvme_share

    path is the location where the files are stored

    browseable and read only are flags needed to ensure read/write access to the SMB share

    Lastly, add the user and password for the SMB share, then restart the smbd service (sudo systemctl restart smbd) so the changes take effect.

    sudo smbpasswd -a $USER # enter the password twice\n

    If Windows fails to write files to the Samba share for some odd reason, go to Manage Credentials -> Windows Credentials -> Add a Windows Credential and fill in the server address, username, and password.

    "}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index beea7018e5340216726f27b0e0be9b4098eae03c..847384f796a21da383e2035658f23d537c672715 100755 GIT binary patch delta 13 Ucmb=gXP58h;P|ll+C=sW03refNdN!< delta 13 Ucmb=gXP58h;3(LCX(D?C03R*{(EtDd