Important
I am fully aware that I'm exposing my username, some local paths and my internal network structure in this documentation and in the codebase. This is for educational purposes, and I am OK with it.
To adjust the configuration to your own environment, please replace `akora` with your username and adjust the IP addresses and hostnames in the `inventory/hosts` file.
Everything sensitive is stored in the `vault.yml` file (not included in the repository). Check the `vault.yml.example` file for details and examples.
The domain name I'm using locally is `l4n.io`. I own this domain, and in Cloudflare it points to the local IP address where Traefik is running.
With all this said, let's begin!
The very first thing you need to do is to generate an SSH keypair. This will be used to enable passwordless authentication and will also allow Ansible to work seamlessly.
ssh-keygen -t ed25519 -f ~/.ssh/homelab_ed25519 -N "" -C "homelab access key"
Copy the public key to all remote hosts. This is manual, no need for a fancy script at this stage.
ssh-copy-id -i ~/.ssh/homelab_ed25519.pub akora@192.168.0.41
ssh-copy-id -i ~/.ssh/homelab_ed25519.pub akora@192.168.0.42
ssh-copy-id -i ~/.ssh/homelab_ed25519.pub akora@192.168.0.51
ssh-copy-id -i ~/.ssh/homelab_ed25519.pub akora@192.168.0.91
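For orientation, these four machines are what `inventory/hosts` groups under the names used later in this guide. A sketch of what that file might look like (only rpi4-02 at 192.168.0.42 is confirmed by the n8n section later; the other name-to-IP pairings and the group name are assumptions, so defer to the actual file in the repo):

```ini
[homelab]
rpi4-01 ansible_host=192.168.0.41   ; assumed pairing
rpi4-02 ansible_host=192.168.0.42
zima-01 ansible_host=192.168.0.51   ; assumed pairing
nas-01  ansible_host=192.168.0.91   ; hypothetical host name

[homelab:vars]
ansible_user=akora
ansible_ssh_private_key_file=~/.ssh/homelab_ed25519
```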
Test connectivity and make sure everything is working.
ansible all -m ping
You should see SUCCESS (a ping/pong response) for each host.
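Each host should report something like the following (the standard output of Ansible's `ping` module; host names come from your inventory):

```
rpi4-01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```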
To protect against man-in-the-middle attacks, enable host key checking. Not critical for a home lab, but recommended.
ssh-keyscan -H 192.168.0.41 192.168.0.42 192.168.0.51 192.168.0.91 >> ~/.ssh/known_hosts
Update `ansible.cfg` to enable `host_key_checking`:
host_key_checking = True
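For reference, a minimal `ansible.cfg` with that setting might look like this (the `inventory` and `private_key_file` values are assumptions based on this repository's layout, so adjust to taste):

```ini
[defaults]
inventory = ansible/inventory/hosts
private_key_file = ~/.ssh/homelab_ed25519
host_key_checking = True
```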
This is just another test (of connectivity) and a confirmation that we have matching OS versions on all hosts.
ansible all -m shell -a 'grep DISTRIB_DESCRIPTION /etc/lsb-release'
At the time of writing, all hosts are running Ubuntu 24.04.3 LTS.
NEXT: reaching "baseline" level, Tier ONE!
Run the baseline playbook:
ansible-playbook ansible/playbooks/baseline.yml
For the very first run you may need to add `--ask-become-pass` to the command. (If you'd like to trial this on a single host first, see the example after the list below.)
This will apply the following changes:
- Update package cache
- Upgrade all packages
- Install common packages
- Set system locale
- Set timezone to UTC
- Ensure sudo is installed
- Configure password-less sudo for admin user
- Secure SSH server
- Set kernel parameters for security
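To trial the baseline on one machine before rolling it out everywhere, `ansible-playbook`'s `--limit` flag does the trick (host name as used elsewhere in this guide):

```bash
ansible-playbook ansible/playbooks/baseline.yml --limit rpi4-01 --ask-become-pass
```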
Run the Docker playbook:
ansible-playbook -i ansible/inventory/hosts ansible/playbooks/docker.yml
This will apply the following changes:
- Install required packages for Docker
- Install Docker packages on ARM
- Install Docker packages on x86_64
- Create docker group
- Add admin user to docker group
- Install Docker Compose
- Create docker config directory
- Configure Docker daemon
- Enable and start Docker service
- Verify Docker installation
- Verify Docker Compose installation
- Create Docker Compose files directory
- Check existing ACLs for Docker Compose directory
- Set additional permissions for Docker Compose directory
- Ensure setfacl is installed (for ACL management)
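Although the playbook verifies the installation itself, a quick ad-hoc check across all hosts doesn't hurt (the second command assumes the Compose v2 plugin; adjust if the playbook installed the standalone binary):

```bash
ansible all -m shell -a "docker --version"
ansible all -m shell -a "docker compose version"
```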
Run the Traefik playbook:
ansible-playbook -i ansible/inventory/hosts ansible/playbooks/traefik.yml
This will apply the following changes:
- Create Traefik directories
- Create Traefik network
- Create acme.json file with proper permissions
- Create Traefik configuration files
- Deploy Traefik container
If all goes well, you should be able to access the Traefik dashboard at https://traefik.l4n.io.
Note that in this screenshot I already have all the existing services configured, plus a few "static" routes for my router and my NAS (an example of such a route follows below).
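Such a static route is just another Traefik dynamic configuration file. A hypothetical example for a NAS (the file name, hostname, IP address and port below are made up for illustration; the n8n section later shows a real one from this setup):

```yaml
# /opt/docker/traefik/config/dynamic/nas.yml (hypothetical)
http:
  routers:
    nas-router:
      rule: "Host(`nas.l4n.io`)"
      service: nas-service
  services:
    nas-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.90:5000"  # assumed NAS address and port
```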
Next: Portainer!
Run the Portainer playbook:
ansible-playbook -i ansible/inventory/hosts ansible/playbooks/portainer.yml
This will apply the following changes:
- Create Portainer directories
- Stop and remove existing Portainer container if exists
- Check if Docker network exists
- Create Docker network if it doesn't exist
- Create Portainer configuration files
- Deploy Portainer container
If all goes well, you should be able to access the Portainer dashboard at https://portainer.l4n.io.
Note that this screenshot was captured AFTER I linked all the standalone "agents", so all of the other servers can be seen and managed from one place.
Next: Portainer Agent!
Run the Portainer Agent playbook:
ansible-playbook -i ansible/inventory/hosts ansible/playbooks/portainer-agent.yml
This will apply the following changes:
- Create Portainer Agent directories
- Stop and remove existing Portainer Agent container if exists
- Check if Docker network exists
- Create Docker network if it doesn't exist
- Deploy Portainer Agent container
Setting up the agents made it possible to link them to the main Portainer instance and manage them from one place.
Next: Docker Socket Proxy!
ansible-playbook ansible/playbooks/docker-socket-proxy.yml
This will apply the following changes:
- Create Docker Socket Proxy directories
- Stop and remove existing Docker Socket Proxy container if exists
- Check if Docker network exists
- Create Docker network if it doesn't exist
- Deploy Docker Socket Proxy container
This makes it possible for Homepage to auto-discover services running on the other servers.
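On the Homepage side, this discovery works by pointing its `docker.yaml` at each socket proxy instead of a local socket. A sketch of what that can look like (the entry names and IP addresses are assumptions; 2375 is the port the socket proxy conventionally listens on):

```yaml
# Homepage config: docker.yaml (sketch)
zima-01:
  host: 192.168.0.51   # assumed proxy host address
  port: 2375
rpi4-02:
  host: 192.168.0.42
  port: 2375
```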
Next: Homepage!
ansible-playbook ansible/playbooks/homepage.yml
This will apply the following changes:
- Create Homepage directories
- Stop and remove existing Homepage container if exists
- Check if Docker network exists
- Create Docker network if it doesn't exist
- Deploy Homepage container
If all goes well, you should be able to access the Homepage dashboard at https://home.l4n.io.
Finally! We've got something to look at! :)
At this stage you should be able to see all the services running on all servers, nicely represented.
This tier deploys Gitea, a self-hosted Git service, complete with a robust backup and restore system.
To deploy Gitea, use the provided management script:
./scripts/manage-gitea.sh deploy
The script will run the Ansible playbook, and upon completion, it will display the Gitea URL and initial admin credentials.
For security, user registration is disabled by default. To create your first user account, follow these steps:
1. Enable Registration: Open the `ansible/playbooks/gitea-with-backup.yml` file and comment out the `gitea_disable_registration` variable:

   ```yaml
   # In ansible/playbooks/gitea-with-backup.yml
   ...
   vars:
     ...
     # Disable user registration
     # gitea_disable_registration: "true"
   ```

2. Deploy Gitea: Run the deployment command again to apply the change:

   ```bash
   ./scripts/manage-gitea.sh deploy
   ```

3. Register Your Account: Navigate to your Gitea URL (e.g., `https://git.l4n.io`) and register your user account through the web interface.

4. Disable Registration: Once you have created your account, uncomment the `gitea_disable_registration` line in `ansible/playbooks/gitea-with-backup.yml` to secure your instance:

   ```yaml
   # In ansible/playbooks/gitea-with-backup.yml
   ...
   vars:
     ...
     # Disable user registration
     gitea_disable_registration: "true"
   ```

5. Re-deploy: Run the deployment one last time to disable registration:

   ```bash
   ./scripts/manage-gitea.sh deploy
   ```
The playbook configures the user `akora` as the administrator. If you need to reset the password for this user, you can use the management script:
./scripts/manage-gitea.sh reset-password
This command will generate a new random password for the `akora` user and display it in the console.
Syncthing is a continuous file synchronization program that synchronizes files between two or more computers in real time. This setup deploys Syncthing with Traefik integration for secure remote access.
1. Deploy Syncthing:

   ```bash
   ansible-playbook ansible/playbooks/syncthing.yml
   ```

2. Access the Web UI:
   - Open `https://sync.l4n.io` in your browser
   - The default configuration disables remote discovery and relay servers for security
   - The web interface is secured with your wildcard SSL certificate
Key configuration options (set in `ansible/roles/syncthing/defaults/main.yml`):
# Port configuration
syncthing_gui_port: "8384" # Web GUI port
syncthing_listen_port: "22000" # File transfer port
syncthing_discovery_port: "21027" # Local discovery (UDP)
# Security
syncthing_restrict_to_lan: true # Restrict access to local network
syncthing_allowed_networks:
- "192.168.0.0/24" # Adjust to your LAN subnet
# Storage
syncthing_data_directory: "/opt/docker/syncthing/data"
syncthing_config_directory: "/opt/docker/syncthing/config"
- TLS Encryption: All traffic is encrypted using the wildcard SSL certificate
- Local Network Only: By default, access is restricted to your local network
- No Remote Discovery: Remote discovery and relay servers are disabled
- File-based Authentication: Uses the Syncthing web interface for user management
- Open the Syncthing web interface
- Click "Add Remote Device"
- Enter the Device ID of the remote device
- Select which folders to share
- Accept the connection request on the remote device
- If you can't access the web interface, check that Traefik is running and that DNS is correctly pointing to your server
- For sync issues, check the Syncthing logs: `docker logs syncthing`
- Ensure the required ports (8384, 22000, 21027/udp) are open in your firewall (example below)
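If ufw is your firewall, the rules could look like this (a sketch; the LAN-only rule for the GUI port matches the `syncthing_restrict_to_lan` default above):

```bash
# File transfers and local discovery
sudo ufw allow 22000/tcp
sudo ufw allow 21027/udp
# Web GUI, restricted to the LAN subnet
sudo ufw allow from 192.168.0.0/24 to any port 8384 proto tcp
```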
Your Syncthing data is stored in the configured data directory (`/opt/docker/syncthing/data` by default). Ensure this directory is included in your regular backup routine.
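A minimal manual backup can be as simple as archiving that directory (a sketch, run on the Syncthing host; stopping the container first avoids archiving files mid-write):

```bash
# Pause Syncthing, archive the data directory with a date stamp, resume
docker stop syncthing
sudo tar czf syncthing-data-$(date +%F).tar.gz -C /opt/docker/syncthing data
docker start syncthing
```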
Twingate provides zero-trust network access to your homelab resources. This setup deploys Twingate connectors on multiple hosts for redundancy and load distribution.
1. Configure Twingate Credentials: Update `ansible/inventory/group_vars/all/vault.yml` with your Twingate configuration:

   ```yaml
   # Twingate connector configuration
   vault_twingate_connector_url: "https://your-network.twingate.com"
   vault_twingate_connectors:
     - name: "twingate-rpi4-01"
       host: "rpi4-01"
       log_level: 5
       enabled: true
       access_token: "your-access-token-1"
       refresh_token: "your-refresh-token-1"
     - name: "twingate-rpi4-02"
       host: "rpi4-02"
       log_level: 5
       enabled: true
       access_token: "your-access-token-2"
       refresh_token: "your-refresh-token-2"
   ```

2. Deploy Twingate Connectors:

   ```bash
   ansible-playbook ansible/playbooks/twingate.yml
   ```
Key configuration options (set in `ansible/roles/twingate-connector/defaults/main.yml`):
# Docker image and network
twingate_connector_docker_image: "twingate/connector:latest"
twingate_connector_network_name: "traefik-net"
# Storage directories
twingate_connector_data_directory: "/opt/docker/twingate-connector/data"
twingate_connector_config_directory: "/opt/docker/twingate-connector/config"
# Logging
twingate_connector_log_level: 5 # 3=error, 4=warning, 5=notice, 6=info, 7=debug
# Homepage integration
twingate_homepage_integration: true
- Zero-Trust Architecture: All connections are authenticated and encrypted
- No Open Ports: Connectors establish outbound connections only
- IPv6 Disabled: Prevents STUN warnings and potential security issues
- Container Security: Runs with the `no-new-privileges` security option
- Network Isolation: Integrated with the Traefik network for secure communication
The deployment creates multiple connector instances across different hosts:
- Primary Connector: `twingate-rpi4-01` on rpi4-01
- Secondary Connector: `twingate-rpi4-02` on rpi4-02
This provides redundancy and load distribution for your remote access needs.
Connectors are automatically discovered by Homepage and include:
- Service Status: Real-time connection status
- Container Health: Docker container monitoring via Watchtower
- Log Monitoring: Configurable log levels for troubleshooting
- Check connector status in the Twingate Admin Console
- View container logs: `docker logs twingate-rpi4-01` and `docker logs twingate-rpi4-02`
- Verify network connectivity and DNS resolution
- Ensure access tokens are valid and not expired
n8n is a powerful workflow automation tool that allows you to connect different services and automate tasks. This setup deploys n8n with secure authentication and SSL integration.
1. Configure n8n Credentials: Update `ansible/inventory/group_vars/all/vault.yml` with your n8n configuration:

   ```yaml
   # n8n credentials
   vault_n8n_basic_auth_user: "admin"
   vault_n8n_basic_auth_password: "your-secure-password"
   vault_n8n_encryption_key: "your-32-character-encryption-key"
   ```

2. Deploy n8n:

   ```bash
   ansible-playbook ansible/playbooks/n8n.yml
   ```
Key configuration options (set in `ansible/roles/n8n/defaults/main.yml`):
# Docker image (always latest version)
n8n_image: "n8nio/n8n:latest"
# Network and storage
n8n_network_name: "traefik-net"
n8n_data_directory: "/opt/docker/n8n/data"
n8n_config_directory: "/opt/docker/n8n/config"
# Security settings
n8n_basic_auth_user: "{{ vault_n8n_basic_auth_user }}"
n8n_basic_auth_password: "{{ vault_n8n_basic_auth_password }}"
n8n_encryption_key: "{{ vault_n8n_encryption_key }}"
# Homepage integration
n8n_homepage_integration: true
- HTTP Basic Authentication: Initial access control layer
- Credential Encryption: All workflow credentials encrypted with AES-256
- SSL/TLS: Automatic HTTPS via Traefik and Cloudflare certificates
- Network Isolation: Runs in isolated Docker network
- Container Security: Runs as non-root user (UID 1000)
Basic Auth Credentials:
- Provides HTTP-level authentication before reaching the n8n interface
- Required because n8n runs in single-user mode without built-in user management
Encryption Key:
- Must be exactly 32 characters long
- Used for AES-256 encryption of stored credentials and sensitive data
- Critical for protecting API keys and passwords stored in workflows
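One convenient way to generate a suitable key (a suggestion; the playbook doesn't do this for you): `openssl rand -hex 16` produces exactly 32 hexadecimal characters.

```bash
# 16 random bytes, hex-encoded => a 32-character string
openssl rand -hex 16
```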
n8n runs on `rpi4-02` while Traefik runs on `zima-01`. Static routing configuration:
# /opt/docker/traefik/config/dynamic/n8n.yml
http:
  routers:
    n8n-router:
      rule: "Host(`n8n.l4n.io`)"
      service: n8n-service
  services:
    n8n-service:
      loadBalancer:
        servers:
          - url: "http://192.168.0.42:5678"
- Access the web interface at `https://n8n.l4n.io`
- Check container status: `ansible rpi4-02 -m shell -a "docker ps | grep n8n"`
- View container logs: `ansible rpi4-02 -m shell -a "docker logs n8n --tail 20"`
- Verify Traefik routing: `curl -I https://n8n.l4n.io`
n8n is configured to use the `latest` tag and includes Watchtower integration for automatic updates. The container will automatically pull and deploy new versions as they become available.