Kansei Server is a personal custom OCI image of Fedora CoreOS, built daily off the Universal Blue uCore base. The project started from a desire to turn an old gaming rig with an nvidia GPU into a proper home application, web hosting, and virtualization server, giving much better performance for apps that struggled on various Pi 4 docker hosts, an old NAS, etc. Since I've been really into the Universal Blue desktop OCI images of Fedora Atomic (Bluefin and Bazzite) lately, I decided to go with their close cousin Fedora CoreOS, again using a Universal Blue base image (uCore). This has become my "server image" that I'll toss on any x86_64 host I feel like using as a headless compute resource at home (it's on an iMac in addition to the gaming rig).
A partial selection of the containers I run on the old gaming rig:
- unifi network app: because it runs better here than on my resource-constrained unifi router
- qbittorrent w/ web interface for seeding open source torrents, my contribution to fellow distro-hoppers
- plenty of nginx containers doing a lil personal web hosting for a couple of small fediverse communities (gigabit upload and full IPv6 down to the docker container allows for lots of fun)
- nginx reverse proxy, cert management
- jellyfin: for presenting local media content to appletvs in the home.
- retro computing: an nginx container that presents modern web pages as images for browsing the internet on computers that are too old for encryption
- mainsail (3d printer farm web interface)
- homeassistant / mediaassistant
- admin stuff: portainer, cockpit-ws
- always looking for more digital clutter so if you have any useful suggestions let me know <3
At this time, this is in a super pre-alpha state, despite the fact that I run my home DNS server on it (my partner is a saint). Only a couple of images are being built, specific to the test hardware platforms currently in use. If you see any value in this image and would like more variants, let me know.
The Kansei Server project builds a few images:
The image names are:
- `kansei-server`
- `kansei-server-plus`

The tag matrix includes combinations of the following:
- `stable` - [disabled currently] for an image based on the Fedora CoreOS stable stream
- `testing` - for an image based on the Fedora CoreOS testing stream
- `nvidia` - for an image which includes the nvidia driver and container runtime
- `zfs` - for an image which includes the ZFS driver and tools
- `dockerce` - for an image which removes the Fedora CoreOS moby-engine and containerd, replacing them with docker-ce
Suitable for running containerized workloads on either bare metal or virtual machines, this image tries to stay lightweight but functional.
- Starts with a Fedora CoreOS image
- Adds the following:
- bootc (new way to update container native systems)
- cockpit (podman container and system management)
- firewalld
- guest VM agents (`qemu-guest-agent` and `open-vm-tools`)
- docker-buildx and docker-compose (versions matched to moby release); docker (moby-engine) is pre-installed in CoreOS
- podman-compose; podman is pre-installed in CoreOS
- tailscale and wireguard-tools
- tmux
- udev rules enabling full functionality on some Realtek 2.5Gbit USB Ethernet devices
- Optional nvidia versions add:
- nvidia driver - latest driver built from negativo17's akmod package
- nvidia-container-toolkit - latest toolkit which supports both root and rootless podman containers and CDI
- nvidia container selinux policy - allows using `--security-opt label=type:nvidia_container_t` for some jobs (some will still need `--security-opt label=disable` as suggested by nvidia)
- Optional ZFS versions add:
- ZFS driver - latest driver (currently pinned to the 2.2.x series) - see below for details; `pv` is installed with ZFS as a complementary tool
- Disables Zincati auto upgrade/reboot service
- Enables staging of automatic system updates via rpm-ostreed
- Enables password based SSH auth (required for the locally running cockpit web interface)
- Provides public key allowing SecureBoot (for ucore-signed `nvidia` or `zfs` drivers)
Important
Per cockpit's installation instructions, the cockpit-ws RPM is not installed; instead, it is provided as a pre-defined systemd service which runs a podman container.
This image builds on `kansei-server` but adds drivers, storage tools, and utilities, making it more useful on bare metal or as a storage server (NAS).
- Starts with a `kansei-server` image, providing everything above
- Adds the following:
- cockpit-storaged (udisks2 based storage management)
- distrobox - a toolbox alternative
- duperemove
- all wireless (wifi) card firmware (CoreOS does not include it) - hardware enablement FTW
- mergerfs
- nfs-utils - nfs utils including daemon for kernel NFS server
- pcp - Performance Co-Pilot monitoring
- rclone - file synchronization and mounting of cloud storage
- samba and samba-usershares to provide SMB services
- snapraid
- usbutils (and pciutils) - technically pciutils is pulled in by open-vm-tools in kansei-server
- Optional ZFS versions add:
- sanoid/syncoid dependencies - see below for details
- cockpit-machines: Cockpit GUI for managing virtual machines
- libvirt-client: the `virsh` command-line utility for managing virtual machines
- libvirt-daemon-kvm: libvirt KVM hypervisor management
- virt-install: command-line utility for installing virtual machines
Note
Fedora uses `DefaultTimeoutStop=45s` for systemd services, which could cause `libvirtd` to quit before shutting down slow VMs. Consider adding `TimeoutStopSec=120s` as an override for `libvirtd.service` if needed.
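As a sketch, such an override can live in a systemd drop-in file (the path assumes the stock `libvirtd.service` unit name):

```ini
# /etc/systemd/system/libvirtd.service.d/override.conf
[Service]
TimeoutStopSec=120s
```

After creating the file, run `sudo systemctl daemon-reload` for the override to take effect.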
| IMAGE | TAG |
|---|---|
| kansei-server - stable | `stable`, `stable-nvidia`, `stable-zfs`, `stable-nvidia-zfs` |
| kansei-server - testing | `testing`, `testing-nvidia`, `testing-zfs`, `testing-nvidia-zfs` |
| kansei-server-plus - stable | `stable`, `stable-nvidia`, `stable-zfs`, `stable-nvidia-zfs` |
| kansei-server-plus - testing | `testing`, `testing-nvidia`, `testing-zfs`, `testing-nvidia-zfs` |
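For example, combining the registry path used elsewhere in this repo with an image name and tag from the matrix gives a complete reference such as:

```
ghcr.io/lauretano/kansei-server-plus:stable-nvidia-zfs
```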
Please read the CoreOS installation guide before attempting installation. As Kansei Server is an extension of CoreOS, it does not provide its own custom or GUI installer.
There are varying methods of installation for bare metal, cloud providers, and virtualization platforms.
All CoreOS installation methods require the user to produce an Ignition file. This Ignition file should, at minimum, set a password and SSH key for the default user (default username is `core`).
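A minimal butane sketch covering just that (the hash and key values are placeholders; see the Butane docs for the full schema):

```yaml
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      password_hash: YOUR_GOOD_PASSWORD_HASH_HERE
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... core@example
```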
Note
For bare metal installs, it is highly recommended to first test your Ignition configuration by installing in a VM (or on other test hardware) using the same bare metal process.
These images are signed with sigstore's cosign. You can verify the signature by running the following command:
```shell
cosign verify --key https://github.com/lauretano/kansei-server/raw/main/cosign.pub ghcr.io/lauretano/IMAGE:TAG
```
One of the fastest paths to running Kansei Server is using `examples/ucore-autorebase.butane` as a template for your CoreOS butane file.
- As usual, you'll need to follow the docs to set up a password. Substitute your password hash for `YOUR_GOOD_PASSWORD_HASH_HERE` in the `ucore-autorebase.butane` file, and add your ssh pub key while you are at it.
- Generate an Ignition file from your new `ucore-autorebase.butane` using the butane utility.
- Now install CoreOS for hypervisor, cloud provider or bare metal. Your Ignition file should work for any platform, auto-rebasing to `ucore:stable` (or other `IMAGE:TAG` combo), rebooting and leaving your install ready to use.
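For the password hash step, one common option (assuming `openssl` is available; the CoreOS docs also mention `mkpasswd`) is to generate a SHA-512 crypt hash:

```shell
# Produce a $6$... hash suitable for the butane password_hash field
openssl passwd -6 'your-password-here'
```

You can then compile the butane file to Ignition with, e.g., `butane --pretty --strict ucore-autorebase.butane > ucore-autorebase.ign`.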
Once a machine is running any Fedora CoreOS version, you can easily rebase to Kansei Server. Installing CoreOS itself can be done through a number of provisioning methods.
To rebase an existing machine to the latest Kansei Server:
- Execute the `rpm-ostree rebase` command (below) with the desired `IMAGE` and `TAG`.
- Reboot, as instructed.
- After rebooting, you should pin the working deployment, which allows you to roll back if required.
```shell
sudo rpm-ostree rebase ostree-unverified-registry:ghcr.io/lauretano/IMAGE:TAG
```
The `kansei-server*` images include container policies to support image verification for improved trust of upgrades. Once running one of the `kansei-server*` images, the following command will rebase to the verified image reference:
```shell
sudo rpm-ostree rebase ostree-image-signed:docker://ghcr.io/lauretano/IMAGE:TAG
```
It's a good idea to become familiar with the Fedora CoreOS Documentation as well as the CoreOS rpm-ostree docs. Note especially that this image is only possible due to ostree native containers.
The CoreOS root filesystem is immutable at runtime; installing packages as you would on a mutable "normal" distribution is not recommended.
Fedora CoreOS expects the user to run services using podman. `moby-engine`, the free Docker implementation, is also installed for those who desire docker instead of podman.
To maintain this image's suitability as a minimal container host, most add-on services are not auto-enabled.
To activate pre-installed services (`cockpit`, `docker-ce`, `tailscaled`, etc.):

```shell
sudo systemctl enable --now SERVICENAME.service
```
Note
The `libvirtd` service is enabled by default, but only starts when triggered by its socket (e.g., by `virsh` or other clients).
SELinux is an integral part of the Fedora Atomic system design. Due to a few interrelated issues, if SELinux is disabled, it is difficult to re-enable.
Warning
We STRONGLY recommend: DO NOT DISABLE SELinux!
Should you suspect that SELinux is causing a problem, it is easy to enable permissive mode at runtime, which keeps SELinux functioning and reporting problems without enforcing restrictions.
```shell
# setenforce 0
$ getenforce
Permissive
```
After the problem is resolved, don't forget to re-enable:
```shell
# setenforce 1
$ getenforce
Enforcing
```
Fedora provides useful docs on SELinux troubleshooting.
Important
CoreOS cautions against running podman and docker containers at the same time. Thus, `docker.socket` is disabled by default to prevent accidental activation of the docker daemon, given podman is the default.
Only run both simultaneously if you understand the risk.
Podman and firewalld can sometimes conflict such that a `firewall-cmd --reload` removes firewall rules generated by podman.
As of netavark v1.9.0, a service is provided to re-add netavark (podman) firewall rules after a firewalld reload occurs. If needed, enable it like so:

```shell
sudo systemctl enable netavark-firewalld-reload.service
```
Users may use distrobox to run images of mutable distributions where applications can be installed with traditional package managers. This may be useful for installing interactive utilities such as `htop`, `nmap`, etc. As stated above, however, services should run as containers.
`kansei-server-plus` includes a few packages geared towards a storage server (see the package list above) which will require individual research for configuration. Two of them, though common, warrant some explanation:
- nfs-utils - replaces the "light" version typically in CoreOS to provide the kernel NFS server
- samba and samba-usershares - to provide SMB services
It's suggested to read Fedora's NFS Server docs plus other documentation to understand how to set up this service. But here's a few quick tips...
Unless you've disabled `firewalld`, you'll need to do this:

```shell
sudo firewall-cmd --permanent --zone=FedoraServer --add-service=nfs
sudo firewall-cmd --reload
```
By default, nfs-server is blocked from sharing directories unless the context is set. So, generically, to enable NFS sharing in SELinux run:
For read-only NFS shares:

```shell
sudo semanage fcontext --add --type "public_content_t" "/path/to/share/ro(/.*)?"
sudo restorecon -R /path/to/share/ro
```

For read-write NFS shares:

```shell
sudo semanage fcontext --add --type "public_content_rw_t" "/path/to/share/rw(/.*)?"
sudo restorecon -R /path/to/share/rw
```

Say you wanted to share all home directories:

```shell
sudo semanage fcontext --add --type "public_content_rw_t" "/var/home(/.*)?"
sudo restorecon -R /var/home
```

The least secure but simplest way to let NFS share anything configured is...
For read-only:

```shell
sudo setsebool -P nfs_export_all_ro 1
```

For read-write:

```shell
sudo setsebool -P nfs_export_all_rw 1
```
There is more to read on this topic.
NFS shares are configured in `/etc/exports` or `/etc/exports.d/*` (see docs).
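As a sketch, a single export line in `/etc/exports.d/tank.exports` (the path and subnet are placeholders) might look like:

```
/var/tank 192.168.1.0/24(rw,sync,no_subtree_check)
```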
Like all services, NFS needs to be enabled and started:
```shell
sudo systemctl enable --now nfs-server.service
sudo systemctl status nfs-server.service
```
It's suggested to read Fedora's Samba docs plus other documentation to understand how to set up this service. But here's a few quick tips...
Unless you've disabled `firewalld`, you'll need to do this:

```shell
sudo firewall-cmd --permanent --zone=FedoraServer --add-service=samba
sudo firewall-cmd --reload
```
By default, samba is blocked from sharing directories unless the context is set. So, generically, to enable samba sharing in SELinux run:

```shell
sudo semanage fcontext --add --type "samba_share_t" "/path/to/share(/.*)?"
sudo restorecon -R /path/to/share
```

Say you wanted to share all home directories:

```shell
sudo semanage fcontext --add --type "samba_share_t" "/var/home(/.*)?"
sudo restorecon -R /var/home
```

The least secure but simplest way to let samba share anything configured is this:

```shell
sudo setsebool -P samba_export_all_rw 1
```
There is much to read on this topic.
Samba shares can be manually configured in `/etc/samba/smb.conf` (see docs), but user shares are also a good option.
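For illustration, a minimal read-only share block in `/etc/samba/smb.conf` (the share name and path are placeholders) might be:

```ini
[media]
  path = /var/tank/media
  read only = yes
  guest ok = yes
```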
An example follows, but you'll probably want to read some docs on this, too:
```shell
net usershare add sharename /path/to/share [comment] [user:{R|D|F}] [guest_ok={y|n}]
```
Like all services, Samba needs to be enabled and started:
```shell
sudo systemctl enable --now smb.service
sudo systemctl status smb.service
```
For those wishing to use `nvidia` or `zfs` images with pre-built kmods AND run SecureBoot, the kernel will not load those kmods until the public signing key has been imported as a MOK (Machine-Owner Key).
Do so like this:
```shell
sudo mokutil --import /etc/pki/akmods/certs/akmods-ublue.der
```
The utility will prompt for a password. The password will be used to verify this key is the one you meant to import, after rebooting and entering the UEFI MOK import utility.
If you installed an image with `-nvidia` in the tag, the nvidia kernel module, basic CUDA libraries, and the nvidia-container-toolkit are all pre-installed.
Note, this does NOT add desktop graphics services to your image, but it DOES enable your compatible nvidia GPU to be used for nvdec, nvenc, CUDA, etc. Since this is CoreOS and it's primarily intended for container workloads, the nvidia container toolkit should be well understood.
The included driver is the latest nvidia driver as bundled by negativo17. This package was chosen over rpmfusion's due to its granular packaging, which allows us to install just the minimal `nvidia-driver-cuda` packages.
If you need an older (or different) driver, consider looking at the container-toolkit-fcos driver. It provides pre-bundled container images with nvidia drivers for FCOS, allowing auto-build/loading of the nvidia driver IN podman, at boot, via a systemd service.
If going this path, you likely won't want to use the `kansei-server` `-nvidia` image, but would use the suggested systemd service. The nvidia container toolkit will still be required, but can be layered easily.
If you installed an image with `-zfs` in the tag (or `fedora-coreos-zfs`), the ZFS kernel module and tools are pre-installed, but like other services, ZFS is not configured to load by default.
Load it with `modprobe zfs` and use the `zfs` and `zpool` commands as desired.
Per the OpenZFS Fedora documentation:
By default, ZFS kernel modules are loaded upon detecting a pool. To always load the modules at boot:

```shell
echo zfs > /etc/modules-load.d/zfs.conf
```
The default mountpoint for any newly created zpool `tank` is `/tank`. This is a problem in CoreOS, as the root filesystem (`/`) is immutable, which means a directory cannot be created as a mountpoint for the zpool. An example of the problem looks like this:

```shell
# zpool create tank /dev/sdb
cannot mount '/tank': failed to create mountpoint: Operation not permitted
```
To avoid this problem, always create new zpools with a specified mountpoint:

```shell
# zpool create -m /var/tank tank /dev/sdb
```
If you do forget to specify the mountpoint, or you need to change the mountpoint on an existing zpool:

```shell
# zfs set mountpoint=/var/tank tank
```
sanoid/syncoid is a great tool for manual and automated snapshot/transfer of ZFS datasets. However, there is no current stable RPM; rather, the project provides instructions for installing via git.
`kansei-server-plus` has pre-installed all the (lightweight) required dependencies (perl-Config-IniFiles, perl-Data-Dumper, perl-Capture-Tiny, perl-Getopt-Long, lzop, mbuffer, mhash, pv), such that a user wishing to use sanoid/syncoid need only install the "sbin" files and create configuration/systemd units for them.