Homematic CCU firmware running as docker container on arm and (emulated) x86.
This project downloads the Homematic CCU firmware and re-packages it as a Docker image. You can then start it on your Raspberry Pi. Other ARM-based boards might also work (see the Dependencies section). You can deploy on x86, but it will be slow and some components fail. I am working on a true multi-arch Docker container for this.
An automated build pushes new Docker images to Docker Hub, where you can check the available versions.
Support for CCU2 has been removed from HEAD. Please check out the ccu2 branch if you need to build the CCU2 images. There is also a separate Docker Hub repository with old CCU2 images.
- deploy original CCU firmware to Docker and Kubernetes
- addons, ssh and any other feature from the original CCU not listed under the Not Working section
- Homematic and Homematic IP supported (wired not tested)
- automatically install support for Homematic HW - thanks to Alex's piVCCU project
- partial multiarch:
- builds on x86
- runs on x86 but HMServer does not start
- displays Duty Cycle for CCU Gateways - thanks to Andreas and Jens
- keep configuration in a remote location specified with the env variable PERSISTENT_DIR - you can use any location supported by rsync
- Settings -> Control Panel -> Network Settings
- Display when there is a new CCU version available
- true multiarch dynamic docker
- current multiarch is based on qemu
- will use OCCU as base as done by this other project
- looking at USB adapter for dual stack with a single device
- automatically build new docker containers when new CCU versions are published by eQ-3.
- Docker
- Kubernetes and Docker Swarm can be used for high-availability setups. See the cluster section for more details.
- ARM HW. The following combinations were tested for both Homematic and Homematic IP:
- Raspberry with HM-MOD-RPI-PCB
- Orange Pi Plus 2 with HM-MOD-RPI-PCB
- OdroidXU4 with HM-LGW-O-TW-W-EU and HmIP-RFUSB
- One or more adapters to connect to the Homematic and/or Homematic IP devices
- Homematic LAN GW for Homematic devices:
- HM-LGW-O-TW-W-EU
- RaspberryMatic in LAN GW mode
- The Docker computer and the LAN GW need to be in the same network. No additional SW is needed on the Docker computer. The connection is configured using the CCU web UI (Settings -> LAN Gateway configuration)
- Homematic USB IP adapter
- HmIP-RFUSB
- the required kernel module is `cp210x`, which is available in most Linux systems. The `deploy.sh` script will add a udev rule to enable it automatically when the adapter is plugged in (a purely illustrative sketch of such a rule follows this list).
- HM-MOD-RPI-PCB, RPI-RF-MOD, HB-RF-USB and emulated adapters
- additional packages need to be installed on the host to support Homematic and Homematic IP in parallel. The required packages come from the piVCCU project, which supports multiple ARM devices
- the `deploy.sh` script will try to install the piVCCU packages for you. If it does not work, please follow these instructions to install `pivccu-modules-dkms`, `pivccu-devicetree-armbian` (if you are on Armbian) and `pivccu-modules-raspberrypi` (if you use a Raspberry with Raspbian). You do not need to install a network bridge since Docker manages that. A manual installation sketch follows this list.
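If `deploy.sh` cannot install the piVCCU packages, the manual steps might look like the sketch below. It assumes the piVCCU apt repository has already been added as described in the piVCCU documentation; pick only the packages that match your board and OS.

```bash
# Manual piVCCU kernel module installation (sketch - repository setup not shown)
sudo apt update
sudo apt install pivccu-modules-dkms              # required on all boards
sudo apt install pivccu-devicetree-armbian        # only if you run Armbian
sudo apt install pivccu-modules-raspberrypi       # only on a Raspberry Pi with Raspbian
```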
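For the HmIP-RFUSB, `deploy.sh` normally creates the udev rule for you. Purely as an illustration, such a rule could look like the sketch below; the USB IDs (`10c4:8c07`) are an assumption on my side, so verify them with `lsusb` before using anything like this.

```bash
# Illustrative only - deploy.sh normally handles this. Binds the HmIP-RFUSB to the
# cp210x driver when it is plugged in. Verify the USB IDs with `lsusb` first!
cat <<'EOF' | sudo tee /etc/udev/rules.d/99-hmip-rfusb.rules
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="8c07", RUN+="/sbin/modprobe cp210x", RUN+="/bin/sh -c 'echo 10c4 8c07 > /sys/bus/usb-serial/drivers/cp210x/new_id'"
EOF
sudo udevadm control --reload-rules
```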
- ssh into the target computer (preferably an ARM device)
- git clone this repository
- (Optional) `cp -a settings.template settings` and edit the copied `settings` file
- `sudo ./deploy.sh`
- you can also use env variables such as `MAYOR_CCU_VERSION=2` to deploy a CCU2 firmware. See `settings.template` for all available options
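For reference, a complete run of these steps might look like the sketch below. The clone URL is a placeholder for this repository's URL.

```bash
# Deployment sketch - replace <repository URL> with the URL of this project
git clone <repository URL> docker-ccu
cd docker-ccu
cp -a settings.template settings        # optional: edit the copied settings file
sudo ./deploy.sh                        # or e.g.: sudo MAYOR_CCU_VERSION=2 ./deploy.sh
```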
After the above steps you can connect to the CCU web UI on port 80 of the Docker host. The CCU container will be restarted automatically when the computer boots: it is started in auto-restart mode. With `docker ps` you can check its status.
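For example, assuming the container name `ccu` used elsewhere in this README:

```bash
docker ps --filter name=ccu   # show the CCU container and its status
docker logs -f ccu            # follow the CCU container logs
```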
This is only needed if you do not want to use the pre-built Docker images.
- git clone this repository
- (Optional) `cp -a settings.template settings` and edit the copied `settings` file
- `sudo ./build.sh`
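A local build could then look like this sketch (again, the clone URL is a placeholder):

```bash
# Build the image locally instead of pulling it from Docker Hub
git clone <repository URL> docker-ccu
cd docker-ccu
cp -a settings.template settings   # optional: adjust build options first
sudo ./build.sh
```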
git pull
./pull.sh
./deploy.sh
Optionally you can use the CCU_VERSION variable to select a particular version.
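For example, to update to one specific version rather than the latest (the value is a placeholder):

```bash
# Deploy a specific firmware version instead of the latest one
CCU_VERSION=<desired version> ./deploy.sh
```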
Your CCU settings will be preserved.
You can move your settings from an existing CCU into the docker CCU, either via ssh or using the native backup/restore support in the CCU (recommended).
If you use a HM-MOD-RPI-PCB and Homematic is not working after restoring the backup, your old system was likely running without Homematic IP compatibility. You need to use the new dual-stack mode. The easiest way to achieve that is to execute the following command: `docker exec ccu sh -c "rm /etc/config/rfd.conf && /etc/init.d/S61rfd restart && cat /etc/config/rfd.conf"`. After this please check that `Improved Coprocessor Initialization = true`.
- log into your HW CCU web UI
- go to Settings -> Security -> create backup
- go into the docker CCU web UI
- go to Settings -> Security -> import backup
Please notice that the following ssh method does not support switching HW versions: to migrate from a HW CCU2 to the docker CCU you need to use the UI backup/restore described above.
- Enable ssh in your CCU2. Instructions (in German) here
- ssh into your target computer
sudo ./undeploy.sh
rsync -av [your CCU IP]:/usr/local/* /var/lib/docker/volumes/ccu_data/_data/
./deploy.sh
You can deploy this docker container into a docker cluster with Kubernetes or Docker Swarm. This allows a highly available configuration where the home automation stays up even if the HW dies. This is useful because a Raspberry is cheap, so you should not depend on a single one to ensure your house stays warm ;-) .
You can check this example of a highly available deployment. There I keep the configuration in a cluster persistent volume (glusterfs), so if one of the computers goes down the CCU is "just" redeployed automatically to another available computer.
You can also deploy this docker image to a docker swarm. For this you need to:
- set up a docker swarm on multiple Raspberries. They all need to have local antennas unless you use a LAN gateway
- have a shared folder mounted at the same location in all the members of the cluster. Examples:
- Mount a NAS folder. This is simple but then the NAS is the single point of failure
- Cluster FS such as glusterfs. TBD: upload instructions
- Change settings parameters
- Set DOCKER_CCU_DATA to the absolute path of your shared folder. Example: `/media/glusterfs/ccu`
- Set DOCKER_MODE to `swarm`
- Set DOCKER_OPTIONS to `--constraint node.labels.architecture==arm`
./deploy.sh
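Putting the swarm settings together, the relevant part of the `settings` file might look like the sketch below, assuming `settings` is a shell-style file read by `deploy.sh`; the folder path and node label are just the examples from above.

```bash
# Swarm-related settings sketch - adjust path and constraint to your cluster
DOCKER_CCU_DATA=/media/glusterfs/ccu                          # shared folder visible on every node
DOCKER_MODE=swarm                                             # deploy as a swarm service
DOCKER_OPTIONS="--constraint node.labels.architecture==arm"   # schedule only on ARM-labelled nodes
```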