Self Hosting My Way - Homeassistant

3rd post in a series about self-hosting

The previous posts laid the groundwork; now it's time for the first real example.

One of the most popular services people run on their home servers is Home Assistant, and rightfully so. Conveniently, it's also one of the simplest services to run in a container, making it the perfect candidate for the first detailed example.

Home Assistant uses SQLite by default, and the developers have done a good job optimizing its performance. I used to have a Postgres database backing my instance. Sometime in late 2023, while looking for quick tricks to improve performance a bit, I saw a lot of comments recommending SQLite over PostgreSQL for Home Assistant. I don't rely on Home Assistant for storing historical metrics, so the migration was basically just stopping HA and removing the recorder section from configuration.yaml. Unfortunately I didn't take notes of the procedure, but I can't say there's any noticeable performance difference between the databases, so sticking with SQLite simplifies things. The main advantage is that the database is automatically included in backups.
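
For anyone in a similar situation, a recorder section pointing Home Assistant at an external database looks roughly like this (the connection string is a placeholder, not my actual setup); removing the whole block makes Home Assistant fall back to its default SQLite database:

## configuration.yaml -- example recorder block, connection string is a placeholder
recorder:
  db_url: postgresql://hass:password@localhost/homeassistant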

In the earlier posts I wrote about Proxmox, but I have since moved back to running only application containers. My home server is an x86 Alma Linux 9 system. Podman in the EL9 repos is at 4.9.4 at the time of writing, and I have not tested older versions. I would expect any version after Quadlet was merged in Podman 4.4 to work, but can't guarantee it.

Summary

Here's the short version:

export BASE_PATH="/path/to/homeassistant"  # The parent directory for YAML and host volumes
mkdir -p "$BASE_PATH"/{config,quadlet}
  • Under $BASE_PATH/quadlet/
    • Download homeassistant.yaml
    • Replace /path/to/homeassistant with the correct path:
      sed -i "s|/path/to/homeassistant|$BASE_PATH|g" homeassistant.yaml
      
    • Create quadlet file (pod-homeassistant.kube)
    • Symlink files for Quadlet:
      mkdir -p ~/.config/containers/systemd/homeassistant
      cd ~/.config/containers/systemd/homeassistant
      cp -s "$BASE_PATH"/quadlet/* .
      
    • Start the service
      systemctl --user daemon-reload
      systemctl --user start pod-homeassistant
      

Requirements

  • Volumes:

    For running Home Assistant as a container, all that's necessary is a single volume to store persistent data, either a host directory or a named volume. A host directory is the simplest approach.

  • Networking:

    Home Assistant is a bit special, as it's one of those services that's perhaps best run in host networking mode. In general, I recommend avoiding host network mode, because it diminishes the isolation between the host machine and the container, and the logical separation becomes far more difficult to keep track of. Many Home Assistant integrations do, however, rely on broadcast traffic, e.g. for detecting smart plugs or smart TVs, and host networking is required for this traffic to be seen inside the container. Home Assistant itself runs fine without these features; see the sketch right after this list for a namespaced alternative.
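
For reference, if you can live without the discovery-based integrations, a namespaced setup would just publish the web UI port instead of using host networking. A minimal sketch, not what I run myself:

spec:
  containers:
  - name: app
    image: ghcr.io/home-assistant/home-assistant:stable
    ports:
    - containerPort: 8123   # Home Assistant web UI
      hostPort: 8123        # published on the host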

Pod definition

The pod spec for Home Assistant is very simple. Here’s the full YAML: homeassistant.yaml

As you can see, the Kubernetes spec is a bit more complex than the docker-compose format. First, a pod named homeassistant is defined:

apiVersion: v1
kind: Pod
metadata:
  annotations:
  labels:
    app: homeassistant
  name: homeassistant

In the pod, a container named app is added, using the ghcr.io/home-assistant/home-assistant:stable image:

spec:
  containers:
  - name: app
    image: ghcr.io/home-assistant/home-assistant:stable

Podman names containers as podname-containername, so this results in a container named homeassistant-app.

A smart move here would be to use a specific version tag instead of :stable, since Home Assistant can sometimes introduce breaking changes.
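
For example, pinning to a specific release could look like this (the tag below is only an illustration; check the Home Assistant releases for a current one):

  - name: app
    image: ghcr.io/home-assistant/home-assistant:2024.6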

All the data related to Home Assistant resides in a ZFS dataset, mounted at /path/to/homeassistant. This doesn't need to be ZFS, but for services with a lot of persistent data, the ability to take snapshots reduces downtime during backups. Home Assistant does not store a whole lot of data, so a simple stop-backup-restart wouldn't take much time either. I'll refer to this directory as the "service dataset" later on.
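
As a rough illustration of the snapshot workflow (dataset and hostnames are made up, and the zfs commands usually need root or a zfs allow delegation):

# Take an atomic snapshot of the service dataset
SNAP="tank/homeassistant@backup-$(date +%F)"
zfs snapshot "$SNAP"
# Optionally replicate it to another pool or machine
zfs send "$SNAP" | ssh backuphost zfs receive backup/homeassistant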

The /config directory inside the container houses all the persistent data for the containerized version of Home Assistant. The volume and its mount point in the container are defined as follows:

spec:
  containers:
  - name: app
    image: ghcr.io/home-assistant/home-assistant:stable
    env:
    - name: TZ
      value: Etc/UTC
    volumeMounts:
    - mountPath: /config
      name: hass-config
  resources: {}
  volumes:
  - hostPath:
      path: /path/to/homeassistant/config
      type: Directory
    name: hass-config

Note that volumeMounts is a property of the container (spec.containers[N]). In the volumes section, a volume (a hostPath volume in Kubernetes terms) named hass-config is defined, pointing to the path /path/to/homeassistant/config. This volume is then mounted at /config inside the container via the volumeMounts entry.

SELinux

At the time of writing, podman kube play won’t change the SELinux context automatically for directories that are bind mounted inside the containers. For the container to be able to access the host directory, set the context manually:

chcon -t container_file_t -R /path/to/homeassistant/config

I'm using the recursive -R flag here, even though at this stage the directory is empty. This command doesn't need root privileges; everything described in this post should be done as an unprivileged user, since we're using rootless containers.
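
To double-check that the context was applied (the user and role parts of the output will vary):

ls -dZ /path/to/homeassistant/config
# e.g. unconfined_u:object_r:container_file_t:s0 /path/to/homeassistant/config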

To get the correct timezone, the TZ environment variable is set in the container:

spec:
  containers:
  - name: app
    image: ghcr.io/home-assistant/home-assistant:stable
    env:
    - name: TZ
      value: Etc/UTC

In later posts with more complicated configs, I'll show a different method for environment configuration using a ConfigMap, which is roughly equivalent to an .env file with docker-compose. In Home Assistant's case only TZ is needed, so it's simpler to define it inline on the container.
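
As a small preview of that approach (untested here, and ConfigMap support varies between Podman versions, so treat this purely as a sketch), the environment could live in its own document, either in the same YAML file or passed to kube play with --configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: homeassistant-env
data:
  TZ: Etc/UTC
# ...referenced from the container with:
#   envFrom:
#   - configMapRef:
#       name: homeassistant-env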

What's left is to set the pod to use host networking, and optionally set the hostname. Both are applied at the pod level, so any additional containers added to the pod inherit them too.

  volumes:
  - hostPath:
      path: /path/to/homeassistant/config
      type: Directory
    name: hass-config
  hostNetwork: true
  hostname: homeassistant.home.arpa
  restartPolicy: Never

restartPolicy is something I haven't investigated much yet, but restarting the pod will be handled by Quadlet. So far I've been fine with Never.

Quadlet

Quadlet will handle running the pod. There are probably a whole bunch of possible configuration options for it, but I've only needed the most basic ones so far. I have to admit that I'm a bit baffled by the way the Podman team decided to implement Quadlet.

Basically, you can define single containers or pods via a YAML file using Quadlet. This is very handy, as starting the containers e.g. after a reboot is automatic, but it adds a bit to what you need to remember. This is especially obvious when running single containers: instead of (or rather in addition to) remembering how to create a container from the CLI, you also have to figure out how to define it in the Quadlet systemd unit format. I like systemd, but writing unit files manually is always a pain. I'd like to transition from cronie to systemd timers for scheduling tasks, but even a simple scheduled task requires writing a timer file and a service file, and I always need to look up the syntax. Quadlet is a lot like that: very good, but in no way intuitive, less so when you need more than the very basic features.
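
For comparison, single containers can also be defined directly with a .container file instead of going through a kube YAML. A minimal sketch (image and port chosen purely for illustration):

# ~/.config/containers/systemd/whoami.container
[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target

After a daemon-reload, that file shows up as whoami.service.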

For running pods, the same quadlet file can luckily be copied over with very few changes. Here's the one for Home Assistant:

pod-homeassistant.kube

[Install]
WantedBy=default.target

[Kube]
Yaml=homeassistant.yaml

As you can see, this is quite simple. The copy process could be simplified even further by using the same name for the kube YAML file in every service (homeassistant.yaml here); on the other hand, distinct names make the files easier to tell apart.

The .kube file extension tells Quadlet to generate a transient service file (named pod-homeassistant.service here) that executes podman kube play <Kube.Yaml>. The YAML pod definition (homeassistant.yaml in this case) should reside in the same directory as the .kube file.
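
Once the files are in place and systemd has been reloaded (see Deployment below), you can inspect what Quadlet generated; the generator also has a dry-run mode:

systemctl --user cat pod-homeassistant.service
# or preview what the generator produces (the binary path may differ between distributions):
/usr/libexec/podman/quadlet -dryrun -user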

There are more possible configuration options; more on those later with a more complicated service. You can name the .kube file however you like; I use pod- as a prefix for all the services running in a pod.

Deployment

In the previous post I showed the generic layout for the service folder/dataset, here’s how it looks for Home Assistant:

# /path/to/homeassistant
.
├── config
└── quadlet
    ├── homeassistant.yaml
    └── pod-homeassistant.kube

Looking at it this way, this post starts to feel excessively long. All that's left to do is telling systemd/Quadlet to run the service.

Quadlet reads per-user files from $HOME/.config/containers/systemd/. All the files for each quadlet unit can be in this directory or a subdirectory within. Quadlet also handles symlinks. I’ve found it very handy to keep each service’s files in a subdirectory, and symlink them from the service dataset:

# Create the directory
mkdir -p $HOME/.config/containers/systemd/homeassistant
cd $HOME/.config/containers/systemd/homeassistant

# Create symlinks
cp -s /path/to/homeassistant/quadlet/* .

Start the service

The way podman kube play works is that by default it automatically pulls the latest versions of the images. This could be overridden by setting imagePullPolicy on the container in the pod definition, but I opt for the default behaviour.
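
For completeness, a sketch of what that override could look like on the container (I haven't used this myself):

spec:
  containers:
  - name: app
    image: ghcr.io/home-assistant/home-assistant:stable
    imagePullPolicy: IfNotPresent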

Running under systemd, this might introduce a problem: while the images are being pulled, the service is in a starting state, and the default systemd start timeout is fairly short. Quadlet automatically extends the timeout to 5 minutes, but sometimes even this is too short. The Home Assistant image is 1.7G in size, so it may be a good idea to pull the image manually first:

podman pull ghcr.io/home-assistant/home-assistant:stable
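
Another option, which I haven't needed myself, would be raising the start timeout in the .kube file; Quadlet copies a [Service] section into the generated unit:

# pod-homeassistant.kube
[Service]
TimeoutStartSec=900

[Install]
WantedBy=default.target

[Kube]
Yaml=homeassistant.yaml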

The transient systemd units generated by Quadlet can't be enabled separately; as long as the files under $HOME/.config/containers/systemd/ exist, the service will be started after a reboot. This means all that's left to do is reloading systemd and starting Home Assistant:

systemctl --user daemon-reload
systemctl --user start pod-homeassistant

You can follow the logs using either podman cli or journalctl:

podman logs -f homeassistant-app
# OR
journalctl --user-unit pod-homeassistant -f

Navigate to http://<ip-of-server>:8123/ and you should be greeted with the Home Assistant Onboarding wizard!

Home Assistant Onboarding wizard in browser

In the next post on the series I’ll be covering another popular service for self-hosters, Immich.

Bonus: Cloudflare tunnel for external access

One of the neat things about pods is that all the services in a pod can talk to each other over localhost. Home Assistant offers features like location tracking, as long as the instance can be accessed over the public internet. There are paid services for this, and it's also simple to use a plain reverse proxy, if you happen to have one.

For those who don't care about the privacy implications of being in bed with Cloudflare and have a domain with authoritative DNS on CF, a Zero Trust tunnel can be added to the pod very easily. I won't go into details on how to do the tunnel configuration on the Cloudflare side, except that the endpoint to point the tunnel at should be http://localhost:8123. No special setup for the tunnel is necessary.

Since the pod is running in host networking mode, localhost is actually the host machine's localhost, and the isolation is significantly weaker than with normal namespaced container networking.

Once you have the token for the tunnel, add the cloudflared container to the pod by editing the pod YAML. In spec.containers, add:

spec:
  containers:
  - name: app
    image: ghcr.io/home-assistant/home-assistant:stable
    env:
    - name: TZ
      value: Etc/UTC
  - name: cf-tunnel
    image: docker.io/cloudflare/cloudflared:latest
    args:
    - tunnel
    - --no-autoupdate
    - run
    - --token
    - <your token here>
    resources: {}
    securityContext: {}

Then add localhost as a trusted proxy in the Home Assistant configuration:

## /path/to/homeassistant/config/configuration.yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
  - 127.0.0.1

and restart the service:

systemctl --user restart pod-homeassistant

The Cloudflare Tunnel image is pulled automatically and started.
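
To confirm that both containers ended up in the pod:

podman pod ps
podman ps --pod
# the pod typically lists an infra container plus homeassistant-app and homeassistant-cf-tunnel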

You should now be able to access your Home Assistant instance anywhere using the URL/subdomain defined in the Cloudflare Tunnel configuration.

Updated on : Added link to next series post and TL;DR section