Self Hosting My Way - Nextcloud
5th post in a series about self-hosting
Nextcloud, the go-to for most self-hosters, doesn’t really need introductions. This post has been long in the making, mainly due to my recent change from running everything with Podman to Kubernetes with k3s. Long story short, k8s is fun and interesting, but Podman pods are rock solid. I had an accident on New Year’s while configuring cluster ingress, and had to make a quick decision on whether to spend the night figuring it out or switch back to trusty old ways. I chose the latter.
Here are the details on how to run Nextcloud. Don’t be disheartened by the kube YAML, there are only a few places where you need to make edits; the bulk can be copied as-is.
Foreword
I’d like to note that in these instructions I’m using a custom Nextcloud image, built by me using GitHub Actions. The image is based on the community Nextcloud Apache image. The custom image has the requirements for video thumbnail generation and NCDownloader preinstalled, along with a convenience script to enable preview generation for videos.
The official way to run Nextcloud in a container is the All-In-One (AIO) image, which runs everything in a single container and then some. This is bad practice for containers, and I am thoroughly against mounting the Docker socket into a container unless it is absolutely necessary. In the case of Nextcloud, I don’t think it is justified. Not to mention, the AIO image is probably a handful to run under Podman due to these bad practices. Don’t even get me started on trying to run AIO rootless.
The Apache image has served me well for many years, and since it relies on essential container principles, it’s very portable. My Nextcloud instance has been running under Docker, LXC, bare metal, LXD, Podman and Kubernetes, all without starting from scratch. For a short while I even tried the FPM version of the image, but switched back.
If you don’t trust me (that’s OK and good!), you can build that image yourself after inspecting the Dockerfile in the repo, or substitute it with the community Apache image. In the latter case you won’t be able to have previews for videos, as ffmpeg is required to generate the thumbnails.
Another thing to note is that the image is quite large, so pre-pulling it manually using podman pull <image-name> is recommended to avoid a startup timeout.
Lastly, the
Summary
Here's the short version:
- Create directories:
export BASE_PATH="/path/to/nextcloud" # The parent directory for YAML and host volumes
mkdir -p "$BASE_PATH"/data,redis,psql,quadlet}
cd $BASE_PATH; chcon -t container_file_t -R */ # Set SELinux context for the directories on SELinux systems
- Under $BASE_PATH/quadlet/, download nextcloud-pod.yaml
- Edit at least the following values, optionally other values:
POSTGRES_USER
POSTGRES_DB
POSTGRES_PASSWORD
NEXTCLOUD_ADMIN_USER
NEXTCLOUD_ADMIN_PASSWORD
NEXTCLOUD_TRUSTED_DOMAINS
PHP_MEMORY_LIMIT
PHP_UPLOAD_LIMIT
TZ
- Don’t edit POSTGRES_HOST, REDIS_PORT or REDIS_PASSWORD
- Replace /path/to/nextcloud with your chosen $BASE_PATH in the volumes section:
sed -i "s|/path/to/nextcloud|$BASE_PATH|g" "$BASE_PATH"/quadlet/nextcloud-pod.yaml
- Create quadlet file (pod-nextcloud.kube)
- Change PublishPort if 8080 doesn’t suit you, e.g. PublishPort=3000:80
- Symlink files for Quadlet:
mkdir -p ~/.config/containers/systemd/nextcloud
cd ~/.config/containers/systemd/nextcloud
cp -s "$BASE_PATH"/quadlet/* .
- Start the service
- If using my image (nextcloud-previews) with preinstalled preview requirements, run the convenience script in the running app container:
podman exec -it nextcloud-app enable-previews
- Enjoy having control of your personal cloud!
Requirements
Volumes:
3 persistent volumes are defined:
- App data - contains the full web root
- PostgreSQL data
- Redis - doesn’t strictly need the persistent volume, but it’s nice to persist login sessions between restarts
Networking:
No special networking is required, everything is accessed through the HTTP port (80 in the container namespace).
Containers:
- PostgreSQL
- Nextcloud app container
- Redis
- Cron (duplicate of the app container)
Nextcloud requires a maintenance cron job to run every 5 minutes. Having a separate container running crond means there’s no need to rely on supervisord or host cron, keeping the deployment self-contained. In a future post I’ll probably write about adding ClamAV and the Preview Pre-Generator cron job to the deployment. You’ll need to mount the www-data user’s crontab, or change the command of the cron container to add the pre-generation job before starting crond - something like the sketch below.
System resources
Nextcloud will run on a Raspberry Pi, even on a 1G model. It is also often said to be painfully slow and clunky. My experience has been that using PostgreSQL and Redis helps a lot, but they cannot substitute for sufficient RAM and SSD storage.
For a comfortable experience, have at least 2G of RAM available for the deployment, and store at least the database on an SSD. A USB-mounted disk does not count; USB is unreliable and slow, even when the data transfer runs at full speed.
Pod definition
Download the full template here: nextcloud-pod.yaml
I won’t go through the definition in much detail, as there’s nothing special in it. You can dig up more details from the previous post about Immich, should you wish.
The app container serves HTTP and runs the PHP backend using Apache, psql serves the main database, and the redis container is used for file locking and session storage.
One thing to note is that the Apache image is built so that the application runtime source files, being PHP, are all stored under the webroot. The full webroot is a volume, meaning you are essentially storing the whole Nextcloud application on the host volume.
This is a bit wasteful, but your files will most likely need far more storage, so the “wasted” space soon becomes negligible. During the first installation, the initialization scripts copy the runtime files in place, populating the volume.
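To see this in practice, the data volume on the host should contain the whole Nextcloud source tree after the first start. A quick check, assuming the $BASE_PATH layout from the summary:
# the host volume holds the full webroot after initialization
ls "$BASE_PATH"/data
# expect Nextcloud's own files here: index.php, occ, config/, apps/, data/, ...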
When all of the POSTGRES_{USER,DB,PASSWORD,HOST} variables are set, the image will be autoconfigured. The initial admin user is also created based on the NEXTCLOUD_ADMIN_ variables.
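Once the pod is up, you can check that the automatic installation went through with occ. The container name below is an assumption, matching the nextcloud-app name used in the summary:
# "installed: true" in the output means the database and admin user were created
podman exec -u www-data nextcloud-app php /var/www/html/occ status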
Tune the PHP_ variables according to your resources. Based on my experience, I recommend at least 2G for PHP_MEMORY_LIMIT. PHP_UPLOAD_LIMIT determines the maximum upload file size. If you add a reverse proxy in front, it should be configured to match the upload limit.
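To double-check that the limits actually took effect, you can ask PHP inside the running container (again assuming the container is named nextcloud-app):
# print the effective memory and upload limits
podman exec nextcloud-app php -r 'echo ini_get("memory_limit"), " ", ini_get("upload_max_filesize"), PHP_EOL;'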
The cron container runs the maintenance cron jobs. Keeping with the one-process-per-container paradigm, it runs as a separate container. Running two instances of the same image does not need twice the storage; only the data generated during runtime takes up extra space.
Place the pod definition file under a directory named quadlet under the base directory for the deployment data.
Quadlet
Like in the previous posts, Quadlet will handle starting and keeping the service running via systemd.
You can separate the configMap into a dedicated file if you wish, like it is done with Immich.
Podman also supports secrets and secretMaps. The posts in this series will be updated at Some Time In the Future (tm) to utilize secretMaps instead of plain configMaps.
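As a teaser, here’s a minimal sketch of creating a Podman secret that a future revision of the YAML could reference; the secret name and value are hypothetical:
# read the secret value from stdin ("-" tells podman to use stdin)
printf 'change-me' | podman secret create nextcloud-postgres-password -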
[Install]
WantedBy=default.target
[Kube]
Yaml=nextcloud-pod.yaml
PublishPort=8080:80
If you plan on exposing your instance to the public internet (making it accessible outside your house without a VPN), TLS is not optional. The simplest way is to add a reverse proxy in front of this deployment.
The file should be named after the systemd service we want to create. I use pod-nextcloud.kube, so the generated systemd service unit will be named pod-nextcloud.service. Save it under the quadlet directory and link the files into the Quadlet unit directory:
mkdir -p ~/.config/containers/systemd/nextcloud
cd ~/.config/containers/systemd/nextcloud
cp -s /path/to/nextcloud/quadlet/* .
The directory structure will look like this, allowing to back up everything as a single directory:
# /path/to/nextcloud
.
├── data
├── psql
├── redis
└── quadlet
├── nextcloud-pod.yaml
└── pod-nextcloud.kube
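If you want to see what Quadlet will generate from these files before reloading systemd, the quadlet binary has a dry-run mode. The path below is where Fedora installs it; other distributions may place it elsewhere:
# print the generated systemd units without installing them
/usr/libexec/podman/quadlet -dryrun -user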
SELinux
If you have SELinux enabled, change the context for the mount directories manually, just in case:
cd /path/to/nextcloud; chcon -t container_file_t -R */
Start the service
It is a good idea to pre-pull the images manually to avoid timeouts.
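For example, something along these lines; the exact image names and tags must match whatever your nextcloud-pod.yaml references, so treat these as placeholders:
podman pull docker.io/library/postgres
podman pull docker.io/library/redis
podman pull <your-nextcloud-image>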
Reload systemd and start the service, then navigate to http://<your-server-ip>:8080 and you should see the Nextcloud login page!
systemctl --user daemon-reload
systemctl --user start pod-nextcloud.service
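If the page doesn’t come up, checking the service and pod state is a good first step:
systemctl --user status pod-nextcloud.service
podman pod ps
podman ps --pod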
Fresh Nextcloud login page (Finnish locale)