After setting up my Intel NUC with Debian 12 on top of it, I decided to configure Nextcloud as the first application on this new setup. To make the setup work for my needs, I decided on the following:

  • DNS name for accessing the installation (static DNS entry on the network + valid TLS certificate)
  • HAProxy for terminating TLS and forwarding traffic to the Nextcloud installation
  • Nextcloud and required services in the container

The requirements noted above came out of experimentation, but I had a rough idea of what I wanted to do with the setup and how to keep it clean for the new services.

Router configuration

Up to this point, I have used the following DNS servers in my DHCP configuration of the router:

  • 1.1.1.1 - Cloudflare
  • 8.8.8.8 - Google
  • 8.8.4.4 - Google

Mostly because they're always available, but also because they were the first thing that came to mind when I initially set up the router. I might change the resolvers in the future as I review the privacy of this setup.

Pushing these servers directly via DHCP to all of the clients connected to my network meant that I had no caching on the router side, only on the client side. Aside from improving the performance of name resolution, caching on the router side would allow me to override certain entries, which is exactly what I needed.

There's also an option to override DNS names with redirect rules in the dstnat chain for all DNS requests coming from the local network but not destined to the router IP (e.g. a client that wants to use 8.8.8.8 directly). But I haven't configured that part yet, so I'll omit it from this article.

In order to configure the Mikrotik to respond to DNS requests, a few options had to be configured. First, I had to configure the DNS settings of the router itself.

The “Allow Remote Requests” option is the important one here, as it makes the router respond to queries sent to it. Aside from that, just ensure that your firewall allows DNS traffic to flow back and forth. Remember that most DNS queries are UDP, so there's no connection per se. The rule that allowed the traffic to flow for me was:

chain=input action=accept connection-state=established,related,untracked
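
For reference, the DNS part can also be done from the RouterOS terminal instead of the UI; a rough equivalent of the settings described above (with the upstream servers listed earlier) would be:

/ip dns set servers=1.1.1.1,8.8.8.8,8.8.4.4 allow-remote-requests=yes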

I then tested DNS resolution from my local machine:

[ivan@tomica-main ~]$ dig tomica.net @10.42.0.1

; <<>> DiG 9.18.17 <<>> tomica.net @10.42.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61362
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;tomica.net.			IN	A

;; ANSWER SECTION:
tomica.net.		49	IN	A	65.9.25.70
tomica.net.		49	IN	A	65.9.25.96
tomica.net.		49	IN	A	65.9.25.17
tomica.net.		49	IN	A	65.9.25.34

;; Query time: 1 msec
;; SERVER: 10.42.0.1#53(10.42.0.1) (UDP)
;; WHEN: Sat Aug 26 08:57:49 CEST 2023
;; MSG SIZE  rcvd: 92

With resolution confirmed to work, I decided to configure my local network's DHCP options to serve the router as the only DNS server.
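
On the Mikrotik side this boils down to pointing the DHCP network's DNS server at the router itself; assuming the 10.42.0.0/22 network and the 10.42.0.1 router address that appear elsewhere in this post, it is roughly:

/ip dhcp-server network set [find address=10.42.0.0/22] dns-server=10.42.0.1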

Once this was in place, resolving BLAH.SOMETHING.SOMETHING to a destination in my local network was only a matter of adding a static DNS entry to the router.
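
Adding such an entry from the terminal looks roughly like this, with 10.42.0.10 standing in for whatever address the NUC actually has on the network:

/ip dns static add name=BLAH.SOMETHING.SOMETHING address=10.42.0.10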

HAProxy configuration

To route traffic towards the Nextcloud container, HAProxy has been configured with the following settings:

frontend main
	bind *:80
	bind *:443 ssl crt /etc/haproxy/certs/

	http-request redirect scheme https unless { ssl_fc }

	# https://access.redhat.com/solutions/3986281
	http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
	http-request set-header X-Forwarded-Proto https if { ssl_fc }

	http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"

	acl local_network src 10.42.0.0/22
 	acl is_nextcloud hdr(host) -i MYNEXTCLOUDURL
	acl url_discovery path /.well-known/caldav /.well-known/carddav

	http-request redirect location /remote.php/dav/ code 301 if url_discovery is_nextcloud

	use_backend nextcloud if is_nextcloud local_network
	default_backend non_existent

backend nextcloud
	mode http
	option httpchk
	http-check send meth GET uri /
	http-request set-header Host MYNEXTCLOUDURL
	server local-docker-nextcloud localhost:10080 check

backend non_existent
	http-request return status 200 content-type "text/plain" string "You shouldn't be here!"

This binds both ports 80 and 443 on the host, where the listener on 443 uses TLS. Certificates are loaded from the /etc/haproxy/certs/ directory, and HAProxy is capable of recognizing and returning the correct certificate for the requested host (based on SNI).

Then there are a few http related things that are being done, namely:

  • force redirect to HTTPS
  • set X-Forwarded-Proto header depending on the incoming protocol
  • set HSTS header (after the first visit, browsers will cache the information that the host uses HTTPS for roughly 6 months - so enable this only once you're sure you don't need plain HTTP for testing or whatnot)

The few ACL rules either route traffic to the Nextcloud instance or let it fall through to the default backend. I could have configured explicit deny rules here, but decided in the end that the non_existent backend is also a nice solution.

As you can see, there are two backends where requests may end up after frontend processing:

  • nextcloud which forwards requests towards the localhost:10080 port where the nextcloud container is listening
  • or non_existent, which returns “200 OK” with the message “You shouldn’t be here!”.

To add additional applications, I just have to add another acl rule, a use_backend directive, and a backend to the HAProxy configuration. If I decide to run some application on multiple servers, I'd just add those servers to the backend's server list. HAProxy is pretty flexible and nice, and if you haven't tried it yet, please do. The folks developing it are doing an amazing job! It's performant, powerful, and, once you understand it, quite simple to use.
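
For illustration, routing a hypothetical second application (say, one listening on localhost:10081 and served under its own hostname - all the names and the port here are made up for the example) would mean adding something along these lines, mirroring the Nextcloud rules above:

# added to the "frontend main" section
	acl is_otherapp hdr(host) -i MYOTHERAPPURL
	use_backend otherapp if is_otherapp local_network

# and a new backend section
backend otherapp
	mode http
	server local-docker-otherapp localhost:10081 check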

TLS termination

Since I knew from the start that I'd have a few applications running on this setup, I decided to issue a wildcard certificate from Let's Encrypt. The certificate is issued with the certbot utility and validation is done via the Route53 plugin. For this to work I did the following:

  • the whole subdomain used in the setup (BLAH.SOMETHING.SOMETHING) has been delegated to a separate Route53 DNS zone to keep the scope of permissions narrow
  • a separate IAM user has been created with access only to that hosted zone in AWS
  • credentials are configured in the /root/.aws/credentials config file so certbot can use them to authenticate to AWS (a minimal sketch of that file is shown below)
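
The credentials file itself is just the standard AWS format; a minimal sketch, with obviously made-up keys, would be:

[default]
aws_access_key_id = AKIAEXAMPLEEXAMPLE
aws_secret_access_key = abc123EXAMPLEsecretEXAMPLE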

Although this whole process is explained on the certbot-dns-route53 documentation page (https://certbot-dns-route53.readthedocs.io/en/stable/), here's the exact configuration below.

AWS policy

The AWS policy is quite simple and allows changes only in that particular DNS zone:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/Z0BLAH"
            ]
        }
    ]
}
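
Before letting certbot loose, a quick sanity check with the new user's credentials (if you happen to have the AWS CLI around) is to confirm that hosted zones can at least be listed:

aws route53 list-hosted-zones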

Certbot configuration

The certificate was then issued with:

certbot certonly --dns-route53 -d BLAH.SOMETHING.SOMETHING -d '*.BLAH.SOMETHING.SOMETHING'

The only remaining question was how HAProxy would load these certificates. Easy: I concatenated them into the /etc/haproxy/certs/BLAH.SOMETHING.SOMETHING.pem file and voila. For renewing certificates I added a simple, quickly hacked together script in /etc/letsencrypt/renewal-hooks/post/haproxy.sh with something like this:

#!/usr/bin/env bash

cat /etc/letsencrypt/live/BLAH.SOMETHING.SOMETHING/{fullchain,privkey}.pem > /etc/haproxy/certs/BLAH.SOMETHING.SOMETHING.pem

systemctl reload haproxy.service
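
One small detail: certbot only runs hooks from that directory if they are executable, and a staging dry run is a cheap way to verify the renewal itself works (whether the directory hooks fire during a dry run depends on the certbot version, as far as I remember):

chmod +x /etc/letsencrypt/renewal-hooks/post/haproxy.sh
certbot renew --dry-run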

I’ll probably end up reworking this part a bit later on, but it works for now.

Nextcloud configuration

At last, Nextcloud. To be honest, there's nothing special here. At first I was thinking about binding ports 80 and 443 directly to the Nginx container and passing requests on to the nextcloud:fpm container. I could have just as easily gone with the Apache-based Nextcloud image, which has both the web server (Apache, obviously) and PHP (mod_php) configured in the same container. But here we are: in the end I decided not to bind 80/443 to the Nginx container, though Nginx stayed in the stack. It doesn't really matter which variant you choose in this case.

For configuration and easy provisioning I am using docker-compose, and my compose.yml looks like this:

version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: postgres:15
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=SOMEPASSWORD ;-)

  app:
    image: nextcloud:fpm
    restart: always
    links:
      - db
      - redis
    volumes:
      - nextcloud:/var/www/html
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=SOMEPASSWORD ;-)
    depends_on:
      - db

  web:
    image: nginx
    restart: always
    ports:
      - 10080:80
    links:
      - app
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d/:/etc/nginx/conf.d/:ro
    volumes_from:
      - app

  redis:
    image: redis:alpine
    restart: always

As you can see, there are a few components in there:

  • persistent volumes db and nextcloud where the data is saved. Persistent volumes are there so I don't lose the data residing in the containers upon each container restart. That would be unfortunate. :-) From the host's perspective, these volumes are basically folders under /var/lib/docker/volumes
  • db container with a PostgreSQL 15 database running. Data is saved to the db persistent volume
  • app container running Nextcloud powered by PHP-FPM, with access to the db and redis containers
  • web container running only Nginx, but also exposing port 10080 on the host so HAProxy can send requests there. Nginx serves static files and forwards PHP-related requests to the app container for processing
  • redis container for in-memory file locking
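
With compose.yml and the Nginx configuration files in place, bringing the whole stack up and checking on it is the usual docker-compose routine:

docker-compose up -d
docker-compose ps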

Nginx configuration

The Nginx configuration is quite simple. I took a default Nginx config from somewhere and added an include directive to load my conf.d folder, where I placed my Nextcloud vhost configuration. I could have crammed everything into nginx.conf, but I think this approach makes things cleaner.

worker_processes auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log main;

    sendfile        on;
    server_tokens   off;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}

One thing I probably have to do here is investigate where exactly those error and access logs end up. The last thing I'd want is to fill up disk space with them :-) The idea is to ship them to some central logging system, but that will be done in a later phase of this project. Things are good as they are for now, I'd say.
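
A quick way to check is to look at what those paths actually are inside the container; if I remember correctly, the official nginx image symlinks them to the container's stdout/stderr, in which case they end up in docker logs rather than on disk. Assuming the same nextcloud_<service>_1 naming that shows up in the cron job further down, something like:

docker exec nextcloud_web_1 ls -l /var/log/nginx/
docker logs --tail 20 nextcloud_web_1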

The vhost configuration, on the other hand, has been taken from the Nextcloud documentation's example, with only the slightest adjustments to account for the domain change.

upstream php-handler {
    server app:9000;
}

server {
    listen 80;
    server_name MYNEXTCLOUDDOMAIN;

    # set max upload size
    client_max_body_size 1024M;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                      "no-referrer"       always;
    add_header X-Content-Type-Options               "nosniff"           always;
    add_header X-Download-Options                   "noopen"            always;
    add_header X-Frame-Options                      "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies    "none"              always;
    add_header X-Robots-Tag                         "noindex, nofollow" always;
    add_header X-XSS-Protection                     "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Path to the root of your installation
    root /var/www/html;

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that Nginx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        #fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }

    location ~ \.(?:css|js|svg|gif)$ {
        try_files $uri /index.php$request_uri;
        expires 6M;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}

Cleanup of this configuration and optimization will be done at some later point in time.

Nextcloud configuration

It is worth noting that during the initial setup of Nextcloud I had a few additional variables specified in the app section of the compose.yml file, in order to create an admin user on the system and specify trusted domains.

NEXTCLOUD_ADMIN_USER=blah
NEXTCLOUD_ADMIN_PASSWORD=blah
NEXTCLOUD_TRUSTED_DOMAINS=MYNEXTCLOUDURL
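
For completeness, a rough sketch of how they sit in the app service, next to the database variables shown earlier:

  app:
    image: nextcloud:fpm
    environment:
      - POSTGRES_HOST=db
      # ... the other POSTGRES_* variables from above ...
      - NEXTCLOUD_ADMIN_USER=blah
      - NEXTCLOUD_ADMIN_PASSWORD=blah
      - NEXTCLOUD_TRUSTED_DOMAINS=MYNEXTCLOUDURL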

Once Nextcloud was installed, I added a few configuration options related to file locking to the config/config.php configuration file, which you can access directly from the host (/var/lib/docker/volumes/nextcloud_nextcloud/_data/config/config.php).

  'filelocking.enabled' => true,
  'memcache.locking' => '\\OC\\Memcache\\Redis',
  'redis' =>
  array (
    'host' => 'redis',
    'port' => 6379,
    'timeout' => 0.0,
  ),
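
If you'd rather not edit config.php by hand, the same values can, as far as I know, also be set through the occ tool from the host, for example:

docker exec --user 33 nextcloud_app_1 php occ config:system:set filelocking.enabled --value true --type boolean
docker exec --user 33 nextcloud_app_1 php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
docker exec --user 33 nextcloud_app_1 php occ config:system:set redis host --value redis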

Nextcloud also has to execute some jobs periodically, and there are three options for running them in the background:

  • Ajax cron
  • Webcron
  • System Cron

Neither the AJAX-based option (cron is checked and triggered on each page load) nor Webcron worked well for me, so I decided to use the good old system cron service for this purpose. Since I don't think adding a cron service to the Docker container would be a good idea (I'd essentially have to rebuild the image and maintain my own fork), I placed the following command in the host system's crontab:

*/5 * * * * /usr/bin/docker exec --user 33 nextcloud_app_1 php -f /var/www/html/cron.php
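
One related detail: Nextcloud itself also has to be told to use cron as its background job mode, either on the admin settings page or, if I remember correctly, via occ:

docker exec --user 33 nextcloud_app_1 php occ background:cron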

Future plans

Now that this works, the plan is to work a bit on filtering rules, authentication, etc., and to allow access to Nextcloud from outside my network. I'm still uncertain whether I'll expose the setup to the public internet by forwarding certain ports on the router and adding basic auth in front of it, or set up a full-blown VPN into my network. Each approach has its pros and cons, so I have to meditate on this a bit more.

In the meantime, there are other services which I want to move, so I have something to play with :-)