
Creating a Home Server

2021-03-03

I have recently built my own home server, using my old desktop PC for it. I thought I would write down the process a little. However, since there are already a lot of good resources out there, I will mainly try to find and link some of them here, and then add my own notes: things that were maybe not immediately clear from the linked article, or whose purpose I didn't immediately see.

WireGuard and VPN Tunneling

To secure the server, I have decided to tunnel everything through a VPN. After some reading, I found that this was the easiest way to secure everything on my server. An obvious drawback of this approach is that I will need to set up the VPN on every device from which I want to access the server. For me personally, this is not much of a problem (once it is set up, it really is easy to enable). It may become a slight problem if I want my family to be able to access the server as well. We will see.

As for the VPN itself, everybody recommended WireGuard, and I think that was a good suggestion. I have used this guide to get WireGuard up and running:

https://mikkel.hoegh.org/2019/11/01/home-vpn-server-wireguard/

The main WireGuard website (www.wireguard.com) was also useful. Note that the guide above tells you to set the default gateway of your home router (where the home server lives) as the DNS server of the tunnel. I didn't understand why this was needed (and to be honest, I still don't fully understand it) and, stubborn as I am, left this step out. The handshake then succeeded, but no internet traffic was passing through. Once I set the default gateway of my home router as DNS, it worked, so apparently this step really is needed.

Besides that, what I didn't understand at first was why I need to set up these "local" addresses (for instance 10.14.0.0 in the guide). In effect, these are the addresses that each peer (the clients, and the server, which is really just another peer) is assigned inside the VPN.
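To make this a bit more concrete, a client-side config roughly along the lines of the guide looks like the following sketch. The keys are placeholders, the 10.14.0.x addresses and the endpoint are just examples, and 51820 is merely WireGuard's commonly used default port:

# /etc/wireguard/wg0.conf on a client (all values are placeholders)
[Interface]
# the "local" address this peer gets inside the VPN
Address = 10.14.0.2/24
PrivateKey = <client-private-key>
# DNS to use while the tunnel is up - the default gateway of the home router
DNS = 192.168.1.1

[Peer]
# the home server, which is just another peer
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# send all traffic through the tunnel
AllowedIPs = 0.0.0.0/0

The tunnel can then be brought up and down with wg-quick up wg0 and wg-quick down wg0.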

Furthermore, something the guide does not mention is to look out for firewall rules. In my case that was ufw, in which I needed to allow the port of the VPN tunnel.
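For illustration, with WireGuard's commonly used default port that would be something like this (adjust the port to whatever you configured):

# allow incoming WireGuard traffic (UDP) through ufw
sudo ufw allow 51820/udp
# verify
sudo ufw status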

Setting up a New WireGuard Client

WireGuard itself doesn't know about the concepts of server and client - all it knows about are peers. Therefore, the guide linked above works just as well for setting up any future client. I have done so, successfully, with the iPhone WireGuard app. On Ubuntu, however, where I use Gnome, VPN connections are generally handled by NetworkManager, so I have used a NetworkManager plugin. This lets me enable and disable the VPN easily.

https://askubuntu.com/questions/1233034/wireguard-vpn-client-gui

https://github.com/max-moser/network-manager-wireguard

With this, I can enable the VPN directly in the dropdown settings menu of Gnome.
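As a small aside: newer versions of NetworkManager (1.16 and later, if I am not mistaken) also support WireGuard natively, so instead of the plugin the tunnel can be imported on the command line, roughly like this (assuming the config file from above):

# import the WireGuard config as a NetworkManager connection named wg0
nmcli connection import type wireguard file /etc/wireguard/wg0.conf
# bring it up or down
nmcli connection up wg0
nmcli connection down wg0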

SSH with Public/Private Key Authentication

This was very straightforward. I already had password-based authentication set up from the start. I then used this tutorial to set everything up for public/private key authentication:

https://kb.iu.edu/d/aews
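As far as the keys themselves go, the gist is something like the following (user and host are placeholders; I use an ed25519 key here, but other key types work as well):

# on the client: generate a key pair
ssh-keygen -t ed25519

# copy the public key to the server (appends it to ~/.ssh/authorized_keys there)
ssh-copy-id myuser@a.example.com

# from now on, log in using the key
ssh myuser@a.example.com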

Note also that further down, the article mentions .ssh/config. This is very handy and, in my opinion, indispensable when using public/private key authentication. Read up on it in the article.
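For reference, an entry in ~/.ssh/config looks roughly like this (names and paths are placeholders); afterwards a plain ssh homeserver is enough:

# ~/.ssh/config
Host homeserver
    HostName a.example.com
    User myuser
    IdentityFile ~/.ssh/id_ed25519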

HTTPS Over a Public Domain

While I keep everything inside my VPN tunnel for now, which is already a good step security-wise, it can still make sense to run everything over HTTPS. As a reminder, HTTPS is used so that passwords (and everything else) are not sent in plain text over the network. Now, while my VPN tunnel is already encrypted, the services that are open to my LAN are per se not encrypted. This becomes problematic when there are proprietary IoT devices running in the LAN. They could potentially listen in on that unencrypted traffic and send the data over the internet to their vendors.

And because I do have such devices in my LAN, I thought it would make sense to run everything over HTTPS. Another benefit is that I get to learn how to set up HTTPS using Let's Encrypt.

Besides Let's Encrypt, there would be another way to get HTTPS: hosting my own Certificate Authority (CA). BouncerCA would be one example, and I think it would not be too hard to set up. The disadvantage is that I would need to distribute the certificates it creates to all devices manually. While that would work for my own devices, it would make everything a bit harder once I want my family to be able to connect to my network: I would then not only need to set up WireGuard for them, but also regularly update their certificates.

For this reason, I decided to give Let's Encrypt a try instead. This requires a public domain, which I already have.

I have used the following links to get started with it. The first link is a good, concise summary of the process.

https://github.com/DrMint/Intranet-Lets-Encrypt-Certification

https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04

https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04

https://www.cloudflare.com/learning/dns/dns-records/dns-a-record/

http://nginx.org/en/docs/http/configuring_https_servers.html

As you can see from the links, I have set up NGINX together with HTTPS. For my Docker containers, I will let NGINX act as a reverse proxy.
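The linked tutorials use certbot to obtain the certificate. Whichever way certbot is run in the end, the result on the NGINX side is a server block that points at the issued certificate, roughly like the following sketch (the domain is a placeholder, and the paths assume the default Let's Encrypt layout under /etc/letsencrypt/live/):

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name a.example.com;

    ssl_certificate /etc/letsencrypt/live/a.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/a.example.com/privkey.pem;

    # ... the usual location / proxy configuration goes here
}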

The first link above helped the most for setting things up, even though it uses Apache as an example. Setting up NGINX itself was generally very easy (see the DigitalOcean links). However, restarting NGINX was a bit tedious, because it often did not shut down completely. When an error occurred there, I used this to check whether NGINX was still serving on a port:

netstat -tulpn

Then, running sudo service nginx --full-restart sometimes helped. If not, the following sequence always did: sudo service nginx stop, then sudo pkill -f nginx & wait $!, and finally sudo service nginx start. I don't really know why this strange behaviour occurs, but with this workaround it works fine.

Regarding ufw, it is important to note that NGINX registers itself there as an "app". That is, once NGINX is installed, ufw app list should show some profiles for it. We can then allow one of them, for instance: sudo ufw allow 'Nginx HTTPS'.
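Concretely, that looks something like this (the profile names are the ones the Ubuntu NGINX package provides, as far as I have seen):

# list the application profiles NGINX registered with ufw
sudo ufw app list

# allow HTTPS; 'Nginx Full' would additionally open port 80,
# which is needed for the HTTP-to-HTTPS redirect below
sudo ufw allow 'Nginx HTTPS'
sudo ufw status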

Automatically Redirecting to HTTPS

Automatically redirecting all HTTP traffic to HTTPS is easy with NGINX. I had to add the following server config to redirect all subdomains to their HTTPS equivalents:

server {
    listen 80;
    listen [::]:80;
    server_name *.example.com;
    # redirect all HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

This will redirect, for example, http://a.example.com to https://a.example.com.

I have also found this link useful in this regard:

https://www.digitalocean.com/community/questions/configure-nginx-ssl-force-http-to-redirect-to-https-force-www-to-non-www-on-serverpilot-free-plan-using-nginx-configuration-file-only

HTTPS for All Further Subdomains

I wanted to connect to further subdomains. For instance, I wanted to access Paperless (a nice document manager) via https://paperless.a.example.com. For this, I needed to set another DNS record, or, actually, two:

  • One is a CNAME record, where the host is *.a and the value is a.example.com ("host" and "value" is what Namecheap calls these fields - in effect, the record is just these two entries written next to each other)
  • The other is an updated HTTPS certificate. I had to run certbot again, exactly as before; the only difference is that this time I set the domain to *.a.example.com (instead of just a.example.com). I think I could even have passed both names to certbot at once (see the sketch right after this list).
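Such a wildcard certificate generally has to be issued via the DNS-01 challenge, where certbot asks you to create a TXT record (_acme-challenge) to prove control over the domain. A manual invocation looks roughly like this, with the domain names used in the examples here:

# request a certificate covering the subdomain and everything below it;
# certbot will prompt for a DNS TXT record to be created at the registrar
sudo certbot certonly --manual --preferred-challenges dns \
    -d 'a.example.com' -d '*.a.example.com'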

I am not exactly sure what the CNAME record is for. As far as I understand, it makes every *.a.example.com name resolve, at the DNS level, to the same address as a.example.com (an alias rather than an HTTP redirect). In any case, setting it up like this made it work.

Some more useful information on DNS can be found in these two StackExchange questions:

https://serverfault.com/questions/275982/what-type-of-dns-record-is-needed-to-make-a-subdomain

https://stackoverflow.com/questions/31656300/apache2-can-i-setup-a-subdomain-without-creating-an-a-record

Using NGINX as a Reverse Proxy

To now get the Docker container running behind HTTPS as well, we need to set up NGINX as a reverse proxy.

For this, we need to decide at what URI we want to make the Docker container accessible. There are two main options. One is to put the service under a subpath of a domain, which would look like this: https://a.example.com/paperless. In that case, we need to be able to tell the service about this subpath, because it needs it to build its own URIs correctly (e.g. /paperless/login). How to do this depends on the service in question. For the service I use as an example here, Paperless, or more specifically Paperless-NG, this is currently not even possible, because that functionality is broken.

This leads us directly to the other main option: creating a further subdomain. How to set up the DNS for this was already explained above. Here, I want to show some resources that helped me do this with NGINX. Again, DigitalOcean was a very useful resource, as was the official NGINX documentation:

https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins

https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/

The second link mostly just shows what to write to set up the reverse proxy. The first link also shows this, but for a subdomain. That part is actually very easy, as we just need to specify the concrete subdomain instead of the generic domain.
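Putting this together for the Paperless example, the server block for the subdomain boils down to something like the following sketch. The local port 8000 is just an assumption for where the Docker container happens to listen, and the certificate paths again assume the wildcard certificate from above:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name paperless.a.example.com;

    ssl_certificate /etc/letsencrypt/live/a.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/a.example.com/privkey.pem;

    location / {
        # forward everything to the Docker container on the local port
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}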

Finishing Up

With this, I now have a nice starter pack set up for my home server, which I can now play around on. As you have probably noticed, I already run a first service on it, namely Paperless-NG. I want to continually add more, with a file server being a high priority. Besides that, maybe I also want to host a Git service (Gitea and GitLab come to mind) and Bitwarden, and some kind of note-taking app would also be cool, so that I always have my notes with me. Let's see where this will lead.