
The Homelab Revival Project
At the beginning of the summer, I shared my experience building my first media server using Apache and Jellyfin. A few weeks later, I ran into a problem with that server that neither I, my friends, nor the Jellyfin Discord server could solve. After weeks of failed attempts to diagnose the issue, I decided that if I had built it once, I could build it again, so I nuked the entire server and started from scratch. This time, I went with Ubuntu Server LTS instead of the Desktop edition. I felt the server should be as barebones as possible, with all the administration done over SSH from my PC. So, I loaded up a fresh install of Ubuntu Server and installed Docker.
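For anyone following along, getting Docker onto a fresh Ubuntu Server install is only a couple of commands. This sketch uses Docker's convenience script; setting up the official apt repository works just as well.

```bash
# Install the Docker engine and compose plugin via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER

# Sanity check
docker --version
docker compose version
```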
The Process
The plan for this was simple. I listed out a bunch of applications that I wanted to self-host on the server and started setting them up one at a time, aiming for roughly one application every few days. The first item on my list was reconfiguring Jellyfin, and this time I was able to write my own compose file. Learning how to properly partition and mount drives on a Linux machine was essential to making this type of application work smoothly, and passing those drives through to the container as volumes was necessary for apps like Jellyfin and Nextcloud. After setting up Jellyfin and solving a few Docker errors, I moved on to adding some tools to my server: Portainer to track Docker containers and their statuses, Cockpit as a dashboard for the server, and Pi-hole for DNS ad-blocking. These applications were all super useful, and just as I did for every application in this stack, I documented my setup process thoroughly on my GitHub to keep track of all the errors I encountered.
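To make the volume idea concrete, here is a minimal compose sketch along the lines of what I mean for Jellyfin. The image and container-side paths follow the official Jellyfin Docker image, but the host paths and port are placeholders, and it assumes the media partition has already been formatted and mounted on the host (for example at /mnt/media via an /etc/fstab entry).

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    ports:
      - "8096:8096"          # Jellyfin's default web UI / API port
    volumes:
      - ./config:/config     # Jellyfin settings and database
      - ./cache:/cache       # transcode and image cache
      - /mnt/media:/media:ro # the mounted media drive, passed through read-only
```

Running `docker compose up -d` in the same directory brings the container up, and the web UI is reachable on port 8096.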
Eventually, after setting up all the tools, I moved on to other applications. This included a Glance dashboard, which is now my default new-tab page in Firefox. This was one of my favorite additions, since I was able to fully customize it myself and have all the news I need at my fingertips. Another awesome addition was Immich, my self-hosted alternative to Google Photos, and I am incredibly happy with it. It is nice knowing that all my photos are safely stored away from the hands of big corporations who will do God knows what with them. The full list of applications ended up being the following (a compose sketch for a couple of the supporting tools follows the list):
- Cockpit as my server interface
- Jellyfin as my media server
- Portainer as my container manager
- Pi-hole as a network-wide ad-block and monitor
- Nextcloud as my OneDrive replacement
- Immich as my iCloud Photos replacement
- Glance as my new homepage for new tabs on Firefox
- Karakeep as my bookmark manager
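As promised above, here is a minimal sketch of what a couple of the supporting tools can look like as compose services. It is illustrative rather than my exact setup: the ports, timezone, and host paths are placeholders, I left out the Pi-hole admin-password environment variable because its name differs between image versions, and Cockpit is missing because it installs as a regular host package rather than a container.

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9443:9443"                                # Portainer web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Portainer manage the other containers
      - portainer_data:/data

  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    restart: unless-stopped
    ports:
      - "53:53/tcp"
      - "53:53/udp"            # DNS for the whole network
      - "8081:80"              # admin dashboard (placeholder host port)
    environment:
      TZ: "America/New_York"   # placeholder timezone
    volumes:
      - ./etc-pihole:/etc/pihole   # persists Pi-hole's configuration

volumes:
  portainer_data:
```

Keeping each app (or small group of apps) in its own compose file like this makes it easy to tear one service down and rebuild it without touching the rest of the stack.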
But How Does It Work?
The tech behind this server is actually a lot simpler than I imagined it would be. In my original setup, I used Apache as a reverse proxy with a whitelist to limit traffic to my network. This time, I decided to leave the web server approach behind for something safer and more convenient: Tailscale. This ended up being one of the best decisions I made for the server. It made remote access dramatically easier, requiring nothing more than a VPN connection on the client. A quick explanation of Tailscale is in order.
Tailscale is a mesh VPN built on WireGuard that creates lightweight encrypted tunnels between your computer and the other nodes in the network. Every node can reach every other node as if they were on the same LAN, even when they are on completely different networks. A client's traffic passes through those encrypted tunnels straight to the server with very little overhead, and nothing has to be exposed to the public internet. It's almost too good to be true! After testing it out with my phone and the server, I knew immediately that this was the best way to handle the networking side of things. I could remotely access my services without needing to port forward or expose myself to the world!
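In case it helps, the basic setup really is this short. The install script and commands below come from Tailscale's Linux instructions; run the same steps on each client device (or use the mobile and desktop apps) so everything joins the same tailnet.

```bash
# Install Tailscale on the server and join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up          # prints a link to authenticate this machine

# Confirm the node is connected and note its tailnet IP (100.x.y.z)
tailscale status
tailscale ip -4
```

Once the server and a client are both on the tailnet, something like Jellyfin is reachable at http://<tailnet-ip>:8096 from anywhere, with no ports forwarded on the router.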
Aside from all the networking, each application runs in its own Docker container, keeping things clean and separated. In the future, I am considering adding Kubernetes for better deployment, though I am still weighing the benefits. This project was instrumental in improving my familiarity with Docker and Linux. I learned so many different commands and tricks along the way and can't wait to keep improving the server as time goes on. Because Tailscale handled so much for me, I didn't get to practice much networking this time. That's great for convenience, but I genuinely enjoyed messing around with Nginx and Apache last time. For that reason, I decided to work on a new project focused on AWS that I plan to share with you all soon. I hope you look forward to it!
To get into the nitty-gritty of my server and see the details of my setup and the challenges I faced, click here!

