September 16, 2025

The Homelab Revival Project

At the beginning of the summer, I shared my experience building my first media server using Apache and Jellyfin. A few weeks later, I ran into a problem with that server that neither I, my friends, nor the Jellyfin Discord server could solve. After weeks of failed attempts to diagnose the issue, I decided that if I had done it once before, I could do it all again, and proceeded to nuke the entire server and start from scratch. This time, I went with Ubuntu Server LTS instead of the Desktop edition, since I felt the server should be as barebones as possible and that I would handle all the administration over SSH from my PC. So, I loaded up a fresh install of Ubuntu Server and installed Docker.

The Process

The plan for this was simple. I listed out a bunch of applications that I wanted to self-host on the server and started setting them up one at a time, aiming for a new application every few days. The first item on my list was to reconfigure Jellyfin, and this time I was able to write my own Docker Compose file. Learning how to properly mount and partition drives on a Linux machine was incredibly important for making everything work smoothly with these types of applications, and passing those drives through to the containers as volumes was necessary for applications like Jellyfin and NextCloud to work properly. After setting up Jellyfin and solving a few Docker errors, I moved on to adding some tools to my server: Portainer to track Docker containers and their statuses, Cockpit as a dashboard for the server, and Pi-hole for DNS ad-blocking. These applications were all super useful, and, as I did for every application in this stack, I documented my setup process thoroughly on my GitHub to keep track of all the errors I encountered.
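For a sense of what that compose file looks like, here is a minimal sketch of a Jellyfin service with a host drive passed through as a read-only volume. The image tag is the official one, but the paths and names are placeholders rather than my exact setup:

    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        container_name: jellyfin
        ports:
          - "8096:8096"              # Jellyfin's default web UI port
        volumes:
          - ./config:/config         # persistent Jellyfin configuration
          - ./cache:/cache           # transcoding cache
          - /mnt/media:/media:ro     # the mounted media drive, read-only inside the container
        restart: unless-stopped

From the directory containing the file, docker compose up -d brings the container up in the background.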

Eventually, after setting up all the tools, I moved on to other applications. This included a Glance dashboard, which is now my default page whenever I open a new tab in Firefox. This was one of my favorite additions, as I was able to fully customize it myself and have all the news I need at my fingertips. Another awesome addition was Immich, my self-hosted photo backup and iCloud Photos replacement, and I am incredibly happy with it. It is nice knowing that all my photos are safely stored away from the hands of big corporations who will do God knows what with them. The full list of applications ended up being the following:

  • Cockpit as my server interface
  • Jellyfin as my media server
  • Portainer as my container manager
  • Pi-hole as a network-wide ad-block and monitor
  • NextCloud as my OneDrive replacement
  • Immich as my iCloud Photos replacement
  • Glance as my new homepage for new tabs on Firefox
  • Karakeep as my bookmark manager

But How Does It Work?

The tech behind this server is actually a lot simpler than I imagined it would be. In my original setup, I used Apache as a reverse proxy and whitelist to limit traffic to my network. This time, I decided to leave the web server approach behind for something safer and more convenient: Tailscale. This ended up being one of the best decisions I would make for the server, making remote access dramatically easier and requiring nothing more than a VPN connection. A quick explanation of Tailscale is in order.

Tailscale is a mesh VPN built on WireGuard that creates lightweight encrypted tunnels between your computer and the other nodes in the network. It lets clients reach devices on their home LAN as if they were on the same network, even when they're not. All of a client's traffic passes through those encrypted tunnels to the server quickly and without anything being exposed to the public internet. It's almost too good to be true! After testing it out with my phone and server, I knew immediately that this was the best way to handle the networking of the server. I could remotely access my services without needing to port forward or expose myself to the world!
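To give a sense of how little work this takes, here is roughly what the server-side setup looks like. The install script and subcommands come from Tailscale's own documentation, and the exact output will differ on your machine:

    # Install Tailscale on the Ubuntu server (official install script)
    curl -fsSL https://tailscale.com/install.sh | sh

    # Bring the node online and authenticate it to the tailnet
    sudo tailscale up

    # Confirm the node is connected and note its tailnet IP
    tailscale status
    tailscale ip -4

With the Tailscale client running on my phone as well, Jellyfin is reachable at http://<server-tailnet-ip>:8096 from anywhere, without a single port forwarded on the router.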

Aside from all the networking, each application runs in its own Docker container, keeping things clean and separated. In the future, I am considering adding Kubernetes for better deployment, though I am still weighing the benefits. This project was instrumental in improving my familiarity with Docker and Linux. I learned so many different commands and tricks along the way and can't wait to keep improving the server as time goes on. Because of Tailscale, though, I didn't get to practice much networking this time around. While that's a good thing for convenience, I genuinely enjoyed messing around with Nginx and Apache last time. For that reason, I decided to work on a new project focused on AWS that I plan to share with you all soon. I hope you look forward to it!

To get into the nitty-gritty of my server and see the details of my setup and the challenges I faced, click here!

July 21, 2025

Looking Back at the AEAC UAS Student Competition

It has been a few months since I attended the AEAC UAS Student Competition in Medicine Hat with the Aerospace team, and after attending a drone event in Calgary the other week, I thought it would be a good time to reflect and share what we accomplished. Our team placed 10th out of 15 competing university teams, and although it wasn't the result we were looking for, the journey was full of valuable lessons. Getting to meet peers and seniors in the industry and see how other teams innovated to push the industry forward was an amazing opportunity. More than anything, the experience helped clarify the direction I want to take in my future work.

The competition this year was focused on combating the increasing number of wildfires we see in Canada. Our team decided to modify the design of our existing drone, Jellyfish, and fit it with the equipment needed to both detect and extinguish fires. While the mechanical and electrical teams began with the design of the tank system that would hold and dispense the water, the software team worked on the infrared (IR) detection algorithm used to find heat signatures in the field. We decided on a spiral search pattern: the drone would fly an expanding spiral above the field and use an IR camera to locate hotspots on the ground. Upon detection, a series of calculations would determine the coordinates of the hotspot, which were then written to a file for storage. Based on these coordinates, the Jellyfish would descend and autonomously trigger a relay to dispense water and extinguish the flames.
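To give a rough idea of the approach, here is a simplified sketch of my own (not the team's actual code) showing how the spiral could be generated and how a hot pixel could be converted into a ground offset, assuming a downward-facing IR camera and a known altitude:

    import math

    def spiral_waypoints(center_x, center_y, spacing, turns, points_per_turn=36):
        """Generate an outward spiral of (x, y) waypoints around the field centre."""
        waypoints = []
        for i in range(turns * points_per_turn):
            angle = 2 * math.pi * i / points_per_turn
            radius = spacing * angle / (2 * math.pi)  # radius grows by `spacing` each full turn
            waypoints.append((center_x + radius * math.cos(angle),
                              center_y + radius * math.sin(angle)))
        return waypoints

    def hotspot_offset(px, py, img_w, img_h, altitude, hfov_deg, vfov_deg):
        """Convert a hot pixel (px, py) into a ground offset in metres from the point
        directly below the drone, using the altitude and the camera's field of view."""
        ground_w = 2 * altitude * math.tan(math.radians(hfov_deg) / 2)
        ground_h = 2 * altitude * math.tan(math.radians(vfov_deg) / 2)
        dx = (px / img_w - 0.5) * ground_w   # metres to the right of the image centre
        dy = (0.5 - py / img_h) * ground_h   # metres ahead of the image centre
        return dx, dy

In the real system those offsets would still need to be rotated by the drone's heading and added to its GPS position before being written out to the file.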

Although promising, our team ultimately ran into problems due to a serious lack of testing. Features that should have been tested months prior to the competition ended up being tested during the competition, and our repository went through several changes just hours before flight time. It was this last-minute crunch that ultimately led to our underperformance. That said, it was a powerful learning experience. We now have a clearer understanding of the importance of early integration and consistent testing, and we're better prepared to tackle future challenges with greater confidence and coordination. It was great working with such a knowledgeable and charismatic group of people, and I look forward to maybe one day working with them again.

Looking ahead, I know I want to pursue a career in aerial robotics, and I’ve begun taking concrete steps toward that goal. Right now, I’m working through a self-directed roadmap to deepen my understanding of DevOps practices and tools. Getting hands-on with systems like Ubuntu and learning more about networking protocols will help build the foundation I need to break into the field. This experience made it clear that I’m most passionate about the intersection of software and hardware. I enjoy seeing the code I write have a tangible and physical impact. That’s the direction I want to grow in, and I’m excited to keep building toward it.

May 28, 2025

Building My First Media Server with Apache and Jellyfin (on Windows)

In this blog post, I’ll share my experience building a personal media server using Apache as a reverse proxy and Jellyfin as the media platform. I’ll walk through the problems I encountered, how I solved them, and what I learned throughout the process. While I plan to rebuild the server on a dedicated Ubuntu machine in the near future, I wanted to document my current Windows-based setup to reflect on the process.

Phase 1: The Excitement (and Naivety) of Port Forwarding

Like many first-time self-hosters, I started by port forwarding Jellyfin’s default port (8096) on my router so I could access my media library remotely. It worked—kind of. But I quickly noticed strange behavior in my router logs: unsolicited access attempts from unfamiliar IP addresses, particularly from regions like Iran and China.

This was my wake-up call. By exposing port 8096 directly to the internet, I had created a security vulnerability. Any attacker scanning IP ranges could attempt to connect—not necessarily to exploit Jellyfin itself, but potentially to scan for other open ports or weaknesses on my network. It became clear that I needed to rethink my approach.

Phase 2: Enter Apache – Reverse Proxy & SSL

To mitigate the risk, I decided to set up a reverse proxy using Apache. I had previously used Apache in a school project and was somewhat familiar with its configuration. The idea was to serve Jellyfin behind Apache, routing all traffic through port 80 initially and later through 443 using HTTPS.

Setting up the reverse proxy wasn’t as smooth as I expected. I ran into several issues, such as:

  • Apache not forwarding traffic correctly to Jellyfin
  • SSL_ERROR_RX_RECORD_TOO_LONG when testing HTTPS
  • Conflicts between HTTP and HTTPS listeners

After some troubleshooting, I discovered that my httpd-ssl.conf file was missing the Listen 443 directive, and parts of my virtual host configuration were commented out. Once I fixed these and re-enabled the necessary Apache modules, HTTPS traffic began working correctly.
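For anyone attempting something similar, the working configuration ended up looking roughly like the sketch below. The domain and certificate paths are placeholders, and this assumes the mod_ssl, mod_proxy, and mod_proxy_http modules are loaded (their LoadModule lines are commented out by default in many Apache installs):

    # httpd-ssl.conf: make sure Apache actually listens for HTTPS
    Listen 443

    <VirtualHost *:443>
        ServerName media.example.com

        SSLEngine on
        SSLCertificateFile "C:/certs/fullchain.pem"
        SSLCertificateKeyFile "C:/certs/privkey.pem"

        # Forward all traffic to the local Jellyfin instance
        ProxyPreserveHost On
        ProxyPass        "/" "http://127.0.0.1:8096/"
        ProxyPassReverse "/" "http://127.0.0.1:8096/"
    </VirtualHost>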

I also used win-acme to generate an SSL certificate from Let’s Encrypt and configured Apache to use it. Now, all traffic is encrypted, and Jellyfin is no longer exposed to the internet directly.

Phase 3: Locking It Down

Once HTTPS was working, I took additional steps to secure the setup:

  • Closed ports 80 and 8096 on my router
  • Configured Windows Firewall to only allow connections on port 443 (a rough example follows this list)
  • Changed the Jellyfin admin password to a stronger one
  • Added IP whitelisting rules to restrict access further
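As a rough example of the firewall step (the rule names are just placeholders), the rules can be added from an elevated command prompt with netsh:

    rem Allow inbound HTTPS traffic to Apache
    netsh advfirewall firewall add rule name="Allow HTTPS 443" dir=in action=allow protocol=TCP localport=443

    rem Explicitly block outside access to Jellyfin's default port
    netsh advfirewall firewall add rule name="Block Jellyfin 8096" dir=in action=block protocol=TCP localport=8096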

I still noticed IP reputation alerts from my router, but all of them were being blocked before reaching my machine. This made me consider adding intrusion prevention tools like CrowdSec in the future, ideally inside a Docker container on a more isolated system.

Lessons Learned & What’s Next

This project taught me a lot about networking, self-hosting, and basic cybersecurity principles. When I inevitably do it again from scratch, here’s what I’ll do differently:

  • Start with an Ubuntu server for more flexibility and support
  • Use Docker from the beginning to isolate services and make deployments easier
  • Implement a reverse proxy with Nginx instead of Apache
  • Set up CrowdSec for geo-IP based blocking

Ultimately, this experience gave me hands-on knowledge that complements my academic learning. It also raised my interest in DevOps and server administration—skills I plan to keep building on.