After many years of “renting” my online presence, I finally pulled the trigger and bought myself a cloud computer.
What do I mean by “renting”?
I used a combination of Squarespace and Ghost to host my website, mailing lists, and newsletter. In total, it was costing me somewhere in the range of $78 to $100 every month.
Both of these were cloud-hosted, which meant I never owned anything. In exchange for the subscription I paid, these entities maintained the infrastructure for me.
Such an arrangement can be very convenient…or a nightmare if these entities were to get hacked, go bankrupt, or simply disappear.
And even if none of these “doomsday” scenarios ever took place, I would still have to reluctantly accept ever-increasing subscription fees.
Looking past both the “renting issue” and “increasing fees”, it’s also important to talk about the lack of flexibility. Platforms like Squarespace take a “one-size-fits-all” approach.
That means most websites and newsletters end up looking the same. Even though some customization tools are provided, you can never make your website “truly yours”.
Because of these and more, I decided to buy a server and build (or host) everything myself.
Generally speaking, there are 3 options I could take:
Buy a Raspberry Pi
Buy an at-home server
Buy a cloud server
After some deliberation, I went with option 3 — buying a Digital Ocean server for ~$12/month.
Woah! I can already hear all of y’all shouting.
Isn’t that renting as well? You are paying a monthly subscription and the server is still owned by a different entity (in this case Digital Ocean).
Well, yes. That’s true. However, with a few tweaks, I can make it “very close to owning”. Let me explain.
When I evaluated my options, the strictly self-hosted ones — a Raspberry Pi or a server at home — were the most enticing.
I even went one step further by purchasing a Raspberry Pi and hosting my website. It worked…until it didn’t. A couple of days later the Raspberry Pi quite literally “blew up”.
Such a single point of failure can be very difficult to overcome.
Especially in my case: I travel quite frequently, and if something were to go wrong with a server hosted at home while I was away, it would be next to impossible for me to recover.
This lack of availability was one of the biggest reasons that tipped the scale towards cloud-hosting.
Now, moving to the issue of “Digital Ocean owning my infra”, that’s a tough pill to swallow.
Every option other than hosting the server at home comes with this tradeoff, and there is no perfect way around it. My best attempt at mitigating the risk is to take regular backups from the cloud down to my own physical hardware.
Hosting a website requires a web server. The open-source option that I picked is Nginx.
Nginx allows me to set up proxies and reverse proxies, configure SSL certificates, and add rate limiting. It also supports subdomain routing — say, traffic to irtizahafiz.com gets routed to port 3000 internally, whereas api.irtizahafiz.com gets routed to port 8000.
Above all, I have used Nginx before, so I can cut my development time in half.
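To make the routing concrete, here is a rough sketch of what the relevant Nginx server blocks could look like. The port numbers come from the example above; SSL and rate limiting are left out, and the final config will almost certainly evolve once everything is deployed.

```nginx
# A minimal sketch, not the final config: route the main domain to the
# NextJS app on port 3000 and the api subdomain to the backend on port 8000.
# SSL and rate limiting are omitted for brevity.

server {
    listen 80;
    server_name irtizahafiz.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name api.irtizahafiz.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```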
I purchased my domain from GoDaddy years ago. I will continue using that.
Instead of moving the nameservers (NS) over to Digital Ocean, I will keep DNS management with GoDaddy. When I need to add a subdomain or change an A record (for example, pointing irtizahafiz.com or api.irtizahafiz.com at the server's public IP), I will do it myself through the GoDaddy portal.
This way, I at least don’t couple my domain too tightly with Digital Ocean.
Most of my website content will be static.
The only dynamic part will be the addition of new blog posts or YouTube videos. I will publish around 2-3 times every week, which is not frequent enough to give up the benefits of a statically rendered website.
What benefits?
Better performance — less work done on the client
More SEO-friendly — easy for web crawlers
One great advantage of using platforms like Squarespace or WordPress is the built-in Content Management System (CMS).
When building a custom solution, I think this is the part that requires the most work.
In my case, I always write my blog posts in Markdown (.md) files.
NextJS has first-class support for MDX (Markdown with embedded JSX), which lets you turn Markdown blog posts into interactive React pages.
My current plan is to mimic a CMS by keeping my Markdown files on the server's file system and using something like @next/mdx to render them into pages on the fly.
This is the step I am most skeptical about, so we will find out how it goes.
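To make the idea concrete, here is a minimal sketch using NextJS's App Router together with next-mdx-remote, a commonly used companion to @next/mdx when the Markdown lives on the server's disk rather than inside the repo. The content directory, route, and file layout are assumptions for illustration, not the final design.

```tsx
// app/blog/[slug]/page.tsx: a rough sketch, not the final implementation.
// Assumes posts are stored as ./content/<slug>.md on the server's file system
// and that next-mdx-remote is installed alongside NextJS.
import fs from "node:fs/promises";
import path from "node:path";
import { MDXRemote } from "next-mdx-remote/rsc";

const CONTENT_DIR = path.join(process.cwd(), "content"); // hypothetical location

// Pre-render a static page for every Markdown file found at build time.
export async function generateStaticParams() {
  const files = await fs.readdir(CONTENT_DIR);
  return files
    .filter((file) => file.endsWith(".md"))
    .map((file) => ({ slug: file.replace(/\.md$/, "") }));
}

export default async function BlogPost({
  params,
}: {
  params: { slug: string }; // in newer NextJS versions this is a Promise and needs to be awaited
}) {
  // Read the raw Markdown for the requested slug from the file system.
  const source = await fs.readFile(
    path.join(CONTENT_DIR, `${params.slug}.md`),
    "utf8"
  );

  // Compile the Markdown/MDX on the server and render it as a React tree.
  return (
    <article>
      <MDXRemote source={source} />
    </article>
  );
}
```

If reading from disk on every request turns out to be too slow, the same files can be compiled ahead of time instead, which is where the static-rendering benefits mentioned earlier come back into play.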
Even though my NextJS server will take care of ~75% of the website, some of the features will still depend on backend APIs.
Here are a few that I can think of:
Newsletter subscriptions
“Like” functionality on blog posts
A chatbot to interact with my content
Like most of my previous projects, I will use FastAPI running in a docker container.
Given that I am offloading the CMS to NextJS, I won't need much backend data storage.
For every blog post, I will need to store the slug, a like counter, and maybe a bit of other metadata. All of that (and more) can live in a simple, locally running MySQL database.
FastAPI pairs well with MySQL through libraries like SQLAlchemy, a combination I have used many times before, so I will stick to that.
In the future, if my data model becomes more complex, I will consider other alternatives.
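As a rough illustration of how small this backend can stay, here is a sketch of a FastAPI app exposing the like counter, backed by a local MySQL database through SQLAlchemy. The table name, endpoint paths, and credentials are placeholders made up for the example, not the final design.

```python
# A rough sketch, not the final implementation. Assumes a local MySQL database
# with a `posts` table; the schema, connection URL, and endpoints are placeholders.
from fastapi import FastAPI, HTTPException
from sqlalchemy import create_engine, text

# Local MySQL instance (via the PyMySQL driver); credentials are placeholders.
engine = create_engine("mysql+pymysql://blog:CHANGE_ME@localhost:3306/blog")

app = FastAPI()

# Hypothetical schema:
#   CREATE TABLE posts (
#       slug       VARCHAR(255) PRIMARY KEY,
#       like_count INT NOT NULL DEFAULT 0
#   );


@app.get("/posts/{slug}/likes")
def get_likes(slug: str) -> dict:
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT like_count FROM posts WHERE slug = :slug"),
            {"slug": slug},
        ).first()
    if row is None:
        raise HTTPException(status_code=404, detail="Unknown post")
    return {"slug": slug, "likes": row.like_count}


@app.post("/posts/{slug}/like")
def add_like(slug: str) -> dict:
    with engine.begin() as conn:
        # Create the row on the first like, otherwise bump the counter.
        conn.execute(
            text(
                "INSERT INTO posts (slug, like_count) VALUES (:slug, 1) "
                "ON DUPLICATE KEY UPDATE like_count = like_count + 1"
            ),
            {"slug": slug},
        )
        likes = conn.execute(
            text("SELECT like_count FROM posts WHERE slug = :slug"),
            {"slug": slug},
        ).scalar_one()
    return {"slug": slug, "likes": likes}
```

In production, this would run with uvicorn inside the docker container, sitting behind the api.irtizahafiz.com block from the Nginx sketch earlier.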
When it comes to collecting and analyzing web traffic data, I am sure only one option comes to mind — Google Analytics.
However, I want to stay far away from Google’s option.
I want to give my users a much more private and secure option.
Enter — Umami.
Umami is open-source, does not use cookies, does not track users across the internet, and gives me the option to own my data.
I will set up a docker container on my server that runs Umami. The data will be stored in a locally running MySQL or Postgres DB.
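For reference, here is a rough sketch of what that could look like with docker compose, loosely based on Umami's published Docker image and setup instructions. The ports, credentials, and secret are placeholders, and the exact environment variables should be double-checked against the Umami version being deployed.

```yaml
# A rough sketch, not a hardened setup. Loosely follows Umami's Docker
# instructions; credentials and the app secret are placeholders.
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    ports:
      - "3001:3000"   # expose on 3001 locally so it doesn't clash with NextJS on 3000
    environment:
      DATABASE_URL: postgresql://umami:CHANGE_ME@db:5432/umami
      DATABASE_TYPE: postgresql
      APP_SECRET: CHANGE_ME_TO_A_LONG_RANDOM_STRING
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: CHANGE_ME
    volumes:
      - umami-db-data:/var/lib/postgresql/data

volumes:
  umami-db-data:
```

An additional Nginx server block (say, analytics.irtizahafiz.com proxying to port 3001) would then expose the dashboard and its tracking script.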
The most difficult decision has been to choose an open-source newsletter and mailing list management system.
After looking at many options, I have decided to go with listmonk.
Similar to Umami, I will host listmonk locally using docker and Postgres.
Most of my newsletter content is textual. However, if I need to sprinkle in a few assets (images, videos), I can use my server’s file system.
Listmonk also takes care of GDPR compliance (and similar regulations), and gives users a clean interface to unsubscribe.
A self-hosted listmonk still needs an SMTP server to actually send email. I could go a few different routes:
Self-hosted SMTP server
Gmail
Mailgun
SendGrid
Even though the self-hosted route is the one I would prefer, it comes with security and sender-reputation (deliverability) implications.
I will do more research, but as things stand, I will most likely go with an external SMTP provider such as Mailgun.
If you have made it this far, I hope you found this valuable.
There are many other things, such as CI/CD pipelines, that I will need to figure out.
The plan is to get the above version working and deployed in production. After that, I will focus on other things.
Regardless, if you want to see how this turns out, or what else I do with my newly acquired server, stay tuned!