Switching back to PostHaven

The first time I evaluated Posthaven, I did not even spend three hours on it. If I remember correctly, the experience went like this:

  1. I hyped myself up by reading posts about Posthaven
  2. Tried to migrate some of my programming posts to it
  3. Immediately realised that formatting code on this platform sucks
  4. Gist embeds do not work (script tags are forbidden, although I understand that is for a good reason)
  5. Customisation options are severely limited (I am not even trying to do hardcore customisation; I just wanted to add a couple of menus and a date archive similar to what you can see here)
  6. There is no option to truncate posts on the homepage or limit the number of posts that are displayed

At that point, I just gave up and cancelled my account. I emailed them to ask for my money back. I cannot even remember if I gave them an account of the experience above, but here it is now.

I immediately went over to DigitalOcean, and it did not take me long to set up a Ghost blog by provisioning a $5 droplet. Having done this over and over again on multiple cloud VMs, containers and even on a Raspberry Pi, it was almost automatic for me.

Fig. 1: Current snapshot of my Ghost blog.

And I think that is where my problem is.

If everything went right, you are now probably reading this post on Posthaven. You read that right. I switched back to this ugly site and migrated all my posts by hand (fixing the code indentation manually, even wrapping blocks in pre and code tags by hand, just like these two words). As for my old Ghost blog, no, nothing disastrous happened. It was not hacked, nor did it break down from neglect. I am abandoning it solely for the reasons I stated above: I have done these steps over and over again for the past few years and I am no longer willing to keep doing them in the years to come.

Why? Well, I am planning to write on a regular basis again. And this time, the posts you will read here will no longer be just soulless tech how-tos. They will be closer to how I usually wrote before I started trying to attract the attention of technical recruiters. No, seriously, I will try to be more personal this time and share more about my life (?) and possibly the things that I do.

But John, you are not making sense. What's your point?

When I first thought about writing again, I realised that blog maintenance takes a good amount of time. The more time I spend as a software engineer, the lazier I get with regard to these things. Not to mention irritable. Do not get me wrong, though. I enjoy tinkering and creating stuff. But repetitive and tedious work that does not give me something new, something to be excited about, is just not my thing.

Sooner or later, something somewhere is going to break. And no amount of automation or code quality checks can prevent this from happening. It could be anything: a dependency update gone wrong, a network hiccup or even a heisenbug. That is just how software is. It is probably the same reason why people still cannot see software development as real engineering. Most of our time in software engineering is spent either fixing things or finding out what went wrong, not so different from how other people view IT as a profession: fixing computers. Heck, even this service might break the next moment.

But that is the beauty of it. If it breaks, I know I cannot do anything about it. I will not stress about it and just let the good guys figure it out while I think about what to write next. By now, it might be getting obvious what I am getting at. I used Posterous heavily before it got acquired by Twitter. I continued using it after the acquisition, but I did not bother to export my data by the time it was approaching its demise. If you have known me long enough, you probably know that I maintained a couple of blogs during my formative years, with topics ranging from literature and photography to the local skateboarding scene in the Philippines. Some of them are still around under obscure names, some suffered the same fate as Posterous, and most of them I have recently made password protected (maybe I will write about why in the next couple of weeks).

The first thought I had was to create something like this: a blogging service that was intended to last forever. I even prototyped a distributed web application on the Ethereum platform for it. I did not release it because I thought the walled garden implementation that I did sucked (there are better ways to shoulder gas prices for your users now, though), and because I knew for a fact that maintenance would suck even harder, as libraries such as web3.js have breaking changes all the time. The DApp would definitely break, sooner rather than later.

In case you are not in the know, Posthaven was made by the same guys who made Posterous. That, and the promise they made, is the ultimate reason why I switched back. It was also reassuring to know that if I decide to stop paying for this service twelve months from now, it will stay online permanently in archive mode. It might be a marketing gimmick for all I know, but I now treat these things with arguments no different from Blaise Pascal's wager.

Enhancing my self-hosted blog with Cloudflare

This post is not sponsored by Cloudflare; it is an update on my self-hosting journey with the Raspberry Pi.

I am happy with the results of the script that I shared in my last post because I no longer have to manually reboot the Pi every time the Internet connection goes down. However, it is still suboptimal; if the connection goes down for an extended period of time, the blog goes down with it. Not only is that bad for would-be readers, it is also frustrating on my end. The thought of moving this blog to a cheap cloud instance crossed my mind during the first few days, but I had to think of something more pragmatic. That was when I decided to check out Cloudflare. When I found out that they offer a free plan with more features than I would ever need for this blog, I was sold.

Cloudflare is a security company that became well known for stopping DDoS attacks through its Content Delivery Network (CDN)-like feature. It can make your site more performant by caching your static content in their data centers around the world. This lets your site load faster and handle more concurrency by serving cached content before hitting your server. Cloudflare offers this and more for free, including three page rules, analytics, free SSL through their network and even security measures like HTTP Strict Transport Security (HSTS). All of these can be easily configured in their nice-looking dashboard. If you want to read more about the company's history, here is a good article about their humble beginnings.

Getting a Cloudflare account is straightforward. A walkthrough video of the initial setup process is available on their landing page. In a nutshell, the process only has three steps:

  • Signing up with your email address and password
  • Adding your domain
  • Pointing your domain's nameservers to Cloudflare's own nameservers

After going through those steps quickly, you will be presented with a modern, easy to use admin interface:
Fig.: Cloudflare's dashboard.

It will be impossible to discuss all of what Cloudflare has to offer in a single post, so I will just write about the tweaks that I did to suit my current self-hosted Raspberry Pi setup.


Crypto

I obtained my domain's SSL certificate through Let's Encrypt, a trusted certificate authority that issues certificates for free. Since I already have my own certificate configured on NGINX, I do not need to use Cloudflare's free SSL. I just selected Full (Strict) mode under SSL and enabled HSTS, Opportunistic Encryption and Automatic HTTPS Rewrites.
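For reference, the origin side of this setup is just NGINX serving the Let's Encrypt certificate in front of Ghost. This is only a sketch: the certificate paths follow certbot's defaults, the domain is mine, and Ghost's port is its default 2368, so adjust all three to your own setup:

```nginx
server {
    listen 443 ssl;
    server_name blog.johncrisostomo.com;

    # Certificate issued by Let's Encrypt; paths follow certbot's defaults.
    ssl_certificate     /etc/letsencrypt/live/blog.johncrisostomo.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.johncrisostomo.com/privkey.pem;

    location / {
        # Ghost listens on 127.0.0.1:2368 by default.
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With a valid certificate on the origin, Cloudflare's Full (Strict) mode can verify the connection all the way through.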


Speed

I enabled Auto Minify for both JavaScript and CSS to optimize load times and save on bandwidth. I decided against minifying the HTML to preserve the blog's markup, which in my opinion is important for search engine optimization. I also enabled Accelerated Mobile Links support for a better mobile reading experience. They also have a beta feature called Rocket Loader™, which improves the load time of pages with JavaScript; it is off by default, but I decided to give it a try.


Caching

This is the feature that I needed the most. I clicked on this menu before I even explored the other settings above. I made sure Always Online™ is on, and made some minor adjustments to the Browser Cache Expiration.

Page Rules

Cloudflare gives you three page rules for free, and you can subscribe should you need more. Here's how I made use of my free page rules:

Fig.: Cloudflare's Page Rules settings.

Dynamic DNS Configuration

My blog's DNS records are now handled by Cloudflare, so I need to make sure that they are updated automatically whenever my ISP gives me a new IP address.

The easiest way to achieve this is to install ddclient from Raspbian's default repository, along with the Perl dependencies:

$ sudo apt-get install ddclient libjson-any-perl

Unfortunately, this version of ddclient does not support Cloudflare's Dynamic DNS API. We need to download the current version and overwrite the executable that was installed by the previous command:

$ wget http://downloads.sourceforge.net/project/ddclient/ddclient/ddclient-3.8.3.tar.bz2

$ tar -jxvf ddclient-3.8.3.tar.bz2

$ sudo cp -f ddclient-3.8.3/ddclient /usr/sbin/ddclient

We installed the packaged version first to benefit from the daemon setup that comes with it. This keeps ddclient running in the background and starts it automatically after each reboot.
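On Raspbian, that background behavior is controlled by /etc/default/ddclient. The relevant settings look like this; the five-minute interval is my own choice, not a requirement:

```shell
# /etc/default/ddclient
run_daemon="true"       # run ddclient as a background daemon
daemon_interval="300"   # check the external IP every 5 minutes
```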

This new version of ddclient looks for its configuration file in a different directory, so we need to create that directory and move our old configuration file:

$ sudo mkdir /etc/ddclient
$ sudo mv /etc/ddclient.conf /etc/ddclient

Here's my ddclient.conf for reference:

# Configuration file for ddclient generated by debconf
# /etc/ddclient.conf

# The use, protocol, zone and hostname lines are required for Cloudflare
# support; replace the zone and hostname with your own domain.
use=web
protocol=cloudflare
zone=johncrisostomo.com

login=*Enter your cloudflare email address here*
password=*Enter your API key here*

blog.johncrisostomo.com
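Before relying on the daemon, it is worth doing a one-shot run in the foreground to verify the configuration; with -daemon=0, ddclient performs a single update, and the verbose flags print exactly what it is doing:

```shell
$ sudo ddclient -daemon=0 -debug -verbose -noquiet
```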

We can now restart ddclient and check its status to make sure that everything is working as expected:

$ sudo service ddclient restart
$ sudo service ddclient status -l

The last command should give you the current status of the daemon along with the latest event logs. Check the logs for any error messages or warnings; if everything turned out okay, you should see something similar to this:

SUCCESS: blog.johncrisostomo.com -- Updated Successfully to xxx.xxx.xxx.xxx.

So far this setup works well and I am happy with the blog's performance. It is a shame that I did not gather data before Cloudflare to objectively measure the performance boost I am getting out of it. However, the blog's initial loading time has become noticeably faster, at least on my end. I guess we will see in the next couple of days.

Troubleshooting my Raspberry Pi's Wireless Issue

It has been almost a week since I decided to self-host my Ghost blog. It was a fun experience and, most importantly, I learned a lot of new things that I would not have otherwise. On the less technical side, it inspired me to write more about my learning journey, because not only does it solidify what I already know, it also drives me to learn more.

There is a little problem, though. My Internet connection is flaky and it causes my blog to go down sporadically throughout the day. This is not intended to be a for-profit blog; however, seeing people share some of my posts while the blog was down was frustrating. I just had to do something about it. I observed the Pi's behavior by writing several Bash scripts and cron jobs that make sure these events are logged. Sifting through the logs after work, I found out that aside from the ISP problem, there was another strange phenomenon: whenever my home router loses Internet connectivity, the Raspberry Pi loses its default gateway, and the problem persists even after rebooting the router.

My initial attempts to fix this issue involved messing with the resolv.conf and /etc/network/interfaces configuration files. I tried everything from manual to dhcp and even static. Nothing really fixed the issue; the Pi still lost its default gateway route whenever the Internet connection went down. I finally solved the problem by writing a small Bash script:


#!/bin/bash

# Ping Google once; if it fails, bounce the wireless interface.
ping -c1 google.com > /dev/null

if [ $? != 0 ]; then
  echo `date` "No network connection, restarting wlan0" >> /home/uplogs.txt
  /sbin/ifdown 'wlan0'
  sleep 5
  /sbin/ifup --force 'wlan0'
  echo `date` "Internet seems to be up" >> /home/uplogs.txt
fi
The script pings google.com and then checks the exit code. If the ping exits with an error, the Pi restarts the wireless LAN interface. It also logs these events so that I can check how reliable my Internet connection was throughout the day. It was a quick and dirty fix. Nothing fancy, but it works.
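To make sure the check actually runs, the script has to be scheduled through cron. Assuming the script was saved as /home/pi/checknet.sh (the path and interval here are just examples), the root crontab entry would look like this:

```shell
# Run the connectivity check every 5 minutes. This goes in root's crontab
# (sudo crontab -e), since ifdown/ifup need root privileges.
*/5 * * * * /bin/bash /home/pi/checknet.sh
```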