Installing MongoDB 3.4 (with SSL) on Ubuntu 16.04 (MS Azure)

Hey everyone. I know it has been a while since I wrote something. I have been busy with multiple large-scale projects during the past few months, so I was almost always too tired at the end of the day to compose a new entry. I also had to relocate; I think the adjustment phase also took a lot of my time and energy. Anyway, what I am going to try to do now is to write short, straight-to-the-point tutorials about how to do specific tasks (as opposed to going into more detailed, wordy posts). I will still write the elaborate ones, but I will be focusing on consistency for now. I have been working on a lot of interesting problems and relevant technologies at work, and I just feel guilty that I do not have enough strength left at the end of the day to document them all.

Let us start with this simple topic just to get back into the habit of writing publicly. I have been configuring Linux VMs for a while now, but I have not really written anything about it, aside from my series of Raspberry Pi posts. It is also my first time working with the Azure platform, so I thought that it might be interesting to write about this today.

This tutorial will assume that the Ubuntu 16.04 VM is already running and you can SSH properly into the box with a sudoer account.

The Basics: Installing MongoDB

You can read about the official steps here. If you prefer to follow a single post and copy and paste the commands in sequence, the instructions are below.

Add the MongoDB public key

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

Add MongoDB to apt's sources list

echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list

Update apt repository and install MongoDB

sudo apt-get update && sudo apt-get install -y mongodb-org

Run and check if MongoDB is running properly

sudo service mongod start
tail -f /var/log/mongodb/mongod.log

If everything went well, you should see something like this:

2017-10-04T01:18:51.854+0000 I NETWORK [thread1] waiting for connections on port 27017

If so, let's continue with the next steps!

Create a root user

I will not get into the details of how to create and manage MongoDB databases and collections here, but let us go through the process of creating a root user so that we can manage our database installation remotely through this user.

Connect to MongoDB CLI

mongo

Use the admin database

use admin

Create admin user

db.createUser(
    {
      user: "superadmin",
      pwd: "password123",
      roles: [ "root" ]
    }
)
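
Before leaving the shell, you can optionally run a quick sanity check on the new user. This is not part of the official steps; db.auth() simply returns 1 when the credentials are accepted:

db.auth("superadmin", "password123")  // should return 1
exit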

SSL and some network-related configuration

Now that we have MongoDB installed and running, we need to make some changes to the mongod.conf file to enable SSL and to make our MongoDB installation accessible on our VM's public IP and chosen port.

SSL Certificates

Creating a self-signed certificate

If you already have a certificate, or you just bought one for your production database, feel free to skip this step. I am just adding this for people who are still experimenting and want SSL enabled from the start. More information regarding this can be found here.

This self-signed certificate will be valid for one year.

sudo openssl req -newkey rsa:2048 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key

Create .pem certificate

This .pem certificate is the one that we will use in our mongod.conf configuration file. This command will save it in your home directory (/home/<username>/mongodb.pem or ~/mongodb.pem).

cat mongodb-cert.key mongodb-cert.crt > ~/mongodb.pem
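
Since the .pem file bundles the private key, you may want to restrict who can read it. The mongod process still has to be able to open it, and on Ubuntu the service runs as the mongodb user; the ownership and mode below are an assumption worth double-checking on your system.

# assumes the mongod service user/group is mongodb:mongodb
sudo chown mongodb:mongodb ~/mongodb.pem
sudo chmod 600 ~/mongodb.pem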

MongoDB Configuration

Now that we have our self-signed certificate and admin user ready, we can go ahead and tweak our MongoDB configuration file to bind our IP, change the port our database will use (if you want to), enable SSL, and enable authorization.

I use vim whenever I am dealing with config files via SSH; you can use your favorite text editor for this one.

sudo vim /etc/mongod.conf

Make sure to change the following lines to look like this:

net:
  port: 27017
  bindIp: 0.0.0.0
  ssl:
    mode: requireSSL
    PEMKeyFile: /home/<username>/mongodb.pem

security:
  authorization: enabled

Restart the MongoDB service:

sudo service mongod restart

If we go ahead and print the MongoDB logs like we did earlier, we should be able to see something that looks like this (notice the ssl at the end of the line now):

2017-10-04T01:18:51.854+0000 I NETWORK [thread1] waiting for connections on port 27017 ssl

If you got that, it means that everything is working fine. We just need to run one more command to make sure that our MongoDB service starts automatically across VM reboots. systemctl will take care of that for us:

sudo systemctl enable mongod.service

Azure Firewall

Now, if you try to connect to your database using your favorite MongoDB database viewer or by using the Mongo CLI on your local machine, you might notice that you will not be able to connect. That's because we need to add an Inbound security rule on the Azure portal first.

Once on the Dashboard, click on All Resources.
Azure Portal Dashboard

Click on the Network Security Group associated with your VM.

Azure Portal Inbound Security Rules

From here, you can see a summary of all the security rules you have for your virtual network. Click on Inbound security rules under Settings on the left pane.

Azure Portal Network Security Group Settings

Click Add. You should be able to see a form with a lot of fields. We are using MongoDB's default port, so we can just click on Basic at the top and select it from the list of preset ports.

Basic Inbound security rules form

Just click on OK, and we are done! You can start connecting to your MongoDB installation using your tool of choice.
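
At this point you should be able to reach the database from your local machine with the mongo shell. The command below is just an example: replace <your-vm-public-ip> with your VM's actual public IP, and note that --sslAllowInvalidCertificates is only there because we are using a self-signed certificate (drop it if you installed one from a trusted CA).

mongo --ssl --sslAllowInvalidCertificates --host <your-vm-public-ip> --port 27017 -u superadmin -p password123 --authenticationDatabase admin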

Enhancing my self-hosted blog with Cloudflare

This post is not sponsored by Cloudflare; it is an update on my self-hosting journey with the Raspberry Pi.

I am happy with the result of the script that I shared in my last post because I no longer have to manually reboot the Pi every time the Internet connection goes down. However, it is still suboptimal; if the Internet connection goes down for an extended period of time, the blog goes down with it. Not only is that bad for would-be readers, it is also frustrating on my end. The thought of moving this blog to a cheap cloud instance crossed my mind during the first few days, but I had to think of something more pragmatic. That was when I decided to check Cloudflare out. When I found out that they offer a free plan with more features than I would need for this blog, I was sold.

Cloudflare is a security company that became well known for stopping DDoS attacks through their Content Delivery Network (CDN)-like feature. It can help your site become more performant by caching your static content in their data centers around the world. This enables your site to load faster and allows more concurrency by serving cached content first before hitting your server. Cloudflare offers this and more for free, including three page rules, analytics, free SSL through their network, and even security measures like HTTP Strict Transport Security (HSTS). All of these can be easily configured in their nice-looking dashboard. If you want to read more about the company's history, here is a good article about their humble beginnings.

Getting a Cloudflare account is straightforward. A walkthrough video of the initial setup process is available on their landing page. In a nutshell, the process only has three steps:

  • Signing up with your email address and password
  • Adding your domain
  • Pointing your domain's nameservers to Cloudflare's own nameservers

After going through those steps quickly, you will be presented with a modern, easy to use admin interface:
Cloudflare's dashboard

It will be impossible to discuss all of what Cloudflare has to offer in a single post, so I will just write about the tweaks that I did to suit my current self-hosted Raspberry Pi setup.

Crypto

I obtained my domain's SSL certificate through Let's Encrypt, a trusted certificate authority that issues certificates for free. Since I have my own certificate configured on NGINX, I do not need to use Cloudflare's free SSL. I just selected Full (Strict) mode under SSL and enabled HSTS, Opportunistic Encryption and Automatic HTTPS Rewrites.

Speed

I enabled Auto Minify for both JavaScript and CSS to optimize load times and save on bandwidth. I decided against minifying the HTML to preserve the blog's markup, which in my opinion is important for search engine optimization. I also enabled Accelerated Mobile Links support for a better mobile reading experience. They also have a beta feature called Rocket Loader™, which improves the load time of pages with JavaScript; it is off by default, but I decided to give it a try.

Caching

This is the feature that I needed the most. I clicked on this menu before I even explored the other settings above. I made sure Always Online™ is on, and made some minor adjustments to the Browser Cache Expiration.

Page Rules

Cloudflare gives you three page rules for free, and you can subscribe should you need more. Here's how I made use of my free page rules:

Cloudflare's Page Rule settings


Dynamic DNS Configuration

My blog's DNS records are now being handled by Cloudflare, so I need to make sure that they are updated automatically if my ISP gives me a new IP address.

The easiest way to achieve this is to install ddclient from Raspbian's default repository, along with the Perl dependencies:

sudo apt-get install ddclient libjson-any-perl

Unfortunately, this version of ddclient does not support Cloudflare's Dynamic DNS API. We need to download the current version here, and overwrite the executable installed by the previous command:

$ wget http://downloads.sourceforge.net/project/ddclient/ddclient/ddclient-3.8.3.tar.bz2

$ tar -jxvf ddclient-3.8.3.tar.bz2

$ sudo cp -f ddclient-3.8.3/ddclient /usr/sbin/ddclient

We installed the old version first to benefit from the daemon that comes with it. This daemon keeps ddclient running in the background and spawns it automatically after each reboot.

This new version of ddclient looks for the configuration file in a different directory, so we need to create that directory and move our old configuration file:

$ sudo mkdir /etc/ddclient
$ sudo mv /etc/ddclient.conf /etc/ddclient

Here's my ddclient.conf for reference:

# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf

protocol=cloudflare
zone=johncrisostomo.com
use=web
server=www.cloudflare.com
login=*Enter your cloudflare email address here*
password=*Enter your API key here*
blog.johncrisostomo.com

We can now restart ddclient and check its status to make sure that everything is working as expected:

$ sudo service ddclient restart
$ sudo service ddclient status -l

The last command should give you the current status of the daemon along with the latest event logs. Check the event logs for any error messages or warnings, and if everything turned out to be okay, you should see something similar to this: 

SUCCESS: blog.johncrisostomo.com -- Updated Successfully to xxx.xxx.xxx.xxx.



So far this setup works well and I am happy with the blog's performance. It is a shame that I did not gather data before moving to Cloudflare, so I cannot objectively measure the performance boost I am getting out of it. However, the blog's initial loading time has become noticeably faster, at least on my end. I guess we will have to see in the next couple of days.

Troubleshooting my Raspberry Pi's Wireless Issue

It has been almost a week since I decided to self-host my Ghost blog. It was a fun experience and, most importantly, I learned a lot of new things that I would not have known otherwise. On the less technical side, it inspired me to write more about my learning journey because not only does it solidify what I already know, it also drives me to learn more.

There is a little problem though. My Internet connection is flaky and it causes my blog to be sporadically down throughout the day. This is not intended to be a for-profit blog; however, seeing people share some of my posts while the blog was down was frustrating. I just had to do something about it. I observed the Pi's behavior by writing several Bash scripts and cron jobs that make sure these events are logged. Sifting through the logs after work, I found out that aside from the ISP problem, there was another odd phenomenon happening. Whenever my home router loses Internet connection, the Raspberry Pi loses its default gateway, and the problem persists even after rebooting the router.

My initial attempts to fix this issue were to mess with the resolv.conf and /etc/network/interfaces configuration files. I tried everything from manual to dhcp and even static. Nothing really fixed the issue and the Pi was still losing its default gateway route whenever the Internet connection went down. I finally solved this problem by writing a small Bash script:

#!/bin/bash

# Send a single ping to check whether the Internet is reachable
ping -c1 google.com > /dev/null

if [ $? -ne 0 ]
then
  # No reply: log the event and bounce the wireless interface
  echo `date` "No network connection, restarting wlan0" >> /home/uplogs.txt
  /sbin/ifdown 'wlan0'
  sleep 5
  /sbin/ifup --force 'wlan0'
else
  echo `date` "Internet seems to be up" >> /home/uplogs.txt
fi

The script pings google.com and then checks the exit code. If the ping exited with an error, the Pi restarts the wireless LAN interface. It also logs all these events so that I can check how reliable my Internet connection was throughout the day. It was a quick and dirty fix. Nothing fancy, but it works.
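
For the restart to happen automatically, the script has to run on a schedule. The original cron entry is not shown here, so the lines below are only a sketch: the path /home/pi/check_wlan.sh and the five-minute interval are assumptions, and the job belongs in root's crontab because ifdown/ifup need root privileges.

chmod +x /home/pi/check_wlan.sh
# add the following line with: sudo crontab -e
*/5 * * * * /home/pi/check_wlan.sh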

Weekend Project: Self-hosted blog & Docker in a Raspberry Pi

I received a Raspberry Pi 3 Model B last Christmas, but I did not know what to do with it, at least not yet. The problem has little to do with the Pi and more to do with the fact that most of the projects that I do can easily be solved with an Arduino.

When I stumbled upon this series of posts by Docker Captain Alex Ellis, I figured that this was a perfect opportunity to learn a tool I have always wanted to use. I know virtual machines well, but I had a hard time understanding how to make Docker fit into my workflow. The idea of containers that I cannot simply SSH into (I now know that you can exec bash to peek inside them, but that's not the point) just seemed absurd when I was first trying to use it. To be honest, it felt so complex and cumbersome that I just dismissed it as something that was not worth it. Well, it turned out that I did not understand the philosophy behind it. I would like to talk about it and discuss images and containers in depth, but I decided that it will be better to have a dedicated post for that. After getting my hands dirty with Docker last weekend, I can say that I have attained a working proficiency with it and I can comfortably use it for my projects from here on.

After three days, I finally got it to work. The blog that you are reading right now is hosted on a Raspberry Pi with Docker Engine installed. I have two Docker containers running: the Ghost blog and the NGINX server that handles the caching. It took a lot of trial and error before I finally got it to work; I did not have any prior knowledge of NGINX when I embarked on this weekend project. The Pi's limited hardware made building images painstakingly slow. Building SQLite3 from source for the ARM architecture was excruciating.

I will be sharing my Dockerfiles and some configuration below. I won't go into more detail right now, but I am hoping that I will have the time to do so in my next post. Some of these are directly forked/copied from [Alex](http://blog.alexellis.io)'s GitHub repositories; I could have pulled the images from Docker Hub or cloned the Dockerfiles, but I decided to train my muscle memory by typing the Dockerfiles manually. I still have a lot to learn about NGINX and Docker in particular, but I consider this blog a milestone.

Ghost Dockerfile

FROM alexellis2/node4.x-arm:latest

USER root
WORKDIR /var/www/
RUN mkdir -p ghost
RUN apt-get update && \
    apt-get -qy install wget unzip && \
    wget https://github.com/TryGhost/Ghost/releases/download/0.11.4/Ghost-0.11.4.zip && \
    unzip Ghost-*.zip -d ghost && \
    apt-get -y remove wget unzip && \
    rm -rf /var/lib/apt/lists/*

RUN useradd ghost -m -G www-data -s /bin/bash
RUN chown ghost:www-data .
RUN chown ghost:www-data ghost
RUN chown ghost:www-data -R ghost/*
RUN npm install -g pm2

USER ghost
WORKDIR /var/www/ghost
RUN /bin/bash -c "time (npm install sqlite3)"
RUN npm install

EXPOSE 2368
EXPOSE 2369
RUN ls && pwd

ENV NODE_ENV production

RUN sed -e s/127.0.0.1/0.0.0.0/g ./config.example.js > ./config.js
CMD ["pm2", "start", "index.js", "--name", "blog", "--no-daemon"]

Blog Dockerfile

FROM johncrisostomo/ghost-on-docker-arm:0.11.4

ADD Vapor /var/www/ghost/content/themes/Vapor

RUN sed -i s/my-ghost-blog.com/blog.johncrisostomo.com/g config.js

NGINX Dockerfile

FROM resin/rpi-raspbian:latest

RUN apt-get update && apt-get install -qy nginx

WORKDIR /etc/nginx/

RUN rm /var/www/html/index.nginx-debian.html && \
    rm sites-available/default && \
    rm sites-enabled/default && \
    rm nginx.conf

COPY nginx.conf /etc/nginx/

COPY johncrisostomo.com.conf conf.d/

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

johncrisostomo.com.conf

server {
  listen 80;
  server_name blog.johncrisostomo.com;
  access_log /var/log/nginx/blog.access.log;
  error_log /var/log/nginx/blog.error.log;

  location / {
    proxy_cache              blog_cache;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_ignore_headers     Cache-Control;
    proxy_cache_valid any    10m;
    proxy_cache_use_stale    error timeout http_500 http_502 http_503 http_504;

    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  Host      $http_host;
    proxy_pass        http://blog:2368;
  }
}
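
One thing the snippet above does not show is where blog_cache comes from: proxy_cache blog_cache; only works if the cache zone is declared in the nginx.conf that the Dockerfile copies in. That file is not reproduced here, so the following is only a sketch of the relevant directive, and the path, size, and expiry values are assumptions.

http {
  # shared cache zone referenced by "proxy_cache blog_cache;" in johncrisostomo.com.conf
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog_cache:8m max_size=100m inactive=60m;

  # ... the rest of the usual http-level settings and include directives go here
}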

docker-compose.yml

version: "2.0"
services:
  nginx:
    ports:
      - "80:80"
    build: "./nginx/"
    restart: always

  blog:
    ports:
      - "2368:2368"
    build: "./blog.johncrisostomo.com/"
    volumes:
      - ghost_apps:/var/www/ghost/content/apps
      - ghost_data:/var/www/ghost/content/data
      - ghost_images:/var/www/ghost/content/images
      - ghost_themes:/var/www/ghost/content/themes
    restart: always

volumes:
   ghost_apps:
      driver: local
   ghost_data:
      driver: local
   ghost_images:
      driver: local
   ghost_themes:
      driver: local
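
With the directory layout implied by the build: entries above, bringing the whole stack up is a standard Compose invocation, run from the directory that holds docker-compose.yml:

docker-compose up -d --build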

I have written several follow-up posts about this project. Feel free to check them out, as most of them cover troubleshooting and optimizations built on top of this project.