Troubleshooting my Raspberry Pi's Wireless Issue

It has been almost a week since I decided to self-host my Ghost blog. It was a fun experience and, most importantly, I learned a lot of new things that I would not have otherwise. On the less technical side, it inspired me to write more about my learning journey, because not only does writing solidify what I already know, it also drives me to learn more.

There is a little problem, though. My Internet connection is flaky, which causes my blog to go down sporadically throughout the day. This is not intended to be a for-profit blog; however, seeing people share some of my posts while my blog was down was frustrating. I just had to do something about it. I observed the Pi's behavior by writing several Bash scripts and cron jobs that make sure these events are logged. Sifting through the logs after work, I found that aside from the ISP problem, another odd phenomenon was happening: whenever my home router loses its Internet connection, the Raspberry Pi loses its default gateway, and the problem persists even after the router is rebooted.

My initial attempts to fix this issue involved messing with the resolv.conf and /etc/network/interfaces configuration files. I tried everything from manual to dhcp and even static. Nothing really fixed the issue; the Pi still lost its default gateway route whenever the Internet connection went down. I finally solved the problem by writing a small Bash script:


#!/bin/bash

ping -c1 > /dev/null

if [ $? != 0 ]; then
  echo `date` "No network connection, restarting wlan0" >> /home/uplogs.txt
  /sbin/ifdown 'wlan0'
  sleep 5
  /sbin/ifup --force 'wlan0'
  echo `date` "Internet seems to be up" >> /home/uplogs.txt
fi

The script pings a host and then checks the exit code. If the ping exited with an error, the Pi restarts the wireless LAN interface. It also logs these events so that I can check how reliable my Internet connection was throughout the day. It was a quick and dirty fix. Nothing fancy, but it works.
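To keep the check running in the background, the script can be scheduled with cron. This is only a sketch; the five-minute interval and the script path /home/pi/ are assumptions, not my actual setup:

```
# m   h  dom mon dow  command
*/5   *   *   *   *   /home/pi/
```

Anything the script appends to /home/uplogs.txt can then be reviewed after work, as described above.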

Getting started with tmux

I have been using tmux for several years now, and it has since become a central part of my workflow as a software developer. I am constantly writing code, executing shell commands and accessing server instances via SSH, and most of this happens in the terminal. I am always on the lookout for cool new tools that could potentially improve my workflow, so I checked tmux out. I knew I just had to get some hands-on experience with it to find out where it fits in my flow.

tmux is described as a terminal multiplexer. When I was first starting out, that big term added to its appeal. I thought it was leet, especially when I was still new to it. During my first days, I was using it solely for the sake of using it.

These days, it is so ingrained in my system that the first thing I do upon arriving at work is open a terminal window in full screen and set up the tmux windows I will be using throughout the day.

Before I start with the basic commands, though, I have to clear up a preconception that some people have about it. No, it does not manage your SSH connections. I need to stress this because a colleague of mine once told me it was an ancient tool and dismissed it as a trend among hipster developers. He said he was better off using PAC Manager for all his SSH needs. These tools are apples and oranges; they complement each other. I also use PAC Manager, because there is no way I will remember all the usernames and host addresses I need to work with throughout the day.

To give a simple description of what tmux is, think of it as a server that serves terminal sessions. It allows you to attach and detach from those sessions at will, and also gives other people a chance to attach to your existing tmux session. That is the main feature that makes it so awesome for everyone who works with remote machines. Let's say you have a VPS instance somewhere and you need to do some maintenance work. You SSH into your server, tell tmux to create a new terminal session and proceed with your work. After fifteen minutes or so, you remember that you have an important meeting to attend, but you aren't quite done with your work. As a contrived example, perhaps the server is running a vulnerability scan or building something from source. Since you are attached to a tmux session, you can just kill your SSH connection. In tmux terms, this is referred to as detaching. After the meeting, you can attach back to your session and be presented with exactly the same screen as when you left. This lets you see the scan results or the build progress without digging into logs or trying to remember how things were going before you left.

Another benefit of using tmux is that you will use your mouse less often once you get the hang of it. If you spend most of the day coding, reaching for the mouse to switch files or scroll through your code breaks your cadence. These are small personal idiosyncrasies; however, if you are plagued by the same quirk, you might want to learn Vim as well.

The good thing is that you only need to know a few commands to use tmux effectively. There are a whole lot of features and customization options available, but you can learn them along the way. If you have used Emacs before, these commands will make you feel at home, as the key combinations are somewhat similar.

Outside a tmux session

Creating a new session

tmux new -s [session name]

Listing sessions

tmux ls

Attaching to an existing session

tmux attach -t [session name]

Inside a tmux session

Splitting the screen vertically

Ctrl - b % 

Splitting the screen horizontally

Ctrl - b "

Pane Navigation

Ctrl - b arrow keys

Maximize a pane (from splitting)

Ctrl - b z

Closing a pane (from splitting)

Ctrl - d

Opening a new window

Ctrl - b c

Renaming a window

Ctrl - b ,

Window Navigation

Ctrl - b n


Ctrl - b p

Closing a window

Ctrl - b &

Detaching from a session

Ctrl - b d
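Beyond the defaults, tmux is customized through the ~/.tmux.conf file. Here is a minimal sketch to get you started; the settings shown are common personal preferences and illustrative only, not defaults you need to adopt:

```
# ~/.tmux.conf - a few common tweaks (illustrative)
set -g mouse on             # enable mouse for scrolling and pane selection
set -g base-index 1         # number windows starting at 1 instead of 0
set -g history-limit 10000  # keep a longer scrollback buffer
```

Reload it inside a running session with Ctrl - b : then `source-file ~/.tmux.conf`.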

I hope I have covered enough of the basics to get you started. Happy hacking!

Weekend Project: Self-hosted blog & Docker in a Raspberry Pi

I received a Raspberry Pi 3 Model B last Christmas, but I did not know what to do with it yet. The problem has little to do with the Pi and more with the fact that most of the projects I do can easily be solved with an Arduino.

When I stumbled upon this series of posts by Docker Captain Alex Ellis, I figured it was a perfect opportunity to learn a tool I have always wanted to use. I know virtual machines well, but I had a hard time understanding how to make Docker fit into my workflow. The idea of containers that I cannot simply SSH into (I now know that you can exec bash to peek inside them, but that's not the point) just seemed absurd when I first tried to use it. To be honest, it felt so complex and cumbersome that I dismissed it as something that was not worth it. Well, it turned out that I did not understand the philosophy behind it. I would like to discuss images and containers in depth, but I decided it would be better to have a dedicated post for that. After getting my hands dirty with Docker last weekend, I can say that I have attained a working proficiency with it and can comfortably use it for my projects from here on.

After three days, I finally got it to work. The blog that you are reading right now is hosted on a Raspberry Pi with Docker Engine installed. I have two Docker containers running: the Ghost blog and the NGINX server that handles the caching. It took a lot of trial and error before I finally got it working; I had no prior knowledge of NGINX when I embarked on this weekend project. The Pi's limited hardware made building images painstakingly slow. Building SQLite3 from source for the ARM architecture was excruciating.

I will be sharing my Dockerfiles and some configuration below. I won't go into more detail right now, but I am hoping to have the time to do so in my next post. Some of these are directly forked/copied from Alex's GitHub repositories; I could have pulled the images from Docker Hub or cloned the Dockerfiles, but I decided to train my muscle memory by typing them manually. I still have a lot to learn about NGINX and Docker in particular, but I consider this blog a milestone.

Ghost Dockerfile

FROM alexellis2/node4.x-arm:latest

USER root
WORKDIR /var/www/
RUN mkdir -p ghost
RUN apt-get update && \
    apt-get -qy install wget unzip && \
    wget && \
    unzip Ghost-*.zip -d ghost && \
    apt-get -y remove wget unzip && \
    rm -rf /var/lib/apt/lists/*

RUN useradd ghost -m -G www-data -s /bin/bash
RUN chown ghost:www-data .
RUN chown ghost:www-data ghost
RUN chown ghost:www-data -R ghost/*
RUN npm install -g pm2

USER ghost
WORKDIR /var/www/ghost
RUN /bin/bash -c "time (npm install sqlite3)"
RUN npm install

RUN ls && pwd

ENV NODE_ENV production

RUN sed -e s/ ./config.example.js > ./config.js
CMD ["pm2", "start", "index.js", "--name", "blog", "--no-daemon"]

Blog Dockerfile

FROM johncrisostomo/ghost-on-docker-arm:0.11.4

ADD Vapor /var/www/ghost/content/themes/Vapor

RUN sed -i s/ config.js

NGINX Dockerfile

FROM resin/rpi-raspbian:latest

RUN apt-get update && apt-get install -qy nginx

WORKDIR /etc/nginx/

RUN rm /var/www/html/index.nginx-debian.html && \
    rm sites-available/default && \
    rm sites-enabled/default && \
    rm nginx.conf

COPY nginx.conf /etc/nginx/

COPY conf.d/ /etc/nginx/conf.d/


CMD ["nginx", "-g", "daemon off;"]


NGINX site configuration

server {
  listen 80;
  access_log /var/log/nginx/blog.access.log;
  error_log /var/log/nginx/blog.error.log;

  location / {
    proxy_cache              blog_cache;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_ignore_headers     Cache-Control;
    proxy_cache_valid any    10m;
    proxy_cache_use_stale    error timeout http_500 http_502 http_503 http_504;

    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  Host      $http_host;
    proxy_pass        http://blog:2368;
  }
}

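One thing to note: the blog_cache zone referenced by proxy_cache has to be declared in the http block of nginx.conf. I am not reproducing my full nginx.conf here, but a minimal declaration looks something like this (the cache path and the size values are illustrative, not the ones from my setup):

```nginx
http {
  # Defines the shared memory zone "blog_cache" used by proxy_cache.
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog_cache:10m
                   max_size=100m inactive=60m;
}
```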

docker-compose.yml

version: "2.0"

services:
  nginx:
    ports:
      - "80:80"
    build: "./nginx/"
    restart: always

  # "blog" matches the upstream host used in the NGINX config (http://blog:2368)
  blog:
    ports:
      - "2368:2368"
    build: "./"
    volumes:
      - ghost_apps:/var/www/ghost/content/apps
      - ghost_data:/var/www/ghost/content/data
      - ghost_images:/var/www/ghost/content/images
      - ghost_themes:/var/www/ghost/content/themes
    restart: always

volumes:
  ghost_apps:
    driver: local
  ghost_data:
    driver: local
  ghost_images:
    driver: local
  ghost_themes:
    driver: local

I have written several follow-up posts about this project. Feel free to check them out, as most of them cover troubleshooting and optimizations built on top of this project.

Cebu Mechanical Keyboard Enthusiasts Meetup 8/6

I have been busy these past few weeks since I started working on a server monitoring application, which will be released later this year. Because of that, I have not had much time to blog and share the new things I have learned.

Despite the busy schedule, I managed to attend my first Mechanical Keyboard meetup at the Coffee Factory. Here are some of the awesome keyboards at the event.

Using Gagarin’s DDP Client to test Meteor methods, publications and subscriptions

In my previous post, we briefly went over unit testing in Meteor and Mantra using Sinon's spy and stub. We discussed the difference between the two functions and determined when to use them in our unit tests.

Today, we are going to go through basic integration testing with methods and publications/subscriptions using Gagarin's DDP client. Gagarin is the Mantra spec's recommended testing framework for integration testing. It is versatile and can do a lot more than what we are going to cover here, such as using chromedriver and Selenium for end-to-end testing.

About Gagarin

According to the project’s documentation,

Gagarin is a mocha based testing framework designed to be used with Meteor. It can spawn multiple instances of your meteor application and run the tests by executing commands on both server and client in realtime. In other words, Gagarin allows you to automate everything you could achieve manually with meteor shell, browser console and a lot of free time. There’s no magic. It just works.

Gagarin is based on Laika, another testing framework created by Arunoda. According to the documentation, it can be thought of as Laika 2.0, though it is not backward compatible. The main differences between Gagarin, Laika and Velocity can also be found on the documentation above.


We can simply install Gagarin by running

npm install -g gagarin

Once we have written some tests, we can just run this command at the root of our app directory:

gagarin

By default, it will look for files in the tests/gagarin/ directory. It will build our app inside .gagarin/local/, along with the database it will use for the duration of the test, which can be found at .gagarin/db/.

Now that we have a basic understanding of how to install and run Gagarin, let's proceed and look at the code snippets that we are going to test.

Meteor code snippets (for testing)

In order for us to understand what we are testing, the code snippets will be included here so we can easily reference the functions and how they are being tested. I have simplified these code snippets so that we can focus more on testing.


const Categories = new Mongo.Collection('categories');

Meteor.publish('categoriesList', () => {
  return Categories.find();

Meteor.publish('categoriesOwnedBy', (owner) => {
  check(owner, String);
  return Categories.find({owner: owner});

  'categoriesAdd'(data) {
    check(data, Object);
      owner: data.owner,
      createdAt: new Date(),
      modifiedAt: new Date(),
    });
  },

  'categoriesUpdate'(data) {
    check(data, Object);
    Categories.update(data._id, {
      $set: {
        owner: data.owner,
        modifiedAt: new Date(),
      },
    });
  },
});

Writing Gagarin Tests

Now that we have seen the code under test, we can start writing basic tests in a single JavaScript file inside tests/gagarin/. Because Gagarin is based on Mocha, it has the same describe/it structure. Chai's expect is also exposed for more semantic assertions.

Testing the categoriesAdd method

The test we are going to do first is to check whether or not we can add something to the categories collection.

describe('Categories', function() {
  var app = meteor({flavor: "fiber"});
  var client = ddp(app, {flavor: "fiber"});

  it('should be able to add', function() {'categoriesAdd', [{name: 'First category'}]);
    var categories = client.collection("categories");
  });
});

We are defining the initial describe block that we are going to use for this example. Gagarin gives us two useful global functions that are essential for running tests: meteor and ddp.

meteor is used to spawn a new Meteor instance that we have assigned to the app variable. Meteor uses fibers by default, so we need to specify it as the flavor. ddp allows a client to connect to the Meteor instance that we have just created by passing the reference of the instance and the flavor as its arguments.

Since we now have our Meteor app and our client configured, we are ready to proceed with our first test case: making sure that we can successfully add a new category.

Inside our it block, we are calling the Meteor method categoriesAdd. Gagarin provides our client with a handy call function that works exactly the same way as The only difference is that the arguments need to be inside an array, regardless of their number.

We then use the sleep function to add a little delay so that we can make sure that the new document comes to the client. We are subscribing to our categoriesList publication through the handy subscribe function of our client. Just like the call function, this is similar to Meteor.subscribe, which makes it very straightforward.

After subscribing to our publication, we now check if the document has been inserted by our Meteor method to the collection. We do that by calling the collection function of our client, passing the name of the MongoDB collection as an argument. It returns an object that looks like this:

{ Hpu6Z4h7ZFtC6Q77m:
   { _id: 'Hpu6Z4h7ZFtC6Q77m',
     name: 'First category',
     owner: null,
     modifiedAt: 2016-06-07T08:29:06.026Z,
     createdAt: 2016-06-07T08:29:06.026Z } }

It looks similar to something that we would get if we query our collection using find, aside from the fact that instead of getting back an array or a cursor, we are getting an object which has the _id field as a key.

We then use Chai's expect function to do a simple assertion, and that completes our first test. Object.keys has been used on the object returned by the collection function, so we can expect the resulting array to have a length of 1. This test assures us that the client can call our method and receive the document through our publication.
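Since the shape of that return value matters, here is a small standalone illustration (the _id below is a made-up example) of why Object.keys gives us the document count:

```javascript
// client.collection() returns a plain object keyed by _id, not an array.
var categories = {
  Hpu6Z4h7ZFtC6Q77m: { _id: 'Hpu6Z4h7ZFtC6Q77m', name: 'First category' }
};

// Object.keys turns the keys into an array, so its length is the
// number of documents currently available on the client.
console.log(Object.keys(categories).length); // 1
```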

Testing the categoriesUpdate method

Now that we have a basic test that checks whether we can insert and retrieve documents, the next thing we want to check is whether we can update a certain category in our collection. The process is similar to the previous section (this still goes inside the same describe block):

it('should be able to update', function() {
  var categories = client.collection("categories");
  var id = Object.keys(categories)[0];'categoriesUpdate', [{_id: id, name: 'updated category'}]);
  categories = client.collection("categories");
  expect(categories[id].name).to.equal('updated category');
});

The only new thing here is that we store the id of the category we want to update so we can use it when calling categoriesUpdate. We then check whether the name has been updated using expect.

Testing categoriesOwnedBy publication

The next thing we will test is the categoriesOwnedBy publication. Since we did not use the owner field in the previous examples, we will put this test in a separate describe block. That allows us to spawn new Meteor and database instances that have nothing to do with the previous ones.

describe('categoriesOwnedBy publication', function() {
  var app = meteor({flavor: "fiber"});
  var client = ddp(app, {flavor: "fiber"});

  it('should only publish a specific users category', function() {
    app.execute(function() {
      var categories = [
        { name: 'Category 1', owner: 'John' },
        { name: 'Category 2', owner: 'Jessica' },
        { name: 'Category 3', owner: 'John' }
      ];'categoriesAdd', categories[0]);'categoriesAdd', categories[1]);'categoriesAdd', categories[2]);
    });

    client.subscribe('categoriesOwnedBy', ['John']);
    var johnsCategories = client.collection('categories');
  });
});

This looks similar to our two previous examples, but this time I am using the execute function of our Meteor instance. It accepts a callback function as an argument, and the contents of that function are executed in the server context. Notice how we have access to inside this function?

We then go back to the client context and subscribe to our categoriesOwnedBy publication, passing 'John' as our argument. After fetching the contents of the collection, we check whether we get the expected number of documents published for that owner.

Running our test

If we run gagarin on the root folder of our application, we will get something similar to this:


Using the examples above, we have seen how to create simple integration tests with Gagarin on Meteor. These test cases might seem contrived, but the goal is to give an overview of how to use Gagarin's DDP client to perform basic integration tests that deal with Meteor methods, publications and subscriptions.

Using Sinon’s Spy and Stub in Mantra (Unit Testing)

With the release of Meteor 1.3, unit testing has never been easier in Meteor. Our team recently decided to adopt Arunoda's Mantra spec for developing Meteor applications. It is an application architecture that allows for a more modular approach, with a clear separation of concerns between the client and the server side. It has been a rough month for us since the spec is new and learning resources are limited. There was also a lot to learn, since adopting the Mantra spec meant learning React for the presentation logic instead of sticking with Blaze.

Unit testing is something that can be easily accomplished with the Mantra spec. Since it is modular and clearly separates the presentation logic from the business logic through containers, components and actions, everything can be unit tested. Meteor 1.3 also introduced native NPM support, which means familiar tools such as Mocha, Chai and Sinon can be imported in a straightforward manner. This is going to be my first blog post, so I am only going to explain Sinon's spy and stub methods as they are used in Arunoda's mantra-sample-blog application.


People who are new to unit testing tools in JavaScript (myself included) are often confused at first about the difference between Sinon's spy and stub. It turns out that a spy is the basic function in Sinon, and stubs and mocks are built on top of it.

According to the Sinon.JS documentation, a spy is

a function that records arguments, return value, the value of this and exception thrown (if any) for all its calls. A test spy can be an anonymous function or it can wrap an existing function

What this means is that a spy can be used as a replacement for an anonymous callback function, or you can wrap an existing function in it so that you can spy on its behavior. For example, if you have a function that accepts another function as an argument to be called back later after a certain condition, you can pass Sinon's spy() as the callback. You can then assert, or use Chai's expect(), to check that the function was called via the spy's calledOnce, and that the correct arguments were passed to it via calledWith(). You can check the different spy functions that are available here.

Now let’s look at how spies are used in the Mantra sample blog application. We are going to use the tests that were written for the posts action. Let’s check this code snippet out:

it('should call to save the post', () => {
  const Meteor = {uuid: () => 'id', call: spy()};
  const LocalState = {set: spy()};
  const FlowRouter = {go: spy()};

  actions.create({LocalState, Meteor, FlowRouter}, 't', 'c');
  const methodArgs =[0];

  expect(methodArgs.slice(0, 4)).to.deep.equal([
    'posts.create', 'id', 't', 'c'
  ]);
});

In the test block above, we are testing whether our action correctly invokes a Meteor Method on the server through The first three lines create local objects for Meteor, LocalState and FlowRouter to be used exclusively in this test case. In the Mantra spec, these are exported as the application context inside client/configs/context.js.

Actions in Mantra receive this application context as the first parameter. We are creating local objects in lieu of the real app context and by doing so, we can trick the action into thinking that it is receiving its first expected argument (which is the app context).

Next, we spy on how these objects are used inside the action under test. See how the Meteor object that we passed contains a call property, which is a Sinon spy() function? When the action gets invoked on Line 6, it will go ahead and invoke inside it. The Meteor object it receives is something we created for spying purposes, so we have access to the arguments that were passed to it when it was invoked (Lines 7 to 12). We use the arguments obtained through spying to verify that our function invokes with the correct arguments.

This is what the create action looks like, for reference:

  create({Meteor, LocalState, FlowRouter}, title, content) {
    if (!title || !content) {
      return LocalState.set('SAVING_ERROR', 'Title & Content are required!');
    }

    LocalState.set('SAVING_ERROR', null);

    const id = Meteor.uuid();'posts.create', id, title, content, (err) => {
      if (err) {
        return LocalState.set('SAVING_ERROR', err.message);
      }
    });
  },


Now that we are done with spies and have a basic understanding of how they work, let's move on to stubs. Stubs are just like spies; in fact, they support the entire spy() API, but they can do more than just observe a function's behavior. According to the Sinon API, stubs are:

functions (spies) with pre-programmed behavior. They support the full test spy API in addition to methods which can be used to alter the stub’s behavior.

and they should be used when you want to:

Control a method’s behavior from a test to force the code down a specific path. Examples include forcing a method to throw an error in order to test error handling.


When you want to prevent a specific method from being called directly (possibly because it triggers undesired behavior, such as a XMLHttpRequest or similar).

Okay, so where spies can observe how a function is called, the number of times it is called and the arguments sent with it, stubs can do all of that, plus you can programmatically control their behavior.

Let’s check how it is used on the post action test inside the sample blog application:

it('should set SAVING_ERROR if the method call fails', () => {
  const Meteor = {uuid: () => 'id', call: stub()};
  const LocalState = {set: spy()};
  const FlowRouter = {go: spy()};
  const err = {message: 'Oops'};, err);
  actions.create({LocalState, Meteor, FlowRouter}, 't', 'c');
  const args =[1];
  expect(args).to.deep.equal(['SAVING_ERROR', err.message]);
});

This particular test will check if our action will set an error message if something goes wrong after the Meteor Method call. Just like what we did with the spy example, we are setting local objects to be used as the context for our action function.

In Mantra, the LocalState is a Meteor reactive-dict data structure (a reactive dictionary) which is mainly used to handle the client side state of the app, although it is mostly used to store temporary error messages. We are creating a LocalState object here to mimic the app context’s LocalState. We are setting its set property as a spy function, so we can later see if our action will set the appropriate error message by checking the arguments that were passed into it.

Notice that this time, we are using a stub() instead of a spy() for our local Meteor object. The reason for this is that we are no longer just observing how it is going to be called, but we are also forcing it to respond in a specific way.

We are checking our action's behavior once the call to a remote Meteor method returns an error, and whether that error is stored in the LocalState accordingly. In order to do that, we need to reproduce that behavior by making the call() function in our local Meteor object return an error. That is something a spy() cannot do, since it can only observe. For this scenario, we use the stub's callsArgWith()* function to set our desired behavior (Line 6).

We give callsArgWith() two arguments: 4 and the err object that we defined in Line 5. This function makes our stub invoke the argument at index 4, passing err to whatever function is at that position. If you look at our create action above, is invoked with five arguments; the last one, at the fourth index, is a callback function:'posts.create', id, title, content, (err) => {
  if (err) {
    return LocalState.set('SAVING_ERROR', err.message);
  }
});

We have to remember that the being invoked here belongs to the local object that we created and passed explicitly into our create action for testing purposes. As such, it is the stub in action, and it doesn't know that the last argument it is invoked with is going to be a callback function, which is why we have to use callsArgWith() with the err object. Inside this callback, the create action stores the error message in the LocalState object that we passed in. Since the set() function of that LocalState object is a spy, we can conclude our test by checking whether the arguments passed to this spy function match the error message that we are expecting (Line 9).

This wraps up our discussion of how Sinon's spy and stub methods are used in Mantra unit testing. As a recap, a spy just observes a certain function, or can take the place of an anonymous callback function so we can observe its behavior. A stub does more than that by allowing us to pre-program a function's behavior. If I have provided any wrong information, please feel free to correct me in the comments. :)

*The callsArg and yields family of methods have been removed as of Sinon 1.8. They were replaced with the onCall API.

How To: Play MP3 and other codecs on Moblin 2.1

Moblin (short for Mobile Linux) is a Linux distribution designed by Intel to support multiple platforms and usage models, ranging from netbooks to Mobile Internet Devices (MIDs) to various embedded usage models, such as in-vehicle infotainment systems.

Moblin 2.1 was released recently; you can check out the screenshots here or watch the intro video here for a quick look at the Moblin 2.1 Netbook release. You can check for tested netbook models here. The full release notes and download link can be found here.

It looks promising, but the problem is that, as with many other Linux distributions, it does not play MP3 and other proprietary codecs out of the box for legal reasons. It only plays Ogg Vorbis audio and Ogg Theora video upon installation, and the GStreamer packages needed to play MP3 and other video codecs are not available from Moblin's repository or from the Moblin Garage. So we have to compile these packages from source.

DISCLAIMER: Try this at your own risk.

Step 1: Download the source code here. We need the following source packages (the same versions used in Step 3):

gstreamer-0.10.25
gst-plugins-base-0.10.25
gst-plugins-good-0.10.16
gst-plugins-bad-0.10.16
gst-plugins-ugly-0.10.13
gst-ffmpeg-0.10.9

After downloading these, extract them to a directory of your choice (e.g. /Home/Downloads).

Step 2: Open the terminal and type this command to download and install necessary development tools and build packages:

yum install gcc bison flex *glib* *diff* liboil*dev*

Step 3: Compile and build the source code. Using the terminal, use the cd command to navigate to the folder where you extracted the downloaded sources (e.g. cd /Home/Downloads). Then type these commands in order (press Enter after each line):

cd gstreamer-0.10.25
./configure --prefix=/usr && make && make install

cd ..

cd gst-plugins-base-0.10.25
./configure --prefix=/usr && make && make install

cd ..

cd gst-plugins-good-0.10.16
./configure --prefix=/usr && make && make install

cd ..

cd gst-plugins-bad-0.10.16
./configure --prefix=/usr && make && make install

cd ..

cd gst-plugins-ugly-0.10.13
./configure --prefix=/usr && make && make install

cd ..

cd gst-ffmpeg-0.10.9
./configure --prefix=/usr && make && make install

Then reboot and have fun with your media. I might write an automated script later to do all of this in one go, but I'm busy at the moment.

Random Linux Post

"'Free software' is a matter of liberty, not price. To understand the concept, you should think of "free" as in 'free speech,' not as in 'free beer.'"

We all grew up using one operating system, and I am pretty sure the new blood still does. Maybe not all of us, but in my generation, at least 99.8% grew up using one operating system. And yes, I am referring to Microsoft Windows.

Personally, I do not have any problem with Microsoft Windows. I grew up using it and the Macintosh, though I only used the Macintosh in my early years, as far back as Grade 2, as I remember. It was a lot easier to use compared to Windows, for it relied on graphics, unlike Windows, which tended to focus on words and phrases. Windows offered clear explanations for everything, though nothing beats the power of imagery and intuitive icons. If I am not mistaken, the Macintosh introduced the Graphical User Interface (GUI) first, for I remember my mom using Windows 3.1 back in the day, which still leaned heavily on the command line, while the Mac computer I used then already had a black-and-white GUI.

You might be wondering by now why I am talking about these two Operating Systems when I should be explaining why I switched over to a new one that only a few people know exists. Well, these two OSes are the most popular, and people find it peculiar that I switched over to a "not-so-popular" OS, at least where I live. The most common reaction I get is a smirk, followed by "Is Linux easy to use? Some people said they had a hard time with it," and some people even look at it as inferior to Microsoft and Macintosh and are really skeptical when it comes to performance. Well, here's my defense.

The notion that Linux is hard to use is like 20 summers ago. Some people still picture Linux as a pure Command Line Interface OS without a good GUI like the two popular OSes I mentioned above. If you are still under that impression, you might have been living under a rock for decades! I could say that Linux has a better GUI than any OS I have used, because it gives you choices. What do I mean? Instead of the usual taskbar with a Start menu and icons on the desktop, or the clean desktop with a familiar dock, in Linux there are several Desktop Environments to choose from, each with an array of features suited to your preference or hardware. For instance, there is the traditional GNOME desktop environment, which is common on most Linux distributions. There is also the K Desktop Environment, or KDE, which is targeted at new Linux users who are well accustomed to Microsoft Windows. And there is the XFCE desktop environment, which has become popular in the past few months; it is an extremely lightweight desktop environment that can bring an old computer back to life, for the applications bundled with it consume less memory and require less processing power. There are endless customizations that can be done on each of the desktop environments I have mentioned, so you won't get bored; you can get the look that you want and need. And you won't ever have to go to suspicious sites again looking for cracks or shared serial numbers for your software, since everything in Linux is free. Well, almost everything; only a few programmers charge for their programs, and they come really cheap when they do.

Speaking of cracks: in Linux, you don't need them, so there's no need to bother. I know some of us (and I should say a lot of us here) have used pirated software, pirated OSes, and all things not quite legal, and I should say I grew tired of it. It's like this: why would I use a commercial OS when I cannot really afford it? I mean, how much is OS X or Vista these days? And that's only the core OS; how about the additional productivity software, which is sold separately? Being street smart, we could always manage to get some "cracked" or "stripped" or, worse, "pirated" versions, but come on, show the programmers some respect; they certainly need the money, which is why they chose that line of work. And why lean on them when there are people out there willing to make authentic software for all of us to enjoy for free? All they need is support. And they are going to support us back.

Another thing that made me switch to Linux is speed. I know this statement might trigger a lot of grunts and "come on"s, since many believe the computer's hardware is responsible for that: if you have great specs, any OS would run great, and vice versa. That might be true, but I am sure a new Windows box runs like a charm for the first few days, and then after a week or so it starts to lag and slower boot times become noticeable. This is partly because there are thousands of known viruses and worms for Microsoft Windows, while there are only around 400 known for Linux and Mac OS. That's just a rough estimate.

Add all those useless services (programs that run in the background) that come pre-installed in Windows XP and you'll get a boot time close to five minutes.


The support system of Linux is really interesting. Instead of calling a number and paying for Tech Support Representatives, all you need to get support is an Internet connection. In Linux, you get support from the user community and not from hired technical people. You just need to register in your distribution's community forum and fire your questions there. Help would arrive within an hour or two, and within a few weeks you'll find yourself gaining familiarity with the Linux distribution you have chosen, hanging out in the community forums, and helping newcomers out.

There are a lot of great things I could mention about Linux, but let me clarify one thing: Linux is just the kernel, not the whole OS. The kernel together with the bundled software makes up a so-called distribution. Distributions that use the Linux kernel include Ubuntu, Fedora, Mandriva, Debian, OpenSUSE, etc. Ubuntu is among the most famous; it is said to be the most user-friendly, and I have to admit it has a great community. Personally, I use Fedora, not because Linus Torvalds himself and the computers at NASA use it, but because I guess I just got used to it and I feel uncomfortable using any other distribution.

Linux and open-source OSes and applications seem to be more popular now than ever, due to the sudden boom of netbooks and other low-priced portable devices that come with Linux pre-installed.

To end this post, I would say that GNU/Linux is the future of computing: it does not just give us free software, it also gives everyone an idea of how everything works inside the box.

Installing Cairo-Dock on the Acer Aspire One

Cairo-Dock is an OS X-ish application launcher that you can place on your desktop to replace your panel. Or, if you're like me, you can have both, which will make your desktop look similar to this [it is recommended to have at least 1GB of RAM, though]:

It is fairly easy once you have activated the standard XFCE desktop and set aside the Acer-modified desktop; instructions on how to do that can be found here. Of course, this assumes that you have already added the new RPM Fusion repositories; if you haven't yet, simple instructions can be found here.

Once we have the standard XFCE desktop, we need to activate Compiz [a pre-installed program that can add great 3D effects to your desktop] by installing Fusion-Icon. Just go to the Terminal and type in:

sudo yum install fusion-icon

After installation, we can run it by pressing Alt+F2 on your keyboard, typing in fusion-icon, and clicking Run. You will know it was successful if you see a new blue icon in your system tray [where the clock, etc. is]. You can right-click it to configure some effects that you might like to enable. In case you are wondering what an Emerald theme is, Emerald is a theme manager, and you can get themes for it by opening the package manager and searching for emerald-themes. Later in this guide, we are going to add fusion-icon and cairo-dock so that they run automatically upon startup.

Now that we have fusion-icon, all we need to do is get the cairo-dock RPM from any of the mirrors listed here. After downloading it, just double-click it and it will be installed automatically. You can find it in your menu under System, named Cairo-Dock. Click on it and the dock will appear on your desktop. Right-click it to personalize it; adding applications is as easy as dragging and dropping .desktop files from /usr/share/applications to the dock, or you can manually create launchers/subdocks/etc. if you want.

Themes for it are also available; try searching for cairo-dock-themes. I got my themes there, but I forgot the direct link; I'll update this later.

Now, if you notice, fusion-icon and cairo-dock do not open upon startup. This can easily be remedied by opening a Terminal and typing:


A new window should pop up; just add those two applications, the commands being cairo-dock and fusion-icon, respectively. And that's pretty much it.
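If that window is not available on your setup, the entries can also be created by hand; a sketch, assuming the desktop reads an XDG-style ~/.config/autostart directory (the stock Linpus setup may differ):

```shell
# Create autostart entries for fusion-icon and cairo-dock manually.
# Assumes an XDG-style autostart directory; adjust the path if your
# desktop reads a different location.
mkdir -p "$HOME/.config/autostart"
for app in fusion-icon cairo-dock; do
    cat > "$HOME/.config/autostart/$app.desktop" <<EOF
[Desktop Entry]
Type=Application
Name=$app
Exec=$app
EOF
done
```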

Have fun on your new desktop!

Installing Mozilla Thunderbird and Pidgin on Acer Aspire One

This How-To is specific to the Acer-modded Linpus Lite; please don't try this on an Acer Aspire One that has Microsoft Windows XP or Vista installed.

This How-To will guide you through installing Mozilla Thunderbird and Pidgin Messenger on your Aspire One and changing the desktop icons to the original ones. This will only work assuming you are still using AME, the e-mail client that came pre-installed with your Aspire One, and the Acer Messenger as well.

The first thing we need to do is uninstall AME by typing this command in the Terminal [Alt+F2, then type Terminal and click Run]:

sudo yum remove evolution-data-server libpurple

When the terminal is finally done performing those tasks, we can go ahead and install Pidgin and Thunderbird using pirut or, in my case, the Smart Package Manager [assuming that you have already signed the keys using this command: sudo yum update fedora-release]. Open pirut or Smart and search for Pidgin and then Thunderbird. After we're done with that, we're going to associate both programs with the default Mail and Messenger icons by typing these commands in the Terminal:

cd /usr/acer/bin

sudo ln -s /usr/bin/thunderbird AME

sudo ln -s /usr/bin/pidgin UIM
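To double-check that the links ended up pointing at the right binaries, a small (hypothetical) helper can compare a symlink with its expected target:

```shell
# Hypothetical sanity check: report whether a symlink resolves to the
# expected target. Usage: check_link LINK EXPECTED_TARGET
check_link() {
    if [ "$(readlink "$1")" = "$2" ]; then
        echo "ok: $1 -> $2"
    else
        echo "mismatch: $1 does not point to $2"
    fi
}
# e.g.: check_link /usr/acer/bin/AME /usr/bin/thunderbird
```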

Well, actually, it is as easy as that, and we are done. You should now be able to launch Mozilla Thunderbird and Pidgin using the default icons, but in case you are unhappy with those icons and want the original ones, don't worry, because we can change them by going to the Terminal and typing:

sudo mousepad /usr/share/applications/AME.desktop

It should open a text editor with a lot of text in it. Just in case you want to label the icon differently, say change it from E-mail to Thunderbird, replace the text after Name= and GenericName= with your preferred name. Now let's get back to the icon. When we scroll down, we should see a line that says Icon=; just replace its value with thunderbird.png, save, and we're done. Sad to say, for Pidgin it has to be done in a different way, and I'll cover that in my next post, because it involves tweaking group-app.xml, and a single mistake can ruin your desktop.
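The same Icon= edit can also be made non-interactively with sed; a sketch, written as a helper so the file path is explicit (run it with sudo for files under /usr):

```shell
# Hypothetical helper: rewrite the Icon= line of a .desktop file in place.
# Usage: set_desktop_icon FILE ICON  (needs sudo for system files)
set_desktop_icon() {
    sed -i "s|^Icon=.*|Icon=$2|" "$1"
}
# e.g.: set_desktop_icon /usr/share/applications/AME.desktop thunderbird.png
```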

If you're already satisfied, then that's all; however, for additional info, you can read on below.

So now we have Mozilla Thunderbird and its original icon; what next? Well, this is not strictly necessary, but in case you noticed, the Mozilla Thunderbird we just installed does not update itself automatically, and if we check the Help menu, Check for Updates is grayed out. That is because this version of Thunderbird comes from the repository of Fedora 8 [the Linux distribution Linpus is based on]. To fix this, we will grab the official release from Mozilla and install it in the /opt directory with these steps in the Terminal. This is how:

wget ""

sudo tar -xvf thunderbird- --directory /opt

And then a lot of unpacking happens. After it's done we can type this again in the Terminal:

sudo chown -R user /opt/thunderbird

sudo mousepad /usr/share/applications/AME.desktop

And we just need to change the Exec= line to look just like this:


That's pretty much it. But if it bothers you to have two Mozilla Thunderbirds installed on your Aspire One, we can delete the old one via pirut or Smart. That will delete the icon as well, which can be remedied easily by searching Google for the keywords I used, 'thunderbird.png 64x64'. Just copy the image to your Downloads folder and then move it to the pixmaps directory using this command:

sudo cp /home/user/Downloads/thunderbird.png /usr/share/pixmaps

After that, we're done. Have fun!