tag:blog.johncrisostomo.com,2013:/posts john crisostomo's blog 2019-02-28T07:08:14Z John Crisostomo tag:blog.johncrisostomo.com,2013:Post/1379481 2018-12-19T16:00:00Z 2019-02-28T07:08:14Z Tailing the CosmosDB Change Feed

This will hopefully be a short post about how to listen to changes from your CosmosDB SQL API instance using the Node SDK. For some reason, it reminds me of the oplog tailing mechanism in MongoDB that Meteor used in the past to achieve real-time updates, hence this article's title.

I do not have much time right now, but let us talk a bit about my use case for this. I am currently working on an IoT project that uses several Azure technologies as its infrastructure. As you might have guessed, it pipes data from the IoT sensors into Azure Event Hubs. The data then gets ingested by Azure Databricks, ultimately ending up in an intermediate data warehouse (for dashboards and further analysis) and a NoSQL application data store (coming up next).

This NoSQL application data store is the CosmosDB instance that we are going to tail. The objective was to push changes from this data store to a mobile application in real time, or at least near real time (as opposed to long polling or querying at set intervals). To make a long story short, I ended up tailing CosmosDB's Change Feed in a GraphQL service application to make it easier for the client application to implement a PubSub system. More about this in an older post that I published early this year.

Before we dig into the code, let me just say that we are not going to go through how to initialize your CosmosDB collection or how to write a facade / repository class in Node.js. We will go straight to the Change Feed part, but first, make sure you have the correct Node SDK in your project:

npm install --save @azure/cosmos

Once you have set up your CosmosDB boilerplate code in Node, you can access the change feed iterator using the code below:

const changeFeed = container.items.readChangeFeed(partitionKey, options);
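For reference, the boilerplate can be as small as the sketch below; the endpoint, key, and database/container names are placeholders for your own values:

const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({
  endpoint: "https://your-account.documents.azure.com",
  key: "<your-primary-key>"
});

// the container whose change feed we want to tail
const container = client.database("your-database").container("your-container");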

If you need to brush up on iterators in JavaScript, you can check out this post that I wrote last year.

And that's it! You now have a change feed iterator that lets you listen to changes from CosmosDB, but let us see what options we have for doing so. The Node SDK currently gives us four ways of listening to the change feed:

1. Start from now

const options = {};

2. Start from a continuation

const { headers } = await container.items.create({ ... });
const lsn = headers["lsn"];
const options = { continuation: lsn };
// I have not used this specific method yet, this example is from here.

3. Start from a specific point in time

const options = { startTime: new Date() }; // any Date object will do

4. Start from the beginning

const options = { startFromBeginning: true };

As all of these are self-explanatory, we will go ahead and use our changeFeed iterator. To retrieve the next value, we can use await changeFeed.executeNext() or we can loop through the next values like this:

while (changeFeed.hasMoreResults) {
  const { result } = await changeFeed.executeNext();
  // do what you want with the result / changes
}

Reading the source code of the Node SDK revealed that it also exposes a real generator (the function signature being public async *getAsyncIterator(): AsyncIterable<ChangeFeedResponse<Array<T & Resource>>>). This would have allowed a more elegant for await...of construct, but unfortunately I bumped into a few issues regarding Symbols when I tried it. If you have used it in the past, please feel free to share in the comments!
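For the curious, the usage I was going for looked roughly like the sketch below; it is untested on my end because of the Symbol issues mentioned above, and it assumes getAsyncIterator() behaves as its signature suggests:

async function tailChangeFeed() {
  for await (const response of changeFeed.getAsyncIterator()) {
    // each response carries a batch of changed documents in response.result
    console.log(response.result);
  }
}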

That will be all for now, and I hope you learned something in this post.

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379490 2018-02-20T16:00:00Z 2019-02-28T05:02:19Z Real-time GraphQL Subscriptions Part 1: Server

In this post, we will not go through GraphQL from the ground up. My assumption is that you are here to learn how to implement real-time functionality through the graphql-subscriptions package and that you already have a basic understanding of GraphQL types, queries and mutations.

We are going to use Apollo for this tutorial. Apollo was made by the same people who made Meteor; it is a bit opinionated, but it is also arguably one of the most popular full-featured GraphQL libraries around. We will also use React and create-react-app to bootstrap our client application in the second part of this tutorial. That being said, some knowledge of higher-order components is also assumed (in Part 2).

Server Boilerplate

Let's start outlining our backend. Initialize a Node project by issuing npm init on your preferred folder, and then install dependencies like so:

npm i --save express body-parser cors graphql graphql-tools apollo-server-express

Next, create the three files that we will use for this short tutorial:

touch index.js resolvers.js schema.js

Next, in schema.js, we will define a type, a root query and a mutation that we will use for our subscription:

const { makeExecutableSchema } = require('graphql-tools');
const resolvers = require('./resolvers');

const typeDefs = `
  type Message {
    message: String
  }

  type Query {
    getMessages: [Message]
  }

  type Mutation {
    addMessage(message: String!): [Message]
  }

  schema {
    query: Query
    mutation: Mutation
  }
`;

module.exports = makeExecutableSchema({ typeDefs, resolvers });

Okay, so at this point, we have the schema for a GraphQL server that allows you to send a mutation to add a Message, and a query that allows you to fetch all messages on the server. Let's implement resolvers.js so we can start using our schema:

const messages = [];

const resolvers = {
  Query: {
    getMessages(parentValue, params) {
      return messages;
    }
  },
  Mutation: {
    addMessage(parentValue, { message }) {
      messages.push({ message });
      return messages;
    }
  }
};

module.exports = resolvers;

Oh shoot. We have defined a schema and the functions that will resolve their return values, but we have not set our server up. At least not yet. We are going to use express and apollo-server-express to serve our GraphQL implementation over HTTP in index.js:

const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const { graphqlExpress, graphiqlExpress } = require('apollo-server-express');
const { createServer } = require('http');

const schema = require('./schema');

const app = express();

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

app.use(cors());
app.use(
  '/graphql',
  graphqlExpress({
    schema
  })
);

app.use(
  '/graphiql',
  graphiqlExpress({
    endpointURL: '/graphql'
  })
);

const PORT = process.env.PORT || 3030;

const server = createServer(app);
server.listen(PORT, () => {
  console.log(`Server now running at port ${PORT}`);
});

We now have a working GraphQL server running at http://localhost:3030/graphql after issuing node index.js. Since we have configured the interactive GraphiQL as well, we can explore our schema and issue some sample queries and mutations at http://localhost:3030/graphiql:
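For example, this mutation and query pair should work right away (the message text is arbitrary):

mutation {
  addMessage(message: "Hello GraphQL!") {
    message
  }
}

query {
  getMessages {
    message
  }
}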

Adding real-time through Subscriptions and PubSub

Server Configuration

Our simple GraphQL server is running. That means we can now proceed to the interesting part: implementing real-time updates through Apollo PubSub. As with most modern real-time frameworks, the implementation is done on top of WebSockets. We need to install additional dependencies to make use of this transport layer:

npm i --save graphql-subscriptions subscriptions-transport-ws

We then need to make use of these libraries to enable WebSockets support on index.js:

const { execute, subscribe } = require('graphql');
const { SubscriptionServer } = require('subscriptions-transport-ws');

. . .

const server = createServer(app);
server.listen(PORT, () => {
    console.log(`Server now running at port ${PORT}`);
    new SubscriptionServer(
        {
            execute,
            subscribe,
            schema
        },
        {
            server,
            path: '/subscriptions'
        }
    );
});

Let's modify our /graphiql endpoint as well to make use of our new transport layer, so we can demonstrate that this is working through Graphiql once we are done:

app.use(
    '/graphiql',
    graphiqlExpress({
        endpointURL: '/graphql',
        subscriptionsEndpoint: 'ws://localhost:3030/subscriptions'
    })
);

That's it for the server setup! Let's proceed on fleshing out the subscription implementation.

Defining Subscriptions

In GraphQL, a subscription is just a type, pretty much like query and mutation. Go ahead and define our subscription in schema.js:

  const typeDefs = `
  
  . . .
  
  type Subscription {
    newMessageAdded: Message
  }

  schema {
    query: Query
    mutation: Mutation
    subscription: Subscription
  }
`;

We have just defined our first subscription. It will allow applications or clients to subscribe and receive updates whenever new messages are added (through a mutation). Just to make sure everything is working correctly, visit the Documentation Explorer in GraphiQL and you should now be able to see Subscription and newMessageAdded:

If there are no errors and you can see the definition above, then we are ready to make this work by, you guessed it, implementing the resolver function for newMessageAdded.

Implementing the Subscription and Publishing Messages

With the transport configuration and the type definitions done, the only thing we need to do now is to implement newMessageAdded and the actual message publication. The flow will be like this:

1. A client will subscribe to `newMessageAdded`
2. Every time our `addMessage` mutation is queried, we will publish a message to `newMessageAdded`, using the new `message` as the payload.

We need to tweak our resolvers.js to import helpers from graphql-subscriptions. We will use them to implement our newMessageAdded subscription query:

const { PubSub, withFilter } = require('graphql-subscriptions');
const pubsub = new PubSub();

. . .

const resolvers = {
  Query: {
    . . .
  },
  Mutation: {
    . . .
  },
  Subscription: {
    newMessageAdded: {
      subscribe: withFilter(
        () => pubsub.asyncIterator('newMessageAdded'),
        (params, variables) => true
      )
    }
  }
};

module.exports = resolvers;

We just implemented our first subscription query! Every time our server publishes a message to newMessageAdded, clients that are subscribed will get the published payload.

As an aside, the helper function withFilter is not actually required in our example (just subscribe: () => pubsub.asyncIterator('newMessageAdded') will do for this tutorial), but I figured it will be helpful if you want to try something more useful with this whole pubsub ordeal, like, say, a classic chat app.
The second function that you pass as an argument to withFilter allows you to filter out the subscribers who will receive the message. This is done by using the fields in the actual payload that is about to get published (params) and the GraphQL query variables from the subscription (variables). All you need to do is return a truthy value if you want the payload sent to a particular subscriber. It will look roughly similar to this: return params.receiverId === variables.userId. Of course, that is assuming that a query variable called userId was sent along with the subscription.

Since we do not have an application that will subscribe to our server yet, why don't we try this out with Graphiql?
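In the GraphiQL query pane, the subscription looks like any other operation:

subscription {
  newMessageAdded {
    message
  }
}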

If the subscription runs without errors and sits there waiting for data, great! Everything is working. But if we do not publish anything anywhere on our server, nothing will ever arrive. Yep, we are about to do just that.

In fact, we just need to add one line to our addMessage resolver:

  Mutation: {
    addMessage(parentValue, { message }) {
      messages.push({ message });
      
      // blame prettier for not making this a one-liner 
      pubsub.publish('newMessageAdded', {
        newMessageAdded: { message }
      });
      
      return messages;
    }
  }

We can now test this using Graphiql on two browser windows. The first browser will be the subscriber, and the second one will send the mutations:

As soon as you send an addMessage mutation on the second browser, the first browser receives the message and displays it instantly! How cool is that? Let's wrap up what we learned in this short tutorial.

Wrap up

In this tutorial, we learned how to set up subscriptions and publish messages to subscribers using graphql-subscriptions. In the next part of this tutorial, we will use apollo-client with react-apollo to see how this works with a real application as the subscriber.

The complete source code for this tutorial can be found here.

If you encountered any errors or have any questions about this, let me know in the comments section below!

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379503 2017-10-03T16:00:00Z 2019-02-28T05:14:31Z Installing MongoDB 3.4 (with SSL) on Ubuntu 16.04 (MS Azure)

Hey everyone. I know it has been a while since I wrote something. I have been busy with multiple large-scale projects during the past few months, so I was almost always too tired at the end of the day to compose a new entry. I also had to relocate; I think the adjustment phase also took a lot of my time and energy. Anyway, what I am going to try to do now is write short, straight-to-the-point tutorials about how to do specific tasks (as opposed to more detailed, wordy posts). I will still write the elaborate ones, but I will be focusing on consistency for now. I have been working on a lot of interesting problems and relevant technologies at work, and I just feel guilty that I do not have enough strength left at the end of the day to document them all.

Let us start with this simple topic just to get back into the habit of writing publicly. I have been configuring Linux VMs for a while now, but I have not really written anything about it, aside from my series of Raspberry Pi posts. Also, it is my first time working with the Azure platform, so I thought that it might be interesting to write about this today.

This tutorial will assume that the Ubuntu 16.04 VM is already running and you can SSH properly into the box with a sudoer account.

The Basics: Installing MongoDB

You can read about the official steps here. If you prefer looking at just one post to copy and paste code in sequence, I will still provide the instructions below.

Add the MongoDB public key

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6

Add MongoDB to apt's sources list

echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list

Update apt repository and install MongoDB

sudo apt-get update && sudo apt-get install -y mongodb-org

Run and check if MongoDB is running properly

sudo service mongod start
tail -f /var/log/mongodb/mongod.log

If everything went well, you should see something like this:

2017-10-04T01:18:51.854+0000 I NETWORK [thread1] waiting for connections on port 27017

If so, let's continue with the next steps!

Create a root user

I will not get into the details of how to create and manage MongoDB databases and collections here, but let us go through the process of creating a root user so we can manage our database installation remotely through this user.

Connect to MongoDB CLI

mongo

Use the admin database

use admin

Create admin user

db.createUser(
    {
      user: "superadmin",
      pwd: "password123",
      roles: [ "root" ]
    }
)
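To quickly check that the new user works, you can authenticate from the same shell; a successful call returns 1:

db.auth("superadmin", "password123")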

SSL and some network-related configuration

Now that we have MongoDB installed and running, we need to make some changes to the mongod.conf file to enable SSL and to make our MongoDB installation accessible on our VM's public IP and chosen port.

SSL Certificates

Creating a self-signed certificate

If you already have a certificate or you just bought one for your database for production use, feel free to skip this step. I am just adding this for people who are still experimenting and want SSL enabled from the start. More information regarding this can be found here.

This self-signed certificate will be valid for one year.

sudo openssl req -newkey rsa:2048 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key

Create .pem certificate

This .pem certificate is the one that we will use on our mongod.conf configuration file. This command will save it on your home directory (/home/<username>/mongodb.pem or ~/mongodb.pem).

cat mongodb-cert.key mongodb-cert.crt > ~/mongodb.pem

MongoDB Configuration

Now that we have our self-signed certificate and admin user ready, we can go ahead and tweak our MongoDB configuration file to bind our IP, change the port our database will use (if you want to), enable SSL and enable authorization.

I use vim whenever I am dealing with config files via SSH; you can use your favorite text editor for this one.

sudo vim /etc/mongod.conf

Make sure to change the following lines to look like this:

net:
  port: 27017
  bindIp: 0.0.0.0
  ssl:
    mode: requireSSL
    PEMKeyFile: /home/<username>/mongodb.pem

security:
  authorization: enabled

Restart the MongoDB service:

sudo service mongod restart

If we go ahead and print the MongoDB logs like we did earlier, we should be able to see something that looks like this (notice that there's an SSL now):

2017-10-04T01:18:51.854+0000 I NETWORK [thread1] waiting for connections on port 27017 ssl

If you got that, it means that everything is working fine. We just need to add one more command to make sure that our MongoDB service will restart across VM reboots. systemctl will take care of that for us:

sudo systemctl enable mongod.service

Azure Firewall

Now, if you try to connect to your database using your favorite MongoDB database viewer or the mongo CLI on your local machine, you might notice that you will not be able to connect. That's because we need to add an Inbound security rule on the Azure portal first.

Once on the Dashboard, click on All Resources.
Azure Portal Dashboard

Click on the Network Security Group associated with your VM.

Azure Portal Inbound Security Rules

From here, you can see a summary of all the security rules you have for your virtual network. Click on Inbound security rules under Settings on the left pane.

Azure Portal Network Security Group Settings

Click Add. You should see a form with a lot of fields. We are using MongoDB's default port, so we can just click on Basic at the top to select from a list of preset ports.

Basic Inbound security rules form

Just click on OK, and we are done! You can start connecting to your MongoDB installation using your tool of choice.
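For example, connecting with the mongo shell from your local machine would look roughly like this (replace the IP and credentials with your own; the flag that skips certificate validation is only there because we are using a self-signed certificate):

mongo --host <your-vm-public-ip> --port 27017 --ssl --sslAllowInvalidCertificates -u "superadmin" -p "password123" --authenticationDatabase admin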

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379506 2017-03-19T16:00:00Z 2019-02-28T05:25:49Z Implementing Token-Based Authentication With jwt-simple

In this post, we will talk about JSON Web Tokens, most commonly known by the acronym JWT. If you have done any web development work in the last few years, you must have heard of it, or even used a package that relies on JWT to implement a token-based authentication mechanism under the hood.

We will examine what a JWT is and describe what comprises a valid token. Next, we will implement basic authentication using Node/Express and the jwt-simple package.

What is JWT?

According to the comprehensive Introduction to JSON Web Tokens:

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.

JWT is said to be compact because it uses JSON, which is pretty much how every web application these days passes data between consumers and other APIs. That means that a JWT can easily be passed around as a query parameter, through a POST request, or through request headers. Being self-contained adds to its portability, because the token can carry the needed information in itself. We will see this in practice in our small Express application.

Anatomy of JSON Web Tokens

A JSON Web Token is made up of three parts that are separated by dots. The first two parts are called the Header and the Payload, respectively. Both of them are Base64-encoded JSON objects that contain several pieces of information that we are going to briefly discuss below.

The Header object contains the type of the token and the signing algorithm used. Since we are going to create a basic authentication mechanism on an Express app, the type is JWT and the signature will be a keyed-hash message authentication code (HMAC). Since we will use a package that simplifies the encoding and decoding of our tokens, there is no need to set this explicitly, and we will stick with the default, which is HMAC SHA256.

The Payload contains what the specification refers to as claims. They are pieces of information that can be attached to the token for identification or verification purposes. Claims are further categorized as Registered Claims, Public Claims and Private Claims. In our example app, we will use Registered Claims to identify our application as the Issuer of the token and to set its expiry. We will also make use of the user's name and email as Public Claims.

Now that we have discussed the first and the second part of a JWT, it is time for the third one, which is called the Signature. Once we have the Header and the Payload properly encoded as Base64 strings, they need to be concatenated with a dot and then hashed with the app secret. This process produces the token's signature. The secret can be any string but, as the name suggests, keep it secret, because anyone who has it can sign and verify tokens on your behalf.
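Conceptually, the signature is computed like this (pseudocode, not actual jwt-simple code):

HMACSHA256(
  base64UrlEncode(header) + "." + base64UrlEncode(payload),
  secret
)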

Here's an example token:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE0ODk5OTEyNjI3NTIsImlzcyI6IkpvaG4gQ3Jpc29zdG9tbyIsIm5hbWUiOiJjcmlzb3N0b21vIiwiZW1haWwiOiJjcmlzb3N0b21vQGpvaG4uY29tIn0._CP8KU_AX4XNJKyxD561LTiFbY0HcPFKRgI1AztGMsI

Notice the dots that separate the three parts of the token. To wrap this section up, and as a review: the first two parts are the Base64-encoded JSON objects that contain information about the user and our application. The third part is the hashed version of the first two parts, with the application secret used as the hash key.


Application Demo

It is now time for the application demo. At this point, we already have a good grasp of what a JSON Web Token is and its parts. We are now ready to put this into practice by creating a demo application to solidify the concepts that we have learned. Before we start, a word of precaution:

The example app that we will build in this section is for the sole purpose of understanding how JWT can be used to implement barebones token-based authentication. Please do not use this example in production. There are better packages out there that use jwt-simple under the hood and make this process foolproof.

Dependencies
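For this demo, we will only need a handful of packages: express, body-parser, jwt-simple for encoding and decoding tokens, and moment for handling the token expiry. Install them with:

npm install --save express body-parser jwt-simple moment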

Creating the user store and the token store

Since this is a fairly small project, we will not use any real databases. Instead, the users will be hard coded in an array, as well as the tokens. We will create two files to implement these functionalities in this section.

USERS.JS

const users = [
  { _id: 1, name: "john", email: "john@crisostomo.com", password: "john12345" },
  { _id: 2, name: "crisostomo", email: "crisostomo@john.com", password: "crisostomo12345" },
];

function validateUser(username, password) {
  const user = users.find((user) => {
    return user.name === username && user.password === password;
  });

  return user;
}

module.exports = { validateUser };

TOKENS.JS

const tokens = {};

module.exports = {
  add: function(token, payload) {
    tokens[token] = payload;
  },

  isValid: function(token) {
    if (!tokens[token]) {
      return false; 
    }

    if (tokens[token].exp <= new Date()) {
      delete tokens[token];
      return false;
    } else {
      return true;
    }
  }
}

In our users.js file, we exposed a convenience method that lets us easily validate a user by searching through our users array. Our tokens.js file allows us to add a token with its associated payload. It also has a method that checks a token's validity.

Creating our application

This is where we create our application. Our app will have two entry points: one for accessing a restricted route, and another one where we can obtain tokens for registered users. The endpoints for these functionalities are /secretInfo and /token.

On a high level, we can obtain a valid token if we send a POST request to the /token endpoint with valid user credentials. This token can then be used to access the information at /secretInfo.

The first thing that we need to do is to require the dependencies mentioned above, and set the server to run at port 8080:

const express = require('express');
const bodyParser = require('body-parser');
const jwt = require('jwt-simple');
const moment = require('moment');
const users = require('./users');
const tokens = require('./tokens');

const app = express();
app.use(bodyParser.json());

const jwtAttributes = {
  SECRET: 'this_will_be_used_for_hashing_signature',
  ISSUER: 'John Crisostomo', 
  HEADER: 'x-jc-token', 
  EXPIRY: 120,
};

app.listen(8080);

console.log('JWT Example is now listening on :8080');

This sets up all our dependencies and imports our user and token stores. We also declared an object called jwtAttributes. This object contains the claims that will be used for our token, as well as some other attributes like the app secret and the header key. At this point, the server will run but will not do anything, because we have not implemented any routes or endpoints.

Let us start implementing the /token endpoint.

// AUTH MIDDLEWARE FOR /token ENDPOINT
const auth = function (req, res) {
  const { EXPIRY, ISSUER, SECRET } = jwtAttributes;

  if (req.body) {
    const user = users.validateUser(req.body.name, req.body.password);
    if (user) {
      const expires = moment().add(EXPIRY, 'seconds')
        .valueOf();
      
      const payload = {
        exp: expires,
        iss: ISSUER,
        name: user.name,
        email: user.email, 
      };

      const token = jwt.encode(payload, SECRET);

      tokens.add(token, payload);

      res.json({ token });
    } else {
      res.sendStatus(401);
    }
  } else {
    res.sendStatus(401);
  }
};

app.post('/token', auth, (req, res) => {
  res.send('token');
});

Before we set up our route for the /token endpoint, we created the authentication middleware. It checks if the request has a body and tries to find a user with a matching password in our user store. This middleware could use more validation, but I am keeping it simple to make our example less cluttered.

If a user is found, it sets the token's expiry with the help of moment and the amount of time defined in our jwtAttributes object. Next, we construct our payload. Notice that we have two registered claims, exp and iss, which stand for expiry and issuer, and two public claims, which are the user's name and email.

Finally, the encode method of the jwt-simple package abstracts the process of encoding our payload. It generates our token by concatenating the encoded header and payload and signing them with the app secret. If the request's body is invalid or if the user/password combination is not found in our store, we return a 401 Unauthorized response. The same goes for blank requests, too.

Time for the /secretInfo endpoint.

// VALIDATE MIDDLEWARE FOR /secretInfo
const validate = function (req, res, next) {
  const { HEADER, SECRET } = jwtAttributes;

  const token = req.headers[HEADER];

  if (!token) {
    res.statusMessage = 'Unauthorized: Token not found';
    res.sendStatus(401);
  } else {
    try {
      const decodedToken = jwt.decode(token, SECRET);
    } catch(e) {
      res.statusMessage = 'Unauthorized: Invalid token';
      res.sendStatus(401);
      return;
    }
    
    if (!tokens.isValid(token)) {
      res.statusMessage = 'Unauthorized: Token is either invalid or expired';
      res.sendStatus(401);
      return;
    }
    next(); 
  }
};

app.get('/secretInfo', validate, (req, res) => {
  res.send('The secret of life is 42.');
});

Similar to our /token endpoint, we start by implementing our validate middleware. It checks if a token exists in the header, then jwt-simple decodes the token. The token then gets validated through our token store's isValid method. If the token is found and is not yet expired, we call the next handler, and the secret message is sent. Otherwise, we send a 401 Unauthorized response.

Now that we have finished implementing both endpoints, we can proceed to test them with Postman.

Testing our app with Postman

Postman is a nifty Chrome app that can be used to test REST APIs. You can get Postman here.

If we send a GET request directly to /secretInfo, we will get a status code of 401, along with an Unauthorized message:

Likewise, sending incorrect user credentials will give us the same response:

Providing the /token endpoint with a valid payload (valid JSON with correct user credentials) will give us a token that expires in two minutes:

We can then use the token by sending another GET request to the /secretInfo endpoint, including the token through the x-jc-token header (we specified this key in the jwtAttributes object):
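If you prefer the command line over Postman, the same requests can be made with curl (the token below is a placeholder; use the one returned by /token):

curl -X POST http://localhost:8080/token \
  -H "Content-Type: application/json" \
  -d '{"name": "john", "password": "john12345"}'

curl http://localhost:8080/secretInfo \
  -H "x-jc-token: <paste-your-token-here>"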

Wrap up

That's it! We have successfully implemented basic token-based authentication on Express using jwt-simple. Equipped with this knowledge, we can now understand how popular authentication packages use JWT under the hood. That makes us more capable of troubleshooting JWT authentication problems, or even contributing to these packages. If you want to clone the files in this mini-tutorial, you can get them from this GitHub repository. If you are interested in learning more about JWT, you can get a free eBook here.


John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379507 2017-03-11T16:00:00Z 2019-02-28T05:28:32Z Basic Generators in JavaScript

I was watching a movie last night when my mind spun off on a different thread and remembered a JavaScript language feature that has existed for some time now, but that I have never had the chance to use. At least, not directly.

We do bleeding-edge JavaScript at the office. That means we have all these new language features at our disposal as early as possible through the use of Babel. We write JavaScript code using the newest language specification (ECMAScript 6/7) and our code gets transpiled into ECMAScript 5. We have been using all the nifty features such as import, async/await, spread/rest operators and destructuring since as early as last year. These are just the new ES6 features that I can think of off the top of my head, maybe because they are the most practical ones.

There is one feature, however, that can be really powerful but that I have not really been able to leverage: generators. Prior to V8 v5.5 and Node v7.6.0, Babel's async/await and other asynchronous libraries used generators under the hood to implement this feature.

But what are generators? According to the venerable MDN page:

A generator is a special type of function that works as a factory for iterators. A function becomes a generator if it contains one or more yield expressions and if it uses the function* syntax.

MDN's definition is clear and straightforward, but let me rephrase it based on what I have understood. Aside from producing iterables, think of a generator as a function that you can play and pause. This characteristic enables it to implement asynchronous programming, and when used with promises, you can come up with all sorts of things, including your own async library if you want to make one for learning purposes.

Let's dig into some basic code examples to solidify our understanding of generators:

function* counter() {
  for (let i = 1; i <= 5; i++) {
    yield i
  }
}

This function was declared using function* and has a yield inside the function body, so this must be a generator. When we invoke it and assign the result to a variable like so, let c = counter(), we get back an iterator object (which is also iterable) that we can use to iterate over the values of i. An iterator object in JavaScript must have a next() method. This method returns an object that contains a value and a done property. Let's see that in action:

/***************************************************
  Using next() to step through the values explicitly
****************************************************/
let c1 = counter();

console.log(c1.next().value);
// 1
console.log(c1.next().value);
// 2
console.log(c1.next().value);
// 3
console.log(c1.next().value);
// 4
console.log(c1.next().value);
// 5

/***************************************************
  Using a for-of loop
****************************************************/
let c2 = counter();

for (const num of c2) {
  console.log(num);
}

// 1
// 2
// 3
// 4
// 5

/***************************************************
  Using the done property explicitly
****************************************************/
let c3 = counter();

let i = c3.next();

while (!i.done) {
  console.log(i.value);
  i = c3.next();
}

// 1
// 2
// 3
// 4
// 5

We went through three different ways of iterating over the iterator that was returned by our counter generator. In the first example, we manually stepped through the iterator by using next(). We know that next() returns an object with a value and a done property, and so we were able to chain .value every time we logged the iteration to the console. This demonstrates one of the concepts that we discussed earlier: we were able to play and pause the generator's execution by using the next() method. Another interesting thing is that it remembers its internal state throughout its iterations.

It works this way: the generator function stops immediately at every yield statement, and passes the value on its right to the object being returned by next(). We used a loop on our example, and by doing so, the loop gets suspended every time it encounters a yield statement.

Another thing worth knowing is that we can alter the generator's internal state from outside the generator by passing in an argument to next():

function* counter (limit) {
  for (let i = 1; i <= limit; i++) {
    let j = yield i;
    if (j) limit = j;
  }
}

/***************************************************
  Without passing a value to next()
****************************************************/
const c1 = counter(2)

console.log(c1.next().value); // 1
console.log(c1.next().value); // 2
console.log(c1.next().value); // undefined

/***************************************************
  Passing a value to next() to alter internal state
****************************************************/
const c2 = counter(2)

console.log(c2.next().value); // 1
console.log(c2.next().value); // 2
console.log(c2.next(5).value); // 3
console.log(c2.next().value); // 4
console.log(c2.next().value); // 5

The example above is yet another contrived modification to our earlier example. This counter generator accepts an argument as the limit to the number of values it can generate. It has the same loop as the above example, except that the control is now dependent on the limit parameter that was passed to it.

Inside the loop body, we declared a variable j that gets assigned the value of the yield expression. This expression is followed by another control structure: an if statement that checks the value of j. The value of j will replace the value of limit if it is truthy.

As I have mentioned prior to showing the examples, we can control the internal state of generators by passing an argument to the next() method. This argument will become the value of yield inside the generator, and as such we can assign it to control its behavior.

This can be seen above, where both examples declare a generator with an initial limit of 2 values. In the first one, we did not pass an argument to next(), and so we were only able to iterate through two values. In the second example, we did the same thing, but we passed in a value of 5 as an argument to the third next() call. This altered the generator's internal limit from two to five values, enabling us to get three more values out of it.



In this post, we have learned the basics of ES6 generators. We went through their basic implementation and usage with some simple examples. We found out that generator functions are declared using the function* keyword and contain at least one yield statement/expression. We also found out that a generator produces an iterator with a next() method. Since this post is getting long, I have decided to split it into two. In my next post, we will explore how to implement basic async/await functionality through the use of generators and promises.

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379508 2017-03-09T16:00:00Z 2019-02-28T05:32:23Z Introducing Cheers Alerts


Cheer Alerts Demo GIF

This week, a friend decided to create his own JavaScript library. It is a small and simple in-browser notification library called 'Cheers Alert'. The library was inspired by Toastr and, as of this writing, depends on jQuery and FontAwesome.

The library is already available on npm. I have submitted a pull request that added Grunt to the project. This enabled the library to be bundled as a standalone browser library through the use of Browserify and other Grunt plugins such as Uglify and mincss. This automation will allow him to easily maintain and develop future versions of the library. Aside from npm, the library can also be installed through Bower.

As this is his first open source package, he will be actively developing this library. It is open for feedback and contributions, so please check the source out on GitHub.

You can try the library out by visiting the demo page.

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379509 2017-03-07T16:00:00Z 2019-02-28T05:36:18Z Basic OOP and Composition in Go

I have been studying the Go programming language for several weeks now and thought about writing a series of posts to share what I have learned so far. I figured that it would be an excellent way to reinforce my understanding of the language. I initially thought about writing a post that discusses concurrency in Go, but it turned out that I am not yet eloquent enough to talk about basic concurrency patterns with goroutines and channels. I decided to set that draft aside and write about something I am more comfortable with at the moment: basic object-oriented patterns and composition in Go.

One of the things I like best about Go is its terseness. It made me realize that being advanced does not necessarily mean being complex. There are only a few reserved words, and just going through some of the basic data structures will enable you to read and comprehend most Go projects on GitHub. In fact, Go is not an object-oriented language in the purest sense. According to the Golang FAQ:

Although Go has types and methods and allows an object-oriented style of programming, there is no type hierarchy. The concept of “interface” in Go provides a different approach that we believe is easy to use and in some ways more general. There are also ways to embed types in other types to provide something analogous—but not identical—to subclassing. Moreover, methods in Go are more general than in C++ or Java: they can be defined for any sort of data, even built-in types such as plain, “unboxed” integers. They are not restricted to structs (classes).

If Go is not an object-oriented language and everyone is going crazy about functional programming in the web development world, then why bother learning OOP patterns in Go? Well, OOP is a widely taught paradigm in CS and IT curricula around the world. If used correctly, I still believe that object-oriented patterns have their place in modern software development.

Using structs

Go does not have classes like a typical object-oriented language. However, you can mimic a class by using a struct and then attaching functions to it. The types defined inside the struct act as the member variables, and the functions serve as the methods:

package main

import "fmt"

type person struct {
  name string
  age  int
}

func (p person) talk() {
  fmt.Printf("Hi, my name is %s and I am %d years old.\n", p.name, p.age)
}

func main() {
  p1 := person{"John Crisostomo", 25}
  p1.talk()
  // prints: "Hi, my name is John Crisostomo and I am 25 years old."
}

Run this code

In our example above, we declared a struct type called person with two fields: name and age. In Go, structs are just that: a typed collection of fields that is useful for grouping related data together.

After the struct declaration, we declared a function called talk. The parenthesis after the keyword func specifies the receiver of the function. By using p of type person as our receiver, every variable of type person will now have a talk method attached to it.

We saw that in action in our main function, where we declared and assigned p1 to be of type person and then invoked the talk method.

Overriding methods and method promotion

A struct is a type; hence, it can be embedded inside another struct. If the embedded struct is the receiver of a function, that function gets promoted and can be accessed directly through the outer struct:

package main

import (
	"fmt"
)

type creature struct {}

func (c creature) walk() {
  fmt.Println("The creature is walking.")
}

type human struct {
  creature
}

func main() {
  h := human{
    creature{},
  }
  h.walk()
  // prints: "The creature is walking."
}

Run this code

We can override this function by attaching a similarly named function to our human struct:

package main

import (
	"fmt"
)

type creature struct {}

func (c creature) walk() {
  fmt.Println("The creature is walking.")
}

type human struct {
  creature
}

func (h human) walk() {
  fmt.Println("The human is walking.")
}

func main() {
  h := human{
    creature{},
  }
  h.walk()
  // prints: "The human is walking."
  h.creature.walk()
  // prints: "The creature is walking."
}

Run this code

As we can see in our contrived example, the promoted method can easily be overridden, and the overridden function of the embedded struct is still accessible.

Interfaces and Polymorphism

Interfaces in Go are used to define a type's behavior. An interface is a collection of methods that a particular type can perform. Here's the simplest explanation I can muster: if a struct has all of the methods in an interface, then it can be said that the struct implements that interface. This is a concept that can be easily grasped through code, so let us make use of our previous example to demonstrate it:

package main

import (
	"fmt"
)

type lifeForm interface {
   walk()
}

type creature struct {}

func (c creature) walk() {
  fmt.Println("The creature is walking.")
}

type human struct {
  creature
}

func (h human) walk() {
  fmt.Println("The human is walking.")
}

func performAction(lf lifeForm) {
  lf.walk()
}

func main() {
  c := creature{}
  h := human{
    creature{},
  }

  performAction(c)
  // prints: "The creature is walking."
  performAction(h)
  // prints: "The human is walking."
}

Run this code

In this modified example, we declared an interface called lifeForm which has a walk method. Just like what we discussed above, it can be said that both creature and human implement the lifeForm interface because they both have a walk method attached to them.

We also declared a new function called performAction, which takes a parameter of type lifeForm. Since both c and h implement lifeForm, they can both be passed as arguments to performAction. The correct walk function will be invoked accordingly.

Wrap up

There is so much more to object-oriented programming than what we have covered here, but I hope it is enough to get you started in implementing class-like behavior with Golang's structs and interfaces. In my next post, I will talk about goroutines, channels and some basic concurrency patterns in Go. If there's something you would like to add to what I have covered here, please feel free to leave a comment.

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379510 2017-02-25T16:00:00Z 2019-02-28T05:39:31Z Cebu Open Hackathon 2017

There will be an upcoming hackathon next month, brought to you by Snapzio Rapid Software Solutions in collaboration with iiOffice Cebu. Whether you have an awesome app idea or just want to spend the weekend prototyping with a new stack, this event is perfect for all developers who want to showcase their software craftsmanship. It will also be an awesome opportunity to meet other developers and discuss new trends in the fast-paced world of software development.

The event will take place at iiOffice Cebu (Arlinda V. Paras Bldg., Don Gil Garcia St., Cebu City, Philippines 6000, near the Cebu Provincial Capitol) on March 24 - 25, 2017 (07:00 PM - 07:00 PM). The hackathon is open to everyone: freelancers, professional developers and even students. Teams can have up to three members, with a registration fee of Php 100.00 per team member/participant. The deadline for registration is March 20, 2017.

Interested participants can register by filling out this form. More information can be found on the Cebu Open Hackathon 2017 Facebook page.


John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379511 2017-02-10T16:00:00Z 2019-02-28T05:42:13Z Enhancing my self-hosted blog with Cloudflare

This post is not sponsored by Cloudflare; it is an update on my self-hosting journey with the Raspberry Pi.

I am happy with the result of the script that I shared in my last post, because I no longer have to manually reboot the Pi every time the Internet connection goes down. However, it is still suboptimal; if the Internet connection goes down for an extended period of time, the blog goes down with it. Not only is that bad for would-be readers, it was also frustrating on my end. The thought of moving this blog to a cheap cloud instance crossed my mind during the first few days, but I had to think of something more pragmatic. That was when I decided to check out Cloudflare. When I found out that they offer a free plan with more features than I would ever need for this blog, I was sold.

Cloudflare is a security company that made its name by stopping DDoS attacks through its Content Delivery Network (CDN)-like feature. It can help your site become more performant by caching your static content in their data centers around the world. This enables your site to load faster and handle more concurrency by serving cached content before hitting your server. Cloudflare offers this and more for free, including three page rules, analytics, free SSL through their network, and even security measures like HTTP Strict Transport Security (HSTS). All of these can be easily configured in their nice-looking dashboard. If you want to read more about the company's history, here is a good article about their humble beginnings.

Getting a Cloudflare account is straightforward. A walkthrough video of the initial setup process is available on their landing page. In a nutshell, the process only has three steps:

  • Signing up with your email address and password
  • Adding your domain
  • Pointing your domain's nameservers to Cloudflare's own nameservers

After going through those steps quickly, you will be presented with a modern, easy to use admin interface:
Cloudflare's dashboard

It will be impossible to discuss everything Cloudflare has to offer in a single post, so I will just write about the tweaks that I made to suit my current self-hosted Raspberry Pi setup.

Crypto

I obtained my domain's SSL certificate through Let's Encrypt, a trusted certificate authority that issues certificates for free. Since I have my own certificate configured on NGINX, I do not need to use Cloudflare's free SSL. I just selected Full (Strict) mode under SSL and enabled HSTS, Opportunistic Encryption and Automatic HTTPS Rewrites.

Speed

I enabled Auto Minify for both JavaScript and CSS to optimize load times and save on bandwidth. I decided against minifying the HTML to preserve the blog's markup, which in my opinion is important for search engine optimization. I also enabled Accelerated Mobile Links support for a better mobile reading experience. They also have a beta feature called Rocket Loader™ (it improves the load time of pages with JavaScript); it is off by default, but I decided to give it a try.

Caching

This is the feature that I needed the most. I clicked on this menu before I even explored the other settings above. I made sure Always Online™ is on, and made some minor adjustments with the Browser Cache Expiration.

Page Rules

Cloudflare gives you three page rules for free, and you can subscribe should you need more. Here's how I made use of my free page rules:

Cloudflare's Page Rules settings


Dynamic DNS Configuration

My blog's DNS records are now being handled by Cloudflare, so I need to make sure that they are updated automatically whenever my ISP gives me a new IP address.

The easiest way to achieve this is to install ddclient from Raspbian's default repository, along with the Perl dependencies:

sudo apt-get install ddclient libjson-any-perl

Unfortunately, this version of ddclient does not support Cloudflare's dynamic DNS API. We need to download the current version here and overwrite the executable that was installed by the previous command:

$ wget http://downloads.sourceforge.net/project/ddclient/ddclient/ddclient-3.8.3.tar.bz2

$ tar -jxvf ddclient-3.8.3.tar.bz2

$ sudo cp -f ddclient-3.8.3/ddclient /usr/sbin/ddclient

We installed the old version first to benefit from the daemon that comes with it. This daemon keeps ddclient running in the background and spawns it automatically after each reboot.

This new version of ddclient looks for its configuration file in a different directory, so we need to create that directory and move our old configuration file there:

$ sudo mkdir /etc/ddclient
$ sudo mv /etc/ddclient.conf /etc/ddclient

Here's my ddclient.conf for reference:

# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf

protocol=cloudflare
zone=johncrisostomo.com
use=web
server=www.cloudflare.com
login=*Enter your cloudflare email address here*
password=*Enter your API key here*
blog.johncrisostomo.com

We can now restart ddclient and check its status to make sure that everything is working as expected:

$ sudo service ddclient restart
$ sudo service ddclient status -l

The last command should give you the current status of the daemon along with the latest event logs. Check the event logs for any error messages or warnings, and if everything turned out to be okay, you should see something similar to this: 

SUCCESS: blog.johncrisostomo.com -- Updated Successfully to xxx.xxx.xxx.xxx.



So far this setup works well and I am happy with the blog's performance. It is a shame that I did not gather data before Cloudflare so I could objectively measure the performance boost I am getting out of it. However, the blog's initial loading time has become noticeably faster, at least on my end. I guess we will have to see in the next couple of days.

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379512 2017-02-09T16:00:00Z 2019-02-28T05:44:42Z Troubleshooting my Raspberry Pi's Wireless Issue

It has been almost a week since I decided to self-host my Ghost blog. It was a fun experience and, most importantly, I learned a lot of new things that I would not have otherwise known. On the less technical side, it inspired me to write more about my learning journey, because not only does writing solidify what I already know, it also drives me to learn more.

There is a little problem though. My Internet connection is flaky and it causes my blog to be sporadically down throughout the day. This is not intended to be a for-profit blog; however, seeing people share some of my posts while the blog was down was frustrating. I just had to do something about it. I observed the Pi's behavior by writing several Bash scripts and cron jobs to make sure these events were logged. Sifting through the logs after work, I found out that aside from the ISP problem, there was another odd phenomenon happening: whenever my home router loses its Internet connection, the Raspberry Pi loses its default gateway, and the problem persists even after rebooting the router.

My initial attempts to fix this issue involved messing with the resolv.conf and /etc/network/interfaces configuration files. I tried everything from manual, dhcp, and even static configurations. Nothing really fixed the issue, and the Pi kept losing its default gateway route whenever the Internet connection went down. I finally solved the problem by writing a small Bash script:

#!/bin/bash

ping -c1 google.com > /dev/null

if [ $? != 0 ]
then
  echo `date` "No network connection, restarting wlan0" >> /home/uplogs.txt
  /sbin/ifdown 'wlan0'
  sleep 5
  /sbin/ifup --force 'wlan0'
else
  echo `date` "Internet seems to be up" >> /home/uplogs.txt
fi 

The script pings google.com and then checks the exit code. If the ping exited with an error, the Pi restarts the wireless LAN interface. It also logs all these events so that I can check how reliable my Internet connection was throughout the day. It was a quick and dirty fix. Nothing fancy, but it works.
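If you want to run something like this automatically, a root crontab entry along these lines would do the trick every five minutes (the script path below is just an example location):

*/5 * * * * /home/pi/check-wlan.sh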

John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379514 2017-02-07T16:00:00Z 2019-02-28T05:58:17Z Getting started with tmux

I have been using tmux for several years now and it has become a central part of my workflow as a software developer. Since I am constantly writing code, executing shell commands or accessing server instances via SSH, most of my work happens in the terminal. I am always on the lookout for cool new tools that could potentially improve my workflow, so I checked tmux out. I knew I just had to get some hands-on experience with it to find out where it fits in my current flow.

tmux is described as a terminal multiplexer. When I was first starting out, it was such a big word that it added to the appeal. I thought it was leet, especially when I was still new to it. During my first days, I was using it solely for the sake of using it.

These days, it is so ingrained into my routine that the first thing I do upon arriving at work is open a terminal window in full screen and set up the tmux windows that I will be using throughout the day.

Before I start with the basic commands though, I have to clear up a misconception that some people have about it. No, it does not manage your SSH connections. I need to stress this because a former colleague once told me it was an ancient tool and dismissed it as a trend among hipster developers. He said he was better off using PAC Manager for all his SSH needs. These tools are apples and oranges; they complement each other. I also use PAC Manager, because there is no way I will remember all the usernames and host addresses I need to work with throughout the day.

To give a simple description of what tmux is, think of it as a server that serves terminal sessions. That allows you to attach and detach from it at will, and also gives other people a chance to attach to your existing tmux session. That is the main feature that makes it so awesome for everyone who works with remote machines. Let us say that you have a VPS instance somewhere and you need to do some maintenance work. You SSH into your server, tell tmux to create a new terminal session, and proceed with your work. After fifteen minutes or so, you remember that you have an important meeting to attend. The problem is that you are not quite done with your work yet. As a contrived example, perhaps the server is doing a vulnerability scan or building something from source. Since you are attached to a tmux session, you can just kill your SSH connection; in tmux terms, this is referred to as detaching. After the meeting, you can attach back to your session and you will be presented with exactly the same screen as when you left. This lets you see the scan results or the build progress without digging into the logs or trying to remember how it was doing before you left.

Another benefit of using tmux is that you will use your mouse less often once you get the hang of it. If you spend most of the day coding, reaching for the mouse to switch files or scroll through your code breaks the cadence. These are small personal idiosyncrasies; however, if you are plagued by the same quirk, you might want to learn Vim as well.

The good thing is that you only need to know a few commands to use tmux effectively. There are a whole lot of features and customization options available, but you can learn them along the way. If you have used Emacs before, these commands will make you feel at home, as the key combinations are somewhat similar. A short example workflow follows the list of commands below.

Outside a tmux session

Creating a new session

tmux new -s [session name]

Listing sessions

tmux ls

Attaching to an existing session

tmux attach -t [session name]

Inside a tmux session

Splitting the screen vertically

Ctrl - b % 

Splitting the screen horizontally

Ctrl - b "

Pane Navigation

Ctrl - b arrow keys

Maximize a pane (from splitting)

Ctrl - b z

Closing a pane (from splitting)

Ctrl - d

Opening a new window

Ctrl - b c

Renaming a window

Ctrl - b ,

Window Navigation

Ctrl - b n

or

Ctrl - b p

Closing a window

Ctrl - b &

Detaching from a session

Ctrl - b d
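
To tie these together, here is roughly what a typical remote session looks like. This is just an illustrative sketch; the server address and session name are made up:

ssh user@my-vps                  # hypothetical server
tmux new -s maintenance          # start a named session and kick off the long task
# ... press Ctrl - b d to detach, close the SSH connection, go to the meeting
ssh user@my-vps
tmux ls                          # check which sessions are still alive
tmux attach -t maintenance       # resume exactly where you left off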

I hope I have covered enough of the basics to get you started. Happy hacking!


]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379520 2017-02-05T16:00:00Z 2019-02-28T06:18:00Z Weekend Project: Self-hosted blog & Docker in a Raspberry Pi

I received a Raspberry Pi 3 Model B last Christmas, but I did not know what to do with it, at least not yet. The problem has little to do with the Pi and more to do with the fact that most of the projects I take on can easily be solved with an Arduino.

When I stumbled upon this series of posts by the Docker Captain Alex Ellis, I figured it was the perfect opportunity to learn a tool I have always wanted to use. I know virtual machines well, but I had a hard time understanding how to make Docker fit into my workflow. The idea of containers that I cannot simply SSH into (I now know that you can exec bash to peek inside them, but that's not the point) just seemed absurd when I was first trying to use it. To be honest, it felt so complex and cumbersome that I dismissed it as something that was not worth it. Well, it turned out that I did not understand the philosophy behind it. I would like to talk about that and discuss images and containers in depth, but I decided it will be better to have a dedicated post for that. After getting my hands dirty with Docker last weekend, I can say that I have attained a working proficiency with it and can comfortably use it for my projects from here on.

After three days, I finally got it to work. The blog that you are reading right now is hosted on a Raspberry Pi with Docker Engine installed. I have two Docker containers running: the Ghost blog and the NGINX server that handles the caching. It took a lot of trial and error; I had no prior knowledge of NGINX when I embarked on this weekend project, and the Pi's limited hardware made building images painstakingly slow. Building SQLite3 from source for the ARM architecture was excruciating.

I will be sharing my Dockerfiles and some configuration below. I won't go into more detail right now, but I am hoping to have the time to do so in my next post. Some of these are directly forked/copied from [Alex](http://blog.alexellis.io)'s GitHub repositories; I could have pulled the images from Docker Hub or cloned the Dockerfiles, but I decided to train my muscle memory by typing the Dockerfiles manually. I still have a lot to learn about NGINX and Docker in particular, but I consider this blog a milestone.

Ghost Dockerfile

FROM alexellis2/node4.x-arm:latest

USER root
WORKDIR /var/www/
RUN mkdir -p ghost
RUN apt-get update && \
    apt-get -qy install wget unzip && \
    wget https://github.com/TryGhost/Ghost/releases/download/0.11.4/Ghost-0.11.4.zip && \
    unzip Ghost-*.zip -d ghost && \
    apt-get -y remove wget unzip && \
    rm -rf /var/lib/apt/lists/*

RUN useradd ghost -m -G www-data -s /bin/bash
RUN chown ghost:www-data .
RUN chown ghost:www-data ghost
RUN chown ghost:www-data -R ghost/*
RUN npm install -g pm2

USER ghost
WORKDIR /var/www/ghost
RUN /bin/bash -c "time (npm install sqlite3)"
RUN npm install

EXPOSE 2368
EXPOSE 2369
RUN ls && pwd

ENV NODE_ENV production

RUN sed -e s/127.0.0.1/0.0.0.0/g ./config.example.js > ./config.js
CMD ["pm2", "start", "index.js", "--name", "blog", "--no-daemon"]

Blog Dockerfile

FROM johncrisostomo/ghost-on-docker-arm:0.11.4

ADD Vapor /var/www/ghost/content/themes/Vapor

RUN sed -i s/my-ghost-blog.com/blog.johncrisostomo.com/g config.js

NGINX Dockerfile

FROM resin/rpi-raspbian:latest

RUN apt-get update && apt-get install -qy nginx

WORKDIR /etc/nginx/

RUN rm /var/www/html/index.nginx-debian.html && \
    rm sites-available/default && \
    rm sites-enabled/default && \
    rm nginx.conf

COPY nginx.conf /etc/nginx/

COPY johncrisostomo.com.conf conf.d/

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

johncrisostomo.com.conf

server {
  listen 80;
  server_name blog.johncrisostomo.com;
  access_log /var/log/nginx/blog.access.log;
  error_log /var/log/nginx/blog.error.log;

  location / {
    proxy_cache              blog_cache;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_ignore_headers     Cache-Control;
    proxy_cache_valid any    10m;
    proxy_cache_use_stale    error timeout http_500 http_502 http_503 http_504;

    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  Host      $http_host;
    proxy_pass        http://blog:2368;
  }
}
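
Note that the proxy_cache blog_cache directive above assumes a cache zone named blog_cache is declared in the http block of nginx.conf, which I am not reproducing here. A minimal declaration would look roughly like this (the path and sizes are placeholder values):

proxy_cache_path /var/cache/nginx/blog levels=1:2 keys_zone=blog_cache:10m max_size=100m inactive=60m;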

docker-compose.yml

version: "2.0"
services:
  nginx:
    ports:
      - "80:80"
    build: "./nginx/"
    restart: always

  blog:
    ports:
      - "2368:2368"
    build: "./blog.johncrisostomo.com/"
    volumes:
      - ghost_apps:/var/www/ghost/content/apps
      - ghost_data:/var/www/ghost/content/data
      - ghost_images:/var/www/ghost/content/images
      - ghost_themes:/var/www/ghost/content/themes
    restart: always

volumes:
   ghost_apps:
      driver: local
   ghost_data:
      driver: local
   ghost_images:
      driver: local
   ghost_themes:
      driver: local
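
With the Dockerfiles and the compose file in place, building the images and starting both containers on the Pi boils down to two commands (assuming docker-compose is installed alongside Docker Engine):

docker-compose build
docker-compose up -d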

I have written several follow-up posts about this project. Feel free to check them out, as most of them cover troubleshooting issues and optimizations built on top of this setup.


]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379521 2016-08-18T16:00:00Z 2019-02-28T06:20:00Z Cebu Mechanical Keyboard Enthusiasts Meetup 8/6


I have been busy these past few weeks since I started working on a server monitoring application that will be released later this year, so I have not had much time to blog and share the new things that I have learned.

Despite the busy schedule, I managed to attend my first Mechanical Keyboard meetup at the Coffee Factory. Here are some of the awesome keyboards at the event.


]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379522 2016-06-07T16:00:00Z 2019-02-28T06:22:50Z Using Gagarin’s DDP Client to test Meteor methods, publications and subscriptions

In my previous post, we briefly went over unit testing in Meteor and Mantra using Sinon's spy and stub. We discussed the difference between the two functions and determined when to use them in our unit tests.

Today, we are going to go through basic integration testing with methods and publications/subscriptions using Gagarin's DDP client. Gagarin is the Mantra spec's recommended testing framework for integration testing. It is versatile and can do a lot more than what we are going to cover here, such as using chromedriver and Selenium for end-to-end testing.

About Gagarin

According to the project’s documentation,

Gagarin is a mocha based testing framework designed to be used with Meteor. It can spawn multiple instances of your meteor application and run the tests by executing commands on both server and client in realtime. In other words, Gagarin allows you to automate everything you could achieve manually with meteor shell, browser console and a lot of free time. There’s no magic. It just works.

Gagarin is based on Laika, another testing framework created by Arunoda. According to the documentation, it can be thought of as Laika 2.0, though it is not backward compatible. The main differences between Gagarin, Laika and Velocity can also be found on the documentation above.

Installation

We can simply install Gagarin by running

npm install -g gagarin

Once we have written some tests, we can just run this command at the root of our app directory:

gagarin

By default, it will look for files that are in the tests/gagarin/ directory. It will build our app inside .gagarin/local/ along with the database that it will use for the duration of the test, which can be found at .gagarin/db/.

Now that we have a basic understanding of how to install and run Gagarin, let's proceed and look at the code snippets that we are going to test.

Meteor code snippets (for testing)

In order for us to understand what we are testing, the code snippets will be included here so we can easily reference the functions and how they are being tested. I have simplified these code snippets so that we can focus more on testing.

Collection

const Categories = new Mongo.Collection('categories');

Publications

Meteor.publish('categoriesList', () => {
  return Categories.find();
});

Meteor.publish('categoriesOwnedBy', (owner) => {
  check(owner, String);

  return Categories.find({owner: owner});
});

Methods

Meteor.methods({
  'categoriesAdd'(data) {
    check(data, Object);

    Categories.insert({
      name: data.name,
      owner: data.owner,
      createdAt: new Date(),
    });
  },
  'categoriesUpdate'(data) {
    check(data, Object);

    Categories.update({_id: data._id}, {$set: {
      name: data.name,
      owner: data.owner,
      modifiedAt: new Date(),
    }});
  },
});

Writing Gagarin Tests

Now that we have seen the code that we are going to test, we can start writing basic tests in a single JavaScript file inside tests/gagarin/. Because Gagarin is based on Mocha, it has the same describe/it structure. Chai's expect is also exposed for writing more semantic assertions.

Testing the categoriesAdd method

The test we are going to do first is to check whether or not we can add something to the categories collection.

describe('Categories', function() {
  var app = meteor({flavor: "fiber"});
  var client = ddp(app, {flavor: "fiber"});

  it('should be able to add', function() {
    client.call('categoriesAdd', [{name: 'First category'}]);
    client.sleep(200);
    client.subscribe('categoriesList');
    var categories = client.collection("categories");
    expect(Object.keys(categories).length).to.equal(1);
  });
});

We are defining the initial describe block that we are going to use for this example. Gagarin gives us two useful global functions that are essential for running tests: meteor and ddp.

meteor is used to spawn a new Meteor instance that we have assigned to the app variable. Meteor uses fibers by default, so we need to specify it as the flavor. ddp allows a client to connect to the Meteor instance that we have just created by passing the reference of the instance and the flavor as its arguments.

Since we now have our Meteor app and our client configured, we are ready to proceed with our first test case: making sure that we can successfully add a new category.

Inside our it block, we are calling the Meteor method categoriesAdd. Gagarin provides our client with a handy call function that works exactly the same way as Meteor.call. The only difference is that the arguments need to be inside an array, regardless of their number.

We then use the sleep function to add a little delay so that we can make sure that the new document comes to the client. We are subscribing to our categoriesList publication through the handy subscribe function of our client. Just like the call function, this is similar to Meteor.subscribe, which makes it very straightforward.

After subscribing to our publication, we now check if the document has been inserted by our Meteor method to the collection. We do that by calling the collection function of our client, passing the name of the MongoDB collection as an argument. It returns an object that looks like this:

{ Hpu6Z4h7ZFtC6Q77m:
   { _id: 'Hpu6Z4h7ZFtC6Q77m',
     name: 'First category',
     owner: null,
     modifiedAt: 2016-06-07T08:29:06.026Z,
     createdAt: 2016-06-07T08:29:06.026Z } }

It looks similar to something that we would get if we query our collection using find, aside from the fact that instead of getting back an array or a cursor, we are getting an object which has the _id field as a key.

We then use Chai's expect function to do a simple assertion, and that completes our first test. Object.keys has been used on the object that was returned by the collection function, so we can just expect the resulting array to have a length of 1. This test assures us that the client can call our method and receive the document through our publication.

Testing the categoriesUpdate method

Now that we have a basic test that checks whether we can insert and retrieve documents, the next thing we want to do is check whether we can update a certain category in our collection. The process is similar to what we did in the last section (this still goes inside the same describe block):

it('should be able to update', function() {
  client.subscribe('categoriesList');
  var categories = client.collection("categories");
  var id = Object.keys(categories)[0];
  client.call('categoriesUpdate', [{_id: id, name: 'updated category'}]);
  client.sleep(200);

  categories = client.collection("categories");
  expect(categories[id].name).to.equal('updated category');
});

The only thing that is new here is that we are storing the id of the category that we want to update so we can use it when we call categoriesUpdate. We can then check if the name has been updated by using expect.

Testing categoriesOwnedBy publication

The next thing that we will test is the categoriesOwnedBy publication. Since we did not use the owner field in our previous examples, we will put this test in a separate describe block. That will allow us to spawn new Meteor and database instances that have nothing to do with the previous ones.

describe('categoriesOwnedBy publication', function() {
  var app = meteor({flavor: "fiber"});
  var client = ddp(app, {flavor: "fiber"});

  it('should only publish a specific users category', function() {
    app.execute(function() {
      var categories = [
        { name: 'Category 1', owner: 'John' },
        { name: 'Category 2', owner: 'Jessica' },
        { name: 'Category 3', owner: 'John' }
      ];
      Meteor.call('categoriesAdd', categories[0]);
      Meteor.call('categoriesAdd', categories[1]);
      Meteor.call('categoriesAdd', categories[2]);
    });

    client.subscribe('categoriesOwnedBy', ['John']);

    var johnsCategories = client.collection('categories');

    expect(Object.keys(johnsCategories).length).to.equal(2);
  });
});

This looks similar to our two previous examples, but this time I am using the execute function of our Meteor instance. It accepts a callback function as an argument, and the contents of that function will be executed in the server context. Notice how we have access to Meteor.call inside this function?

We then go back to the client context and subscribe to our categoriesOwnedBy publication, passing 'John' as our argument. After fetching the contents of the collection, we check whether we get the expected number of documents published for that owner.

Running our test

If we run gagarin at the root folder of our application, it will build the app, spawn the Meteor instances, execute both describe blocks, and print the usual Mocha-style report of our passing tests.

Conclusion

Using the examples above, we have seen how to create simple integration tests with Gagarin on Meteor. These test cases might seem contrived, but the idea here is to get an overview of how to use Gagarin's DDP client to perform basic integration tests that deal with Meteor methods, publications and subscriptions.


]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379526 2016-06-05T16:00:00Z 2019-02-28T06:43:07Z Using Sinon’s Spy and Stub in Mantra (Unit Testing)

With the release of Meteor 1.3, unit testing has never been easier in Meteor. Our team recently decided to adopt Arunoda's Mantra spec for developing Meteor applications. It is an application architecture that allows for a more modular approach, with a clear separation of concerns between the client and the server side. It has been a rough month for us since the spec is new and there are limited learning resources available. There was also a lot to learn, since adopting the Mantra spec meant we had to learn React for the presentation logic instead of sticking with Blaze.


Unit testing is something that can be easily accomplished with the Mantra spec. Since it is modular and clearly separates the presentation logic from the business logic through containers, components and actions, everything can be unit tested. Meteor 1.3 also introduced native NPM support, which means that familiar tools such as Mocha, Chai and Sinon can be imported in a straightforward manner. This is going to be my first blog post, so I am only going to explain Sinon's spy and stub methods as they were used in Arunoda's mantra-sample-blog application.

Spies

People who are new to unit testing tools in JavaScript (myself included) are often initially confused about the difference between Sinon's spy and stub. It turns out that a spy is the most basic function you can use in Sinon, and that stubs and mocks are built on top of it.

According to the Sinon.JS documentation, a spy is

a function that records arguments, return value, the value of this and exception thrown (if any) for all its calls. A test spy can be an anonymous function or it can wrap an existing function

What this means is that a spy can be used as a replacement for an anonymous callback function, or you can wrap an existing function in it so that you can spy on its behavior. For example, if you have a function that accepts another function as an argument to be called back later under a certain condition, you can pass Sinon's spy() function as that callback. You can then assert, or use Chai's expect(), to see whether that function was called by checking the spy's calledOnce property. Additionally, you can check whether the correct arguments were passed to it by using its calledWith() method. You can check the different spy functions that are available here.
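
Here is a minimal, self-contained sketch of that idea outside of Mantra. The processItems function is made up purely for illustration:

import { spy } from 'sinon';
import { expect } from 'chai';

// A hypothetical function that invokes the callback once per item.
function processItems(items, callback) {
  items.forEach(item => callback(item));
}

const callback = spy();
processItems(['a'], callback);

expect(callback.calledOnce).to.equal(true);      // the spy recorded exactly one call
expect(callback.calledWith('a')).to.equal(true); // and it received the right argument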

Now let’s look at how spies are used in the Mantra sample blog application. We are going to use the tests that were written for the posts action. Let’s check this code snippet out:

it('should call Meteor.call to save the post', () => {
  const Meteor = {uuid: () => 'id', call: spy()};
  const LocalState = {set: spy()};
  const FlowRouter = {go: spy()};
  
  actions.create({LocalState, Meteor, FlowRouter}, 't', 'c');
  
  const methodArgs = Meteor.call.args[0];
  
  expect(methodArgs.slice(0, 4)).to.deep.equal([
    'posts.create', 'id', 't', 'c'
  ]);
  expect(methodArgs[4]).to.be.a('function');
});

In the test block above, we are testing whether our action will correctly invoke a Meteor Method on the server through Meteor.call(). The first three lines create local objects for Meteor, LocalState and FlowRouter to be used exclusively in this test case. In the Mantra spec, these are exported as the application context inside client/configs/context.js.

Actions in Mantra receive this application context as the first parameter. We are creating local objects in lieu of the real app context and by doing so, we can trick the action into thinking that it is receiving its first expected argument (which is the app context).

Next, we spy on how these objects are used inside the action that we are testing. See how the Meteor object that we have passed contains a call property, which is a Sinon spy() function? When the action gets invoked on Line 6, it will go ahead and invoke Meteor.call() inside it. The Meteor object that it receives is something that we have just created for spying purposes, so we have access to the arguments that were passed to it when it was invoked (Lines 7 to 12). We use the arguments that we obtained through spying to verify that our function is invoking Meteor.call() with the correct arguments.

This is what the create action looks like, for reference:

  create({Meteor, LocalState, FlowRouter}, title, content) {
    if (!title || !content) {
      return LocalState.set('SAVING_ERROR', 'Title & Content are required!');
    }

    LocalState.set('SAVING_ERROR', null);

    const id = Meteor.uuid();
    Meteor.call('posts.create', id, title, content, (err) => {
      if (err) {
        return LocalState.set('SAVING_ERROR', err.message);
      }
    });
    FlowRouter.go(`/post/${id}`);
  }

Stubs

Now that we are done with spies and have a basic understanding of how they work, let's move on to stubs. Stubs are just like spies, and in fact they have the entire spy() API inside them, but they can do more than just observe a function's behavior. According to the Sinon API, stubs are:

functions (spies) with pre-programmed behavior. They support the full test spy API in addition to methods which can be used to alter the stub’s behavior.

and they should be used when you want to:

Control a method’s behavior from a test to force the code down a specific path. Examples include forcing a method to throw an error in order to test error handling.

or

When you want to prevent a specific method from being called directly (possibly because it triggers undesired behavior, such as a XMLHttpRequest or similar).

Okay, so where spies can only observe how a function is called, how many times it is called, and which arguments were sent with it, stubs can do all of that, plus you can programmatically control their behavior.
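
As a quick sketch of what that control looks like, here is a made-up fetchPost function replaced entirely by a stub; none of this comes from the sample blog app:

import { stub } from 'sinon';
import { expect } from 'chai';

// Pretend this is a function we do not want to call for real during the test.
const fetchPost = stub();
fetchPost.withArgs('good-id').returns({ title: 'Hello' });
fetchPost.withArgs('bad-id').throws(new Error('Post not found'));

expect(fetchPost('good-id').title).to.equal('Hello');         // pre-programmed return value
expect(() => fetchPost('bad-id')).to.throw('Post not found'); // forced error path
expect(fetchPost.calledTwice).to.equal(true);                 // the full spy API still works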

Let’s check how it is used on the post action test inside the sample blog application:


it('should set SAVING_ERROR if the method call fails', () => {
  const Meteor = {uuid: () => 'id', call: stub()};
  const LocalState = {set: spy()};
  const FlowRouter = {go: spy()};
  const err = {message: 'Oops'};
  Meteor.call.callsArgWith(4, err);

  actions.create({LocalState, Meteor, FlowRouter}, 't', 'c');
  expect(LocalState.set.args[1]).to.deep.equal(['SAVING_ERROR', err.message]);
});

This particular test checks whether our action will set an error message if something goes wrong with the Meteor Method call. Just like in the spy example, we are setting up local objects to be used as the context for our action function.

In Mantra, the LocalState is a Meteor reactive-dict data structure (a reactive dictionary) which is used to handle the client-side state of the app; in practice, it is mostly used to store temporary error messages. We are creating a LocalState object here to mimic the app context's LocalState. We are setting its set property to a spy function, so we can later see whether our action sets the appropriate error message by checking the arguments that were passed to it.

Notice that this time, we are using a stub() instead of a spy() for our local Meteor object. The reason for this is that we are no longer just observing how it is going to be called, but we are also forcing it to respond in a specific way.

We are checking our action's behavior once the call to a remote Meteor method returns an error, and whether that error is stored in the LocalState accordingly. In order to do that, we need to reproduce that behavior, or make the call() function in our local Meteor object return an error. That is something a spy() will not be able to do, since it can only observe. For this scenario, we will use the stub's callsArgWith()* function to set our desired behavior (Line 6).

We will give callsArgWith() two arguments: 4 and the err object that we have defined on Line 5. This function will make our stub invoke the argument at index 4, passing err as an argument to whatever function is at that position. If you look at our create action above, Meteor.call() is invoked with five arguments, and the last one, at index four, is a callback function:

Meteor.call('posts.create', id, title, content, (err) => {
  if (err) {
    return LocalState.set('SAVING_ERROR', err.message);
  }
});

We have to remember that the Meteor.call() being invoked here is the local object that we created and passed explicitly into our create action for testing purposes. As such, this is the stub in action, and it doesn't know that its last argument is going to be a callback function, so we have to use callsArgWith() with the err object. Inside this callback, the create action will then store the error message in the LocalState object that we passed in. Since the set() function of that LocalState object is a spy, we can conclude our test by checking whether the arguments that were passed to this spy function match the error message that we are expecting (Line 9).

This wraps up our discussion of how Sinon's spy and stub functions are used in Mantra unit testing. As a recap, a spy just observes a certain function, or can take the place of an anonymous callback function so we can observe its behavior. A stub does more than that by allowing us to pre-program a function's behavior. If I have provided any wrong information, please feel free to correct me in the comments. :)

*The callsArg and yields family of methods have been removed as of Sinon 1.8. They were replaced with the onCall API.

]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379532 2009-11-13T16:00:00Z 2019-02-28T06:45:02Z How To: Play MP3 and other codecs on Moblin 2.1

Moblin (short for Mobile Linux) is a new Linux distribution designed by Intel to support multiple platforms and usage models, ranging from netbooks to Mobile Internet Devices (MIDs) to various embedded usage models such as in-vehicle infotainment systems.

Moblin 2.1 was released recently, and you can check out the screenshots here or watch the intro video here for a quick look at what the Moblin 2.1 Netbook release looks like. You can check for tested netbook models here. The full release note and download link can be found here.

Looks promising, but the problem is that, as with any other Linux distribution, it does not play MP3 and other proprietary codecs out of the box for legal reasons. It only plays Ogg Vorbis audio and Ogg Theora video upon installation, and the GStreamer packages needed to play MP3 and other codecs are not available from Moblin's repository or from the Moblin Garage. So we have to compile these packages from source.

DISCLAIMER: Try this at your own risk.

Step 1: Download the source code here. We need the following source tarballs:

gst-ffmpeg-0.10.9.tar.bz2
gst-plugins-bad-0.10.16.tar.bz2
gst-plugins-base-0.10.25.tar.bz2
gst-plugins-good-0.10.16.tar.bz2
gst-plugins-ugly-0.10.13.tar.bz2
gstreamer-0.10.25.tar.bz2

After downloading these, extract them to a directory of your choice (e.g. /Home/Downloads).

Step 2: Open the terminal and type this command to download and install necessary development tools and build packages:

yum install gcc bison flex *glib* *diff* liboil*dev*

Step 3: Compile and build the source code. In the Terminal, use the cd command to navigate to the folder where you extracted the downloaded sources (e.g. cd /Home/Downloads), then type these commands in order (press Enter after each line):

cd gstreamer-0.10.25
./configure -prefix=/usr && make && make install

cd ..

cd gst-plugins-base-0.10.25
./configure -prefix=/usr && make && make install

cd ..

cd gst-plugins-good-0.10.16
./configure -prefix=/usr && make && make install

cd ..

cd gst-plugins-bad-0.10.16
./configure -prefix=/usr && make && make install

cd ..

cd gst-plugins-ugly-0.10.13
./configure -prefix=/usr && make && make install

cd ..

cd gst-ffmpeg-0.10.9
./configure -prefix=/usr && make && make install 

Then reboot and have fun with your media. I might write a proper automated script for all of these steps later, but I'm busy at the moment.
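
In the meantime, here is a rough sketch of what such a script could look like. It is untested and assumes the tarballs have already been extracted into the current directory and that the build tools from Step 2 are installed:

#!/bin/bash
# Build and install the GStreamer packages in dependency order.
set -e
for pkg in gstreamer-0.10.25 gst-plugins-base-0.10.25 gst-plugins-good-0.10.16 \
           gst-plugins-bad-0.10.16 gst-plugins-ugly-0.10.13 gst-ffmpeg-0.10.9
do
  cd "$pkg"
  ./configure -prefix=/usr && make && make install
  cd ..
done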

]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379533 2009-01-09T16:00:00Z 2019-02-28T06:46:27Z Random Linux Post

"'Free software' is a matter of liberty, not price. To understand the concept, you should think of "free" as in 'free speech,' not as in 'free beer.'"

We all grew up using one operating system, and I am pretty sure the new blood still does. Maybe not all of us, but in my generation at least 99.8% grew up on the same one. And yes, I am referring to Microsoft Windows.

Personally, I do not have any problem with Microsoft Windows. I grew up using it and Macintosh, though I only used Macintosh in my early years, as far back as I can remember, when I was still in Grade 2. It was a lot easier to use compared to Windows, for it used a lot of graphics, unlike Windows, which tended to focus on words and phrases. Windows offers clear explanations for each item, though nothing beats the power of imagery and intuitive icons. If I am not mistaken, Macintosh was the first to introduce the Graphical User Interface, or GUI; I remember my mom back in the day using Windows 3.1, which still sat on top of a command-line interface, while the Mac computer I used at the time already had a black and white GUI.

You might be wondering by now why I am talking about these two operating systems when I should be explaining why I switched over to a new one that only a few people know exists. Well, these two are the most popular, and people find it peculiar that I switched over to a "not-so-popular" OS, at least where I live. The most common reaction I get is a smirk, followed by "Is it easy to use Linux? Some people said they had a hard time using it," and so on. Some people even look at it as inferior compared to Microsoft and Macintosh and are really skeptical when it comes to performance. Well, here's my defense.

The notion that Linux is hard to use is about twenty summers out of date. Some people still think of Linux as a pure command-line OS that lacks a good GUI like the two popular operating systems I mentioned above. If you are still under that impression, you might have been living under a rock for decades! I would say that Linux has a better GUI story than any OS I have used, because it gives you choices. What do I mean? Instead of the usual taskbar with a Start menu and icons on the desktop, or the clean desktop with a familiar dock, in Linux there are several desktop environments you can choose from, each with an array of features suited to your preferences or hardware. For instance, there is the traditional GNOME desktop environment, which is common on most Linux distributions. There is also the K Desktop Environment, or KDE, which is targeted at new Linux users who are accustomed to Microsoft Windows. Plus there is the XFCE desktop environment, which has become popular in the past few months; it is an extremely lightweight desktop environment that can bring an old computer back to life, since the applications bundled with it use less memory and require less processing power. There are endless customizations that can be done on each of the desktop environments I have mentioned, and you won't get bored; you can get the look that you want and need. And you won't ever have to go to suspicious sites again looking for cracks and/or serial numbers for your software, since everything is free in Linux. Well, almost everything; only a few programmers charge for their programs, and they come real cheap if they do.

Speaking of cracks, in Linux you don't need them, so there is no need to bother. I know some of us (and I should say a lot of us here) have used pirated software, pirated operating systems and other things that are not exactly legal, and I grew tired of it. It's like this: why would I use a commercial OS when I cannot really afford it? I mean, how much is OS X or Vista these days? That's only the core OS; what about the additional productivity software, which is sold separately? Being street smart, we could always manage to get some "cracked" or "stripped" or, worse, "pirated" versions, but come on, show the programmers some respect. They certainly need the money; that's why they chose to work there. And why count on them when there are people out there who are willing to make authentic software for all of us to enjoy for free? All they need is support. And they are going to support us back.

Another thing that made me switch to Linux is speed. I know this statement might trigger a lot of grunts and "come on"s from people who believe that the computer's hardware is responsible for that, meaning that if you have great specs, then any OS would run great, and vice versa. That might be true, but I am sure that a new Windows box will run like a charm for the first few days, and then after a week or so it will start to lag and slower boot times will become noticeable. It is caused by the fact that there are thousands of viruses and worms known for Windows, while there are only around 400 known for Linux and Mac OS. That's just a rough estimate.

Add all those useless services (programs that run in the background) that came installed in Windows XP and you'll get a boot time close to five minutes.

Well...

The support system of Linux is really interesting. Instead of calling a number and paying for tech support representatives, all you need to get support is an Internet connection. In Linux, you get support from the user community and not from hired technical people. You just need to register in your distribution's community forum and fire your questions there. Help usually arrives within an hour or two, and in several weeks' time you'll find yourself gaining familiarity with the Linux distribution that you have chosen and hanging out in the community forums, helping newcomers out.

There are a lot of great things I could mention about Linux, but let me clarify one thing: Linux is just the kernel, not the whole OS. The kernel together with the bundled software makes up a so-called distribution. Distributions that use the Linux kernel include Ubuntu, Fedora, Mandriva, Debian, OpenSUSE and so on. Among the most famous is Ubuntu, which is said to be the most user friendly, and I have to admit that it has a great community. Personally, I use Fedora, not because Linus Torvalds himself and the computers at NASA use it, but because I got used to it and feel uncomfortable using any other distribution.

Linux and open source operating systems and applications seem to be more popular now than ever due to the sudden boom of netbooks and other low-priced portable devices that come with Linux pre-installed.

To end this post, I would say that GNU/Linux is the future of computing. It does not just give us free software; it also gives everyone an idea of how everything works inside the box.


]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379536 2008-11-18T16:00:00Z 2019-02-28T06:48:30Z Installing Cairo-Dock on the Acer Aspire One

Cairo-Dock is an OSX-ish application launcher that you can place on your desktop to replace your panel. Or, if you're like me, you can have both, which will make your desktop look similar to this [it is recommended to have at least 1GB of RAM though]:

Thankfully, it is fairly easy once you have activated the standard XFCE desktop and set aside the Acer-modified desktop; instructions on how to do that can be found here. Of course, this assumes that you have already added the new RPM Fusion repositories; if you haven't yet, simple instructions can be found here.

Once we have the standard XFCE desktop, we need to activate Compiz [a pre-installed program that can add great 3D effects to your desktop] by installing Fusion-Icon. Just go to the Terminal and type in:

sudo yum install fusion-icon

After installation, we can run it by pressing Alt+F2 on the keyboard, typing in fusion-icon and clicking Run. You will know it is successful if you see a new blue icon in your system tray [where the clock, etc. is]. You can right-click it to configure some effects that you might like to enable. If you are wondering what the Emerald theme is, it is a theme manager, and you can get themes for it by opening the package manager and searching for emerald-themes. Later in this guide, we are going to add fusion-icon and cairo-dock so that they run automatically upon startup.

Now that we have fusion-icon, all we need to do is get the cairo-dock RPM from any of the mirrors listed here. After downloading it, just double-click it and it will be installed automatically. You can find it in your menu under System, named Cairo-Dock. Click on it and the dock will appear on your desktop. Right-click it to personalize it; adding applications is as easy as dragging and dropping .desktop files from /usr/share/applications onto the dock, or you can create manually configured launchers/subdocks/etc. if you want.

Themes for it are also available; try searching for some at http://rpm.pbone.net/ by typing in cairo-dock-themes. I got my themes there but I forgot the direct link; I'll update this later.

Now, if you notice, fusion-icon and cairo-dock do not open upon startup. This can easily be remedied by opening a Terminal and typing:

xfce4-autostart-editor

A new window should pop up; just add those two applications, with the commands being cairo-dock and fusion-icon, respectively. And that's pretty much it.

Have fun on your new desktop!

]]>
John Crisostomo
tag:blog.johncrisostomo.com,2013:Post/1379537 2008-10-28T16:00:00Z 2019-02-28T06:49:57Z Installing Mozilla Thunderbird and Pidgin on Acer Aspire One

This How-To is specific to the Acer-modded Linpus Lite; please don't try this on an Acer Aspire One that has Microsoft Windows XP or Vista installed.

This How-To will guide you in installing Mozilla Thunderbird and Pidgin Messenger on your Aspire One and changing the icons on your desktop to the programs' original icons. This will only work if you are still using AME, the Acer e-mail client that came pre-installed with your Aspire One, as well as the Acer Messenger.

The first thing we need to do is uninstall AME by typing this command in the Terminal [Alt+F2, then type Terminal and click Run]:

sudo yum remove evolution-data-server libpurple

When the terminal is finally done performing those tasks, we can go ahead and install Pidgin and Thunderbird using pirut or, in my case, the Smart Package Manager [assuming you have already signed the keys using this command: sudo yum update fedora-release]. Open pirut or Smart and then search for Pidgin and then Thunderbird. After we're done with that, we're going to associate both programs with the default Mail and Messenger icons by typing these commands in the Terminal:

cd /usr/acer/bin

sudo ln -s /usr/bin/thunderbird AME

sudo ln -s /usr/bin/pidgin UIM

Well actually, it is as easy as that and we are done. You should now be able to use Mozilla Thunderbird and Pidgin using the default icons, but in case you are unhappy with those icons and want to have the original ones, don't worry, because we can change that by going to the Terminal and typing:

sudo mousepad /usr/share/applications/AME.desktop

It should open a text editor with a lot of text in it. In case you want to label the icon differently, say change it from E-mail to Thunderbird, just replace the text after Name= and GenericName= with your preferred name. Back to the icon: when we scroll down, we should see a line that says Icon=; just replace its value with thunderbird.png, save the file, and we're done. Sadly, for Pidgin this has to be done in a different way, and I'll cover that in my next post because it involves tweaking group-app.xml, where a single mistake can ruin your desktop.
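
For reference, after the edits above, the relevant lines of AME.desktop would end up looking something like this (the exact remaining fields depend on the stock file shipped with Linpus):

Name=Thunderbird
GenericName=Mail Client
Icon=thunderbird.png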

If you're already satisfied, then that's all; however, for additional info, you can read on below.

So now we have Mozilla Thunderbird and its original icon; what next? This is not strictly necessary, but as you may have noticed, the Mozilla Thunderbird we just installed does not update itself automatically: if we check the Help menu, Check for Updates is grayed out. That is because this version of Thunderbird comes from the Fedora 8 repository [the Linux distribution that Linpus is based on]. To fix this, we will run the following steps in the Terminal to get the official release from Mozilla and install it in the /opt directory. This is how:

wget "http://download.mozilla.org/?product=thunderbird-2.0.0.17&os=linux&lang=en-US"

sudo tar -xvf thunderbird-2.0.0.17.tar.gz --directory /opt

And then a lot of unpacking happens. After it's done, we can type these commands in the Terminal:

sudo chown user -R /opt/thunderbird

sudo mousepad /usr/share/applications/AME.desktop

And we just need to change the Exec= line to look just like this:

Exec=/opt/thunderbird/thunderbird

That's pretty much it. But if it bothers you to have two Mozilla Thunderbirds installed on your Aspire One, you can delete the old one via pirut or Smart. That will delete the icon as well, which can easily be remedied by searching Google for the keywords I used, 'thunderbird.png 64x64'. Just copy the image to your Downloads folder and then move it to the pixmaps directory using this command:

sudo cp /home/user/Downloads/thunderbird.png /usr/share/pixmaps

After that, we're done. Have fun!

]]>
John Crisostomo