This is one of those frustration posts where I just spent hours working on something before finally getting a working solution. I learned quite a bit, but I feel like it should not have taken me that much time…

Anyway, the goal was to generate a PDF from HTML, then send it back to the browser so the user could download it. I tried a lot of different things, and it’s more than likely my solution is not the most elegant or the fastest, but it works.

I consider this post a place where I can store this solution, just in case I forget it in the future. I’ll know where to look. Let’s jump into the actual solution.

The solution!


Let’s start with the front-end.

const downloadPDF = () => {
    fetch('/api/invoices/create-pdf', {
        method: 'POST'
    })
        .then(res => res.arrayBuffer())
        .then(res => {
            const blob = new Blob([res], { type: 'application/pdf' })
            saveAs(blob, 'invoice.pdf')
        })
        .catch(e => alert(e))
}

This is the function that does everything. We are generating an invoice in my case.

1) A fetch with the POST method. This is the part where we send the request to the server, which generates the PDF with the proper data. (Server code will follow.)

2) The response we get needs to be converted into an ArrayBuffer.

3) We create a Blob (Binary Large Object) with the new Blob() constructor. The Blob constructor takes an iterable as its first argument. Notice how our response, converted to an ArrayBuffer, is surrounded by square brackets ([res]): to create a blob that can be read as a PDF, the data needs to be passed as an iterable of binary parts. Also, notice the type application/pdf.

4) Finally, I’m using the saveAs function from the file-saver package to create the file on the front end!


Here is the back-end side. There is a whole Express application around it; I’ll just show you the controller where the methods for this PDF problem reside.

module.exports = {
    createPDF: async function(req, res, next) {
        const content = fs.readFileSync(
            path.resolve(__dirname, '../invoices/templates/basic-template.html'),
            'utf-8'
        )
        const browser = await puppeteer.launch({ headless: true })
        const page = await browser.newPage()
        await page.setContent(content)
        const buffer = await page.pdf({
            format: 'A4',
            printBackground: true,
            margin: {
                left: '0px',
                top: '0px',
                right: '0px',
                bottom: '0px'
            }
        })
        await browser.close()
        res.end(buffer)
    }
}
1) I am using puppeteer to create a PDF from the HTML content. The HTML content is read from an HTML file I simply fetch with readFileSync.

2) We store the buffer data returned by page.pdf() and we return it to the front-end. This is the response converted to an arraybuffer later.


Well, looking at the code, it really looks easier now than it actually was when I tried to solve this problem. It took me close to 10 hours to find a proper answer. 10 FREAKING HOURS!!!!

Note to self: if you get frustrated, walk away from the computer, get some fresh air, and come back later…

Happy Coding <3



Lately, I have spent a lot of time in the world of private blockchains. When you are learning a new technology like this one, you come across certain concepts or principles that you have to understand in order to move on. Docker and containers seem to be one of them for me right now. So, in good old let’s-write-about-what-I-learn fashion, I’m trying to explain what Docker does and how I’m getting started with it.


Docker is a platform for developers to develop and deploy applications with containers. Docker didn’t invent containers or containerization, but it popularised the concept, so the two names are sometimes used to describe the same thing.

Containers are launched by running an image. An image is an executable package that includes everything the application needs to run, and where/how to find it. A container is a running instance of an image. This way of doing things takes fewer resources than Virtual Machines (VMs), which provide a full “virtual” operating system, representing more resources than most applications need. By containerizing your application and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.

Docker and NodeJS

Enough theory, let’s see how we could use Docker to create an image for a NodeJS application.

First, install Docker by following the instructions here. Once this is done, run docker --version in your terminal. You should have something like:

Docker version 17.12.0-ce, build c97c6d6

If you want to make sure everything is working, you can run: docker run hello-world. This will pull the hello-world image for you and launch a container.

You can see a list of the images you downloaded with docker image ls.

You can see a list of running containers with docker container ls, or you can see all the containers with docker container ls --all. Remember that containers are instances of the images you downloaded.

So, if you run the hello-world image, assuming you didn’t have any containers running before, you will see one container in this list. If you run hello-world 5 times, you will have 5 containers ( instances of the hello-world image ).

Note: To stop containers, run docker kill $(docker ps -q). You will still see these containers with docker container ls --all. To remove them completely, run docker rm $(docker ps -a -q).

The NodeJS application

Let’s do something very simple: an express app with 2 routes that render 2 html pages. Create a new directory called express-app:

mkdir express-app && cd express-app

Run npm init with the defaults. Then, run npm install express --save.

Create 3 files: index.js, index.html, about.html.

  • index.js
const express = require('express')
const app = express()

app.get('/', (req, res) => {
    res.sendFile(`${__dirname}/index.html`)
})

app.get('/about', (req, res) => {
    res.sendFile(`${__dirname}/about.html`)
})

app.listen(3000, () => {
    console.log('Listening on port 3000!')
})
Create an express app, add 2 routes for our html files, and listen on port 3000.

  • index.html
        <h1>Hello Docker from index</h1>
  • about.html
        <h1>About page</h1>

Good, our app is done. If you run node index.js, you will see our html pages on localhost:3000/ and localhost:3000/about.


To define the environment inside your container, we will use a Dockerfile. This is a list of instructions that tells Docker what it must do to create the image we want.

Create a Dockerfile in your directory with touch Dockerfile:

FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]

What’s happening here? The first line indicates the image we start from: node:carbon, the latest Long Term Support version of Node available at the time of writing.

The second line creates a directory to hold our application’s code inside the image.

The third and fourth lines copy the package.json and run the npm install command. The base image gives us node.js and npm, so we install our dependencies, in this case express.js only. Note that we will NOT copy the node_modules directory.
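One common way to keep node_modules out of the build context is a .dockerignore file next to the Dockerfile. A minimal sketch (adjust to your project):

```
node_modules
npm-debug.log
```

Docker then skips these paths when COPY . . bundles the build context, so the dependencies installed by RUN npm install are never overwritten by your local ones.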

The COPY instruction bundles our app inside the Docker image, so our html files and index.js file in our case.

The EXPOSE instruction exposes port 3000, the port our app uses.

Finally, the CMD instruction specifies which command needs to be run for our app to start.


Everything is now ready, we can build the app.

Run docker build -t node-app .

The -t flag allows you to give a friendly name to your Docker image. You should see something like this in your terminal:

Sending build context to Docker daemon   21.5kB
Step 1/7 : FROM node:carbon
 ---> 41a1f5b81103
Step 2/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> ffe57744035c
Step 3/7 : COPY package*.json ./
 ---> Using cache
 ---> c094297a56c2
Step 4/7 : RUN npm install
 ---> Using cache
 ---> 148ba6bb6f25
Step 5/7 : COPY . .
 ---> Using cache
 ---> 0f3f6d8f42fc
Step 6/7 : EXPOSE 3000
 ---> Using cache
 ---> 15d9ee5bda9b
Step 7/7 : CMD ["node", "index.js"]
 ---> Using cache
 ---> 154d4cd7e768
Successfully built 154d4cd7e768
Successfully tagged node-app:latest

Now, if you run docker image ls, you will see your node-app in the list.

Launch the container(s)

We can now launch our containers. Run docker run -p 8080:3000 -d node-app

The -d flag runs the app in detached mode. -p 8080:3000 maps a public port to a private one: 8080 is the public port on your host, 3000 the private port the container exposed.

Go to localhost:8080 and your app is running!

Now, run docker run -p 10000:3000 -d node-app, then docker run -p 4000:3000 -d node-app.

Open localhost:10000 and localhost:4000 and see that you have three different instances of your node-app image running at the same time! To make sure of it, you can run docker container ls and see your 3 containers running the same image on different ports.

Well, that was a quick introduction to Docker. Have fun!



In my last article, I gave a quick overview of the Hyperledger Composer framework for building a business network on a private blockchain technology. I used a land registry network to show how the framework works. We then built a React application to consume the REST API provided.

This time, instead of using the REST API, I made a little command line application using the Javascript API. The concept is simple: you enter commands in your terminal to trigger actions (retrieve data, create assets and/or transactions). We will re-use the same land registry network from the previous article.

Continue reading Private Blockchains: Hyperledger Composer Javascript API



I recently started to work with a company as a freelancer. As I got access to the source code, I had the chance to look around at the technologies used and the particular style the software was written in. One of the packages used for the client-server communication was socket.io-stream. With this blog post, I’ll give a quick example of how this package works. Let’s jump into it!

What is it ?

Here is how it is described:

This is the module for bidirectional binary data transfer with Stream API through Socket.IO.

Combining the native Node Stream API with Socket.IO. That sounds promising. One advantage of using streams with sockets: if you end up having to transfer a lot of data between the client and the server, a single big payload can become very slow. Streams, on the other hand, listen to a data event, meaning they receive and send data by chunks. Let’s try to build something with it.

We are going to do something very simple, as an example. Let’s say I want users to send me a file; I will receive it and write the content of the file into a whole new file.

We will use socket.io-stream to create a new stream on the front-end. That stream will send the data to our back-end, which will then write its content to a new file.

First, create a new directory and let’s install what we will need.

npm install express socket.io socket.io-stream

Then, let’s create two files: server.js and client.html

Here is how our server file will look like:

const ss = require('socket.io-stream');
const path = require('path');
const app = require('express')();
const server = require('http').Server(app);
const io = require('socket.io')(server);
const fs = require('fs');

io.on('connection', socket => {
    socket.emit('connected', 'hello World');
    ss(socket).on('file', (stream, data) => {
        // Name the output file however you like; a timestamp avoids collisions.
        stream.pipe(fs.createWriteStream(Date.now() + '.txt'));
    });
});

app.use((req, res) => {
    res.sendFile(path.join(__dirname, 'client.html'));
});

server.listen(3000);

We require our dependencies at the top and create an HTTP server; Express will help us send the static html file to the front-end. As you can see, socket.io-stream listens for an event called file. This event receives two arguments, a stream and some data. We then use the pipe method on that stream to connect it to a writable stream, which writes our content to a new .txt file. Let’s take a look at the front-end now.


<!DOCTYPE html>
<html>
    <body>
        <input type='file' id="file" />
        <script src="/socket.io/socket.io.js"></script>
        <script src="socket.io-stream.js"></script>
        <script>
            const socket = io();
            let fileElem = document.getElementById('file');

            socket.on('connected', data => console.log(data));

            fileElem.onchange = e => {
                let file = e.target.files[0];
                let stream = ss.createStream();
                ss(socket).emit('file', stream, { size: file.size });
                let blobStream = ss.createBlobReadStream(file);
                let size = 0;
                blobStream.on('data', chunk => {
                    size += chunk.length;
                    console.log(Math.floor(size / file.size * 100) + '%');
                });
                blobStream.on('end', () => console.log('Upload complete'));
                blobStream.pipe(stream);
            };
        </script>
    </body>
</html>

The html is very simple here: just an input that accepts a file. I load socket.io and socket.io-stream with script tags (you could use a CDN, or require() if your build is configured for that).

So, any time our file input changes, meaning we give it a new file, we create a new stream thanks to ss.createStream(). Then, we emit the file event. The event can take a few optional parameters; here, I give the stream and the file size. Then, I create a read stream thanks to createBlobReadStream. A Blob represents raw binary data that is not native Javascript data (like a file). I added a little snippet to display the progress of the upload (with on('data')). Finally, we connect this read stream to the stream we created at the beginning (which is also the stream we passed as an argument in our emit).

And that is pretty much it for our application. You can try it out by running:

node server.js  

You will see that every time you try to upload a new file, the progress will be shown in the console. And you will see a new file appearing in your directory.

Obviously, there are a lot of possibilities to use this. But, I think it is a nice start.

Feel free to ask questions and share!
Have a nice day!



Node.js offers us only a few modules to get started, but they are extremely useful. When you build a platform like Node.js, your goal is to have components that make it easy to build on top. In my Node.js learning journey, I like to dig a little deeper into the source code of those modules. So, for the purpose of this article, I chose to look at the querystring module. You can find the documentation here and the source code here.

The Query String module

The query string module is a small one: 413 lines (comments included). If you look at the docs, there are only 4 main functions: escape, unescape, stringify and parse.
As its name indicates, this module works on the query part of a URL. Let’s use the url module to get the details of a URL. Let’s create a file:

const url = require('url');

let urlStr = 'https://www.google.com/?gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw';
console.log(url.parse(urlStr));

In this example, we want the details of a Google homepage URL. To achieve this, we use the url.parse() function.
When you run this file, here is what your console will print:

Url {
  protocol: 'https:',
  slashes: true,
  auth: null,
  host: 'www.google.com',
  port: null,
  hostname: 'www.google.com',
  hash: null,
  search: '?gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw',
  query: 'gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw',
  pathname: '/',
  path: '/?gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw',
  href: 'https://www.google.com/?gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw' }

Here are the details of this URL. We get a URL object back. In this object, you can see a query key. This is what the query string module works on. It is everything after the protocol, host and pathname.
Great, now we know the scope of our module. Let’s see the functions it offers us.

A- parse()

We will start with the parse function. It takes one mandatory parameter, a string. Keeping the previous example with the Google homepage, let’s parse the query of the URL like this:

const qs = require('querystring');
let urlQuery = 'gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw';
console.log(qs.parse(urlQuery));

After running this, the console returns:

{ gfe_rd: 'cr', ei: 'CK0ZWOmGJoqg8wfUjIGYCw' }

We can see a pattern here. The ‘&’ sign acts as a separator between key:value pairs, and the ‘=’ acts as the separator between each key and value. These two signs are the defaults, and you can change them. The parse function takes 3 optional parameters. The first one defaults to ‘&’ and delimits key and value pairs in the query string. The second one defaults to ‘=’ and delimits keys from values. Let’s change them in our query string and in our function call:

const qs = require('querystring');
let urlQuery = 'gfe_rd:cr!ei:CK0ZWOmGJoqg8wfUjIGYCw';
console.log(qs.parse(urlQuery, '!', ':'));

We replaced the ‘&’ by ‘!’ and the ‘=’ by ‘:’. If we run this, we will get the exact same result as before.
Note: If you do not specify the right separators, you will just get your whole query string back as an empty key. Most URLs use ‘&’ and ‘=’ as separators, which is why they are the defaults.

The third optional parameter takes an object. This object has two keys by default. The first one (decodeURIComponent) takes the function querystring.unescape() as a value; we will see this function later in this article. This is where you specify how to deal with percent-encoded characters (such as %20 for spaces). The second key is maxKeys, and it defaults to 1000. It specifies the maximum number of keys to parse, which gives you more control over really long URLs. Give it 0 to remove the limit entirely.

Let’s move on to stringify.

B- stringify

The stringify function is the opposite of parse. It takes one mandatory parameter, an object, and transforms it into a query string. Like this:

const qs = require('querystring');
let queryObj = { gfe_rd: 'cr', ei: 'CK0ZWOmGJoqg8wfUjIGYCw' };
console.log(qs.stringify(queryObj));

We take the object that the parse function returned in the last section and pass it as the stringify argument.
In our console, here is what we get back:

gfe_rd=cr&ei=CK0ZWOmGJoqg8wfUjIGYCw

We find the same query string as before. However, just like the parse function, the stringify function takes 3 optional parameters. The first two are the same: they define the separators. Let’s change them:

const qs = require('querystring');
let queryObj = { gfe_rd: 'cr', ei: 'CK0ZWOmGJoqg8wfUjIGYCw' };
console.log(qs.stringify(queryObj, '!', ':'));

We change the separators to ‘!’ and ‘:’. Here is the result:

gfe_rd:cr!ei:CK0ZWOmGJoqg8wfUjIGYCw

We get back the query string with the separators we specified.
The third optional parameter is different from the one in parse. It only takes the encodeURIComponent key, which defaults to the querystring.escape() function.
Let’s take a look at this escape function now.

C- escape

The escape function is fairly straightforward. It takes one parameter, a string. The URL specification makes certain characters unsafe, and the escape function makes sure query strings are escaped with percent-encoding. Here is what it gives us with a random query string:

const qs = require('querystring');
let queryToEscape = 'this could be&areally cool!String=to,escape?Is it_éven$~possißle??';
console.log(qs.escape(queryToEscape));

Here is how this query looks when it’s escaped:

this%20could%20be%26areally%20cool!String%3Dto%2Cescape%3FIs%20it_%C3%A9ven%24~possi%C3%9Fle%3F%3F
The ‘&’, ‘=’, ‘?’, ‘é’, ‘ß’, ‘,’ , ‘$’ and spaces are all percent-encoded now. As we saw earlier, this function is used by stringify. You will probably rarely use it yourself.
Let’s see the last function, unescape.

D- unescape

As you might have guessed, the unescape function does the opposite of escape. It decodes percent-encoded characters back into a normal string. Like this:

const qs = require('querystring');
let queryToDecode = 'this%20could%20be%26areally%20cool!String%3Dto%2Cescape%3FIs%20it_%C3%A9ven%24~possi%C3%9Fle%3F%3F';
console.log(qs.unescape(queryToDecode));

We take the result we got from the escape example and decode it. Unsurprisingly, we get the original string back, with the spaces and all the special characters. As you saw, this function is used in the parse function; just like escape, it will probably rarely be used directly. One nice thing to note: this function uses the built-in Javascript function decodeURIComponent. You can find more information about it here.


I wrote this article as part of a learning process. I believe it is very good to dig into the source code of a project to learn more about it. The querystring module gave me a relatively simple starting point: it is very short and pretty easy to grasp. I encourage you to read the module’s code on GitHub and see how those functions are implemented. I didn’t understand everything, but it’s always a good idea to expose yourself to other people’s code. I did learn a few things during the process, and will most certainly try it again sometime soon.

As always, feel free to share and comment.
Have a nice day!


What is Express?

Express is a web framework that provides a very minimalist structure to build web applications. Express shares the same spirit as NodeJS: it is unopinionated, fast and small. This framework doesn’t force you down a certain path; it only gives you a solid foundation to build upon. This is why Express is a very popular tool in the community. In this article, I will show you an extremely simple Express application. Nothing crazy: a single file that will help you have something up and running in no time. Then, I will show you the Express generator, which creates a full skeleton of an Express application. Let’s get started.

Install and Run

Let’s create a directory called simple-express-app. In this directory, create a single file called index.js.
The first thing we want to do is install Express globally. To do so, open your terminal and enter this command:

npm install -g express

Note: you may have to use sudo for this command.

Great, now open your index.js and add the following:

const express = require('express');
const app = express();

app.get('/', (req, res, next) => {
    res.end('Hey Home');
});

// a JSON response for the about page
app.get('/about', (req, res, next) => {
    res.json({ message: 'About Page' });
});

app.listen(3000, () => {
    console.log('Listening on port 3000');
});

Explanation: we import express on the first line. Then, we instantiate a new Express application and store it in the variable app. This new Express instance allows us to use a lot of useful methods. Here, because this is a simple example, we use the get method, which handles HTTP GET requests to the specified path.
On line 4, we tell Express how to handle GET requests made for our index page (‘/’). The callback function takes three parameters: a request, a response and next. You will often see these abbreviated as req, res, next. We can access the request object thanks to req and control our response with the res object. The next function tells Express to call the next matching route. We will see it in use a bit later, when we use middlewares.
So, our index page is going to end the response by sending the string ‘Hey Home’.
On line 9, we handle the GET request at the /about path. This time, we decide to send a JSON response. Then, we need to tell our application which port it should listen to.
So if we run this application with:

node index.js

and then visit our http://localhost:3000/ , we see => ‘Hey Home’. If we go to http://localhost:3000/about , we get => {message: ‘About Page’} . Awesome!

Now, let’s go a bit deeper and use something called a middleware. As its name indicates, a middleware is something used in the middle. Let me show you a quick example by changing our index.js file like this:

const express = require('express');
const app = express();

app.use((req, res, next) => {
    console.log(`${req.method} request for ${req.url} on ${req.hostname}`);
    res.status(200).set('Content-Type', 'text/html');
    next();
});

app.get('/', (req, res, next) => {
    res.end('<h1>Hey Home</h1>');
});

app.get('/about', (req, res, next) => {
    res.end('<h1>About Page</h1>');
});

app.listen(3000, () => {
    console.log('Listening on port 3000');
});

We use middlewares in Express with the use method. Every time our application receives a request, but before we give control back to our other routes, we run the middleware at line 4 (in the middle, remember?). This way, we have a lot more control over our application and we avoid duplicate code. In this example, we log a string with the URL, the hostname and the method. Then, we tell our response object to send HTML with a 200 status code (meaning everything is OK). I do not have to tell each route how to display its content, nor do I have to write the console.log statement every time. Let’s run this file now.

If you visit the homepage or the about page, you will see the corresponding HTML and the log in the console. But if you go to a path we do not handle, /contact for example, you will still see the log in your console. Remember, the middleware is called before the response is emitted; then we call next(). Earlier, I said that next passes control to the next matching route. So, if I visit /about, when next() is called in our middleware, we run app.get(‘/about’, …). But if I visit a page that we do not handle, our server will just hang… We can add a little piece of code after our routes, like this one:

app.use('*', (req, res, next) => {
    res.end('Page not found');
});

If our server can’t find a matching route, we will just print out ‘Page not found’. Our first middleware will log our string and pass control to this new middleware if no path matches. The ‘*’ is a shortcut for all the routes.
Note: Make sure to add this middleware after all the other routes. Otherwise, you will always see ‘Page not found’.

Ok, we now have a very simple Express application that helps you understand a little bit of what Express is about. We only scratched the surface here, and it would be impossible to explore every Express possibility in one blog post. In the last section, we will see how the Express executable works.

Express Generator

The Express executable does a lot of good things for you. It creates a skeleton for your app and allows you to choose a template engine. Express provides support for a variety of different template engines. Those engines let you pass data to your view; it’s pretty much HTML on steroids. You can pass markup, data, add some logic… but this is outside the scope of this article. You can get more info about the executable by typing the command:

 express --help

  Usage: express [options] [dir]


    -h, --help          output usage information
    -V, --version       output the version number
    -e, --ejs           add ejs engine support (defaults to jade)
        --hbs           add handlebars engine support
    -H, --hogan         add hogan.js engine support
    -c, --css   add stylesheet  support (less|stylus|compass|sass) (defaults to plain css)
        --git           add .gitignore
    -f, --force         force on non-empty directory

In addition, you can specify a CSS preprocessor using the --css option, or enable session middleware with --sessions.
Ok, let’s just see what this thing does. I’ll create an application called myApp and use the hogan template engine.

express -H myApp

And here is the result:

express -H myApp

   create : myApp
   create : myApp/package.json
   create : myApp/app.js
   create : myApp/public
   create : myApp/public/javascripts
   create : myApp/public/images
   create : myApp/public/stylesheets
   create : myApp/public/stylesheets/style.css
   create : myApp/routes
   create : myApp/routes/index.js
   create : myApp/routes/users.js
   create : myApp/views
   create : myApp/views/index.hjs
   create : myApp/views/error.hjs
   create : myApp/bin
   create : myApp/bin/www

   install dependencies:
     $ cd myApp && npm install

   run the app:
     $ DEBUG=myApp:* npm start

As you can now see, you have a shiny new directory with a nice application skeleton. You have to install the few dependencies that were generated and you are good to go! We won’t cover everything, but let’s just take a look at our entry point: app.js

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'hjs');

// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', routes);
app.use('/users', users);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});

module.exports = app;

Alright, let’s break this down a bit. The first lines import all the dependencies. Then, we require the files responsible for our routes. You can use a Router with Express to make it simpler to handle your routes; you can get more information here. Then come the two app.set lines: we tell our application where our views are and which template engine we use (I chose Hogan for this). The next block serves as configuration for our app. bodyParser allows your application to parse request bodies, for example, and logger prints friendly output in your console. The express.static middleware tells Express where to find your static files; here, it will look in the public directory.
You can run this application with

npm start

and visit localhost:3000 .

Congratulations, you now have a solid skeleton for your future web applications! I highly encourage you to play around with the Express generator and see what each part does. Remember that this framework really doesn’t force you onto any path. The generator only gives you some tools to improve your development process, and a Router. You still have complete control over the tools you want to implement and the path you want to take.
Express is a really great tool to learn and explore all the possibilities of web development.

Have fun.
As always, feel free to share and ask questions.
Have a great day!


In this article, we will build together a simple command-line application using NodeJS.

We will make a very original (not really) ToDo application. In this application, you will be able to create, update and remove todos. We will use file-based storage, meaning our todos will be stored in a file. Considering the size of the project, I believe a database is not necessary.

What do we need?

We will only use the tools that NodeJS provides us out of the box. No packages, no npm to worry about.

We won’t need much, actually. To interact with the command line, we will use the process object. It is a global object that allows us to read and control the current NodeJS process. You can find more information about it here: Node Process Docs.
process is a global, meaning it’s always available; no need to require() it.

Next, because we will implement a file-based storage, we need to read and write to a file. NodeJS gives us the filesystem module, known as ‘fs’. We do need to require it.

And that’s all we need. The rest of the logic will be handled by regular Javascript. Let’s get started!

1) Files

First, let’s create a directory. Go to your terminal and type

mkdir <nameOfYourNewFolder>

and press enter. Now, type

cd <nameOfYourNewFolder>

and then:

touch server.js todos.txt

Great, we have our files now.
Let’s open our text editor and create some todos in our todos.txt.

Here is my todos.txt:

Buy clothes|Feed the cat

Note: To differentiate each todo, I chose to separate them with a pipe character. This choice is totally up to you, choose whatever character you feel is appropriate.

2) Read Todos from file

Awesome, now how do we read those todos from our todos.txt and display them in the terminal?
Open server.js and type this.

process.stdin.setEncoding('utf8');

const fs = require('fs');

//Read todos from file and display them
let todos = getTodosFromFile();
displayTodos(todos);
displayInstructions();

function getTodosFromFile(){
    let todos = fs.readFileSync('todos.txt', {encoding: 'utf-8'});
    return todos.split('|');
}

function displayTodos(todosArray){
    console.log('\n\nHere are your todos: \n\n');
    for(let i = 0; i < todosArray.length; i++){
        console.log(i + ') ' + todosArray[i]);
    }
}

function displayInstructions(){
    console.log('Type quit to exit. Type create <todo> to add a new todo.\n',
    'Type update <index> <todo> to update a todo.\n',
    'Type delete <index> to delete a todo.');
}

Let me explain: the first line sets the encoding for the terminal, here utf-8 (in this case, stdin is our terminal).
Next, we require our fs module.

Next, we have a few functions.

The first function, getTodosFromFile, retrieves the content of our todos.txt. To achieve this, we use the fs.readFileSync function. It takes 2 parameters: the path of the file we want to read, and an object of options. The file is in the same directory as server.js, so we just type ‘todos.txt’. For the options, I specified the encoding {encoding: ‘utf-8’}. Now, fs.readFileSync returns a string, which I split on the ‘|’ character. That gives me an array and tells me where each todo starts and stops.

Note: If you use a different character than ‘|’ , you’ll have to replace the appropriate character in the split method.

Awesome, we are now in possession of an array of todos. Let’s show them with the displayTodos function. As you can see, this function takes the array of todos and iterates through it. I added a few newline characters (\n) for clarity.

The displayInstructions function tells the user how to interact with the application.

Amazing, so let’s see how this works.
Go to your terminal and type

node server.js

And here you see your todos:

Here are your todos:
0) Buy clothes
1) Feed the cat

Type quit to exit. Type create <todo> to add a new todo.
Type update <index> <todo> to update a todo.
Type delete <index> to delete a todo.

Now, let’s add the different commands to create, update and delete todos.
We’ll start with creating.

3) Create todo

Right now, if you type something in the console, nothing happens. You can only kill the process with Control + C. Why? Because we need to tell our process.stdin (terminal) what to do when we enter data.
In our displayInstructions function, you can see that we have 4 words that will be used as commands: quit, create, update and delete. Again, those are totally arbitrary.

So, our program needs to identify when the user enters data to the terminal, then our program needs to know if the text entered contains a special command. If the answer is yes, it triggers some actions depending on the command.

First things first: making our terminal listen to the data we type. To do this, we add this code:

process.stdin.on('data', (text)=>{
    // our logic will go here
});

Each time we send data to the terminal, meaning each time we press Enter, the program will run the callback. The text parameter will be the string that you typed in the terminal. It will be a string because we set the encoding to ‘utf-8’ on the first line, remember?

Now, we need to know if the first word of the text is in fact a command. If we use the split method on the string and a switch statement, we can handle this problem like this:

process.stdin.on('data', (text)=>{
    //split the string at the spaces
    let textAsArray = text.split(' ');
    let command = textAsArray[0];

    switch(command){
        case 'create':
            //create Todo
            break;
        case 'update':
            //update Todo
            break;
        case 'delete':
            //delete Todo
            break;
        case 'quit\n':
            //exit program
            break;
        default:
            console.log('Unknown command.');
            displayInstructions();
    }
});

Each command has its own logic. If the command is not create, update, delete or quit, we use the displayInstructions function to help the user.

Note: You may notice the newline (\n) character after ‘quit’. When you press Enter, the terminal adds this character automatically. The other commands need additional input after the command word, so if you just type ‘create’ or ‘delete’ or ‘update’ and press Enter, the newline gets attached to the command itself and the default case is triggered.

Very well, let’s write our logic for creating a Todo. The ‘create’ case will look like this:

case 'create':
    // Get todo text and update todos.txt
    let todoText = textAsArray.slice(1).join(' ').trim(); // trim the newline the terminal adds
    fs.appendFileSync('todos.txt', '|' + todoText);
    let todos = getTodosFromFile();
    displayTodos(todos);
    break;

First, we take away the command from the text. Remember, we split the string before the switch statement, so everything after the command is the body of our Todo. We join the rest of the array back together with spaces.
Now, here is the fun part. We have our Todo as a string. We need to add it to our file todos.txt. To achieve this, we use fs.appendFileSync. It takes 2 parameters: the path of the file we want to write to, and the text we want to write. Notice that I add the ‘|’ character before the text of the Todo to help us differentiate todos in the future. After that, we use two functions that we already wrote earlier, getTodosFromFile and displayTodos. Now, after running this program, you will see your todos.txt updated and your new todo added in your terminal.

Note: Careful with the function you use to write to a file. Certain functions like fs.writeFileSync will replace the content of the file and not append. You can get more informations about the fs functions here: Node fs Docs

4) Update Todo

Awesome, now we can add todos to our little program. Let’s move on to updating.
If you read the instructions, you saw that the update command needs to be followed by the index of the todo you want to update, then by the new text of the todo. This means that we have to slice our array a bit more to retrieve the index. Here is how we could put this in place:

Add this inside the ‘update’ case:

case 'update':
    let indexUpdate = textAsArray[1];
    let newTodoText = textAsArray.slice(2).join(' ').trim();
    if(isNaN(parseInt(indexUpdate))){
        console.log('You must enter the index of the todo you want to update.');
    } else {
        updateTodo(indexUpdate, newTodoText);
    }
    break;

And here is the updateTodo function:

function updateTodo(index, todo){
    let todos = getTodosFromFile();
    if(index > todos.length - 1 || index < 0){
        console.log('Index out of range');
        return;
    }
    todos[index] = todo;
    let updatedTodos = todos.join('|');
    fs.writeFileSync('todos.txt', updatedTodos);
    console.log('\nTodo Updated! \n');
    displayTodos(todos);
}

Alright, let me break this down a bit. So we know the index is going to be at the second position in the textAsArray. This means everything after the index is the body of the updated Todo. We need to make sure the index is actually a number. If it is, we run the updateTodo function.
This function retrieves the todos first so we are up to date with the file. Next, we check that the index is in range. If it is, we update the correct todo. Then, as we did with create, we write the new todos to our file. Finally, we display the new set of todos.
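If you want to see what “making sure the index is a number” looks like in isolation, here is a sketch using parseInt and isNaN (one possible check, not the only one):

```javascript
// User input arrives as strings, so convert before validating
console.log(isNaN(parseInt('2')));   // false: '2' is a usable index
console.log(isNaN(parseInt('abc'))); // true: not a number

// Beware: parseInt is lenient about trailing characters
console.log(parseInt('2abc')); // 2
```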

5) Delete Todo

We’re almost there! We now need to implement a way to delete a todo. To do this, we use the delete command followed by the index of the todo. Here is what the ‘delete’ case looks like:

case 'delete':
    let indexDelete = textAsArray[1];
    if(isNaN(parseInt(indexDelete))){
        console.log('You must enter the index of the todo you want to delete.');
    } else {
        deleteTodo(indexDelete);
    }
    break;

And here is the deleteTodo function:

function deleteTodo(index){
    let todos = getTodosFromFile();
    if(index > todos.length - 1 || index < 0){
        console.log('Index out of range');
        return;
    }
    todos.splice(index, 1);
    let updatedTodos = todos.join('|');
    fs.writeFileSync('todos.txt', updatedTodos);
    console.log('\nTodo Deleted!\n');
    displayTodos(todos);
}

Just like the update case, we retrieve the index and make sure it is a number. Then we call the deleteTodo function. We get up-to-date with the current todos and we make sure the index is not out of range. Then, a classic splice method on the todos array deletes the targeted todo. We join the array with our pipe ‘|’ character and write it to our file. Finally, we can display our todos.

6) Exit the program

Yeah! We can add, delete and update todos. We just need one more little thing: a way to exit the program. If you know your way around the terminal, you will do Control + C. But our users may not know this trick, so we added the quit command. To do this, we add this logic to the ‘quit\n’ case:

case 'quit\n':
    console.log('Good Bye!');
    process.exit();

As you might have guessed, process.exit() kills the current process.

And there it is. We have a fun command-line application. You can run it with node server.js in your terminal.
You can get the full code here :
Code on GitHub

I hope everything was clear. Feel free to ask questions and share!
Have a nice day!


This blog is focused on Node.js. So, it only seems logical to start things off by trying to explain what NodeJS is. I will try my best to break down what this platform is all about and what it allows us to do.

You may have heard about NodeJS as: “Javascript on the server”. But can we dig deeper than that? Sure we can!

The Node.js website welcomes you with this paragraph:

“Node.js® is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.”

Alright, that’s a mouthful, let’s try to break this down!

First, “is a JavaScript runtime”. Node.js runs JavaScript outside the browser (the runtime itself is built in C++ on top of V8). So the most popular language in the world also runs on this platform. This means developers can use the same language on the front-end AND the back-end.

Next, “uses an event-driven, non-blocking I/O model”.  What??

Node uses an event loop. This event loop handles dispatching events in your program. It handles requests and responses, and if there is nothing to do, the event loop sleeps, just waiting for something to happen.

The non-blocking I/O (input/output) model means that your program keeps running while input/output operations, like reading a file or waiting on the network, are processed in the background.

So, if Node receives several requests, the server is able to handle them asynchronously. They all get treated as soon as they are received. The server doesn’t wait for a previous request to be completed to start a new one.
Conventional servers use a blocking I/O model, meaning the execution of the program stops while the request is being processed. There are ways to process several requests at the same time under that model, but they demand more resources and can cause latency.
Example: Reading data from a file with Node.Js

const fs = require('fs');

//the event loop hands this read off; the program is not blocked
fs.readFile('/myfile.json', function(err, data){
    //when the response comes back, the callback is executed
});
//meanwhile, the program keeps running


Next, we have ‘lightweight and efficient’.

It’s pretty self-explanatory. I suggest you visit NodeJS’s GitHub page and realize how small the core is. And this is what I love about it: the freedom to do whatever you want with Node. Node doesn’t have an opinion; Node is just a platform, not a framework. It gives you a foundation to build on. That foundation is solid, but minimalist, so there are a lot of possibilities, and this is part of what makes Node fast.

You may ask: “That’s great, but if Node is minimalist, how can there be so many possibilities?”

You add what we call packages with a tool called NPM. NPM is JavaScript’s package manager. Basically, the JavaScript community builds packages, and they are accessible to everyone.

“Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.”

Just to give you an idea, there are more than 330 000 modules in the NPM library. I’m sure you can imagine the level of commitment from the Javascript community in this tool. Let alone all the ideas and possibilities that all those modules add to NodeJS.

Let me add a few more advantages about NodeJS:

  1. JSON (JavaScript Object Notation) is a popular way to format data, and is native to JavaScript.
  2. JavaScript is the language used in some NoSQL databases (MongoDB, CouchDB).
  3. NodeJS uses Google Chrome’s V8 engine, which stays up-to-date with the JavaScript language standards. Meaning, you don’t have to wait for browsers to catch up with the JavaScript you are writing.

I hope you have a better understanding of what Node.Js is. The next logical question would be: What can I actually do with this?

  1. NodeJS is designed to handle real-time applications, what we call DIRT (Data-Intensive Real-Time) applications. You can think of a chat room for example, or a multiplayer game. There are popular tools dedicated to this use case.
  2. NodeJS is also perfectly able to handle Web Applications. You may have heard of the Express framework.
  3. Command Line Applications
  4. APIs (Application Programming Interface)

I’m going to leave you with a snippet of code that creates an HTTP server with Node. You’ll see how little code you actually need.

const http = require('http');

http.createServer(function(request, response){
    response.writeHead(200, { 'Content-Type' : 'text/plain'});
    response.end('Hello World\n');
}).listen(3000);

console.log('Magic happening at http://localhost:3000');

First, we require the http module, which Node provides out of the box.
Then, we create a server. The callback is run every time the server receives a request. It takes two arguments, request and response.
We define the type of data that our response will send (here just text/plain). We then end the response by sending ‘Hello World’. Finally, we tell our server to listen on port 3000.
To start the server, go to your terminal, and type node <fileName>. You will see “Hello World” at http://localhost:3000 and the console.log message in your terminal.

That’s it!
I hope I have been clear enough in my explanations. Feel free to ask questions or correct me if I made some mistakes.
Feel free to share!

