Coding

Why we need a NodeJS application server or: writing boilerplate code considered harmful

[Update] A lot of commentators have taken issue with my use of the term “application server”, which made them think of a big monolithic architecture. Even though I do refer to Java, that’s not what I mean. It was the wrong choice of terminology – I am thinking of a modular, lightweight solution that saves you from implementing the most generic parts of the typical web application. For a better writeup with the same cause, see this post on the hood.ie blog.[/Update]

I love creating web applications, but I don’t do it for a living. I code because I like the challenge of it, but first of all: I want to solve specific problems. When I was writing my dissertation in Political Science, I collected thousands of bibliographic references, some of which I wanted to share with others. This was a time long before Zotero came into existence, and there was no decent software that let me do that the way I wanted. So I started to dabble in PHP, later in JavaScript libraries (in particular, qooxdoo), and out of this came, in several iterations, my pet project Bibliograph. Later, when I was coordinating the organization of a big conference, I had to manage a staff of 50 people working in shifts. There wasn’t any freely available software that was up to the task, so I wrote a qooxdoo-based, interactive staff scheduling web app. I won’t publish the code because it is a mess, but it worked well for the task at hand. Besides my ongoing urge to rewrite Bibliograph to make use of the latest web technologies, I have lots of other ideas for web applications to deal with the odd challenge that I come across occasionally.

There is, however, one big issue that holds me back from starting to work on any of those ideas: the lack of the right end-to-end stack, including a decent application server for NodeJS that would free me from having to worry about boilerplate code for things like authentication, user management, access control, etc.

Professional developers might argue that one should write this kind of code oneself, so that one knows the code inside out and has full control, and can then adapt it for each specific job. If you are not ready to write the boilerplate, they might argue, you should maybe not be in the business of writing applications in the first place.

I respectfully disagree. On the contrary, I think that it is wasteful, if not outright dangerous, for specialized, part-time application developers to implement their own backends from scratch. For Bibliograph, I developed a complete PHP solution. It was an interesting experience, from which I learned a lot. However, it proved impossible for me to maintain it beyond the time that I was actively working on the main application. Later, busy with the things I had to do for my real job, I couldn’t react quickly when bugs were discovered in the PHP version I used. Having published the backend code, with people actually using it, I couldn’t respond to questions they had or bugs they discovered in the code. Above all, I had invested countless hours in reinventing the wheel, with no real benefit.

I should be using the resources at my disposal efficiently, i.e. the very specific ideas and knowledge of particular technologies that solve specialized problems (such as bibliographic data management for the humanities). I shouldn’t spend my time worrying about how to store passwords securely, or how to synchronize data between the client and the server, or how exactly model data is persisted in what database. There are quite a few tasks that can be solved in very generic ways, in order to provide a platform on top of which specialized applications can be built. This issue also pertains to testing: I should be writing tests for the main business logic, and not spend valuable time on tests for boilerplate code.

I think such a platform would save thousands (or more) of hours of developers’ time, which could then be invested in solving application-specific problems in thorough and creative ways. A collaboratively developed platform would also make the applications built on top of it more secure and stable (because of collective wisdom, test coverage, and quick fixing of bugs), and it would generate enough interest to ensure its long-term maintenance.

There is a wealth of JavaScript frameworks on the client and on the server side, but, as far as I can see, nothing yet exists like the “application server” known from the Java world. Plenty of libraries exist that can be wired together, but this wiring itself is not trivial, as I learned while writing a tutorial on exactly this topic. The large number of libraries to choose from is both a blessing and a curse: the freedom to choose comes at a price. We need more and more time to research which choices should be made, and the multiplication of APIs and protocols adds to the expense. How neat would it be if we could agree on a meta-API for the most commonly needed features of a web application, with shims to plug in the different solutions that currently exist?

I see a number of features and characteristics for such an application server (most, if not all of them, provided by already existing modules):

  • It should not assume any particular rendering model on the client (DOM-templating vs. widget objects)
  • It should also not force a different programming model on developers (like Opa or Meteor do), but let you code in plain old asynchronous JavaScript. No to-JavaScript compilation (except optionally). No magic. Just the right tools.
  • It should have an integrated API for client and server.
  • It should provide a static HTTP server, REST routing, and bidirectional, realtime messaging and broadcasting (such as Socket.io).
  • It should offer an async startup/plugin/configuration system like Cloud9’s Architect.
  • It should provide an out-of-the-box system and API for user & group management, registration, access control, password storage/retrieval/update, etc., preferably with a set of built-in templates that can be used for managing the most generic configuration tasks, along with a pluggable system for third-party authentication providers.
  • It should also provide an integrated system of data modeling and persistence. I really do not care about database technology. I simply want to store, edit and retrieve my model data.
  • It could also have a toolset that would allow you to deploy your application instantly to a cloud provider such as Heroku or Nodejitsu.
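To make the plugin idea a bit more concrete, here is a hypothetical sketch of such a wiring mechanism, loosely modeled on Architect's setup(options, imports, register) convention. Everything in it (the loader, the plugin names, their APIs) is invented for illustration, and real plugin loading would of course be asynchronous:

```javascript
// a minimal, synchronous toy plugin loader in the spirit of Architect:
// each plugin is a setup function that receives the services registered
// by earlier plugins and registers its own
function loadPlugins(plugins) {
  var services = {};
  plugins.forEach(function (setup) {
    setup({}, services, function (err, provided) {
      if (err) throw err;
      for (var name in provided) services[name] = provided[name];
    });
  });
  return services;
}

// invented example plugin: a key-value store service
function storePlugin(options, imports, register) {
  var data = {};
  register(null, {
    store: {
      set: function (k, v) { data[k] = v; },
      get: function (k) { return data[k]; }
    }
  });
}

// invented example plugin: user management built on the store service
function usersPlugin(options, imports, register) {
  var store = imports.store;
  register(null, {
    users: {
      add: function (id) { store.set("user:" + id, { id: id }); },
      get: function (id) { return store.get("user:" + id); }
    }
  });
}

var services = loadPlugins([storePlugin, usersPlugin]);
```

An application author would then only write plugins like usersPlugin, against a stable meta-API, while the store behind it could be swapped out by a shim.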

The existence of such an application server, I would argue, would unleash a lot of currently unused creative energy and lead to the development of a lot of interesting special-purpose applications by people who like to code to solve specific problems, but who cannot afford to invest the time to develop full-fledged application backends. It would be great to see such a project take shape, and I am sure others like me would gladly contribute small bits and pieces. Or does it exist already and I have overlooked something?


Developing a complete client-server application with qooxdoo and NodeJS on Cloud9IDE. Part 4: Access Control [Updated]

This is an updated version of part 4 of my ongoing series on “Developing a complete client-server application with qooxdoo and NodeJS on Cloud9IDE“. In response to the original post, the author of node_acl has rewritten the library to support an in-memory store. We no longer need a different library with an API adapter and can use node_acl directly. This post also updates the client code to show how to use converter functions.

Today we’re going to tackle another important aspect of creating a web application: access control, also called authorization (although authorization is often used synonymously with authentication, the two are distinct). It is not enough to authenticate users; you also have to set up policies which grant them certain rights or limit access to certain resources or functionalities. Sometimes access control is independent of authentication: think of web applications which can be used without logging in. These applications still need to differentiate between individual users and require policies governing what anonymous guests are allowed to do. As usual, the code of this post is available on GitHub.

Identifying clients: Assigning user ids

Before we approach access control, we need to refine our authentication system a bit. The simple setup used in the previous post did react to authentication requests, but did not differentiate between the connected clients. Luckily, socket.io does most of the work for us. Replace lines 29 and following of users.js with the following code:

plugins/users/users.js

  // User management API

  var api = {
    // get userdata. If a property is given as second argument, return just this property
    // the last argument is the callback
    getUserData : function( userid, arg2, arg3  ) {
      var property = arg3 ? arg2 : null;
      var callback = arg3 ? arg3 : arg2;
      userstore.get(userid, function( err, data ){
        if ( data )
        {
          if ( property ) return callback( null, data[property] );
          return callback( null, data );
        }
        return callback( "A user with id '"+userid+"' doesn't exist");
      });
    },
    // password authentication
    authenticate : function(userid, password, callback){
      userstore.get(userid, function( err, data ){
        // check password
        if ( data && data.password == password )
        {
          return callback(null,userid);
        }
        // authentication failed
        return callback( "Invalid username or password" );
      });
    }
  };

  // support of sessions

  // setup socket events
  var io = imports.socket;
  io.on("connection", function(socket){

    // helper functions using the API

    function login( data, callback )
    {
      api.authenticate( data.username, data.password, function(err,userid){
        if (err) return callback(err);
        socket.set("userid", userid, function(){
          console.log("User %s has logged in.", userid);
          // return user data to the client via the callback
          api.getUserData(userid, callback);
        });
      });
    }

    function logout( userid, callback )
    {
      socket.set("userid", null, function(){
        console.log("User %s has logged out.", userid);
        return callback();
      } );
    }

    // wire helpers to events

    socket.on("authenticate",function(data, callback){
      socket.get("userid", function(err,currentUserId){
        if( currentUserId )
        {
          return logout( currentUserId, function(){
            login( data, callback );
          });
        }
        login( data, callback );
      });
    });
    socket.on("logout", logout );
  });

  // register plugin and provide plugin API
  register(null,{
      users : api
  });

Note that the user management “API” has no login or logout method: the API by itself has no way of knowing which client is currently connected. This information is only available to the socket.io session. We will therefore always need the “socket” object to know which user is attached to it.
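Stripped of socket.io, the underlying idea is simply per-connection state: a map from a connection to the user attached to it. A toy version (with connection ids standing in for socket objects; all names are made up for illustration):

```javascript
// toy per-connection session registry, keyed by a connection id
var sessions = {};

function attachUser(connectionId, userid) {
  sessions[connectionId] = userid;
}

function detachUser(connectionId) {
  delete sessions[connectionId];
}

function userForConnection(connectionId) {
  return sessions[connectionId] || null;
}
```

The socket.set("userid", ...) and socket.get("userid", ...) calls in the code above play exactly this role, with the socket object itself acting as the key.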

The ACL library

Unfortunately, as of now, not very many ACL libraries for Node exist. The one library that uses common terminology (users, roles, permissions, resources) and seems best suited to our requirements is node_acl (npm install acl), which is modeled on the Zend PHP Framework’s ACL library.
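To see how these four concepts fit together before diving into the plugin code, here is a toy in-memory version of the same data model (my own simplification for illustration, not node_acl's actual implementation): roles grant permissions on resources, and users have roles.

```javascript
var roleGrants = {};   // role -> resource -> { permission: true }
var userRoles = {};    // user -> [ roles ]

// grant a role a list of permissions on a resource
function allow(role, resource, permissions) {
  roleGrants[role] = roleGrants[role] || {};
  var granted = roleGrants[role][resource] = roleGrants[role][resource] || {};
  permissions.forEach(function (p) { granted[p] = true; });
}

// assign a role to a user
function addUserRoles(user, role) {
  (userRoles[user] = userRoles[user] || []).push(role);
}

// a user is allowed if any of their roles grants the permission
function isAllowed(user, resource, permission) {
  return (userRoles[user] || []).some(function (role) {
    var granted = roleGrants[role] && roleGrants[role][resource];
    return !!(granted && granted[permission]);
  });
}
```

node_acl exposes the same model through an asynchronous, backend-agnostic API, which is what we will use below.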

Since you have been following this tutorial, I won’t have to repeat how to set up the acl plugin. Here is the main plugin file:

plugins/acl/acl.js

// This plugin provides access control
module.exports = function setup(options, imports, register)
{
  var acl = require("acl");
  acl = new acl(new acl.memoryBackend());

  // socket events
  var io = imports.socket;
  io.on("connection", function(socket){
    socket.on("allowedPermissions",function(resource,callback){
      socket.get("userid", function(err,userId){
        // anonymous has no permissions
        if( ! userId ) return callback(null,{});
        // for registered users, get permissions
        console.log("Querying permissions for "+userId+" on resource "+resource);
        acl.allowedPermissions( userId, resource, function(err,data){
          if(err) return callback(err);
          var permissions = data[resource];
          // prepare permission data for client consumption
          data = {};
          permissions.forEach(function(p){
            data[p] = true;
          });
          console.log(data);
          callback(null,data);
        });
      });
    });
  });

  // error checking callback
  var cb = function(err){
    if(err) console.log(err);
  };
  // create some mock permissions
  // in resource "db", users can only read, admins can read, write and delete
  acl.allow([{
    roles: 'admin',
    allows: [{
      resources: 'db',
      permissions: ['write', 'delete','read']
    }]
  },{
    roles: 'user',
    allows: [{
      resources: 'db',
      permissions: 'read'
    }]
  }],cb);

  // assign user ids to roles
  acl.addUserRoles("john","user",cb);
  acl.addUserRoles("mary","admin",cb);

  // register plugin and provide plugin API
  register(null,{
      acl : acl
  });
}

As you can see, we use node_acl’s memory backend to store roles, resources and permissions. For more complex use cases, where changes to the ACL setup need to be persisted, the library also provides a Redis backend, and more backends can easily be added. The code should be self-explanatory.

ACL on the client

As is obvious from the above code, we expose only one method to the client (allowedPermissions), which takes a resource name as argument and returns permission data. The client only needs to concern itself with resources and permissions; it does not need to deal with users and roles. There is only one relevant user – the one that is currently logged in, and the server knows which one that is (through the socket object). There is no need to inform the client what roles the user has or which permissions are part of which role – that is all server-side data.

On the client, we need a resource controller object which connects the permissions with UI states. Let’s have a look at the code. We need to make substantial changes and additions to our Application.js. I’ll document only the important parts; the whole file is on GitHub:

testapp/source/class/testapp/Application.js

      // create the qx message bus singleton and give it a socket.io-like API
      // note that the argument passed to the subscriber is a qooxdoo event object
      var bus = qx.event.message.Bus.getInstance();
      bus.on = bus.subscribe;
      bus.emit = bus.dispatchByName;

      // set up socket.io
      var loc = document.location;
      var url = loc.protocol + "//" + loc.host + ":" + loc.port;
      var socket = io.connect(url + "/testapp");

      // Create a button
      var loginButton = new qx.ui.form.Button("Login", "testapp/test.png");
      var doc = this.getRoot();
      doc.add(loginButton, {left: 100, top: 50});

      // we'll need these vars in the closure
      var loginWindow, userid = null, username="";

      // Add an event listener for the button
      loginButton.addListener("execute", function(e)
      {
        // if someone is logged in, log out
        if (userid){
          return socket.emit("logout",userid, function(err){
            if(err) return alert("Something went wrong");
            loginButton.setLabel("Login");
            userid=null;
            bus.emit("updatePermissions");
          });
        }

        // create or reuse login window
        if ( ! loginWindow ){
          loginWindow = new dialog.Login({
            image : "dialog/logo.gif",
            text  : "Please log in",
            checkCredentials  : checkCredentials,
            callback : finalCallback
          });
        }
        loginWindow.show();
      },this);

      // this asynchronously checks the user credentials
      function checkCredentials( username, password, callback ) {
        socket.emit("authenticate", { username:username, password:password }, callback );
      }

      // this reacts on the result of the authentication
      function finalCallback(err, data){
        // error
        if (err) {
          return dialog.Dialog.error( err );
        }
        // Success!
        userid    = data.id;
        username  = data.name;
        loginButton.setLabel( "Logout " + username );
        dialog.Dialog.alert("Welcome, " + username + "!" );
        // now permissions have changed, update them
        bus.emit("updatePermissions");
      }

As you can see, we have modified the login code a bit, so that username and userid are stored, and we use the qooxdoo message bus to inform listeners when the permissions should be updated (Changing the bus API is not really necessary, but I like the short “on” and “emit” better than the method names of the bus API).

Now we create a resource controller. This should really go into its own class, but we’re only interested in the main functionality of this controller.

      //  ACL

      // create a resource controller that reacts on permission updates
      // we don't do any type checking to keep this short
      function createController( resourceName ){
        var targets =[], permissions={};
        var controller = {
          // bind a property of a widget to a permission
          add : function( widget, property, permission, converter ){
            targets.push( {
              widget: widget,
              permission: permission,
              property: property,
              converter: converter || function(p){return p;}
            });
            return controller; // make it chainable
          },
          // set the permissions
          setPermissions : function(perms){
            permissions = perms;
          },
          // enforce the given or stored permissions with the controlled
          // widgets
          enforce : function(){
            targets.forEach(function(t){
              // compute new property value by calling hook function with
              // permission value and original property value
              var propVal = t.widget.get(t.property);
              var computedPropVal = t.converter(permissions[t.permission]||false, propVal);
              t.widget.set(t.property, computedPropVal );
            });
            return controller;
          },
          // pull the permissions from the server
          pull : function(){
            socket.emit("allowedPermissions",resourceName,function(err,data){
              if(err) return alert(err);
              controller.setPermissions(data);
              controller.enforce();
            });
            return controller;
          },
          // start listening to events concerning permissions and pull data
          start : function() {
            bus.on("updatePermissions", controller.pull );
            socket.on("updatePermissions", controller.pull );
            socket.on("acl-update-"+resourceName, controller.enforce );
            // this will normally disable everything since no permissions are set
            controller.enforce();
            // get permissions from server
            controller.pull();
            return controller;
          }
        };
        return controller;
      }

      // create new resource controller over a fictional "db" resource
      var dbController = createController("db");

      // create buttons
      var readButton = new qx.ui.form.Button("Read");
      doc.add(readButton, {left: 100, top: 100});
      var writeButton = new qx.ui.form.Button("Write");
      doc.add(writeButton, {left: 150, top: 100});
      var deleteButton = new qx.ui.form.Button("Delete");
      doc.add(deleteButton, {left: 200, top: 100});

      // delete button is only enabled when the checkbox is checked
      // a change in state needs to trigger an update
      var confirmDeleteCB = new qx.ui.form.CheckBox("Enable Delete");
      confirmDeleteCB.addListener("changeValue", dbController.enforce);
      doc.add(confirmDeleteCB,{left:270, top:100});

      // configure ACL
      dbController
        .add(readButton, /*property name*/ "enabled",/*permission name*/ "read")
        .add(writeButton, "enabled", "write")
        .add(deleteButton, "enabled", "delete", function(p,v){return p && confirmDeleteCB.getValue()})
        .add(confirmDeleteCB, "enabled", "delete")
        .add(confirmDeleteCB, "value", "delete", function(p,v){return p? v:false})
        .start();

The resource controller is very simple, but already quite powerful. It binds widget property values to permission states and observes changes in these states. It can be notified by other parts of the client application and then pulls the newest permission data from the server, or the server can push state changes to the client, using a socket.io message (“acl-update-RESOURCENAME”).

Since not all properties are boolean (as the permission values are), a converter function can be used to transform a permission state into an appropriate property value. The converter also makes it possible to integrate additional logic that affects the UI: even though a user has a certain permission, the current state of the application might require that the permission not take effect. In our example, the “Delete” button is activated only if a checkbox is checked.

The converter function is called with two arguments: the permission state and the current value of the property. This allows us, as the two next-to-last lines in the code above show, to implement the following logic: if the “delete” permission is granted, enable the checkbox and preserve the current checkbox state (checked/unchecked); otherwise, disable and uncheck it.
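Written out as standalone functions (with the checkbox state passed in explicitly here, rather than closed over as in the code above), the two converters from the configuration are:

```javascript
// converter for the delete button's "enabled" property:
// the permission must be granted AND the confirmation checkbox checked
function deleteButtonConverter(permission, currentValue, confirmChecked) {
  return permission && confirmChecked;
}

// converter for the checkbox's "value" property:
// keep the checked state while the permission is granted, uncheck otherwise
function checkboxValueConverter(permission, currentValue) {
  return permission ? currentValue : false;
}
```

Because the converter receives the current property value, revoking and re-granting a permission does not have to destroy unrelated UI state, only the state the permission controls.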

If you build and start the application now (cd testapp; python ./generate.py --no-progress-indicator build; cd ..; node server.js build; then open http://&lt;projectname&gt;.&lt;username&gt;.c9.io), you’ll see three buttons below the login button. They are disabled when no user is logged in. If you log in with john/john, only the “read” button is enabled, as John is only a regular user. However, if you log in as Mary (with mary/mary), who has the “admin” role, all buttons except the “delete” button are enabled, as is the “confirm delete” checkbox. When you check the confirmation checkbox, the “delete” button is enabled as well. When you log out, all buttons are immediately disabled.

(Update: you can also access the application on nodejitsu).

Even though this is, again, very basic stuff and doesn’t do much, the underlying logic already covers a lot of use cases concerning the control of UI elements. I would be interested, however, in your views on the general setup. What level of complexity, necessary for production-grade applications, is still missing?


Deploying a qooxdoo+socket.io application from Cloud9IDE to a cloud server: Heroku, OpenShift, Nodejitsu

This is part 5 of my ongoing series on “Developing a complete client-server application with qooxdoo and NodeJS on Cloud9IDE“.

Before tackling the previously announced “Mighty Database Problem”, it seemed to make sense to cover another topic first: the possibility of migrating your Cloud9IDE apps directly to a deployment server in the cloud, where the restrictions of C9 (such as: no database server) do not apply, and where the application can really be used and tested by others. I approached the PaaS (“platform-as-a-service”) topic rather naively and cluelessly, using a trial-and-error approach – that’s why it took two failed attempts and quite some time to put together this post.

As part of the completely new development experience that cloud services offer (in comparison to the old model of doing everything on your own servers), C9 is cooperating with a couple of different cloud providers and offers one-click deployment from within the IDE, at the moment to Heroku and Windows Azure, but I assume that more providers are in the pipeline. Unfortunately, the Heroku deployment didn’t work (see below), so I tried my luck with a different solution, OpenShift (Cloud9 itself is based on OpenShift), which also didn’t work out, to finally arrive at Nodejitsu, which is probably the best tool for the given job. There are a couple of other offerings which I haven’t tested; if you have suggestions, please write a comment.

To make a long story short, the main problem is that Socket.io is not supported well by either Heroku or OpenShift, at least not in a straightforward way. Had I read the documentation of Nodejitsu earlier (which notes that they are the only service to fully support Socket.io), I would have saved myself some time. However, rather than discarding my notes on Heroku and OpenShift, I thought that it made sense to document the steps I took; maybe they will be useful later, or someone knows how to fix the problems. If you want to go directly to what worked, skip sections 1) and 2) and proceed right to 3).

The lesson from this episode is that, contrary to what I said in an earlier post, socket.io might not be the solution to all problems; in some cases, it might be the problem. Currently, it seems to be better to stick with REST when using “normal” cloud providers. If you have normal connect/express apps, they will all work with the services I tested.

1) Trying to deploy to Heroku.com: Killed by the slug (size)

Automatic deployment to Heroku as offered by C9’s interface doesn’t work for our application, for reasons that will become clear later. To set up Heroku deployment, you have to go through a few steps first, which I will describe in the following. For more details, have a look at this tutorial, which describes the complete process of deploying a C9-hosted node application to Heroku, including registration and setup.

The Heroku deployment requires that your app contains a valid package.json file that declares the dependencies.

package.json

{
  "name": "qxnodeapp",
  "version": "0.0.1",
  "dependencies": {
    "architect": "0.1.4",
    "async": "0.1.22",
    "connect": "2.4.2",
    "roles": "0.0.4",
    "socket.io": "0.8.7"
  },
  "engines": {
    "node": "0.6.x"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/cboulanger/qxnodeapp.git"
  }
}

Note that we don’t use the current socket.io version (as of 2012/08/19, 0.9.8), because newer versions introduce a dependency (hiredis) that doesn’t compile on Heroku.

Also, we need to tell Heroku how to start the node process:

Procfile

web: node server.js

Finally, when pushing our app to Heroku, we must make sure that we only push those files that are actually needed to run the application. We cannot include the complete qooxdoo SDK because it blows the limits imposed by Heroku (200MB).

.slugignore

qooxdoo/*

This means that we can only use the standalone version that has been produced by “python ./generate.py --no-progress-indicator build”. Neither the “source-hybrid” nor the “source” versions are supported by the setup described here – but the deployed app should be the built app in any case.

As of now, socket.io and Heroku are not an ideal match (as detailed here). I hope that will change in the near future because it imposes serious limitations on the use of socket.io. To make socket.io work, we have to add this configuration to plugins/socket/socketio.js, after line 21:

io.configure(function () {
  io.set("transports", ["xhr-polling"]);
  io.set("polling duration", 10);
});

Heroku compiles a complete image of the application called “slug” before it actually stores and publishes it. In this process, we get fairly descriptive error messages in case something goes wrong.

Because Heroku clones your C9 repository, it will not receive the files that are excluded by a .gitignore file. This applies to all the “build” files, which are, however, necessary to run the application. We therefore have to create a deployment branch which includes those files. Luckily, we can use the generator to automate this task. Add a “deploy” element to the “export” section and the following job to the “jobs” section of testapp/config.json:

    "deploy" : { "run": ["build","push_to_heroku"] },
    "push_to_heroku": {
      "shell": { "command" : [
        "git checkout -b deploy",
        "git add -f ./build",
        "git commit -m \"Adding build files\"",
        "git push -f heroku-target deploy:master",
        "git checkout master",
        "git branch -D deploy"
        ] }
    }

This creates a temporary “deploy” branch, commits the build files to it, pushes everything to Heroku, and finally deletes the branch. Note that if something goes wrong along the way, and the script terminates, you’ll have to checkout master and delete the deploy branch manually (git checkout master; git branch -D deploy).

Why it didn’t work

Unfortunately, our socket.io dependency is very heavy and creates a slug that exceeds the available slug size limit. That is where the attempt to use Heroku had to stop: the application wouldn’t even start, and I could not figure out how to make the slug any smaller. If you want to have a look yourself, here’s the state of the code at that point.

So I had to look for a different solution.

2) OpenShift: socket.io woes, wrongly blamed

Now to the next unsuccessful attempt: the setup almost worked, as you’ll see below…

There already is a detailed tutorial on the deployment of Cloud9-Apps to OpenShift, so I only need to add relevant details pertaining to the application that is developed in this series.

As explained in the mentioned tutorial, after setting up your OpenShift application, you need to add the OpenShift container’s git repo as a remote target, using the git URI that can be found in the settings page of your application (you need to click on the name of the application in your dashboard).

git remote add openshift -m master ssh://XXXX@YYYY-ZZZZ.rhcloud.com/~/git/YYYY.git/

Since the next step involves overwriting your local files, you must make a backup of server.js now (or revert the changes using git afterwards – I am not very good at git yet, so please tell me how one would do that).

git pull -s recursive -X theirs openshift master

This pulls in the preinstalled files of the OpenShift NodeJS appliance. As noted, it overwrites our server.js file, so we need to restore it now from our backup and recommit, or use git to get the old version back. The pull also seems to commit the files contained in node_modules, which is not what we want, so we have to throw them out of the git index:

git rm --cached -r node_modules
git commit -m "Removing files from index"

I am sure there is a better way of doing this, so if your git knowledge is better than mine (which is not much), use the comment section to correct me!
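For the record, here is one way of getting the pre-pull version of a file back that should work, since git records the pre-merge HEAD in ORIG_HEAD after a pull that moves the branch (I haven’t battle-tested this, corrections welcome):

```shell
# restore a single file from the pre-merge state recorded in ORIG_HEAD;
# run this inside the repository, right after the pull
restore_from_orig_head() {
  git checkout ORIG_HEAD -- "$1"
  git commit -m "Restore $1 from pre-pull state"
}

# usage: restore_from_orig_head server.js
```

With this, the manual backup of server.js mentioned above becomes unnecessary.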

Previously, it was necessary to add and commit a file deplist.txt with the npm dependencies that OpenShift should install automatically. The current README.openshift file marks deplist.txt as “deprecated” and tells us to use the normal package.json file for listing the dependencies, so we’ll use that file.

package.json

{
  "name": "qxnodeapp",
  "version": "0.0.1",
  "description": "A demo app combining qooxdoo & nodejs",
  "engines": {
    "node": ">= 0.6.0",
    "npm": ">= 1.0.0"
  },
  "dependencies": {
    "architect": "0.1.4",
    "async": "0.1.22",
    "connect": "2.4.2",
    "roles": "0.0.4",
    "socket.io": "0.8.7"
  },
  "private": true,
  "main": "server.js"
}

Now we need to adapt our node server settings to the OpenShift container. Create a new architect configuration file configs/deploy.js (and add/commit it):

configs/deploy.js

var path = require("path");
module.exports = [
  { packagePath: "../plugins/http", root : path.resolve("testapp/build"),
    host : process.env.OPENSHIFT_INTERNAL_IP,
    port : process.env.OPENSHIFT_INTERNAL_PORT
  },
  { packagePath: "../plugins/socket", namespace : "/testapp", loglevel : 0 },
  { packagePath: "../plugins/store" },
  { packagePath: "../plugins/users" },
  { packagePath: "../plugins/acl" }
];

We have to update the server code to pick up this configuration by default (the change is in line 4):

server.js

var path = require('path');
var architect = require("architect");

var configName = process.argv[2] || "deploy";
var configPath = path.resolve("./configs/", configName);
var config     = architect.loadConfig(configPath);

architect.createApp(config, function (err, app) {
    if (err) {
        console.error("While starting the '%s' setup:", configName);
        throw err;
    }
    console.log("Started '%s'!", configName);
});

Since the default configuration is now “deploy”, you can no longer simply press the “run” button in the C9 IDE with the server.js file open to run the source version. You need to type “node server.js source” explicitly to start the server in the C9 virtual machine. Creating a custom job in the “run & debug” section doesn’t work yet, because a bug in C9 prevents passing command line arguments to scripts (this will be fixed eventually).

As in the case of Heroku, we want to transfer only the “build” files to the OpenShift container, not the full toolkit. If you haven’t done so already, add a “deploy” element to the “export” section and the following job to the “jobs” section of testapp/config.json:

    "deploy" : { "run": ["build","push_to_openshift"] },
    "push_to_openshift": {
      "shell": { "command" : [
        "git checkout -b deploy",
        "git add -f build",
        "git commit -m \"Ready for deployment\"",
        "git push -f openshift deploy:master",
        "git checkout master",
        "git branch -D deploy"
        ] }
    }

Unlike Heroku, OpenShift doesn’t seem to provide an easy way of skipping the submodules (qooxdoo and the dialog contrib), which take up unnecessary space. I tried to remove the submodules in the deploy branch using these directions, but that didn’t work. Any ideas?
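For what it’s worth, the generic recipe for removing a submodule looks roughly like this (a sketch, untested in this particular setup; the submodule path is an example from our project):

```shell
# On the deploy branch; the path is an example
git rm --cached qooxdoo-contrib/Dialog
# then delete the corresponding entry from .gitmodules and commit:
git add .gitmodules
git commit -m "Remove submodule from deploy branch"
```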

Our C9 repo now contains all the files necessary for committing & pushing the application to OpenShift, which we’ll do now, using the generator and our deploy job:

cd testapp
python ./generate.py -I deploy

Note that I used the -I command line switch here, which is available only in the latest master branch of qooxdoo. If you use an older version, use the longer --no-progress-indicator switch.

Pushing the changes to OpenShift triggers a rebuild of the app in the OpenShift container. The app should now start.

Why it didn’t work

But that wasn’t the (happy) end of the story. However hard I tried, the socket.io connection wasn’t established. All I got from the /socket.io server was “Welcome to socket.io”. So I gave up and tried another service: Nodejitsu, which is described in the next section. In the process, it dawned on me that the problem might have had a completely different cause (a bug in socket.io v0.8.7), so it was probably not OpenShift’s fault. But by then it was already too late, and I had moved on.

If you’re interested, the code of the OpenShift setup is saved on GitHub.

3. Nodejitsu: so easy it should probably be declared illegal

After trying Heroku and OpenShift, which both demanded a relatively complicated setup to get my app (almost) running, Nodejitsu held a couple of nice surprises: it turned out to be almost unbelievably easy to deploy my application.

There is no official documentation yet stating that deploying from Cloud9IDE to Nodejitsu is even possible. I found this obscure gist, which contained the information I needed.

If you work on a free plan, use

npm install jitsu@0.7.x -g
mv ../lib/node_modules/jitsu node_modules

from the C9 command line. If you have a paid subscription,

npm install jitsu@0.7.x -g

will suffice. I don’t know why this particular version (0.7) is recommended – I even got a notice that it is deprecated – but there must be a reason, and it worked. I haven’t tried to update jitsu within C9 yet.

Upon registering an account at nodejitsu.com, you will receive an e-mail with a confirmation code.

jitsu users confirm USERNAME CONFIRMATION-CODE

You’ll be asked to provide and confirm a password – you can respond to the command line prompts using the C9 command line with no problem.

Use package.json and server.js from the Heroku section and the following configs/deploy.js:

var path = require("path");
module.exports = [
  { packagePath: "../plugins/http",
    root : path.resolve("testapp/build"),
    host : "qxnodeapp.nodejitsu.com",
    port : 8080
  },
  { packagePath: "../plugins/socket", namespace : "/testapp", loglevel : 2 },
  { packagePath: "../plugins/store" },
  { packagePath: "../plugins/users" },
  { packagePath: "../plugins/acl" }
];

Port 8080 is important; otherwise, socket.io doesn’t work.

Important: we’ll have to adapt line 61 of config.json to read

"uri": "/socket.io/socket.io.js"

(“socket.io.js” instead of “socket.io.min.js”), because there is a bug in socket.io 0.8.7 (which we’re using here) that prevents the minified version from being served. This was the reason why my OpenShift setup didn’t work.

Then the only thing that is left to do is to simply:

cd testapp
python ./generate.py -I build
cd ..
jitsu deploy

The utility will analyse your code, package it, and set up the app on one of the Nodejitsu servers. You’ll be asked to provide the namespace for the app, in our case “qxnodeapp”. Each time you “jitsu deploy” your app, a new “snapshot” will be created. After some time, jitsu will tell you that the application has (re-)started at a specific URL. Our application should be running at http://qxnodeapp.nodejitsu.com (try to log in with john/john or mary/mary).

There is also a developer dashboard which can be accessed at https://develop.nodejitsu.com, where you can start, restart and stop your application, manage your snapshots, and look at the log files, which is particularly useful.

So at the end of the day, cloud deployment of the app developed here was successful. As usual, you find the complete source code on GitHub. Next time, I will really be looking at databases!

Coding

Developing a complete client-server application with qooxdoo and NodeJS on Cloud9IDE. Part 4: Access Control

There is an updated version of this article available here

This post has taken me considerably longer to write than the previous ones, not only because I was busy, but mainly because library support for today’s topic is much weaker than for application architecture, data transport or authentication. I had to experiment quite a bit with the existing libraries to puzzle together a solution. But maybe I overlooked something – please let me know in the comments or @herr_panyasan on twitter.

Today we’re going to tackle another important aspect of creating a web application: access control, also called authorization (although “authorization” is often used synonymously with “authentication”). It is not enough to authenticate users; you also have to set up policies which grant them certain rights or limit access to certain resources or functionalities. Sometimes access control is independent of authentication: think of web applications which can be used without logging in. These applications still need to differentiate between individual users and require policies governing what anonymous guests are allowed to do. As usual, the code of this post is available at GitHub.

Identifying clients: Assigning user ids

Before we approach access control, we need to refine our authentication system a bit. The simple setup used in the last post reacted to authentication requests, but did not differentiate between the connected clients. Luckily, socket.io does most of the work for us. Replace lines 29 and following of users.js with the following code:

plugins/users/users.js [GitHub]

  // User management API

  var api = {
    // get userdata. If a property is given as second argument, return just this property
    // the last argument is the callback
    getUserData : function( userid, arg2, arg3  ) {
      var property = arg3 ? arg2 : null;
      var callback = arg3 ? arg3 : arg2;
      userstore.get(userid, function( err, data ){
        if ( data )
        {
          if ( property ) return callback( null, data[property] );
          return callback( null, data );
        }
        return callback( "A user with id '"+userid+"' doesn't exist");
      });
    },
    // password authentication
    authenticate : function(userid, password, callback){
      userstore.get(userid, function( err, data ){
        // check password
        if ( data && data.password == password )
        {
          return callback(null,userid);
        }
        // authentication failed
        return callback( "Invalid username or password" );
      });
    }
  };

  // support of sessions

  // setup socket events
  var io = imports.socket;
  io.on("connection", function(socket){

    // helper functions using the API

    function login( data, callback )
    {
      api.authenticate( data.username, data.password, function(err,userid){
        if (err) return callback(err);
        socket.set("userid", userid, function(){
          console.log("User %s has logged in.", userid);
          // return user data to the client via the callback
          api.getUserData(userid, callback);
        });
      });
    }

    function logout( userid, callback )
    {
      socket.set("userid", null, function(){
        console.log("User %s has logged out.", userid);
        return callback();
      } );
    }

    // wire helpers to events

    socket.on("authenticate",function(data, callback){
      socket.get("userid", function(err,currentUserId){
        if( currentUserId )
        {
          return logout( currentUserId, function(){
            login( data, callback );
          });
        }
        login( data, callback );
      });
    });
    socket.on("logout", logout );
  });

  // register plugin and provide plugin API
  register(null,{
      users : api
  });

Note that the user management “API” has no login or logout method, because it has no way of knowing which client is currently connected. This information is only available to the socket.io session. We will therefore always need the “socket” object to know which user is attached to it.
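To illustrate the point, here is a sketch of a small helper (hypothetical, not part of the plugin) that uses the socket object to guard an event so its handler only runs for logged-in users:

```javascript
// Sketch: only run a socket event handler if a user is logged in.
// socket.get()/socket.set() are the per-session storage methods of
// socket.io 0.8/0.9, as used in users.js above.
function requireLogin(socket, event, handler) {
  socket.on(event, function (data, callback) {
    socket.get("userid", function (err, userid) {
      if (err || !userid) return callback("Not logged in");
      // hand the verified userid to the actual handler
      handler(userid, data, callback);
    });
  });
}

// usage (with a hypothetical "save" event):
// requireLogin(socket, "save", function (userid, data, callback) { /* ... */ });
```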

The ACL library

Unfortunately, as of now, not very many ACL libraries for node exist, and they all have different drawbacks or idiosyncrasies. The one library that uses common terminology (Users, Roles, Permissions, Resources) in its API is node-acl. The problem is not only that the last commit was more than six months ago, but above all that it relies on a Redis database, which cannot be installed on Cloud9. I tried to use nedis, a javascript-only redis server, but that unfortunately didn’t work (and not only because it wasn’t compatible with node 0.6). So I ended up using node-roles (npm install roles), which is a simple in-memory ACL library good enough for our purposes. Unfortunately, however, it uses an uncommon terminology: resources are called “apps”, permissions “roles”, and roles “profiles”. In addition, it doesn’t support the mapping of users to roles. So I decided to wrap this library’s API in the API of node-acl. This is not ideal, but it provided the most pragmatic solution.

Since you have been following this tutorial, I won’t have to repeat how to set up the new “acl” plugin (GitHub), but instead proceed right to the plugin file:

plugins/acl/acl.js [GitHub]

// This plugin provides access control
module.exports = function setup(options, imports, register)
{
  var roles = require("roles");

  // caches
  var userProfiles = {}; // a map of user ids and profiles ("roles")
  var apps = {}; // these are really "resources"
  var profiles = {}; // these are really "roles"
  var appRoles = {}; // a map to keep track of already added roles

  function getApp( appName )
  {
    if ( ! apps[appName] ){
      apps[appName] = roles.addApplication(appName);
    }
    return apps[appName];
  }

  function getProfile( profileName )
  {
    if ( ! profiles[profileName] ){
      profiles[profileName]  = roles.addProfile(profileName);
    }
    return profiles[profileName] ;
  }

  function arrayfy( value )
  {
    return Array.isArray( value ) ? value : [value];
  }

  // allows access to resources
  function allow( roleName, resourceName, permissions, callback )
  {
    var app = getApp(resourceName);
    var profile = getProfile(roleName);
    arrayfy(permissions).forEach(function(permission){
      appRoles[resourceName] = appRoles[resourceName] || [];
      if( appRoles[resourceName].indexOf(permission) ===-1 ){
        app.addRoles(permission);
        appRoles[resourceName].push(permission);
      }
      profile.addRoles(resourceName + "." + permission);
    });
    if( callback ) callback();
  }

  // assign a role to a userid
  function addUserRoles( userId, roles, callback )
  {
    if ( !userProfiles[userId] ){
      userProfiles[userId] = [];
    }
    userProfiles[userId] = userProfiles[userId].concat( arrayfy(roles) );
    if( callback ) callback();
  }

  // testing access
  function isAllowed( userId, resource, permissions, callback )
  {
    var userRoles = userProfiles[userId];
    if( userRoles === undefined ){
      var error = new Error("User '" + userId + "' has no profile.");
      if ( callback ) return callback(error);
      throw error;
    }
    var permissions = arrayfy(permissions);
    var isAllowed = false;
    for( var i=0; i<userRoles.length; i++){
      for( var j=0; j< permissions.length; j++){
        if ( ! getProfile( userRoles[i] ).hasRoles(resource + "." + permissions[j] ) ) {
          isAllowed= false; break;
        } else{
          isAllowed = true;
        }
      }
      if( isAllowed ) break;
    }
    if ( callback ) callback(isAllowed);
    return isAllowed; // synchronous shortcut
  }

  // return all permissions of a user connected to one or
  // more resources
  function allowedPermissions( userId, resources, callback ) {
    var p, permissions = {};
    arrayfy(resources).forEach(function(resource){
      appRoles[resource].forEach(function(permission){
        // we're cheating here, using a synchronous call because we can.
        // sessions without user ids (not logged in) have no permissions
        p = userId ? isAllowed( userId, resource, permission ): false;
        if ( ! permissions[resource] ) permissions[resource] = {};
        permissions[resource][permission] = p;
      });
    });
    callback(null, permissions);
  }

  // API. Only selected methods are actually implemented
  var acl = {
    allow : allow,
    removeAllow : null,
    isAllowed : isAllowed,
    addUserRoles : addUserRoles,
    removeUserRoles : null,
    userRoles : null,
    addRoleParents: null,
    removeRole: null,
    removeResource: null,
    allowedPermissions : allowedPermissions,
    areAnyRolesAllowed : null,
    whatResources : null
  };

  // socket events
  var io = imports.socket;
  io.on("connection", function(socket){
    socket.on("allowedPermissions",function(resources,callback){
      socket.get("userid", function(err,userId){
        allowedPermissions( userId, resources, callback );
      });
    });
  });

  // create some mock data
  acl.allow("admin","db",["read","write","delete"]);
  acl.allow("user","db","read");
  acl.addUserRoles("john","user");
  acl.addUserRoles("mary","admin");

  // register plugin and provide plugin API
  register(null,{
      acl : acl
  });
}

This is an example of a plugin that “translates” one API into another. The implementation details aren’t really relevant; for our purposes, the only thing that matters is the API. For the moment, we only need three methods, but I have added stubs for those which would need to be implemented in environments where the original, redis-dependent library cannot be used.

ACL on the client

As is obvious from the above code, we expose only one method to the client (allowedPermissions), which takes a resource name (or several) as argument and returns permission data. The client only needs to concern itself with resources and permissions; it does not need to deal with users and roles. There is only one relevant user – the one that is currently logged in – and the server knows which one it is (through the socket object). There is no need to inform the client what roles the user has or which permission is part of which role – that is all server-side data.
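For example, with the mock data defined at the bottom of acl.js, the data delivered to the client for the “db” resource while John is logged in would look like this:

```javascript
// Permission map for resource "db" as returned by "allowedPermissions"
// when John (role "user", which only allows "read") is logged in
var permissions = {
  db: { read: true, write: false, "delete": false }
};
```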

On the client, we need a resource controller object, which connects the permissions with UI states. Let’s have a look at the code. We need to make substantial changes and additions to our Application.js. I’ll document only the important parts; the whole file is on GitHub:

testapp/source/class/testapp/Application.js

      // create the qx message bus singleton and give it a socket.io-like API
      // note that the argument passed to the subscriber is a qooxdoo event object
      var bus = qx.event.message.Bus.getInstance();
      bus.on = bus.subscribe;
      bus.emit = bus.dispatchByName;

      // set up socket.io
      var loc = document.location;
      var url = loc.protocol + "//" + loc.host;
      var socket = io.connect(url + "/testapp");

      // Create a button
      var loginButton = new qx.ui.form.Button("Login", "testapp/test.png");
      var doc = this.getRoot();
      doc.add(loginButton, {left: 100, top: 50});

      // we'll need these vars in the closure
      var loginWindow, userid = null, username="";

      // Add an event listener for the button
      loginButton.addListener("execute", function(e)
      {
        // if someone is logged in, log out
        if (userid){
          return socket.emit("logout",userid, function(err){
            if(err) return alert("Something went wrong");
            loginButton.setLabel("Login");
            userid=null;
            bus.emit("updatePermissions");
          });
        }

        // create or reuse login window
        if ( ! loginWindow ){
          loginWindow = new dialog.Login({
            image : "dialog/logo.gif",
            text  : "Please log in",
            checkCredentials  : checkCredentials,
            callback : finalCallback
          });
        }
        loginWindow.show();
      },this);

      // this asyncronously checks the user credentials
      function checkCredentials( username, password, callback ) {
        socket.emit("authenticate", { username:username, password:password }, callback );
      }

      // this reacts on the result of the authentication
      function finalCallback(err, data){
        // error
        if (err) {
          return dialog.Dialog.error( err );
        }
        // Success!
        userid    = data.id;
        username  = data.name;
        loginButton.setLabel( "Logout " + username );
        dialog.Dialog.alert("Welcome, " + username + "!" );
        // now permissions have changed, update them
        bus.emit("updatePermissions");
      }

As you can see, we have modified the login code a bit so that username and userid are stored, and we use the qooxdoo message bus to inform listeners when the permissions should be updated. (Changing the bus API is not really necessary, but I like the short “on” and “emit” better than the bus’s own method names.)

Now we create a resource controller. For simplicity, we put the code into Application.js, but this should really go into its own class.

      // a resource controller that reacts on permission updates
      // we don't do any type checking to keep this short
      function resourceController( resourceName ){
        var targets =[], permissions={};
        var self = {
          // bind a property of a widget to a permission
          add : function( widget, property, permission, hook ){
            targets.push( {
              widget: widget,
              permission: permission,
              property: property,
              hook: hook || function(v){return v;}
            });
            return self; // make it chainable
          },
          // enforce the given or stored permissions with the controlled
          // widgets
          enforce : function(perms){
            if ( perms ) permissions = perms;
            targets.forEach(function(t){
              var value = t.hook(permissions[t.permission]||false);
              t.widget.set(t.property, value );
            });
          },
          // pull the permissions from the server
          pull : function(){
            socket.emit("allowedPermissions",resourceName,function(err,data){
              if(err) return alert(err);
              self.enforce(data[resourceName]);
            });
          },
          // start listening to events concerning permissions and pull data
          start : function() {
            bus.on("updatePermissions", self.pull );
            socket.on("updatePermissions", self.pull );
            socket.on("acl-update-"+resourceName, self.enforce );
            // this will normally disable everything since no permissions are set
            self.enforce();
            // get permissions from server
            self.pull();
          }
        };
        return self;
      }

      // create buttons
      var readButton = new qx.ui.form.Button("Read");
      doc.add(readButton, {left: 100, top: 100});
      var writeButton = new qx.ui.form.Button("Write");
      doc.add(writeButton, {left: 150, top: 100});
      var deleteButton = new qx.ui.form.Button("Delete");
      doc.add(deleteButton, {left: 200, top: 100});

      // configure ACL
      resourceController("db")
        .add(readButton,  "enabled", "read")
        .add(writeButton, "enabled", "write")
        .add(deleteButton, "enabled", "delete")
        .start();

The resource controller is very simple, but already quite powerful. It binds widget property values to permission states and observes changes in these states. It can be notified by other parts of the client application and then pulls the newest permission data from the server, or the server can push state changes to the client, using a socket.io message (“acl-update-RESOURCENAME”).
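Server-side, such a push could look like this (a sketch; the helper name is made up, and “io” is the namespace object provided by our socket plugin):

```javascript
// Sketch: broadcast new permissions for a resource to every connected
// client; the client-side resource controller listens for this event.
function pushPermissions(io, resourceName, permissions) {
  io.emit("acl-update-" + resourceName, permissions);
}

// e.g. after an administrator changes the ACL:
// pushPermissions(imports.socket, "db", { read: true, write: false, "delete": false });
```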

Since not all properties are boolean (as the permission values are), a hook function can be used to transform a permission state into an appropriate property value. This hook also allows integrating additional logic that affects the UI: even though a user has a certain permission, the current state of the application might require that the corresponding action be unavailable. Think of a “Delete” button that can only be pressed when a record is selected, even if the user has the “delete” permission. (There is no example for the hook function yet; I might add one in a later update. It receives the permission value as argument and needs to return the corresponding property value.)
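As a quick sketch of what such a hook might look like (recordIsSelected is hypothetical application state, not part of the code above):

```javascript
// Sketch: combine the "delete" permission with hypothetical selection
// state before enabling the button
var recordIsSelected = false; // assumed app state, updated elsewhere

function deleteHook(permission) {
  // receives the permission value, returns the property value
  return permission && recordIsSelected;
}

// wiring it up with the resource controller defined above:
// resourceController("db").add(deleteButton, "enabled", "delete", deleteHook).start();
```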

If you start the application now, you’ll see three buttons below the login button. They are disabled when no user is logged in. If you log in with john/john, only the “read” button is enabled, as John is only a regular user. However, if you log in as Mary (with mary/mary), all buttons are enabled, because Mary has the “admin” role (see the bottom of acl.js). When you log out, all buttons are immediately disabled again.

Even though this is, again, very basic stuff and doesn’t do much, the underlying logic already covers a lot of use cases concerning the control of UI elements. I would be interested, however, in your views on the general setup. What is still missing that would be necessary for production-grade applications?

The next post will deal with data persistence and thus with database support. Stay tuned, and as always, your comments are welcome!

Coding

Developing a complete client-server application with qooxdoo and NodeJS on Cloud9IDE. Part 3: Authentication

In my last post, I announced that today’s episode will deal with authentication (user management) and authorization (rights management). That turned out to be a bit too much, so I’ll restrict myself to authentication. We’ll also need some sort of data storage in which to save user data, so we’ll need to set this up first.

To make things easier to follow, I have put the complete code of the application on GitHub, so you can browse and have a look at the code if you get confused.

An extensible key-value store

For the purposes of this tutorial and the ones to follow, we don’t need a powerful database. A simple in-memory key-value store is quite sufficient for our needs. At the same time, we want to be able to exchange this database for a powerful one later without having to change a lot of code.

Here, again, the virtues of Architect come into play. We’ll create a plugin with a minimalistic API that can be extended and wired to a “real” database once it is needed. All we need for the moment is a getter and a setter method. Any database backend that we might end up choosing will work asynchronously, so the API will need to reflect this. That is why we also need a library to deal with asynchronous function calls. There are many to choose from; we’ll take async.js (npm install async), since it seems quite comprehensive and is used in a lot of other projects.
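In case the pattern is new to you, here is a minimal sketch of the error-first callback convention that async.js and our store API follow (the function and values are made up):

```javascript
// Sketch: Node's error-first callback convention.
// The callback always receives (err, result); err is null on success.
function get(id, callback) {
  process.nextTick(function () {
    if (id === undefined) return callback(new Error("no id given"));
    callback(null, "value-for-" + id);
  });
}

get("john", function (err, value) {
  if (err) return console.error(err);
  console.log(value); // "value-for-john"
});
```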

You’ll already know by now where to put the following code:

configs/build.js & configs/source.js

...
  { packagePath: "../plugins/store" },
  { packagePath: "../plugins/users" }
...

plugins/store/package.json

{
    "name": "store",
    "version": "0.0.1",
    "main": "store.js",
    "private": true,
    "plugin": {
        "provides": ["store"]
    }
}

plugins/store/store.js

// This plugin provides a very simple in-memory key-value store
// with an asynchronous API
module.exports = function setup(options, imports, register)
{
  // the Store object
  function Store()
  {
    // the data
    var data = {};

    // the number of key-value records
    var length = 0;

    // the exported API: get, set, length
    return {
      get : function( id, callback )
      {
        callback( null, data[id] );
      },
      set : function( id, value, callback )
      {
        if ( typeof data[id] === "undefined" ) length++;
        data[id] = value;
        callback( null );
      },
      length : function( callback )
      {
        callback( null, length );
      }
    };
  }

  // register plugin
  register(null, {
    store: {
      createStore: function() {
        return new Store();
      }
    }
  });
}

The user management plugin

plugins/users/package.json

{
    "name": "users",
    "version": "0.0.1",
    "main": "users.js",
    "private": true,
    "plugin": {
        "consumes": ["store","socket"],
        "provides": ["users"]
    }
}

plugins/users/users.js

// This plugin provides user authentication
module.exports = function setup(options, imports, register)
{
    // create new store for users
    var userstore = imports.store.createStore();

    // create some sample user data, this will be removed later
    // we could of course keep the data in a simple array
    var async = require('async');
    var userdata = [
      { id: "john", name : "John Doe", password : "john" },
      { id: "mary", name : "Mary Poppins", password : "mary" },
      { id: "harry", name : "Harry Potter", password : "harry" }
    ];
    async.forEach(userdata,
      // iterator
      function(item, callback){
        userstore.set( item.id, item, callback);
      },
      // final callback
      function(){
        userstore.length(function(err,length){
          console.log("Store now has %s entries.", length);
        });
      }
    );

    // API
    var api = {
      // very simple authentication
      authenticate : function(userid, password, callback){
        userstore.get(userid, function( err, data ){
          // user does not exist
          // you wouldn't usually reveal this
          if( ! data )
          {
            return callback("Unknown user");
          }
          // check password
          if ( data.password == password )
          {
            return callback( null, data.name );
          }
          // authentication failed
          return callback( "Invalid Password" );
        });
      }
    };

    // Listen for authenticate event and return result of authentication to browser
    var io = imports.socket;
    io.on("connection", function(socket){
      socket.on("authenticate",function(data, callback){
        api.authenticate(data.username, data.password, callback);
      });
    });

    // register plugin and provide plugin API
    register(null,{
        users : api
    });
}

If you run server.js, you should see the console message “Store now has 3 entries.”, so we’re good. We’ll use this mock data for username-password authentication. Stop the server; we’ll restart it later.

Creating a login widget on the client

On the client, we need a login widget. To save us some work, we can use the one included in the “Dialog” contribution, which also gives us other useful dialog widgets. At the moment, qooxdoo contributions (contribs) are hosted on SourceForge and can be included automatically by the generator using a special syntax. In preparation for a new system, which will allow maintaining contrib code outside of SourceForge, the code of the Dialog contrib has been moved to GitHub and can be pulled (or downloaded) from there. Create a new folder “qooxdoo-contrib” in the top-level directory and get the code with git:

mkdir qooxdoo-contrib
git submodule add https://github.com/cboulanger/qx-contrib-Dialog qooxdoo-contrib/Dialog

Then, tell the generator to include the contrib code by adding a library section to /testapp/config.json:

  "jobs": {

    "libraries": {
      "library": [{
        "manifest": "../qooxdoo-contrib/Dialog/Manifest.json"
      }]
    },
...

Now, replace the complete “main” function code in testapp/source/class/testapp/Application.js with this:

    main : function()
    {
      // Call super class
      this.base(arguments);

      // Enable logging in debug variant
      if (qx.core.Environment.get("qx.debug"))
      {
        // support native logging capabilities, e.g. Firebug for Firefox
        qx.log.appender.Native;
        // support additional cross-browser console. Press F7 to toggle visibility
        qx.log.appender.Console;
      }

      // set up socket.io
      var loc = document.location;
      var url = loc.protocol + "//" + loc.host;
      var socket = io.connect(url + "/testapp");

      // Create a button
      var loginButton = new qx.ui.form.Button("Login", "testapp/test.png");
      var doc = this.getRoot();
      doc.add(loginButton, {left: 100, top: 50});

      // Add an event listener for the button
      var loginWindow, loginStatus = false;
      loginButton.addListener("execute", function(e)
      {
        // if someone is logged in, log out
        if (loginStatus){
          loginButton.setLabel("Login");
          loginStatus = false;
          return;
        }

        // create or reuse login window
        if ( ! loginWindow ){
          loginWindow = new dialog.Login({
            image : "dialog/logo.gif",
            text  : "Please log in",
            checkCredentials  : checkCredentials,
            callback : finalCallback
          });
        }
        loginWindow.show();
      },this);

      // this asyncronously checks the user credentials
      function checkCredentials( username, password, callback ) {
        socket.emit("authenticate", { username:username, password:password }, callback );
      }

      // this reacts on the result of the authentication
      function finalCallback(err, data){
        // error
        if (err) {
          return dialog.Dialog.error( err );
        }
        // Success!
        loginStatus = true;
        loginButton.setLabel( "Logout " + data );
        dialog.Dialog.alert("Welcome, " + data + "!");
      }
    }

Now run the generator:

cd testapp
python ./generate.py --no-progress-indicator source

Now run server.js and open the source version of the application from http://x.y.c9.io/testapp/source/index.html. When the app has finished loading, you should see a “Login” button. After you press it, the login widget should appear and you can log in with john/john or mary/mary. The killer feature of this immensely useful application is that you can even log out again!

Ok, this doesn’t look like very much yet, but we’ve put in place a basic element of an application that can gradually be improved by adding more sophisticated functionality. One could, for example, add authentication through third-party authentication providers like Google, Facebook or Mozilla by using node libraries such as Passport or everyauth. We need to continue though, with another very important element of an application: access control or authorization. This will be the topic of the next post.

Again, if you want to see the complete source code, head over to GitHub. Happy coding!

Coding

Developing a complete client-server application with qooxdoo on Cloud9IDE. Part 2: Integrating socket.io

In the previous post, we chose the application architecture and integrated a connect server to dish out the frontend code to the clients. We now need to decide how the client and the server should communicate.

Choosing the right communication layer: why REST is not enough

The dominant approach in the world of web apps is the use of HTTP requests and the REST paradigm. This allows for a very clean separation of data presentation on the client and data generation on the backend. The use of descriptive and clear URLs to query data increases the clarity of the code and also makes it possible to debug the backend easily. This differentiates REST from, for example, Remote Procedure Call solutions such as JSON-RPC (which is notoriously difficult to debug). Plus you can very easily provide a public API with which the application backend can be queried by third-party clients. Finally, web application frameworks such as Ruby on Rails, or node’s Express server excel in dealing with such requests.

However, there are drawbacks to this kind of solution. The most important one is that the requests are usually unidirectional. You can query data, but you don’t know when new data is available on the server (unless you use some form of polling). Also, for all of their commendable readability, REST URLs allow very little complexity in the kind of data that can be communicated to the server. In addition, server frameworks introduce an additional layer (the router) between data consumer and data provider, and often force the MVC paradigm on the data interchange, even when this paradigm doesn’t really fit. In many cases, it would be much better if consumer and provider could communicate directly. Finally, REST and JSON-RPC only communicate between a specific client and the server, and do not extend the communication to any other client.

Enter socket.io: realtime, bidirectional messaging

This is why I have always wanted to have a closer look at socket.io as a comprehensive solution for all client-server communication requirements. socket.io is a bidirectional messaging system that is compatible with a huge number of different browsers/platforms/transports. You get quasi-realtime connectivity in both directions, and you can even use it in an RPC-like fashion (i.e. send a “command” and wait for its execution). Most importantly, socket.io is not (mainly) about communication between a client and a server. It is a PUB-SUB (shorthand for “publish-subscribe”) system where clients can subscribe to channels and are notified when the server or some other client publishes a message on a channel. Importantly, the library can be considered very stable, since it is in use in many high-traffic production sites. The API is very simple, and it can be installed with a simple npm install socket.io.

Setting up socket.io with Architect

As with the connect server, we’ll integrate the code as an architect plugin that wraps socket.io (which could always be replaced by a different library that uses the same API). Create a new directory in plugins named “socket” with the following package.json:

{
  "name": "socket",
  "version": "0.0.1",
  "main": "socketio.js",
  "private": true,
  "plugin": {
    "consumes": ["http"],
    "provides": ["socket"]
  }
}

As you can see, the plugin requires that the http server is already set up, because it will hook into it. Then, add the line

{ packagePath: "../plugins/socket", namespace : "/testapp", loglevel : 1 }

to your config files (“source.js”/”build.js”), so that the plugin will be loaded by Architect. You can choose the log level of the socket.io server, from 0 (= only errors) to 3 (= verbose debug log). A sensible setup is 0 in the “build” config and 2 or 3 in the “source” config.
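For reference, the complete source config might then look like this (a sketch that assumes the http plugin entry from part 1 of this series):

```javascript
// configs/source.js - sketch: the http plugin from part 1 plus the new socket plugin
var path = require("path");

module.exports = [
  { packagePath: "../plugins/http",   root: path.resolve(".") },
  { packagePath: "../plugins/socket", namespace: "/testapp", loglevel: 3 }
];
```

Since the socket plugin declares that it consumes “http”, Architect will make sure the http plugin is initialized first.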
Finally, we need the plugin code.

// This plugin provides a socket.io server for the application

// options:
//   options.namespace is the namespace in which to accept messages.
//   options.loglevel

module.exports = function setup(options, imports, register)
{
    // dependencies
    var socketio = require("socket.io");
    var assert   = require("assert");

    // options/parameters
    var namespace = options.namespace;
    assert(namespace && typeof namespace=="string", "You must provide a namespace for the socket.io channels");
    var loglevel  = options.loglevel || 0;

    // attach socket.io server to http server
    var io = socketio.listen(imports.http.server);
    io.set('log level', loglevel);

    // this will be removed later, serves simply to see if everything works
    io.of(namespace).on('connection',function(socket){
        console.log('A new socket connected!');
        socket.emit("buttonlabel", "Press this button");
        socket.on("buttonpressed",function(data,fn){
          console.log("Message from client: " + data);
          fn("Hello from the server!");
          socket.emit("buttonlabel", "Button has been pressed");
        });
    });

    register(null,{
        // API
        socket : io.of(namespace)
    });
    console.log("socket.io server attached, namespace '%s', loglevel %s", namespace, loglevel);
}

Setting up the client

To use socket.io with qooxdoo, we need to tell the generator to include the code before the qooxdoo library, to make sure it is loaded when the qooxdoo application starts up. This works by inserting the following jobs in the config.json configuration file in the testapp directory:

  "jobs": {
      "source-script": {
          "add-script": [{
              "uri": "/socket.io/socket.io.js"
          }]
      },
      "build-script": {
          "add-script": [{
              "uri": "/socket.io/socket.io.min.js"
          }]
      }
  }

This loads the socket.io client files from a virtual path: the socket.io server attached to our http server intercepts requests to /socket.io/ and serves its own client script, so no physical file needs to exist at that location.

Finally, we need some code inside the qooxdoo application to interact with the server through our new communication channel. Change the code from line 54 onwards in source/class/testapp/Application.js to read like this:

 /*
      -------------------------------------------------------------------------
        Below is your actual application code...
      -------------------------------------------------------------------------
      */

      // set up socket.io
      var loc = document.location;
      var url = loc.protocol + "//" + loc.host;
      var socket = io.connect(url + "/testapp");

      // Create a button
      var button1 = new qx.ui.form.Button("...", "testapp/test.png");
      var doc = this.getRoot();
      doc.add(button1, {left: 100, top: 50});

      // Add an event listener for the button
      button1.addListener("execute", function(e) {
        socket.emit("buttonpressed", "Hello from client!", function(data){
          alert(data);
        });
      });

      // setup socket events
      socket.on('buttonlabel', function (data) {
        button1.setLabel(data);
      });

When you read this together with the socketio.js file, you can see what it does: a button is created with “…” as its label. When the client receives the “buttonlabel” message from the server, it changes the label. When the user clicks the button, the client sends a message to the server and receives data back in return, which is then shown to the user in an alert. Finally, another message is dispatched by the server to change the label again.

This example behavior is completely useless, but it shows you that socket.io can fully replace any other means of client-server communication.

We have now set up the application structure and the communication channels. The next thing we need is authentication and authorization mechanisms, which allow us to grant users different degrees of access to the application. This will be covered in the next post.

Coding

Developing a complete client-server application with qooxdoo on Cloud9IDE. Part 1: Application architecture

Having found out that programming with the qooxdoo (qx) framework on the Cloud9 IDE (C9) was not only possible, but actually quite pleasant, I now want to continue by putting together a Node.js-based architecture that can be the basis of actual applications. In this post and the posts to follow, I will document the different choices of libraries and my experience with them. The current state of the entire application is available on GitHub.

The frontend is clear (qooxdoo), but three things need to be chosen: 1) the communication layer that lets the frontend and backend talk to each other, 2) the Node.js modules that help me create a maintainable and scalable backend, and 3) the persistence layer that allows me to save the application data in a (yet to be chosen) database system.

Points 1 and 3 will be covered in later posts. For now, it is enough to say that all my communication needs will be taken care of by socket.io. In this post, I want to deal with the application architecture on the backend.

Application architecture: using C9’s own “Architect”

One of the most important decisions, and one that is hard to change later, concerns the application architecture, i.e., how to modularize the code and decide which part of the code does what. When I was coding with PHP, the buzzword was MVC (Model-View-Controller), and this design pattern translated very well into the way PHP works: a dispatcher script would call a controller class to query data from the backend into models, and populate a view with the data. If you like this paradigm, there is backbone.js, a kind of JavaScript MVC framework that many people use, among many others.

MVC and its various siblings are certainly good patterns for thinking about data. I think, however, that they don’t provide an architectural basis for developing data-driven, asynchronous JavaScript applications. The problem MVC solves (the separation of data and presentation) is not a real problem in client-server applications. “Views” are not needed, because data and presentation are naturally separated: all the frontend needs is raw data with which to populate its widgets. Most importantly, the MVC pattern provides no answer to JavaScript applications’ most important problem, which is the asynchronous nature of JavaScript.

For some time, I was intrigued by Nicholas C. Zakas’ Scalable JavaScript Application Architecture, which combines modularity with safety concerns by strictly separating 1) an application “core” which provides all the basic functionality such as client communication, database access, etc., 2) modules, which implement all the functionality of the application but have no direct access to the core, and 3) a “sandbox” which exposes the resources of the core to the modules in a controlled manner. One feature of this approach is that it doesn’t need any library or framework, but can simply be coded by hand; in fact, it must be coded by hand, because the isolation of core and modules is achieved by using closures. This doesn’t work very well with qooxdoo and its traditional OO approach, where one file contains one class with methods, and those classes cannot easily be put into closures. But the main reason I didn’t use this approach was that I wanted some kind of framework that provided a little more magic.

This is why I was happy to discover that C9 itself is written using an architectural framework named, fittingly, Architect. The idea is that each and every piece of functionality of an application is written as a plugin which registers itself with the application and exposes its functionality without having direct access to the other plugins. There is no sandbox. Instead, each plugin exports an API and “consumes” the APIs of other plugins. The idea of making everything a plugin with a small but concise API is very powerful. Also, if all the functionality of a typical web application backend is modularized into small and focused plugins, it becomes very easy to reuse these modules in a different application. The library also deals with the fact that the initialization of plugins is asynchronous, and it handles application configuration, a very important aspect that cannot be solved at the plugin level.
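To illustrate the plugin contract (not the real Architect internals), here is a toy resolver with two invented plugins; the service names db and log are made up for the example:

```javascript
// Toy illustration of the Architect plugin contract: each plugin is a
// setup(options, imports, register) function that consumes services from
// `imports` and provides services via `register`. Not the real library.

// "database" plugin: provides a service, consumes nothing
function databasePlugin(options, imports, register) {
  register(null, { db: { get: function (key) { return "value-of-" + key; } } });
}

// "logger" plugin: consumes the db service, provides a log service
function loggerPlugin(options, imports, register) {
  var db = imports.db;
  register(null, { log: function (key) { return "log: " + db.get(key); } });
}

// minimal hand-rolled stand-in for architect.createApp(): collect every
// provided service and pass the collection to the next plugin as imports
var services = {};
function load(plugin, options) {
  plugin(options, services, function (err, provided) {
    if (err) throw err;
    for (var name in provided) services[name] = provided[name];
  });
}
load(databasePlugin, {});
load(loggerPlugin, {});

console.log(services.log("user"));  // -> "log: value-of-user"
```

The real Architect additionally resolves the load order from the consumes/provides declarations in each plugin’s package.json, so plugins don’t have to be listed in dependency order.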

One caveat: Architect isn’t very well documented yet, seems to have changed considerably before it was released, and there seem to be very different ways of using it. I had to experiment quite a bit before I got things to work. But, like C9 itself, the project seems to hold a lot of promise and just “feels” right, so I was happy to do a little trial and error.

To use Architect, you’ll need to install it (npm install architect) and then create a plugin for each piece of functionality that your application will need. C9’s GitHub page contains several Architect-related repositories which provide examples of how to use the framework. There is a calculator demo that is worth looking at, even though I didn’t get it to work. The following code assumes that you have set up a sample qooxdoo project in C9 as described in the initial tutorial.

The main configuration

Replace the server.js file with the following code, which is from here:

var path = require('path');
var architect = require("architect");

var configName = process.argv[2] || "build";
var configPath = path.resolve("./configs/", configName);
var config     = architect.loadConfig(configPath);

architect.createApp(config, function (err, app) {
    if (err) {
        console.error("While starting the '%s' setup:", configName);
        throw err;
    }
    console.log("Started '%s'!", configName);
});

The calculator demo’s server.js shows that there is a much simpler approach, but this setup allows several configs (for example, for “source” and “build” versions of the qooxdoo app) that can be run from the same file.

The server will try to read a configuration file from the “configs” directory, which we will need to create, and in which to put the following “build.js” file:

// build.js
var path = require("path");

module.exports = [
  { packagePath: "../plugins/http", root : path.resolve("testapp") }
];

The README explains:

Notice that the config is a list of plugin config options. If the only option in the config is packagePath, then a string can be used in place of the object. If you want to pass other options to the plugin when it’s being created, you can put arbitrary properties here.
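In other words, a config can freely mix both forms; the following hypothetical example (plugin names invented) shows the string shorthand next to the object form:

```javascript
// sketch of a mixed config; "logger" and its path are invented for illustration
module.exports = [
  "../plugins/logger",                                 // only packagePath: string shorthand
  { packagePath: "../plugins/http", root: "testapp" }  // with extra options: object form
];
```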

Even the connect server, which usually is in a central place, should be wrapped in a plugin. This is what we do now. In the (newly created) “plugins” directory, we create a new folder “http” with two files:
package.json

// package.json
{
    "name": "http",
    "version": "0.0.1",
    "main": "http.js",
    "private": true,

    "plugin": {
        "provides": ["http"]
    }
}

http.js (adapted from here)

// This plugin provides a connect server for the application

// @options is the object in the config.js file for this plugin.
//   @options.port is the port to listen on.
//   @options.host is the host to bind to
//   @options.root is the directory from which to serve static files
// @imports is the various services that this plugin declared as dependencies
//   This plugin doesn't have any
// @register is a callback function expecting (err, plugin) where plugin is the
// provided services and lifecycle hooks.  This plugin exports "http".

module.exports = function setup(options, imports, register)
{
    // dependencies
    var connect = require("connect");
    var assert  = require("assert");
    var path    = require("path");

    // options/parameters
    var host = options.host || process.env.IP;
    var port = options.port || process.env.PORT;
    var root = options.root;
    assert(root && typeof root=="string", "You must provide a document root for the http server");

    // create server and register with architect when done
    var app = connect().use(connect.static(root));
    var server = app.listen(port, host, function (err) {
        if (err) return register(err);
        console.log("Connect server listening on http://%s:%s, serving %s", host, port, root);
        register(null, {
            // When a plugin is unloaded, its onDestruct function will be called if there is one.
            onDestruct: function (callback) {
                server.close(callback);
            },
            // API
            http: {
                server : server
            }
        });
    });
}

When you start the server (open server.js as active tab & press “run” or “debug”), you should be able to open the default qooxdoo app at http://PROJECT.USER.c9.io/build/index.html. Ignore the IP/Port that is reported on the console – this information is only relevant when you deploy the finished project to your own server.

To run a “source” version of the qx app, you’ll have to create a file named source.js in the “configs” dir with the following content:

//source.js
var path = require("path");
module.exports = [
  { packagePath: "../plugins/http", root : path.resolve(".") }
];

and then open the application from

http://PROJECT.USER.c9.io/testapp/source/index.html.

You get the idea. As said before, this setup should never be used on a production server, since it exposes your entire source tree.

The “http” plugin is a good example to demonstrate how smart this form of modularization is. Should you decide to later exchange the connect server with a different server (which, of course, needs to have the same API), you can do this without any changes to the rest of the application. Step by step, we can now continue hooking in the other parts of the application. The next post will deal with client-server communication and the installation of socket.io.