Sunday, February 5, 2012

Trusting the client! Letting the client do all the validations!

Back in 2008, I had a strange idea for reducing server-side computation needs for a web game. I came up with a concept of letting the client compute all the validations in order to minimize the server's load. However, validation results can easily be manipulated if the validation is done only on the client side.

At that time my thought was: in order to "secure" the validation, it must be done on the server. Yet that eliminated the purpose of introducing the concept in the first place. Validation done on the client side has one real advantage: it provides a blazing fast response, since all the validations are done locally. The front end can react to the user's action as soon as possible, even before the action is submitted to the server. If the action is invalidated by the server, the server can simply respond with a roll-back request. Even at that time, this technique could be seen in many MMORPGs built with native code, giving players the illusion that the game runs smoothly by responding to user actions immediately. Although dissatisfied with the dumb idea, I was happy with the accidental outcome.
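This act-first, roll-back-later approach can be sketched as a pair of small helpers. The names here are mine, purely illustrative: apply the action locally right away while keeping a snapshot, and restore the snapshot if the server later rejects the action.

```javascript
// Apply the player's action to the local state immediately, keeping a
// deep copy of the previous state so it can be restored on rejection.
function applyOptimistic(state, action) {
  var snapshot = JSON.parse(JSON.stringify(state)); // copy for rollback
  action.apply(state);                              // react instantly
  return snapshot;
}

// Called once the server's verdict arrives: keep the new state if the
// action was accepted, otherwise roll back to the snapshot.
function onServerResponse(state, snapshot, accepted) {
  return accepted ? state : snapshot;
}
```

The deep copy via JSON is the crude-but-simple choice here; a real game would keep a compact action log or structural sharing instead.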

Thinking further, I came up with a variation of the concept: letting another client that is not participating in the same game session validate a player's action. However, the idea brought up another question: "What if a client decides to screw with the system by purposely sending wrong validation results?" That would result in total chaos! Still, back to basics, I told myself 'the user cannot be trusted!' This led me to a dead end and I stopped thinking about it (not exactly stopped; I just didn't spend much time on solving the problem).

Recently, I had a sudden interest in developing a web-based sandbox game, which sparked the old problem I had tried to solve. After scratching my head for quite a while, I came up with an approach that I think suits my needs. I asked myself, "What if the validations are computed by more than one client? This could be the solution to the problem!"

So I started to think in this direction: the server will assign clients (assume 2 clients) that aren't involved in the game session to do the validation, providing them the necessary data like game state, actions, etc. As long as two or more clients do the validation and send the result back in hash form, the server can easily check its validity by comparing the hashed results. If the results don't match, the server can fall back to using its own computation power to do the validation itself. With this idea, the server can eliminate almost all of the need to validate the actions done by the players. Even better, the back end can assign each validation to different clients every cycle of the game. With this randomization, there is almost no way a player can cheat. If a client doesn't send its result in a timely manner, the back end can simply flag it as 'timeout' and start comparing whatever results it has gathered. In the worst case, the server just does the computation by itself.

Of course, the concept isn't without its cons. The most obvious disadvantage I can think of is the latency issue. Yet it is easily tolerable by using the 'act first, check later' technique I mentioned earlier in this article, given that your game or application has a certain level of fault tolerance where a roll-back is possible.

Imagine applying this to Facebook, or any large-scale web app... it would tremendously reduce the computation needs of the servers. Even better, we could apply this concept to scientific research by exposing JavaScript and the browser as a large-scale computation pool. This could mean building the fastest supercomputer at zero cost!

Friday, January 13, 2012

Run Node on port 80 with non-root user privileges

I have been toying around with Node.js these past few days. With a background in PHP programming, and experience setting up a proper LAMP server (Linux + Apache + MySQL + PHP) from scratch, I quickly noticed that running Node on port 80 with superuser privileges (binding to a port below 1024 requires the superuser, aka root) triggers my security concerns.

Although Node is widely discussed among early technology adopters, I still wasn't able to find sufficient information on running Node in a production environment. In general, Node users don't talk about binding port 80 while dropping superuser privileges.

After a quick look into the Node documentation, I found the process object, which comes packed with two methods: process.setgid() and process.setuid(). These two methods are crucial to prevent the process from accessing files that were not intended for it, in case anything goes wrong.

Below is sample code, with a bare-bones Express setup, to drop the superuser privileges:
var express = require('express');
var app = express();

var process_user = 'evert';
var process_group = 'evert';

app.listen(80, function () {
  try {
    console.log('Giving up root privileges...');
    // Drop the group first; once the uid changes, setgid would fail.
    process.setgid(process_group);
    process.setuid(process_user);
    console.log('New uid: ' + process.getuid());
  } catch (err) {
    console.log('Failed to drop root privileges: ' + err);
    process.exit(1);
  }
});
Without doubt, the ideal case is to drop the superuser privileges as soon as possible, before everything else is initialized. However, that would mean diving into the Express.js code to initialize the socket.