Tuesday, March 5, 2013

Crowdfunding and Game Development, Part I


I've recently been looking into how to launch a crowdfunding campaign... After reading quite a bit of related material, I impulsively started writing this short post from what I remember.

When it comes to crowdfunding, you can't leave out its pioneer - Kickstarter. We regularly see plenty of hugely successful campaigns on Kickstarter... but crowdfunding isn't as simple as it looks.

The groundwork you have to do before launching a crowdfunding project is fairly tedious and complicated...

First of all, simply finding a good crowdfunding platform is already very hard. Kickstarter is an ideal platform, but... it has one fatal drawback: it only supports the US and the UK. If a Malaysian insists on using Kickstarter, they need a relative or friend in the US or UK willing to receive the funds on their behalf. That carries a big risk: if your campaign succeeds but the person receiving the money refuses to hand it over... you are in serious trouble.

Of course, besides Kickstarter there is another option: Indiegogo. Indiegogo is not as well known as Kickstarter, but its advantage (and at the same time its disadvantage) is that it supports many countries, including Malaysia. Unfortunately, Indiegogo's drawing power is weaker than Kickstarter's: campaigns tend to raise less money and the success rate is low. As I said, supporting many countries is also its weakness; a lot of its projects are not from the US (not that Americans are inherently better, they just tend to be more persistent about and more passionate toward game dev), and on top of that its terms are lax, so plenty of shoddy projects get mixed in...

As for the other crowdfunding platforms... I've looked at quite a few, but most have very little traffic... projects launched there have almost no chance of succeeding.

... (to be continued)

Sunday, February 5, 2012

Trusting the client! Letting the client do all the validations!

Back in 2008, I had a strange idea for reducing the server-side computation needed by a web game. I came up with a concept of letting the client compute all the validations in order to minimize the server's load. However, validation results can easily be manipulated if the validation is only done on the client side.

At that time my thought was: in order to "secure" the validation, it must be done on the server. Yet that defeated the purpose of introducing the concept in the first place. Client-side validation has only one advantage: it provides a blazing-fast response, since all the validations are done locally. The front end can react to a user action right away, even before the action is submitted to the server. If the action is later invalidated by the server, the server can simply respond with a roll-back request. Even at that time, this technique could be seen in many MMORPGs built with native code, giving players the illusion that the game runs smoothly by responding to their actions as soon as possible. Although dissatisfied with the dumb idea, I was happy with the accidental outcome.
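To make that "act first, check later" flow concrete, here is a tiny sketch of the pattern. Everything in it (applyLocally, sendToServer, rollback, the gold counter) is a made-up placeholder for illustration; the only point is the ordering: update the local state immediately, then undo it if the server rejects the action.

// A hypothetical sketch of the "act first, check later" pattern:
// apply the action locally for instant feedback, then roll it back
// if the server later rejects it. None of this is real game code.
var localState = { gold: 100 };

function applyLocally(action) {
  var snapshot = { gold: localState.gold };  // remember how to undo
  localState.gold -= action.cost;            // optimistic update, UI reacts immediately
  return snapshot;
}

function rollback(snapshot) {
  localState.gold = snapshot.gold;           // server rejected it: restore the old state
}

// Fake server round trip: accept the action only if the player can afford it.
function sendToServer(action, callback) {
  setTimeout(function () { callback(action.cost <= 100); }, 200);
}

var action = { cost: 30 };
var snapshot = applyLocally(action);         // player sees the result right away
sendToServer(action, function (accepted) {
  if (!accepted) rollback(snapshot);         // late correction, as in those MMORPGs
});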

Thinking about it further, I came up with a variation of the concept: letting another client that is not participating in the same game session validate a player's action. However, the idea raised another question: what if a client decides to screw with the system by purposely sending wrong validation results? That would result in total chaos! Still, back to basics, I told myself "the user cannot be trusted!" This led me to a dead end and I stopped thinking about it (not exactly stopped, I just didn't spend much time on solving the problem).

Recently, I had a sudden interest in developing a web-based sandbox game, which rekindled the old problem I had tried to solve. After scratching my head for quite a while, I came up with an approach that I think suits my needs. I asked myself, "What if the validations are computed by more than one client? That could be the solution to the problem!"

So I started to think in this direction: the server assigns clients (say two of them) that aren't involved in the game session to do the validation, providing them with the necessary data such as the game state, the actions, and so on. As long as at least two clients do the validation and send their results back in hashed form, the server can easily check the validity of the validation by comparing the hashes. If the results don't match, the server can fall back to its own computation power and do the validation itself. With this idea, the server can eliminate almost all of the work of validating player actions. It gets even better if the back end assigns each validation to different clients on every cycle of the game; with that randomization there is almost no way for a player to cheat. If a client doesn't send its result in a timely manner, the back end can simply flag it as a timeout and start comparing whatever results it has gathered. In the worst case, the server just does the computation by itself.
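Here is a minimal Node.js sketch of that flow, purely for illustration: askClient, crossCheck and the toy rule check are hypothetical placeholders rather than code from an actual game. It only shows the core loop described above: hand the same state and action to two uninvolved clients, compare their hashed answers, and fall back to the server's own validation on a mismatch or timeout.

// Sketch of peer cross-checking: two borrowed clients validate an action,
// the server compares their hashed results and only recomputes on conflict.
var crypto = require('crypto');

function hashResult(result) {
  // Clients and server must hash the same canonical form of the result.
  return crypto.createHash('sha256')
               .update(JSON.stringify(result))
               .digest('hex');
}

function validateOnServer(gameState, action) {
  // Authoritative fallback: the server's own rule check (a toy rule here).
  return { valid: action.cost <= gameState.resources };
}

// Stand-in for two idle clients returning hashed validation results.
// In a real setup they would be picked at random each game cycle and the
// answers would arrive over the network, with a timeout attached.
function askClient(clientId, gameState, action, callback) {
  setTimeout(function () {
    var result = validateOnServer(gameState, action); // an honest client
    callback(null, hashResult(result));
  }, 10);
}

function crossCheck(gameState, action, done) {
  var hashes = [];
  ['clientA', 'clientB'].forEach(function (id) {
    askClient(id, gameState, action, function (err, hash) {
      hashes.push(err ? 'timeout' : hash);
      if (hashes.length < 2) return;               // wait for both answers
      if (hashes[0] === hashes[1] && hashes[0] !== 'timeout') {
        done(null, hashes[0]);                     // peers agree: accept their verdict
      } else {
        // Mismatch or timeout: fall back to the server's own validation.
        done(null, hashResult(validateOnServer(gameState, action)));
      }
    });
  });
}

// Example: two borrowed clients agree that spending 30 out of 100 is valid.
crossCheck({ resources: 100 }, { cost: 30 }, function (err, hash) {
  console.log('accepted validation hash:', hash);
});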

Of course, the concept isn't without its cons. The most obvious disadvantage I can think of is latency. Yet that is easily tolerable by using the "act first, check later" technique I mentioned earlier in this article, provided your game or application has a certain level of fault tolerance where a roll-back is possible.

Imagine applying this to Facebook, or to any large-scale web app... it would tremendously reduce the computation needs of the servers. It would be even better if we could apply the concept to scientific research, exposing JavaScript and the browser as a large-scale computation pool. That could mean building the fastest supercomputer at zero cost!


Friday, January 13, 2012

Run Node on port 80 with non-root user privileges

I have been toying around with Node.js these few days. Coming from a PHP background, having set up a proper LAMP server (Linux + Apache + MySQL + PHP) from scratch before, I quickly noticed that running Node on port 80 with superuser privileges (binding to a port below 1024 requires superuser, a.k.a. root) raised my security concerns.

Although Node is widely discussed among early technology adopters, I still wasn't able to find sufficient information on running Node in a production environment. In general, Node users don't talk about binding to port 80 while dropping the superuser privileges.

After a quick look into the Node documentation, I found the process object, which comes with two methods, process.setgid() and process.setuid(). These two methods are crucial to prevent the process from accessing files that were not intended for it in case anything goes wrong.


Below is sample code, with a bare-bones Express setup, that drops the superuser privileges:
...
// The unprivileged user and group the process should run as.
var process_user = 'evert';
var process_group = 'evert';
...

app.listen(80, function(){
  try {
    console.log('Giving up root privileges...');
    // setgid() must come before setuid(): once the uid is dropped,
    // the process no longer has permission to change its group.
    process.setgid(process_group);
    process.setuid(process_user);
    console.log('New uid: ' + process.getuid());
  }
  catch (err) {
    console.log('Failed to drop root privileges: ' + err);
  }
});

...
Without a doubt, the ideal approach is to drop the superuser privileges as early as possible, before anything else is initialized. However, that would mean diving into the Express.js code to initialize the socket yourself.
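For reference, here is a rough sketch of keeping the privileged window small using only Node's built-in http module, so no Express internals are involved. The user and group names are placeholders and the request handler is just a stub; the point is that once the socket is bound, nothing after that callback runs as root.

// Bind port 80 with a bare http server, then drop root in the
// 'listening' callback; everything else runs unprivileged.
var http = require('http');

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok\n');
});

server.listen(80, function () {
  // Port 80 is bound at this point; root is no longer needed.
  process.setgid('evert');
  process.setuid('evert');
});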