Wednesday, January 19, 2011

Play framework with cygwin

For whatever reason the Play framework people force Windows users to use cmd instead of cygwin. If you have Python installed under Windows (not the cygwin one - "the windows and the cygwin python are completely different" - http://cygwin.com/ml/cygwin/2004-02/msg01120.html) you can use the following script. I name it play.sh and put it in the play directory:


#!/bin/bash

# find the cygwin location of the play script (play must be on the PATH, see the note below)
play=`which play`

# convert the script path (and any path-like arguments) to Windows form
# so that the Windows Python can deal with them
python `cygpath -wp $play $1 $2 $3 $4 $5 $6 $7 $8 $9`


Pay attention to the quotes - those are backticks, not single quotes... (the script is based on the idea put forth here: http://www.inonit.com/cygwin/faq/)

Note: You still need to have the play directory in your PATH, otherwise which won't find the play script.
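
For what it's worth, a minimal usage sketch from a cygwin shell (myapp is just a placeholder application name; the play directory is assumed to be on the PATH so play.sh gets picked up from there):

# create a new application and then run it, all through the wrapper script
cd /cygdrive/c/projects
play.sh new myapp
play.sh run myapp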

Tuesday, January 18, 2011

Membase talk

Yesterday I had the pleasure of listening to a talk by Membase. They lost the T-Shirts in the mail and didn't have beer with the pizza - but promised to go to a non-specified bar afterward. I left after about 3.5 hours and they still weren't on the way to the bar...

Membase is basically the popular memcached with a persistence layer. It is dual licensed - one version is a "community edition" under the Apache license, for the other one you pay. It is used on hundreds of servers at Zynga and NHN. Membase is protocol compatible with memcached, but it uses moxi (a memcached proxy) for scalability and potential failover.
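
The protocol compatibility is the nice part: anything that speaks the memcached text protocol can talk to a Membase node (or to moxi in front of it). A quick sketch from the shell, assuming something is listening on the default memcached port 11211 on localhost:

# store a 5-byte value under the key "greeting", then read it back
# (plain memcached text protocol; host and port are assumptions)
printf 'set greeting 0 0 5\r\nhello\r\nget greeting\r\nquit\r\n' | nc localhost 11211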

Membase doesn't have many of the fancy features other No-SQL databases offer: no query language, no map-reduce, and no automatic failover. In fact a node will confirm that it has stored a value before it distributes it to other nodes and writes it to disk, which in case of a failure leaves the data exposed to some minor loss or at least inconsistency (aka the latest value might be gone). Furthermore, in the failure case the cluster only notifies that a node failed, so clients will get errors accessing the failed node until a system administrator manually decides a failover is appropriate (which means clicking a button in the UI) or brings the broken node back up. They were also decidedly quiet about network splits (one of the big enemies of distributed systems) and don't support multi-datacenter deployments (the most common source of those splits) yet.

They are basically memcached on steroids: fast, highly distributable, (kind of) consistent, etc. They don't solve the same problems as dynamo or cassandra, which allow you to always write and/or always read from the data store at the expense of consistency. Membase overall is highly consistent (except in some error cases) at the expense of failing reads and writes once a node dies.

The administration, set-up, etc. are really easy, and being able to use the existing memcached protocol makes this a clear winner in ease of use. They also have an interesting feature in the "tap interface", which allows third-party modules to look (aka tap) into all the data in the cluster and do smart things with it - an example would be lucene indexing...

The use cases are either an extension of memcached (e.g. session caching) and/or something which needs massive scale, speed, and simplicity like Zynga's games. One mantra they repeated over and over was: "If half our servers fail we want to be able to handle half of our users' requests." Not every system scales that way, so this is quite an accomplishment.

Future plans include getting some more of the popular No-SQL features, like (prefix and range) queries and code running on the nodes (similar to map-reduce), called "node code".

Given the real potential for data loss this system probably appeals to people already using memcached and running out of memory - they can now add disk storage to it, together with a very cool administration UI. There are other use cases where the possible loss of data is outweighed by ease of use and speed. Apparently gaming and entertainment can tolerate this risk, whereas I wouldn't recommend this solution to store bank account data ;-)