At Bitmenu, I find myself working with lots of systems tools that were developed with a few implicit design assumptions: 1) data centers full of 2) physical machines that have their own 3) local permanent storage.
But Bitmenu is entirely Cloud-based, and thus we need to rethink how we use our tools (or whether we should use them at all) in a world where 1) we don't control the IPs of 2) machines that evaporate unexpectedly, losing all 3) local impermanent storage.
One such tool we've been experimenting with is ZooKeeper. It seems to have been designed to work best in a static-IP environment, which is very common in a data center topology: at startup, the configuration file identifies all the members of its quorum, i.e. the other ZooKeeper servers. Since Bitmenu's platform runs exclusively in Amazon's Elastic Compute Cloud, we don't have the luxury of getting the same IP address from one EC2 instance to the next. From the outset, a resilient ZooKeeper topology looked almost impossible: if a member of the quorum changes (a ZooKeeper instance dies or becomes unresponsive, both very real events in the Cloud, and the usual remedy is simply to replace the failing instance), then all the other members must restart in order to learn the address of the new member.
Fortunately, I figured out a little trick that keeps all the live ZooKeeper instances running and lets them pick up a changed quorum member's address without resorting to a restart.
On Linux, there's a file called /etc/hosts, which is a list of static IP addresses and the host names they resolve to. The names can be fully qualified domain names, but they don't have to be. So, for example, I can have a static entry that looks like:
192.168.1.12 printer1
And in a normal Linux configuration, any host lookup consults this file to resolve a name before falling back to DNS, NIS, or other resolution protocols (the lookup order is controlled by /etc/nsswitch.conf, where "files" usually comes first).
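On most distributions, the relevant line in /etc/nsswitch.conf looks something like this, with "files" ahead of "dns":

hosts: files dns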
In zoo.cfg, the ZooKeeper configuration file that is parsed at startup, I put in host names instead of static IPs, like so:
...
server.1=zoov1:2888:3888
server.2=zoov2:2888:3888
server.3=zoov3:2888:3888
...
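Each instance's /etc/hosts then carries one line per quorum member. The addresses below are just placeholders; keeping them current is the job of the script described next:

10.0.1.10 zoov1
10.0.1.11 zoov2
10.0.1.12 zoov3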
Then I created a script that polls S3 to check whether the IP address of any of the three quorum machines has changed; if so, it updates the corresponding /etc/hosts entry. I put this script into cron and run it every minute.
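For the curious, here's a rough sketch of what such a script can look like. It is not the exact script we run: the bucket name, the key layout (one S3 object per quorum host, containing that host's current IP on a single line), and the choice of the AWS CLI as the S3 client are all assumptions of this sketch.

#!/bin/bash
# Sketch: refresh the /etc/hosts entries for the quorum from S3.
# Assumes one S3 object per quorum host (zoov1, zoov2, zoov3),
# each holding that host's current IP on a single line.
# Must run as root, since it rewrites /etc/hosts.
BUCKET=my-zk-bucket   # hypothetical bucket name

for host in zoov1 zoov2 zoov3; do
    # Read the advertised IP for this host from S3 ("-" writes to stdout).
    new_ip=$(aws s3 cp "s3://$BUCKET/$host" - 2>/dev/null)
    [ -z "$new_ip" ] && continue          # nothing published yet; skip
    # Current IP, if any, recorded for this host in /etc/hosts.
    old_ip=$(awk -v h="$host" '$2 == h { print $1 }' /etc/hosts)
    if [ "$new_ip" != "$old_ip" ]; then
        # Drop the stale entry and append the fresh one.
        sed -i "/[[:space:]]$host\$/d" /etc/hosts
        echo "$new_ip $host" >> /etc/hosts
    fi
done

Dropped into /etc/cron.d with a line like the following (the path is just an example), it runs every minute:

* * * * * root /usr/local/bin/update_zk_hosts.sh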
When a machine drops out of the quorum, its replacement updates its IP address in S3; the script above picks up the change and updates /etc/hosts accordingly. ZooKeeper therefore never needs a restart, and simply keeps retrying until it can communicate with every member of its quorum.
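The publishing half of the loop is just as small. A replacement instance can run something like this once at boot; again, the bucket name and the hard-coded quorum name are assumptions of the sketch, while the metadata URL is EC2's standard instance metadata service:

#!/bin/bash
# Sketch: advertise this instance's IP to the rest of the quorum.
# QUORUM_NAME is the slot this instance fills (zoov1, zoov2, or
# zoov3) -- hard-coded here purely for illustration.
QUORUM_NAME=zoov2
BUCKET=my-zk-bucket   # hypothetical bucket name

# EC2's instance metadata service returns the internal IPv4 address.
ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Write it to the per-host object that the cron script polls.
echo "$ip" | aws s3 cp - "s3://$BUCKET/$QUORUM_NAME"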