Scala + Akka: How to develop a Multi-Machine Highly Available Cluster

We're developing a server system in Scala + Akka for a game that will serve clients on Android, iPhone, and Second Life. Parts of this server need to be highly available, running on multiple machines. If one of those servers dies (say, from a hardware failure), the system needs to keep running. I think I want the clients to have a list of machines they will try to connect to, similar to how Cassandra works.

The multi-node examples I've seen so far with Akka seem to be centered around scalability rather than high availability (at least with regard to hardware). The multi-node examples always seem to have a single point of failure. For example, there are load balancers, but if I need to reboot one of the machines that hosts a load balancer, my system will suffer some downtime.

Are there any examples that show this type of hardware fault tolerance for Akka? Or, do you have any thoughts on good ways to make this happen?

So far, the best answer I've been able to come up with is to study the Erlang OTP docs, meditate on them, and try to figure out how to put my system together using the building blocks available in Akka.

But if there are resources, examples, or ideas on how to share state between multiple machines in a way that keeps things running if one of them goes down, I'd sure appreciate them, because I'm concerned I might be re-inventing the wheel here. Maybe there is a multi-node STM container that automatically keeps the shared state in sync across multiple nodes? Or maybe this is so easy to build that the documentation doesn't bother showing examples of how to do it, or perhaps I haven't been thorough enough in my research and experimentation yet. Any thoughts or ideas will be appreciated.

Best answer

HA and load management are important aspects of scalability and are available as part of the AkkaSource commercial offering.

Answers

If you're already listing multiple potential hosts in your clients, then those hosts can effectively become your load balancers.

You could offer a host-suggestion service that recommends which machine the client should connect to (based on current load, or whatever); the client can then pin to that host until the connection fails.

If the host-suggestion service is unavailable, the client can simply pick a random host from its internal list, trying each one until it connects.

Ideally, on first start-up the client will connect to the host-suggestion service and not only be directed to an appropriate host, but also receive a list of other potential hosts. This list can be refreshed every time the client connects.

If the host-suggestion service is down on the client's first attempt (unlikely, but...), you can pre-deploy a list of hosts with the client install so it can start randomly selecting hosts from the very beginning if it has to.

Make sure that your list of hosts contains actual host names and not IPs; that gives you more flexibility long term (i.e. you'll "always have" host1.example.com, host2.example.com, etc., even if you move infrastructure and change IPs).
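To make the fallback concrete, here is a minimal client-side sketch in Scala; the host list and the connect function are placeholders I've assumed, not part of the answer above.

    import scala.util.Random

    // Placeholder connection type, just for this sketch.
    trait Connection { def close(): Unit }

    object HostPicker {
      // Pre-deployed fallback list, refreshed from the host-suggestion
      // service whenever the client manages to reach it.
      @volatile var knownHosts: Vector[String] =
        Vector("host1.example.com", "host2.example.com", "host3.example.com")

      // Try hosts in random order until one accepts the connection.
      def connectToAny(connect: String => Option[Connection]): Option[Connection] =
        Random.shuffle(knownHosts).iterator
          .map(connect)
          .collectFirst { case Some(conn) => conn }
    }

The point is that the client never depends on any single machine: the suggestion service is an optimisation, not a requirement.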

You could take a look at how RedDwarf and its fork DimDwarf are built. They are both horizontally scalable, crash-only game app servers, and DimDwarf is partly written in Scala (the new messaging functionality). Their approach and architecture should match your needs quite well :)

My 2 cents..

"how to share state between multiple machines in a way that if one of them goes down things keep running"

Don't share state between machines; instead, partition state across machines. I don't know your domain, so I don't know if this will work. But essentially, if you assign certain aggregates (in DDD terms) to certain nodes, you can keep those aggregates in memory (actor, agent, etc.) while they are being used. To do this you will need something like ZooKeeper to coordinate which nodes handle which aggregates. In the event of a failure you can bring the aggregate up on a different node.
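As a rough illustration of the coordination step (not from the original answer; the znode path and addresses are made up), each node can try to claim an aggregate by creating an ephemeral znode, so the claim disappears automatically if the owning node dies:

    import org.apache.zookeeper.{CreateMode, KeeperException, ZooDefs, ZooKeeper}

    // Sketch only; assumes the parent znode /aggregates already exists.
    class AggregateRegistry(zk: ZooKeeper, selfAddress: String) {

      // Returns the address of the node that owns the aggregate.
      def claimOrLookup(aggregateId: String): String = {
        val path = s"/aggregates/$aggregateId"
        try {
          // EPHEMERAL: the claim is removed when this node's session dies,
          // letting another node take the aggregate over.
          zk.create(path, selfAddress.getBytes("UTF-8"),
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL)
          selfAddress
        } catch {
          case _: KeeperException.NodeExistsException =>
            // Another node owns it; read the owner's address from the znode.
            new String(zk.getData(path, false, null), "UTF-8")
        }
      }
    }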

Furthermore, if you use an event-sourcing model to build your aggregates, it becomes almost trivial to keep real-time copies (slaves) of an aggregate on other nodes, by having those nodes listen for events and maintain their own copies.
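A minimal sketch of such a replica as a plain Akka 2.x actor (the event and state types here are invented for illustration):

    import akka.actor.Actor

    // Example event; in a real system these would arrive off the event stream.
    final case class ScoreChanged(playerId: String, delta: Int)

    // A replica that rebuilds its state purely from events, so a failed
    // primary can be replaced by promoting an up-to-date copy.
    class PlayerScoreReplica extends Actor {
      private var scores = Map.empty[String, Int]

      def receive: Receive = {
        case ScoreChanged(id, delta) =>
          scores = scores.updated(id, scores.getOrElse(id, 0) + delta)
        case "dump" =>
          sender() ! scores
      }
    }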

By using Akka, we get remoting between nodes almost for free. This means that whichever node handles a request that needs to interact with an Aggregate/Entity on another node can do so with RemoteActors.
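The RemoteActors API belongs to early Akka; as a hedged sketch using later (2.x classic) remoting, looking up an aggregate actor on another node might look like the following. The system name, host, port, and actor path are placeholders, and remoting must be enabled in the configuration (not shown).

    import akka.actor.ActorSystem

    object RemoteLookupSketch extends App {
      val system = ActorSystem("GameSystem")

      // Reference to an aggregate actor hosted on another node.
      val remoteAggregate =
        system.actorSelection("akka.tcp://GameSystem@host2.example.com:2552/user/player-42")

      // Messages sent here are delivered transparently across the network.
      remoteAggregate ! "ping"
    }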

What I have outlined here is very general but gives an approach to distributed fault-tolerance with Akka and ZooKeeper. It may or may not help. I hope it does.

All the best, Andy




