How to design a scalable rpc call listener?

I have to listen for RPC calls, stack them somewhere, process them, and answer. The thing is that they are not run as soon as they come in; the response is an ACK for each RPC call received. The problem is that I want to design it in a way that lets many listening servers write to the same stack of calls, piling them up as they come.

My goal is to listen for as many calls as possible. How should I go about this?

My main technologies are Perl and Node.js, but I would use any open-source software to get this task done.

Best answer

Sounds like any sort of work queue would do what you need; I'm personally very fond of using Redis (http://redis.io/) for this kind of thing. Since Redis lists maintain insertion order, you can simply LPUSH (http://redis.io/commands/lpush) info about your RPC call onto the list from any number of servers listening for RPC calls, and somewhere else (in another process/on another machine, I assume) RPOP (http://redis.io/commands/rpop) — or BRPOP (http://redis.io/commands/brpop), the blocking variant — them off.
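The FIFO behaviour described above can be sketched without a running Redis server. The stub below is an illustrative stand-in (the class and values are assumptions, not the real client): it mimics LPUSH/RPOP on a plain array to show that calls piled up by several listening servers come back out in arrival order.

```javascript
// Minimal in-memory stand-in for a Redis list, used only to illustrate
// LPUSH/RPOP ordering; a real deployment would use an actual Redis
// client library instead of this class.
class FakeRedisList {
  constructor() { this.items = []; }
  lpush(value) { this.items.unshift(value); } // push onto the head
  rpop() { return this.items.pop(); }         // pop from the tail
}

const queue = new FakeRedisList();

// Two "listening servers" pile calls onto the same list as they arrive.
queue.lpush('call-1 from server A');
queue.lpush('call-2 from server B');
queue.lpush('call-3 from server A');

// A worker elsewhere drains them in arrival (FIFO) order.
const processed = [];
let call;
while ((call = queue.rpop()) !== undefined) {
  processed.push(call);
}
console.log(processed);
// → ['call-1 from server A', 'call-2 from server B', 'call-3 from server A']
```

Because producers only LPUSH and consumers only RPOP, any number of listeners can share the one list without coordinating with each other.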

Since Node.js does all of its IO asynchronously, and assuming you're not doing much processing in your RPC listeners (that is, you only listen for requests, send an ACK, and push onto Redis), I'd guess Node would be very effective at this.
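That division of labour — ACK immediately, defer the real work — can be sketched as below. Names like `makeListener` and `enqueue` are illustrative assumptions, with `enqueue` standing in for an LPUSH to Redis:

```javascript
// A listener that does no heavy work inline: it enqueues each incoming
// RPC payload for later processing and answers with an ACK right away.
function makeListener(enqueue) {
  return function handleRpcCall(payload) {
    enqueue(payload);          // hand off to the shared queue
    return { status: 'ACK' };  // respond immediately
  };
}

const pending = [];
const listener = makeListener((p) => pending.unshift(p));

const reply = listener({ method: 'doWork', args: [1, 2] });
console.log(reply.status);   // → 'ACK'
console.log(pending.length); // → 1
```

Keeping the handler this thin is what lets a single Node process ACK a large volume of calls: the event loop is never blocked by the actual processing.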

Apart from using Redis as the queue: if you want to make sure that, in the case of catastrophic failure, jobs are not lost, you'll need to implement a little more logic; from the RPOPLPUSH documentation (http://redis.io/commands/rpoplpush):

Pattern: Reliable queue

Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using RPOP (with polling), or BRPOP if the client is better served by a blocking operation.

However, in this context the obtained queue is not reliable, as messages can be lost: for example if there is a network problem, or if the consumer crashes just after the message is received but before it is processed.

RPOPLPUSH (or BRPOPLPUSH for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a processing list. It then uses the LREM command to remove the message from the processing list once the message has been processed.

An additional client may monitor the processing list for items that remain there for too long, and push those timed-out items onto the queue again if needed.
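The reliable-queue pattern quoted above can be sketched in memory (illustrative only; a real consumer would issue RPOPLPUSH and LREM against Redis, and the job names here are made up):

```javascript
// Reliable-queue pattern: move a message atomically from the main queue
// to a processing list, and remove it only after the work succeeds. If
// the worker crashes mid-job, the message survives in `processing` and
// a monitor can push it back onto `queue`.
const queue = ['job-2', 'job-1']; // 'job-1' was LPUSHed first, so it sits at the tail
const processing = [];

function rpoplpush(src, dst) {
  const msg = src.pop();                   // take from the tail of the queue...
  if (msg !== undefined) dst.unshift(msg); // ...and park it in the processing list
  return msg;
}

function lrem(list, value) {
  const i = list.indexOf(value);
  if (i !== -1) list.splice(i, 1);
}

const done = [];
let msg;
while ((msg = rpoplpush(queue, processing)) !== undefined) {
  done.push(msg);        // "process" the message
  lrem(processing, msg); // acknowledge: drop it from the processing list
}
console.log(done);       // → ['job-1', 'job-2']
console.log(processing); // → []
```

The key property is the window between `rpoplpush` and `lrem`: at every moment each message lives in exactly one of the two lists, so a crash never makes it vanish.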

Other answers

No other answers yet.



