Category: Erlang

2012-05-25 10:43:04

 

A long-time user of Python and Twisted, I have used them to develop production systems that processed 60–80 million messages a day. It wasn't pretty, but the code base was fairly small and it worked, if not simply. Our deployment machines were Sun T1000 Niagara-based machines. We processed messages in parallel, collected the results, and returned them to the master process. This was mostly massive string processing and ended up being very CPU intensive. We did the multi-threading in C extensions that were the interfaces to the third-party libraries we used for message classification; this was as much to avoid GIL issues as it was because we only had C interfaces to those libraries. So we basically created a Facade that delegated work in parallel and aggregated the results to return to the client process. All our server did was read from the socket, queue the message bodies in a thread-safe queue in a C extension that passed the message strings to the classifiers, and then asynchronously collect the results and send them back to the client of our Twisted server.

We immediately ran into scaling issues with Twisted. Once we completely saturated a single CPU core on our quad-core Intel Xeon development boxes, we knew the lower-powered Niagara boxes would fare even worse; the only way to use all the CPU power was to run multiple instances of our Twisted server. The T1000/Niagara boxes have 8 cores with 4 thread contexts each, for a total of 32 hardware threads, but each core works out to roughly the power of a 200 MHz Pentium Pro. The only thing we could do was run 20+ separate instances of our Twisted server on each T1000 to saturate the network and get reasonable throughput. Running 20 copies is an operations and monitoring nightmare. We had actually created a CPU-bound Twisted application, something that was pretty much unheard of on the Twisted mailing list.

I have since re-implemented the same protocol in Erlang/OTP, with surprising results. The Erlang/OTP version runs much faster, and it is not even optimized yet. I could move from lists of characters to binaries, which would use much less RAM and should be much faster, since I don't actually do anything with the data other than ship it off to sub-processes. This is just a proof of concept that I could run my old stress tests against. It is about 1/5 the amount of code of the Twisted version, and most importantly I only need to run a single copy to completely utilize a modern multi-core machine. That is my major sticking point right now: I have big honking multi-core hardware and I need software tools to match it.
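I haven't made that change yet, but as an illustrative sketch only (not part of the full example further down), a binary-mode version of the receive loop might look something like this. It assumes the socket is simply left in the binary mode set at listen time, and handle_line/1 is a hypothetical hand-off that spawns a throwaway worker per line:

%% Sketch: same loop as get_line/1 in the full example below, but with the
%% socket kept in binary mode so each line arrives as a binary, not a list.
get_binary_line(Socket) ->
    receive
        {tcp, Socket, Line} when is_binary(Line) ->
            handle_line(Line),
            get_binary_line(Socket);
        {tcp_closed, Socket} ->
            ok
    end.

%% Hypothetical hand-off: spawn a worker per line and carry on receiving.
handle_line(Line) ->
    spawn(fun() -> io:format("worker got ~p bytes~n", [byte_size(Line)]) end).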

Here is an example of a port of the LineReceiver functionality of Twisted.

It is amazing how much “batteries included” stuff Erlang has in it; you just need to keep the mailing list address handy to get help. :-)


 

-module(linereceiver).
-export([start/1]).

%% Block the calling process for T milliseconds (or forever with the atom infinity).
sleep(T) ->
    receive
    after T ->
        true
    end.

%% Spawn the listening server and keep the spawned process alive forever.
start(Port) ->
    spawn(fun() ->
            start_parallel_server(Port),
            sleep(infinity)
          end).

%% Open the listening socket and spawn the first acceptor process.
start_parallel_server(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, line}, {reuseaddr, true}, {active, true}]),
    spawn(fun() -> par_connect(Listen) end).

%% Accept a connection, spawn the next acceptor, then handle this socket.
par_connect(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    spawn(fun() -> par_connect(Listen) end),
    %% Deliver each line as a list (string) in an active-mode {tcp, ...} message.
    inet:setopts(Socket, [{packet, line}, list, {nodelay, true}, {active, true}]),
    io:format("Connection Made!~n"),
    get_line(Socket).

%% Receive one {tcp, Socket, Line} message per line until the peer closes.
get_line(Socket) ->
    receive
        {tcp, Socket, Line} ->
            io:format("Received Line:~p~n", [Line]),
            get_line(Socket);
        {tcp_closed, Socket} ->
            io:format("Connection Closed!~n"),
            void
    end.

To compile it, enter the following in the Erlang shell:

c(linereceiver).

then, still in the shell, enter

linereceiver:start(8080).

to start the server. Then connect to the running server with telnet on port 8080, as in the sample session below.
This should get you started on your way to writing highly scalable line-oriented protocol servers.
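As a rough illustration (the shell prompt numbers, the process id, and the exact line terminator will vary with your system and telnet client), a quick test session looks something like this. In the Erlang shell:

1> c(linereceiver).
{ok,linereceiver}
2> linereceiver:start(8080).
<0.40.0>
Connection Made!
Received Line:"hello\r\n"

And from another terminal:

$ telnet localhost 8080
hello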

FROM:

http://www.vertigrated.com/blog/2009/10/python-twisted-vs-erlang-otp/
