Hi,
At the risk of further fanning the flamewar, I thought I would do some
performance comparisons between Ice and omniORB. ZeroC make a big deal
out of Ice's performance -- it's mentioned before anything else in the
description of Ice on the ZeroC web site -- so it's good to see how it
really performs. I don't want to imply that performance is the only
important thing -- whether Ice is better or worse than CORBA in other
respects is a separate issue.
To do the comparison, I used the latency and throughput demos from the
Ice distribution. It didn't take much modification to turn them into
CORBA programs.
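For reference, the interfaces boil down to something like this in
CORBA IDL (my reconstruction of the shape of the demos, not the exact
files; the names are illustrative):

    // Reconstructed sketch of the test interfaces. Only the byte
    // sequence case is shown; the other element types follow the
    // same pattern.
    typedef sequence<octet> ByteSeq;

    interface Latency {
        void ping();
    };

    interface Throughput {
        void sendByteSeq(in ByteSeq data);               // "send"
        oneway void sendByteSeqOneway(in ByteSeq data);  // "oneway"
        ByteSeq recvByteSeq();                           // "return"
        ByteSeq echoByteSeq(in ByteSeq data);            // "echo"
    };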
The performance tests were done on a machine with an Intel Pentium 4
3.2 GHz hyper-threaded processor, running Fedora Core 3 Linux. Both
Ice and omniORB were compiled with g++ 3.4.4, with the -O3
optimisation level. The tests run on a single machine, between two
processes. I have only tested C++ so far. I might do some Python
comparisons later.
I used the latest stable releases of Ice and omniORB: 3.0.0 and 4.0.6
respectively.
Both Ice and omniORB are run with no special configuration, so they
use the default options for everything. For omniORB, I tried both the
default TCP transport and the Unix domain socket transport.
So, the results:
The latency test just pings an object. I ran each test three times and
picked the best result:
Ice tcp: time for 100000 pings: 6926.86ms
omniORB tcp: time for 100000 pings: 4566.58ms
omniORB unix: time for 100000 pings: 2373.28ms
omniORB is over 1.5 times faster using TCP, and more than 2.9 times
faster using Unix sockets.
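For reference, the timing loop is essentially just this (a simplified
sketch of the demo, not its exact code; Latency_ptr is the proxy type
generated from the Latency interface sketched above, and the header
name depends on the IDL compiler):

    #include <sys/time.h>
    #include <cstdio>
    #include "latency.hh"  // IDL-generated header (name varies)

    void runPings(Latency_ptr latency)
    {
        const int n = 100000;
        timeval start, end;
        gettimeofday(&start, 0);
        for (int i = 0; i < n; ++i)
            latency->ping();            // one synchronous round trip
        gettimeofday(&end, 0);
        double ms = (end.tv_sec  - start.tv_sec)  * 1000.0 +
                    (end.tv_usec - start.tv_usec) / 1000.0;
        printf("time for %d pings: %.2fms\n", n, ms);
    }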
The throughput test looks at the time taken to transfer large
sequences of various types. Each type is tested four ways: sending as
an argument, sending oneway as an argument, receiving as a return
value, and echoing.
The values shown are a throughput in MBit/s, calculated by the test.
Larger numbers are better.
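For a payload of B bytes transferred N times in t seconds, the figure
is roughly:

    MBit/s = (B * 8 * N) / (t * 10^6)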
Test                             Ice   omniORB tcp   omniORB unix
byte sequence
  send                        1180.1        2433.7         4788.8
  oneway                      1321.0        2504.5         4267.8
  return                       777.5        1184.1         1554.3
  echo                         950.7        1501.6         2250.3
string sequence
  send                          78.8          82.9           86.7
  oneway                        81.6          83.7           88.8
  return                        60.7          52.2           53.6
  echo                          66.2          64.4           67.9
struct: string and double
  send                         178.6         186.6          210.5
  oneway                       181.4         193.7          211.7
  return                       129.0         121.9          130.9
  echo                         145.8         150.0          159.9
struct: two longs and a double
  send                         611.4        1301.6         2118.0
  oneway                       567.7        1310.0         2059.1
  return                       486.7        1052.0         1520.8
  echo                         545.8        1159.4         1758.5
As you can see, omniORB mostly beats Ice. With fixed-length types, it
beats it significantly. Ice wins on a couple of the string sequence
tests and on one of the tests with the struct containing a string and
a double.
I think the results with strings are more an artifact of the way the
tests work than anything else. All the tests pre-build the sequences
they are going to transfer. Ice sensibly uses C++ standard strings to
represent the strings; CORBA just uses C-style char* arrays (plus
holder objects for deallocation). Constructing a C++ string is more
expensive than constructing a C string, but once it has been
constructed its length has been pre-calculated. Marshalling a sequence
of existing C++ strings therefore only requires one pass over each
string, while C-style strings require two passes. In a real
application, the strings would probably be constructed during
operation of the program, rather than right at the start, so I'm not
sure this difference would appear in many real situations.
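To make the one-pass versus two-pass point concrete, the difference
is roughly this (an illustrative sketch, not the actual marshalling
code of either ORB; real marshalling also writes the length and a
terminator):

    #include <cstring>
    #include <string>

    // C++ string: the length is pre-calculated, so one pass suffices.
    void marshalCxxString(const std::string& s, char*& buf)
    {
        std::size_t len = s.size();        // O(1), no scan needed
        std::memcpy(buf, s.data(), len);   // single pass over the data
        buf += len;
    }

    // C-style string: strlen() is a first pass over the data,
    // and the copy is a second.
    void marshalCString(const char* s, char*& buf)
    {
        std::size_t len = std::strlen(s);  // pass 1: find the length
        std::memcpy(buf, s, len);          // pass 2: copy the data
        buf += len;
    }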
I hope some people find this interesting.
Cheers,
Duncan.
--
-- Duncan Grisby --
-- dun@grisby.org --
Very interesting results. I have a few questions.
On Tue, 22 Nov 2005 12:12:55 +0000, Duncan Grisby wrote:
> Both Ice and omniORB are run with no special configuration, so they
> use the default options for everything. For omniORB, I tried both the
> default TCP transport and the Unix domain socket transport.

This decision concerns me, as the default configurations may
have vastly different philosophies. As an example, the
PostgreSQL database comes configured for operating on a small,
slow box - you *have* to reconfigure it to get *any* sort of
decent performance on a typical machine really used for
databases these days. It would be much more useful to see
these comparisons done with both Ice and oO *fully* tuned
for performance (see the config sketch at the end of this
message for the sort of knobs I mean).

How does oO 4.0.6 compare against other ORBs? Specifically,
TAO (which Ice uses in their comparisons) and oO 3.x? If
oO 4.0.6's performance against TAO is consistent with
Ice's performance against TAO, then the above oO vs Ice
numbers have an increased 'validity potential'. If they
don't (for example, if TAO is faster...), then I'm still
left wondering whose version of the performance tests was
done 'right'.
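To be concrete about what I mean by tuning: omniORB, for example,
exposes knobs like these in omniORB.cfg (the parameter names are from
the omniORB documentation; the values here are purely illustrative,
and I don't know which, if any, would change these results):

    # omniORB.cfg -- illustrative tuning knobs, not recommendations
    # dedicated thread per connection (1) vs. thread pool (0)
    threadPerConnectionPolicy = 1
    # cap on the server thread pool size
    maxServerThreadPoolSize = 100
    # GIOP message size limit in bytes (2 MB)
    giopMaxMsgSize = 2097152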
Hi Folks,
++ On Tue, 22 Nov 2005 12:12:55 +0000, Duncan Grisby wrote:
++ > Both Ice and omniORB are run with no special configuration, so they
++ > use the default options for everything. For omniORB, I tried both the
++ > default TCP transport and the Unix domain socket transport.
++
++ This decision concerns me, as the default configurations may
++ have vastly different philosophies. As an example, the
++ PostgreSQL database comes configured for operating on a small,
++ slow box - you *have* to reconfigure it to get *any* sort of
++ decent performance on a typical machine really used for
++ databases these days. It would be much more useful to see
++ these comparisons done with both Ice and oO *fully* tuned
++ for performance.
++
++ How does oO 4.0.6 compare against other ORBs? Specifically,
++ TAO (which Ice uses in their comparisons) and oO 3.x? If
++ oO 4.0.6's performance against TAO is consistent with
++ Ice's performance against TAO, then the above oO vs Ice
++ numbers have an increased 'validity potential'. If they
++ don't (for example, if TAO is faster...), then I'm still
++ left wondering whose version of the performance tests was
++ done 'right'.
omniORB is widely considered one of the fastest ORBs around. I
recommend you check out
for lots more information, benchmarks, and performance comparisons of
various middleware.
Thanks,
Doug
--
Dr. Douglas C. Schmidt                        Professor and Associate Chair
Electrical Engineering and Computer Science   TEL: (615) 343-8197
Institute for Software Integrated Systems     WEB:
Vanderbilt University, Nashville TN, 37203    NET: d.schm@vanderbilt.edu