<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small">Seb,</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small">This is great news. So if I am reading this correctly, we are now on par with other actor implementations in terms of order of magnitude (10^6 for the computation messaging rate looks promising). Which of your test programs is being used to do this benchmark?</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small">I spent yesterday in our department/faculty meeting. Almost back in business.</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small">George</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif;font-size:small"><br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>George K. 
Thiruvathukal, PhD<br></div><div style="font-size:12.7272720336914px"><div style="font-size:12.7272720336914px"><i style="font-size:12.7272720336914px">Professor of Computer Science</i><span style="font-size:12.7272720336914px">, Loyola University Chicago</span><br></div><div style="font-size:12.7272720336914px"><span style="font-size:12.7272720336914px"><i>Director</i>, Center for Textual Studies and Digital Humanities</span></div><div style="font-size:12.7272720336914px"><span style="font-size:12.7272720336914px"><i>Guest Faculty</i>, Argonne National Laboratory, Math and Computer Science Division</span></div><div style="font-size:12.7272720336914px"><div style="font-size:12.7272720336914px">Editor in Chief, <a href="http://www.computer.org/portal/web/computingnow/cise" target="_blank">Computing in Science and Engineering</a> (IEEE CS/AIP)<br></div><div><span style="font-size:12.7272720336914px">(w) <a href="http://gkt.tv/" target="_blank">gkt.tv</a> </span><span style="font-size:12.7272720336914px">(v) 773.829.4872</span><br></div><div><span style="font-size:12.7272720336914px"><br></span></div></div></div></div></div></div></div></div>
<br><div class="gmail_quote">On Fri, Oct 31, 2014 at 8:48 PM, Boisvert, Sebastien <span dir="ltr"><<a href="mailto:boisvert@anl.gov" target="_blank">boisvert@anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">> ________________________________________<br>
> From: <a href="mailto:biosal-bounces@lists.cels.anl.gov">biosal-bounces@lists.cels.anl.gov</a> [<a href="mailto:biosal-bounces@lists.cels.anl.gov">biosal-bounces@lists.cels.anl.gov</a>] on behalf of Boisvert, Sebastien [<a href="mailto:boisvert@anl.gov">boisvert@anl.gov</a>]<br>
> Sent: Thursday, October 30, 2014 9:26 PM<br>
> To: <a href="mailto:biosal@lists.cels.anl.gov">biosal@lists.cels.anl.gov</a><br>
> Subject: [BIOSAL] Progress on the message delivery time<br>
<div><div class="h5">> OK, in the industry, people are getting 50 M messages / second on a single machine with 48 x86-64 Opteron cores with Akka. They are using<br>
> 2 actors per core (96 actors). The article is online here: <a href="http://letitcrash.com/post/20397701710/50-million-messages-per-second-on-a-single-machine" target="_blank">http://letitcrash.com/post/20397701710/50-million-messages-per-second-on-a-single-machine</a><br>
> They are using a throughput setting of 20, which means that when an actor takes control of an x86 core, it can receive (one after the other)<br>
> up to 20 messages. I suppose therefore that any actor in the system has more than 1 in-flight message to any destination, otherwise the<br>
> throughput configuration would not do anything because each actor would statistically receive at most 1 message for any time window of, say, a couple of<br>
> μs. They don't mention in-flight messages in the blog article though, so I might be wrong on that anyway.<br>
> With the LMAX Disruptor (1-consumer 1-producer), a throughput of 6 M msg / sec is achieved.<br>
> According to <a href="http://musings-of-an-erlang-priest.blogspot.com/2012/07/i-only-trust-benchmarks-i-have-rigged.html" target="_blank">http://musings-of-an-erlang-priest.blogspot.com/2012/07/i-only-trust-benchmarks-i-have-rigged.html</a> :<br>
> Erlang can tick at 1 M msg / s and Akka can do 2 M msg / s.<br>
> In that same article, they are using more than 1 in-flight message per source actor ("so the right way to test this is to push a lot of messages to it").<br>
> In F#, one single actor is processing 4.6 M msg / s. I think, however, that their system is only running one actor, and also the code<br>
> contains some "shared memory black magic" here: <a href="http://zbray.com/2012/12/09/building-an-actor-in-f-with-higher-throughput-than-akka-and-erlang-actors/" target="_blank">http://zbray.com/2012/12/09/building-an-actor-in-f-with-higher-throughput-than-akka-and-erlang-actors/</a><br>
> Finally, the Gemini network (Cray XE6) is capable of moving "tens of millions of MPI messages per second": <a href="http://www.cray.com/Products/Computing/XE/Technology.aspx" target="_blank">http://www.cray.com/Products/Computing/XE/Technology.aspx</a><br>
> I think this figure is for the whole system, not for an individual node.<br>
><br>
> In Thorium:<br>
> Hardware: 32 x86-64 cores (Intel(R) Xeon(R) CPU E7-4830 @ 2.13GHz)<br>
> Rules: 100 actors per core; each source actor is allowed at most 1 in-flight request message (ACTION_PING).<br>
> Also, there is a timeout of 100 μs for the multiplexer, so messages typically wait at least 100 μs before leaving workers<br>
> so that they can be multiplexed.<br>
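The flush-on-timeout policy described above (buffer small messages per destination, flush when the batch is full or when the oldest message has waited longer than the timeout) can be sketched roughly as follows. This is an illustration only, not Thorium's actual code; all names and the structure are invented:

```python
import time

class SmallMessageMultiplexer:
    """Illustrative sketch: buffer small messages per destination and
    flush a combined batch once it is full, or once the oldest message
    has waited longer than the timeout (100 us in the runs below)."""

    def __init__(self, transport, timeout_us=100, max_batch=64):
        self.transport = transport        # callable taking (dest, [messages])
        self.timeout_s = timeout_us / 1e6
        self.max_batch = max_batch
        self.buffers = {}                 # dest -> (first_enqueue_time, [messages])

    def send(self, dest, message):
        first, msgs = self.buffers.setdefault(dest, (time.monotonic(), []))
        msgs.append(message)
        if len(msgs) >= self.max_batch:   # batch full: flush immediately
            self.flush(dest)

    def poll(self):
        # Called periodically by the worker loop; flushes stale batches.
        now = time.monotonic()
        for dest in list(self.buffers):
            first, _ = self.buffers[dest]
            if now - first >= self.timeout_s:
                self.flush(dest)

    def flush(self, dest):
        _, msgs = self.buffers.pop(dest)
        self.transport(dest, msgs)
```

Batching like this amortizes the per-message overhead, at the cost of the timeout acting as a floor on delivery latency, which is consistent with the waiting time noted above.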
> ###<br>
> 4x8, 100 actors per core<br>
> 4 nodes, 28 worker threads (4 * 7), 2800 actors (28 * 100)<br>
> Total sent message count: 112000000 (2800 * 40000)<br>
> Time: 196955912801 nanoseconds (196.955913 s)<br>
> Computation messaging rate: 568655.179767 messages / second<br>
> Node messaging rate: 142163.794942 messages / second<br>
> Worker messaging rate: 20309.113563 messages / second <=====================<br>
> Actor messaging rate: 203.091136 messages / second<br>
> With this, the delivery latency (at the worker level) is around 50 μs.<br>
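As a sanity check, the reported rates and the ~50 μs worker-level latency can be reproduced with plain arithmetic from the counts quoted above:

```python
# Reproduce the reported 4x8 rates from the raw counts of this run.
messages = 112_000_000        # 2800 actors * 40000 messages each
seconds = 196.955913          # reported elapsed time
nodes, workers, actors = 4, 28, 2800

computation_rate = messages / seconds     # messages / second, whole run
node_rate = computation_rate / nodes
worker_rate = computation_rate / workers
actor_rate = computation_rate / actors

# The ~50 us delivery latency is the inverse of the per-worker rate.
latency_us = 1e6 / worker_rate
print(f"{computation_rate:.0f} msg/s, {latency_us:.1f} us")  # 568655 msg/s, 49.2 us
```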
<br>
<br>
</div></div>The actor-level throughput was increased by ~84% (from ~203 to ~373 messages / second per actor):<br>
<span class=""><br>
<br>
4 nodes, 28 worker threads (4 * 7), 2800 actors (28 * 100)<br>
Total sent message count: 112000000 (2800 * 40000)<br>
</span>Time: 107121732947 nanoseconds (107.121733 s)<br>
Computation messaging rate: 1045539.471019 messages / second<br>
Node messaging rate: 261384.867755 messages / second<br>
Worker messaging rate: 37340.695394 messages / second<br>
Actor messaging rate: 373.406954 messages / second <=====================<br>
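Quick arithmetic on the two actor-level rates quoted in this thread:

```python
# Compare actor-level rates of the two 4x8 runs (before / after moving
# small message multiplexing into workers).
old_rate = 203.091136   # messages / second per actor, first run
new_rate = 373.406954   # messages / second per actor, second run
improvement_pct = (new_rate / old_rate - 1) * 100
print(f"{improvement_pct:.0f}%")   # 84%
```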
<div class="HOEnZb"><div class="h5"><br>
<br>
<br>
> ###<br>
> 1x4, 100 actors per core<br>
> 1 node, 3 worker threads (1 * 3), 300 actors (3 * 100)<br>
> Total sent message count: 12000000 (300 * 40000)<br>
> Time: 49616199643 nanoseconds (49.616200 s)<br>
> Computation messaging rate: 241856.492161 messages / second<br>
> Node messaging rate: 241856.492161 messages / second<br>
> Worker messaging rate: 80618.830720 messages / second <=====================<br>
> Actor messaging rate: 806.188307 messages / second<br>
> With this, the delivery latency (at the worker level) is around 12 μs.<br>
><br>
> I moved the small message multiplexing into workers. Now, the small message demultiplexing and the message<br>
> recycling code path must also be migrated to workers (outside of thorium_node).<br>
> My main goal is to reduce the 4x8 latency down to 12 μs (like 1x4).<br>
><br>
> Thanks.<br>
><br>
_______________________________________________<br>
BIOSAL mailing list<br>
<a href="mailto:BIOSAL@lists.cels.anl.gov">BIOSAL@lists.cels.anl.gov</a><br>
<a href="https://lists.cels.anl.gov/mailman/listinfo/biosal" target="_blank">https://lists.cels.anl.gov/mailman/listinfo/biosal</a><br>
</div></div></blockquote></div><br></div>