[BIOSAL] Results with Xeon and Xeon Phi

George K. Thiruvathukal gkt at cs.luc.edu
Mon Nov 3 22:18:54 CST 2014


Hi,

Sorry for just responding now. My Mondays are often a bit difficult (for
multiple reasons).

First, I am really glad to read in Seb's earlier e-mail that there is now a
way of decoupling the build from MPI. I don't actually plan on running code
without a transport (at least not at the moment), but being able to build
with gcc and no extra flags means the code can be studied and used out of
the box with IDEs. It would be great if we could eventually have a pure
implementation (designed for standalone multicore systems) that works with
ordinary processes/threads and shared memory. That would be a boon for
users who want to develop Thorium applications on platforms most of us
might never use (e.g. Windows), say, using Cygwin.
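Just to make that last idea concrete, here is the kind of thing I have in
mind: a toy shared-memory "transport" built from nothing but pthreads and a
mutex-protected queue. The names (queue_push, queue_pop, worker) are made
up for illustration and are not Thorium's actual interfaces.

    #include <pthread.h>
    #include <stdio.h>

    #define QUEUE_SIZE 64

    /* A single shared mailbox protected by a mutex. */
    struct queue {
        int messages[QUEUE_SIZE];
        int head;
        int tail;
        pthread_mutex_t lock;
        pthread_cond_t not_empty;
    };

    static struct queue inbox = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .not_empty = PTHREAD_COND_INITIALIZER
    };

    /* "Send": push a message into the shared queue. */
    static void queue_push(struct queue *q, int message)
    {
        pthread_mutex_lock(&q->lock);
        q->messages[q->tail++ % QUEUE_SIZE] = message;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    /* "Receive": block until a message is available. */
    static int queue_pop(struct queue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->head == q->tail)
            pthread_cond_wait(&q->not_empty, &q->lock);
        int message = q->messages[q->head++ % QUEUE_SIZE];
        pthread_mutex_unlock(&q->lock);
        return message;
    }

    static void *worker(void *arg)
    {
        (void) arg;
        printf("worker received message %d\n", queue_pop(&inbox));
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;

        pthread_create(&thread, NULL, worker, NULL);
        queue_push(&inbox, 42);  /* deliver one message via shared memory */
        pthread_join(&thread, NULL);
        return 0;
    }

Compile with gcc -pthread and it runs on any POSIX system; the point is
only that message passing between workers does not have to involve MPI at
all.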

Reading the rest of this thread reminded me of something: what do I need
to do to get access to some of these nifty systems (e.g. jenny)? I would
also like to be able to run Thorium programs on one of our clusters, but I
don't think I have access to anything yet (silverclaw is just a single VM,
right?). Perhaps we could give this some priority when I come in this
week; I feel a growing need to run code on real systems.

See you on Wednesday. I'm hoping to do some more coding tomorrow.

Best,
George




George K. Thiruvathukal, PhD
*Professor of Computer Science*, Loyola University Chicago
*Director*, Center for Textual Studies and Digital Humanities
*Guest Faculty*, Argonne National Laboratory, Math and Computer Science
Division
Editor in Chief, Computing in Science and Engineering
<http://www.computer.org/portal/web/computingnow/cise> (IEEE CS/AIP)
(w) gkt.tv (v) 773.829.4872


On Mon, Nov 3, 2014 at 5:10 PM, Boisvert, Sebastien <boisvert at anl.gov>
wrote:

> With rand_r instead of rand on Xeon E7:
>
> PERFORMANCE_COUNTER node-count = 1
> PERFORMANCE_COUNTER worker-count-per-node = 29
> PERFORMANCE_COUNTER actor-count-per-worker = 100
> PERFORMANCE_COUNTER worker-count = 29
> PERFORMANCE_COUNTER actor-count = 2900
> PERFORMANCE_COUNTER message-count-per-actor = 40000
> PERFORMANCE_COUNTER message-count = 116000000
> PERFORMANCE_COUNTER elapsed-time = 91.960110 s
> PERFORMANCE_COUNTER computation-throughput = 1261416.493718 messages / s
> PERFORMANCE_COUNTER node-throughput = 1261416.493718 messages / s
> PERFORMANCE_COUNTER worker-throughput = 43497.120473 messages / s
> PERFORMANCE_COUNTER worker-latency = 22990 ns
> PERFORMANCE_COUNTER actor-throughput = 434.971205 messages / s
> PERFORMANCE_COUNTER actor-latency = 2299002 ns
>
> I will redo the tests on jenny-mic0 since glibc was making a lot of
> syscalls!
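>
> In essence the change just gives each worker its own seed and calls
> rand_r instead of rand; glibc's rand takes a process-wide lock, which is
> presumably where the futex/_spin_lock time was going. A rough sketch
> (simplified, not the actual Thorium worker code):
>
>     #include <pthread.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>
>     #define DRAWS 1000000
>
>     /* Each worker owns its seed, so rand_r never touches shared state. */
>     static void *worker(void *arg)
>     {
>         unsigned int seed = (unsigned int) (size_t) arg;
>         long sum = 0;
>         int i;
>
>         for (i = 0; i < DRAWS; i++)
>             sum += rand_r(&seed);   /* no global lock, unlike rand() */
>
>         printf("worker with seed %u: sum = %ld\n",
>                (unsigned int) (size_t) arg, sum);
>         return NULL;
>     }
>
>     int main(void)
>     {
>         pthread_t threads[4];
>         size_t i;
>
>         for (i = 0; i < 4; i++)
>             pthread_create(&threads[i], NULL, worker, (void *) (i + 1));
>         for (i = 0; i < 4; i++)
>             pthread_join(threads[i], NULL);
>         return 0;
>     }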
>
>
> > From: Fangfang Xia [fangfang.xia at gmail.com]
> > Sent: Monday, November 03, 2014 3:47 PM
> > To: Boisvert, Sebastien
> > Cc: biosal at lists.cels.anl.gov
> > Subject: Re: [BIOSAL] Results with Xeon and Xeon Phi
> >
> >
> > This is interesting. I'm curious: what are the call stacks for these
> > spin locks?
> >
> > On Nov 3, 2014, at 3:35 PM, Boisvert, Sebastien <boisvert at anl.gov>
> wrote:
> > 42.42%  [kernel]                          [k] _spin_lock
> >
> >
> >
> _______________________________________________
> BIOSAL mailing list
> BIOSAL at lists.cels.anl.gov
> https://lists.cels.anl.gov/mailman/listinfo/biosal
>