WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] Re: Xen & I/O in clusters - problems!

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: Xen & I/O in clusters - problems!
From: John Enok Vollestad <john.enok@xxxxxxxxxxxx>
Date: Sat, 16 Oct 2004 20:05:05 +0000 (UTC)
Delivery-date: Sun, 17 Oct 2004 20:27:34 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <20041015104841.GD27151@xxxxxxxxxxx> <200410151402.31933.mark.williamson@xxxxxxxxxxxx> <20041015162724.GA23334@xxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: Loom/3.14 (http://gmane.org/)
Håvard Bjerke <Havard.Bjerke <at> idi.ntnu.no> writes:

> 
> On Fri, Oct 15, 2004 at 02:02:31PM +0000, Mark A. Williamson wrote:
> > > Between two nodes running plain redhat EL3 with kernel 2.4.21-15.EL:
> > >
> > >  786.034 MByte/s
> > >
> > > Between two nodes each running only xen domain 0:
> > >
> > >  56.480 MByte/s
> > 
> > That's surprising - I'd have expected any performance problems to involve 
> > unpriv domains somehow.  We've never had any performance problems when just 
> > running domain 0, even when the code was still under development...
> > 
> 
> What's possibly even more funny is that when I do the same benchmark
> localhost <-> localhost, ie. through the loopback interface, on domain 0,
> the bandwidth is halved. CPU use is ~100% during both these benchmarks on
> domain 0 (50% per process in the last benchmark). This indicates to me
> that the bandwidth depends on CPU resources. Some heavy processing is
> happening somewhere.
> 
> I suspect this might have something to do with the MPI library scaMPI,
> which is supposed to be more closely linked with the lower layers of the
> OSI protocol stack or something. I will investigate it further.

Around 2000 the driver used a mix of polling and interrupts to keep the
latency down.  If I remember correctly, it polled for about half the time an
interrupt would take before falling back to interrupt handling.

To let the adapters manipulate memory directly, the driver also has to
allocate memory in physically contiguous blocks.  Exactly how reads and
writes to these areas go through the driver I do not know, but it should not
hurt performance unless Xen has issues with MMU manipulation.

You could send Scali an email.


-- 
John Enok



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel
