WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Xen & I/O in clusters - Single Vs. Dual CPU issue

To: Xen Virtual Machine Monitor <xen-devel@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Xen & I/O in clusters - Single Vs. Dual CPU issue
From: Rune Johan Andresen <runejoha@xxxxxxxxxxx>
Date: Thu, 4 Nov 2004 18:42:34 +0100
Cc: Håvard Bjerke <Havard.Bjerke@xxxxxxxxxxx>, Rune Andresen <Rune.Johan.Andresen@xxxxxxxxxxx>
Delivery-date: Thu, 04 Nov 2004 17:53:16 +0000
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: <200411021651.35419.mark.williamson@xxxxxxxxxxxx>
Keywords: CERN SpamKiller Note: -49
Charset: west-latin
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <20041029183953.GB18329@xxxxxxxxxxx> <20041102161308.GD18329@xxxxxxxxxxx> <200411021635.36596.mark.williamson@xxxxxxxxxxxx> <200411021651.35419.mark.williamson@xxxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx

Well, now that the issue between the two Xen dom0 domains is solved, there is a new case we don't
understand:

With two physical nodes and 4 guest OSs (2 on each physical node) we get some strange results with ttcp (b=1000000, l=1000000):

Let's say we have two guest OSs on physical node A, A1 and A2, and two guest OSs on physical node B, B1 and B2.

Between A1 and B1 I get 110 000 KB/s (which is almost optimal!)
Between A1 and B2 I get 81 000 KB/s
Between A2 and B1 I get 94 000 KB/s

Do you have any idea why we get lower performance in the last two cases? It doesn't make sense. It can't be
a bottleneck in the network either, given case 1.(?)
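For anyone wanting to reproduce the measurements: the runs above can be sketched roughly as below. The guest names A1/A2/B1/B2 are the ones from this thread, but the exact invocation is an assumption based on classic ttcp flags (-t/-r for transmit/receive, -s to source/sink pattern data, -b for socket buffer size, -l for buffer length); on each receiving guest, ttcp must be started in receive mode first.

```shell
#!/bin/sh
# Receive side, started first on the destination guest (B1 or B2):
#   ttcp -r -s -b1000000 -l1000000
# Transmit side, run from the source guest (A1 or A2):
#   ttcp -t -s -b1000000 -l1000000 <receiver>
#
# Dry run: print the transmit command for each of the three
# source/destination pairs measured above.
for pair in "A1 B1" "A1 B2" "A2 B1"; do
    set -- $pair   # $1 = source guest, $2 = destination guest
    echo "[$1] ttcp -t -s -b1000000 -l1000000 $2"
done
```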

Cheers,
Rune


On Nov 2, 2004, at 5:51 PM, Mark A. Williamson wrote:

If you're using MPI over TCP/IP (which I imagine you are) then it should Just Work (TM). We have tried live migration with MPI applications but you
shouldn't have any problems moving the VMs around with a cluster.

Sorry I meant to say we have *not* tried live migration with MPI applications.

Note to self: read before clicking send!

Cheers,
Mark


-------------------------------------------------------
This SF.Net email is sponsored by:
Sybase ASE Linux Express Edition - download now for FREE
LinuxWorld Reader's Choice Award Winner for best database on Linux.
http://ads.osdn.com/?ad_id=5588&alloc_id=12065&op=click
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel


