xen-ia64-devel

Re: [Xen-ia64-devel] problems with smp

To: "Alex Williamson" <alex.williamson@xxxxxx>
Subject: Re: [Xen-ia64-devel] problems with smp
From: "David Brown" <dmlb2000@xxxxxxxxx>
Date: Thu, 8 Feb 2007 16:28:18 -0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> You can tune the Xen credit scheduler to effectively do what you're
> asking (I think).  AFAIK, dom0 already has some scheduling priority.
> See here for details on tweaking the credit scheduler:
>
> http://wiki.xensource.com/xenwiki/CreditScheduler

Thanks, I'll take a look at this, for sure...
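
If I'm reading that wiki page right, the knobs are just a per-domain
weight and cap, so giving dom0 a bigger share would be something along
these lines (the weight of 512 is only a guess I'd still have to test):

  # show dom0's current credit scheduler parameters
  xm sched-credit -d Domain-0
  # double dom0's weight from the default 256, leave it uncapped
  xm sched-credit -d Domain-0 -w 512 -c 0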

> You can't really "share" PCI devices.  At the PCI function level, one
> and only one domain can own a function.  This sometimes means you can
> split a dual-port card between 2 domains if the device exposes the 2 ports
> as 2 separate functions.  So you could give each domain its own NIC and
> SCSI device, if you have enough empty PCI slots.  I'm not sure that's
> going to help your situation though.  There's probably some tuning you
> can do on the vbd side too, like using LVM devices or raw block devices
> instead of disk images if you're not already.
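
Good to know. If I ever do get spare slots, I gather the per-function
hand-off is just hiding the function from dom0 and listing it in the
guest config, something like this (the BDF here is made up):

  # dom0: hide the function from its native driver (modular pciback)
  modprobe pciback hide="(0000:02:00.0)"
  # domU config file: hand that function to the guest
  pci = [ '0000:02:00.0' ]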

I'm definitely using raw block devices for each domU and passing them
straight through to the distributed filesystem. I only have one NIC to
use, so dom0 has a bridge that handles the network, which I guess is
more work for dom0...
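
For reference, the relevant lines in my domU configs look roughly like
this (device names changed for illustration):

  disk = [ 'phy:/dev/sdb1,xvda,w' ]   # raw block device, not a file:-backed image
  vif  = [ 'bridge=xenbr0' ]          # the single NIC, shared through the dom0 bridge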

If things go well there might be a press release out of this, and I'll
most certainly post a link to the ML about what I'm doing... I have to
run it by management, but I'll probably be able to share most of what
I'm doing (the actual code) as well when the time comes.

I really appreciate the help I've been getting from the ML. Thanks, all of you.

- David Brown

