xen-ia64-devel

Re: [Xen-ia64-devel] problems with smp

To: David Brown <dmlb2000@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] problems with smp
From: Alex Williamson <alex.williamson@xxxxxx>
Date: Thu, 08 Feb 2007 17:17:15 -0700
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 08 Feb 2007 16:16:38 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <9c21eeae0702081602v7c8dc8adr5e90efe65d1139a9@xxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: HP OSLO R&D
References: <9c21eeae0702071310g6a8bc60nf69721e445f91604@xxxxxxxxxxxxxx> <1170883890.11374.1.camel@xxxxxxxxxxxxx> <9c21eeae0702071737j5cdc0508t6dd741acbc291e7a@xxxxxxxxxxxxxx> <9c21eeae0702072040g222e2e42qa1bcdf8e8531e197@xxxxxxxxxxxxxx> <67C74B5C0666C9takebe_akio@xxxxxxxxxxxxxx> <9c21eeae0702080910l5f7214d9n713f51f8783cc11d@xxxxxxxxxxxxxx> <1170977418.30297.156.camel@bling> <9c21eeae0702081542y3ce148aej24818d6c75dd3cbb@xxxxxxxxxxxxxx> <1170978989.30297.163.camel@bling> <9c21eeae0702081602v7c8dc8adr5e90efe65d1139a9@xxxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Thu, 2007-02-08 at 16:02 -0800, David Brown wrote:
> >    What kind of load are you running?  If it involves I/O, then all that
> > has to go through dom0 and can bottleneck the other domains.  There's
> > also some overhead in scheduling between vCPUs and time spent running in
> > Xen itself, so you may not be able to reach that 200% total, but I'd
> > think you could get closer than 150%.  Thanks,
> >
> 
> Yeah, it's very I/O driven, running a distributed filesystem under
> Xen... so could the Xen kernel be switching too much then? Is there
> any way to provide some nice-like level to an OS so that it can get
> most of the running time? Would PCI sharing rather than block sharing
> help (if there's more than one block device per PCI device)?

   You can tune the Xen credit scheduler to effectively do what you're
asking (I think).  AFAIK, dom0 already has some scheduling priority.
See here for details on tweaking the credit scheduler:

http://wiki.xensource.com/xenwiki/CreditScheduler
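
For example, with the xm toolstack you can bump dom0's weight relative
to the guests, or cap a guest's share, along these lines (the domain
names are just placeholders; check "xm help sched-credit" on your
build for the exact options):

    # show the current weight/cap for dom0 (default weight is 256)
    xm sched-credit -d Domain-0
    # give dom0 twice the default share of CPU time
    xm sched-credit -d Domain-0 -w 512
    # cap a guest at 50% of one physical CPU
    xm sched-credit -d guest1 -c 50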

You can't really "share" PCI devices.  At the PCI function level, one
and only one domain can own a function.  This sometimes means you can
split a dual-port card among 2 domains if the device exposes the 2 ports
as 2 separate functions.  So you could give each domain its own NIC and
SCSI device, if you have enough empty PCI slots.  I'm not sure that's
going to help your situation though.  There's probably some tuning you
can do on the vbd side too, like using LVM devices or raw block devices
instead of disk images if you're not already.
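
As a rough sketch, the relevant lines in a guest config might look
something like this (the volume path, device name and PCI address are
only examples; for the pci= line the device also has to be hidden from
dom0 via pciback first):

    # vbd backed by an LVM logical volume instead of a file-based image
    disk = [ 'phy:/dev/vg0/guest1-root,hda1,w' ]
    # dedicate an entire PCI function (e.g. a spare NIC) to this guest
    pci  = [ '03:00.0' ]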

        Alex

-- 
Alex Williamson                             HP Open Source & Linux Org.


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel