Re: [Xen-devel] [PATCH] tools/python/xend: VBD QoS policy bits

To: Christoph Egger <Christoph.Egger@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH] tools/python/xend: VBD QoS policy bits
From: William Pitcock <nenolod@xxxxxxxxxxxxxxxx>
Date: Fri, 14 Aug 2009 20:21:15 +0400 (MSD)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <200908141719.22764.Christoph.Egger@xxxxxxx>

Hi,

----- "Christoph Egger" <Christoph.Egger@xxxxxxx> wrote:

> Does this enforce the VBD to do 5000 IO operations or none at all, or is
> this considered as an upper limit?

That depends on the dom0 kernel, obviously... but my kernel-side patch
treats the specified IOPS target as an upper limit, while also allowing
some amount of initial burst above it.
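
To make the intended semantics concrete, here is a minimal token-bucket sketch
in Python (illustrative only -- the actual limiter lives in the dom0 kernel
patch and its internals are not shown in this mail; the class and parameter
names below are made up for illustration). The bucket starts full, which gives
the initial burst, and then refills at the configured IOPS target, which
becomes the sustained ceiling:

import time

class IopsLimiter:
    """Token bucket: sustained rate = iops_target, initial burst = burst."""
    def __init__(self, iops_target, burst):
        self.rate = float(iops_target)   # refill rate in ops/sec (the ceiling)
        self.capacity = float(burst)     # maximum tokens the bucket can hold
        self.tokens = float(burst)       # starts full -> initial burstability
        self.last = time.monotonic()

    def allow(self, nr_ops=1):
        """True if nr_ops requests may be issued now; otherwise the caller
        queues or delays them."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nr_ops:
            self.tokens -= nr_ops
            return True
        return False

# e.g. cap a VBD at 5000 IOPS, with room to burst 1000 ops before throttling
limiter = IopsLimiter(iops_target=5000, burst=1000)
if limiter.allow():
    pass  # submit the I/O request to the backend

So with these semantics the VBD is never forced to do 5000 operations; it is
simply prevented from sustaining more than that.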

The goal here is to stop domUs that have gone swap-happy from degrading
system performance (especially with, say, SANs) while the domU's owner is
not around to fix the problem... basically, instead of the entire storage
volume's resources being consumed, the requests going to the physical volume
become rate-limited.
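
For illustration, from the administrator's side the effect would be something
like the following guest-config sketch (hypothetical -- the exact option name
and syntax added by the xend patch are not shown in this mail):

disk = [ 'phy:/dev/vg0/guest1,xvda,w' ]   # ordinary VBD definition
vbd_iops_limit = 5000                     # hypothetical per-VBD IOPS ceiling

With something like this in place, a misbehaving guest only slows itself down;
other guests sharing the same physical volume keep their expected throughput.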

I submitted some patches earlier (around March, then May) which worked a bit
differently; I feel this approach is more self-explanatory and generally more
robust.

William

