Re: [Xen-users] Xen qos

To: Felix Chu <felixchu@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Xen qos
From: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Date: Sat, 11 Apr 2009 11:11:14 +0700
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 10 Apr 2009 21:11:55 -0700
In-reply-to: <027401c9ba59$2450caa0$6cf25fe0$@com>
References: <027401c9ba59$2450caa0$6cf25fe0$@com>
On Sat, Apr 11, 2009 at 10:53 AM, Felix Chu
<felixchu@xxxxxxxxxxxxxxxxxxxx> wrote:
> Hi, I am using Xen 3.3.1 on CentOS 5.2. I plan to run 20 domUs on a
> single physical host (2 x quad-core Xeon). But before going to
> production, I worry about QoS:
> 1.      For CPU: if one of the domUs runs a CPU-intensive app (e.g. a
> buggy app stuck in an infinite loop), how can I prevent it from
> affecting the other domUs?

See http://wiki.xensource.com/xenwiki/CreditScheduler for how the
credit scheduler's weight and cap parameters work, and run "xm
create --help_config" to see the config file syntax.
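As a rough sketch (the domain name "vm01" and the exact numbers here
are made up for illustration), the credit scheduler's cap can keep a
runaway guest from eating more than its share:

    # Keep the default weight (256) but cap vm01 at 100% of one
    # physical CPU, so an infinite loop inside it cannot starve
    # the other domUs:
    xm sched-credit -d vm01 -w 256 -c 100

    # Check the current weight/cap of a domain:
    xm sched-credit -d vm01

With 20 domUs on 8 cores you would mostly tune weights, since weight
is a proportional share that only matters under contention; a cap is
a hard ceiling that is enforced even when the box is otherwise idle,
so use it only where you really need one.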

> 2.      For network: I would like to control the amount of data
> transferred per domU (e.g. 10GB in/out per day). Is there any way to
> do it from dom0, without any configuration inside the domU?

No easy setup that I know of.
Your best bet is probably tc or tcng to limit bandwidth usage (not
the total amount of data transferred), and SNMP to monitor usage.
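As a rough sketch (assuming the guest's backend interface in dom0 is
named vif1.0; check ifconfig in dom0 to find the right one, and the
10mbit rate is just an example), a token bucket filter on the vif
shapes traffic going into the domU, and an ingress policer handles
traffic coming out of it:

    # Traffic dom0 sends to the domU (the guest's "download"):
    tc qdisc add dev vif1.0 root tbf rate 10mbit burst 32kb latency 400ms

    # Traffic arriving from the domU (the guest's "upload"); drop
    # anything above the rate with a simple policer:
    tc qdisc add dev vif1.0 ingress
    tc filter add dev vif1.0 parent ffff: protocol ip u32 \
        match u32 0 0 police rate 10mbit burst 32kb drop flowid :1

Note this caps throughput, not a daily byte total; for the 10GB/day
quota you would still have to read byte counters (SNMP ifInOctets/
ifOutOctets on the vif, or iptables byte counters) from a cron job
and react when a guest goes over.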



