RE: [Xen-devel] OOM killer problem

To: "'Keir Fraser'" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] OOM killer problem
From: "Satoshi Uchida" <s-uchida@xxxxxxxxxxxxx>
Date: Mon, 19 Jun 2006 17:11:22 +0900
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <c44bb6850b7797abcbc32d337fce4534@xxxxxxxxxxxx>

Thank you for your reply.

>> In my examination, the vbd thread often went into the OO state.

> We'll need more info. You mention the vbd thread, so is it usually
> domain0 that ends up in the OOM state? How much memory
> do you allocate to domain0 and to each other VM?

My environment is:
 x86 (Pentium 4), 2.5GB memory, one IDE disk and one SCSI disk.

I allocate 1GB of memory to domain0 and 256MB to each domainU (4 VMs).
A 2.5GB swap is allocated on the IDE disk for domain0, and there is no swap
for the domainUs.
The dd program runs on each domainU.
The VM disks for the domainUs are physical partitions on the SCSI disk
(e.g. /dev/sda1, /dev/sda2, /dev/sda3, /dev/sda4).
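For reference, each domainU is started from an xm-style config roughly like
the sketch below (only illustrative; the kernel path, domain name and exact
partition are placeholders, not my actual files):

# Illustrative xm domU config (Python syntax); kernel path, name and
# partition are placeholders, not my actual setup.
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256                    # 256MB per domainU
name   = "domU1"
vif    = [ '' ]
# one physical SCSI partition as the VBD, no swap device
disk   = [ 'phy:sda1,sda1,w' ]
root   = "/dev/sda1 ro"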

The OOM killer wakes up in each domainU and kills daemons such as sshd.
Thereafter it kills login and bash, so the console returns to the login prompt.

The OO state above refers to state in the vbd-back (blkback) driver.
It seems to me that each variable counts the following:
  oo_req is blocked requests.
  rd_req is pending read requests.
  wr_req is pending write requests.

With three VMs, the OO state does not occur very often.
However, with four VMs, the OO state occurs very frequently.

So it seems to me that each VM's I/O is interfered with by I/O from the other
VMs and becomes slower.
Therefore I/O requests pile up, and the OOM killer is woken up.

I have created an observation program for VBD I/O requests.
I will send this patch in a few days.
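The idea is roughly as below (only a sketch, in Python for brevity; the sysfs
path and counter layout are assumptions that depend on the dom0 kernel, and
the real patch will follow):

# Sketch of a VBD I/O counter observer. The sysfs path is an assumption;
# adjust it to wherever your dom0 kernel exports the blkback statistics.
import glob, time

COUNTERS = ("oo_req", "rd_req", "wr_req")

def read_vbd_stats():
    # One directory per backend VBD, e.g. .../vbd-3-2049/statistics
    stats = {}
    for d in glob.glob("/sys/devices/xen-backend/vbd-*/statistics"):
        vbd = d.split("/")[-2]
        stats[vbd] = {}
        for name in COUNTERS:
            try:
                stats[vbd][name] = int(open("%s/%s" % (d, name)).read())
            except IOError:
                stats[vbd][name] = None   # counter not exported
    return stats

while True:
    for vbd, counters in sorted(read_vbd_stats().items()):
        print(vbd, counters)
    time.sleep(1)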


>> So, I think that the cause is that I/O requests are not processed
>> fast enough.
>> According to some web pages, this problem has also been seen when the
>> file system is slow.

> Can you clarify what you mean here?

In the past, the OOM killer has been reported as a problem with various file
systems, e.g. ext3 and NFS.
I have not researched these problems enough, but I think a situation occurs
which is similar to the one explained above.

Thank you,
Satoshi UCHIDA

Attachment: smime.p7s
Description: S/MIME cryptographic signature

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel