Thank you for your reply.
>> In my examination, the vbd thread often entered the OO state.
> We'll need more info. You mention the vbd thread, so is it usually
> domain0 that ends up in the OOM state? How much memory
> do you allocate to domain0 and to each other VM?
My environment is:
x86 (Pentium 4), 2.5GB of memory, and both IDE and SCSI disks.
I allocate 1GB of memory to domain0 and 256MB to each domainU (4 VMs).
domain0 has 2.5GB of swap on the IDE disk; the domainUs have no swap.
The dd program runs on each domainU.
The domainU disks are physical partitions on the SCSI disk
(e.g. /dev/sda1, /dev/sda2, /dev/sda3, /dev/sda4).
The OOM killer wakes up in each domainU and kills daemons such as sshd.
After that it kills login and bash, so the console drops back to the login prompt.
The OO states above are what the vbd backend driver reports.
I think each variable counts the following:
  oo_req is blocked requests.
  rd_req is pending read requests.
  wr_req is pending write requests.
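
For reference, here is a very rough sketch of how I understand such per-vbd
counters to be maintained in a blkback-style dispatch path. The structure and
names below are my own assumption for illustration, not the actual driver code:

/* Rough sketch only -- struct and names are assumptions, not real blkback code. */
struct vbd_stats {
    unsigned long oo_req;   /* requests that could not be accepted (out of resources) */
    unsigned long rd_req;   /* read requests taken from the ring */
    unsigned long wr_req;   /* write requests taken from the ring */
};

/* Called once for each request taken off the shared ring. */
static void account_request(struct vbd_stats *st, int is_write, int resources_free)
{
    if (!resources_free) {
        /* The backend cannot service the request right now; it stays queued. */
        st->oo_req++;
        return;
    }
    if (is_write)
        st->wr_req++;
    else
        st->rd_req++;
}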
With three VMs, the OO state rarely occurs.
With four VMs, however, it occurs very frequently.
So I think each VM's I/O is interfered with by I/O from the other VMs and
therefore becomes slower. As a result, I/O requests pile up, dirty pages
cannot be written back quickly in the 256MB domainUs (which have no swap),
and the OOM killer is woken up.
I have written an observation program for VBD I/O requests.
I will send the patch in a few days.
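
Until then, just to illustrate the kind of observation I mean (this is not the
actual patch), a trivial user-space sampler could simply re-read the counter
files every second and print them. The sysfs paths mentioned in the comment are
only an assumption for illustration:

/* Rough illustration only -- not the actual patch.
 * Periodically print counters read from files given on the command line,
 * e.g. hypothetical sysfs nodes such as
 * /sys/devices/xen-backend/vbd-<domid>-<dev>/statistics/oo_req. */
#include <stdio.h>
#include <unistd.h>

static long read_counter(const char *path)
{
    FILE *f = fopen(path, "r");
    long v = -1;

    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <counter-file>...\n", argv[0]);
        return 1;
    }

    for (;;) {
        for (int i = 1; i < argc; i++)
            printf("%s = %ld  ", argv[i], read_counter(argv[i]));
        printf("\n");
        sleep(1);
    }
}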
>> So, I think that the cause is that I/O requests are not processed
>> fast enough.
>> According to some Web pages, this problem is also seen when the file
>> system is slow.
> Can you clarify what you mean here?
In the past, the OOM killer has been reported as a problem on various file
systems, e.g. ext3 and NFS.
I have not researched those problems enough, but I think a situation similar
to the one explained above occurs there as well.
Thank you,
Satoshi UCHIDA