WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] limitation in the process address space size,

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] limitation in the process address space size,
From: Lamia Youseff <lyouseff@xxxxxxxxxxx>
Date: Wed, 12 Sep 2007 01:34:23 -0700
Delivery-date: Wed, 12 Sep 2007 01:35:02 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.6 (Macintosh/20070728)
Hi,
I am experimenting with the impact of Xen paravirtualization on computational workloads, and I observed some strange behavior in the domains (dom0 with 256 MB; I have not verified it for domU yet).

As I request more pages for my process address space (through regular malloc calls), Xen appears to place a limit on how much of the process address space can stay in real memory. Above this threshold, the process swaps heavily, which hurts the computational performance of my code. To be more specific, pseudo code for my program is shown below. My dom0 is allocated 256 MB at initialization. I then run my code and measure its performance in MFLOPS, along with the swap activity, as the process requests more memory. I observed that when the process resident set size (pages in real memory) reaches about 78.39 MB (20,070 pages), the process starts swapping memory pages. I did not see the same performance degradation when I allocate 756 MB for dom0, and it is definitely not the behavior I get from the native kernel on the same machine, where there is no performance degradation. I would appreciate it if someone could shed some light on this kernel behavior. Please ask if I have not given enough details of the problem here.
Thank you,
Lamia Youseff


while (true) {
    malloc X more bytes;
    fill the new bytes with random numbers;
    do some floating-point operations and measure performance;
    measure swapped pages and RSS (resident set size);
}


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
