xen-ia64-devel

Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation

To: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Subject: Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation
From: Jarod Wilson <jwilson@xxxxxxxxxx>
Date: Thu, 02 Aug 2007 11:36:06 -0400
Cc: Alex Williamson <alex.williamson@xxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 02 Aug 2007 08:33:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <46B1E766.7000003@xxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Red Hat, Inc.
References: <46AFF7F6.5090105@xxxxxxxxxx> <1185943424.6802.98.camel@bling> <20070801052434.GC14448%yamahata@xxxxxxxxxxxxx> <46B08EE2.5020106@xxxxxxxxxx> <46B0ACEB.3080200@xxxxxxxxxx> <46B0C21C.9010605@xxxxxxxxxx> <46B0D5AF.1050309@xxxxxxxxxx> <20070802021200.GA6395%yamahata@xxxxxxxxxxxxx> <46B1E766.7000003@xxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.5 (X11/20070719)
Jarod Wilson wrote:
> Isaku Yamahata wrote:
>> On Wed, Aug 01, 2007 at 02:49:19PM -0400, Jarod Wilson wrote:
>>
>>>> Rather than that approach, a simple 'max_dom0_pages =
>>>> avail_domheap_pages()' is working just fine on both my 4G and 16G boxes,
>>>> with the 4G box now getting ~260MB more memory for dom0 and the 16G box
>>>> getting ~512MB more. Are there potential pitfalls here? 
>> Hi Jarod. Sorry for the delayed reply.
>> Reviewing Alex's mail, it might have been using up the xenheap at that time.
>> However, now that the p2m table is allocated from the domheap, the
>> memory for the p2m table needs to be counted. It can be estimated
>> very roughly as dom0_pages / PTRS_PER_PTE, where PTRS_PER_PTE = 2048
>> with a 16KB page size and 1024 with an 8KB page size...
>>
>> The p2m table needs about  2MB for  4GB of dom0 with a 16KB page size,
>>                     about  8MB for 16GB,
>>                     about 43MB for 86GB,
>>                     about 48MB for 96GB.
>>
>> (This counts only PTE pages and assumes that dom0 memory is contiguous.
>> A more precise calculation would also count PMD and PGD pages and account
>> for sparseness, but those amount to only KB of memory; even for a 1TB
>> dom0 it would be about 1MB, so I ignored them.)
>>
>> With max_dom0_pages = avail_domheap_pages() as you proposed, I suppose
>> we would end up using the xenheap for the p2m table. The xenheap is at
>> most 64MB, so it is precious.
>>
>> How about this heuristic?
>> max_dom0_pages = avail_domheap_pages() - avail_domheap_pages() / PTRS_PER_PTE;
> 
> Sounds quite reasonable to me. I'm build- and boot-testing an updated
> patch which, assuming all goes well, I'll ship off to the list a bit
> later today...
> 
> Ah, one more thing I'm adding: if one specifies dom0_mem=0 on the xen
> command line, that'll now allocate all available memory.

...and here it is. I shuffled a few things around in the max_dom0_size
calculation for better readability and to avoid multiple calls to
avail_domheap_pages() (my assumption being that it gets increasingly
costly on larger and larger systems).

Indeed, on my 16GB system, dom0 gets only 8MB less than with the v2
incantation, and the dom0_mem=0 option does properly allocate all
available memory to dom0. I'm quite happy with this version if everyone
else is...
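
For anyone who wants to sanity-check the numbers, here's a rough
standalone sketch (not part of the patch) of the heuristic using the
16KB-page figures from Isaku's mail. avail_pages simply stands in for
avail_domheap_pages(), and it assumes a 64-bit unsigned long as on ia64:

    #include <stdio.h>

    #define PAGE_SIZE    (16UL * 1024)  /* 16KB pages */
    #define PTRS_PER_PTE 2048UL         /* PTEs per 16KB PTE page */

    int main(void)
    {
        /* pretend the domheap has roughly 16GB free */
        unsigned long avail_pages = (16UL << 30) / PAGE_SIZE;
        unsigned long p2m_pages   = avail_pages / PTRS_PER_PTE;
        unsigned long max_dom0    = (avail_pages - p2m_pages) * PAGE_SIZE;

        /* for 16GB this prints an 8MB p2m reserve and 16376MB for dom0 */
        printf("p2m reserve: %luMB, max dom0: %luMB\n",
               (p2m_pages * PAGE_SIZE) >> 20, max_dom0 >> 20);
        return 0;
    }

That 8MB reserve is exactly the difference noted above between this
version and v2.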

-- 
Jarod Wilson
jwilson@xxxxxxxxxx

Some ia64 xen dom0 tweaks:
* Increase default memory allocation from 512M to 4G
* Increase default vcpu allocation from 1 to 4
* If need be, scale down requested memory allocation to fit
  available memory, rather than simply panicking
* If dom0_mem=0 is specified, allocate all available mem

Signed-off-by: Jarod Wilson <jwilson@xxxxxxxxxx>

diff -r 039f2ccb1e38 xen/arch/ia64/xen/domain.c
--- a/xen/arch/ia64/xen/domain.c        Tue Jul 31 10:30:40 2007 -0600
+++ b/xen/arch/ia64/xen/domain.c        Thu Aug 02 11:29:34 2007 -0400
@@ -52,10 +52,11 @@
 #include <asm/perfmon.h>
 #include <public/vcpu.h>
 
-static unsigned long __initdata dom0_size = 512*1024*1024;
+/* dom0_size: default memory allocation for dom0 (~4GB) */
+static unsigned long __initdata dom0_size = 4096UL*1024UL*1024UL;
 
 /* dom0_max_vcpus: maximum number of VCPUs to create for dom0.  */
-static unsigned int __initdata dom0_max_vcpus = 1;
+static unsigned int __initdata dom0_max_vcpus = 4;
 integer_param("dom0_max_vcpus", dom0_max_vcpus); 
 
 extern char dom0_command_line[];
@@ -1195,8 +1196,35 @@ static void __init loaddomainelfimage(st
        }
 }
 
-void __init alloc_dom0(void)
-{
+static void __init calc_dom0_size(void)
+{
+       unsigned long domheap_pages;
+       unsigned long p2m_pages;
+       unsigned long max_dom0_size;
+
+       /* Estimate maximum memory we can safely allocate for dom0
+        * by subtracting the p2m table allocation from the available
+        * domheap pages. */
+       domheap_pages = avail_domheap_pages();
+       p2m_pages = domheap_pages / PTRS_PER_PTE;
+       max_dom0_size = (domheap_pages - p2m_pages) * PAGE_SIZE;
+       printk("Maximum permitted dom0 size: %luMB\n",
+              max_dom0_size / (1024*1024));
+
+       /* validate proposed dom0_size, fix up as needed */
+       if (dom0_size > max_dom0_size) {
+               printk("Reducing dom0 memory allocation from %luK to %luK "
+                      "to fit available memory\n",
+                      dom0_size / 1024, max_dom0_size / 1024);
+               dom0_size = max_dom0_size;
+       }
+
+       /* dom0_mem=0 can be passed in to give all available mem to dom0 */
+       if (dom0_size == 0) {
+               printk("Allocating all available memory to dom0\n");
+               dom0_size = max_dom0_size;
+       }
+
        /* Check dom0 size.  */
        if (dom0_size < 4 * 1024 * 1024) {
                panic("dom0_mem is too small, boot aborted"
@@ -1261,6 +1289,8 @@ int __init construct_dom0(struct domain 
        BUG_ON(v->is_initialised);
 
        printk("*** LOADING DOMAIN 0 ***\n");
+
+       calc_dom0_size();
 
        max_pages = dom0_size / PAGE_SIZE;
        d->max_pages = max_pages;
diff -r 039f2ccb1e38 xen/arch/ia64/xen/xensetup.c
--- a/xen/arch/ia64/xen/xensetup.c      Tue Jul 31 10:30:40 2007 -0600
+++ b/xen/arch/ia64/xen/xensetup.c      Wed Aug 01 13:44:31 2007 -0400
@@ -46,7 +46,6 @@ extern void early_setup_arch(char **);
 extern void early_setup_arch(char **);
 extern void late_setup_arch(char **);
 extern void hpsim_serial_init(void);
-extern void alloc_dom0(void);
 extern void setup_per_cpu_areas(void);
 extern void mem_init(void);
 extern void init_IRQ(void);
@@ -469,8 +468,6 @@ void __init start_kernel(void)
 
     trap_init();
 
-    alloc_dom0();
-
     init_xenheap_pages(__pa(xen_heap_start), xenheap_phys_end);
     printk("Xen heap: %luMB (%lukB)\n",
        (xenheap_phys_end-__pa(xen_heap_start)) >> 20,
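
For testing the dom0_mem=0 case, a hypothetical elilo.conf stanza that
hands all memory to dom0 would look something like the following (the
file names and the rest of the append line are made up for illustration;
options before the "--" go to the hypervisor):

    image=vmlinuz-2.6.18-xen
        label=xen
        vmm=xen.gz
        initrd=initrd-2.6.18-xen.img
        read-only
        append="dom0_mem=0 -- root=/dev/sda2 console=ttyS0"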

Attachment: signature.asc
Description: OpenPGP digital signature

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel