WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ia64-devel


To: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Subject: Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation
From: Jarod Wilson <jwilson@xxxxxxxxxx>
Date: Wed, 01 Aug 2007 14:49:19 -0400
Cc: Alex Williamson <alex.williamson@xxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 01 Aug 2007 11:46:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <46B0C21C.9010605@xxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Red Hat, Inc.
References: <46AFF7F6.5090105@xxxxxxxxxx> <1185943424.6802.98.camel@bling> <20070801052434.GC14448%yamahata@xxxxxxxxxxxxx> <46B08EE2.5020106@xxxxxxxxxx> <46B0ACEB.3080200@xxxxxxxxxx> <46B0C21C.9010605@xxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.5 (X11/20070719)
Jarod Wilson wrote:
> Jarod Wilson wrote:
>> Jarod Wilson wrote:
>>> Isaku Yamahata wrote:
>>>> On Tue, Jul 31, 2007 at 10:43:44PM -0600, Alex Williamson wrote:
>>>>>> +       /* maximum available memory for dom0 */
>>>>>> +       max_dom0_pages = avail_domheap_pages() -
>>>>>> +                        min(avail_domheap_pages() /
>>>>>> +                        16UL, 512UL << (20 - PAGE_SHIFT)) ;
>>>>>    I assume this heuristic came from Akio's patch in the thread you
>>>>> referenced; can anyone explain how this was derived and why it's
>>>>> necessary?  It looks like a fairly random fudge factor.  Thanks,
>>>> I guess it comes from compute_dom0_nr_pages() under arch/x86.
>>>> However, I don't know why compute_dom0_nr_pages() is the way it is.
>>>> In any case, it should be different for ia64. While I'm guessing the
>>>> most dominant factor is the p2m table, the domain0 building process
>>>> should be revised for a correct estimation.
>>> The version above does seem to work well for me on all the boxes I've
>>> tested it on, but I'm definitely all ears for how exactly to obtain a
>>> better calculation. I'm not familiar enough with the memory layout to
>>> easily come up with it myself, so if anyone else has a suggestion
>>> there, please do speak up.
>> Still reading over code, but throwing this idea out there... Would it
>> make sense to use efi_memmap_walk() to determine max_dom0_size? And if
>> so, should the size of the xenheap be subtracted from that?
> 
> Rather than that approach, a simple 'max_dom0_pages =
> avail_domheap_pages()' is working just fine on both my 4G and 16G boxes,
> with the 4G box now getting ~260MB more memory for dom0 and the 16G box
> getting ~512MB more. Are there potential pitfalls here? I once heard a
> brief explanation of why the fudge factor was needed on x86, but it now
> escapes me, and I think ia64 may be good to go without it...
> 
> --
> (XEN) System RAM: 4069MB (4166832kB)
> (XEN) size of virtual frame_table: 10256kB
> (XEN) virtual machine to physical table: f3fffffff7e00070 size: 2096kB
> (XEN) max_page: 0x103fff2
> (XEN) allocating frame table/mpt table at mfn 0.
> (XEN) Xen heap: 60MB (61664kB)
> (XEN) Domain heap initialised: DMA width 32 bits
> (XEN) avail:0x1180c60000000000,
> status:0x60000000000,control:0x1180c00000000000, vm?0x0
> (XEN) No VT feature supported.
> (XEN) cpu_init: current=f000000004118000
> (XEN) vhpt_init: vhpt paddr=0x40febc0000, end=0x40febcffff
> (XEN) Using scheduler: SMP Credit Scheduler (credit)
> (XEN) Time init:
> (XEN) .... System Time: 1503261ns
> (XEN) .... scale:              11C71C71C
> (XEN) num_online_cpus=1, max_cpus=64
> (XEN) Brought up 1 CPUs
> (XEN) xenoprof: using perfmon.
> (XEN) perfmon: version 2.0 IRQ 238
> (XEN) perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47
> bits)
> (XEN) Maximum number of domains: 63; 18 RID bits per domain
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Max dom0 size: 3978MB
> (XEN) Reducing dom0 memory allocation from 4294967296 to 4172185600 to
> fit available memory
> --
> 
> Note that we've got a reported total of 4069MB up there, and a max dom0
> size of 3978MB, so perhaps there's some further tweaking that could be
> done, but I think this looks quite reasonable.

The attached patch is working well for me. It also includes the function
name change from alloc_dom0_size to calc_dom0_size as suggested by
Isaku, and an S-O-B.

-- 
Jarod Wilson
jwilson@xxxxxxxxxx

Some ia64 xen dom0 tweaks:
* Increase default memory allocation from 512M to 4G
* Increase default vcpu allocation from 1 to 4
* If need be, scale down requested memory allocation to fit
  available memory, rather than simply panicking

Signed-off-by: Jarod Wilson <jwilson@xxxxxxxxxx>

diff -r 039f2ccb1e38 xen/arch/ia64/xen/domain.c
--- a/xen/arch/ia64/xen/domain.c        Tue Jul 31 10:30:40 2007 -0600
+++ b/xen/arch/ia64/xen/domain.c        Wed Aug 01 13:44:05 2007 -0400
@@ -52,10 +52,11 @@
 #include <asm/perfmon.h>
 #include <public/vcpu.h>
 
-static unsigned long __initdata dom0_size = 512*1024*1024;
+/* dom0_size: default memory allocation for dom0 (~4GB) */
+static unsigned long __initdata dom0_size = 4096UL*1024UL*1024UL;
 
 /* dom0_max_vcpus: maximum number of VCPUs to create for dom0.  */
-static unsigned int __initdata dom0_max_vcpus = 1;
+static unsigned int __initdata dom0_max_vcpus = 4;
 integer_param("dom0_max_vcpus", dom0_max_vcpus); 
 
 extern char dom0_command_line[];
@@ -1195,8 +1196,24 @@ static void __init loaddomainelfimage(st
        }
 }
 
-void __init alloc_dom0(void)
-{
+static void __init calc_dom0_size(void)
+{
+       unsigned long max_dom0_pages;
+       unsigned long max_dom0_size;
+
+       /* maximum available memory for dom0 */
+       max_dom0_pages = avail_domheap_pages();
+       max_dom0_size = max_dom0_pages * PAGE_SIZE;
+       printk("Maximum permitted dom0 size: %luMB\n",
+              max_dom0_size / (1024*1024));
+
+       /* validate proposed dom0_size, fix up as needed */
+       if (dom0_size > max_dom0_size) {
+               printk("Reducing dom0 memory allocation from %lu to %lu "
+                      "to fit available memory\n", dom0_size, max_dom0_size);
+               dom0_size = max_dom0_size;
+       }
+
        /* Check dom0 size.  */
        if (dom0_size < 4 * 1024 * 1024) {
                panic("dom0_mem is too small, boot aborted"
@@ -1261,6 +1278,8 @@ int __init construct_dom0(struct domain 
        BUG_ON(v->is_initialised);
 
        printk("*** LOADING DOMAIN 0 ***\n");
+
+       calc_dom0_size();
 
        max_pages = dom0_size / PAGE_SIZE;
        d->max_pages = max_pages;
diff -r 039f2ccb1e38 xen/arch/ia64/xen/xensetup.c
--- a/xen/arch/ia64/xen/xensetup.c      Tue Jul 31 10:30:40 2007 -0600
+++ b/xen/arch/ia64/xen/xensetup.c      Wed Aug 01 13:44:31 2007 -0400
@@ -46,7 +46,6 @@ extern void early_setup_arch(char **);
 extern void early_setup_arch(char **);
 extern void late_setup_arch(char **);
 extern void hpsim_serial_init(void);
-extern void alloc_dom0(void);
 extern void setup_per_cpu_areas(void);
 extern void mem_init(void);
 extern void init_IRQ(void);
@@ -469,8 +468,6 @@ void __init start_kernel(void)
 
     trap_init();
 
-    alloc_dom0();
-
     init_xenheap_pages(__pa(xen_heap_start), xenheap_phys_end);
     printk("Xen heap: %luMB (%lukB)\n",
        (xenheap_phys_end-__pa(xen_heap_start)) >> 20,

Attachment: signature.asc
Description: OpenPGP digital signature

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel