WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] Linux Stubdom Problem

To: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Linux Stubdom Problem
From: Jiageng Yu <yujiageng734@xxxxxxxxx>
Date: Fri, 22 Jul 2011 00:54:56 +0800
2011/7/19 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
> CC'ing Tim and xen-devel
>
> On Mon, 18 Jul 2011, Jiageng Yu wrote:
>> 2011/7/16 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
>> > On Fri, 15 Jul 2011, Jiageng Yu wrote:
>> >> 2011/7/15 Jiageng Yu <yujiageng734@xxxxxxxxx>:
>> >> > 2011/7/15 Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>:
>> >> >> On Fri, 15 Jul 2011, Jiageng Yu wrote:
>> >> >>> > Does it mean you are actually able to boot an HVM guest using Linux
>> >> >>> > based stubdoms?? Did you manage to solve the framebuffer problem 
>> >> >>> > too?
>> >> >>>
>> >> >>>
>> >> >>> The HVM guest boots, but the boot process terminates because the
>> >> >>> vga bios is not invoked by seabios. I have been stuck here for a
>> >> >>> week.
>> >> >>>
>> >> >>
>> >> >> There was a bug in xen-unstable.hg or seabios that would prevent
>> >> >> the vga bios from being loaded; it should be fixed now.
>> >> >>
>> >> >> Alternatively, you can temporarily work around the issue with this
>> >> >> hacky patch:
>> >> >>
>> >> >> ---
>> >> >>
>> >> >>
>> >> >> diff -r 00d2c5ca26fd tools/firmware/hvmloader/hvmloader.c
>> >> >> --- a/tools/firmware/hvmloader/hvmloader.c      Fri Jul 08 18:35:24 2011 +0100
>> >> >> +++ b/tools/firmware/hvmloader/hvmloader.c      Fri Jul 15 11:37:12 2011 +0000
>> >> >> @@ -430,7 +430,7 @@ int main(void)
>> >> >>             bios->create_pir_tables();
>> >> >>     }
>> >> >>
>> >> >> -    if ( bios->load_roms )
>> >> >> +    if ( 1 )
>> >> >>     {
>> >> >>         switch ( virtual_vga )
>> >> >>         {
>> >> >>
>> >> >
>> >> > Yes, the vga bios is booted now. However, the upstream qemu
>> >> > subsequently receives a SIGSEGV signal. I am trying to print the
>> >> > call stack at the moment the signal is received.
>> >> >
>> >>
>> >> Hi,
>> >>
>> >>    I found the cause of the SIGSEGV signal:
>> >>
>> >>    cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf, int len, int is_write)
>> >>        ->memcpy(buf, ptr + (addr & ~TARGET_PAGE_MASK), l);
>> >>
>> >>    In my case, ptr=0 and addr=0xc253e; when qemu attempts to access
>> >> address 0x53e, the SIGSEGV signal is generated.
>> >>
>> >>    I believe qemu is trying to access the vram at this moment. The
>> >> code itself looks fine, so I will continue to look for the root cause.
>> >>
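
(As an aside, the faulting arithmetic above is easy to reproduce in isolation. A minimal sketch, assuming qemu's usual 4K pages on x86, i.e. TARGET_PAGE_BITS = 12, and using the values from the crash:)

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~((1ULL << TARGET_PAGE_BITS) - 1))

int main(void)
{
    uint64_t addr = 0xc253e;   /* guest-physical address from the crash */
    uint8_t *ptr  = NULL;      /* the vram mapping that came back NULL  */

    /* offset within the page: 0xc253e & 0xfff = 0x53e */
    uint64_t off = addr & ~TARGET_PAGE_MASK;

    /* with ptr == NULL, memcpy would read from address 0x53e,
     * which is unmapped, hence the SIGSEGV */
    printf("memcpy source = %p\n", (void *)((uintptr_t)ptr + off));
    return 0;
}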
>> >
>> > The vram is allocated by qemu, see hw/vga.c:vga_common_init.
>> > qemu_ram_alloc under xen ends up calling xen_ram_alloc, which calls
>> > xc_domain_populate_physmap_exact.
>> > xc_domain_populate_physmap_exact is the hypercall that should ask Xen
>> > to add the missing vram pages to the guest. Maybe this hypercall is
>> > failing in your case?
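
(For reference, the path Stefano describes looks roughly like this. A minimal sketch modelled on qemu's xen_ram_alloc, with simplified error handling; the function name populate_vram is mine, this is not the actual qemu code:)

#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>

/* Sketch: carve out nr_pfn guest frames starting at ram_addr and ask
 * Xen to back them with real memory. */
static void populate_vram(xc_interface *xch, uint32_t domid,
                          unsigned long ram_addr, unsigned long size)
{
    unsigned long i, nr_pfn = size >> XC_PAGE_SHIFT;
    xen_pfn_t *pfns = malloc(nr_pfn * sizeof(*pfns));

    for (i = 0; i < nr_pfn; i++)
        pfns[i] = (ram_addr >> XC_PAGE_SHIFT) + i;

    /* If this hypercall fails, the vram gfns never enter the guest's
     * p2m, and a later attempt to map them returns INVALID_MFN. */
    if (xc_domain_populate_physmap_exact(xch, domid, nr_pfn, 0, 0, pfns))
        fprintf(stderr, "failed to populate vram at %#lx\n", ram_addr);

    free(pfns);
}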
>>
>>
>> Hi,
>>
>>    I continued to investigate this bug and found that the
>> hypercall_mmu_update in qemu_remap_bucket (xc_map_foreign_bulk) is
>> failing:
>>
>> do_mmu_update
>>       ->mod_l1_entry
>>              ->  if ( !p2m_is_ram(p2mt) || unlikely(mfn == INVALID_MFN) )
>>                          return -EINVAL;
>>
>>    mfn==INVALID_MFN, because:
>>
>> mod_l1_entry
>>       ->gfn_to_mfn(p2m_get_hostp2m(pg_dom), l1e_get_pfn(nl1e), &p2mt);
>>               ->p2m->get_entry
>>                         ->p2m_gfn_to_mfn
>>                                -> if ( gfn > p2m->max_mapped_pfn )
>>                                    /* This pfn is higher than the highest the p2m map currently holds */
>>                                    return _mfn(INVALID_MFN);
>>
>>    The p2m->max_mapped_pfn is usually 0xfffff. In our case,
>> mmu_update.val exceeds 0x8000000100000000. Since l1e =
>> l1e_from_intpte(mmu_update.val) and gfn = l1e_get_pfn(l1e), gfn
>> exceeds 0xfffff.
>>
>>    In the minios based stubdom case, the mmu_update.vals do not
>> exceed 0x8000000100000000. Next, I will investigate why mmu_update.val
>> exceeds 0x8000000100000000 in the linux stubdom case.
>
> It looks like the guest address that qemu is trying to map is not
> valid.
> Make sure you are running a guest with less than 2GB of ram, otherwise
> you need the patch series that Anthony sent on Friday:
>
> http://marc.info/?l=qemu-devel&m=131074042905711&w=2

It is not this problem: I never allocate more than 2GB for the HVM
guest. The call stack in qemu is:

qemu_get_ram_ptr
      ->qemu_map_cache(addr, 0, 1)
                 -> if (!entry->vaddr_base || entry->paddr_index != address_index ||
                        !test_bit(address_offset >> XC_PAGE_SHIFT, entry->valid_mapping)) {
                        qemu_remap_bucket(entry, size ? : MCACHE_BUCKET_SIZE, address_index);
                              ->xc_map_foreign_bulk(xen_xc, xen_domid, PROT_READ|PROT_WRITE, pfns, err, nb_pfn);

Qemu tries to map pages from the HVM guest (xen_domid) into the linux
stubdom, but some of the HVM pages' pfns are larger than 0xfffff. So, in
p2m_gfn_to_mfn, the following condition holds (p2m->max_mapped_pfn =
0xfffff):

    if ( gfn > p2m->max_mapped_pfn )
        /* This pfn is higher than the highest the p2m map currently holds */
        return _mfn(INVALID_MFN);
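
(For completeness, this is roughly how such a mapping request goes through libxc. A minimal sketch for a single page; the function name map_one_guest_page is mine and the error reporting is illustrative:)

#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Map one page of a foreign (HVM) guest into this domain. err[i] is
 * set nonzero for every pfn the hypervisor could not translate, e.g.
 * when gfn > max_mapped_pfn. */
static int map_one_guest_page(uint32_t domid, xen_pfn_t gfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int err = 0;
    void *va = xc_map_foreign_bulk(xch, domid, PROT_READ | PROT_WRITE,
                                   &gfn, &err, 1);

    if (va == NULL || err)
        fprintf(stderr, "mapping gfn %#lx failed (err=%d)\n",
                (unsigned long)gfn, err);

    xc_interface_close(xch);
    return err;
}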

 In the minios stubdom case, the HVM pages' pfns do not exceed 0xfffff.
Maybe the address translation in the linux stubdom causes this problem?
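
(The numbers line up: decoding a PTE value like 0x8000000100000000 the
way l1e_get_pfn does yields a gfn just past max_mapped_pfn. A minimal
sketch, with the mask constant written out by hand instead of taken
from Xen's headers:)

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
/* bits 12..51 of a 64-bit PTE hold the frame number; bit 63 is NX */
#define PTE_PFN_MASK 0x000ffffffffff000ULL

int main(void)
{
    uint64_t val = 0x8000000100000000ULL;  /* observed mmu_update.val */
    uint64_t gfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;

    /* prints gfn = 0x100000, one past max_mapped_pfn = 0xfffff, so
     * p2m_gfn_to_mfn returns INVALID_MFN and the mmu_update fails */
    printf("gfn = %#llx\n", (unsigned long long)gfn);
    return 0;
}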

 BTW, in the minios stubdom case, there seems to be no hvmloader
process. Is it needed in the linux stubdom?

 Thanks,

Jiageng Yu.
