
Re: [Xen-devel] Share Memory Between DomainU and Domain0

Hi there!

Have you made any progress with this? I've made some comments inline below...

1. Dom0 grants DomU access permission to its pages using gnttab_grant_foreign_access().
2. Dom0 passes the machine page numbers (MFNs) and grant refs to DomU through the I/O ring.
3. DomU installs the granted pages (changing the original MFN to Dom0's MFN):
   A. map.host_addr = mfn_to_virt(pfn_to_mfn(DomU's PFN));
   B. HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &map, count);
   C. set_phys_to_machine(DomU's PFN, Dom0's MFN);
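Putting steps A-C together, the DomU-side mapping might look like the following sketch (kernel-context code against the classic Xen Linux grant-table interface; `pfn`, `ref` and `dom0_mfn` stand for the values carried in your modified response, and error handling is mostly omitted):

```c
/* Sketch only: assumes the classic xen-linux grant-table API.
 * pfn      - DomU's page frame number (from the response)
 * ref      - grant reference issued by Dom0 (from the response)
 * dom0_mfn - Dom0's machine frame number (from the response)
 */
struct gnttab_map_grant_ref map;

/* A. host_addr must be a virtual address backed by a page DomU owns. */
map.host_addr = (unsigned long)mfn_to_virt(pfn_to_mfn(pfn));
map.dom       = 0;                 /* the granting domain is Dom0 */
map.ref       = ref;
map.flags     = GNTMAP_host_map | GNTMAP_readonly;

/* B. With GNTMAP_host_map, the hypervisor installs the page-table
 * entry itself; no separate mmu_update is needed. */
if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &map, 1) ||
    map.status != GNTST_okay)
        goto fail;                 /* the mapping was refused */

/* C. Keep DomU's phys-to-machine table consistent with the new
 * backing frame, so later pfn_to_mfn() translations are correct. */
set_phys_to_machine(pfn, dom0_mfn);
```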

In blkback.c

          dom0_cache_vaddr  = get_zeroed_page(GFP_KERNEL);
          dom0_cache_mfn[i] = pfn_to_mfn(__pa(dom0_cache_vaddr) >> PAGE_SHIFT);

          share_ref = gnttab_claim_grant_reference(&share_ref_head);
          BUG_ON(share_ref == -ENOSPC);

          /* Note: gnttab_grant_foreign_access() allocates a fresh
           * reference of its own, so assigning its return value here
           * leaks the ref claimed above; grant through the claimed
           * ref instead: */
          gnttab_grant_foreign_access_ref(share_ref, req_domid,
                                          dom0_cache_mfn[i], 1);
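For completeness, the backend would then queue the grant details in the modified response, perhaps along these lines. This is a sketch: the `map_pfn_mfn[].ref` field name is taken from your blkfront snippet, but the `.mfn`, `.pfn` and `domu_pfn` names are assumptions.

```c
/* Sketch only: fill the modified blkif response with the sharing
 * info the frontend expects.  Field names other than
 * map_pfn_mfn[].ref are assumed. */
resp->map_pfn_mfn[j].ref = share_ref;
resp->map_pfn_mfn[j].mfn = dom0_cache_mfn[i];
/* Dom0 cannot invent DomU's PFN itself; it would have to be echoed
 * back from a field the frontend placed in the request. */
resp->map_pfn_mfn[j].pfn = req->domu_pfn[j];
```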

In blkfront.c

          bret = RING_GET_RESPONSE(&info->ring, i);

          for (j = 0; j < number_of_pages; j++) {
                     map.host_addr = (unsigned long)
                           mfn_to_virt(pfn_to_mfn(bret->map_pfn_mfn[j].pfn));
                     map.dom   = (domid_t)0;
                     map.ref   = bret->map_pfn_mfn[j].ref;
                     map.flags = (GNTMAP_host_map | GNTMAP_readonly);
                     ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
                                                     &map, 1);
          }

(Note the loop initialiser: the original `for (j < 0; ...)` never sets j, which is undefined behaviour; it should be `for (j = 0; ...)`.)

For the sharing, I modified the response struct, so it now carries the grant ref, Dom0's MFN and DomU's PFN.

OK. Make sure your dom0 and domU are *definitely* using the same version of the block drivers, and that both kernels are compiled to use your modified response structure. Otherwise things will get very confused. If you're using two kernels, make sure you rebuild them both.

How does dom0 know what the domU's pfn is in order to queue it in responses?

But I encountered a kernel panic when DomU booted.

Question :

1. Should I also update the page tables with "HYPERVISOR_mmu_update" in DomU,
even though I already call "HYPERVISOR_grant_table_op"?

That shouldn't be necessary. When you pass GNTMAP_host_map (with a host_addr) to GNTTABOP_map_grant_ref, the hypervisor installs the page-table entry for you, so a separate mmu_update would be redundant.

2. Can anyone advise me about this problem? (Anything about the
kernel panic message or mistakes in the source code.)

Are you trying to implement a shared buffer cache or something similar? Could you possibly post your entire diff vs the standard block drivers so that I can take a look at everything in context? I've written some similar code to this but would like to look through a full diff if possible.


Xen-devel mailing list
