Re: [Xen-devel] [PATCH 6/7] xen-gntdev: Support mapping in HVM domains
On 01/10/2011 05:41 PM, Konrad Rzeszutek Wilk wrote:
>> @@ -284,8 +304,25 @@ static void unmap_grant_pages(struct grant_map *map,
>> int offset, int pages)
>> goto out;
>>
>> for (i = 0; i < pages; i++) {
>> + uint32_t check, *tmp;
>> WARN_ON(unmap_ops[i].status);
>> - __free_page(map->pages[offset+i]);
>> + if (!map->pages[i])
>> + continue;
>> + /* XXX When unmapping, Xen will sometimes end up mapping the GFN
>> + * to an invalid MFN. In this case, writes will be discarded and
>> + * reads will return all 0xFF bytes. Leak these unusable GFNs
>> + * until a way to restore them is found.
>> + */
>> + tmp = kmap(map->pages[i]);
>> + tmp[0] = 0xdeaddead;
>> + mb();
>> + check = tmp[0];
>> + kunmap(map->pages[i]);
>> + if (check == 0xdeaddead)
>> + __free_page(map->pages[i]);
>> + else if (debug)
>> + printk("%s: Discard page %d=%ld\n", __func__,
>> + i, page_to_pfn(map->pages[i]));
>
> Whoa. Any leads to when the "sometimes" happens? Does the status report an
> error or is it silent?
Status is silent in this case. I can reproduce it quite reliably on my
test system, where I am mapping a framebuffer (1280 pages) between two
HVM guests - in this case, about 2/3 of the released pages end up
being invalid. It doesn't seem to be size-related, as I have also seen
it on the small 3-page page-index mapping. There is a message in the
xm dmesg output that may be related:
(XEN) sh error: sh_remove_all_mappings(): can't find all mappings of mfn 7cbc6:
c=8000000000000004 t=7400000000000002
This appears about once per page, with different MFNs but the same c/t.
One of the two HVM guests (the one doing the mapping) has the PCI
graphics card forwarded to it.
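In case it is easier to see in isolation, the check the hunk above adds
boils down to something like the sketch below. This is only an
illustration - gntdev_page_is_usable is a made-up name, nothing in the
patch, and the header set may need adjusting for the tree in question:

/* Hypothetical helper, only to illustrate the sentinel check above. */
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/types.h>

static bool gntdev_page_is_usable(struct page *page)
{
	uint32_t check, *tmp;

	tmp = kmap(page);
	tmp[0] = 0xdeaddead;	/* write a sentinel through the mapping */
	mb();			/* keep the read-back after the write */
	check = tmp[0];		/* re-read through the same mapping */
	kunmap(page);

	/* If the GFN lost its MFN, the write is discarded and reads
	 * return all 0xff bytes, so the sentinel will not match. */
	return check == 0xdeaddead;
}

If this returns true the page can go back via __free_page(); otherwise
it has to be leaked until a way to restore the mapping is found.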
>> map->pages[offset+i] = NULL;
>> map->pginfo[offset+i].handle = 0;
>> }
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel