[Xen-devel] Re: [PATCH 4/6] xen-gntdev: Support mapping in HVM domains
On Thu, 2011-01-27 at 18:52 +0000, Konrad Rzeszutek Wilk wrote:
> > @@ -179,11 +184,32 @@ static void gntdev_put_map(struct grant_map *map)
> >
> > atomic_sub(map->count, &pages_mapped);
> >
> > - if (map->pages)
> > + if (map->pages) {
> > + if (!use_ptemod)
> > + unmap_grant_pages(map, 0, map->count);
> > +
> > for (i = 0; i < map->count; i++) {
> > - if (map->pages[i])
> > + uint32_t check, *tmp;
> > + if (!map->pages[i])
> > + continue;
> > + /* XXX When unmapping in an HVM domain, Xen will
> > + * sometimes end up mapping the GFN to an invalid MFN.
> > + * In this case, writes will be discarded and reads will
> > + * return all 0xFF bytes. Leak these unusable GFNs
>
> I forgot to ask, under what version of Xen did you run this? I want to add
> that to the comment so when it gets fixed we know what the failing version is.
>
> > + * until Xen supports fixing their p2m mapping.
> > + */
> > + tmp = kmap(map->pages[i]);
> > + *tmp = 0xdeaddead;
I've just tripped over this check, which faults in my PV guest. It seems
to be related to the handling of map_grant_pages() failures?
Was the underlying Xen issue here reported somewhere more obvious than
this comment buried in a patch to the kernel?
If not, please can you raise it as a separate thread clearly marked as a
hypervisor issue/question? All I can find is bits and pieces spread
through the threads associated with this kernel patch, and I don't think
I've got a clear enough picture of the issue from those fragments to
pull it together into a sensible report.
Ian.
> > + mb();
> > + check = *tmp;
> > + kunmap(map->pages[i]);
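
[For context, a minimal sketch of what the complete cleanup loop appears to
do once the quoted fragments are pieced together. Only the hunk up to the
kunmap() is shown above, so the leak branch, the __free_page()/kfree() tail
and the helper name gntdev_free_pages() are assumptions, not the actual
gntdev code:]

/* Hedged reconstruction of the cleanup path the quoted diff appears to
 * implement; everything after the kunmap() is assumed. */
static void gntdev_free_pages(struct grant_map *map)
{
	int i;

	if (!map->pages)
		return;

	/* In HVM (non-ptemod) mode the grants are still mapped here. */
	if (!use_ptemod)
		unmap_grant_pages(map, 0, map->count);

	for (i = 0; i < map->count; i++) {
		uint32_t check, *tmp;

		if (!map->pages[i])
			continue;

		/* Write a sentinel, force it out, and read it back.  If Xen
		 * has left this GFN pointing at an invalid MFN, the write is
		 * discarded and the read returns something else (typically
		 * all 0xFF), so the page must not go back to the allocator. */
		tmp = kmap(map->pages[i]);
		*tmp = 0xdeaddead;
		mb();
		check = *tmp;
		kunmap(map->pages[i]);

		if (check != 0xdeaddead)
			continue;	/* leak the unusable page (assumed) */

		__free_page(map->pages[i]);
	}

	kfree(map->pages);
	map->pages = NULL;
}
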