
Re: [Xen-devel] [PATCH 0 of 5] v2: Nested-p2m cleanups and locking changes



At 15:24 +0200 on 27 Jun (1309188245), Christoph Egger wrote:
> On 06/27/11 15:20, Tim Deegan wrote:
> >At 14:15 +0100 on 27 Jun (1309184128), Tim Deegan wrote:
> >>At 14:23 +0200 on 27 Jun (1309184586), Christoph Egger wrote:
> >>>>  - Why is there a 10x increase in IPIs after this series?  I don't see
> >>>>    what sequence of events sets the relevant cpumask bits to make this
> >>>>    happen.
> >>>
> >>>In patch 1, the code that sends the IPIs was moved from outside the
> >>>loop to inside it.
> >>
> >>Well, yes, but I don't see why that causes 10x the IPIs, unless the
> >>vcpus are burning through np2m tables very quickly indeed.  Maybe
> >>removing the extra flushes for TLB control will do the trick.  I'll
> >>make a patch...
> >
> >Hmmm, on second thoughts, we can't remove those flushes after all.
> >The np2m is in sync with the host p2m but not with the guest-supplied
> >p2m, so we do need to flush it when the guest asks for a flush.
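
For concreteness, here's a self-contained toy model of that loop change
(MAX_NESTEDP2M really is 10 in Xen, but flush_tlb_mask_stub() and the
elided loop bodies are stand-ins, not the actual teardown code):

    #include <stdio.h>

    #define MAX_NESTEDP2M 10    /* Xen's per-domain nested-p2m pool size */

    static int ipi_bursts;

    /* Stand-in for Xen's flush_tlb_mask(): count one IPI burst per call. */
    static void flush_tlb_mask_stub(void)
    {
        ipi_bursts++;
    }

    int main(void)
    {
        int i;

        /* Old placement: tear down every table, then flush once. */
        ipi_bursts = 0;
        for ( i = 0; i < MAX_NESTEDP2M; i++ )
            ;                          /* teardown of table i elided */
        flush_tlb_mask_stub();
        printf("flush outside loop: %2d IPI burst(s)\n", ipi_bursts);

        /* Patch 1 placement: flush after each table, inside the loop. */
        ipi_bursts = 0;
        for ( i = 0; i < MAX_NESTEDP2M; i++ )
            flush_tlb_mask_stub();     /* teardown of table i elided */
        printf("flush inside loop:  %2d IPI burst(s)\n", ipi_bursts);

        return 0;
    }

With the pool size at 10, moving the flush inside the loop turns one IPI
burst into ten, which matches the 10x increase seen above.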

And furthermore we can't share np2ms between vcpus, because that could
violate the TLB's coherence rules.  E.g. (see the sketch after this
list):
 - vcpu 1 uses ncr3 A, gets np2m A', A' is populated from A;
 - vcpu 1 switches to ncr3 B;
 - guest updates p2m @ A, knows there are no users so doesn't flush it;
 - vcpu 2 uses ncr3 A, gets np2m A' with stale data.
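
A minimal, self-contained model of that sequence (all names here are
made up; the real np2m structures live in Xen's arch code):

    /* Toy model (made-up names) of a *shared*, ncr3-keyed np2m cache,
     * showing how vcpu 2 inherits a stale shadow built by vcpu 1. */
    #include <stdint.h>
    #include <stdio.h>

    struct np2m {
        uint64_t ncr3;    /* guest p2m root this shadow was built from */
        int      stale;   /* stands in for out-of-date shadow entries */
    };

    #define NR_NP2M 4
    static struct np2m cache[NR_NP2M];   /* shared by every vcpu */

    static struct np2m *np2m_get(uint64_t ncr3)
    {
        int i;

        for ( i = 0; i < NR_NP2M; i++ )
            if ( cache[i].ncr3 == ncr3 )
                return &cache[i];        /* hit: no revalidation, no flush */
        for ( i = 0; i < NR_NP2M; i++ )
            if ( cache[i].ncr3 == 0 )
            {
                cache[i].ncr3 = ncr3;    /* populate fresh from @ncr3 */
                cache[i].stale = 0;
                return &cache[i];
            }
        return NULL;                     /* eviction elided */
    }

    int main(void)
    {
        uint64_t A = 0x1000, B = 0x2000;
        struct np2m *np;
        int i;

        np2m_get(A);        /* vcpu 1 uses ncr3 A: A' populated from A */
        np2m_get(B);        /* vcpu 1 switches to ncr3 B; A' lingers  */

        /* Guest edits the p2m at A; it sees no user of A, so it sends
         * no flush and A' is never invalidated: */
        for ( i = 0; i < NR_NP2M; i++ )
            if ( cache[i].ncr3 == A )
                cache[i].stale = 1;      /* models the unflushed edit */

        np = np2m_get(A);   /* vcpu 2 uses ncr3 A */
        printf("vcpu 2 got a %s np2m\n", np->stale ? "STALE" : "fresh");
        return 0;
    }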

So in fact we have to have per-vcpu np2ms, unless we want a lot of
implicit flushes, which means we can discard the 'cr3' field in the
nested p2m.
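
A hedged sketch of what that per-vcpu arrangement could look like
(hypothetical types, not a patch; the flush-on-switch is the one
implicit flush we keep):

    /* Hypothetical per-vcpu layout, not the real Xen structures:
     * each vcpu owns its np2m outright, so no cr3 tag is needed to
     * match a shadow table back to a guest p2m root. */
    #include <stdint.h>

    struct np2m {
        void *shadow_root;      /* shadow paging state; no 'cr3' field */
    };

    struct vcpu {
        uint64_t    ncr3;       /* guest p2m root currently in use */
        struct np2m np2m;       /* this vcpu's private shadow of it */
    };

    /* On a nested cr3 switch the private np2m is torn down and then
     * rebuilt lazily, so one vcpu's stale shadow can never be handed
     * to another vcpu. */
    static void nested_cr3_switch(struct vcpu *v, uint64_t new_ncr3)
    {
        if ( v->ncr3 != new_ncr3 )
        {
            v->np2m.shadow_root = 0;   /* models the np2m flush */
            v->ncr3 = new_ncr3;        /* repopulate on demand */
        }
    }

    int main(void)
    {
        struct vcpu v = { 0 };
        nested_cr3_switch(&v, 0x1000); /* private rebuild, nothing shared */
        return 0;
    }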

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

