Hello,
> Hi,
>
> At 00:33 -0400 on 27 Oct (1319675633), Andres Lagar-Cavilla wrote:
>> This patch only modifies code internal to the p2m, adding convenience
>> macros, etc. It will yield a compiling code base but an incorrect
>> hypervisor (external callers of queries into the p2m will not unlock).
>> The next patch takes care of external callers; the split is done for the
>> benefit of conciseness.
>
> Better to do it the other way round: put the enormous change-all-callers
> patch first, with noop unlock functions, and then hook in the unlocks.
> That way you won't cause chaos when people try to bisect to find when a
> bug was introduced.
Indeed, excellent point.
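Something like the stub below in the first, change-all-callers patch, then,
with the real unlock logic arriving in the follow-up (just a sketch of what
you describe):

/* First patch: no-op stub, so that converted callers compile and bisect
 * cleanly; the actual unlocking is hooked in by the later patch. */
void drop_p2m_gfn(struct p2m_domain *p2m, unsigned long gfn,
                  unsigned long mfn)
{
    /* Intentionally empty until the unlock patch lands. */
}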
>
>> diff -r 8a98179666de -r 471d4f2754d6 xen/include/asm-x86/p2m.h
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -220,7 +220,7 @@ struct p2m_domain {
>> * tables on every host-p2m change. The setter of this flag
>> * is responsible for performing the full flush before releasing the
>> * host p2m's lock. */
>> - int defer_nested_flush;
>> + atomic_t defer_nested_flush;
>>
>> /* Pages used to construct the p2m */
>> struct page_list_head pages;
>> @@ -298,6 +298,15 @@ struct p2m_domain *p2m_get_p2m(struct vc
>> #define p2m_get_pagetable(p2m) ((p2m)->phys_table)
>>
>>
>> +/* No matter what value you get out of a query, the p2m has been locked for
>> + * that range. No matter what you do, you need to drop those locks.
>> + * You need to pass back the mfn obtained when locking, not the new one,
>> + * as the refcount of the original mfn was bumped. */
>
> Surely the caller doesn't need to remember the old MFN for this? After
> all, the whole point of the lock was that nobody else could change the
> p2m entry under our feet!
>
> In any case, I think there needs to be a big block comment a bit further
> up that describes what all this locking and refcounting does, and why.
A comment will be added. I was being doubly paranoid. I can undo the
get_page/put_page of the old mfn; I'm not 100% behind it.
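For illustration, the caller pattern this aims at is roughly the following
(sketch only; gfn_to_mfn_query stands in for any of the locking query
macros, and the names match the hunks below):

    p2m_type_t t;
    mfn_t mfn = gfn_to_mfn_query(d, gfn, &t);  /* query locks the range */

    if ( mfn_valid(mfn) )
    {
        /* ... use the mapping; the entry can't change under our feet ... */
    }

    /* Pass back the mfn returned by the query, since that is the one
     * whose refcount was bumped when the range was locked. */
    drop_p2m_gfn_domain(d, gfn, mfn_x(mfn));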
Andres
>
>> +void drop_p2m_gfn(struct p2m_domain *p2m, unsigned long gfn,
>> + unsigned long mfn);
>> +#define drop_p2m_gfn_domain(d, g, m) \
>> + drop_p2m_gfn(p2m_get_hostp2m((d)), (g), (m))
>> +
>> /* Read a particular P2M table, mapping pages as we go. Most callers
>> * should _not_ call this directly; use the other gfn_to_mfn_* functions
>> * below unless you know you want to walk a p2m that isn't a domain's
>> @@ -327,6 +336,28 @@ static inline mfn_t gfn_to_mfn_type(stru
>> #define gfn_to_mfn_guest(d, g, t) gfn_to_mfn_type((d), (g), (t), p2m_guest)
>> #define gfn_to_mfn_unshare(d, g, t) gfn_to_mfn_type((d), (g), (t), p2m_unshare)
>>
>> +/* This one applies to very specific situations in which you're querying
>> + * a p2m entry and will be done "immediately" (such as a printk or computing a
>> + * return value). Use this only if there are no expectations of the p2m entry
>> + * holding steady. */
>> +static inline mfn_t gfn_to_mfn_type_unlocked(struct domain *d,
>> +                                             unsigned long gfn, p2m_type_t *t,
>> +                                             p2m_query_t q)
>> +{
>> +    mfn_t mfn = gfn_to_mfn_type(d, gfn, t, q);
>> +    drop_p2m_gfn_domain(d, gfn, mfn_x(mfn));
>> +    return mfn;
>> +}
>> +
>> +#define gfn_to_mfn_unlocked(d, g, t) \
>> + gfn_to_mfn_type_unlocked((d), (g), (t), p2m_alloc)
>> +#define gfn_to_mfn_query_unlocked(d, g, t) \
>> + gfn_to_mfn_type_unlocked((d), (g), (t), p2m_query)
>> +#define gfn_to_mfn_guest_unlocked(d, g, t) \
>> + gfn_to_mfn_type_unlocked((d), (g), (t), p2m_guest)
>> +#define gfn_to_mfn_unshare_unlocked(d, g, t) \
>> + gfn_to_mfn_type_unlocked((d), (g), (t), p2m_unshare)
>> +
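For what it's worth, the kind of use these unlocked wrappers are meant for
is a transient, one-off lookup, along these lines (illustrative sketch
only; the gdprintk is an arbitrary example):

    p2m_type_t t;
    mfn_t mfn = gfn_to_mfn_query_unlocked(d, gfn, &t);

    /* No locks or references are held past this point, so the result is
     * only good for transient uses such as diagnostics. */
    gdprintk(XENLOG_INFO, "gfn %lx -> mfn %lx, type %d\n",
             gfn, mfn_x(mfn), t);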
>
> Ugh. This could really benefit from having the gfn_to_mfn_* functions
> take a set of flags instead of an enum. This exponential blowup in
> interface is going too far. :)
I don't think these names are the most terrible -- we've all seen far
worse :) The naming encodes the arguments, and I don't see an
intrinsic advantage of
    gfn_to_mfn(d, g, t, p2m_guest, p2m_unlocked)
over
    gfn_to_mfn_guest_unlocked(d, g, t).
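For concreteness, I read the flags suggestion as something like the
following (purely hypothetical -- none of these names exist anywhere):

/* Hypothetical flags-based query interface, for comparison only. */
#define P2MQ_ALLOC    (1U << 0)  /* populate / PoD-sweep as needed */
#define P2MQ_UNSHARE  (1U << 1)  /* break sharing */
#define P2MQ_GUEST    (1U << 2)  /* query on behalf of the guest */
#define P2MQ_UNLOCKED (1U << 3)  /* drop the range lock before returning */

mfn_t gfn_to_mfn_flags(struct domain *d, unsigned long gfn,
                       p2m_type_t *t, unsigned int flags);

/* ... so the call above would become: */
mfn = gfn_to_mfn_flags(d, gfn, &t, P2MQ_GUEST | P2MQ_UNLOCKED);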
Andres
>
> That oughtn't to stop this interface from going in, of course, but if
> we're going to tinker with the p2m callers once, we should do it all
> together.
>
> Tim.
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel