WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

[Xen-devel] RE: question about shadow_blow_tables

To: "Tim Deegan" <Tim.Deegan@xxxxxxxxxx>
Subject: [Xen-devel] RE: question about shadow_blow_tables
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Tue, 27 Nov 2007 19:47:06 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 27 Nov 2007 03:51:04 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20071127105118.GB17453@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <D470B4E54465E3469E2ABBC5AFAC390F024D8C92@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20071127095827.GA17453@xxxxxxxxxxxxxxxxxxxxx> <D470B4E54465E3469E2ABBC5AFAC390F024D8CA1@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20071127105118.GB17453@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acgw5BU5kb5iGk4XSvOcfuLHqhgjfgABqkLQ
Thread-topic: question about shadow_blow_tables
Thanks a lot for the clarification. Now I'm clear that the current solution is
safe.
Actually, at the start I also doubted my own suspicion, since such a hole would
be severe and it's unlikely nobody would have reported it. So I just posted to
ask for your help on this part. :-)

Thanks,
Kevin

>From: Tim Deegan [mailto:Tim.Deegan@xxxxxxxxxx]
>Sent: 27 November 2007 18:51
>
>Hi,
>
>At 18:32 +0800 on 27 Nov (1196188342), Tian, Kevin wrote:
>> Maybe I made some misunderstanding here. By comment of shadow_blow_tables:
>> /* Deliberately free all the memory we can: this will tear down all of
>>  * this domain's shadows */
>
>In this comment, "free" means only freeing as far as the domain's shadow
>free lists, not to domheap.  Does that make more sense?
> 
>> The implicit here is that all shadow pages of this domain will be released
>> as result. However when 'blow' is on-going on one cpu, the 'blow-ed' pages
>> may be active on address translation on another cpu, if other vcpus are
>> not paused. I think anyway hardware should be prevented from walking
>> shadow pages which are torn down from another cpu...
>
>As I said, it's safe to do this concurrently with other CPUs reading the
>shadow pagetables, and we have the shadow lock to protect against
>concurrent writes.
>
>- other CPUs never see a half-written entry because of the logic in 
>  safe_write_entry(). 
>- l1es in other CPUs' TLBs are safe to leave until the final TLB flush
>  because there's no intermediate stage mid-operation that requires rights
>  to have been relinquished.
>- higher-level entries in other CPUs' TLBs are safe because we leave the
>  contents of the shadow pagetables they point at alone until we're sure
>  all the TLBs are flushed.  (We never write to pages on the shadow free
>  list and we check the TLB flush timestamps when we allocate them
>  from the free list again.)
>
>Can you be clearer about what you think the risk is?  If we've missed
>something then it's quite important, because it probably affects every
>other shadow operation as well.
>
>> So my question is, whether all shadow pages are indeed free-ed as result
>> of 'blow' option?
>
>Only as far as the free list.  We never free shadow pages back to
>domheap until the allocation is changed or shadow mode is disabled.  But
>I think it would still be safe even if we freed to domheap because the
>deferred-TLB-flush logic in page_alloc.c would do the right thing.
>
>> Or some IPI will be definitely triggered when free-ing one
>> shadow page referenced by multiple VCPUs, before final TLB flush?
>
>No.  We do no synchronisation until the TLB flush at the end.
>
>Cheers,
>
>Tim.
>
>-- 
>Tim Deegan <Tim.Deegan@xxxxxxxxxx>
>Principal Software Engineer, Citrix Systems.
>[Company #5334508: XenSource UK Ltd, reg'd c/o EC2Y 5EB, UK.]
>
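[Editorial note: Tim's first bullet above — that other CPUs never see a half-written entry — rests on the write ordering inside safe_write_entry(). The sketch below illustrates that ordering for the case where a pagetable entry is twice the machine word size (e.g. 64-bit PAE entries on a 32-bit host). The types and constant here are hypothetical illustrations, not the actual Xen source: the real function operates on live shadow entries under the shadow lock.]

```c
#include <stdint.h>

/* Illustrative only: the present bit of an x86 PTE lives in the low word. */
#define PTE_PRESENT 0x1u

/* A 64-bit entry modelled as two machine words, as a 32-bit CPU sees it. */
typedef struct { volatile uint32_t lo, hi; } pte64_t;

/*
 * Update a live entry in an order such that a concurrent hardware walk
 * never observes a half-written entry that is marked present:
 *
 *  1. Clear the low word first.  The present bit is in the low word, so
 *     from this point the walker treats the entry as not-present and
 *     ignores the stale high word.
 *  2. Write the new high word while the entry is non-present.
 *  3. Write the new low word last; only now does the walker see the
 *     complete new entry, in one single-word (hence atomic) store.
 */
static void safe_write_entry(pte64_t *dst, uint32_t new_lo, uint32_t new_hi)
{
    dst->lo = 0;        /* step 1: entry goes non-present            */
    dst->hi = new_hi;   /* step 2: stale high word safely replaced   */
    dst->lo = new_lo;   /* step 3: new entry becomes visible at once */
}
```

At no point in this sequence can a walker combine the old high word with the new low word (or vice versa) while the present bit is set, which is why, as Tim notes, concurrent readers need no synchronisation beyond the final TLB flush.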

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
