xen-devel

RE: [Xen-devel] [PATCH 6/10] Allow vcpu to pause self

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "Keir Fraser" <keir@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH 6/10] Allow vcpu to pause self
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Thu, 12 Jul 2007 14:02:56 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 11 Jul 2007 23:00:49 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <D470B4E54465E3469E2ABBC5AFAC390F013B1FCC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ace4wFEyWBrDMwUBRbudwW+SC7WIGwLHO6q6ABLR0tAABjWo8AACLF4A
Thread-topic: [Xen-devel] [PATCH 6/10] Allow vcpu to pause self
Oh, I can check the is_running flag of dom0/vcpu0 as a sync point
before requesting the lazy context flush on all CPUs (something like
the sketch below). :-)
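
For concreteness, a rough and untested sketch of what I have in mind;
it assumes sync_vcpu_execstate() is still the primitive that writes a
descheduled vcpu's lazily held state back into its per-vcpu context:

    /* Sketch: runs from the softirq handler on the idle vcpu, i.e. in
     * enter_state(), after dom0/vcpu0 has been asked to pause itself. */
    struct vcpu *v = dom0->vcpu[0];

    while ( v->is_running )     /* sync point: still on its physical CPU? */
        cpu_relax();
    smp_rmb();                  /* observe the state saved at switch-out */

    sync_vcpu_execstate(v);     /* vcpu0's lazy state -> its guest context */
    /* ... now safe to request the lazy context flush on all CPUs ... */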

Thanks,
Kevin

>From: Tian, Kevin
>Sent: 12 July 2007 13:06
>
>>From: Tian, Kevin
>>Sent: 12 July 2007 10:37
>>>
>>>I think this should not be needed. Why is dom0/vcpu0 special at all?
>>>If you are doing the final work from a softirq context, can't
>>>dom0/vcpu0 simply be paused like all others at that point? If not
>>>then we'll need to make some arrangement using vcpu_set_affinity() -
>>>I won't add another flag on the context-switch path.
>>
>>I tried to recall the reason for adding this flag. The main reason is
>>that the sleep hypercall happens in dom0/vcpu0's context, while the
>>actual enter_state may happen in a softirq on the idle vcpu's context.
>>As a result, we need to update rax as the return value for dom0/vcpu0,
>>which means the lazy state must be flushed into the per-vcpu guest
>>context before updating it. However, the existing vcpu_pause doesn't
>>work on the caller's own context, and vcpu_pause_nosync leaves the
>>lazy state in place. That's why a new flag was added, to allow the
>>lazy context to be synced after the vcpu is switched out.
>>
>>But after further thought, given that enter_state now forces a lazy
>>context flush on all CPUs, this interface can be abandoned.
>>
>
>It seems the issue still exists. It's possible that the forced lazy
>context flush in enter_state is done before dom0/vcpu0 goes through a
>context switch, since the softirq is raised before the pause. How do
>we find a safe point where we know that dom0/vcpu0 has definitely been
>switched out?
>
>vcpu_set_affinity doesn't solve the problem, since a migrated vcpu
>won't continue with the previous flow. Or do you mean forcing the user
>to set such an affinity explicitly before requesting suspend?
>
>Thanks,
>Kevin
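
To illustrate the rax point from the quoted mail above: a sketch only,
untested, assuming an x86_64 build where the saved register block of a
descheduled vcpu is reachable as v->arch.guest_context.user_regs (field
names may differ between trees):

    /* Sketch: write the sleep hypercall's return value for a vcpu that
     * has already been switched out. The value must land in the saved
     * per-vcpu registers, not the live ones, and only after any lazily
     * held state has been synced back. */
    static void set_sleep_retval(struct vcpu *v, long ret)
    {
        ASSERT(!v->is_running);                     /* already off its pCPU */
        sync_vcpu_execstate(v);                     /* lazy state -> saved  */
        v->arch.guest_context.user_regs.rax = ret;  /* seen when v resumes  */
    }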

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel