RE: [Xen-devel] [PATCH 6/10] Allow vcpu to pause self
Oh, I can check the is_running flag of dom0/vcpu0 as a sync point
before requesting the lazy context flush on all CPUs. :-)
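
For illustration, a minimal sketch of that sync point (the struct
layout and helper name below are stand-ins, not Xen's actual code;
only the is_running field mirrors the real struct vcpu):

/* Cut-down stand-in for Xen's struct vcpu (xen/include/xen/sched.h). */
struct vcpu {
    int is_running;   /* nonzero while the vcpu runs on a physical CPU */
    /* ... */
};

static inline void cpu_relax(void)
{
    /* pause/yield hint on real hardware; empty stub here */
}

/*
 * Hypothetical helper: spin until the target vcpu has been switched
 * out.  Once is_running is clear, the scheduler has saved the vcpu's
 * register state, so the lazy context can be flushed into the
 * per-vcpu guest context and the hypercall return value updated.
 */
static void wait_until_descheduled(struct vcpu *v)
{
    while ( v->is_running )
        cpu_relax();
}

The idea would be for enter_state to call this on dom0->vcpu[0] before
requesting the flush on all CPUs.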
Thanks,
Kevin
>From: Tian, Kevin
>Sent: 12 July 2007 13:06
>
>>From: Tian, Kevin
>>Sent: 12 July 2007 10:37
>>>
>>>I think this should not be needed. Why is dom0/vcpu0 special at all?
>>>If you are doing the final work from a softirq context, can't
>>>dom0/vcpu0 simply be paused like all others at that point? If not
>>>then we'll need to make some arrangement using vcpu_set_affinity() -
>>>I won't add another flag on the context-switch path.
>>
>>I tried to recall the reason for adding this flag. The main reason is
>>that the sleep hypercall happens in dom0/vcpu0's context, while the
>>actual enter_state may happen in a softirq on the idle vcpu's context.
>>As a result, we need to write the return value into dom0/vcpu0's rax,
>>which means the lazy state must first be flushed into the per-vcpu
>>guest context. However, the existing vcpu_pause doesn't work on the
>>caller's own context, and vcpu_pause_nosync leaves the lazy state in
>>place. That's why a new flag was added, to let the lazy context be
>>synced after the vcpu is switched out.
>>
>>But on further thought, given that enter_state now forces a lazy
>>context flush on all CPUs, this interface can be abandoned.
>>
>
>It seems the issue still exists. It's possible that the forced lazy
>context flush in enter_state completes before dom0/vcpu0 enters its
>context switch, since the softirq is sent out before the pause. How do
>we find a safe point where we know dom0/vcpu0 has definitely been
>switched out?
>
>vcpu_set_affinity doesn't solve the problem, since the migrated vcpu
>won't continue with the previous flow. Or do you mean forcing the user
>to set such affinity explicitly before requesting suspend?
>
>Thanks,
>Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel