Hi Anthony,
Thanks for your comments.
I measured the DomVTI boot-up time to the EFI shell (on CS10559):

  time    emulation
  ------  ---------
  22 sec  none
  22 sec  pal_halt_light for domU only (Anthony's idea)
  28 sec  pal_halt_light for domU and dom0

Note that if pal_halt_light is emulated only for domU, the idle domain
does not work.
Which solution is better?
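
For reference, here is a minimal sketch of the domU-only variant,
following Anthony's pseudocode quoted below (the case label, status
handling, and the do_block() call are assumptions on my side; the real
Xen/ia64 PAL emulation path differs in detail):

  /* Sketch only: assumes a switch over the PAL index inside the
   * Xen/ia64 PAL emulation handler. */
  case PAL_HALT_LIGHT:
          if (current->domain == dom0) {
                  /* dom0: forward the halt to real PAL */
                  status = ia64_pal_halt_light();
          } else {
                  /* domU: block this vcpu instead, so the physical
                   * cpu is free until an event wakes it up */
                  do_block();
                  status = PAL_STATUS_SUCCESS;
          }
          break;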
Thanks,
Atsushi SAKAI
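
P.S.
For the short-term option of locking vcpu migration during DomVTI
boot-up (quoted below), one way to test it by hand is to pin the vcpus
with the xm tool, e.g. (the domain name and cpu numbers here are only
an example; the exact syntax may differ by xm version):

  # pin vcpu 0 of domain vti1 to physical cpu 1
  xm vcpu-pin vti1 0 1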
>Sakai-san,
>Another short-term approach:
>When emulating PAL_HALT_LIGHT:
>if (dom0)
>    ia64_pal_halt_light();
>else  /* domU */
>    do_block();
>
>Thanks,
>anthony
>>-----Original Message-----
>>From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Atsushi
>>SAKAI
>>Sent: 2006/7/11 21:46
>>To: Alex Williamson; Zhang, Xiantao
>>Cc: Isaku Yamahata; xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>>Subject: Re: [Xen-ia64-devel] [IPF-ia64] with Cset 10690, creating a
>>VTI make xen0 hang
>>
>>Hi Alex,
>>
>>Sorry for the late reply.
>>
>>I found that your problem (the boot-time difference with the
>>PAL_HALT_LIGHT emulation patch) occurs in SMP(credit).
>>However, it does not occur in UP, SMP(bvt), or SMP(credit w/ affinity).
>>
>>I think the emulation of pal_halt_light for domU
>>does not work well for DomVTI boot-up
>>under credit scheduling w/o affinity.
>>
>>Also, considering Xiantao's survey,
>>qemu generates heavy I/O operations at boot-up.
>>
>>Considering the above two conditions,
>>I think the credit scheduler algorithm does not take
>>the blocked state (caused by pal_halt_light emulation) into account,
>>so I want to switch off vcpu migration under heavy load.
>>
>>My plan is as follows.
>>
>>1) In the short term,
>>I want to avoid this problem either by
>>disabling the PAL_HALT_LIGHT emulation while DomVTI boots up,
>>or by
>>locking vcpu migration while DomVTI boots up
>>(when the credit scheduler runs).
>>
>>2) In the long term,
>>I will make a patch to avoid this problem
>>(taking heavy I/O with vcpu migration into account).
>>
>>N.B.
>>I checked under CS 10559 (where the original patch was made).
>>
>>Thanks,
>>Atsushi SAKAI
>>
>>>On Tue, 2006-07-11 at 19:42 +0800, Zhang, Xiantao wrote:
>>>> Hi Alex,
>>>> This issue seems to be caused by Cset 10688. In vcpu_itr_d, the current
>>>> logic purges the VHPT with cpu_flush_vhpt_range, but this is very heavy
>>>> for xen0. When creating a VTi domain at an early stage, I/O operation is
>>>> very excessive, so qemu is scheduled out and in very frequently, and
>>>> this logic is executed every time. In addition, cpu_flush_vhpt_range's
>>>> use of the identity map to purge the VHPT may cause more TLB misses,
>>>> since there is no TR mapping. If the vcpu_flush_tlb_vhpt_range logic is
>>>> removed (although it is definitely needed), VTi seems to become healthy.
>>>> Maybe potential bugs exist there. :)
>>>
>>> Thanks for investigating, Xiantao. Isaku, any thoughts on how to
>>>regain VTI performance? Thanks,
>>>
>>> Alex
>>>
>>>--
>>>Alex Williamson HP Open Source & Linux Org.
>>>
>>>
>>>
>>
>
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel