On 06/02/2010 09:24 AM, Rafal Wojtczuk wrote:
>> Why not just balloon the domain down?
>>     
> I thought it (well, rather the matching balloon up after restore) would cost 
> quite some CPU time; it used to, AFAIR. But nowadays it looks sensible, in the
> 90ms range. Yes, that is much cleaner, thank you for the hint.
>   
Aside from the cost of the hypercalls to actually give up the pages,
ballooning is just the same as memory allocation from the system's
perspective.
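For what it's worth, the balloon-down-before-save approach can be driven entirely from dom0. A minimal sketch with the classic xm toolstack (the domain name and memory sizes below are made up, and this obviously only runs on a Xen dom0):

```shell
# Shrink the guest's balloon target before saving (sizes in MiB, illustrative)
xm mem-set myguest 256
xm save myguest /var/lib/xen/save/myguest.img

# ...later: restore, then balloon back up to the original size
xm restore /var/lib/xen/save/myguest.img
xm mem-set myguest 1024
```

The same effect can be had programmatically by writing the guest's xenstore memory/target node, which is all `xm mem-set` does under the hood.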
>>> should be no disk reads at all). Is the single threaded nature of xenstored 
>>> the possible cause for the delays ?
>>>       
>> Have you tried oxenstored?  It works well for me, and seems to be a lot
>> faster.
>>     
> Do you mean 
> http://xenbits.xensource.com/ext/xen-ocaml-tools.hg
> ?
> After some tweaks to the Makefiles (-fPIC is required on x86_64 for the libs 
> sources) it compiles,
It builds out of the box for me on my x86-64 machine.
>  but then it bails during startup with 
> fatal error: exception Failure("ioctl bind_interdomain failed")
> This happens under xen-3.4.3; does it require 4.0.0 ?
>   
No, I don't think so, but it does have to be the first xenstore you run
after boot.  Ah, but Xen 4 probably has oxenstored build fixes and other
changes which aren't in 3.4.3.  In particular, I think it has been brought
into the main xen-unstable repo, rather than living off to the side.
But it is much quicker than the C one, I think primarily because it is
entirely memory resident.
> Well, it looks like xc_restore should _usually_ call 
> xc_map_foreign_batch once per batch of pages (once per 1024 pages read), which
> looks sensible. xc_add_mmu_update also tries to batch requests. There are 
> 432 occurrences of the ioctl syscall in the xc_restore strace output; I am not 
> sure whether that is damagingly numerous. 
>   
Time for some profiling to see where the time is going then.
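As a starting point for that profiling, `strace -c` gives a per-syscall time and count summary, and a quick grep over an existing log shows where the ioctls cluster. A sketch over a stand-in log (the sample lines below are illustrative, not from a real trace):

```shell
# Stand-in for a real capture made with: strace -f -o xc_restore.strace <restore cmd>
cat > /tmp/xc_restore.strace <<'EOF'
ioctl(7, IOCTL_PRIVCMD_HYPERCALL, 0x7fffd1e0) = 0
read(5, "\x00\x04\x00\x00"..., 4096) = 4096
ioctl(7, IOCTL_PRIVCMD_MMAPBATCH, 0x7fffd1e0) = 0
EOF

# Count ioctl calls in the log (on a real trace, compare against the 432 above)
grep -c '^ioctl(' /tmp/xc_restore.strace
```

On a real run, `strace -c -f` on the restore process would also show whether the time is actually spent in those ioctls or elsewhere (e.g. in reads from the image).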
    J
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel