On 29/4/08 14:39, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>> ctxt_switch_{from,to} exist only in x86 Xen and are called from a single
>> hook point out from the common scheduler. Thus either they both happen
>> before, or both happen after, current is changed by the common scheduler. It
>
> Maybe I'm mistaken (or it is being done twice with no good reason), but
> I see a set_current(next) in x86's context_switch() ...
Um, good point, I'd forgotten exactly how the code fitted together. Anyhow,
the reason you see ctxt_switch_{from,to} happening after set_current() is
that context_switch() and __context_switch() can actually be decoupled.
When switching to the idle vcpu we run context_switch() but we do not run
__context_switch().
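To make that concrete, the control flow is roughly as below -- a heavily
simplified sketch, not the exact code in xen/arch/x86/domain.c:

    /* Heavily simplified sketch of the x86 context_switch() path. */
    void context_switch(struct vcpu *prev, struct vcpu *next)
    {
        unsigned int cpu = smp_processor_id();

        set_current(next);          /* 'current' is updated here... */

        if ( (per_cpu(curr_vcpu, cpu) == next) || is_idle_vcpu(next) )
        {
            /* Switching to the idle vcpu (or back to the vcpu whose
             * state was lazily left loaded): skip __context_switch(). */
            local_irq_enable();
        }
        else
        {
            /* ...but the architectural state, including the
             * ctxt_switch_{from,to} hooks, is only switched here. */
            __context_switch();
        }

        context_saved(prev);
    }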
> If pages mapped that way survive context switches, then it would
> certainly be possible to map them once and keep them until no longer
> needed. Doing this during context switch was more as an attempt to
> conserve on virtual address use (so other vCPU-s of the same guest
> not using this functionality would have less chances of running out
> of space). The background is that I think that it'll also be necessary
> to extend MAX_VIRT_CPUS beyond 32 at some not too distant point
> (at least in dom0 for CPU frequency management - or do you have
> another scheme in mind how to deal with systems having more than
> 32 CPU threads), resulting in more pressure on the address space.
I'm hoping that Intel's patches to allow uniproc dom0 to perform multiproc
Cx and Px state management will be acceptable. Apart from that, yes we may
have to increase MAX_VIRT_CPUS.
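On the map-once point: yes, map_domain_page_global() mappings persist across
context switches, so the mapping could be established once at vcpu setup and
torn down at destruction, roughly along these lines (a rough sketch only; the
field name is made up):

    /* Rough sketch: map the page once, at vcpu setup time, rather than
     * on every context switch.  'v->arch.example_va' is a hypothetical
     * field, not an existing one. */
    static int setup_persistent_mapping(struct vcpu *v, unsigned long mfn)
    {
        void *va = map_domain_page_global(mfn);

        if ( va == NULL )
            return -ENOMEM;
        v->arch.example_va = va;
        return 0;
    }

    static void teardown_persistent_mapping(struct vcpu *v)
    {
        if ( v->arch.example_va != NULL )
        {
            unmap_domain_page_global(v->arch.example_va);
            v->arch.example_va = NULL;
        }
    }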
> I know your position here, but - are all 32-on-64 migration/save/restore
> issues meanwhile resolved (that is, can the tools meanwhile deal with
> either size domains no matter whether using a 32- or 64-bit dom0)? If
> not, there may be reasons beyond that of needing vm86 mode that
> might force people to stay with 32-bit Xen. (I certainly agree that there
> are unavoidable limitations, but obviously there is a big difference
> between requiring 64 bytes and 4k per vCPU for this particular
> functionality.)
I don't really see a few kilobytes of overhead per vcpu as very significant.
Given the limitations of the map_domain_page_global() address space, we're
limiting ourselves to probably around 700-800 vcpus. That's quite a lot imo!
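(Back-of-envelope, with the constant hedged because I haven't re-checked it:
assuming roughly 3 MB of map_domain_page_global() virtual address space and
one 4 kB page per vcpu,

    3 MB / 4 kB = 3072 / 4 = 768 mappings,

which, allowing some slack for other users of the global map, is where the
700-800 figure comes from.)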
I'm not sure of our position regarding 32-on-64 save/restore compatibility.
Tim Deegan made some patches a while ago, but those were mainly focused on
correctly saving 64-bit HVM domUs from a 32-bit dom0. I also know that
Oracle floated some patches a while ago; I don't think they ever got
posted for inclusion in xen-unstable though. *However* I do know that I'd
rather we spent time fixing 32-on-64 save/restore compatibility than
fretting about and optimising 32-bit Xen scalability. The former has greater
long-term usefulness.
-- Keir