Re: [Xen-devel] x86's context switch ordering of operations

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] x86's context switch ordering of operations
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 29 Apr 2008 10:03:40 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 29 Apr 2008 10:04:15 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <48173326.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <48173326.76E4.0078.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.12 (X11/20080418)
Jan Beulich wrote:
> 1) How does the storing of vcpu_info_mfn in the hypervisor survive
> migration or save/restore? The mainline Linux code, which uses this
> hypercall, doesn't appear to make any attempt to revert to using the
> default location during suspend or to re-set up the alternate location
> during resume (but of course I'm not sure that the guest is
> save/restore/migrate ready in the first place). I would imagine it to
> be at least difficult for the guest to manage its state post-resume
> without the hypervisor having restored the previously established
> alternative placement.

The only kernel which uses it is 32-on-32 pvops, and that doesn't currently support migration. It would be easy for the guest to restore that state for itself shortly after resuming.
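As a minimal sketch, guest-side re-registration on resume could look
roughly like the below, modeled on the pvops xen_vcpu_setup() path (the
xen_vcpu_restore_on_resume() hook name and the exact resume ordering are
illustrative assumptions, not current mainline code):

    /* Sketch: re-register the per-cpu vcpu_info shortly after resume.
     * Assumes xen_vcpu_info is the guest's per-cpu alternate area, as
     * in the 32-on-32 pvops port; the hook point is hypothetical. */
    static void xen_vcpu_restore_on_resume(int cpu)
    {
            struct vcpu_register_vcpu_info info;
            int err;

            info.mfn = virt_to_mfn(&per_cpu(xen_vcpu_info, cpu));
            info.offset = offset_in_page(&per_cpu(xen_vcpu_info, cpu));

            err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
                                     cpu, &info);
            if (err)
                    /* Registration failed: fall back to the default
                     * location in the shared_info page. */
                    per_cpu(xen_vcpu, cpu) =
                            &HYPERVISOR_shared_info->vcpu_info[cpu];
            else
                    per_cpu(xen_vcpu, cpu) =
                            &per_cpu(xen_vcpu_info, cpu);
    }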

I still need to add 32-on-64 and 64-on-64 implementations for this. Just haven't looked at it yet.

> 2) The implementation in the hypervisor seems to have added yet another
> scalability issue (on 32-bit), as this is being carried out using
> map_domain_page_global() - if there are sufficiently many guests with
> sufficiently many vCPUs, there just won't be any space left at some
> point. This worries me especially in the context of seeing a call to
> sh_map_domain_page_global() that is followed by a BUG_ON() checking
> whether the call failed.

Yes, we discussed it, and, erm, don't do that. Guests should be able to deal with VCPUOP_register_vcpu_info failing, but that doesn't address overall heap starvation.
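To illustrate that failure path: rather than BUG_ON(), the
hypervisor-side registration could propagate the mapping failure back
through the hypercall, roughly as in this simplified sketch (not the
actual xen/arch/x86 code; the field names are abbreviated):

    /* Sketch: map the guest-supplied page and fail gracefully if the
     * global mapping space is exhausted, so the guest sees -ENOMEM
     * from VCPUOP_register_vcpu_info instead of crashing Xen. */
    static int register_vcpu_info(struct vcpu *v, unsigned long mfn,
                                  unsigned int offset)
    {
            void *mapping = map_domain_page_global(mfn);

            if (mapping == NULL)
                    return -ENOMEM;  /* guest must cope with this */

            v->vcpu_info = (vcpu_info_t *)((char *)mapping + offset);
            v->arch.vcpu_info_mfn = mfn;
            return 0;
    }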

   J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel