This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH 0/4] XSAVE/XRSTOR fixes and enhancements

To: Weidong Han <weidong.han@xxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 0/4] XSAVE/XRSTOR fixes and enhancements
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 31 Aug 2010 08:41:30 +0100
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 31 Aug 2010 00:42:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C7CB0AE.6050908@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActI3wpuLDrNBwjsRxyO6Kg2/XJWJQAAOJ+e
Thread-topic: [Xen-devel] [PATCH 0/4] XSAVE/XRSTOR fixes and enhancements
User-agent: Microsoft-Entourage/
On 31/08/2010 08:35, "Weidong Han" <weidong.han@xxxxxxxxx> wrote:

>> Breaks backward compatibility by changing size of vcpu_guest_context (part
>> of the PV guest ABI). Totally unacceptable -- is this a nasty attempt to get
> Yes, it cannot restore a guest saved by an old Xen. Do you mean we should
> not change the size of vcpu_guest_context in the future?

Yes, and it's not just a save/restore issue. The PV guest itself handles
vcpu_guest_context when booting secondary vcpus (via VCPUOP_initialise). The
structure layout/size is set in stone.

 -- Keir

Xen-devel mailing list
