WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
RE: [Xen-devel] HVM Save/Restore status.

To: "Keir Fraser" <keir@xxxxxxxxxxxxx>, "Tim Deegan" <Tim.Deegan@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] HVM Save/Restore status.
From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
Date: Wed, 25 Apr 2007 19:07:07 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, "Woller, Thomas" <thomas.woller@xxxxxxx>
Delivery-date: Wed, 25 Apr 2007 10:08:52 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C2554592.DE2C%keir@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AceHQ1NhmcwbUEeuQemQtofsXL8/jAAAMWQgAAIiRHAAAfGQEAABYcN8AAAnaDA=
Thread-topic: [Xen-devel] HVM Save/Restore status.
 

> -----Original Message-----
> From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx] 
> Sent: 25 April 2007 17:51
> To: Petersson, Mats; Tim Deegan
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Woller, Thomas
> Subject: Re: [Xen-devel] HVM Save/Restore status.
> 
> On 25/4/07 17:15, "Petersson, Mats" <Mats.Petersson@xxxxxxx> wrote:
> 
> > So, a few printfs later: the first time (which succeeds) and the
> > second time (which fails) use exactly the same frame numbers (1fff,
> > 1ffe, 1ffd). It fails on the FIRST (I split the "if( ... [0] || ...
> > [1] || ... [2] )" into separate lines, and print the failure on each
> > with a "[n]" to indicate which one failed, and it got [0] in the
> > printout).
> 
> Mats,
> 
> Can you try adding 1 to p2m_size after the line:
> p2m_size = xc_memory_op(xc_handle, XENMEM_maximum_gpfn, &dom);
> in xc_domain_save.c, please. I think we have an out-by-one error that
> you are triggering because your mini-domain does not drive the
> cirrus_vga lfb and hence does not have any 'video ram' mapped in the
> RAM hole. You might also want to print p2m_size in xc_domain_save and
> confirm this hypothesis that way too.

Ok, here goes:
First save: p2m_size = 0xfffff (succeeds)
Second save: p2m_size = 0x1fff (fails)

I'm a little bit surprised by the first number, as it's about 4GB(?)
(my domain is officially only 32MB, and uses a whole lot less actual
memory), but I guess the second number should be 0x2000 if it's the
actual size rather than the highest pfn number available in the
guest. Does that make sense to you?

[Aside from my printout, there's also an existing printout in xend.log
at the start of xc_domain_restore, "p2m_size = xxxxx", which displays
the same data as I've reported above, both before my change and after
it, so I do believe my printout isn't completely bogus.]

--
Mats
> 
> This would also bite us for other guests with more than 4GB (we'd
> lose a page per save/restore, I think). So this is a nice one to fix
> before 3.0.5 if I'm right!
> 
>  -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel