WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

Re: [Xen-devel] [PATCH 1 of 2] tools/libxc: Remus Checkpoint Compression

To: rshriram@xxxxxxxxx
Subject: Re: [Xen-devel] [PATCH 1 of 2] tools/libxc: Remus Checkpoint Compression
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Fri, 17 Jun 2011 14:55:50 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, ian.jackson@xxxxxxxxxxxxx
Delivery-date: Fri, 17 Jun 2011 06:57:01 -0700
In-reply-to: <BANLkTin7i0PE0_xBsa7Vbfd8r3up7E_c4Q@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <patchbomb.1308239405@xxxxxxxxxxxxxxxxxxx> <5dbafaf24c7036f3e24e.1308239406@xxxxxxxxxxxxxxxxxxx> <BANLkTim-5HNq37kAMp-7dqwQ56e4M48=SA@xxxxxxxxxxxxxx> <BANLkTin7i0PE0_xBsa7Vbfd8r3up7E_c4Q@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Fri, Jun 17, 2011 at 12:46 PM, Shriram Rajagopalan
<rshriram@xxxxxxxxx> wrote:
> On Fri, Jun 17, 2011 at 5:35 AM, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> wrote:
>>
>> On Thu, Jun 16, 2011 at 4:50 PM, Shriram Rajagopalan <rshriram@xxxxxxxxx>
>> wrote:
>> > @@ -846,6 +881,9 @@
>> >     if (!countpages)
>> >         return count;
>> >
>> > +    if (buf->compression)
>> > +        return pagebuf_get_one(xch, ctx, buf, fd, dom);
>>
>> I'm having trouble understanding the logic of this one.  What's
>> supposed to be going on here?
>>
> The original xc_domain_restore code expects the list of pages immediately
> after the list of pfns for a batch.
>
> With compression, the compressed chunks are attached at the tail end
> of the pagebuf:
> [<pfn list>]+, [XC_SAVE_ID_* stuff like vcpu info, hvm_ident_pt, etc],
>        [XC_SAVE_ID_COMPRESSED_DATA, compressed chunk]+
>
> So that if block diverts pagebuf_get_one()'s control flow to consume
> the rest of the pagebuf in the usual recursive manner.
>
> original flow:
>
> pagebuf_get_one()
>   read integer
>   if the integer is positive, it's a count of the number of pfns in
>   the pfn list; read count*sizeof(pfn) from io_fd
>     1.1 parse the pfn list and determine the number of valid pages
>         (countpages)
>     1.2 read (countpages * PAGE_SIZE) from io_fd
>         -- basically read a batch of pfns + pages
>   else if it's negative
>     check for one of the XC_SAVE_ID_* elements and read the
>     appropriate size from io_fd
>   return pagebuf_get_one()
>
> **********
> The modified flow:
>
> pagebuf_get_one()
>   read integer
>   if the integer is positive, it's a count of the number of pfns in
>   the pfn list; read count*sizeof(pfn) from io_fd
>     1.1 parse the pfn list and determine the number of valid pages
>         (countpages)
>>>  1.2 if (compression) return pagebuf_get_one()
>         -- basically accumulate all the pfns; the (compressed) pages
>            will come later
>   else if it's negative
>     check for one of the XC_SAVE_ID_* elements and read the
>     appropriate size from io_fd; this includes
>     XC_SAVE_ID_COMPRESSED_DATA.
>   return pagebuf_get_one()

OK -- you should add a comment there explaining that the data will
come later in an XC_SAVE_ID_COMPRESSED (or whatever) packet.

> oh shux, you are right. Thanks. I could probably send this as
> XC_SAVE_ID_COMPRESSION after the last iteration and enable
> pagebuf->compression once it's received.

Yeah, I think that should work -- the first round you'll be sending
uncompressed pages anyway.

Hmm -- while you're at it, could you add a comment somewhere in this
area saying that this is part of the wire protocol that can't be
changed without breaking backwards compatibility?  Just to make it
easier to avoid this kind of error in the future...

Other than that, looks all right to me.

(Keir / IanJ, let me know if you want me to take a close enough look
to give a proper ACK.)

Thanks,
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
