WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-devel] Re: [PATCH 0 of 2 V5] libxc: checkpoint compression

To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH 0 of 2 V5] libxc: checkpoint compression
From: Shriram Rajagopalan <rshriram@xxxxxxxxx>
Date: Tue, 8 Nov 2011 11:41:38 -0800
Cc: "brendan@xxxxxxxxx" <brendan@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Delivery-date: Tue, 08 Nov 2011 11:43:24 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <CAP8mzPPZcU6H_EtadDZZ2aTwXMxxtAQntpFPZ7aQ1g5k1nSKcA@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <patchbomb.1320350703@xxxxxxxxxxxxxxxxxxx> <20147.55048.90214.665321@xxxxxxxxxxxxxxxxxxxxxxxx> <CAP8mzPM1Fxj6M5R+H79nKToSCxHxsMaq1CKN=csjRMMocG0LRA@xxxxxxxxxxxxxx> <CAP8mzPOqv9oiNyCRTSz9DaBDwB-MtOvPJ9+3xc5NwAepOgUm7w@xxxxxxxxxxxxxx> <1320771752.955.109.camel@xxxxxxxxxxxxxxxxxxxxxx> <CAP8mzPO49gNVbouAB8nehBRC4K7Q37O+FgX8bGR_3EHSY8p6JQ@xxxxxxxxxxxxxx> <1320772565.955.110.camel@xxxxxxxxxxxxxxxxxxxxxx> <CAP8mzPPZcU6H_EtadDZZ2aTwXMxxtAQntpFPZ7aQ1g5k1nSKcA@xxxxxxxxxxxxxx>
Reply-to: rshriram@xxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, Nov 8, 2011 at 9:20 AM, Shriram Rajagopalan <rshriram@xxxxxxxxx> wrote:
On Tue, Nov 8, 2011 at 9:16 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Tue, 2011-11-08 at 17:13 +0000, Shriram Rajagopalan wrote:
> On Tue, Nov 8, 2011 at 9:02 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> wrote:
>         On Tue, 2011-11-08 at 16:51 +0000, Shriram Rajagopalan wrote:
>         > On Fri, Nov 4, 2011 at 12:21 PM, Shriram Rajagopalan
>         > <rshriram@xxxxxxxxx> wrote:
>         >         Why posix_memalign?
>         >
>         >         The compression code involves a lot of memcpys at 4K
>         >         granularity (dirty pages copied from domU's memory to
>         >         internal cache/page buffers etc). I would like to keep
>         >         these memcpys page aligned for purposes of speed. The
>         >         source pages (from domU) are already aligned. The
>         >         destination pages allocated by the compression code
>         >         need to be page aligned.
>         >
>         >         Correct me if I am wrong: mallocing a huge buffer for
>         >         this purpose is not optimal. malloc aligns allocations
>         >         on 16-byte (or 8-byte) boundaries, but if a 4K region
>         >         straddles two physical memory frames, then the memcpy
>         >         is going to be suboptimal. OTOH, memalign ensures that
>         >         we are dealing with just 2 memory frames as opposed to
>         >         3 (possible) frames with malloc.
>         >
>         >         A simple 8MB memcpy test shows an average of 500us
>         >         overhead for malloc-based allocation compared to
>         >         posix_memalign-based allocation. While this might seem
>         >         low, the checkpoints are being taken at high frequency
>         >         (every 20ms, for instance).
>         >
>         >         It is not okay to use malloc on other platforms. I
>         >         simply don't have access to other platforms to test
>         >         their equivalent versions, short of using something
>         >         like the qemu_memalign function.
>         >
>         >         I am open to suggestions :)
>
>
>         This is due to minios (aka stubdoms) not having posix_memalign,
>         right?
>
>         minios (or rather newlib) does appear to have memalign though,
>         which if true would also work, right? You could potentially
>         also implement posix_memalign in terms of memalign on minios
>         and avoid the ifdef.
>
>
> Sounds good. In that case, can I just post a patch to minios
> implementing posix_memalign, and will you then directly take the
> previous version V4 of this patch series (the one without #ifdefs)?

Well, *I* won't be taking any version of the patch but that sounds like
a sane plan to me, assuming V4 builds after your minios patch.


Oops, sorry. I was referring to IanJ.


Just realized I forgot to state why I had the #ifdef __linux__ in the first place.

a. minios lacks posix_memalign
b. From what I could find online, Solaris has no posix_memalign. I am not sure about NetBSD.
c. in tools/libxc/
xc_solaris.c uses memalign
xc_netbsd.c uses valloc
xc_minios.c uses memalign
xc_linux_osdep.c uses posix_memalign!

Further, the posix_memalign manpage states:
"posix_memalign() verifies that alignment matches the requirements detailed above.
 memalign() may not check that the boundary argument is correct."

This is reinforced by the comments in newlib-1.16.0's mallocr.c (newlib-1.16.0/newlib/libc/stdlib/):
"The alignment argument must be a power of two. This property is not
checked by memalign, so misuse may result in random runtime errors."

Judging by all this mess, I thought I was better off doing a #ifdef __linux__ and
resorting to simple malloc for the other platforms.
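For reference, the minios shim Ian suggested could look roughly like this. This is only a sketch: the function is named minios_posix_memalign here purely to avoid clashing with the libc symbol (on minios it would be posix_memalign itself), and it assumes newlib's memalign is available. It adds the argument checks that posix_memalign mandates but memalign may skip.

```c
#include <errno.h>
#include <stddef.h>
#include <malloc.h>   /* memalign() */

/*
 * Sketch of posix_memalign implemented in terms of memalign.
 * Named minios_posix_memalign only to avoid clashing with the libc
 * symbol; on minios it would be posix_memalign itself.
 *
 * Checks the requirements posix_memalign mandates but memalign may
 * not: alignment must be a nonzero power of two and a multiple of
 * sizeof(void *).
 */
static int minios_posix_memalign(void **memptr, size_t alignment, size_t size)
{
    void *p;

    if (alignment == 0 ||
        alignment % sizeof(void *) != 0 ||
        (alignment & (alignment - 1)) != 0)
        return EINVAL;

    p = memalign(alignment, size);
    if (p == NULL)
        return ENOMEM;

    *memptr = p;
    return 0;
}
```

With this in minios, the caller can use posix_memalign unconditionally and the #ifdef goes away.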

One alternative would be to bring back just the xc_memalign function, which was removed
by c/s 22520:

-void *xc_memalign(size_t alignment, size_t size)
-{
-#if defined(_POSIX_C_SOURCE) && !defined(__sun__)
-    int ret;
-    void *ptr;
-    ret = posix_memalign(&ptr, alignment, size);
-    if (ret != 0)
-        return NULL;
-    return ptr;
-#elif defined(__NetBSD__) || defined(__OpenBSD__)
-    return valloc(size);
-#else
-    return memalign(alignment, size);
-#endif
-}
-
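As a caller-side illustration (the helper name and page count are hypothetical, not from the patch series), the compression code's destination buffers would then be allocated page-aligned like so, so that each 4K copy from domU memory spans at most two physical frames:

```c
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/*
 * Hypothetical helper: allocate a zeroed, page-aligned buffer of
 * npages 4K pages.  On Linux this is posix_memalign directly; a
 * restored xc_memalign would hide the per-platform choice instead.
 */
static void *alloc_page_buffer(size_t npages)
{
    void *buf = NULL;

    if (posix_memalign(&buf, PAGE_SIZE, npages * PAGE_SIZE) != 0)
        return NULL;
    memset(buf, 0, npages * PAGE_SIZE);
    return buf;
}
```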

shriram
 
>
> thanks
> shriram
>
>
>         Ian.
>
>         >
>         >         shriram
>         >
>         >
>         > Ping.
>         >
>         >
>         >         On Fri, Nov 4, 2011 at 5:14 AM, Ian Jackson
>         >         <Ian.Jackson@xxxxxxxxxxxxx> wrote:
>         >                 rshriram@xxxxxxxxx writes ("[PATCH 0 of 2 V5]
>         >                 libxc: checkpoint compression"):
>         >                 > This patch series adds checkpoint
>         >                 > compression functionality, while running
>         >                 > under Remus.
>         >
>         >                 ...
>         >                 > Changes since last version:
>         >                 > 1. use posix_memalign only on linux
>         >                 >    platforms and switch to normal malloc
>         >                 >    for the rest. stubdom compiles
>         >                 >    successfully.
>         >
>         >                 Looking at this in more detail, I don't
>         >                 understand why you're using posix_memalign
>         >                 rather than just malloc, anyway.  If it's
>         >                 necessary to use posix_memalign on Linux, why
>         >                 is it OK to use malloc on other platforms ?
>         >
>         >                 Also this #ifdef is quite ugly.
>         >
>         >                 Ian.
>         >
>         >
>
>
>
>




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel