WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)
From: George Shuklin <george.shuklin@xxxxxxxxx>
Date: Mon, 18 Jul 2011 15:50:23 +0400
In-reply-to: <81A73678E76EA642801C8F2E4823AD21BC2D12C582@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <81A73678E76EA642801C8F2E4823AD21BC2D12C569@xxxxxxxxxxxxxxxxxxxxxxxxx> <1310966125.29412.75.camel@ramone> <81A73678E76EA642801C8F2E4823AD21BC2D12C582@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, 18/07/2011 at 11:25 +0100, Dave Scott wrote:

> 
> It seems that DRBD can operate in 3 different synchronization modes:
> 
> 1. fully synchronous: writes are ACK'ed only when written to both disks
> 2. asynchronous: writes are ACK'ed when written to the primary disk (data is 
> somewhere in-flight to the secondary)
> 3. semi-synchronous: writes are ACK'ed when written to the primary disk and 
> in the memory (not disk) of the secondary
> 
> Apparently most people run it in fully synchronous mode over a fast LAN. 
> Provided we could get DRBD to flush outstanding updates and guarantee that 
> the two block devices are identical during the migration downtime when the 
> domain is shut down, I guess we could use any of these methods. Although if
> fully synchronous is the most common option, we may want to stick with that?
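
For reference, these modes are selected with DRBD's protocol setting:
'C' is fully synchronous, 'A' asynchronous and 'B' semi-synchronous. A
minimal sketch of a resource stanza (the resource name, devices, hosts
and addresses are all made up):

    resource r0 {
      protocol C;   # C = fully sync, B = semi-sync, A = async
      on host-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
      }
      on host-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
      }
    }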

I think we can make this an option on the operation. If the user is not
sure about the quality of the network/disks, they can use protocol 'C'.
By default we can use 'B', and in some (dude-you-know-what-you-are-doing)
cases we can allow 'A'.
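
A hypothetical sketch of how such an option could map onto the DRBD
protocol letters (the option names and the function are made up, not an
existing xapi API):

    # Hypothetical sketch: expose the replication protocol as a migrate
    # option. Option names and the default are assumptions.
    DRBD_PROTOCOL_FOR_OPTION = {
        'paranoid': 'C',  # fully synchronous
        'default':  'B',  # semi-synchronous
        'fast':     'A',  # asynchronous
    }

    def drbd_protocol(option='default'):
        return DRBD_PROTOCOL_FOR_OPTION.get(option, 'B')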

Protocol 'B' seems fine, because /dev/drbd on the recipient already has
consistent data; it is just unknown whether that data has been flushed
to the underlying block device or not. For migration purposes this seems
fine, because it will be flushed within a short time and we can
read/write a consistent area.
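
A minimal sketch of a final consistency check before cutover, assuming
DRBD 8.x, which reports states in /proc/drbd (the polling interval and
timeout are made up; this confirms both sides are connected and
UpToDate, it does not by itself prove the secondary has flushed to
disk):

    # Minimal sketch, assuming DRBD 8.x: poll /proc/drbd until the
    # resource is connected and both disk states report UpToDate.
    import time

    def wait_until_uptodate(timeout=60.0, interval=0.5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            with open('/proc/drbd') as f:
                status = f.read()
            # DRBD 8.x prints e.g. "cs:Connected ... ds:UpToDate/UpToDate"
            if 'cs:Connected' in status and 'ds:UpToDate/UpToDate' in status:
                return True
            time.sleep(interval)
        return False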


> > Anyway, it's not exactly a rainy weekend project, so if you want
> > consistent mirroring, there doesn't seem to be anything better than
> > DRBD
> > around the corner.
> 
> It did rain this weekend :) So I've half-written a python module for 
> configuring and controlling DRBD:
> 
> https://github.com/djs55/drbd-manager
> 
> It'll be interesting to see how this performs in practice. For some realistic 
> workloads I'd quite like to measure
> 1. total migration time
> 2. total migration downtime
> 3. ... effect on the guest during migration (somehow)
> 
> For (3) I would expect that continuous replication would slow down guest I/O 
> more during the migrate than explicit snapshot/copy (as if every I/O 
> performed a "mini snapshot/copy") but it would probably improve the downtime 
> (2), since there would be no final disk copy.
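
For (2), a rough sketch of one way to measure downtime from a third
host: timestamp ping replies to the guest during the migrate and report
the longest gap (the hostname, duration and interval are made up):

    # Rough sketch: estimate migration downtime as the longest gap
    # between ping replies from the guest. Names/intervals are made up.
    import os
    import subprocess
    import time

    def longest_gap(guest='guest.example.com', duration=120, interval=0.2):
        devnull = open(os.devnull, 'w')
        last_reply = None
        worst = 0.0
        end = time.time() + duration
        while time.time() < end:
            ok = subprocess.call(['ping', '-c', '1', '-W', '1', guest],
                                 stdout=devnull, stderr=devnull) == 0
            now = time.time()
            if ok:
                if last_reply is not None:
                    worst = max(worst, now - last_reply)
                last_reply = now
            time.sleep(interval)
        return worst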


> What would you recommend for workloads / measurements?

From my experience of DRBD usage with protocol 'C' and external
metadata, in a good network environment the overhead is almost
negligible (I got 110 Mb/s linear copy instead of 115 on a single SATA
drive, and 850 Mb/s instead of 1.5 Gb/s on a huge RAID0 array; both
tests were on an optical 10G network with LRO offload and super-jumbo
frames).

As a good workload for measurement I propose using mkfs. It actually
creates a very harsh load on the storage system (random and sequential
operations, with both small and big chunks to be written; we test
writes, because reads will happen from the local drive only and should
not slow the process down).

As far as I have tested, mkfs on a given disk takes an almost constant
time on the same hardware, so any difference in performance will be
easily noticeable via the time utility.
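
A sketch of that measurement (the device path and filesystem type are
assumptions; any filesystem whose mkfs writes a lot would do):

    # Sketch of the proposed workload: time mkfs on the DRBD device and
    # compare runs with replication enabled vs. plain local disk.
    import subprocess
    import time

    def time_mkfs(device='/dev/drbd0', fs='ext3'):
        start = time.time()
        subprocess.check_call(['mkfs', '-t', fs, device])
        return time.time() - start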



_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api