On Sun, 2011-02-13 at 10:45 -0500, alice wan wrote:
> Hi all,
>
> I have a doubt about live migration: it may cause inconsistent VHD
> metadata between the two tapdisk2 processes.
>
> Suppose a VM migrates from host A to host B, and its image is a VHD
> file.
>
> On host B, the toolstack first creates the devices, which includes
> starting a tapdisk2 process; at this point tapdisk2 reads some of the
> VHD file's metadata. Then it runs xc_restore.
>
> On host A, before the last iteration (the stop-and-copy phase)
> starts, xc_save is still running and the VHD file keeps changing,
> metadata included. So the tapdisk2 process on host B has not read the
> newest VHD metadata.
>
> When tapdisk2 starts, it reads the footer, header, and BAT of the
> VHD file. The BAT structure in particular will cause problems if it
> is inconsistent.
>
> Maybe my doubt isn't a real problem, but I hope someone can clear it
> up for me. Thanks in advance.
If that's what's done right now in the toolchain, it's a real problem
and needs to be fixed.
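For reference, the metadata in question is what tapdisk2 snapshots
when it opens the image. A rough sketch of that read path for a
dynamic VHD, in Python, with field offsets per the VHD spec
(illustrative only, not tapdisk2's actual code):

import struct

SECTOR = 512

def read_vhd_metadata(path):
    with open(path, "rb") as f:
        # Footer: last 512 bytes (dynamic disks keep a copy at offset
        # 0 as well). All multi-byte fields are big-endian.
        f.seek(-SECTOR, 2)
        footer = f.read(SECTOR)
        assert footer[0:8] == b"conectix", "bad footer cookie"
        data_offset = struct.unpack(">Q", footer[16:24])[0]

        # Dynamic-disk header: 1024 bytes at the footer's data offset.
        f.seek(data_offset)
        header = f.read(1024)
        assert header[0:8] == b"cxsparse", "bad header cookie"
        table_offset = struct.unpack(">Q", header[16:24])[0]
        max_entries = struct.unpack(">I", header[28:32])[0]

        # BAT: one 4-byte big-endian sector number per data block;
        # 0xffffffff marks an unallocated block. This is exactly the
        # table that goes stale if the source keeps writing after the
        # destination has opened the image.
        f.seek(table_offset)
        bat = struct.unpack(">%dI" % max_entries,
                            f.read(4 * max_entries))

    return footer, header, bat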
Options:

A. Avoid VBD lifetime overlap. This is how XCP presently does it: XCP
has VDI.activate/deactivate operations, in addition to attach/detach,
to control storage during migration.

Attach/detach works as described above, overlapping across both hosts.
That overlap may even be desirable on non-shared storage nodes, as the
preferred transfer method, to avoid latency in stop-and-copy.

The simpler path is of course activate/deactivate semantics
everywhere, which are mutually exclusive: the image is only ever
active on one host at a time.

This is needed for any indirectly mapped disk format (vhd, qcow?,
etc.) on shared physical nodes.
Note that this doesn't only matter for metadata. There are physical
layers where exclusive login is preferred or even mandatory, so you
won't get access to the device at all before pre-copy is done and the
node can be released on A.
Diagram:

Node          A                               B

VM.migrate    .. pre-copy > < stop-and-copy > < resumed ...

VDI.attached  ..------------A--------------->
                    <-----------B-------------------..

VDI.active    -----------A---->  <----B-------..
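In code, the ordering for A looks roughly like this (Python, with
hypothetical toolstack hooks; the names are illustrative, not XCP's
actual API):

def migrate(vm, vdi, host_a, host_b):
    # Attachment may overlap across hosts, e.g. to transfer a
    # non-shared image during pre-copy. No I/O happens on B yet.
    host_b.vdi_attach(vdi)

    host_a.pre_copy(vm, host_b)        # VHD on A still changing
    host_a.stop_and_copy(vm, host_b)

    # Activation is mutually exclusive: B opens the image and reads
    # footer/header/BAT only after A has quiesced and flushed it.
    host_a.vdi_deactivate(vdi)
    host_b.vdi_activate(vdi)

    host_b.resume(vm)
    host_a.vdi_detach(vdi)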
B. Hack.

Let the toolstack issue a tap-ctl pause/unpause cycle on B before
resume. This will reopen the image and re-read the metadata.
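Roughly (Python; the pause/unpause flags assume blktap2's tap-ctl,
which takes the tapdisk pid and device minor, so check your tree for
the exact syntax):

import subprocess

def reopen_tapdisk(pid, minor):
    # Pause quiesces the datapath and closes the image; unpause
    # reopens it, re-reading footer/header/BAT from disk.
    subprocess.check_call(["tap-ctl", "pause",
                           "-p", str(pid), "-m", str(minor)])
    subprocess.check_call(["tap-ctl", "unpause",
                           "-p", str(pid), "-m", str(minor)])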
C. Back then, in the dark ages, blktap did this implicitly.
Every I/O request after disk creation ran an implicit close/open
cycle on the physical image.
Cheers,
Daniel