Re: [Xen-devel] XCP: sr driver question wrt vm-migrate

To: YAMAMOTO Takashi <yamamoto@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] XCP: sr driver question wrt vm-migrate
From: Jonathan Ludlam <Jonathan.Ludlam@xxxxxxxxxxxxx>
Date: Wed, 16 Jun 2010 13:06:28 +0100
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 16 Jun 2010 05:07:25 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100616061920.4AE78718FD@xxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20100608071147.8D4DB719F7@xxxxxxxxxxxxxxxx> <20100616061920.4AE78718FD@xxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsNTFo6UlLOCdoOT8mEuMhym3dKeQ==
Thread-topic: [Xen-devel] XCP: sr driver question wrt vm-migrate
This is usually the result of an earlier failure. Could you grep through the 
logs to get the whole trace of what went on? The best thing to do is to grep for 
VM.pool_migrate, then find the task reference (the hex string beginning with 
'R:' immediately after the 'VM.pool_migrate') and grep for that string in the 
logs on both the source and destination machines.
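
If it helps, something along these lines pulls the whole trace out in one go. It's 
only a sketch: it assumes the xapi log lives at /var/log/xensource.log and ignores 
rotated logs, so adjust the path for your installation.

# Sketch: extract the trace for each VM.pool_migrate task from the xapi log.
# /var/log/xensource.log is an assumption; rotated/compressed logs are skipped.
import re
import sys

LOG = "/var/log/xensource.log"

def find_task_refs(log=LOG):
    """Task references ('R:' followed by hex) on VM.pool_migrate lines."""
    refs = set()
    with open(log) as f:
        for line in f:
            if "VM.pool_migrate" in line:
                m = re.search(r"R:[0-9a-f]+", line)
                if m:
                    refs.add(m.group(0))
    return refs

def print_trace(ref, log=LOG):
    """Print every log line that mentions the given task reference."""
    with open(log) as f:
        for line in f:
            if ref in line:
                sys.stdout.write(line)

if __name__ == "__main__":
    for ref in sorted(find_task_refs()):
        print("=== %s ===" % ref)
        print_trace(ref)

Run it on both the source and the destination host; the interesting half of the 
trace is often on the other side.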

Have a look through these, and if it's still not obvious what went wrong, post 
them to the list and we can take a look.

Cheers,

Jon


On 16 Jun 2010, at 07:19, YAMAMOTO Takashi wrote:

> hi,
> 
> after making my sr driver defer the attach operation as you suggested,
> i got migration working.  thanks!
> 
> however, when repeating live migration between two hosts for testing,
> i got the following error.  it doesn't seem very reproducible.
> do you have any idea?
> 
> YAMAMOTO Takashi
> 
> + xe vm-migrate live=true uuid=23ecfa58-aa30-ea6a-f9fe-7cb2a5487592 
> host=67b8b07b-8c50-4677-a511-beb196ea766f
> An error occurred during the migration process.
> vm: 23ecfa58-aa30-ea6a-f9fe-7cb2a5487592 (CentOS53x64-1)
> source: eea41bdd-d2ce-4a9a-bc51-1ca286320296 (s6)
> destination: 67b8b07b-8c50-4677-a511-beb196ea766f (s1)
> msg: Caught exception INTERNAL_ERROR: [ 
> Xapi_vm_migrate.Remote_failed("unmarshalling result code from remote") ] at 
> last minute during migration
> 
>> hi,
>> 
>> i'll try deferring the attach operation to vdi_activate.
>> thanks!
>> 
>> YAMAMOTO Takashi
>> 
>>> Yup, vdi activate is the way forward.
>>> 
>>> If you advertise VDI_ACTIVATE and VDI_DEACTIVATE in the 'get_driver_info' 
>>> response, xapi will call the following during the start-migrate-shutdown 
>>> lifecycle:
>>> 
>>> VM start:
>>> 
>>> host A: VDI.attach
>>> host A: VDI.activate
>>> 
>>> VM migrate:
>>> 
>>> host B: VDI.attach
>>> 
>>>  (VM pauses on host A)
>>> 
>>> host A: VDI.deactivate
>>> host B: VDI.activate
>>> 
>>>  (VM unpauses on host B)
>>> 
>>> host A: VDI.detach
>>> 
>>> VM shutdown:
>>> 
>>> host B: VDI.deactivate
>>> host B: VDI.detach
>>> 
>>> so the disk is never activated on both hosts at once, but it does still go 
>>> through a period when it is attached to both hosts at once. So you could, 
>>> for example, check that the disk *could* be attached on the vdi_attach 
>>> SMAPI call, and actually attach it properly on the vdi_activate call.
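>>> 
>>> Very roughly, the driver ends up shaped something like this. This is only a 
>>> sketch: the class and the _volume_* helpers below are invented names, not 
>>> the real SM plugin interface.
>>> 
>>> # Illustrative skeleton only: vdi_attach is reduced to a check, and the
>>> # exclusive attach work moves into vdi_activate, which xapi never runs
>>> # on two hosts at once.
>>> 
>>> CAPABILITIES = ["VDI_ATTACH", "VDI_DETACH",
>>>                 "VDI_ACTIVATE", "VDI_DEACTIVATE"]
>>> 
>>> class ExampleVDI(object):
>>> 
>>>     def get_driver_info(self):
>>>         # Advertising VDI_ACTIVATE/VDI_DEACTIVATE makes xapi drive the
>>>         # attach/activate lifecycle shown above during migration.
>>>         return {"capabilities": CAPABILITIES}
>>> 
>>>     def attach(self, sr_uuid, vdi_uuid):
>>>         # Runs on both hosts during migration: only check that the
>>>         # volume *could* be attached here; take nothing exclusive yet.
>>>         if not self._volume_reachable(vdi_uuid):
>>>             raise Exception("VDI %s not reachable from this host" % vdi_uuid)
>>> 
>>>     def activate(self, sr_uuid, vdi_uuid):
>>>         # Runs on at most one host at a time: do the real attach here.
>>>         self._attach_volume(vdi_uuid)
>>> 
>>>     def deactivate(self, sr_uuid, vdi_uuid):
>>>         # Release the volume so the other host's activate can succeed.
>>>         self._detach_volume(vdi_uuid)
>>> 
>>>     def detach(self, sr_uuid, vdi_uuid):
>>>         # Nothing left to undo if the real work happened in activate.
>>>         pass
>>> 
>>>     # Placeholders standing in for the product-specific volume code.
>>>     def _volume_reachable(self, vdi_uuid):
>>>         return True
>>> 
>>>     def _attach_volume(self, vdi_uuid):
>>>         pass
>>> 
>>>     def _detach_volume(self, vdi_uuid):
>>>         pass
>>> 
>>> The real plugin obviously needs the rest of the SMAPI entry points and error 
>>> handling; the point is only the split between the attach check and the 
>>> activate work.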
>>> 
>>> Hope this helps,
>>> 
>>> Jon
>>> 
>>> 
>>> On 7 Jun 2010, at 09:26, YAMAMOTO Takashi wrote:
>>> 
>>>> hi,
>>>> 
>>>> on vm-migrate, xapi attaches a vdi on the migrate-to host
>>>> before detaching it on the migrate-from host.
>>>> unfortunately it doesn't work for our product, which doesn't
>>>> provide a way to attach a volume to multiple hosts at the same time.
>>>> is VDI_ACTIVATE something i can use as a workaround?
>>>> or any other suggestions?
>>>> 
>>>> YAMAMOTO Takashi
>>>> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel