WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

Re: [Xen-API] vm export bug in XCP 0.5

To: George Shuklin <george.shuklin@xxxxxxxxx>
Subject: Re: [Xen-API] vm export bug in XCP 0.5
From: Jonathan Ludlam <Jonathan.Ludlam@xxxxxxxxxxxxx>
Date: Mon, 20 Dec 2010 15:29:22 +0000
Accept-language: en-US
Cc: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 20 Dec 2010 07:29:41 -0800
In-reply-to: <1292855656.1938.20.camel@mabase>
References: <1292855656.1938.20.camel@mabase>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
Thread-topic: [Xen-API] vm export bug in XCP 0.5
Had the server noticed that the CLI had gone away after you ctrl-c'd it? If you 
run:

xe task-list

Does it still show the export? Does it still fail to start if you wait until 
the export task has gone away?
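The lingering task can be checked and, if necessary, cleared by hand. A rough sketch of pulling a pending task's uuid out of `xe task-list` output with awk; the uuid and name-label here are made up for illustration (there is no live pool in this sketch, so sample output is hard-coded), and the cleanup commands at the end are a suggestion, not something from this thread:

```shell
# Sketch only: the uuid and name-label below are invented for illustration.
# On a live host you would capture real output with: sample_output=$(xe task-list)
sample_output='uuid ( RO)                : 11111111-2222-3333-4444-555555555555
          name-label ( RO): Export
    name-description ( RO): Export of VM test
              status ( RO): pending'

# Remember each record's uuid; print it when the record's status line says
# "pending" (mirrors the record layout that xe task-list prints).
task_uuid=$(printf '%s\n' "$sample_output" |
    awk '/^uuid/ {u = $NF} /status/ && /pending/ {print u}')
echo "$task_uuid"

# With a live pool, the stuck export and its stale control-domain VBD
# could then be cleared with something like:
#   xe task-cancel uuid="$task_uuid"
#   xe vbd-list vdi-uuid=<vdi-uuid> params=uuid    # find the stale VBD
#   xe vbd-unplug uuid=<vbd-uuid> && xe vbd-destroy uuid=<vbd-uuid>
```

Note that `xe vbd-unplug` can fail if the device is still held open by the export process, so cancelling (or waiting out) the task first is the safer order.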

Jon

On 20 Dec 2010, at 14:34, George Shuklin wrote:

> Good day.
> 
> The VBD used by the control domain for a VM export is not destroyed if
> the export is cancelled.
> 
> Steps to reproduce (may include some unnecessary steps):
> 
> a) Create a VM with its VDI on lvmoiscsi storage, and set the host's
> suspend SR to the lvmoiscsi storage
> b) Start the VM on a slave host
> c) Suspend the VM
> d) Start an export to a file (in my case an NFS share) via the CLI,
> directly from the slave host where the VM was resident before the suspend
> e) Interrupt the export 20-30 seconds after it starts by pressing Ctrl-C
> f) Try to start the VM (vm-start); got: 
> xe vm-resume vm=test
> The server failed to handle your request, due to an internal error.  The
> given message may give details useful for debugging the problem.
> message: Failure("The VDI 4813431c-56ee-4a68-811a-b15b69a20e57 is
> already attached in RO mode; it can't be attached in RW mode!")
> 
> Yes, the VBD was attached to the control domain:
> 
> uuid ( RO)             : b10b03c1-79f1-3406-aea1-46a13bb040aa
>          vm-uuid ( RO): d4e93255-cbc0-43a6-95cc-936c9dcdd79c
>    vm-name-label ( RO): Control domain on host: cvt-xh3
>         vdi-uuid ( RO): 4813431c-56ee-4a68-811a-b15b69a20e57
>            empty ( RO): false
>           device ( RO): xvda
> 
> 
> 


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api
