WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] A snapshot is not (really) a cow

To: Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] A snapshot is not (really) a cow
From: Peri Hankey <mpah@xxxxxxxxxxxxxx>
Date: Sun, 26 Sep 2004 20:05:30 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 26 Sep 2004 20:15:05 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: <20040926144842.GA7435@xxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <4156AA1E.5070308@xxxxxxxxxxxxxx> <20040926144842.GA7435@xxxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040115
It's very encouraging to know that I misunderstood the situation. But I do have a problem: It may be that I am genuinely running out of memory, but I am also getting a lot of messages about nbdNNN, and the overall effect is pretty disastrous.

I successfully created 2 new snapshots while another was in use as the root file system of a xenU domain. Previously this had failed immediately. Then the next lvcreate -s failed with a message about running out of memory, as before.
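
For reference, the sequence is roughly this (the snapshot names other than u4 are made up for illustration; 512M is the size I actually use):

   # an earlier snapshot is already in use as the root file system of a xenU domain
   lvcreate -L512M -s -n u2 /dev/vmgroup/root_file_system   # succeeds
   lvcreate -L512M -s -n u3 /dev/vmgroup/root_file_system   # succeeds
   lvcreate -L512M -s -n u4 /dev/vmgroup/root_file_system   # fails: out of memory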

I wonder whether the timing problem I have, which affects rpm, is also playing a part here?

The relevant bit of /var/log/messages is:

... lots of nbdNNN messages ...
Sep 26 19:47:33 a4 kernel: nbd126: Request when not-ready
Sep 26 19:47:33 a4 kernel: end_request: I/O error, dev nbd126, sector 0
Sep 26 19:47:33 a4 kernel: nbd127: Request when not-ready
Sep 26 19:47:33 a4 kernel: end_request: I/O error, dev nbd127, sector 0
Sep 26 19:48:26 a4 net.agent[7343]: add event not handled
Sep 26 19:48:26 a4 kernel: device vif6.0 entered promiscuous mode
Sep 26 19:48:27 a4 kernel: xen-br0: port 3(vif6.0) entering learning state
Sep 26 19:48:27 a4 kernel: xen-br0: topology change detected, propagating
Sep 26 19:48:27 a4 kernel: xen-br0: port 3(vif6.0) entering forwarding state
Sep 26 19:50:00 a4 CROND[7360]: (mail) CMD (/usr/bin/python -S /usr/lib/mailman/cron/gate_news)
Sep 26 19:53:40 a4 kernel: nbd0: Request when not-ready
Sep 26 19:53:40 a4 kernel: end_request: I/O error, dev nbd0, sector 0
Sep 26 19:53:40 a4 kernel: nbd1: Request when not-ready
Sep 26 19:53:40 a4 kernel: end_request: I/O error, dev nbd1, sector 0
Sep 26 19:53:40 a4 kernel: nbd2: Request when not-ready

... lots more nbdNNN messages ...

Sep 26 19:53:44 a4 kernel: end_request: I/O error, dev nbd124, sector 0
Sep 26 19:53:44 a4 kernel: nbd125: Request when not-ready
Sep 26 19:53:44 a4 kernel: end_request: I/O error, dev nbd125, sector 0
Sep 26 19:53:44 a4 kernel: nbd126: Request when not-ready
Sep 26 19:53:44 a4 kernel: end_request: I/O error, dev nbd126, sector 0
Sep 26 19:53:44 a4 kernel: nbd127: Request when not-ready
Sep 26 19:53:44 a4 kernel: end_request: I/O error, dev nbd127, sector 0
Sep 26 19:53:44 a4 kernel: lvcreate: page allocation failure. order:0, mode:0xd0
Sep 26 19:53:44 a4 kernel: [__alloc_pages+824/842] __alloc_pages+0x338/0x34a
Sep 26 19:53:44 a4 kernel:  [<c013a16f>] __alloc_pages+0x338/0x34a
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: [kmem_cache_alloc+89/100] kmem_cache_alloc+0x59/0x64
Sep 26 19:53:44 a4 kernel:  [<c013eb87>] kmem_cache_alloc+0x59/0x64
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [alloc_pl+48/76] alloc_pl+0x30/0x4c
Sep 26 19:53:44 a4 kernel:  [<c0328a54>] alloc_pl+0x30/0x4c
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: [client_alloc_pages+40/129] client_alloc_pages+0x28/0x81
Sep 26 19:53:44 a4 kernel:  [<c0328ba7>] client_alloc_pages+0x28/0x81
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [vmalloc+32/36] vmalloc+0x20/0x24
Sep 26 19:53:44 a4 kernel:  [<c014f4d1>] vmalloc+0x20/0x24
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: [kcopyd_client_create+104/199] kcopyd_client_create+0x68/0xc7
Sep 26 19:53:44 a4 kernel:  [<c03296cd>] kcopyd_client_create+0x68/0xc7
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: [dm_create_persistent+199/320] dm_create_persistent+0xc7/0x140
Sep 26 19:53:44 a4 kernel:  [<c032b769>] dm_create_persistent+0xc7/0x140
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [snapshot_ctr+715/892] snapshot_ctr+0x2cb/0x37c
Sep 26 19:53:44 a4 kernel:  [<c0329f7f>] snapshot_ctr+0x2cb/0x37c
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: [dm_table_add_target+323/464] dm_table_add_target+0x143/0x1d0
Sep 26 19:53:44 a4 kernel:  [<c0324aa7>] dm_table_add_target+0x143/0x1d0
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: [populate_table+125/216] populate_table+0x7d/0xd8
Sep 26 19:53:44 a4 kernel:  [<c0327105>] populate_table+0x7d/0xd8
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [table_load+103/309] table_load+0x67/0x135
Sep 26 19:53:44 a4 kernel:  [<c03271c7>] table_load+0x67/0x135
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [ctl_ioctl+236/330] ctl_ioctl+0xec/0x14a
Sep 26 19:53:44 a4 kernel:  [<c03279d3>] ctl_ioctl+0xec/0x14a
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [table_load+0/309] table_load+0x0/0x135
Sep 26 19:53:44 a4 kernel:  [<c0327160>] table_load+0x0/0x135
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [ctl_ioctl+0/330] ctl_ioctl+0x0/0x14a
Sep 26 19:53:44 a4 kernel:  [<c03278e7>] ctl_ioctl+0x0/0x14a
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [sys_ioctl+484/670] sys_ioctl+0x1e4/0x29e
Sep 26 19:53:44 a4 kernel:  [<c0167606>] sys_ioctl+0x1e4/0x29e
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel:  [syscall_call+7/11] syscall_call+0x7/0xb
Sep 26 19:53:44 a4 kernel:  [<c010d967>] syscall_call+0x7/0xb
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: device-mapper: : Could not create kcopyd client
Sep 26 19:53:44 a4 kernel:
Sep 26 19:53:44 a4 kernel: device-mapper: error adding target to table

... at this point lvm2 is unusable until after a reboot (unless some service can be restarted?) and xenU domains (or at least those with lvm2 snapshot-based file systems) are dead.
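
(I have not tried it, but perhaps the half-built mapping could be torn down by hand instead of rebooting - something along these lines, using the device name from the earlier failure as an example:

   dmsetup ls                          # list the device-mapper devices that exist
   dmsetup info root_file_system-u4    # check the state of the broken snapshot mapping
   dmsetup remove root_file_system-u4  # remove the mapping the failed lvcreate left behind

- though I don't know whether that is enough to bring lvm2 and the running domains back.)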

I certainly have no intention of writing to the original image.

Thanks
Peri


Christian Limpach wrote:

On Sun, Sep 26, 2004 at 12:38:06PM +0100, Peri Hankey wrote:
> I always found the lvm2 'snapshot' terminology confusing - the thing created as a 'snapshot' is what accepts changes while a backup is made of the original volume.

I don't think that's the terminology the LVM2 people use.  The regular
use is to create a snapshot and back up this snapshot while you keep
using the original.

> # drat - I needed another domain
> lvcreate -L512M -s -n u4 /dev/vmgroup/root_file_system
> ... nasty messages .... all xenU domains dead ....
> ... lvm2 system in inconsistent state ...
> ... /dev/vmgroup/u4 doesn't exist ...
> ... /dev/mapper/root_file_system-u4 does exist ...

This should work; if it doesn't, it would seem to be a bug in
LVM2.  Since you mention out-of-memory error messages, are you sure
that you're not running out of memory in dom0?
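
You could check with something like this in dom0 before and after the failing lvcreate (just a suggestion):

   free -m              # overall memory and swap in dom0
   cat /proc/meminfo    # more detail, e.g. LowFree

or give dom0 more memory at boot with the dom0_mem= option on the Xen command line.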

> The problem is that the 'snapshot' cows hold onto each other's tails - they seem to be held in a list linked (I think) from the original logical volume (here /dev/vmgroup/root_file_system). For their intended use as enabling backup, this seems to be meant to allow writes to the original volume to be propagated to all 'snapshots' created against that volume - there are comments about getting rid of the 'snapshots' after the backup has been done because this propagation of writes hits performance.
>
> For my requirements, and I imagine for most others reading this list, all of this is superfluous. I don't need
>
>    original -> snap1 -> snap2 -> snap3 ...

This is not the layout LVM2 uses.  If you look at the output of
``dmsetup table'', you'll see that each snapshot is independent
and only refers to the device it is a snapshot of and to its cow
device which will hold modifications.
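
For example, with the origin and two snapshots u4 and u5, the tables look roughly like this (u5, the device numbers and the sizes are only illustrative):

   # dmsetup table
   vmgroup-root_file_system-real: 0 2097152 linear 3:2 384
   vmgroup-root_file_system: 0 2097152 snapshot-origin 254:0
   vmgroup-u4-cow: 0 1048576 linear 3:2 2097536
   vmgroup-u4: 0 2097152 snapshot 254:0 254:2 P 16
   vmgroup-u5-cow: 0 1048576 linear 3:2 3146112
   vmgroup-u5: 0 2097152 snapshot 254:0 254:4 P 16

Each snapshot line refers only to the origin's -real device and to its own -cow device; the snapshots don't refer to each other at all.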

> so that I can't create a new snap4 while any of the others are in use.
>
> I just need
>
>    original <- cow1
>    original <- cow2
>    original <- cow3
>    original <- cow4
>    ...
>
> where A '<-' B means B is a cow image of A, and where each of the cows is independent of the others so that a new cow can be created at any time, regardless of how many others are active.

This is the layout LVM2 uses.  And it is indeed simple (and should be
quite robust) as long as you don't want to write to the original.
If you write to the original, you will have to copy the changed
blocks to every snapshot's cow device.  I think I've seen this
fail when having multiple snapshots and writing to the original.
But since you didn't write to the original (and one generally doesn't
need/want to write to the original in our case), that problem
is unlikely to be relevant to the failure you've seen.
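
So, as a sketch (snapshot names other than u4 are illustrative), creating or removing a snapshot at any time should only touch that snapshot's own cow device:

   lvcreate -L512M -s -n u5 /dev/vmgroup/root_file_system   # adds only u5's cow
   lvremove /dev/vmgroup/u5                                  # drops only u5's cow, others unaffected

regardless of how many other snapshots are active or in use by domains.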

   christian



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel