Oops... forgot the text :-)
I did... and e.g. /dev/sda5 without drbd gives the same error...
but only with xen-3.3.0!
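
For reference, the place to look next is probably xend's log (a sketch,
assuming a standard xend install on this box):

# xend usually logs which step of the vbd hotplug failed
xm log | tail -n 50
tail -n 50 /var/log/xen/xend.log    # same log, read directly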
On Thu, 2008-09-11 at 11:08 +0200, Sebastian Igerl wrote:
> On Thu, 2008-09-11 at 09:42 +0100, Robert Dunkley wrote:
> > The DRBD device is in use by Dom0 or another DomU. Try one DRBD device
> > per VM, if you have already done this then make sure the DRBD device is
> > NOT mounted in Dom0.
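> >
> > A quick way to verify both conditions (just a sketch; 'r0' is a
> > placeholder resource name):
> >
> > # nothing in Dom0 should have the backing device mounted
> > grep -e drbd -e sda5 /proc/mounts
> > # and check the resource state on this node
> > cat /proc/drbd
> > drbdadm role r0    # placeholder resource name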
> >
> > Rob
> >
> > -----Original Message-----
> > From: Sebastian Igerl [mailto:sig@xxxxxxxxx]
> > Sent: 11 September 2008 09:36
> > To: Robert Dunkley
> > Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> > Subject: RE: [Xen-users] drbd8 - lvm - xen
> >
> > Hi,
> >
> > Found out that it didn't have anything to do with drbd or lvm...
> >
> > I upgraded from 3.2.1 to xen-3.3.0...
> >
> > Now I can't start any domUs... regardless of whether they are on lvm,
> > drbd, or a local partition...
> >
> > Does anyone have any idea?
> >
> > xen4 conf # xm create test1.cfg
> > > Using config file "./test1.cfg".
> > > Error: Device 768 (vbd) could not be connected.
> > > Device /dev/xen3and4/hd23 is mounted in the privileged domain,
> >
> >
> >
> > xen4 conf # xm create test1.cfg
> > > Using config file "./test1.cfg".
> > > Error: Device 768 (vbd) could not be connected.
> > > Device /dev/sda5 is mounted in the privileged domain,
> >
> >
> >
> >
> > Another thing: is it possible to create 20 different drbd devices
> > on one host? drbd0 - drbd20? Aren't there limits?
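> >
> > (From what I can tell, drbd 8 allocates device minors via a module
> > parameter, so 20 resources should be possible; a sketch, assuming the
> > drbd 8.x module:)
> >
> > # raise the minor limit if the default is too low for 20+ devices
> > modprobe drbd minor_count=32
> > # each resource then needs its own minor and TCP port in drbd.conf,
> > # e.g. resource vm01 uses /dev/drbd1 and port 7789, vm02 uses
> > # /dev/drbd2 and port 7790, and so on.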
> >
> >
> > On Wed, 2008-09-10 at 15:27 +0100, Robert Dunkley wrote:
> > > Hi Sebastian,
> > >
> > > I think you should split the volume into LVM partitions and then run a
> > > DRBD device for each partition. I run this config, RaidArray -> LVM ->
> > > DRBD8 -> Xen, and it works well. You need the independence of one DRBD
> > > device per VM, otherwise things get difficult. You may also want to ask
> > > xen-users -at- antibozo.net for their DRBD script; on my CentOS Xen
> > > 3.3.0 setup it works much better, and it also avoids the need for an
> > > IMHO dangerous dual-primary config.
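> > >
> > > Roughly, the per-VM split looks like this (a sketch; names and sizes
> > > are placeholders):
> > >
> > > lvcreate -L 10G -n vm01 vg_array    # one LV per VM
> > > # one drbd resource per LV in drbd.conf, then:
> > > drbdadm create-md vm01
> > > drbdadm up vm01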
> > >
> > > I sync multiple arrays with multiple DRBD devices at once; DRBD shares
> > > the array write bandwidth equally between DRBD devices. My RAID1 arrays
> > > manage 80Mb/sec write speed each; two DRBD devices syncing at once on
> > > each of the 3 arrays yields 40Mb/sec per DRBD device, which totals a
> > > perfect 240Mb/sec sync rate (I used InfiniBand to link the two DRBD
> > > servers together, so I am not bound by the bandwidth constraints of
> > > gigabit Ethernet).
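> > >
> > > Spelled out: 3 arrays x 80Mb/sec = 240Mb/sec aggregate; 6 DRBD devices
> > > syncing at once means 240 / 6 = 40Mb/sec each. The background resync
> > > ceiling itself is set per resource in drbd.conf (drbd 8 syntax; the
> > > value is just an example):
> > >
> > > syncer {
> > >   rate 40M;   # cap the background resync rate for this resource
> > > }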
> > >
> > > Rob
> > >
> > > -----Original Message-----
> > > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Sebastian
> > > Igerl
> > > Sent: 10 September 2008 14:54
> > > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > > Subject: [Xen-users] drbd8 - lvm - xen
> > >
> > > Hi,
> > >
> > > What I'm trying to do is set up one drbd active/active device, then
> > > split it with lvm, creating about 20 volumes, and then run xen on top
> > > of those lvm volumes!
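> > >
> > > The stacking itself is only a few commands (a sketch; the resource
> > > name 'r0' and the size are assumptions, the VG name matches the
> > > config below):
> > >
> > > drbdadm primary r0                  # on the node that runs the domU
> > > pvcreate /dev/drbd0
> > > vgcreate xen3and4 /dev/drbd0
> > > lvcreate -L 10G -n hd23 xen3and4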
> > >
> > > I've been trying for days now and it isn't working... anyhow, do you
> > > see any reason why this shouldn't work?
> > >
> > > Should I create 20 drbd devices and use the drbd helper script for
> > > xen? Isn't using so many drbd syncs a lot of overhead? I've never
> > > tried it...
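> > >
> > > (As far as I understand the helper script: with DRBD's block-drbd
> > > script installed under /etc/xen/scripts/, the domU disk line names
> > > the resource instead of the device, and the script handles the
> > > primary/secondary switch, e.g.:)
> > >
> > > disk = [ 'drbd:vm01,xvda,w' ]    # 'vm01' is a placeholder resource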
> > >
> > >
> > > xen4 conf # xm create test1.cfg
> > > Using config file "./test1.cfg".
> > > Error: Device 768 (vbd) could not be connected.
> > > Device /dev/xen3and4/hd23 is mounted in the privileged domain,
> > >
> > >
> > > This is what I'm getting right now...
> > > /dev/xen3and4/hd23 isn't mounted...
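> > >
> > > (A way to check whether anything in Dom0 still holds the volume
> > > open; just a sketch:)
> > >
> > > grep hd23 /proc/mounts
> > > fuser -v /dev/xen3and4/hd23
> > > lsof /dev/xen3and4/hd23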
> > >
> > >
> > >
> > > xen4 conf # cat test1.cfg
> > >
> > > kernel = "/usr/lib64/xen/boot/hvmloader"
> > > builder = "hvm"
> > > device_model = "/usr/lib64/xen/bin/qemu-dm"
> > > memory = "1024"
> > > maxmem = 2048
> > > maxmemory = 2048
> > > name = "test1"
> > >
> > > dhcp = "off"
> > > vif = ["type=ioemu, mac=00:16:3e:00:00:20, bridge=xenbr0"]
> > > #cpus = ""
> > > #vcpus = 1
> > > disk = ["phy:/dev/xen3and4/hd23,ioemu:hda,w" ]
> > >
> > > serial="pty"
> > > boot="c"
> > >
> > > vnc=1
> > > vncviewer=0
> > > keymap="de"
> > > vfb = [ "type=vnc, vncpasswd=ssss, vnclisten=0.0.0.0, vncdisplay=23" ]
> > > sdl=0
> > >
> > >
> > >
> > > Since I'm using lvm on top of drbd, xen shouldn't be aware of drbd...
> > > using only lvm it worked...
> > >
> > > Mounting the drbd-lvm2 device works too... (only on one side, because
> > > I haven't set up any cluster filesystems...)
> > >
> > >
> > >
> > > Thanks,
> > >
> > > Sebastian
> > >
> >
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users