This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-API] How Pygrub work on VHD

To: Anthony Xu <anthony@xxxxxxxxx>, Dave Scott <Dave.Scott@xxxxxxxxxxxxx>
Subject: RE: [Xen-API] How Pygrub work on VHD
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Date: Mon, 25 Jan 2010 21:51:50 +0000
Accept-language: en-US
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 25 Jan 2010 13:53:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1264454746.2927.19.camel@mobl-ant>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <1264451889.2927.3.camel@mobl-ant> <81A73678E76EA642801C8F2E4823AD2143E92238EB@xxxxxxxxxxxxxxxxxxxxxxxxx> <1264452764.2927.14.camel@mobl-ant> <81A73678E76EA642801C8F2E4823AD2143E92238EE@xxxxxxxxxxxxxxxxxxxxxxxxx> <1264454746.2927.19.camel@mobl-ant>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcqeBU4B15v9sLfrT5uNidgVuU2UFwAArbnA
Thread-topic: [Xen-API] How Pygrub work on VHD
> > That's correct. When a vhd-based VDI is attached to a domain,
> > blktap (kernel-space) + tapdisk (user-space) do the translation from raw
> > disk block accesses to vhd read/writes.
> What you are talking about is how the VM accesses a vhd-based disk image. What
> I want to know is how pygrub grabs the kernel and initrd from a vhd-based disk
> image: pygrub runs in dom0, where there is no /dev/xvda; that device exists only
> inside the VM.

Before booting a VM, xapi will set up blktap for each of the VM's disks. One 
side effect of this is that a block device is exposed in dom0, which enables 
tools in dom0 to also access the disk. For an HVM guest, this device is used by 
qemu to emulate the guest's disk. It is this same device that pygrub operates 
against.

I'm not sure why pygrub isn't working for you. You should be able to run it 
manually against the block device and see why it's failing.
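For example, something like the following (the tapdev path below is only 
illustrative; the actual device name depends on your release, so check which 
device xapi created for the VDI):

```shell
# Illustrative path only: the blktap device name varies by release.
dev=/dev/xen/blktap-2/tapdev0

if [ -b "$dev" ]; then
    # Run pygrub directly against the dom0-visible block device;
    # any parse/boot-menu errors will be printed to the terminal.
    pygrub "$dev"
else
    echo "no such block device: $dev" >&2
fi
```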


> Anthony
> >
> > > What I'm doing is,
> > >
> > > 1. Create a lvm vdi on iscsi SR,
> > > 2. dd a vhd file to this vdi,
> > > 3. attach this vdi to an (empty) PV VM as device 0 (vbd),
> > > 4. mark this vbd bootable,
> > > 5. then start this vm
> >
> > Unfortunately this isn't going to work. The choice of whether to use
> > blktap (vhd-capable) or blkback (raw device only) is a function of the
> > SR's content_type. The 'iscsi' SR uses blkback :(
> >
> > To see what I mean, try something like this instead:
> >
> > 1. Create an 'lvmoiscsi' SR
> > 2. create a VDI in the new SR
> > 3. look inside the new LV -- it should have vhd metadata
> >
> > Are you trying to import disks in .vhd format? The most future-proof way
> > of doing this is to:
> > * create a VDI using the API
> > * hotplug the VDI into a VM (eg dom0 or a domU)
> > * decode the .vhd data, write() it to the raw block device and use
> > seek() to preserve sparseness
> >
> > Simply dd'ing an existing .vhd is risky because XCP is expecting the
> > .vhd to have a particular, optimized layout. In particular:
> > * extra space is left at the beginning of the file for later resizing
> > * parent locators have a particular naming convention
> > * blocks are carefully aligned for performance
> I understand all you said, but the volume in the lvmoiscsi SR seems to have
> the exact same format as a VHD file. I'll get back to you after some
> experiments.
> - Anthony
> _______________________________________________
> xen-api mailing list
> xen-api@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/mailman/listinfo/xen-api
