
[Xen-users] disk access best practice

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] disk access best practice
From: Brian Krusic <brian@xxxxxxxxxx>
Date: Tue, 6 Jan 2009 09:58:56 -0800
Delivery-date: Tue, 06 Jan 2009 09:59:40 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Hi all,

While I've read some FAQs, forums, and Professional Xen Virtualization, I'd like your take on this.

I have 2 paravirtualized domUs running, each using a tap:aio disk image located on a local 500GB RAID.
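
For context, the disk line in each domU config looks something along these lines (the path and device name here are just placeholders, not my real ones):

    disk = [ 'tap:aio:/var/xen/images/domU1.img,xvda,w' ]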

While performance seems fine both interactively and using benchmarks, is there a practical limit to the image size before I should start breaking it up?

I plan to build another dom0 box with a 24TB RAID on it, hosting 2 paravirtualized domUs, one of which will need 20TB.

Should I break up that domU into 2 images, one for the OS and the other for its storage needs?
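
If I did split it, I imagine the config would end up with two virtual disks, something like this (hypothetical paths):

    disk = [ 'tap:aio:/vm/bigdomU-os.img,xvda,w',
             'tap:aio:/raid/bigdomU-data.img,xvdb,w' ]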

So my questions are:

1 - What's a practical single disk image size?
2 - Should I pre-allocate all image space during domU creation, or have it grow dynamically? (Sketch of what I mean below.)
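
To illustrate question 2, here is a rough Python 3 sketch of the two options on Linux (paths and sizes are hypothetical): a sparse file that only consumes space as the guest writes to it, versus a file whose blocks are all reserved up front.

    import os

    size = 20 * 1024**4  # 20 TB, for example

    # Sparse image: created instantly, blocks are allocated only as the
    # guest actually writes to them.
    with open('/raid/bigdomU-data.img', 'wb') as f:
        f.truncate(size)

    # Pre-allocated image: reserve every block now.  On filesystems without
    # native fallocate support this may fall back to writing zeros, which is
    # slow for a file this large.
    fd = os.open('/raid/bigdomU-data-prealloc.img', os.O_WRONLY | os.O_CREAT, 0o600)
    os.posix_fallocate(fd, 0, size)
    os.close(fd)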

- Brian





_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
